Future software engineering's three bases


Harrison Ainsworth

The future of software engineering lies in combining less deliberate design with more automated generation and evolution

There are three basic ways to make software:

  • Design it – exactly and completely out of previously designed building blocks.
  • Generate it – with algorithmic techniques filling in the gaps of rough, incomplete specifications.
  • Evolve it – let it autonomously adapt to, and be randomly shaped by, its usage environment.

The Current: Design

So far, software engineering has done the first, and raised efficiency by increasing and improving reuse and sharing. But there is a hard limit, set by Brooks's ‘no silver bullet’ argument: essential complexity is dominant and unavoidable. Design complexity ultimately equates to resultant functionality – so you cannot substantially reduce design effort without losing functionality.

The Future: Automation

The way forward is with the latter two, because they avoid the essential complexity problem by automating the effort. This can yield greater gains than any improvement to design techniques. The trade-off is loss of control, but that is OK because only moderate control is really needed – we currently overcontrol development.


There are many and various possibilities for generation, all founded on practicable means. Ideally, we would like to say, e.g., “Make me an e-book reader program!” and have some AI figure out all the details. But the concept covers a large range, and the simple end is very feasible. With a compiler, we already say “Make me a foldl!”, and the instructions are generated by optimisation rules. This is not reuse of pre-made designs, but new, specific, automated effort. It is not as effective as manual design, but sufficiently close.
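A minimal illustration of that simple end, in Python (CPython assumed): the compiler generates the instructions from a one-line declaration, and its peephole optimiser applies rules such as constant folding along the way. The function name `spec` here is invented for illustration.

```python
import dis

# A one-line "specification": the result is declared, not hand-assembled.
def spec():
    return 2 * 3

# The CPython compiler has generated the instructions for us:
dis.dis(spec)

# And its peephole optimiser has already folded 2 * 3 into the single
# constant 6 at compile time, before the function ever runs:
print(6 in spec.__code__.co_consts)
```

The generated instructions are rarely what a hand-crafting assembly programmer would write, but, as the text says, they are sufficiently close – and cost no design effort.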

Generation can be understood as automated design, following and adapting the same process and the same constituents:

  • Infer what is wanted, and mine what component functionalities are available.
  • Hold these in various rational structures, to be analysable and synthesisable.
  • Follow a recursive and iterative process to assemble a design from parts.
  • Optimise to maximise requirements fulfilment and minimise design effort.

These steps reduce to familiar techniques. No single magical breakthrough is required, just steady development of known methods focused on appropriate areas.
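As a sketch only, the steps above can be caricatured as generate-and-test synthesis over a component library; the component names and the example-based specification here are invented for illustration, not any real system.

```python
from itertools import product

# Hypothetical component library: the "available functionalities" to mine.
COMPONENTS = {
    "double": lambda x: x * 2,
    "increment": lambda x: x + 1,
    "negate": lambda x: -x,
}

def synthesise(examples, max_depth=3):
    """Assemble a pipeline of components satisfying all (input, output) examples.

    Trying shallow pipelines first both bounds the search and acts as the
    optimisation criterion: the least design effort that fulfils the requirement.
    """
    for depth in range(1, max_depth + 1):  # recursive/iterative assembly
        for names in product(COMPONENTS, repeat=depth):
            def pipeline(x, names=names):
                for name in names:
                    x = COMPONENTS[name](x)
                return x
            if all(pipeline(i) == o for i, o in examples):
                return names  # the assembled "design"
    return None  # requirement not fulfillable at this effort bound

# "Infer what is wanted" from examples: here, x -> (x + 1) * 2.
print(synthesise([(1, 4), (3, 8)]))
```

Real generation would infer the requirements and mine the components rather than take them as given, but the shape of the search – structure, assemble, test, optimise – is the same.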


We know evolution can produce immensely functional software because the evidence of DNA is all around. But it is also hard to understand and control – partly of necessity – so it must be circumscribed in use. Conveniently, the way evolution works and what it does fit with the way design works: they are complementary.

Alongside engineering design's essential basis of theoretical knowledge sits the use of specific, as-needed experimentation. Here is where evolution can be included. What experiment does by deliberate human steering, evolution can automate with random variation and statistical selection: both share the same iterative process. The less knowledge design has – and hence the less useful control is – the more effective evolution becomes, and in a very robust and complete way.
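A minimal sketch of that automated experiment, in Python: random variation plus statistical selection, iterated until the candidate fits its environment. The target string and alphabet are invented stand-ins – in reality the fitness signal would come from the usage environment, not a known answer.

```python
import random

random.seed(0)  # fixed for reproducibility of this sketch

TARGET = "SOFTWARE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    # Selection criterion: how well the candidate fits its "environment".
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(generations=20000):
    # A minimal (1+1)-style evolutionary loop: vary randomly, keep what works.
    current = [random.choice(ALPHABET) for _ in TARGET]
    for _ in range(generations):
        mutant = list(current)
        mutant[random.randrange(len(mutant))] = random.choice(ALPHABET)
        if fitness(mutant) >= fitness(current):  # select at least as fit
            current = mutant
        if fitness(current) == len(TARGET):
            break
    return "".join(current)

print(evolve())
```

No step in the loop understands the problem; the knowledge-free variation-and-selection cycle finds the answer anyway – which is exactly its value where design lacks knowledge, and its cost in comprehensibility and control.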

Further, there are clearly hierarchies of spatial and temporal structure in living forms. Though we do not currently understand how natural evolution works with abstraction, it does appear to be doing so. As more is discovered of DNA's logical structure and systems, it should eventually be possible to use and incorporate them in making artificial software.

Combining all three

Any of the three alone is sub-optimal or inadequate; we want the best combination:

  • Design only what is needed – only what needs to be, and can be, exact.
  • Evolve for the unknown – the pieces and aspects that are indeterminate.
  • Generate as much as possible – providing something between the previous two – and this is the leading element, setting the balance of all.

So for the longer term of software engineering, perhaps it is worth investigating generation and evolution, and their integration with deliberate design.