Power, predictability, and regularity

NOTE HXA7241 2011-11-13T12:41Z

At the deepest level, software seems to have an essential trade-off of power against predictability. And the problem there is not a matter of conformance but of clarity. That means perhaps the best we can do is design programming languages to have a kind of ‘regularity’.

John Shutt has recently considered whether there is, in software/programming-languages, an essential unavoidable trade-off between:

  • allowing maximally versatile ways of doing things, with maximal facility,
  • disallowing undesirable behaviour.

Let us look at this somewhat from the view of the Wittgensteinian ‘propositional model’ of software.


This depends on how one defines undesirable behaviour.

If it means things like dereferencing null pointers, then there really seems no necessary/essential conflict or trade-off. It is just a technical detail: you build a language that avoids it, and that puts no limits on computation in general. Any trade-off is an a posteriori problem: some languages and features might imply particular trade-offs, others different ones, and there seems always the possibility of finding/devising a better one.
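As an illustrative sketch (in Python, my example, not any language the note has in mind): absence can be made an explicit part of a value's type, so the ‘null’ case must be handled before use, and this whole class of errors disappears as a technical detail:

```python
from typing import Optional

def find_user(users: dict[str, int], name: str) -> Optional[int]:
    """Return the user's id, or None if absent: absence is explicit in the type."""
    return users.get(name)

def greet(users: dict[str, int], name: str) -> str:
    uid = find_user(users, name)
    # The Optional type obliges the caller to handle the absent case;
    # a checker such as mypy rejects using uid without this test.
    if uid is None:
        return f"no such user: {name}"
    return f"hello, user #{uid}"
```

Nothing here restricts what programs can be written; the language merely refuses one technically ill-defined action.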

But if undesirable behaviour means instead something more general, like anything unpredicted, or unwanted as judged by requirements, then there really does seem to be a deep, inescapable trade-off.

Because what does the most basic principle of the theory of computation say, in effect? That a limited set of elements can produce an unlimited set of behaviours. That really seems to be saying some of those behaviours must be unpredictable – and you cannot pre-empt behaviours you cannot predict.
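A toy analogy (my example, not one from the note): the Collatz rule is built from two elemental operations, yet the behaviour they generate resists prediction; whether every starting number even reaches 1 is an open problem.

```python
def collatz_steps(n: int, limit: int = 100_000) -> int:
    """Count applications of the two rules (halve, or 3n+1) until reaching 1."""
    steps = 0
    while n != 1 and steps < limit:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

Two fully specified rules, yet neighbouring inputs behave wildly differently (6 takes 8 steps, 7 takes 16, 27 takes 111), and no general shortcut for predicting the count is known: a small set of elements, an effectively unpredictable set of behaviours.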

But this choice of two interpretations does not really give a way out. (And not just because the boundary between them seems actually quite blurred.) Because the side of merely technical matters (null deref etc.) is perhaps not very important: it could be completely solved and maybe it would not do much good. The real problem is the general one.


The fundamental issue in programming, relevant here, is clarity.

A programming language just gives you a limited set of elemental actions, which you assemble to make particular programs. Ideally, you want a language to simply show you clearly what you are building: you look at the software, and if some part does what you want you keep it, and if it does not you remove it. The only conformance is to your intention. There are ideally, at bottom, no ‘undesirable’ or disallowed actions; there are just different features that do different things, and you choose them.

There is, ultimately, no allowing and disallowing. The language just does exactly what you say, and shows you clearly what you did say.

Type systems (or any automated checking) do not really check ‘conformance’, exactly; they can only check consistency. They check whether what you say in one place matches what you say somewhere else. They cannot check whether what you say will create the effect you want.
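A minimal sketch of that distinction (Python type hints, my own example): the following is fully type-consistent, because every statement matches every other, yet it does not compute the mean it was meant to:

```python
def average(xs: list[float]) -> float:
    # Internally consistent: list[float] in, float out; a checker such as
    # mypy is satisfied. But consistency is not intent: this halves the
    # sum instead of dividing by the length.
    return sum(xs) / 2

def average_intended(xs: list[float]) -> float:
    # Identically typed, different effect -- only intent tells them apart.
    return sum(xs) / len(xs)
```

Both versions typecheck identically; only a human judging against the intention can say which one is wrong.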


‘Undesirable behaviour’, then, means indirect consequences that were unforeseen. Here is where the real problem is, and where the idea of regularity comes in.

No matter how clear the basic programming elements are, in a fully powerful language it is always possible to make structures that go further than can be (easily, arbitrarily) predicted. So it is impossible, in general, to make software completely clear.

But what we can perhaps do is give programming and languages some ‘regularity’: keep the range and consequences of that unpredictability constrained within some bounds. To put it analogously to mechanics: give language constructs some ‘damping’, make them more ‘stable’. Each programming element interacts with others – and hence tends toward unpredictable ramifications – and what we want is that those interactions sort-of reduce with distance. This is a kind of expanded notion of the principle of least surprise.
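One loose, concrete reading of this (my sketch, assuming nothing more specific than the note says): a construct that reads distant mutable context is coupled to every assignment anywhere in the program, while one whose interactions all pass through its visible interface keeps its ramifications local:

```python
# Less regular: behaviour depends on distant, mutable context --
# any code anywhere may rebind this, changing results at a distance.
rate = 0.10

def taxed_global(price: float) -> float:
    return price * (1 + rate)

# More regular: all interaction passes through the visible interface,
# so a change elsewhere in the program cannot alter this result.
def taxed_local(price: float, rate: float) -> float:
    return price * (1 + rate)
```

The second form is ‘damped’ in the sense above: its consequences cannot propagate further than its callers, so its unpredictability stays within bounds.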

This is a rather vague and somewhat metaphorical description. Figuring out what it means more exactly is the thing to do. But this still does not really solve the original problem. It is in effect suggesting we address the problem of the power of computation by making it less powerful. There is, at the most general level, an essential conflict/trade-off: the only thing we can do is find the optimal arrangement for our circumstances and human abilities.