‘Trustless’ systems will let you down

NOTE HXA7241 2017-05-07T10:23Z

Aspiring to be rid of trust is a mistake. Even a perfect artefact depends on an imperfect context for use. Patching things up is a respectable tactic, and indeed an ethos.

——

‘Trustless’ is a mystifying term. Applied to systems built on cryptographic algorithms, it ought to be clear, yet it has a slight air of magic. Does it mean not depending on humans? Or non-malfeasant – not merely never acting incorrectly, but never acting badly? Is a coin-toss trustless? Is AI trustless? …

Its real meaning is (almost) its very opposite: ‘trustless’ means an extremely trustworthy system or machine – one very reliable, predictable. Trust must be understood in terms of novelty: the regulation and limiting of novelty. Trustworthiness gives you what you expect, untrustworthiness what you do not. (Whether these would be ‘good’ or ‘bad’ is best separated from the core, mechanically amenable, definition.)

But therein lurks an innate problem: the cost of ‘trustless’ is inflexibility. Perfect fidelity to expectation is only good at first glance. A ‘trustless’ system is the rigid tree that breaks in the wind, because to ‘minimise trust’ (a related term) is to harden predictability – and riveted to the original plan, it cannot respond to a shifting world. A system is about its context too, and thereby novelty leaks in, even if it was initially carefully cleansed away. Unmaintained software is the exemplar: it is physically invulnerable, it never decays or weakens or fades – yet it is soon faulty, exploited, obsolete, because everything else changes around it. (If hard-to-change software entails adoption of tech-debt, ‘trustless’ is an unwitting commitment to bankruptcy.) ‘Trustless’ is premised on an idealisation of logic, but an actual artefact is ineluctably bound to its various inapt surroundings; and blindness to that compromise is self-destructive.

In practice, fulfilling intention – trust – is not about perfect planning, but about intelligent response to novelty. Deviation from expectation is valuable because the context changes – to realise an intent often requires novelty in adaptation. But the best recourse for handling the unexpected is the very thing ‘trustless’ eschews as superintendence – humans. In human usage, an intended goal is an abstract thing: we can always see variations of it, and alternative routes to them, and so our judgement will place one of those above strict adherence to an impossible end. The vagueness of trust is not a deficit but an apprehension of the complexity of the domain. If you ban change, and the humans who would direct it, you drive the outcome toward a failure that could have been avoided.

——

Related: