Overall well-being

NOTE HXA7241 2021-03-07T08:40Z

An aggregation fallacy in common moral conceptions means we need a more abstract idea of a moral agent.

If you read Parfit's ‘Reasons and Persons’, you will find quite a succession of aggregate population happiness problems. But something there should seem deeply problematic: you can no more add happiness than you can add up everyone's mental image of a chair. Happiness is not the kind of thing that can be added. Addition/summation is a particular mathematical structure, and it applies only to certain things. Some things can be added, some cannot. You can add the number of people who have a mental image, but what would adding the mental images themselves mean? You can arrive at having more people, but the thing we are trying to get at is what is good, and that comes not from the number but only from the ‘happiness’ part ‒ and that, like mental images of chairs, is the part we cannot add. And this is direly critical: if you cannot aggregate, you cannot outbalance any individual claim, and the whole system falls apart because it cannot resolve conflicts even at the most basic level. This problem passes by too easily in the common idea of ‘overall well-being’.
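The type distinction here can be made concrete in programming terms. A minimal sketch (all names hypothetical): counting people who have a mental image is well defined, because integers carry an additive structure; the images themselves carry no such structure, and an attempt to ‘add’ them is simply an undefined operation.

```python
# Sketch: counting admits addition; the experiences counted do not.
from dataclasses import dataclass

@dataclass(frozen=True)
class MentalImage:
    """Hypothetical stand-in for one person's experience (e.g. of a chair).
    Deliberately defines no __add__: there is no mathematical structure
    under which two experiences combine into a third."""
    subject: str
    content: str

images = [MentalImage("A", "chair"), MentalImage("B", "chair")]

# Counting people-with-an-image is well defined: integers form an
# additive structure (a monoid under +).
count = sum(1 for _ in images)   # 2

# 'Adding the images themselves' is not defined at all:
try:
    total = images[0] + images[1]
except TypeError:
    total = None   # no such operation exists
```

The point is not that one could not bolt some `__add__` onto the type, but that nothing about the thing itself determines what such an operation should mean ‒ which is exactly the gap in ‘overall well-being’.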

The only way to a solution ‒ to find a meaning for aggregate population happiness (or the like) ‒ seems to be to see that aggregate as a single coherent object with a moral status of its own. This seems odd, but think of morality in a more abstract way. Unless you can unify the basis of both ends of this (part and whole, individual and aggregate), you cannot solve it as a moral problem: you would need some other basis of evaluation as well. (In maths terms, it is not ‘closed under’ the concept.) So you need a more abstract, generalised (panethicist) idea of moral status, something more like inclinations or tendencies, that you can imagine very non-human things (including aggregates) having too.

When you construe the aggregate in that more mechanical way, as having inclinations/tendencies (rather than ‘happiness’), it can be understood as made of components which can be meaningfully assessed for how they contribute to the whole. It solves the problem by being something that can ‘add up’. And the components need not simply add like numbers or an amorphous lump (as ‘happiness’ was supposed to), but more like parts of a building, or lines of software forming an algorithm. This provides a basis for sophisticated models of morality.
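One way to picture that non-numerical kind of ‘adding up’ ‒ purely a sketch, with every name and measure hypothetical ‒ is to assess each component by counterfactual removal: how the aggregate's overall tendency changes without it, rather than by a quantity each component ‘contains’.

```python
# Sketch: components assessed by contribution to an overall tendency,
# not by summing a per-component stock of 'happiness'.

def overall_tendency(components):
    """Hypothetical measure of the aggregate's inclination: how
    consistently the components pull in the same direction (magnitude
    of the mean pull), not a sum of what each one 'has'."""
    if not components:
        return 0.0
    return abs(sum(components)) / len(components)

def contribution(components, i):
    """A component's contribution: how the whole changes without it."""
    rest = components[:i] + components[i + 1:]
    return overall_tendency(components) - overall_tendency(rest)

pulls = [+1, +1, -1]   # three components' directional tendencies
whole = overall_tendency(pulls)
parts = [contribution(pulls, i) for i in range(len(pulls))]
```

Note that under such a measure the contributions need not sum to the whole: the whole is a property of the configuration, like an algorithm's behaviour relative to its lines of code, not an additive heap.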

But then what is the ‘inclination’ or ‘goal’ of that aggregate, of humanity overall? That becomes the problem now, and it seems difficult to say. And yet it seems not impossible … (Perhaps something derived from the free energy principle, or Integrated Information Theory … ?)