My piece Cost of Living is up at Reason. There are a whole bunch of intriguing philosophical issues raised by the question of how to value human lives that I didn’t have space to explore there, and it’s a problem that we ultimately can’t avoid grappling with.
One interesting suggestion is that in addition to discounting the very old (because they lose less than someone who dies at, say, 30), we may want to discount the very young as well. Someone who dies at 30, 20, or even 15 has by that time acquired a whole set of life plans and purposes that are frustrated by their death. A toddler, most likely, has not. So while the toddler clearly loses more potential future life years, the actual loss from a first-person perspective is arguably lower. (I don’t know whether that argument is really sound; I’m just throwing it out to illustrate the difficulties involved.)
Oh, and it’s worth noting that a commenter over at Radley’s suggested using the market-like mechanism of accounting for “depreciation.” But depreciation measures the utility of a market good to others, as reflected in changes in its sale price. A big part of what we want to account for, though, is the value of people’s lives to themselves. One big problem, of course, is one I noted in the article: the “public” value of anything in market terms is always determined by the range of options against which we’re asked to rank it. How much I’m willing to pay to reduce a certain low-but-lethal risk will depend heavily on my income, the number of other goods or ends to which I want to devote my money, the number of other risks I face, and so on.
We can’t even get a meaningful total for the value of a human life to the person whose life it is. We can measure how much you have to pay people to get them to accept a certain risk… say an extra $5k per year for a one-in-a-thousand risk of death on the job, relative to comparable but safer professions. But you can’t just multiply that number by a thousand to get a person’s own valuation of her life, because the function is non-linear, and (except in unusual cases where someone has cash-strapped loved ones, or some charity they’re really into) asymptotic to infinity as the probability of death approaches one and time-to-demise approaches zero. (In other words, there’s no amount you’d take to be killed with certainty five seconds from now.) And there are all sorts of interesting findings to the effect that people often have preferences across lotteries that don’t map onto any rational utility function, so what do we make of that? Anyway, plenty to think about there, and mayhap one day soon I will.
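To make the arithmetic concrete, here’s a quick back-of-the-envelope sketch using the hypothetical $5k and one-in-a-thousand figures above. The willingness-to-accept curve is an invented functional form meant only to illustrate the nonlinearity, not a model of anyone’s actual preferences:

    # Sketch of the point above. The $5k premium and 1-in-1,000 risk are the
    # hypothetical figures from the text; the willingness-to-accept curve is
    # a purely illustrative assumption.

    premium = 5_000    # extra annual pay accepted for the added risk
    risk = 1 / 1_000   # added annual probability of death on the job

    # Naive linear extrapolation: the "value of a statistical life."
    linear_vsl = premium / risk
    print(f"Linear extrapolation: ${linear_vsl:,.0f}")  # $5,000,000

    # One toy way to capture the nonlinearity: compensation demanded grows
    # without bound as the probability of death approaches certainty.
    def willingness_to_accept(p, base=linear_vsl):
        # payment demanded for accepting death with probability p
        return base * p / (1 - p)

    for p in (0.001, 0.1, 0.5, 0.9, 0.999):
        print(f"p = {p:>5}: demand ${willingness_to_accept(p):,.0f}")

The linear extrapolation spits out the familiar $5 million figure, while the toy curve demands ever-larger sums as the probability of death climbs toward one, which is the asymptote-to-infinity point in prose form.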
Update: Glen comments, working through the distinction between a definite harm and a risk a bit more. Here, again, is another of those hairy problems raised by risk. Imagine that I’ve rigged a bomb alongside a major highway, with a randomizer such that it’ll pick a number n (less than 5,000, let’s stipulate) and then explode when car n rolls by a sensor. Something might always go wrong, of course, but say that we can determine that, with probability .95, some unknown person will be killed.
The same is true, though, of most forms of pollution: if enough people are affected, we can say much the same thing. This presents, incidentally, a special problem for folks who want to use a strictly individual-rights-based view. Every day, we take actions (like walking outside) that have very, very tiny probabilities of bringing about someone’s death, in ways for which, precisely because the probability is so small, we wouldn’t normally be held liable even if that improbable event did occur. (I’m assuming a negligence standard here.) Yet if you do the same low-probability-of-harm act enough times, or to enough people, it becomes quite probable indeed that someone or other will be harmed or killed. (Cf. also Derek Parfit’s “harmless torturers,” about whom I’ve written here before.)
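To see how fast those tiny probabilities pile up, here’s a minimal sketch; the per-act risk and the exposure counts are made-up numbers chosen purely for illustration, and it assumes each exposure is independent:

    # How negligibly small per-act risks aggregate across many exposures.
    # The per-act probability and the exposure counts are illustrative
    # assumptions, not estimates of any real-world risk.

    per_act_risk = 1e-6  # assumed chance that a single act harms someone

    for exposures in (1, 1_000, 1_000_000, 3_000_000):
        # Probability that at least one of n independent exposures causes
        # harm: 1 - (1 - p)^n
        p_any_harm = 1 - (1 - per_act_risk) ** exposures
        print(f"{exposures:>9,} exposures -> P(someone harmed) = {p_any_harm:.4f}")

With a one-in-a-million chance per act, a single act is as close to harmless as makes no difference, but by three million exposures the chance that somebody or other gets hurt is already around .95, roughly the same ballpark as the bomb example above.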
On a somewhat lighter note, Radley links to a test that can tell you how much you, personally, are worth.