Bioethics

my bioass.

From the Heidegger-would-not-approve department:

The moral imperative to extend human life for as long as conceivably possible, and to improve its quality by artificial means, is no different from the responsibility to save lives in danger of ending prematurely, Professor Harris will say. Any technology that can achieve this should be actively pursued. |link|

A long life doesn’t mean a quality life. One might think that we have the imperative to genetically engineer kids to learn at even more advanced rates early on, while their brains are still plastic, for a fuller and more productive early life, even at the risk of shortening its length. I’m no ethicist, but I don’t see either consequentialist or deontological reasons for rejecting that possibility from the start.
In any case, it seems like this same argument could be phrased as: we have an obligation to make humans as cybernetic and artificial as possible. Well, that’s just silly. I speak up for machines a lot here, but central to my view is that we need to draw a distinction between humans and machines. Our machines are not just extensions of persons; they are participants in their own right. Ignoring this fact inclines us to think that the sole purpose of technology is to envelop the individual in a technological womb, to protect us from the world. But technology is no protector. Technology doesn’t give us a free win; it changes the game.

2 Comments

  1. As someone at least a little familiar with the bioethics literature, and teaching it now, I have to agree with your basic point. More life is not the same as improved quality of life. No plausible ethical theory will tell you otherwise. I have another point to make about the so-called “culture of life,” but I think you can work it out: life, just like everything else, is valued as we actually do value it. The problem is that it is psychologically implausible to treat all value as “the same kind” (a consequentialist fallacy, most often) or to make the odd claim that the value of a life’s length is not parasitic on the value of that life’s “quality” (fear of death notwithstanding).

    Humans need to get over both fear of death and, more importantly, stupidity. To say that we have a duty to extend “natural” life expectancy to some arbitrary length is just false. False, because even if longer lives are valuable for any reason at all (other than their sheer length, which is implausible), you’ll actually end up with an analysis that shows such actions to fall somewhere between supererogatory (deontological) and value-neutral (naturalistic virtue theory). Of course, if you also assume that long lives will contain lots of, say, “happiness,” then, if you were a Singerite, you would have a commitment to maximize life length, assuming, of course, that the costs of doing so are not greater than the benefits. If you work out that older people will be holding their jobs and expertise longer, that older (and, as a matter of psychological reality, more conservative) people will be in power, the economic consequences of keeping all of these people around (think of the food requirements!) might actually turn out unsatisfactory. The whole argument for lengthening life, of course, relies on the not-so-hidden premise that life itself, sans phrase, has intrinsic and (like gold) dense value. Bullshit.

  2. “Academic philosophers who have speculated on life extension have worried that a longer life would be a more boring life, like Sisyphus rolling the boulder up the mountain again and again… But this worry may say more about the kinds of lives typically led by academic philosophers than it does about life extension.” – Carl Elliott, Better than Well (2003), page 285
