Matt McIntosh over at Catallarchy serves up a question that I think is supposed to count in favor of psychological egoism:
Now say some clever scientist, considering him a danger to society, knocks out our sociopathic subject and plants a small device in his head — call it the Artificial Conscience™ — that will cause him intense pain every time he thinks about harming someone. In order to avoid crippling pain, our subject learns to be a much less abusive person. Note that his motives in reforming himself are entirely self-interested. (Note also that this could be inverted so that rather than feeling pain when he considers abusing people, he gets a shot of endorphins from behaving benevolently.)

Now: in what way is a normal conscience disanalogous from an artificial one?
In fact, this is a decent way of illustrating—if it’s not already obvious—why psychological egoism is wrong. And it may be related to what I suspect goes amiss in neuroethicist Josh Greene’s inferences from fMRI results to broad conclusions about deontic vs. consequentialist reasoning. In both cases, you have an automatic-seeming moral reaction that can lead you to some awfully confused conclusions about either people’s moral reasoning or their moral motivation if you detach that reaction from the networks of belief and deliberation that produced it. So, for instance, if I care about other people, don’t wish to harm them, and so on, I will naturally develop an averse reaction to the realization that I have acted badly toward them. But to conclude from this that good behavior is primarily motivated by a desire to avoid pain, precisely as if I were a sociopath implanted with some Clockwork Orange device, is to get things very badly backwards. I’m reminded of a social science experiment I read about a while back comparing different kinds of maternal reactions to crying infants. Most mothers had an empathic reaction probably best described as “concern”, motivating them to try to alleviate whatever was causing the infant to cry; others registered annoyance and distress—motivating them to want to leave. You could, of course, argue that in the former case it’s just that the subjects have more developed moral imaginations, such that they’d continue to feel guilt about the crying infant after they’d left if they didn’t do anything, but you end up needing to add an awful lot of epicycles to make this work.
Anyway, the most obvious distinction is that the sociopath would presumably remove the implant if he could, while most of us would not want to anaesthetise the agenbite of inwit, if this were somehow possible. (In a sense, of course, it probably is: People do sometimes decide they’re so empathic they’re letting themselves be taken advantage of and try, presumably with occasional success, to dampen those reactions.) Ordinary guilt and conscience emphatically aren’t some kind of alien psychic burden we’d as soon do without: They’re parts of a complex emotional ecology deeply embedded in our systems of evaluation. The kernel of truth in the egoistic analysis is that we would not need these things if it weren’t true that a direct feeling of personal distress may be a more effective short-term motivator than more abstract concern for others, especially where there’s an immediate and palpable gain to be had by acting badly. But they aren’t phenomena that can be invoked to explain away an illusory altruism; rather, they’re what we need altruism to explain.
6 responses so far
1 FS // Oct 23, 2006 at 2:35 pm
More The Terminal Man than A Clockwork Orange, I think.
2 apk01004 // Oct 23, 2006 at 4:40 pm
So in your description, what is the conscience for? You have these altruistic twinges that compel you to treat your friends well, and not reach into the poor box, right? And (since your altruism is the primary factor in your decision making) your moral decisions stand or fall on whether you are feeling benevolent at the moment. If you act badly, your conscience bites you later on, but it’s not really necessary; you (altruistic soul that you are) know that you should have done better in the first place, and that’s all you need for next time.
Why do we have consciences, if that’s the way healthy people make choices? Is it just to mop up any lingering sociopathic tendencies we may have? Is it strictly vestigial, or an unintended consequence of the way our brains make moral decisions?
I don’t know about you, but I take the dolors that my conscience dispenses very seriously when I am making a decision. If I am deciding whether to buy somebody a present or spit in somebody’s eye, oftentimes the only thing holding me back is the fear of my conscience.
But then, I would also remove my conscience if I could, were it not for the fact that it usually causes me to make decisions that are in my interest. I’d be interested to know why you find the idea so repellent; it’s not like you think it would pain your conscience to get rid of your conscience. And it’s not like you think it would cause you to act immorally (you have your altruistic nature to prevent that). So elaborate, won’t you?
On second thought, maybe this is an intuition that everybody shares but me. Maybe I’m the only one who finds his conscience to be hectoring and overactive. Maybe I’m really a sociopath. How do you tell? I haven’t vivisected any animals lately.
3 Julian Sanchez // Oct 23, 2006 at 6:03 pm
Well, the “what it’s for” was covered in the “kernel of truth” sentence, I thought. It’s true that in the face of short term temptations to behave badly, the immediate pressure of guilt or shame can be more motivationally effective. But it’s not just *random* stuff that provokes this reaction, and we *can* actually change the triggers. Someone might be raised to feel guilty about, say, masturbating, then get a little older and decide there’s actually nothing wrong with it, and stop feeling any guilt.
Anyway, yes: The reason not to get rid of conscience is that I don’t imagine I’d always act in line with my own higher-order values in every situation in the absence of conscience. (Hell, I don’t always do that in the presence of conscience.) You don’t get rid of your conscience for the same reason someone on a diet might put the cookies on a high shelf.
Anyway, I don’t know why this is any more mysterious, ultimately, than egoistic motivation. If I say “why don’t you want to be in pain? why don’t you want to feel bad about yourself?” there’s not some deep further reason: You just don’t. I don’t see why “not wanting other people to feel bad” can’t equally be a final goal.
4 Matt McIntosh // Oct 24, 2006 at 11:10 am
Thanks for the response, Julian.
“I think is supposed to count in favor of psychological egoism:”
Nah, WYSIWYG. My position is pretty much aligned with yours; I was trying to get people to think about what makes the two different.
5 Jadagul // Oct 24, 2006 at 3:27 pm
Julian, to lawyer for the devil for a moment: perhaps we don’t want to remove the conscience only because our conscience tells us not to? That is, if I didn’t have my conscience I wouldn’t miss it, but my conscience tells me that removing my conscience would be wrong, so I don’t do it. Sort of like if you implanted the device in the sociopath, and one of the triggers was thinking about removing the device: he can’t want to remove it because it doesn’t let him, but if he didn’t have it he wouldn’t want it.
6 Julian Sanchez // Oct 25, 2006 at 2:51 pm
Matt-
OK, sorry, wasn’t sure–apologies for reading in an endorsement that wasn’t there.
Jadagul-
That doesn’t seem very plausible if you’re trying to make this work in an egoistic framework. Say I contemplate removing my conscience somehow. I may in the short term feel guilty about this, but I also know that once it’s removed, I won’t feel any such pangs. So if we want to be psychological egoists, we have to believe the psychic pain I experience while contemplating doing this is so severe that it trumps my expectation of a life free of future such pains. Given that we frequently take a little pain now (the trip to the dentist; the vaccination shot) to prevent more in the future, it seems like we should be able to do the same here if that’s all that’s going on.
Of course, in some broader sense, this is clearly right: If I didn’t care about other people, I wouldn’t care about having the psychological mechanisms in place to get me to act on that concern. But then it’s no longer an egoistic account we’re giving, really.