A brief illustration:
A well-written, entertaining, and impressively comprehensive mini-book of an article from the web-journal “The New Atlantis” has been making the rounds lately, and I highly recommend it to you: “Do Elephants Have Souls?”
Whatever you think about the topic, there is a treasure trove of fascinating trivia as well as an excellent collection of elephant-related literary commentary.
For example, did you know that Elephants commit suicide? That they might be able to exchange infrasonics with Blue whales? That they occasionally rape rhinoceroses, perhaps in part due to culling having ‘…disrupted the transmission of elephant culture from one generation to the next’? (Don’t ask about human analogies.)
Which sets up an obvious pun; the joke only works when delivered aloud, in a cockney accent:
A: Hey Costello, did you know that Elephants sometimes rape Rhinos?
C: No Abbott! That’s shocking! What happens when an Elephant rapes a Rhino?
A: Hell If I Know!
Of course, since the answer given by the article is essentially, ‘close enough’, a more honest title would have been, “Treat Elephants Better, You Evil Bastards!”
It’s a classic of moral persuasion in the field of humane treatment of animals, somewhat reminiscent of David Foster Wallace’s ‘Consider the Lobster’, but with an arguably much more deserving (and less tasty) subject.
First we are going to examine reality and show you that most people are making an error – perhaps an innocent oversight – about something empirically observable, and should thus adjust their beliefs about the natural world.
And next, we are going to plug that amended paradigm into your moral calculator and show you that you (well, everyone) ought to modify your behavioral choices. We would like you to adjust your moral calculus in our direction, of course, but even if you don’t, we think you should find this argument compelling even on the basis of your existing moral principles.
Because I am an incurable quant-type at heart, I tend to conceive of such arguments in a pseudo-mathematical fashion.
So, I imagine some empirical metric for moral ‘worthiness’, as in: “The well-being of which is worthy of our moral concern and is thus something we should take into account to some degree when we make moral decisions.”
Although it would be extremely crude to do so (not to mention highly controversial if one draws certain human analogies), I think the tone of the article and the instinct of animal-lovers everywhere is that this ‘worthiness’ is somewhat proportional to higher brain functioning, with perhaps total mass, number, and efficiency of convoluted grey-matter neurons in the cerebral cortex being a passable interspecific proxy. Consider these images:
One can see that Primates, Marine Mammals, and the larger Terrestrial Mammal Quadrupeds dominate. From La Wik’s neuron count list, we can see Humans, Elephants, Whales, Chimpanzees, Gorillas, Dolphins, Monkeys, Dogs, and Cats (perhaps also Horses) earning places at the top of the biological charts. This seems to accord with a lot of people’s observations about relative animal intelligence and their emotional instincts to admire – even love – some of these creatures and believe they should be treated with something other than mere material indifference.
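If one wanted to make that crude proxy explicit, a toy sketch might look like the following. The neuron counts are rough, order-of-magnitude figures from the literature and should be checked against current sources; they are here purely for illustration, not as authoritative data.

```python
# Toy "worthiness proxy" ranking: cortical neuron counts as a crude
# interspecific stand-in for higher brain functioning.
# Figures are rough order-of-magnitude values; illustrative only.
cortical_neurons = {       # approximate neurons in the cerebral cortex
    "human":    16.0e9,
    "elephant":  5.6e9,
    "dog":       0.5e9,
    "cat":       0.25e9,
}

# Sort species from most to fewest cortical neurons.
ranking = sorted(cortical_neurons, key=cortical_neurons.get, reverse=True)
print(ranking)
```

Unsurprisingly, Humans land on top and the usual charismatic suspects follow – which is precisely why the proxy feels intuitively right to animal-lovers.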
So, the primary line of attack of the article seems to be a demonstration arguing for this adjustment:
Well, perhaps. But you know, there’s a problem. The problem is that the author knows the empirical argument is being made in a particular moral climate: a function that maps ‘objective’ moral worthiness to arbitrary ethical imperatives in terms of treatment. That tends to map onto this graph like so:
So, the question is: which of these is the Dog and which is the Tail? Well, if you’re trying to change people’s behaviors, you could try to shift their treatment-mapping curve to the left, some element of their empirical worthiness curves (such as the elephant’s) to the right, or some combination of both.
It all depends on which is easier to do, and I submit the ‘objective’ curve-shift is usually easier than convincing someone to change their moral mapping. After all, you ‘know’ your own moral calculator pretty well, but you probably don’t know much of anything about elephants, and ignorance is the mother of indifference. Once you learn how human-like the creatures are, well, now you’ve got to do some serious recalculation. Then again, we’re ignorant of and indifferent to a lot of things, so whoever is in charge of providing us with education and information about the world has a tremendous opportunity to decide what we learn about, and thus what we come to care about.
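To make the two levers concrete, here is a toy model (every number arbitrary and invented for illustration): treat the moral mapping as a logistic curve, and notice that shifting the empirical worthiness estimate right, or shifting the mapping’s threshold left, produces the same change in the prescribed treatment.

```python
import math

def treatment(worthiness, threshold, steepness=1.0):
    """Toy logistic moral-mapping: maps a worthiness score onto a
    0-to-1 'humane treatment' imperative. Purely illustrative."""
    return 1.0 / (1.0 + math.exp(-steepness * (worthiness - threshold)))

# Baseline: the public's elephant estimate sits below its mapping threshold.
elephant_v1, threshold = 4.0, 6.0
before = treatment(elephant_v1, threshold)

# Lever 1: shift the empirical curve right (teach people about elephants).
after_empirical = treatment(elephant_v1 + 3.0, threshold)

# Lever 2: shift the treatment-mapping curve left (change the morals).
after_moral = treatment(elephant_v1, threshold - 3.0)

# Either lever raises the prescribed level of humane treatment.
assert after_empirical > before and after_moral > before
```

Since both levers move the output, the persuader is free to pull whichever is cheaper – which, as argued above, is usually the ‘objective’ one.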
This of course gives rise to an awfully tempting incentive to be less than fully honest about your empiricism (and less than perfectly forthcoming as to its political ulterior motive). I wonder if we just occasionally observe this in the Social Sciences. (Oh, don’t try to hide. I’m looking at you too, Economics!)
For problems like this, the ‘moral landscape’ of belief reduces to shifts in the mean and in the variance (though only on the low side – human moral systems seem to have an asymmetric bias – but look, it’s just a diagram to ease communication). Here’s how you might label them:
PETA: Include more animals into your care-focus.
SPCA: Treat some charismatic macrofauna humanely, but with a sharp drop-off below that.
NIET: Nietzsche-Bomb Ichiban! Only Übermenschen need apply!
ARIS: Either Aristotle or Aristocratic, take your pick, but it’s the view that there’s a lot of variety in the worthiness of human beings and how you are ethically compelled to treat them.
TRAD: Willing to judge some vilest slice of humanity as deserving to be treated no better than animals, but strictly humane above that level.
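One crude way to picture these labeled positions: give each archetype a threshold (where its mean sits) and a steepness (how sharp its low-side drop-off is) in the same sort of toy logistic mapping. Every number below is invented for illustration only.

```python
import math

def humane_imperative(worthiness, threshold, steepness):
    # Toy logistic mapping from a worthiness score to a 0-to-1
    # humane-treatment imperative; lower threshold = wider care-focus.
    return 1.0 / (1.0 + math.exp(-steepness * (worthiness - threshold)))

# Hypothetical (threshold, steepness) pairs for each labeled archetype.
archetypes = {
    "PETA": (2.0, 1.0),   # low threshold: many animals in the care-focus
    "SPCA": (4.0, 3.0),   # charismatic macrofauna in, sharp drop-off below
    "TRAD": (6.0, 5.0),   # strictly humane above a vilest-slice cutoff
    "ARIS": (7.0, 0.5),   # gentle slope: graded worthiness among humans
    "NIET": (9.0, 4.0),   # only the Übermensch need apply
}

elephant = 5.0  # toy worthiness score for the elephant
for name, (thr, k) in archetypes.items():
    print(f"{name}: {humane_imperative(elephant, thr, k):.2f}")
```

The same creature, at the same ‘objective’ worthiness, gets wildly different prescribed treatment depending on whose mapping you plug it into – which is the whole point of the diagram.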
So, the quick gist of the argument behind the Elephant Article is: perform the Empirical Adjustment from v. 1.0 to v. 2.0 because it is more correct, and also perform the Moral Adjustment from Avg. to SPCA/PETA because you should; it is more ethical.
I find this a pretty extensible and handy tool when I contemplate the political-moral landscape (for instance, on display recently at Marginal MORAL REVOLUTION – all-caps theirs, not mine), and all you have to do is replace ‘animals’ with ‘groups of people’, depending how you label them.
On the Empirical side of things, you can compare those who believe in Human Biodiversity with those who insist on Human Neurological Uniformity. Before everyone calls me mean names: you can replace ‘worthiness’ with any objective metric; it doesn’t matter.
And you can also model moral-mapping functions as well.
Conservatives, even nominal Universalists, tend to emphasize a series of concentric circles radiating outward from the individual through his proximate relations, with moral concern diminishing with distance. So there is one’s family, community, religious group, country/nation, culturally, geographically, and ethnically close foreigners … eventually landing in all humanity. As the Arab Bedouins say, “I against my brother; my brothers and I against my cousins; then my cousins and I against strangers.”
But Western Liberals tend to display a xenophilia that Sailer calls ‘Leapfrogging Loyalties‘. After all, if you need to distinguish your class as superior to your bumpkin compatriots (always a popular source of humorous, camaraderie-building conversation among asses everywhere), then it helps to ally with the alien against him.
One can use this moral-visualization tool to picture the difference between Universalists vs. Particularists, between Immigration Selectionists vs. Open-Borders proponents, between patriots vs. leapfroggers, and between citizenists and bubblers.
The point is to analyze what people write by working backwards. Everyone is trying to influence you to change your behavior, and if they can’t coerce you, then they are going to try to change your beliefs, both type 1 (about reality) and type 2 (about morality).
So, whenever I see the kind of moralistic nonsense exploding in certain parts of the blogosphere – like we’ve been observing lately with the open borders / amnesty issue – I try to break the assertions apart into claims about reality and claims about morality.
Now, it’s usually easy and straightforward to contest falsifiable claims about reality. And it’s also easy to say, ‘well, that’s your opinion’ when someone is being honest and telling you they are merely expressing their arbitrary political preferences.
But it’s a pointless, futile, and frustrating exercise to ever try to prove to someone that they are ‘wrong’ about their moral-mapping, especially if you yourself remain beholden to a shared premise that it is even possible for them to be right about it, and that there actually exists a thing to be ‘right’ about that doesn’t rely on a shared belief in a common moral authority. You know, like God or something.
But now that God’s dead, there’s a great incentive to fill the vacuum of a desire for universal moral certitude, confuse the issue, and deceive yourself and everyone else by conflating the two and asserting that the political is the moral is the rational is the empirical. Which means you’re going to see it all the time, and the only real question is where it will take us.
Samuel Johnson once said, “Patriotism is the last refuge of the scoundrel,” but today, unsubstantiated moral proclamation is his first and only resort. After all, with suckers born every minute, what else does he need? It doesn’t matter if an advocate is wrong about the elephants in reality if he can always compensate and move the moral goalposts to whatever extent necessary to maintain his ability to say it is right to protect them.
This feedback between political ends and moral means may work to the great benefit of the animal elephant. But I’m increasingly convinced it will inevitably result – and sooner rather than later – in the effective extinction of the Grand Old Pachyderm.