I have now completed my metaphysics of race series, so here is a list of all the posts with links for easier navigation.
1. Metaphysics of Race: Introduction
2. Classic Biological Racial Realism
3. Race Anti-Realism
4. Races as Social Kinds
5. Social Constructionist Views of Race
6. The New Biological Race View
7. The Ethics of the Metaphysics of Race
8. Minimalist Race and Whiteness
9. Short-Term Retentionism, Long-Term Revisionism
This is the eighth post in my metaphysics of race series. If you want to start at the beginning, you can go right to the introduction to the series, or you can go to the list of all the posts with links.
In the last post, I introduced the notions of retentionism, revisionism, and eliminativism. Should we give up on race notions (eliminativism), keep them as they are (retentionism), or seek to modify them (revisionism)? I argued that in the long-term we ought to seek to change them but in the short term need to keep them as they are. What does this look like in practice? I will start by looking at a couple issues regarding the use of race language itself, and in the next and final post in this series I will look at practical behavior that seems supported by evidence-based studies to serve the short-term and long-term goals of my approach.
The Ordinary, Minimalist Concept of Race
I want to start by thinking about what Michael Hardimon calls the minimalist concept of race. He also calls it the ordinary concept of race. He is distinguishing this notion from a more robust notion of race, like the scientific essentialist notion that races have built-in genetic components that determine all kinds of things about us, such as our intelligence, moral value, and capabilities of various sorts. He thinks that notion is no longer part of the ordinary concept of race. There might be experts who look at the history of race who see that it was true of early ideas in the modern development of the concept, but ordinary people do not think that that's what race is. One way we know this is that philosophers working on these questions have collaborated with sociologists to do careful empirical study of people's notions of race and of particular races. We can also look at how language is used by ordinary people, as opposed to scientists, sociologists, or philosophers working on these questions. Words mean what people use them to mean, and all it takes for a word to change its meaning is for people to use it in a different way for long enough that the new use gradually becomes a new meaning for the word. If we stop using it in the first way, then that first meaning is no longer its meaning.
Hardimon argues that the minimalist notion of race is the ordinary concept of race of most people. It's the view on the street. We don't let academics or activists decide what words mean unless we start using the words the way the activists and academics are using them. Once we do, then those uses are part of the meaning. But we don't defer to experts on the meaning of a term. The experts have to do empirical research to see how people are using terms, and we should listen to their expertise on that. But it's not like physics, where we just have to hear what those who have studied subatomic particles think about the nature of electrons. There are no such experts on what race-language refers to, other than those who have done empirical research on how language is used.
So here is Hardimon's account of what it is to be a race:
"A race is a group of human beings
(C1) that, as a group, is distinguished from other groups of human beings by patterns of visible physical features,
(C2) whose members are linked by a common ancestry peculiar to members of the group, and
(C3) that originates from a distinctive geographic location"
I tend to think that's a pretty good definition. It might require a tweak or two to allow for weird cases like a duplicate of Chris Rock, say, appearing out of nowhere and with no ancestry or history, who I think obviously would still count as black. (But I realize there are philosophers who think the Chris Rock duplicate wouldn't even be human, never mind black.) I also think Tuvok, a black Vulcan in Star Trek, is obviously black, and he certainly isn't human. But those tweaks are oddities on the extremes of our language use, and problems with that sort of thing are similar to problems in deciding whether someone is the same person after being dismantled entirely by a Star Trek transporter while a duplicate is constructed somewhere else out of new matter that looks and talks and has memories just like the original's. I happen to think the transporter killed the original person and created a duplicate, but a lot of philosophers agree with how Star Trek presents such cases and think it merely transports the original person to a new location. We don't need to think our disagreement on such side cases tells us our definitions of "person" or "human being" are wrong. We can push those issues aside for most discussions, and the same is true of weird side cases of race (although in both cases I would want to get to those discussions eventually to put a full theory together).
But I agree that Hardimon's account is the basic idea that most people have of what it is to be a race and what is true of the racial groups we ordinarily refer to. Hardimon's definition notices that races are groups of people with differences. Those differences have to do with patterns of visible physical features and common ancestry distinctive of each group. That common ancestry originates from a certain location.
That's it. There is no commitment to whether races are social kinds or biological kinds. For all this says, they could be both or neither or one or the other. This definition captures what it is to be a race in a way that lets us still have the debate about which kind of thing the races we have are. And it seems to get the basic aspects of racehood correct, in my view, and Hardimon and others have produced empirical research that I think backs that up.
Given such a definition, I want to get to an issue that looms large right now in certain circles. There is, in my view, a very harmful way of talking about certain kinds of race-based phenomena. It's also a very popular way of talking, and I don't think those who talk that way realize how harmful it is to speak the way they do. It is confusing to many who hear them and don't understand them, and it reinforces the very things about race that we should want to revise long-term and remove from our notions of races. It in fact is not in the minimalist, ordinary concept of race, so speaking this way reinforces aspects of race that are not even in the ordinary concept. In my view, what we ought to do is steer more toward thinking of race in that way and away from other ways, including ways tied to the social constructions we add to race. We need to realize those constructions are there, but we don't want to reinforce them.
This way of talking that I think is so destructive is common in a field that is sometimes called whiteness studies. It is common on an academic level among people who do whiteness studies or who work on questions of systemic and structural problems related to race. It has filtered down into a certain segment of the general public, especially among activists, but I am seeing it more and more among people who are just becoming aware of and interested in race issues (those who might describe themselves, or whom their critics would describe, as becoming more woke). The National Museum of African-American History and Culture, a subsidiary of the Smithsonian, recently caused a bit of a flap by posting a document to their website that engaged in this use. The document was removed, and they replaced it with this justification of that use of language.
I think this way of talking is both false and dangerous. It involves thinking of and speaking of whiteness as something other than being a member of a race in the sense of the ordinary concept of race above. It is becoming common to speak of whiteness as an ideology or a set of social constructions or a set of advantages rather than as simply the property of belonging to a certain group. In this way of seeing whiteness, it is the way that systems of power and influence, advantage and privilege position white people. It is an agenda of seeking to preserve those and continue to institute them. It is the way that society maintains white advantage. This is a real phenomenon. Most of it is unconscious. Some of it is the very indirect consequence of practices long ago that set up systems that still have those results today. Some of it is just the inevitable result of not very carefully evaluated ways of living life as a member of a majority group. Some of it is from having been affected by stereotypes to have biases that are often unconscious. Some of it is having absorbed stigmatized notions and a sense of what is normative from what is around us. All this is to say that I am not denying the phenomenon that people are calling whiteness. What I want to say is that we should not be calling it whiteness, and I think it is morally wrong to do so.
There are several strong reasons for thinking this way of talking is deeply immoral. And no, it's not that it's racist. I believe I've said enough to show that it's not racist. It's targeting a genuine phenomenon and simply mislabeling it, and that sends a message that people wrongly perceive as racist. They can be blamed for that only to the extent that they understand this gross misuse of language is occurring, and many of them don't. The most obvious reason not to talk this way is that it is inaccurate. You end up saying false things. If you say that whiteness causes some kind of disparity, you are telling a falsehood. Whiteness is simply membership in a group that has ancestry and surface-level physical features in common, going by the Hardimon definition above that I have endorsed. Unless you are a die-hard essentialist about races, thinking races have essential natures that make people racists, you are simply saying something false when you talk about whiteness in this way. We should care about truth. Truth is important. Whiteness does not actually cause anything of the sort. Merely being a member of a group that has certain visually identifiable features and ancestry does not cause anyone to resist calls for a living wage. So why are we calling it whiteness?
Marilyn Frye saw this problem 18 years ago when she distinguished between what she called whiteliness (which corresponds to the use of "whiteness" I am critiquing) and mere whiteness, which corresponds more to what the ordinary, minimal concept of race would say that whiteness is. She urged calling the stuff we should avoid and seek to change by some new term rather than trying to co-opt an existing one that already has a meaning, a move that could sow nothing but confusion among those not in the insular circle of those talking about whiteness in this way. Alas, her voice did not win out.
A further reason we should avoid this way of talking is that it confuses people. The point of communication is to get what you are saying across to the person you are talking to. This use of the term is very insular. People who read a lot of scholarship about race understand it. People who spend a lot of time in activist circles understand it. But go back to the ordinary concept of race. What do you expect most people are going to hear if I say that we need to dismantle whiteness? What do they hear if you tell them that oppression of black people is whiteness? What do they hear when I say that I am engaging in whiteness when I exhibit unconscious biases against certain racial groups? What do they hear when I say that it is whiteness (and it is bad) to expect people to be on time for something or to want to do well in school? What do they hear when I say that it is whiteness to have little concern for those of other races or to use white privilege to discount the experiences of others? I would add that the newest trend (and it's all over social media) is not to stick with using "whiteness" in this way but to extend it even to "white people," i.e. saying that white people do such-and-such and then, when challenged on it, saying that they don't mean it's something white people do but something that whiteness does, in this already-problematic sense of the term. And that takes us even further along the path of making the statement sound as racist as possible while insisting it does not mean that.
I will tell you how most white people hear this sort of statement. They feel as if they are being accused of being the most despicable racist possible. They hear it as saying that all white people are white supremacists and neo-Nazis of the worst sorts, because the statement is connecting something terrible and evil with whiteness, as if there is something like a racial essence (something biologists rejected more than a half century ago) behind why white people are so evil. In other words, it comes across as the most vile racism there can be. Saying something that sounds like that in order to try to communicate something very different is simply a big communication fail. I feel like posting one of those "You had one job" memes with someone using the word "whiteness" (or worse, "white people") in this sort of way. It's almost as if people who talk this way are trying their hardest not to get their message across and instead to try to make lots and lots of white people mishear them in order to be able to accuse them of having white fragility when they object. As a friend of mine has been saying a lot recently, I'm generally not one to go for conspiracy theories, but this seems like a case where it's sorely tempting. I hope that's not the motive, but it's accomplishing that goal very well. People are drawing their battle lines. I spend lots of time literally every day having to explain what people mean when they say stuff like this and how it's not the racist thing that it sounds like, and people simply don't believe me. They think I'm trying to explain away and justify actual racism for simply explaining what people who talk about whiteness this way mean. That alone is an incredibly powerful reason never to talk this way. Ever. It's easily one of the best ways you can divide people over race without ever lifting a finger.
As if falsity and miscommunication, leading to divisiveness, were not already enough reasons, there is a more subtle reason why we should not talk this way, one that connects directly with the overall project of short-term retentionism and long-term revisionism. It actually gets things backwards. What we want to do is recognize racial realities and use racial terms in a way that captures what really happens, but we want to move toward removing the problematic associations and assumptions connected with those terms in practice. We want to move toward the ordinary, minimalist concept of race, and speaking this way goes the wrong direction. It actually reinforces the aspects of race thinking and racial interaction that we want to move away from.
I have two sons with autism, and one of them has low impulse control. Behavior is a frequent topic of conversation in our household, and we have spent much time looking at evidence-based research about autism and behavior. One of the things an evidence-based approach will do is expose him to various conditions and then see how he responds. He is very quick to figure out what people will respond to. The behavioral therapists who worked with him would stop responding to his attempts to fake-hit them to get away from a certain task, and he would quickly learn that fake-hitting wouldn't get him the iPad he wants. He would have to ask or write it out or sign, depending on which method of asking they were reinforcing during that trial. When they gave him the iPad after he fake-hit them, they reinforced the fake-hitting. When they ignored the fake-hitting and gave him the iPad only when he asked, they were reinforcing the asking. It is well-documented that in cases like his nearly any audible or physical response to the behavior you want to reduce will reinforce it, because he's likely either seeking sensory feedback (and you are giving it to him) or trying to get attention (and you're giving it to him). So when he pushes me during one of his online sessions with his teacher during this pandemic, and I respond by telling him "No" or pushing back or reacting in any other way, then, as careful study of him and other people like him actually shows, I am reinforcing the behavior. The way to reduce the behavior is to ignore it when he does it but to model for him what he should be doing and to reinforce that when he does it. That's a bit counterintuitive, but it's what careful psychological studies have shown over and over again in this kind of case.
How does using "whiteness" the way I have been describing reinforce what we want to remove? Well, it builds it into the very definition of whiteness. You can't very well tell people that whiteness is evil and that they need to divest themselves of it, all the while building into their very identity that they are the kind of people who have essences that manifest themselves this way. Those who use the term this way don't believe that, but they are speaking publicly to people who see whiteness just as their belonging to a group that is grounded in skin color, hair type, ancestry, etc. When you say of such a group something that sounds like it is part of their essence to behave in this kind of way, it reinforces all the associations with whiteness that we should be working hard to remove from our racial identities. You can't very well divest yourself of whiteness when people are working very hard to send the signal that being white includes all these terrible things. And I can't imagine what this is reinforcing about white people in the minds of those who talk this way regularly. We don't want to build racism into our race definitions. We want to move toward the minimalist concept so that we can affirm that racial essentialism is false and not send any messages that come across as assuming it. We want to remove racist elements from our racial concepts, so let's not reinforce those notions by using language in a way that divides rather than brings us together, that sounds like it rests on assumptions about racial essences that science disproved 70 years ago, that confirms all the notions that any racially forward person should not want to reinforce.
If that doesn't convince you, compare the parallel way of talking that we would get if we did the same thing to blackness. If we want to see whiteness as an ideology of white supremacy or a set of systemic structures that perpetuate white normativity, then we should also see blackness as an ideology of black inferiority or a set of systemic structures that disadvantage black people. We should not see blackness as merely belonging to a category of the minimalist race. We should not see blackness as cultural, either, not if the cultural elements we are referring to could ever be seen as positive. Blackness would have to be (to be parallel) the forces in place that operate to exclude, stigmatize, and enforce disadvantage. Blackness would be just as evil as whiteness. It would be because of someone's blackness that they do less well on standardized tests. It would be because of someone's blackness that they don't know what clothing is appropriate for a job interview or would need to be prepped for how to dress for such an interview. Can you see how racist that sounds? Yet it's precisely parallel, and if it's conceptually legitimate to use "whiteness" in the way that people are, then it is equally conceptually legitimate to use "blackness" in such a way. Indeed, both concepts are getting at a real phenomenon. But should we call that phenomenon whiteness and blackness? I don't see how it is remotely legitimate to do so, either in terms of accuracy or when we evaluate this way of speaking morally.
I have no problem talking about these phenomena, but please don't do so by calling them whiteness or blackness unless you want to perpetuate the racial disparities and stigmatized associations that we already have, indeed unless you want to reinforce those and make them stronger and to foster pointless division over a mere disagreement in language. There are philosophers who define races as groups that are put into a hierarchy, so that they wouldn't be races at all without the hierarchy. They do this to address the fact that hierarchies do exist in how we see and treat each other. But it's counterproductive to define races in such a way. We need to go the other way, which is why the minimalist view of race is so important. It moves us in the direction of allowing us to refer to races and say that racial groups are in fact treated in a hierarchical way while also not building it into the notion of races that they are hierarchical. So we can move toward the revision, which you can't do if whiteness (or blackness) is evil. But we can name the evils that are present by having racial terms that we preserve and can use to state such problems.
I was going to finish up the series with this post, but it got too long, so I will be continuing in one more post. So the ninth post will look at some much more practical matters of how to live in a way that keeps in mind both short-term retentionist and long-term revisionist goals.
One of the philosophy Facebook groups I'm in was talking about Ayn Rand, and several people expressed consternation that some philosophers treat her seriously as a philosopher. I think that dismissal is a big mistake. I think seeing her as a non-philosopher undermines the effort to convince students that we are all philosophers, and some of us just choose it as a profession by publishing and teaching philosophy, but philosophy is for everyone, and it's good for us all to engage in it. I always point out when I teach her that she was not a professional philosopher in the sense that most of the recent philosophers we are reading are, but that's also true of most of the earlier philosophers we read. Socrates, Augustine, Thomas Aquinas, John Locke, Gottfried Leibniz, and many others who are part of the canon of Western philosophy had their main sources of income doing things other than philosophy.
A few things that I see as a reason for including Rand in the canon:
(1) She is a woman, and I think it's good for students to see women doing philosophy. If philosophy is for everyone, then only giving them readings by men undermines that.
(2) She is an outsider to the discipline who has offered a theory that contributes to the philosophical discussion in a way that is unique.
(3) Her particular contribution is worth having on the table among the views students are presented with. She is an egoist but not an entirely consequentialist egoist, and that's interesting. It's a deontological/virtue egoism, and we don't quite see that in Epicurus (who has a consequentialist/virtue egoism) or the Sophists (who seem purely consequentialist egoists), whereas she thinks we have deontological obligations to ourselves to seek our own good and to other people to allow them to do so. That's a unique view in the history of philosophy and shows creative theoretical philosophical reasoning.
(4) It's a way of exploring what a moderated Glaucon-style egoism can look like in modern times. I cover Book II of Plato's Republic and have them read Antiphon the Sophist, and her view allows some fleshing out of what that can look like in the 20th century.
(5) Her examples of bad character, the beggar and the sucker, while way overblown in her actual application to real-life categories, still represent actual vices, and everyone knows examples of people who are like those, so it illustrates both Aristotle's doctrine of the mean and how that can be misapplied if you misrepresent what's going on in terms of the facts of a case.
(6) Her critiques of altruism, while they don't actually show altruism to be bad, offer some good criticisms of ways that it is sometimes done, without any concern for the desires of the people intended to be helped or what's actually in their best interests rather than the projected interests of those doing the helping.
Now I do think teaching her can be done in a responsible way or an irresponsible way, and teaching her without exposing them to some opportunity to hear critiques of her ways of thinking would be irresponsible. But that's true of every philosopher I teach. I haven't found one that I think got everything right. I either get to critiques when I cover later philosophers' arguments against them, or I look at objections while covering them, because we won't get to those with later philosophers. Some of her arguments rely on several overstatements or misrepresentations of those she disagrees with, and it's irresponsible to teach her sympathetically (as I try to do, at least at the outset), without also confronting some objections to those arguments.
I think it's good for them to be exposed to her approach, her creative and unique way of putting together various approaches that we have already covered in the class, and the glaring problems her overall picture faces when you look at it more carefully. That's precisely what a philosophy class should be doing, and presenting her as not a philosopher gets it so very wrong that I resist that kind of attitude pretty strongly. She was certainly doing philosophy, and some aspects of her approach to doing philosophy are actually a good model for doing it creatively and thoughtfully. Some were not, and they serve as a good model for how not to do it. In that light, why wouldn't I teach her?
I just read a thoughtful post on the Pop Culture and Philosophy blog about the concept of balance in the Force in Star Wars. I've been struggling to understand that concept myself as I've been reading through a lot of the Star Wars comics, both Legends canon and new canon, and thinking them through in light of the movies, the Clone Wars show, and the Rebels show. I don't think the post I linked to has it right, but I'm linking to it as a thoughtful piece trying to come to grips with this issue. A quick Google search revealed quite a number of other views on this, again none of them seeming to me to get things quite right. So I wanted to put some of my own thoughts on this into writing; here are some rough musings attempting to put many months of thought into something somewhat digestible.
Here are several things that didn’t make a lot of sense to me, when put together:
The Supreme court released a bunch of opinions yesterday. One of them isn't all that interesting to me, but a little exchange on a side point caught my attention. From the SCOTUSBlog writeup:
In a five-page concurrence, Justice Kennedy went out of his way to raise concern over the proliferation of solitary confinement in U.S. prisons, bemoaning the extent to which "the conditions in which prisoners are kept simply has not been a matter of sufficient public inquiry or interest," even though "consideration of these issues is needed." Thus, he concluded, "[i]n a case that presented the issue, the judiciary may be required . . . to determine whether workable alternative systems for long-term confinement exist, and, if so, whether a correctional system should be required to adopt them." Justice Thomas responded in a rather curt, one-paragraph opinion, noting that "the accommodations in which Ayala is housed are a far sight more spacious than those in which his victims . . . now rest," and that "Ayala will soon have had as much or more time to enjoy those accommodations as his victims had time to enjoy this Earth."
I'm not interested in adjudicating that particular dispute, but I'm interested in (1) its very existence and (2) the particular reasoning used in each case. There's a correct moral principle behind each justice's point (just retribution for a heinous act and ensuring we don't ourselves do evil in how we treat those who do evil). It seems as if this might be a case where we can't satisfy either concern without going against the other concern, so we have to decide which principle we'll give more importance to. These two justices end up on opposite sides on that question.
Some of the early reports about yesterday's report from the Vatican conference on family issues seem to me to betray a serious misunderstanding of Catholic teaching on these issues. In the NPR story I just linked, we see two views being put into contrast that I don't think any Catholic who understands the concepts involved would recognize as being in conflict. On the one hand, Catholics have long taught that homosexuality and same-sex sexual relationships are intrinsically disordered, and Catholics insist on the wrongness of any sexual relations outside marriage. On the other hand, this report speaks of Catholic communities "accepting and valuing their sexual orientation" and "positive aspects to a couple living together without being married". It all depends on the context and what is meant by these expressions, but I see no reason yet to take these in a way that contradicts anything in Catholic teaching.
The crucial element is the concept of intrinsic disordering. If something is intrinsically disordered, it means that the good in the relationship is put together wrongly in some way. It means either something is missing, or the parts are not working together the way they ought to. But the concept of intrinsic disordering requires there to be some good, since intrinsic disordering means something is less good, as opposed to some positive evil being introduced, which is impossible on an Augustinian conception of evil that serves as the basis of the notion of intrinsic disordering.
You can't have something intrinsically disordered that doesn't have some positive good. No positive good means no existence. Intrinsic disordering means a disordering of positive good. That means there is positive good. And that means this change in emphasis isn't a change in doctrine, if all it's saying is that there is some positive good in same-sex relationships and in unmarried couples living together (implying sexual relations).
In particular, you can find value in all manner of things about a same-sex relationship: you can recognize the good in a couple's self-sacrifice for each other, the good in their parenting of any children they might have, the good in the degree to which they fulfill their desire for companionship, even some level of good in the sexual pleasure they provide each other. You can do that even if you think the relationship itself is immoral and if you think they're seeking the wrong object to fulfill sexual desires and the wrong ways of fulfilling their companionship needs. You couldn't think they are good in every respect, but you have to think there is some good there, or else there would be nothing. That follows from the very notion of intrinsic disordering.
Similarly, the Catholic church holds that there are good things in opposite-sex sexual relationships between unmarried people. Catholic doctrine declares such relationships immoral. There is a difference in that they're not disordered in terms of the object of sexual desire (or at least in terms of the sex of the object of sexual desire). But there's plenty of intrinsic disordering of a different sort in those relationships (e.g. the marital status of the two people, which is an issue to do with the object of one's desire, just not about the person's sex). Most importantly, the person and relationship are placed on a higher level than God, because they refuse to honor God's command to marry before having sex. That is an intrinsic disordering, since it demonstrates one's desires are not well-ordered, which is what virtue is on an Augustinian view. Any sin is an intrinsic disordering, since it involves a disordering within one's desires. That assumes some good in the desiring and in the fulfillment. Otherwise there would be no desiring or fulfillment.
Compare the intrinsic disordering of a shoe fetish. What's disordered about that is that shoes are not an appropriate object of sexual desire. Homosexuality, by contrast, involves a desire for a human being. Human beings are the appropriate objects of human sexual desire in general, even if there is some intrinsic disordering when it involves same-sex desires. That means there's something good about same-sex desire that isn't present for the shoe fetish. It's not clear to me that the Catholic statement is doing anything more than acknowledging things like that. That's compatible with thinking same-sex relationships are intrinsically disordered to the point of being immoral. I think people who don't have a view like the Catholic view will be inclined to think that anyone who thinks homosexuality is intrinsically disordered must think it the height of all evil, with nothing redeemable or good about it, but that's simply not what the view holds. Many who hold the Catholic view might not see this, but there's a difference between how proponents of a view understand it and what the official view is, at least when you're talking about a view held by those who believe their views come from some authoritative source. (The No True Scotsman fallacy is simply not an issue when you have an authoritative person, text, or organization that determines what the official view is. There is a genuine Catholic position, and those who don't hold that view do not hold the Catholic view.)
There may be a different emphasis here, but it's not at odds with thinking the relationship is intrinsically disordered, any more than the idea that it's good to support our troops is at odds with being opposed to a particular conflict they've been fighting in. So don't believe anyone claiming that this is a change in Catholic doctrine. It's not a conflict with or departure from the concept of intrinsic disordering. It in fact brings to the fore something that follows from the notion of intrinsic disordering. Perhaps that's something that those who believe homosexuality is intrinsically disordered should be emphasizing more. But it's not a new position. Anyone claiming the two are at odds simply doesn't understand what it means to be intrinsically disordered, or they couldn't think that.
Thabiti Anyabwile has come under a lot of criticism from many quarters for his recent post on the gag reflex and Christian opposition to same-sex sexual acts, increasingly called "homosex" of late. [I'm still getting used to that word, because it still feels like an adjective to me (one without its proper ending), but it's a useful word compared with writing out something like "engaging in same-sex sexual activity," so I will use it.]
He has just posted a followup responding to some of the criticisms as well.
As I see it, there are several issues going on here, and I don't think all the participants in the conversation are keeping them straight. There are a number of ways his argument is being misrepresented (and then made fun of in pretty vile ways as a result), but there are also some genuine philosophical difficulties with some of the things he's saying, and I'm not entirely sure I agree with some of the key points. Even so, some of the things he's being unfairly mocked for by a lot of the opposition seem to me largely correct and even relatively obvious, things I'm not sure many people would really want to rid themselves of if they were to think their ethical theorizing through more carefully. So maybe they should refrain from making fun of them, if I'm right about that. I want to work my way to that gradually, however, with a bit of a review of some of the key philosophical moves that have been made about the connection between morality and emotion.
1. Ethics and Emotion
I'm not interested first in the application to homosex, although I will say a few things about that later on. I'm primarily interested in the general strategy of ethical reasoning that involves paying heed to emotions like disgust. A good friend of mine complained on Twitter about the arguments found in the original post, arguing that if we allow disgust to guide our ethical judgments it would mean racists' disgust for racial interaction could generate moral principles against interracial marriage (or more particularly against interracial sex). If disgust shows us anything at all about genuine moral principles, the argument goes, then we have to follow our disgust whatever it leads us to loathe. And people can loathe all sorts of things, in ways that don't at all track genuine moral principles. So we shouldn't rely on our disgust to show us anything about morality.
I think this argument is a mistake. The fact that disgust can be directed against things that are not wrong does not show us that disgust isn't ever a guide to morality. All it shows us is that disgust can be fallible. It can sometimes be directed against things that are not morally wrong. But the same is true of emotionless reason. Emotionless reason presumably led Immanuel Kant to say that lying is always wrong. However, it also has presumably led other philosophers to say that lying, while usually wrong, is sometimes the morally right thing to do. If emotionless reason can generate both principles, then obviously it's fallible. But that doesn't mean it never helps us end up with correct moral principles. It just means it's fallible. It sometimes gets things wrong. We can't trust it 100%. But only a radical skeptic (or someone who grants the radical skeptic far too much, as Rene Descartes did) would claim that a source of information is worthless just because it's not 100% reliable. So I don't think we can rule out a connection between emotion and morality so quickly.
As it happens, recent work in feminist ethics has drawn a lot of attention to attempts to separate emotion from ethical reasoning that have led to a bias against ways of moral reasoning that have tended to be more paradigmatic of women than of men. This bias has had the effect of marginalizing women's ethical reasoning, to the detriment of our overall ethical reasoning. Alison Jaggar has argued that much of the history of ethical theory, which happens to have been done mostly by men, has either treated emotion as something completely isolated from ethical reasoning (as Kant did; emotion cannot be trusted, and the only way to get ethical understanding is to reason in a way that doesn't involve emotion) or as the foundation of all our ethics but a foundation that has no basis in any ethical truth (as David Hume did; there is no ethical truth, because ethics is pure emotion and not reasoned).
Thankfully, Jaggar is wrong about the history of philosophy. Sometimes it's because she misinterprets particular philosophers, such as her reading of the Stoics as being opposed to all emotion, which she can be forgiven for, because, well, they do actually say that. But philosophers are often bad reporters of their own views, and it turns out it's not feelings that the Stoics think we should rid ourselves of. It's bad reasoning, which is how they define emotion. There are plenty of feelings, according to the Stoics, that are perfectly fine to have as long as they're compatible with reasoning well. Certainly the Stoics emphasize reason and say they oppose emotion, but what they oppose isn't what we normally call emotion. The Stoic view on emotion is perfectly compatible with taking what most of us call emotions to be very important for ethics. In fact, having the right feelings, ones compatible with reason, is even crucial for the Stoics. They just won't call those feelings emotions.
Jaggar also seems to me to underemphasize the ways that historical philosophers even put a good deal of effort into organizing their ethical theories around emotions. Plato considered it extremely important for the best possible life that your emotions be engaged in appreciating goodness itself on an emotional level. Aristotle explained some of the most important virtues as simply having the tendency to respond to your circumstances with the right level of emotional response. Augustine's entire account of virtue makes it emotional: virtue is having well-ordered love, whereby you love the best things the most and the less-good things less fully. I myself think all three of them were largely right in these things. Ethics is very much tied up with emotion, and attempts to separate ethics from emotion the way Hume and Kant did are, to my thinking, disastrous.
But several questions remain. It's one thing to say that ethics involves having the right emotions. It's another to say that our emotions are, even sometimes, a good guide to the right ethical principles. We certainly can't just read our ethics off whatever emotions we happen to have. There are plenty of times when my emotional response isn't proportional to an offense that's committed, and I either overreact or underestimate a wrong that's taken place. Or I might not be properly placed to experience the good in something and not be as able to rejoice as I should at some good. There are lots of cases where our emotional judgments are a little off, and there are enough cases, such as with the racist example above, where they are drastically off. Indeed, a Christian who believes in the doctrine of the fall should be the first to recognize that, and that was even crucial for Augustine's ethical theory. Our emotions are often not directed in ways that remotely match up with what's truly good.
2. Ethics, Disgust, and Moral Reasoning
But that doesn't mean there's no role for disgust to play in helping us to see certain ethical truths. Jaggar's feminist treatment of this subject is a good example. She argues that women, having been oppressed for the entirety of recorded history by being told that their emotions are wrong when those emotions contradict how they're being treated, are nevertheless right to pay heed to those emotions, because those emotions are genuine clues to the reality that our socially-constructed narrative is otherwise blinding us to. A member of an oppressed group might have absorbed the narrative that they, as unintelligent slaves, have no rights and need the help of those who are guiding society along to make their decisions for them, but their emotions tell them that the views they've officially adopted on the level of conscious reason are somehow wrong. This can be so for any oppressed or marginalized group, not just women, but she picks out women as a group because women have been told (less often in outright words in recent years, but still through social conditioning) that they are emotional rather than reasoning beings, that their emotions are less trustworthy than the reasoning that's been identified as paradigmatic of men. I don't agree with everything Jaggar says along these lines, but there's quite a lot of it that strikes me as right about the history of how women are viewed and about some of the elements of how we (men and women today) are still conditioned to view each other and ourselves.
So if Jaggar is right, then there are at least some contexts in which emotions will be even a better guide to truth than the more emotionless reasoning that can easily be simply the reflex of our socially-conditioned environment, our lip service to the biases of our day. Now emotions can do that, too, as evidenced by racist disgust at interracial sex, for example. But all Jaggar is claiming is that sometimes emotions can be a better guide to moral truth than whatever process underlies what we're conditioned to call emotionless reason. And that seems to me to be absolutely right.
Even more, I think there are cases where we can show that our emotion adds something to moral reasoning that you simply cannot get from the emotionless reasoning. A friend of mine who works in aesthetics once gave a case that seems to me to indicate this pretty nicely. Suppose you're eating a kidney and a little bit disgusted at it. This is not moral disgust at all. You just ended up in a situation where you're expected to eat something that you don't like the taste of, and you find it a bit disgusting. But after you've been eating it for a few minutes, you discover that it's a human kidney. Suddenly your level of disgust goes way up. That's not from the taste of it, which didn't change, or from any emotionless reason, because emotionless reason has no emotion and thus by itself wouldn't increase your disgust. Rather, your level of disgust increases because of some moral principle lying behind the disgust, one that upon rational examination would easily stand up. Eating humans is morally worse than eating a kidney from some other animal. It should disgust us, and it does. We should feel greater disgust at eating humans, if we're morally healthy. That doesn't mean that it follows that eating humans is always wrong. It's compatible with this disgust that eating humans who died independently of our actions in a case of survival is morally allowable. Yet it does seem that there's a moral principle lying behind the disgust, one that very few people would question, and it's hard to argue that the disgust isn't a sign of that moral truth. The disgust signifies that truth. Its continuation from generation to generation helps maintain our resistance to cannibalism, and we should be glad for that.
(I should note that this example is a lot like C.S. Lewis' example of finding out that you're eating a deer that was a talking deer in The Silver Chair. The difference there, however, is that those eating the deer didn't have any disgust until they found out it was a talking deer. Here there's already disgust at eating the kidney, but it takes on a whole new level when you learn that it's a human kidney.)
There are several different things someone might mean when they speak of imposing religious beliefs on those who don't hold them. There are two different axes to pay attention to. One is what is meant by "imposing", and the other is what is meant by "religion".
On the first axis, what is meant by "imposing", I can think of a number of things in decreasing order of severity:
1. Forcing people with threat of force or imprisonment
2. Coercing people by some means less severe than force or threat of imprisonment, e.g. by making things most Americans consider rights or close enough to it (the right to vote, to drive, to hold an independent job) conditional on compliance
3. Incentivizing by some means less severe than coercion (e.g. government influence on social acceptance, tax credits or deductions, or minor criminal penalties such as a fine)
4. Calling on people to change their mind or behavior, perhaps with strenuous argumentation
5. Explaining one's attitude on the issue
6. Simply stating what one's view happens to be
On the second axis, what is meant by "religion", I can again think of a number of things, in decreasing order of centrality to religion:
A. espousing a statement of faith or unfaith (that they might not actually agree with)
B. engaging in certain behavior that is motivated (on the part of those instituting the policy) merely by religious beliefs and not by any attempt at rational argument
C. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but also by some attempt at rational argument, even if it's not a strong argument
D. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but is also supported, by most who hold it, with rationally-motivated arguments that, while disputed, are at least philosophically-driven, operating in addition to (or, for some, instead of) the religious motivation
E. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but is commonly held by most people, most of whom take their motivation to rest on grounds entirely independent of religion
There are those who insist that even stating one's religious views counts as imposing them in an improper way, never mind preaching them. Fortunately, in the United States even 4A is speech protected by the First Amendment. I'm not about to argue for 1 either, so we're really looking at 2 and 3. In the history of the world, we've certainly seen pseudo-conversions coerced at swordpoint or recantations of religious beliefs at the threat of martyrdom. In comparison with that, the idea that one is imposing one's religion merely by trying to make a case for it seems absurd. It's similar to the War on Christmas people complaining of Christians being persecuted in the United States just because some schools refuse to sing Jingle Bells on the ground that the song is tied to a religious holiday. (In my experience, schools nowadays don't reduce Christian content at Christmas but simply include it alongside religious content for other religions' holidays too, so this complaint is getting even more stale than it was when I was younger, when such songs might have been excluded on the strange claim that they're somehow religious.)
We do have some laws that are all the way down to 1E or sometimes 1D, however. For example, same-sex sodomy laws, bans on selling contraceptives, and bans on teaching evolution (all deemed unconstitutional now) were often religiously-motivated but did include arguments, often arguments widely accepted at the time, that didn't rely on religious premises. Evolution was thought not to be as well-supported as its proponents claimed, and creation science has insisted that evolution is just bad science. (This isn't about whether their arguments are good but about what kind of arguments they are.) Similarly, bans on same-sex sodomy were justified more by disgust at such acts than by any biblical prohibition on them, and the Connecticut ban on selling contraceptives was supported by an argument about population control.
But there remain some laws at level 1E or 1D and some attempts at instituting laws at this level. Sodomy laws have been deemed unconstitutional since the Supreme Court's 2003 Lawrence v. Texas decision, but incest laws vary from state to state. It's not criminal in Rhode Island to have sex with a close relative, but you can't marry them unless you're Jewish (to allow for Levirate customs, I assume). In Ohio it's criminal to have sex with your children, but only the parents are criminally liable, even if the children are adults. But in Massachusetts you can get 20 years in prison for having sex with your adult sibling, even if one of the two parties is demonstrably infertile or if it's a same-sex act, in either case removing any chance of genetic problems with offspring. Such a law is, as far as the courts have so far indicated, perfectly constitutional. Yet I can think of no easy argument against it unless you rely on beliefs that are either very controversial and often supported by religion or simply feelings of disgust. Arguments against pornography aren't all religious (see the feminist arguments), but we make distributing or producing certain kinds of pornography illegal in part because a lot of people have religious objections to it. (But I should say that this is clearly 1E and not 1D, since almost all religious people who object to pornography would agree with just about the entire feminist case against pornography, despite feminist claims to the contrary.)
In fact, 1E prohibitions occur all the time. Laws against murder or robbery fit into this category. People certainly have religious reasons for thinking such acts are wrong and ought to be given severe penalties. But the arguments for them are widely accepted by religious and non-religious people, and the secularly-accessible arguments are usually present even for religious people.
Coercion of sorts 2 and 3 is a little more commonly thought of as imposing religion, and there are some ways that can occur today in the United States with legal sanction (although only at letters further down the list than one finds in some Islamic countries). You're not going to find 2A or 3A in the U.S. today, but you will find both in Islamic countries. Most debates in the political context of the U.S. about imposing religion aren't even about 2B or 3B. The kinds of things that get labeled as Taliban-like behavior in the U.S. aren't about matters that have purely religious support. They at least make an attempt at rational argumentation. But that's also true of the Islamic laws requiring women to wear veils or prohibiting girls from being educated in any formal way. The supposed rational argumentation in both cases is extremely weak and based on false views of the capabilities of women or false priorities, elevating the concern with provoking male lust to a point where it overcomes eminently reasonable considerations about freedom in how women might dress and conduct themselves in public. Even the most stringent Christian concerns about modesty in women's dress are going to allow for much more freedom than you'll find in many Islamic prohibitions on female dress.
I think most cases I'm aware of on level 2 are actually all the way down to 2E. I'm thinking of laws that prohibit minority religious behavior, such as requiring a photo ID for a driver's license (which some Orthodox Jews and even some Muslims resist), or the Florida requirement that the face in such a photo not be covered too much, which some Muslim women won't comply with. The ban on peyote even in Native American religious ceremonies falls into this category; the Supreme Court, in an opinion by Justice Scalia, upheld applying generally applicable drug laws even to such religious use. Banning certain kinds of political protests that someone might have religious reasons for insisting on, e.g. perhaps an abortion protest of a certain nature, amounts to a 2C imposition.
Level 3C is much more fair game for a lot of issues in the U.S. We don't imprison people for much at level C, but we do incentivize religious charitable giving by giving tax deductions, and we recognize (so far) a privileged position for opposite-sex unions to be called marriage at the federal level and in most states. That gives government sanction to something with some secular arguments behind it but also based on religious motivation for many supporters of that policy, and it has an effect of cultural sanction or respect for certain behavior over other behavior. If we ban a certain religious act but with no criminal penalty other than a fine, that would fall under 3C. There are religious and non-religious arguments for abortion protests that cross the line into illegality punishable by a fine but not by imprisonment.
In the UK and Canada in the last couple years, pastors have been carted off to prison for preaching that same-sex sexual acts are immoral. This isn't quite an expectation of holding a certain view, but it's prohibiting the speaking of such a view: a level 1 prohibition of level 6 behavior. Americans rightly deride such policies as contrary to the value of debate as a basic, fundamental component of civil society. Speech codes that prohibit even stating your religious views if such views are considered offensive to someone, while indisputably unconstitutional in the United States, somehow manage to appear at most universities anyway. Even 4A is uncontroversially protected speech under the First Amendment, unless it rises to the level of actually provoking people to a fight or of the panic that would result from yelling "fire" in a crowded theater. Yet I've encountered a number of people who have considered it a clear case of immorally imposing one's religion, as if trying to persuade someone of a view you happen to find true is somehow wrong. Some take it to a further extreme, considering even the reporting of your view to be inappropriate when it's a controversial view that some might find offensive. Merely indicating that one believes Jews who don't accept Christ as the Messiah will go to hell would, to some people's mind, count as imposing one's religion in an immoral way. I find such an analysis so unhealthy that I almost consider it undeserving of a reply. But if pressed I would insist on the value of philosophical debate, the importance of understanding those who disagree with you, and the moral importance to certain religions of attempting to win people over to something they consider very urgent for all humanity, which prevents them from remaining silent if they're taking their own religion seriously.
What's the moral of the story? Mostly what motivated me to work through all this is that I think we should be wary of anyone who makes blanket statements about imposing religion, whether moral statements or simply factual claims that it has happened. It should be pretty clear from all this that it's never clear what people mean by that unless they specify, and the debate that might ensue once they do specify is probably worth having. Most people who make such comments haven't thought them through and could benefit from some effort to explore precisely what they mean. The term "imposing religion" is at this point so unhelpful as to be worth avoiding whenever we can, and in its place let's clarify the particular elements that we're concerned about, since the different items in both lists above certainly do involve different moral considerations.
I've been trying to put Norman Geisler's normative theory on the map of positions I'm aware of, because I think he makes a genuine contribution to the field, and he's been pretty much on the sidelines in terms of ethical theory given that he's only published with Christian publishers for Christian audiences. He calls his view Graded Absolutism, which I think is a misleading term (and arguably a misapplication of the term, depending on how he means it).
Here are four views along a spectrum:
1. Consequentialism (Jeremy Bentham, John Stuart Mill, G.E. Moore) -- consequences are the only determinant of whether an act is right or wrong; genuinely moral principles never conflict, because there is only one -- to seek the best consequences (but much of the work is determined by what counts as the best consequences, with utilitarians focusing only on pleasure and pain and more comprehensive consequentialists including many other consequences)
2. Rossian deontology (W.D. Ross) -- several moral principles are relevant, and consequences play a role as one of them; different principles take precedence in different situations
3. graded absolutism (Norman Geisler) -- several moral principles are relevant, but not consequences; the same hierarchy of importance exists for these principles no matter the circumstances
4. Kantian deontology (Immanuel Kant) -- moral truths are absolute in the sense that they never have exceptions, no matter how serious the consequences are; moral principles never conflict
Consequentialism and Kantianism are absolute in the sense philosophers usually mean when they use the term about an ethical theory. Moral rules are absolute, and there is never any genuine conflict between them. There is at least one moral principle with no exceptions for consequentialists, because there is only one, and it never has exceptions. For Kant, there are several principles, but he thinks they will never conflict. Deontologists think either that there are more principles that matter than just consequences (as Ross thinks) or that consequences are entirely irrelevant (as Geisler and Kant think). Many deontologists find Kant's view implausible, because there are often cases where moral principles conflict. But they also want there to be moral principles besides just consequences. Ross and Geisler offer different views on what happens next.
According to Ross, there are sometimes several moral principles that play a role in a given case, and one of them will take precedence in each case. But it's not according to a pre-existing hierarchy. Sometimes the situation will make one principle more appropriate than another, but in a different situation the hierarchy is reversed. Perhaps the principle against lying is more important than the principle of seeking the best consequences when not much is at stake in terms of consequences: a lie might make things a little better in the world, but if it makes little difference to your interests or the interests of others, the prohibition on lying takes precedence. But in a case where hundreds of lives are at stake, the principle against lying becomes less significant than the principle of promoting the good of others (which is a consequence). When I teach ethical theory, I teach consequentialism and Kant and then present Ross as a moderating position, taking aspects of each but rejecting other aspects of each. Geisler seems to have found a different moderating position along this spectrum, one that's closer to Kant in two respects than Ross's view is.
One Kantian element Geisler wants to retain that Ross rejects is in not counting consequences at all. There might be cases where lying is all right, according to Geisler, if a more important moral principle is at stake. But that principle won't be framed in terms of consequences, and how serious the consequences are plays no role in the moral status of the action. (On this point, I side clearly with Ross. Of course consequences can play a role in determining how good or bad an action is, even if they are not always decisive.)
Second, Ross thinks which principle is more important will vary from situation to situation. Geisler doesn't like that. He wants a rigid hierarchy that is the same in every case. The only thing that determines which moral principle applies is which ones are relevant, and then you go with the highest one in the list that's relevant. This is in fact why Geisler misleadingly calls his view absolutist and why he would not think Ross's view is absolutist. What is absolute is the structure of the moral hierarchy. That never has exceptions and doesn't vary from situation to situation. But only the very top moral principle is absolute, strictly speaking, because the others all allow for exceptions. So it's not absolutist about most moral principles, like Kant's view, just about the top one and about the relative positions of all the moral principles in the hierarchy. Most ethicists who speak of absolutism are thinking in terms of whether moral principles in general are absolute, and Geisler's view would say no to that. But if absolutism is the view that at least one moral principle is absolute, then Geisler would agree with that. The top moral principle in the hierarchy is absolute.
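Since Geisler's view amounts to a decision procedure (take the relevant principles, apply the fixed hierarchy, follow the highest-ranked one), it can be contrasted with Ross's view in a toy sketch. The particular principles and their ordering below are hypothetical illustrations of mine, not Geisler's own list:

```python
# Toy contrast between Geisler's graded absolutism and Ross's view.
# The principles and the ordering are illustrative placeholders only.

# Geisler: one fixed hierarchy, identical in every situation.
GEISLER_HIERARCHY = ["obey God", "preserve life", "keep promises", "tell the truth"]

def geisler_verdict(relevant):
    """Return the governing principle: the highest-ranked one that is
    relevant in this case. The hierarchy itself never changes."""
    for principle in GEISLER_HIERARCHY:
        if principle in relevant:
            return principle
    return None  # no moral principle bears on the case

# Ross: no fixed global hierarchy; the situation itself determines which
# relevant duty is most stringent, so the ranking is an input, not a constant.
def ross_verdict(situation_ranking):
    """Return the duty most stringent in *this* situation (first in the
    situation-specific ranking supplied by the circumstances)."""
    return situation_ranking[0] if situation_ranking else None

# Hiding refugees from a murderer: truth-telling and life-preservation
# both apply; Geisler's fixed hierarchy puts preserving life higher.
print(geisler_verdict({"tell the truth", "preserve life"}))  # preserve life
```

The structural point is that for Geisler the ranking is a constant baked into the theory, while for Ross it is a fresh judgment in each situation, which is why his view resists being captured in a fixed list at all.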
I want to distinguish both of these moderating positions from a number of views that they get confused with fairly easily. First, there's situational ethics. Situational ethics is itself often confused with relativism. Situational ethics in reality is a consequentialist position that takes love to be the only important consequence. It is not relativism, and neither is consequentialism in general or utilitarianism in particular, despite all these views sometimes being called relativism.
The views most commonly called moral relativism are meta-ethical views about the nature of moral language. They find ways to account for moral language without there being objective moral truths. Subjectivism says what's right is just whatever the individual person considers right. Cultural relativism says what's right is whatever your culture says is right. Emotivism says there are no truths or falsehoods about right and wrong, and attempts to say something is right or wrong are more like expressing your approval (they mean, roughly, things like "Hooray for helping people out!" and "Boo! Abortion!" but don't express any content that can be true or false). There are other variations, but what all these views have in common is that there is no truth or falsity of moral statements except, possibly, to express truths about the person making the statement or about that person's culture.
Sometimes an incoherent view common among college students is called relativism. This view is basically an inconsistent combination of one of the above meta-ethical views (usually subjectivism or cultural relativism, or an inconsistent adoption of both) with the moral absolute that we ought not to criticize other people's moral views or other cultures' moral views. I don't consider that a genuine view, just a confusion and an attempt to combine incompatible claims.
But the views I'm talking about here are very different from what's usually called relativism. They are not situational ethics, because they are not consequentialist, and situational ethics is a consequentialist theory involving love as the only important consequence. They are not meta-ethical relativism. The meta-ethical position they endorse is objectivism. The moral principles for Ross and Geisler are objectively true. It's just that sometimes one principle is more important than another (for Ross) or some principles are always more important than certain others (for Geisler). In both cases, the facts that determine which principles are relevant in a case are objective. For Geisler, the hierarchy of principles remains constant across situations. For Ross, it doesn't. But even for Ross there are objective facts about the situation that make certain principles more appropriate for that situation than other principles. Contrast this with views on which your psychological makeup, moral views, or cultural background is what determines which principles are important. Ross's and Geisler's positions are not relativism but genuinely objectivist moral theories.
I've been covering pacifism, just war, suicide, euthanasia, cloning, abortion, and capital punishment in my classes, and I've been thinking a lot about the "playing God" argument that arises in all these issues. It also plays a major role in arguments against contraception, which Wink and I treated not too long ago. What exactly is this argument supposed to amount to? The one underlying feature common to the different versions I can think of is that God has given us certain responsibilities but has reserved certain prerogatives for himself, and it's playing God to take those on ourselves. But which things would those be, and why those things? The different realms God is said to have exclusive rights over have included anything involving when someone might die or come into being, any way of affecting the characteristics of someone as they come into being, and other issues related to life and death.

A helpful analogy, though, is to consider groups like the Amish who make this argument not just about life and death but about many of the ways in which we live our lives. They apply it to certain kinds of technology, though I've never been able to find a consistent standard behind their choices of which kinds of technology to use and which not to use. Knitting needles and computers are equally human-developed technology. But those of a more moderate persuasion who still give such an argument seem to me to limit it to these life-and-death issues and to using technology to modify something seen as fundamental to God's prerogative in giving and taking life (and in determining what form such life will take, which is why cloning and genetic engineering fall under this heading).
Jeremy Pierce is a philosophy professor, Uber/Lyft driver, and father of five.