Earlier this week the NYT published a critique of Genesis based on, of all things, the appearance of camels within its narratives. I'm starting to see more and more discussion of it, virtually all of it simply repeating the claims of that article, with little careful reflection on the problems in the broader thesis it puts forward, a thesis I don't think the evidence actually supports.
This isn't actually a very new objection. Scholars have long noted that there isn't much evidence of domesticated camels within the Canaanite region during that time. But there is evidence of domesticated camels in Egypt and Mesopotamia during the period Genesis describes, and the NYT article even mentions that, noting that they were more commonly used by nomadic peoples living in the more desert regions. The only thing new here is some carbon dating of camel bones, along with techniques for measuring properties of the bone, which can help determine whether the animals were wild or domesticated, since domesticated camels had to carry greater weight for much of the time.
I think there are several reasons to be very skeptical of the conclusions the NYT article draws. Here are a few:
1. Genesis doesn't report lots of camels being used during the time of the patriarchs, as the article claims. They are sometimes listed among the animals the patriarchs owned, but usually in small numbers, and the only reports of their being used for riding involve crossing the desert regions or nomadic peoples like the Midianites who lived within such regions.
2. Abraham and Lot had to cross that desert to get to Canaan, and the only animals they could have used would have been camels. The NYT article even says that no other animal could make that journey so easily, so even its skepticism doesn't apply to that sort of trip. If Abraham came from a region where camels were used regularly at this time (as the article admits), and he had to use them to cross the desert (as the article admits), it stands to reason that he wouldn't have killed them all when he got there. He would have had at least a small number remaining when he had to send his servant to find a wife for Isaac, and so on. We also know the family kept their own cultural identity, and they may have been hesitant to trade their camels, given their relatively small number and the inability to procure more while there. The herd might have grown during his time in Canaan, but there would still have been only a relatively small number of camels in this period, belonging precisely to his family. That gives us no reason to expect evidence of the much larger number the NYT article seems to assume there would be if they had them.
3. Abraham is portrayed as being rich, and the existence of a small number of camels in the lists of animals he owned is presented in the book as evidence of his wealth. If they were common around him, the small number of camels would seem insignificant compared with the huge number of other animals he had. But even a smaller number is presented as evidence of his great wealth. So the portrayal of his camels in the book fits nicely with the claim that the locals didn't have them.
4. If his family only used them when traveling across the desert or on long journeys (as the narrative itself indicates), keeping them as domesticated animals but not as regular pack or riding animals, then even the camels they did have might not appear domesticated by the bone-density measurements and similar methods these scientists have been using.
5. So I think at best the conclusion being put forward here goes way beyond the evidence. If someone were to conclude from the Genesis narrative that camels were being used throughout the Canaanite region the way the article assumes the book presents things, then it would create a problem. It's still an argument from silence, but it would be odd for there to be no preserved camels from this period if they were that commonly used. But the Genesis narrative doesn't present such a picture, and there's no reason to think the picture it does present is unlikely to have produced the (lack of) evidence this new research provides.
The imminent ban on 40-watt and 60-watt incandescent light bulbs is going to impose a significant cost on our household. This is an interesting case of a somewhat bipartisan attempt to save energy while imposing what legislators took to be only a small cost on most households. But it is a cost, and it's a cost that poorer households will bear more heavily. Like New York's recent bottle bill, which adds 5 cents to the cost of a larger variety of bottles, it will burden people with lower incomes more if they continue to buy the affected products, while more affluent households will hardly notice the increased cost. Our household, however, will be much more burdened by this than most.
The alternatives to incandescent bulbs don't seem to me to be genuine alternatives for our household. For flashlights, LEDs really are the best you can get. LED flashlights fail only when the flashlight itself fails; it's never the bulbs that are the problem, and the batteries should last a very long time unless you leave them on all the time or never turn the flashlight on (in which case the batteries will corrode). But LED bulbs for ordinary household lights are still very expensive. The prices I'm finding online are something like $10 per bulb. That might be fine if they last forever and never need to be replaced, and the energy savings might also help make up for it, but that's for a household where you won't need to replace them except when they fail on their own. We have a child who actively seeks to smash light bulbs whenever people forget to turn the lights on when he's home, or whenever we let our attention turn to anything but him, allowing him to climb on something to reach them. I think we lose a light bulb or two every week, and we can't be spending $10 per bulb at that sort of replacement rate.
Compact fluorescents are not a viable alternative either, for two reasons. Fluorescent bulbs do last longer than incandescent bulbs if you simply measure how many hours they can be left on before breaking, but that's not how most people use them. For businesses that leave the lights on for long stretches of time, they make sense. But if you turn them on and off regularly, they break far, far sooner than incandescent bulbs. They often don't last more than a few months with the kind of use they get in our house. I've seen them last a day or two more than once. They might save energy if you're willing to eat the cost of constantly replacing them, but they're not cost-effective unless you keep them on all day. This is not easy if you have been conscientious enough to develop a muscle-memory habit of turning the lights off when you leave the room, and it's next to impossible if you have children who will turn lights on and off all the time. I have to remind myself constantly not to turn the lights off in my office at work and in the classrooms I teach in, because it will cost the college too much money to keep turning them off and on again and replacing the bulbs regularly. The bulbs in our office are constantly in need of replacement, because people often turn them off when they leave the room, either not knowing of this problem or not thinking about it when they leave. And those are adults. There's really no way to control for what small children or children with autism will do with lights, and we've got both.
Even worse is the health hazard given the amount of mercury inside compact fluorescent bulbs. It's not a huge amount of mercury in a given bulb. It's about the size of a period in standard-size type. But even that amount is not a good idea to have around small children, and the EPA's recommended precautions for cleaning them up are simply not possible in our household. When you add in an autistic child who goes out of his way to unscrew them and smash them on the floor, it's simply not viable to have them in any bulbs he can either reach or stand on something to reach, which means none except in lights with closed cases.
Fortunately, the law doesn't ban incandescents altogether, just ones below a certain energy efficiency. The market provided a solution in the first phase of the ban: the light bulb industry managed to produce some 100-watt and 75-watt bulbs that met the standards the first phase imposed, and we've been buying those bulbs (and will have to buy exclusively those bulbs until the industry produces similarly more-efficient 60-watt and 40-watt bulbs). We're not actually going to see incandescent bulbs disappear. We'll just see more expensive ones. This is an expense we'll have to absorb without seeing as much benefit as most households would, since our bulbs will have a shorter life than in most households. But it seems to me to be the best alternative for us.
Thabiti Anyabwile has come under a lot of criticism from many quarters for his recent post on the gag reflex and Christian opposition to same-sex sexual acts, increasingly called "homosex" of late. [I'm still getting used to that word, because it still feels like an adjective to me (one without its proper ending), but it's a useful word compared with writing out something like "engaging in same-sex sexual activity," so I will use it.]
He has just posted a followup responding to some of the criticisms as well.
As I see it, there are several issues going on here, and I don't think all the participants in the conversation are keeping them straight. There are a number of ways his argument is being misrepresented (and then made fun of in pretty vile ways as a result), but there are also some genuine philosophical difficulties with some of the things he's saying, and I'm not entirely sure I agree with some of the key points. Even so, some of the things he's being unfairly mocked for by a lot of the opposition seem to me largely correct and even relatively obvious, things I'm not sure many people would really want to rid their ethical theorizing of if they thought their views through more carefully. If I'm right about that, maybe they should refrain from making fun of them. I want to work my way to that gradually, however, with a bit of a review of some of the key philosophical moves that have been made about the connection between morality and emotion.
1. Ethics and Emotion
I'm not interested first in the application to homosex, although I will say a few things about that later on. I'm primarily interested in the general strategy of ethical reasoning that involves paying heed to emotions like disgust. A good friend of mine complained on Twitter about the arguments found in the original post, arguing that if we allow disgust to guide our ethical judgments it would mean racists' disgust for racial interaction could generate moral principles against interracial marriage (or more particularly against interracial sex). If disgust shows us anything at all about genuine moral principles, the argument goes, then we have to follow our disgust whatever it leads us to loathe. And people can loathe all sorts of things, in ways that don't at all track genuine moral principles. So we shouldn't rely on our disgust to show us anything about morality.
I think this argument is a mistake. The fact that disgust can be directed against things that are not wrong does not show us that disgust isn't ever a guide to morality. All it shows us is that disgust can be fallible. It can sometimes be directed against things that are not morally wrong. But the same is true of emotionless reason. Emotionless reason presumably led Immanuel Kant to say that lying is always wrong. However, it also has presumably led other philosophers to say that lying, while usually wrong, is sometimes the morally right thing to do. If emotionless reason can generate both principles, then obviously it's fallible. But that doesn't mean it never helps us end up with correct moral principles. It just means it's fallible. It sometimes gets things wrong. We can't trust it 100%. But only a radical skeptic (or someone who grants the radical skeptic far too much, as Rene Descartes did) would claim that a source of information is worthless just because it's not 100% reliable. So I don't think we can rule out a connection between emotion and morality so quickly.
As it happens, recent work in feminist ethics has drawn a lot of attention to attempts to separate emotion from ethical reasoning that have led to a bias against ways of moral reasoning that have tended to be more paradigmatic of women than of men. This bias has had the effect of marginalizing women's ethical reasoning, to the detriment of our overall ethical reasoning. Alison Jaggar has argued that much of the history of ethical theory, which happens to have been done mostly by men, has either treated emotion as something completely isolated from ethical reasoning (as Kant did; emotion cannot be trusted, and the only way to get ethical understanding is to reason in a way that doesn't involve emotion) or as the foundation of all our ethics but a foundation that has no basis in any ethical truth (as David Hume did; there is no ethical truth, because ethics is pure emotion and not reasoned).
Thankfully, Jaggar is wrong about the history of philosophy. Sometimes that's because she misinterprets particular philosophers, as with her reading of the Stoics as being opposed to all emotion, a reading she can be forgiven for, because, well, they do actually say that. But philosophers are often bad reporters of their own views, and it turns out it's not feelings that the Stoics think we should rid ourselves of. It's bad reasoning, which is how they define emotion. There are plenty of feelings, according to the Stoics, that are perfectly fine to have as long as they're compatible with reasoning well. Certainly the Stoics emphasize reason and say they oppose emotion, but what they oppose isn't what we normally call emotion. The Stoic view on emotion is perfectly compatible with taking what most of us call emotions to be very important for ethics. In fact, having the right feelings, ones compatible with reason, is even crucial for the Stoics. They just won't call those feelings emotions.
Jaggar also seems to me to underemphasize the ways historical philosophers put a good deal of effort into organizing their ethical theories around emotions. Plato considered it extremely important for the best possible life that your emotions be engaged in appreciating goodness itself. Aristotle explained some of the most important virtues as simply the tendency to respond to your circumstances with the right level of emotional response. Augustine's entire account of virtue makes it emotional: virtue is having well-ordered love, whereby you love the best things the most and the less-good things less fully. I myself think all three of them were largely right about these things. Ethics is very much tied up with emotion, and attempts to separate ethics from emotion the way Hume and Kant did are, to my thinking, disastrous.
But several questions remain. It's one thing to say that ethics involves having the right emotions. It's another to say that our emotions are, even sometimes, a good guide to the right ethical principles. We certainly can't just read our ethics off whatever emotions we happen to have. There are plenty of times when my emotional response isn't proportional to an offense that's committed, and I either overreact or underestimate a wrong that's taken place. Or I might not be properly placed to experience the good in something and not be as able to rejoice as I should at some good. There are lots of cases where our emotional judgments are a little off, and there are enough cases, such as with the racist example above, where they are drastically off. Indeed, a Christian who believes in the doctrine of the fall should be the first to recognize that, and that was even crucial for Augustine's ethical theory. Our emotions are often not directed in ways that remotely match up with what's truly good.
2. Ethics, Disgust, and Moral Reasoning
But that doesn't mean there's no role for disgust to play in helping us to see certain ethical truths. Jaggar's feminist treatment of this subject is a good example. She argues that women, having been oppressed for the entirety of recorded history by being told that their emotions are wrong when those emotions contradict how they're being treated, are nevertheless right to pay heed to those emotions, because those emotions are genuine clues to the reality that our socially-constructed narrative otherwise blinds us to. A member of an oppressed group might have absorbed the narrative that they, as unintelligent slaves, have no rights and need those guiding society along to make their decisions for them, yet their emotions tell them that the views they've officially adopted on the level of conscious reason are somehow wrong. This can be so for any oppressed or marginalized group, not just women, but she picks out women because women have been told (less so in outright words in recent years, though society still conditions people in this direction) that they are emotional rather than reasoning beings, that their emotions are less trustworthy than the reasoning that's been identified as paradigmatic of men. I don't agree with everything Jaggar says along these lines, but quite a lot of it strikes me as right about the history of how women have been viewed and about some of the ways we (men and women today) are still conditioned to view each other and ourselves.
So if Jaggar is right, then there are at least some contexts in which emotions will be even a better guide to truth than the more emotionless reasoning that can easily be simply the reflex of our socially-conditioned environment, our lip service to the biases of our day. Now emotions can do that, too, as evidenced by racist disgust at interracial sex, for example. But all Jaggar is claiming is that sometimes emotions can be a better guide to moral truth than whatever process underlies what we're conditioned to call emotionless reason. And that seems to me to be absolutely right.
Even more, I think there are cases where we can show that our emotion adds something to moral reasoning that you simply cannot get from emotionless reasoning. A friend of mine who works in aesthetics once gave a case that seems to me to indicate this pretty nicely. Suppose you're eating a kidney and are a little disgusted by it. This is not moral disgust at all. You just ended up in a situation where you're expected to eat something you don't like the taste of, and you find it a bit disgusting. But after you've been eating it for a few minutes, you discover that it's a human kidney. Suddenly your level of disgust goes way up. That's not from the taste, which didn't change, or from any emotionless reason, because emotionless reason has no emotion and thus by itself wouldn't increase your disgust. Rather, your level of disgust increases because of some moral principle lying behind the disgust, one that would easily stand up to rational examination: eating humans is morally worse than eating a kidney from some other animal. It should disgust us, and it does. We should feel greater disgust at eating humans, if we're morally healthy. That doesn't mean it follows that eating humans is always wrong. It's compatible with this disgust that eating humans who died independently of our actions is, in a case of survival, morally allowable. Yet there does seem to be a moral principle lying behind the disgust, one that very few people would question, and it's hard to argue that the disgust isn't a sign of that moral truth. The disgust signifies that truth. Its continuation from generation to generation helps maintain our resistance to cannibalism, and we should be glad for that.
(I should note that this example is a lot like C.S. Lewis' example of finding out that you're eating a deer that was a talking deer in The Silver Chair. The difference, there, however, is that those eating the deer didn't have disgust at all until they found out it was a talking deer. Here there's already disgust at eating the kidney, but it takes on a whole new level of disgust when you learn that it's a human kidney.)
I recently rewatched the 1975 Doctor Who episode "Genesis of the Daleks" by Terry Nation. Some online discussions I looked at about "Genesis of the Daleks" made some interesting, and to my mind obviously false, claims about how it fits (or doesn't) into the overall canonical fictional world of Doctor Who.
One claim in particular that caught my interest was the accusation that Terry Nation, in giving the origin of the Daleks in this serial, contradicted some of his earlier Doctor Who episodes about them. One discussion pointed out that Nation had made an effort not to contradict his first serial, "The Daleks" from 1963, where he establishes the Daleks as creations of a race called the Dals in their war against the Thals. The supposed contradiction comes in "Genesis of the Daleks", where Nation actually shows us this war between the Thals and the race that created the Daleks, and the creator race is not called the Dals but the Kaleds.
Here's my problem: this is not a contradiction. A contradiction takes the form 'P and not-P'. There is nothing of that form here. What you do have is:
1. The race who created the Daleks at the time of the Daleks' creation called themselves the Kaleds.
2. The Thals also called them the Kaleds at that time.
3. At a much later time, probably many centuries later, after an apocalyptic destruction of all civilization and a loss of a good deal of accurate information about the details of that earlier time, someone speaks of the race that created the Daleks as the Dals.
I'm sorry, but I'm not seeing how any of that makes for an inconsistency. Even if we were sure the person telling us they were called the Dals was speaking the truth, it would be difficult to get a contradiction, because it's possible they came to be called the Dals at some time after "Genesis of the Daleks", or that they were called that at some earlier time and the name came back into common use after the apocalypse. But we can't even be sure the Thal telling us this has the right information. Maybe the wrong name is simply what got preserved. There are quite a number of things that could explain how 1-3 might all be true. Terry Nation simply did not contradict his earlier Dalek stories. What he did was use a different name without explaining why different names were used at those two different times, but that's not a contradiction.
I think there's a certain personality type that just likes to find contradictions in everything. A lot of fan criticism of science fiction and fantasy stories exhibits similar problems to the one I've been discussing here. I could point out lots of other examples. That doesn't mean there aren't legitimate criticisms to level against authors. I've criticized J.K. Rowling in print about her concept of changing the past in the third Harry Potter novel, although I did so after pointing out some rather implausible ways of making the story work to avoid the problem I raised. The implausibility there would involve reliable narrators who would know better telling untruths, however, which is more of a stretch than someone centuries after an apocalyptic event getting a name of an extinct civilization wrong or the possibility that the group was actually called by two different names.
How you evaluate such attempts to make canonical worlds coherent in part does depend on how plausible the explanation might be to avoid the contradiction. It's nice for fictional worlds to be coherent. Sometimes that's impossible. Sometimes it involves an implausibility but is possible. And sometimes it's not all that implausible if you just think a little harder to see how things might fit together, when at first they seem not to.
It's hard not to think of critics who like to find contradictions in the Bible when I look at these stories. There are some genuine difficulties in fitting together some parts of the Bible. I've never seen one that guarantees a contradiction, especially when you take into account that inerrantists don't take the current manuscripts to be inerrant but allow for errors in transcription from manuscript to manuscript. But I have seen places where it's not easy to come up with one highly plausible explanation that shows for sure why the apparent contradiction is not a real one. In most of them, there have been several explanations, none standing out as the most plausible, and most of them involving something somewhat unlikely but possible. There's none I know of where I would judge all the explanations so implausible as to require rational evaluators to conclude that it has to involve two contradictory statements that can't be resolved. But I'm coming from an epistemological standpoint where I think the prior plausibility is relatively high. I think I have good reasons for taking the Bible as it presents itself, as God's word, and it follows from that that it's more likely that there is a solution, even if I don't know what it is, than that there isn't. So I'm going to take the less-plausible-sounding accounts as less certain, but I'm going to be more likely to think that one of them is probably true.
That's one difference with fictional worlds. I don't believe there even are Daleks or Time Lords, never mind that the entire Doctor Who canon is consistent. (I think it certainly isn't coherent when it comes to fundamental questions of time travel, for example.) But someone who thinks God is real and is basically the way God is presented in the Bible is going to place a higher prior probability on there being some resolution to a proposed contradiction than someone who has no prior trust in those documents. And I would argue that someone doing this is right to do so if the prior probability is based on a good epistemic state to begin with. And that makes accepting truth in texts that are hard to fit together much easier to do (and not in a way that undermines rationality, assuming the prior probability itself has a rational grounding).
That assumption of prior probability, of course, is one of the fundamental disputes to begin with, but you can't just assume at the outset that someone who is more willing to trust a set of scriptures is wrong to do so, and pointing to potential contradictions isn't necessarily going to turn the tide of the conversation unless you first undermine the prior probability. Supposed but not actual contradictions, even if they are difficult to put together, are therefore very weak evidence against the coherence of a worldview when the person who holds that worldview is more sure of it than they are of the irresolvability of the supposed contradiction. That means people coming from very different standpoints will evaluate the supposed contradictions very differently, and from within their worldview each seems to themselves to be right in how they do that. That's something I think not enough people on either side of such debates can see.
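The role a prior plays here can be sketched with a toy Bayesian calculation. The function name and every number below are purely illustrative assumptions of mine, not drawn from any actual analysis; the point is only the shape of the update, namely that the very same evidence moves a high-prior evaluator and a low-prior evaluator to very different conclusions.

```python
def posterior_reliable(prior, p_puzzle_if_reliable, p_puzzle_if_unreliable):
    """Bayes' rule: confidence that a text is reliable after finding an
    apparent contradiction that admits only strained resolutions."""
    numerator = p_puzzle_if_reliable * prior
    denominator = numerator + p_puzzle_if_unreliable * (1 - prior)
    return numerator / denominator

# Illustrative numbers: suppose such a puzzle turns up 10% of the time
# in reliable texts and 50% of the time in unreliable ones.
trusting = posterior_reliable(0.95, 0.10, 0.50)   # starts with high trust
skeptical = posterior_reliable(0.20, 0.10, 0.50)  # starts with low trust

print(round(trusting, 2))   # 0.79 -- still expects a resolution
print(round(skeptical, 2))  # 0.05 -- now nearly sure there is none
```

On these made-up numbers, both parties update in the same direction, but the trusting evaluator still rationally expects a resolution while the skeptical one rationally expects none, which is just the point about evaluating supposed contradictions from within different standpoints.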
In Thabiti Anyabwile's response to the George Zimmerman verdict yesterday, he made some comments about his ongoing position on the unreality of race, which I've tried to engage with him on before. I'm not surprised he wasn't interested in continuing that conversation on that post, but he did chime in to appreciate the conversation that arose between me and another commenter there. It's very different to engage with this issue on a popular level, as compared with the more technical philosophical engagement with this issue that I've spent much of the last decade of my life working on. It's also different to engage with particularly Christian arguments, which obviously don't arise very often among critical philosophers of race. I thought some of what I wrote in the conversation might be worth preserving here, so here are some excerpts. If you want to read the entire conversation, you can see my initial comment here and then the beginning of the conversation with another commenter here. Perhaps this can give a taste of my forthcoming book on this topic to those who have been asking about it (which I'm trying to finish revising this summer, with the hope of a publication date by the end of the year if I succeed).
Here are the excerpts I wanted to preserve, first from my initial comment:
Seven years ago I wrote a post explaining why I think a common theory among biblical scholars is both against our best evidence and unnecessary in order to explain a few puzzling features of the texts we have. The puzzling features are as follows:
When Aaron dies in the Torah, it says his son Eleazar takes over the high priestly position, and then Eleazar's son Phinehas inherits that role when Eleazar dies. Yet the line of Eleazar does not seem to maintain that position by the time of Samuel. Eli seems to be occupying a high priestly role, and he's descended from Eleazar's brother Ithamar. Yet the biblical texts do report of the line of Eleazar being preserved, notably in a man named Zadok, whom David seems to elevate to a high priestly role of seemingly equal authority with a continuing high priestly descendant of Ithamar. It's only when that man betrays David that we seem to have a return to one high priest.
The common scholarly theory takes the texts to be unreliable reports of events. There's no direct evidence that Zadok was anything other than a Levite descended from Aaron through Eleazar. There's no direct evidence that the Eleazar line was invented wholesale in the Torah in order to retcon Zadok as a more legitimate priest than Ithamar's by-then-disgraced descendants who had sided with the coup against David. Yet the suspicion, arising from the puzzling facts of the previous paragraph, has somehow become unquestioned and is even presented as obvious by a lot of biblical scholars, when there are several other explanations of why the text reports what it does, none of them less likely to my mind than the suspicious one. I gave two in that post seven years ago, and a third occurred to me this morning.
One possibility from the previous post is that the descendants of Eleazar had forsaken their responsibilities during the time of the Judges, which is entirely fitting with how Israel is described during that time, and the descendants of Ithamar were left to run the operation of the tabernacle and early pre-Solomonic temple structures (like the one we see in the early chapters of I Samuel).
The other possibility from the earlier post is just a decentralization of worship, not really being faithful to the tabernacle set up in the Torah (which would also fit with what the book of Judges tells us of that period). In this second case, Phinehas' descendants might still have been operating as priests, and indeed may even have considered themselves high priests, but other priests were operating in other locations, contrary to Torah specifications, and in each location someone was functioning like a high priest for that location.
The third explanation that occurred to me this morning is that there was a pattern for selecting the high priest that didn't consistently follow our expected rules of succession. Perhaps Phinehas' selection as high priest to succeed Eleazar has wrongly suggested to us that the office would always pass from father to eldest living son. But perhaps instead the rule was eldest living male descendant of Aaron. If Ithamar died before Eleazar, then Phinehas might well have been the oldest male descendant at that time, as the eldest son of the eldest son of Aaron who had children (the oldest two seem to have died without children, or else their entire lines were disqualified for their fathers' sins). But the next high priest might have been a younger brother of Phinehas or an uncle or cousin from the Ithamar line. And this need not have been a rule adopted at the outset. It could even have been a modification implemented later on, whether legitimately or not. The Torah doesn't ever specify, from what I can remember, how the high priest was to be chosen. It might have been by Urim and Thummim or something, in which case the high priest could even be the youngest priest of age.
I was thinking last night about the new show Once Upon a Time, and it occurred to me that it might provide a really good illustration of the difference between externalism and internalism in epistemology. (I haven't seen last night's episode yet, so please no one spoil it for me.)
Internalism holds that what justifies our beliefs or makes them rational or what grounds our knowledge must be something internal to our thinking, in other words something where the reasons why it is justified, rational, or grounded are accessible to our conscious thought. We have to be able to see why our beliefs are grounded for those beliefs to be grounded. We have to be aware of what makes it a good belief for it to be a good belief. It wouldn't be enough to have reliable belief-forming mechanisms (such as senses that reliably give me the right information).
Externalism holds that there might be things that make our beliefs justified or rational, or that ground our knowledge, which are not accessible to our conscious thought. We don't have to be aware of what justifies us in thinking something for it to be a justified belief. For it to be well-grounded knowledge, we don't have to know that our knowledge is grounded in reliable practices and thus why it is well-grounded knowledge. It just has to be grounded in the right sort of ways.
Perhaps the biggest place of disagreement comes over how to respond to skepticism. If internalism is true, I would have to prove that my senses are reliable for them to ground my knowledge, which of course I can't do, because I might be in a virtual reality for all I can know by internalist standards. There are internalists who would disagree, but a lot of philosophers have concluded that internalism leads hopelessly to skepticism: just having reliable senses isn't enough, and I'd have to be able to prove that they're reliable, which I can't do. But externalism can handle skeptical arguments by pointing out that I can know all sorts of stuff even without being able to prove it. It doesn't mean I can prove I know things. It just means that skeptical arguments fail, because the skeptic has to show that my senses are unreliable to show that I don't know things. With internalism, all the skeptic has to show is that I can't rule out my senses' being unreliable. With externalism, the skeptic has to show that they are in fact unreliable. So the burden of proof on the skeptic is higher with externalism.
Once Upon a Time provides a nice illustration of externalist epistemology. The basic premise of the show is that the Evil Queen has cursed all the characters in the Enchanted Forest by bringing them to a terrible place where there are no happy endings except for her. That terrible place is Storybrooke, Maine, in a world otherwise very much like our current day. The Evil Queen is the mayor. The story shifts back and forth between events in the characters' lives back in the Enchanted Forest and events in their lives now in Storybrooke, where no one is supposed to remember their previous lives except the Evil Queen.
Snow White and Prince Charming are the Evil Queen's primary targets. She wants revenge against Snow White for something we haven't seen yet (at least as of last week's episode). She wants to ensure that they are not together. They have no memory of each other, certainly not of having been married to each other. He was in a coma when the show began, and apparently he had been since the curse began. She has no memory of him. When he awakes from his coma, he has no memory, until the Evil Queen at some point seems to have interfered to give him memories of being married to someone else, someone who turns out to have been engaged to him in the Enchanted Forest before he broke it off to marry Snow White. But when they meet up, they feel such a longing for each other, as if they have always been meant to be together.
Prince Charming tries to rebuild his marriage, but he can't ignore his feelings for Snow White. This woman whom he (falsely) thinks is his wife brings out no current feelings, but he seems to have memories of feelings for her, and he tries to make it work. Technically, he's living in an adulterous relationship with her while thinking his feelings for Snow White are the adulterous ones. But Snow White is really his wife, and some process within him is leading him to think he should be with her. But he has no access to what would be leading him to that. An externalist would say that he has some process within him that he can't understand that's leading him to know that Snow White is the one for him, and his false beliefs about his past do not interfere with that knowledge. An internalist has to say that his most justified beliefs are the false ones.
So suppose there's some reliable process whereby his body's memories of his love for Snow White are leading him to know that she's really the one he's supposed to be with. His resistance to this woman who isn't his wife, whom he believes is his wife, is then grounded in processes that he has no access to. An externalist could say that his belief that he should be with Snow White (whom he knows now by another name, of course) is justified by these processes he's unaware of, and it's bogus to rely on his memories for the belief that he's married to the other woman. An internalist would say that his belief that he is married to the other woman is in fact false but is justified. Which belief is justified, then, depends on which epistemology is correct.
Which view you adopt would seem to have significant moral implications. He's doing something clearly wrong, according to internalism, by having clandestine romantic interactions with Snow White. But what if he has knowledge on some level that can somehow cancel his seeming knowledge (that isn't knowledge at all) that this is adultery? Those are false beliefs, based on false memories. If he doesn't know those things but falsely believes them, and he also knows on some level that Snow White is his true love, is it enough to remove the wrongness of the adultery? Perhaps that's too much, but it does seem to be ethically different in some ways.
I've discovered the need to adopt a new way of speaking about people who are recently descended from Africans. We've learned in the last couple of decades that we ought to emphasize someone's personhood above any other characteristic, and thus it's thoroughly immoral to use any adjective in front of 'person'. We need to use predicate nouns instead. We no longer have sad people, for example. We simply have people with sadness. We no longer have short people. We have people with shortness. We don't want to define people with sadness as if their sadness is more important than their personhood, so we have a moral obligation to put the noun form after the word 'person'. Grammar does always indicate metaphysics, after all.
One sphere of language in which this lesson has never been properly applied is in the area of race. Why are we still talking about black people, for instance? Do we really want to define people solely in terms of their race? Do we really want to signal that their blackness is so central to who they are that we're going to pretend that people with blackness aren't people? If we call them black people, then we are treating their blackness as if it's a greater part of our conception of people with blackness than their personhood is. People with person-firstness have instructed us that we should never put disability-related adjectives in front of a noun or pronoun referring to a person, because we don't want them identified with that condition. But we've also learned from the same people that having a disability is not negative, which means this policy is not because disabilities are bad. Therefore, we ought to apply it to other cases when something is not bad but might wrongly be taken by someone to be bad, just as we would apply it to things that are genuinely bad. If race is not to be a negative, then I am not a white person. I'm a person with whiteness. It does make it a little awkward to speak of people with Asianness or people with Australian-first-people-ness (i.e. what used to be called aboriginalness). But it's worth the awkwardness of expression to avoid any chance of identifying them with the racial or ethnic group whose membership they possess.
Even worse, it's especially pernicious to say that someone is black (or African-American or whatever racial term we might choose). After all, using predicate adjectives amounts to making identity statements rather than merely ascribing a property to someone, the way we would have thought adjectives in English, even predicate adjectives, do. It's far preferable to say that someone has blackness than to say that she is black. People aren't anything except persons. I'm not philosophical. I have philosophicalness. Glenn Beck is not unfair to his political adversaries. He has unfairness to the people who have political adversariness with him. President Obama is not bad at speaking without a teleprompter. He has badness at speaking without a teleprompter. I shouldn't say that I am Christian. I'm a person who has Christianity. I shouldn't be identified with my faith. I should claim, rather, to possess the entirety of Christianity, as if it belongs to me. We need to avoid identifying people with any property ascribed to them other than personhood. It's much better to say that they possess the entirety of the thing that formerly we would have used to describe them.
For more explanation, please see here (except you can ignore the sections explaining how people with blindness and people with deafness have offendedness at the obviously-correct way to refer to them, and you certainly shouldn't read person-with-autism Jim Sinclair's reasons for disliking person-first language).
At least twice in the last few weeks I've come across someone claiming that the U.S. Supreme Court affirmed the one-drop rule in 1986. I was surprised, because shortly before the first time I saw this claim I'd come across someone else saying that the 1967 case Loving v. Virginia, which is best known for overturning Virginia's ban on interracial marriage, also declared the one-drop rule unconstitutional. So I eventually started looking into both claims. It turns out that the first is false, and the second is true. That is, the Supreme Court did overturn one-drop-rule style racial classification laws in 1967, and they did not affirm a one-drop-rule law in 1986.
What Chief Justice Earl Warren's opinion in Loving actually says in the main text is that racial classifications need to be subjected to the most rigid scrutiny, especially if they form the basis of some impact in a criminal proceeding. But this isn't a new judgment. It's a quotation of a previous decision. And it's not clear what the most rigid scrutiny is supposed to be or how it would apply to one-drop rule laws, and he never applies it to such laws. But he points out that the racial classifications used in the Virginia law were instituted specifically to preserve the conception of white purity advocated by the invidious discrimination of 1924 Virginia, which was of a piece with the kind of segregation at odds with the Equal Protection Clause of the 14th Amendment, and that can't stand up to the most rigid scrutiny.
It's not quite clear that this strikes down the classification scheme itself, however, until you get to footnote 11, which says that the racial-classification system of Virginia is "repugnant to the Fourteenth Amendment" (and therefore presumably unconstitutional, although he never explicitly says they're overturning that law too). Since this reasoning supports the overturning of the interracial-marriage ban, and is not some aside on a topic unnecessary for deciding the case, I think it does count as overturning one-drop rule laws, at least any justified on the basis of white supremacy or purity (as I'm sure all actual one-drop rule laws were). But I now understand how it can do that in a way that I didn't really notice before. The real work is done in a footnote.
But the first claim is simply false. What happened in 1985 was a case involving a Louisiana woman who had thought of herself as white all her life and then discovered that her birth certificate listed her parents as colored. Louisiana law, until 1983, had a 1/32 one-drop rule, which counted someone as colored for having one black ancestor out of 32 great-great-great grandparents. Her parents were classified as colored by that law. She herself actually didn't count as black by that law, since it was her great-great-great-great grandmother who was black. But her birth certificate listed her as colored because her parents were listed as colored on theirs. So it wasn't the one-drop rule law that led her to be classified as black on her birth certificate. It was the cultural practice among doctors and midwives of transferring the racial classification of the parents to the child when both parents had the same classification. Her parents had never objected to their classifications, and corrections to a birth certificate apparently had to come from the person named on it, who would have to file a complaint and request a correction.
So the state court concluded that there was no legal justification for forcing the birth certificate office to issue corrected birth certificates. They then said that the repealed 1/32 one-drop rule law was not relevant, because midwives and doctors aren't government employees and thus aren't subject to the 14th Amendment's restrictions on state actors. Finally, they said the one-drop rule laws involved with this did, by their judgment, violate the Constitution, but they were bound by Louisiana Supreme Court precedent on that question. None of their analysis depended on any stance on the one-drop rule law, which was no longer on the books at this time anyway and thus could not be overturned by a court in any direct way. The case apparently got appealed to the Supreme Court in 1986, and they opted not to hear it, but it seems crazy to me to take that as a sign that they would affirm a one-drop-rule law.
There are several different things someone might mean when they speak of imposing religious beliefs on those who don't hold them. There are two different axes to pay attention to. One is what is meant by "imposing", and the other is what is meant by "religion".
On the first axis, what is meant by "imposing", I can think of a number of things in decreasing order of severity:
1. Forcing people with threat of force or imprisonment
2. Coercing people by some manner less severe than force or threat of imprisonment, e.g. by conditioning things most Americans consider rights, or close to it (the right to vote, to drive, to hold an independent job), on compliance
3. Incentivizing by some manner less severe than coercion (e.g. government influencing social acceptance, giving tax credits or deductions, criminal penalties of smaller sort such as a fine)
4. Calling on people to change their mind or behavior, perhaps with strenuous argumentation
5. Explaining one's attitude on the issue
6. Simply stating what one's view happens to be
On the second axis, what is meant by "religion", I can again think of a number of things, in decreasing order of centrality to religion:
A. espousing a statement of faith or unfaith (that they might not actually agree with)
B. engaging in certain behavior that is motivated (on the part of those instituting the policy) merely by religious beliefs and not by any attempt at rational argument
C. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but also by some attempt at rational argument, even if it's not a strong argument
D. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but is supported, by most who hold the view (even if controversially), with rationally-motivated arguments that, while disputed, are at least philosophically driven, whether in addition to or, for some, without the religious motivation
E. engaging in certain behavior that is motivated (on the part of those instituting the policy) in part by religious beliefs but is commonly held by most people, and for most people there is motivation that in their minds is on grounds entirely independent of religion
There are those who insist that even stating one's religious views counts as imposing them in an improper way, never mind preaching them. Fortunately, in the United States even 4A is speech protected by the First Amendment. I'm not about to argue for 1 either, so we're really looking at 2 and 3. In the history of the world, we've certainly seen pseudo-conversions coerced at swordpoint or recantations of religious beliefs at the threat of martyrdom. In comparison with that, the idea that one is imposing one's religion merely by trying to make a case for it seems absurd. It's similar to the War on Christmas people complaining of Christians being persecuted in the United States just because schools refuse to sing Jingle Bells on the ground that the song is tied to a religious holiday. (In my experience, schools nowadays don't reduce Christian content at Christmas but simply include it alongside religious content for other religions' holidays too, so this complaint is getting even more stale than it was when I was younger, when such songs might have been excluded on the strange claim that they're somehow religious.)
We do have some laws that are all the way down to 1E or sometimes 1D, however. For example, same-sex sodomy laws, bans on selling contraceptives, and bans on teaching evolution (all deemed unconstitutional now) were often religiously motivated but did include arguments, often arguments widely accepted at the time, that didn't rely on religious premises. Evolution was claimed not to be as well-supported as its proponents think. Creation science has insisted that evolution is just bad science. This isn't about whether their arguments are good but about what kind of arguments they are. Similarly, bans on same-sex sodomy were justified more by disgust at such acts than any biblical prohibition on them, and the Connecticut ban on selling contraceptives was supported by an argument about population control.
But there remain some laws at level 1E or 1D and some attempts at instituting laws at this level. Sodomy laws have been deemed unconstitutional by the Supreme Court since 2003, but incest laws vary from state to state. It's not criminal in Rhode Island to have sex with a close relative, but you can't marry them unless you're Jewish (to allow for Levirate customs, I assume). In Ohio it's criminal to have sex with your children, but only the parents are criminally liable, even if the children are adults. But in Massachusetts you can get 20 years in prison for having sex with your adult sibling, even if one of the two parties is demonstrably infertile or if it's a same-sex act, in either case removing any chance of genetic problems with offspring. Such a law is, as far as the courts have so far indicated, perfectly constitutional. Yet I can think of no easy argument against it unless you rely on beliefs that are either very controversial and often supported by religion or simply feelings of disgust. Arguments against pornography aren't all religious (see the feminist arguments), but we make distributing or producing certain kinds of pornography illegal in part because a lot of people have religious objections to it. (But I should say that this is clearly 1E and not 1D, since almost all religious people who object to pornography would agree with just about the entire feminist case against pornography, despite feminist claims to the contrary.)
In fact, 1E prohibitions occur all the time. Laws against murder or robbery fit into this category. People certainly have religious reasons for thinking such acts are wrong and ought to be given severe penalties. But the arguments for them are widely accepted by religious and non-religious people, and the secularly-accessible arguments are usually present even for religious people.
Coercion of sorts 2 and 3 is a little more commonly thought of as imposing religion, and there are some ways that can occur today in the United States with legal sanction (although only at letters further down the list than one finds in some Islamic countries). You're not going to find 2A or 3A in the U.S. today, but you will find both in Islamic countries. Most debates in the political context of the U.S. about imposing religion aren't even about 2B or 3B. The kinds of things that get labeled as Taliban-like behavior in the U.S. aren't about matters that have purely religious support. They at least make an attempt at rational argumentation. But that's also true of the Islamic laws requiring women to wear veils or prohibiting girls from being educated in any formal way. The supposed rational argumentation in both cases is extremely weak and based on false views of the capabilities of women or false priorities, elevating the concern with provoking male lust to a point where it overcomes eminently reasonable considerations about freedom in how women might dress and conduct themselves in public. Even the most stringent Christian concerns about modesty in women's dress are going to allow for much more freedom than you'll find in many Islamic prohibitions on female dress.
I think most cases I'm aware of on level 2 are actually all the way down to 2E. I'm thinking of laws that burden minority religious practice, such as requiring a photo ID for a driver's license (which some Orthodox Jews and even some Muslims resist), or the Florida law requiring that the photo on an ID not have the face covered too much (which some Muslim women won't comply with). The attempted ban on peyote even in Native American religious ceremonies would have fallen into this category, though the Supreme Court, in an opinion by Justice Scalia, upheld such generally applicable bans against free-exercise challenges, and it took legislation to protect religious peyote use. Banning certain kinds of political protests that someone might have religious reasons for insisting on, e.g. perhaps an abortion protest of a certain nature, amounts to a 2C imposition.
Level 3C is much fairer game for a lot of issues in the U.S. We don't imprison people for much at level C, but we do incentivize religious charitable giving by giving tax deductions, and we recognize (so far) a privileged position for opposite-sex unions to be called marriage at the federal level and in most states. That gives government sanction for something with some secular arguments but also based on religious motivation for many supporters of that policy, and it has an effect of cultural sanction or respect for certain behavior over other behavior. If we ban a certain religious act but without criminal penalty other than a fine, that would fall under 3C. There are religious and non-religious arguments for abortion protests that cross the line into illegality to a point of a fine but not to the point of imprisonment.
In the UK and Canada in the last couple years, pastors have been carted off to prison for preaching that same-sex sexual acts are immoral. This isn't quite an expectation of having a certain view, but it's prohibiting the speaking of such a view. It's a level 1 prohibition of level 6 behavior. Americans rightly deride such policies as contrary to the value of debate as a basic, fundamental component of civil society. Speech codes that prohibit even stating your religious views if such views are considered offensive to someone, while indisputably unconstitutional in the United States, somehow manage to appear at most universities anyway. Even 4A is uncontroversially protected speech under the First Amendment, unless it takes it to a level of actually provoking people to a fight or to the level of panic that would result from yelling "fire" in a crowded theater. Yet I've encountered a number of people who have considered it a clear case of immorally imposing one's religion, as if trying to persuade someone of a view you happen to find true is somehow wrong. Some take it to a further extreme, considering even the reporting of your view to be inappropriate when it's a controversial view that some might find offensive. Merely indicating that one believes Jews who don't accept Christ as the Messiah will go to hell would, to some people's minds, count as imposing one's religion in an immoral way. I find such an analysis so unhealthy that I almost consider it undeserving of a reply. But if pressed I would insist on the value of philosophical debate, the importance of understanding those who disagree with you, and the moral importance to certain religions of attempting to win people over to something they consider very urgent for all humanity, which prevents them from remaining silent if they're taking their own religion seriously.
What's the moral of the story? Mostly what motivated me to work through all this is that I think we should be wary of anyone who makes blanket statements about imposing religion, whether moral statements or simply factual claims that it has happened. It should be pretty clear from all this that it's never clear what people mean by that unless they specify, and the debate that might ensue once they do specify is probably worth having. Most people who make such comments haven't thought them through and could benefit from some effort to explore precisely what they mean. The term "imposing religion" is at this point so unhelpful as to be worth avoiding whenever we can, and in its place let's clarify the particular elements that we're concerned about, since the different items in both lists above certainly do involve different moral considerations.
Jeremy Pierce is a philosophy professor, Uber/Lyft driver, and father of five.