Some of the early reports about yesterday's report from the Vatican conference on family issues seem to me to betray a serious misunderstanding of Catholic teaching on these issues. In the NPR story I just linked, we see two views being put into contrast that I don't think any Catholic who understands the concepts involved would recognize as being in conflict. On the one hand, Catholics have long taught that homosexuality and same-sex sexual relationships are intrinsically disordered, and Catholics insist on the wrongness of any sexual relations outside marriage. On the other hand, this report speaks of Catholic communities "accepting and valuing their sexual orientation" and "positive aspects to a couple living together without being married". It all depends on the context and what is meant by these expressions, but I see no reason yet to take these in a way that contradicts anything in Catholic teaching.
The crucial element is the concept of intrinsic disordering. If something is intrinsically disordered, it means that the good in the relationship is put together wrongly in some way. It means either something is missing, or the parts are not working together the way they ought to. But the concept of intrinsic disordering requires there to be some good, since disordering means something is less good, not that some positive evil has been introduced. Positive evil is impossible on the Augustinian conception of evil that serves as the basis of the notion of intrinsic disordering.
You can't have something intrinsically disordered that doesn't have some positive good. No positive good means no existence. Intrinsic disordering means a disordering of positive good. That means there is positive good. And that means this change in emphasis isn't a change in doctrine, if all it's saying is that there is some positive good in same-sex relationships and in unmarried couples living together (implying sexual relations).
In particular, you can value all manner of things about a same-sex relationship: you can recognize the good in a couple's self-sacrifice for each other, the good in their parenting of any children they might have, the good in the degree to which they fulfill their desire for companionship, even some level of good in the sexual pleasure they provide each other. You can do that even if you think the relationship itself is immoral and if you think they're seeking the wrong object to fulfill sexual desires and the wrong ways of fulfilling their companionship needs. You couldn't think they are good in every respect, but you have to think there is some good there, or else there would be nothing. That follows from the very notion of intrinsic disordering.
Similarly, the Catholic church holds that there are good things in opposite-sex sexual relationships between unmarried people. Catholic doctrine declares such relationships immoral. There is a difference in that they're not disordered in terms of the object of sexual desire (or at least in terms of the sex of the object of sexual desire). But there's plenty of intrinsic disordering of a different sort in those relationships (e.g. the marital status of the two people, which is an issue to do with the object of one's desire, just not about the person's sex). Most importantly, the person and relationship are placed on a higher level than God, because they refuse to honor God's command to marry before having sex. That is an intrinsic disordering, since it demonstrates that one's desires are not well-ordered, and well-ordered desire is just what virtue is on an Augustinian view. Any sin is an intrinsic disordering, since it involves a disordering within one's desires. That assumes some good in the desiring and in the fulfillment. Otherwise there would be no desiring or fulfillment.
Compare the intrinsic disordering of a shoe fetish. What's disordered about that is that shoes are not an appropriate object of sexual desire. Homosexuality, by contrast, involves a desire for a human being. Human beings are the appropriate objects of human sexual desire in general, even if there is some intrinsic disordering when it involves same-sex desires. That means there's something good about same-sex desire that isn't present for the shoe fetish. It's not clear to me that the Catholic statement is doing anything more than acknowledging things like that. That's compatible with thinking same-sex relationships are intrinsically disordered to the point of being immoral. I think people who don't have a view like the Catholic view will be inclined to think that anyone who thinks homosexuality is intrinsically disordered must think it the height of all evil, with nothing redeemable or good about it, but that's simply not what the view holds. Many who hold the Catholic view might not see this, but there's a difference between how proponents of a view understand it and what the official view is, at least when you're talking about a view held by those who believe their views come from some authoritative source. (The No True Scotsman fallacy is simply not an issue when you have an authoritative person, text, or organization that determines what the official view is. There is a genuine Catholic position, and those who don't hold that view do not hold the Catholic view.)
There may be a different emphasis here, but it's not at odds with thinking the relationship is intrinsically disordered any more than the idea that it's good to support our troops is at odds with being opposed to a particular conflict they've been fighting in. So don't believe anyone claiming that this is a change in Catholic doctrine. It's not a conflict or departure from the concept of intrinsic disordering. It in fact brings to the fore something that follows from the notion of intrinsic disordering. Perhaps that's something that those who believe homosexuality is intrinsically disordered should be emphasizing more. But it's not a new position. It even follows from the idea of intrinsic disordering. Anyone claiming the two are at odds simply doesn't understand what it means to be intrinsically disordered, or they couldn't think that.
Every now and then I come across someone claiming that the word "literally" is now being used as a self-antonym. In other words, it is being used to mean "figuratively". Consider the following sentences:
1. And when he gets into the red zone, he literally explodes. (from a football announcer)
2. [Tom Sawyer] was literally rolling in wealth. (Mark Twain)
3. [Jay Gatsby] literally glowed. (F. Scott Fitzgerald)
4. [A certain Mozart piece was] the acme of first class music as such, literally knocking everything else into a cocked hat. (James Joyce)
As you can see, this isn't that new a phenomenon. It goes back at least a couple hundred years. There seems to be an incredible amount of outrage about it in certain spheres. Vice-President Joe Biden gets made fun of a lot for his excessive use of the term this way. But consider the following sentences:
5. When he gets into the red zone, he really explodes.
6. He was really rolling in wealth.
7. He really glowed.
8. The piece of music was really knocking everything else into a cocked hat.
Those sound perfectly fine. The word "literally" and the word "really" both normally indicate some genuineness to something. Yet both are used in situations where it's not really or literally the way it's being said to be. Both are wrong, if the words are being used literally. But they aren't being used literally. They're being used as intensifiers. He doesn't just glow. He really glows. Saying he literally glows is doing something similar.
What is not going on here is the use of these words as self-antonyms. The seventh sentence above does not mean "He doesn't really glow." That sentence means something very different. Nor does the third sentence mean "He doesn't literally glow." That sentence also conveys something different. These words are being used as intensifiers. Saying "he doesn't literally glow" or "he doesn't really glow" is not intensifying the sentence "he glows". But 3 and 7 are intensifying it. So the word is not being used to mean its opposite, in either case.
The word "literally" is not being used to mean "figuratively". If it were, then we would expect 3 to be synonymous with:
9. He figuratively glowed.
But the two are not synonymous. 3 would not be used if you intended to be talking about the linguistic properties of the word "glowed". A sentence like 9 is commenting on its own language. A sentence like 3 is doing no such thing. Furthermore, 3 has the intensification that 7 has. 9 does not. These sentences are not at all equivalent. If the word "literally" were being used to mean "figuratively" then they would be synonymous. What's actually going on is that the word is being used as an intensifier, the same way the word "really" gets used. That's not at all the same thing as being used to mean "figuratively". I suppose you might say that the word "literally" is being used figuratively. But that's not the same thing as being used to mean "figuratively".
I've several times now run across a new linguistic trend, mostly among a certain brand of academic. When writing about people we would normally call slaves, the new trend is to call them "enslaved people". I assume the reasoning here is that we don't want to define someone by their enslavement, as if it's an identity-forming feature of their existence, and we shouldn't let someone in one of the most oppressive situations be defined by something entirely outside their control that has demeaning connotations. In that way, it reflects some of the concerns of person-first language, which I've usually encountered in the context of disabilities.
[See my critique of person-first language. It's a bit over-the-top, as most satire is. The sense you get from it about what my views must be is not quite what they are. I'm not completely opposed to person-first language, and I even think sometimes it's the best way to go in certain settings. I would say that with small children it's far better to speak that way, whereas with older children and adults it's best to help them understand the categories we in fact use while drawing attention to the ways we illegitimately think about those categories and ways we process them unconsciously and thus denigrate the people we're talking about without always being aware of it.]
But this is different. For one thing, this isn't person-first language. Person-first language would not speak of enslaved people. It would speak of people with enslavement or people encumbered by, trapped by, oppressed by, or otherwise affected by enslavement. Person-first language is so roundabout, awkward, and unworkable that even those tempted to apply it in this case have actually refused to go that far. They will avail themselves of adjectives rather than nouns and use the adjectives to modify the noun 'people' or 'person'. It's grammatically parallel to "deaf people" or "autistic people" rather than "people without hearing" or "people with autism". But it's certainly a step in the direction of person-first language when compared with calling people slaves. The only grammatical equivalent is to speak of the deaf with no noun or to talk about people with autism as autists. [I should note that that's a bad idea even if there weren't any other problems with the term, because people will just think you're from Brooklyn or the Bronx and talking about people with very creative abilities and outlets.]
But there are differences, and I think some of them matter morally. One is that ordinary language does allow for slaves, and "enslaved people" is awkward, whereas "autistic people" or "people with autism" are both common, while "autists" is not. Another is that it's generally accepted that calling someone an autist is unacceptable, and it's at least not generally unacceptable to call someone who is enslaved a slave. That's not the only issue, but that's a difference. For example, it was much worse to call people retarded once that became a standard insult for people without any cognitive disabilities than it was when it was the accepted term and had not yet been used as an insult. Whether it was a good term ever is something people can debate, but surely it's made worse once it becomes used as an insult. So the fact that a lot of people do oppose a way of speaking does count more against it, and the fact that many people approve of a way of speaking does mean there's less to count against it, whatever else is true.
Another difference is that one is a disability and the other is an imposed condition. Both are involuntary, at least in most cases of slavery. Slavery can be accepted voluntarily, especially in cases of indentured servanthood, selling oneself into slavery to pay off a debt, or accepting slavery to avoid a death penalty (well, that's at least not completely involuntary, although it's not actually a range of choices that anyone would consider sufficient for the choice to be fully voluntary). But one is known, at least by most people today, to be something that is not central to who one is but rather imposed. No one today, at least no one I personally know, thinks that anyone who is a slave is the sort of person whose slavery is necessary because they couldn't otherwise function in life. No one thinks slaves naturally deserve slavery. No one thinks it's part of a slave's nature to be a slave.
This is not true with racial categorizations. As much as we might discover scientifically about how there isn't all that much difference between different racial groups, we do process racial categories with stigmatized stereotypes, and scientific studies for decades now have consistently shown that these stereotypes and stigmatized categories will affect how we treat people, at least in small ways that most of us don't pick up on (and especially in situations where we're tired or busy and have to make decisions quickly without thinking carefully about them). This isn't true of the category "slave" even if it is true of other contingent categories. If I find out someone is a slave, I'm not going to process that the way I do if I find out they receive welfare, are homeless, or grew up in a ghetto. Whether I want to or not, I will make assumptions about the person if I discover they're in one of those other categories, and I won't if I find out someone had kidnapped and enslaved them. We're distant enough from the 19th-century practice of slavery (and what does go on today is both under the radar and officially disapproved of) that we just don't respond that way anymore.
So one of the important reasons for avoiding linguistic constructions that serve to foster innatist, essentialist thinking (which really only matters with small children anyway, according to the most careful psychological studies) does not matter with slavery. That means any argument for preferring "enslaved people" to "slaves" must have to do with how people in those categories would perceive it, not how others will be influenced by speaking or hearing the construction. And I suspect the same debate that occurs with disability would crop up here. People who prefer "person with autism" are usually parents, teachers, and psychologists who want to encourage not defining someone by the disability and who want others to respect them as people, taking their interests and desires as important, assuming competence first before assuming incompetence, and other essential features of treating someone as a person. Yet one can do that while using the word "autistic" as an adjective.
The other side is usually from people who have the condition who have the communication skills to express their view on the matter. They in fact prefer to be called "autistic" as an adjective, just as the deaf community generally prefers to be called "deaf" and thinks person-first language is insulting. Why is that? Because they see their condition (which they don't always see merely as a disability, because it involves both impairments and increased abilities) as something very important to who they are. It shouldn't define them as if it's the only thing that matters, but it is part of how they've formed their identity, just as race is for anyone who isn't in the dominant majority racial group in their social location. White people in the U.S. don't see whiteness as part of their identity, because it's part of white privilege not to be affected by race in ways that make you constantly think about those categories. Most members of other racial groups in the U.S. do consider their race to inform their sense of their own identity in significant enough ways that they wouldn't want people not to think of them according to those categories, as the dishonest color-blind ideal (does anyone really think they can pretend not to see race?) would have it.
How should this affect calling people "slaves" vs. "enslaved people"? Well, not having the chance to interview a bunch of people in that category, I just have to guess, but my suspicion is that it's going to be like race and disability, at least in terms of how they think of their identity while enslaved. It's pretty all-defining of what their life is. I can't see how that wouldn't be identity-forming. It's certainly more easily removed than the other cases I've been discussing, and that's why we can speak of people as former slaves. But that linguistic option shows that we can handle the contingency of the category while still availing ourselves of the ordinary way of speaking, and there is at least some moral argument for retaining the category rather than abandoning it, which gives me little reason to want to engage in a major effort to revise our language in a pretty large way.
We got to see X-Men: Days of Future Past today, and I have to say that it's the best of all the X-Men movies so far. (Well, I haven't seen The Wolverine, but I can't imagine that's better. I'm also not sure it counts as an X-Men movie.) I do have a relatively unpopular ranking of X-Men movies. Of the ones I've seen, I think they tend to get better with each one, with one exception. I didn't like X2 nearly as much as the first one. But I think the remaining ones get better with each one, even the much-maligned X-Men Origins: Wolverine, which I do think is better than any of the original trilogy. And I think the third was better than the first two, which is also a very unpopular view among most people I know. (I also think the original Spider-Man trilogy improves with each movie, and hardly anyone agrees with me on that, and I loved Batman Begins but hated the Dark Knight, and I'll forever be on some people's nasty lists for that.) All that is to say that I certainly don't expect people to agree with me on every point when I evaluate this, but at least I can give reasons for what I think.
I wanted to reflect a bit on some of the things I did like and a couple things I didn't. First, what I didn't like. It seems action movies, and superhero movies especially, have lately become averse to explaining things. They include dialogue to explain things enough to prevent you from becoming completely lost, but it's not sufficient to give you a good sense of everything that's going on. A story like this with this many characters should include something to let us understand who it is that we're supposed to be watching. We got nothing about Blink except what she looks like and, after watching her do what she does a few times, a vague sense of what her power does. We got even less on Bishop or Sunspot (and were there others in the opening future scenes that we haven't seen before? I wasn't sure at first who some of them were). The mutants in Vietnam were almost incognito, even to the audience, except for the obvious Toad, who we've seen a later version of. Ink was probably recognizable to comic readers who started after I did, but I'm sure most people had no idea who any of them were besides Toad. It's bad storytelling to have dialogue that no character would ever say, when everyone in the room should know it, just to explain things to the audience. But it's equally bad storytelling to do nothing to explain things to the audience when they do want to get to know these characters and how they work a bit more. Several of the X-Men movies have this problem, but this was particularly annoying, because some of these characters looked really interesting.
I also can't resist saying that the time travel metaphysics in this movie is just plain stupid. It uses a very common time travel story motif, that when you go back in time and change something you have the contradictory scenario where at one time the timeline is one way and then at a later time the entire timeline is different. At what point within the timeline is the entire timeline one way, and at what point within the timeline is the entire timeline a different way? There's simply no way to make sense of it the way they tell the story. The only way to do so is to have simply different timelines, all of which continue to exist, with no change having occurred, just one timeline that's one way and another that's another way, and someone from the future of one timeline is the explanation for events that occur in the past of another timeline. And it was always that way in both timelines. (This is what Abrams Star Trek did.) But the motivation for the story makes little sense there, and the trick of having everyone disappear and suddenly having always been somewhere else instead is a deception, because it's a switch to an entirely different timeline, and everyone still/always dies in the first one. Only in the new one is it different. No timeline actually was one way and then changed to another way. That would require an ordering of timelines, where a whole timeline can be earlier than another, but time only occurs within timelines, not between them.
But I never let bad metaphysics ruin a fun time travel movie for me. I can enjoy a contradictory story, and I did enjoy this one, much as I did some of the worst offenders (the Back to the Future trilogy topping the list, with Timecop coming in a pretty close second). I am always impressed at someone doing it right, as Babylon 5, 12 Monkeys, LOST season 5, TNG Time's Arrow, TOS The City on the Edge of Forever, and a number of other stories have done. But fun stories abound with unworkable metaphysics, and this was certainly one of those. I'm always a sucker for time travel, no matter how badly it's done.
So on to what I liked. This was not just the best of the X-Men movies so far. It was an incredibly good story, rivaling the best of the Marvel movies.
It doesn't beat you over the head with a moral message. It's not even prominent, like in Iron Man, the three Sam Raimi Spider-Man movies, or the original X-Men trilogy. Nor is it a debate with unclear answers, as in Captain America: the Winter Soldier (and the followup in the Agents of SHIELD show), much as I enjoyed that. But it's there. And that usually makes a superhero movie better. In this case, it's not so much the usual mutant analogy with race or the like, although you do get references to that. It's actually the Spider-Man message that great power brings great responsibility, one of the things Sam Raimi did really well in all three films that the too-soon reboot of that franchise didn't do so well at. Iron Man had the same message. Charles Xavier was basically abandoning his responsibilities, and we begin the movie with dire consequences of that in the future (although we don't know Xavier is really the one to blame until much later). There were people under his charge who died, we discover from Magneto, all because he felt sorry for himself and his circumstances and couldn't bear to deal with the difficult situation he'd found himself in. And it ultimately leads to mutants being hunted down and wiped out.
It also didn't seem like it was bringing in as many characters as they could just to fill the movie with toys for marketing or to try to set up other movies that will likely fail (cough ... Amazing Spider-Man 2). The people who were in it from previous movies made sense to appear when they did, and the ones that only had cameos made sense only to have cameos. The ones that were in it more made sense to be in it more, and even the big change from the comics of making Wolverine the time-traveling consciousness instead of Shadowcat could make sense from a story point of view (and not just because Logan is a favorite of fans or because they needed someone who could play both parts as the same actor). Their explanation for why it has to be Wolverine is not that bad, anyway, even if it's clear that the writers really did it because of those other reasons. I was dreading Quicksilver, given the photos released ahead of time, but I liked how they pulled that character off, and the references hinting at his true parentage were nice. I'm not sure why they showed Polaris (his younger sister) and not Scarlet Witch (his twin), unless they were worried about too many comparisons with the Marvel versions of the twins from <s>Godzilla</s> Avengers: Age of Ultron. But that was a nice cameo of a very minor character for the sake of fans.
But the crucial thing is that they told a story. They told one story. It was cohesive and mostly made sense from the point of view of the characters, which is really saying something given how out of character some of them were acting at various times in the story. There was one overall problem to be solved, and every scene in the movie contributed toward that problem coming about or someone trying to stop it. It was a compelling, high-stakes problem, and you really don't have any assumptions about who is safe (other than Wolverine, of course), and that goes for either time period. When everything the characters do seems to make things worse, the story becomes far from predictable. So many details that most viewers wouldn't notice are there to be picked up on by fans of the comic books, but none of them should distract from what else is going on for those who don't pick up on them. In that it very much resembles Captain America: the Winter Soldier. This didn't have the benefit of several successful franchises coming together, though, as the Marvel movies do. The fact that they pulled it all off without that really speaks well of the people Fox has gotten together to make this. I'm really looking forward to X-Men: Age of Apocalypse now.
These are my rankings of Doctor Who stories from the First Doctor period. I have sorted them into five categories rather than trying to give each story a place in a strict linear order.
Cream of the Crop
10. The Dalek Invasion of Earth: One of the best First Doctor stories. It's the second appearance of the Daleks, and given the original naming conventions (where individual episodes were named, not overall serials, as became standard practice later in the show) you wouldn't have gotten the presence of the Daleks spoiled by the title until the end of the first episode. The TARDIS crew ends up in 22nd Century London, where the city has been devastated, with very few people in sight, all of them acting in a robotic manner. When they come across their first Dalek, it's a bit of a shock, because they'd only met the Daleks on their home planet in their first appearance. Despite a ridiculous sci-fi premise for why the Daleks have invaded Earth, this story works incredibly well, which certainly isn't true of all the Terry Nation Dalek stories in this period. I don't think it's his best. That honor goes to The Daleks' Master Plan. But this is among the truly classic stories of the First Doctor period.
21. The Daleks' Master Plan: This is by far my favorite First Doctor story. A full dozen episodes (a baker's dozen, if you count the prologue episode Mission to the Unknown, which came two stories before but was really part of this story). Unfortunately, only three episodes survive, so you either have to listen to the soundtracks for the rest or watch the fan-created reconstructions based on the large number of set photos that exist and the existing soundtracks. But it's worth it. The stakes are higher than in any previous Dalek story, and it has better science fiction concepts than many of the other non-historical stories of this early period. We get to see a future Earth empire with a military that knows all about the Daleks and is trained to fight them, including two noteworthy characters, a brother and sister played by Nicholas Courtney, who later went on to play Brigadier Lethbridge-Stewart, and Jean Marsh as Sara Kingdom, one of my favorite companions over the entire run of the original series. Marsh had also earlier played Princess Joanna in The Crusade and much later returned to play Morgana in the Seventh Doctor story Battlefield, which was also the final appearance of Brigadier Lethbridge-Stewart in the Doctor Who show. This was the only story featuring Sara Kingdom, unfortunately, but she's present for something like eight or nine episodes of it. Terry Nation wrote episodes 1-5 and 7. Unfortunately, the seventh was a Christmas episode that has nothing to do with the rest of the story, which is its only real low point. By that point in the story, we're reliving The Chase, where the Doctor, the Meddling Monk (from The Time Meddler), and the Daleks are running around through time, and it slows down a bit, but those parts are a little better than the middle episodes of The Chase in my view. But the first half of this story and the last two or three episodes are as enjoyable as the First Doctor gets, even with reconstructions of the episodes.
23. The Ark: This is one of the better "future of humans" stories of the First Doctor. The TARDIS appears on a human ship in the future, and there's another intelligent species serving humans as slaves, in effect, although from all appearances it's consensual, and the humans are unaware of the full intelligence of these beings. Halfway through, the TARDIS crew has resolved the original problem that kept them there, and they reappear in the same spot but much further in the future. Since this is a time when the Doctor had no control at all over where the TARDIS ends up, that seems remarkably odd. Then they discover that a revolution has occurred, and the other species has turned the tables on their human masters. Instead of being victims we feel sorry for, they are now the villains. This was a nice nod to the common phenomenon in human history of the victims gaining control and becoming oppressors just as bad as those who had oppressed them. We also get to see an invisible (i.e. money-saving) but very powerful alien race that reminded me very much of the sort of thing you might see on the original series of Star Trek, which was being made around the same time period as this episode. This episode didn't win me over to new companion Dodo. But it has some funny moments between her and the Doctor, where her slang expressions (which are entirely commonplace now, to the point where it shocked me that anyone wouldn't be used to them) give us a glimpse of the First Doctor's cantankerous nature in his complaints that she's not speaking English (which I should note is her first language and not his). And this is one of the few First Doctor stories that I'd gladly show to someone who wanted to see a good example of what the best of his period was like.
Very Enjoyable Stories
2. The Daleks (AKA The Mutants, not to be confused with a later Third Doctor story): This is the serial that gave the show its initial success. It drags a bit about 3/4 of the way through, but overall this is a great introduction to the Daleks. As with most of Terry Nation's Doctor Who stories, there are deeper themes to the story than just an action/adventure romp. In contrast to some of the emphasis of later Doctor Who stories (including some of Nation's own), here we see the Doctor encouraging pacifists to take up arms to destroy a menace that would otherwise end up destroying them. This is one of the best First Doctor stories.
17. The Time Meddler: In this story we get the introduction of our first Time Lord character (not that we have that name yet) besides the Doctor and Susan, and we even get to see his TARDIS, both inside and out. His chameleon circuit works, so we see a TARDIS properly disguised. The Meddling Monk returns as well in The Daleks' Master Plan, so he's also a recurring villain. A renegade Time Lord seeking to change history for some unclear profit motive (or perhaps for some higher good, but in any case the Doctor disapproves), the Meddling Monk has set himself up at a monastery, where he's pretending a whole group of monks are present by using future technology (including a phonograph with recordings of medieval-style chant) to give the appearance of a larger population of monks (as well as to make his stay more comfortable with appliances such as a toaster). The Doctor and his companions eventually figure out what's going on, and the Doctor manages to show some know-how about how a TARDIS works by sabotaging the Monk's TARDIS (know-how that unfortunately never helps him get his own TARDIS working properly again so he can actually control where it goes, at least not until the Time Lords help him later on during the Third Doctor period). This is the first time we see a historical setting with something non-historical worked in, a formula that the show eventually uses almost exclusively for stories taking place in the Earth's past, but we still have another season or so of purely historical episodes to go before that becomes standard. It's also the first story for the new lineup of the Doctor, Vicki, and Steven. It has some moments of lagging, as historical episodes tend to do, and it's the first historical episode with discussion of the real possibility of history-changing (see The Space Museum for the first instance of this, however, although this story doesn't have the complete incoherence of that one). That is a disappointment from the perspective of metaphysics, but the unique elements of this story more than make up for it.
27. The War Machines: This is one of my favorites. If it weren't for the musical chairs with companions, it would be in the top category. The adventure starts with the Doctor and Dodo arriving in Dodo's own time period (roughly the time the episode aired). She's in the first episode and maybe part of the second. She never even appears to say goodbye to the Doctor. It introduced Ben and Polly, but Polly is brainwashed for most of the episode, so we don't get to see her in her right mind very much. And for much of the episode Ben hasn't really connected with the Doctor. So it's not really the usual Doctor-and-his-companions sort of piece. That being said, this was a great introduction to what became a much more standard format for the Second Doctor period, where the Doctor (and in the other cases his companions) is in the time period when the show was being made, the mid-to-late 1960s, fighting off some menace threatening the time period of the viewers of the show. In this case, it's an artificial intelligence that, in a rare case, seems to have nothing to do with aliens, but you do get some rather rudimentary-looking robot threats (in keeping with the era, they couldn't have them be too sci-fi looking). The Doctor uses logical paradoxes to undo the machine, as he does in several other stories (The Green Death, Death to the Daleks, and Shada come to mind). I do tend to like Ben and Polly, but we don't see a lot of Polly in this one. There's a nice scene at the end where the Doctor thinks he's all alone for the first time since the show began, but he ends up getting surprised with some unintended stowaways, leading into the next season (and his final two stories).
29. The Tenth Planet: This is the introduction of the Cybermen and the last story for the First Doctor, so there's particular significance to it, but it doesn't work as well as I'd like. The Second Doctor Cybermen stories are much better. They look like they're wearing cloth outfits instead of metal. It's hard to hear what they're saying sometimes. The Doctor is showing his age, and several of his scenes had to be given to Ben or Polly. (Both Hartnell and the character are dying of old age at this point.) At the end, after defeating the Cybermen, he just collapses and dies, only to be regenerated into the Second Doctor. They don't explain the regeneration all that well, and the final episode is missing (although there are copies of the regeneration scene that have been released on DVD and online). Fortunately, this is one of the missing episodes that have now been animated. Still, this is a decent base-under-siege story, a template that becomes much more common with the Second Doctor, and as the introduction to the Cybermen and the final First Doctor story, it's certainly one to see.
Earlier this week the NYT published a critique of Genesis based on, of all things, the appearance of camels within its narratives. I'm starting to see more and more discussion of this, virtually all of it simply repeating the claims of that article, without much careful reflection on the problems with the broader thesis it puts forward, a thesis I don't think the evidence actually supports.
This isn't actually a very new objection. Scholars have long objected that there isn't a lot of evidence of domesticated camels within the Canaanite region during that time. But there is evidence of domesticated camels in Egypt and Mesopotamia during the period Genesis describes, and the NYT article even mentions that, saying they were more commonly used by nomadic peoples living in the more desert regions. The only thing new here is some carbon dating of camel bones, along with techniques for measuring properties of the bone, which can allow researchers to determine whether the animals were wild or domesticated and had to carry greater weight for much of the time.
I think there are several reasons to be very skeptical of the conclusions the NYT article draws. Here are a few:
1. Contrary to what the article claims, Genesis doesn't report lots of camels being used during the time of the patriarchs. They are sometimes listed among the animals the patriarchs owned, but usually in smaller numbers, and the only reports of their being used for riding are for crossing the desert regions or when referring to nomadic peoples like the Midianites who lived within such regions.
2. Abraham and Lot had to cross that desert to get to Canaan, and the only animals they could have used would have been camels. The NYT article even says that no other animals would be able to make that journey so easily, and even their skepticism doesn't apply to that sort of trip. So if Abraham did come from a region where camels were used regularly at this time (as the article admits), and he had to use them to cross the desert (as the article admits), it stands to reason that he wouldn't have killed them all when he got there. He would have had at least a small number remaining when he had to send his servant to find a wife for Isaac and so on, and we know they kept their own cultural identity and may have been hesitant to trade their camels because of their relatively small number and their inability to procure more while there. Their camels might have increased in number during the time he was living in Canaan, but as long as there were only a relatively small number of them in this period, belonging precisely to his family, we shouldn't expect the kind of evidence of large numbers of camels that the NYT article seems to assume there would be if the patriarchs had them.
3. Abraham is portrayed as being rich, and the existence of a small number of camels in the lists of animals he owned is presented in the book as evidence of his wealth. If they were common around him, the small number of camels would seem insignificant compared with the huge number of other animals he had. But even this small number is presented as evidence of his great wealth. So the portrayal of his camels in the book fits nicely with the claim that the locals didn't have them.
4. If his family only used them when traveling across the desert or on long journeys (as the narrative itself indicates) but otherwise just maintained them as domesticated animals rather than regular pack animals or riding animals, then even the ones they did have might not have appeared to be domesticated by the methods of measuring bone density and the like that these scientists have been using.
5. So I think at best the conclusion being put forward here goes way beyond the evidence. If someone were to conclude from the Genesis narrative that camels were being used throughout the Canaanite region the way the article assumes the book presents things, then it would create a problem. It's still an argument from silence, but it would be odd for there to be no preserved camels from this period if they were that commonly used. But the Genesis narrative doesn't present such a picture, and there's no reason to think the picture it does present is unlikely to have produced the (lack of) evidence this new research provides.
The imminent ban on 40-watt and 60-watt incandescent light bulbs is going to impose a significant cost on our household. This is an interesting case of a somewhat bi-partisan attempt to save energy while imposing what they took to be only a small cost on most households. But it is a cost, and it's a cost that poorer households will be more burdened by. So, like New York's recent bottle bill that adds 5 cents to the cost of a larger variety of bottles, people with lower income will be more burdened by it if they continue to buy products in those bottles, while more affluent households will not notice as much of an effect of the increased cost. Our household, however, will be much more burdened by this than most.
The alternatives to incandescent bulbs don't seem to me to be genuine alternatives for our household. LED bulbs really are the best you can get. LED flashlights fail when the flashlight itself fails. It's never the bulbs that are the problem, and the batteries should last a very long time unless you leave them on all the time or never turn it on (in which case the batteries will corrode). But LED bulbs for ordinary household lights are still very expensive. The prices I'm finding for them online are something like $10 per bulb. This might be fine if they last forever and will never need to be replaced, and the energy savings might also help make up for it, but that's for a household where you won't need to replace them except when they fail on their own. We have a child who actively seeks to smash light bulbs whenever people forget to turn the lights on when he's home or when we let our attention turn to deal with anything but him, allowing him to climb on something to reach them. I think we lose a light bulb or two every week, and we can't be spending $10 per bulb at that sort of replacement rate.
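To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. The $10 LED price and the roughly one-broken-bulb-a-week rate come from the paragraph above; the incandescent price, wattages, daily hours of use, and electricity rate are made-up placeholders, so the exact numbers are illustrative only.

```python
# Rough sketch of the replacement-cost arithmetic in the paragraph above.
# The $10 LED price and the ~one-broken-bulb-a-week rate come from the post;
# everything else below is an assumed placeholder, not an actual figure.

BROKEN_BULBS_PER_YEAR = 52      # about one a week, per the post
LED_PRICE = 10.00               # $ per bulb, per the post
INCANDESCENT_PRICE = 0.75       # assumed $ per bulb
LED_WATTS = 9                   # assumed 60W-equivalent LED
INCANDESCENT_WATTS = 60
HOURS_ON_PER_DAY = 4            # assumed average use per socket
RATE_PER_KWH = 0.15             # assumed electricity rate

# Yearly cost of replacing broken bulbs, household-wide.
led_replacement = LED_PRICE * BROKEN_BULBS_PER_YEAR
incandescent_replacement = INCANDESCENT_PRICE * BROKEN_BULBS_PER_YEAR

# Yearly energy saving from switching a single socket to LED.
kwh_saved = (INCANDESCENT_WATTS - LED_WATTS) / 1000 * HOURS_ON_PER_DAY * 365
saving_per_socket = kwh_saved * RATE_PER_KWH

print(f"Replacement cost per year: LED ${led_replacement:.0f} vs "
      f"incandescent ${incandescent_replacement:.0f}")
print(f"Energy saved per socket per year by an LED: ${saving_per_socket:.2f}")
```

Under those assumptions the energy saving per socket is on the order of ten dollars a year, while the breakage bill for LEDs runs to hundreds, which is the point: the savings can't begin to cover the replacement cost at that breakage rate.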
Compact fluorescents are not a viable alternative either, for two reasons. Fluorescent bulbs do last longer than incandescent bulbs if you simply measure how many hours they can be left on before breaking, but that's not how most people use them. For businesses that leave the lights on for long stretches of time, they make sense. But if you turn them on and off regularly, they break far, far sooner than incandescent bulbs. They often don't last more than a few months with the kind of use they get in our house. I've seen them last a day or two more than once. They might save energy if you're willing to eat the cost of constantly replacing them, but they're not cost-effective unless you keep them on all day. This is not easy if you have been conscientious enough to develop a muscle-memory habit of turning the lights off when you leave the room, and it's next to impossible if you have children who will turn lights on and off all the time. I have to remind myself constantly not to turn the lights off in my office at work and in the classrooms I teach in, because it will cost the college too much money to keep turning them off and on again and replacing the bulbs regularly. The bulbs in our office are constantly in need of replacement, because people often turn them off when they leave the room, either not knowing of this problem or not thinking about it when they leave. And those are adults. There's really no way to control for what small children or children with autism will do with lights, and we've got both.
Even worse is the health hazard given the amount of mercury inside compact fluorescent bulbs. It's not a huge amount of mercury in a given bulb. It's about the size of a period in standard-size type. But even that amount is not a good idea to have around small children, and the EPA's recommended precautions for cleaning them up are simply not possible in our household. When you add in an autistic child who goes out of his way to unscrew them and smash them on the floor, it's simply not viable to have such bulbs anywhere he can either reach or stand on something to reach, which means nowhere except in lights with closed cases.
Fortunately, the law doesn't ban incandescents altogether, just ones that are below a certain energy efficiency. The market provided a solution in the first phase of the ban. The light bulb industry managed to produce some 100-watt and 75-watt bulbs that met the standards that the first phase imposed, and we've been buying those bulbs (and will have to buy exclusively those bulbs until the industry produces similarly more-efficient 60-watt and 40-watt bulbs). We're not actually going to see incandescent bulbs disappear. We'll just see more expensive ones. This is an expense we'll have to absorb without seeing as much benefit as most households would get from it, since our bulbs will have a shorter life than in most households. But it seems to me to be the best alternative for us.
Thabiti Anyabwile has come under a lot of criticism from many quarters for his recent post on the gag reflex and Christian opposition to same-sex sexual acts, increasingly called "homosex" of late. [I'm still getting used to that word, because it still feels like an adjective to me (one without its proper ending), but it's a useful word compared with writing out something like "engaging in same-sex sexual activity," so I will use it.]
He has just posted a followup responding to some of the criticisms as well.
As I see it, there are several issues going on here, and I don't think all the participants in the conversation are keeping them straight. There are a number of ways his argument is being misrepresented (and then made fun of in pretty vile ways as a result), but there are also some genuine philosophical difficulties with some of the things he's saying, and I'm not entirely sure I agree with some of the key points. Even so, some of the things for which he's being unfairly made fun of by a lot of the opposition seem to me to be largely correct and even relatively obvious, things I'm not sure many people will really want to rid themselves of in their ethical theorizing if they were to think their views through more carefully. So maybe they should refrain from making fun of them, if I'm right about that. I want to work my way to that gradually, however, with a bit of a review of some of the key philosophical moves that have been made about the connection between morality and emotion.
1. Ethics and Emotion
I'm not interested first in the application to homosex, although I will say a few things about that later on. I'm primarily interested in the general strategy of ethical reasoning that involves paying heed to emotions like disgust. A good friend of mine complained on Twitter about the arguments found in the original post, arguing that if we allow disgust to guide our ethical judgments it would mean racists' disgust for racial interaction could generate moral principles against interracial marriage (or more particularly against interracial sex). If disgust shows us anything at all about genuine moral principles, the argument goes, then we have to follow our disgust whatever it leads us to loathe. And people can loathe all sorts of things, in ways that don't at all track genuine moral principles. So we shouldn't rely on our disgust to show us anything about morality.
I think this argument is a mistake. The fact that disgust can be directed against things that are not wrong does not show us that disgust isn't ever a guide to morality. All it shows us is that disgust can be fallible. It can sometimes be directed against things that are not morally wrong. But the same is true of emotionless reason. Emotionless reason presumably led Immanuel Kant to say that lying is always wrong. However, it also has presumably led other philosophers to say that lying, while usually wrong, is sometimes the morally right thing to do. If emotionless reason can generate both principles, then obviously it's fallible. But that doesn't mean it never helps us end up with correct moral principles. It just means it's fallible. It sometimes gets things wrong. We can't trust it 100%. But only a radical skeptic (or someone who grants the radical skeptic far too much, as Rene Descartes did) would claim that a source of information is worthless just because it's not 100% reliable. So I don't think we can rule out a connection between emotion and morality so quickly.
As it happens, recent work in feminist ethics has drawn a lot of attention to attempts to separate emotion from ethical reasoning that have led to a bias against ways of moral reasoning that have tended to be more paradigmatic of women than of men. This bias has had the effect of marginalizing women's ethical reasoning, to the detriment of our overall ethical reasoning. Alison Jaggar has argued that much of the history of ethical theory, which happens to have been done mostly by men, has either treated emotion as something completely isolated from ethical reasoning (as Kant did; emotion cannot be trusted, and the only way to get ethical understanding is to reason in a way that doesn't involve emotion) or as the foundation of all our ethics but a foundation that has no basis in any ethical truth (as David Hume did; there is no ethical truth, because ethics is pure emotion and not reasoned).
Thankfully, Jaggar is wrong about the history of philosophy. Sometimes it's because she misinterprets particular philosophers, such as her reading of the Stoics as being opposed to all emotion, which she can be forgiven for, because, well, they do actually say that. But philosophers are often bad reporters of their own views, and it turns out it's not feelings that the Stoics think we should rid ourselves of. It's bad reasoning, which is how they define emotion. There are plenty of feelings, according to the Stoics, that are perfectly fine to have as long as they're compatible with reasoning well. Certainly the Stoics emphasize reason and say they oppose emotion, but what they oppose isn't what we normally call emotion. The Stoic view on emotion is perfectly compatible with taking what most of us call emotions to be very important for ethics. In fact, having the right feelings, ones compatible with reason, is even crucial for the Stoics. They just won't call those feelings emotions.
Jaggar also seems to me to underemphasize the ways that historical philosophers even put a good deal of effort into organizing their ethical theories around emotions. Plato considered it extremely important for the best possible life that your emotions be engaged in appreciating goodness itself on an emotional level. Aristotle explained some of the most important virtues as simply having the tendency to respond to your circumstances with the right level of emotional response. Augustine's entire account of virtue makes it emotional: virtue is having well-ordered love, whereby you love the best things the most and the less-good things less fully. I myself think all three of them were largely right in these things. Ethics is very much tied up with emotion, and attempts to separate ethics from emotion the way Hume and Kant did are, to my thinking, disastrous.
But several questions remain. It's one thing to say that ethics involves having the right emotions. It's another to say that our emotions are, even sometimes, a good guide to the right ethical principles. We certainly can't just read our ethics off whatever emotions we happen to have. There are plenty of times when my emotional response isn't proportional to an offense that's committed, and I either overreact or underestimate a wrong that's taken place. Or I might not be properly placed to experience the good in something and not be as able to rejoice as I should at some good. There are lots of cases where our emotional judgments are a little off, and there are enough cases, such as with the racist example above, where they are drastically off. Indeed, a Christian who believes in the doctrine of the fall should be the first to recognize that, and that was even crucial for Augustine's ethical theory. Our emotions are often not directed in ways that remotely match up with what's truly good.
2. Ethics, Disgust, and Moral Reasoning
But that doesn't mean there's no role for disgust to play in helping us to see certain ethical truths. Jaggar's feminist treatment of this subject is a good example. She argues that women, having been oppressed for the entirety of recorded history by being told that their emotions are wrong when those emotions contradict how they're being treated, are nevertheless right to pay heed to those emotions, because those emotions are genuine clues to the reality that our socially-constructed narrative is otherwise blinding us to. A member of an oppressed group might have absorbed the narrative that they, as unintelligent slaves, have no rights and need the help of those who are guiding society along to make their decisions for them, but their emotions tell them that the views they've officially adopted on the level of conscious reason are somehow wrong. This can be so for any oppressed or marginalized group, not just women, but she picks out women as a group because women have been told (and less so in outright words in recent years but still conditioned by society in this direction) that they are emotional rather than reasoning beings, that their emotions are less trustworthy than the reasoning that's been identified as paradigmatic of men. I don't agree with everything Jaggar says along these lines, but there's quite a lot of it that strikes me as right about the history of how women are viewed and about some of the elements of how we (men and women today) are still conditioned to view each other and ourselves.
So if Jaggar is right, then there are at least some contexts in which emotions will be even a better guide to truth than the more emotionless reasoning that can easily be simply the reflex of our socially-conditioned environment, our lip service to the biases of our day. Now emotions can do that, too, as evidenced by racist disgust at interracial sex, for example. But all Jaggar is claiming is that sometimes emotions can be a better guide to moral truth than whatever process underlies what we're conditioned to call emotionless reason. And that seems to me to be absolutely right.
Even more, I think there are cases where we can show that our emotion adds something to moral reasoning that you simply cannot get from the emotionless reasoning. A friend of mine who works in aesthetics once gave a case that seems to me to indicate this pretty nicely. Suppose you're eating a kidney and a little bit disgusted at it. This is not moral disgust at all. You just ended up in a situation where you're expected to eat something that you don't like the taste of, and you find it a bit disgusting. But after you've been eating it for a few minutes, you discover that it's a human kidney. Suddenly your level of disgust goes way up. That's not from the taste of it, which didn't change, or from any emotionless reason, because emotionless reason has no emotion and thus by itself wouldn't increase your disgust. Rather, your level of disgust increases because of some moral principle lying behind the disgust, one that upon rational examination would easily stand up. Eating humans is morally worse than eating a kidney from some other animal. It should disgust us, and it does. We should feel greater disgust at eating humans, if we're morally healthy. That doesn't mean that it follows that eating humans is always wrong. It's compatible with this disgust that eating humans who died independently of our actions in a case of survival is morally allowable. Yet it does seem that there's a moral principle lying behind the disgust, one that very few people would question, and it's hard to argue that the disgust isn't a sign of that moral truth. The disgust signifies that truth. Its continuation from generation to generation helps maintain our resistance to cannibalism, and we should be glad for that.
(I should note that this example is a lot like C.S. Lewis' example in The Silver Chair of finding out that you're eating a deer that was a talking deer. The difference there, however, is that those eating the deer didn't have any disgust until they found out it was a talking deer. Here there's already disgust at eating the kidney, and it takes on a whole new level when you learn that it's a human kidney.)
I recently rewatched the 1975 Doctor Who serial "Genesis of the Daleks" by Terry Nation. Some online discussions I looked at about "Genesis of the Daleks" made some interesting, and to my mind obviously false, claims about how it fits (or doesn't) into the overall canonical fictional world of Doctor Who.
One claim in particular that caught my interest was the accusation that Terry Nation contradicted some of his earlier Doctor Who stories about the Daleks when he gave the origin of the Daleks in this serial. One discussion pointed out that Nation had made an effort not to contradict his first serial "The Daleks" from 1963, where he establishes the Daleks as creations of a race called the Dals in their war against the Thals. The supposed contradiction comes in "Genesis of the Daleks", where Nation actually shows us this war between the Thals and the race that created the Daleks, and the creator race is not called the Dals but the Kaleds.
Here's my problem. This is not a contradiction. A contradiction takes the form 'P and not-P'. There is nothing of that form here. What you do have is:
1. The race who created the Daleks at the time of the Daleks' creation called themselves the Kaleds.
2. The Thals also called them the Kaleds at that time.
3. At a much later time, probably many centuries later, after an apocalyptic destruction of all civilization and a loss of a good deal of accurate information about the details of that earlier time, someone speaks of the race that created the Daleks as the Dals.
I'm sorry, but I'm not seeing how any of that makes for an inconsistency. Even if we were sure the person telling us they were called the Dals was speaking the truth, it would be difficult to get a contradiction, because it's possible they came to be called the Dals at some time after "Genesis of the Daleks", or that they were called that at some earlier time and that name came to be the more common one again after the apocalypse. But we can't even be sure the Thal telling us this has the right information. Maybe it's just that the wrong name was preserved. There are quite a number of things that could explain how 1-3 might all be true. Terry Nation simply did not contradict his earlier Dalek stories. What he did is use a different name without explaining why different names were used at those two different times, but that's not a contradiction.
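To put the logical point a little more formally (this formalization is mine, not something from the discussions I'm responding to): a genuine contradiction would require something like 'the creator race was called the Kaleds at time t' and 'the creator race was not called the Kaleds at that same time t'. What we actually have is 'they were called the Kaleds at the time of the Daleks' creation' and 'a Thal, centuries later, refers to them as the Dals', and nothing forces those into the 'P and not-P' form, since the times differ and the later report may simply be mistaken.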
I think there's a certain personality type that just likes to find contradictions in everything. A lot of fan criticism of science fiction and fantasy stories exhibits problems similar to the one I've been discussing here. I could point out lots of other examples. That doesn't mean there aren't legitimate criticisms to level against authors. I've criticized J.K. Rowling in print about her concept of changing the past in the third Harry Potter novel, although I did so after pointing out some rather implausible ways of making the story work to avoid the problem I raised. The implausibility there would involve reliable narrators, who would know better, telling untruths, which is more of a stretch than someone centuries after an apocalyptic event getting the name of an extinct civilization wrong, or than the possibility that the group was actually called by two different names.
How you evaluate such attempts to make canonical worlds coherent does depend in part on how plausible the explanation for avoiding the contradiction is. It's nice for fictional worlds to be coherent. Sometimes that's impossible. Sometimes it involves an implausibility but is possible. And sometimes it's not all that implausible if you just think a little harder to see how things might fit together when at first they seem not to.
It's hard not to think of critics who like to find contradictions in the Bible when I look at these stories. There are some genuine difficulties in fitting together some parts of the Bible. I've never seen one that guarantees a contradiction, especially when you take into account that inerrantists don't take the current manuscripts to be inerrant but allow for errors in transcription from manuscript to manuscript. But I have seen places where it's not easy to come up with one highly plausible explanation that shows for sure why the apparent contradiction is not a real one. In most of them, there have been several explanations, none of which stands out as the most plausible, and most of which involve something somewhat unlikely but possible. There's none I know of where I would judge all the explanations so implausible as to require rational evaluators to conclude that it has to involve two contradictory statements that can't be resolved. But I'm coming from an epistemological standpoint where I think the prior plausibility is relatively high. I consider myself to be in a position where I have good reasons for taking the Bible as it presents itself, as God's word, and it follows from that that it's more likely that there is a solution, even if I don't know what it is, than that there isn't. So I'm going to take the less-plausible-sounding accounts as less certain, but I'm going to be more likely to think that one of them is probably true.
That's one difference with fictional worlds. I don't believe there even are Daleks or Time Lords, never mind that the entire Doctor Who canon is consistent. (I think it certainly isn't coherent when it comes to fundamental questions of time travel, for example.) But someone who thinks God is real and is basically the way God is presented in the Bible is going to place a higher prior probability on there being some resolution to a proposed contradiction than someone who has no prior trust in those documents. And I would argue that someone doing this is right to do so if the prior probability is based on a good epistemic state to begin with. And that makes accepting truth in texts that are hard to fit together much easier to do (and not in a way that undermines rationality, assuming the prior probability itself has a rational grounding).
That assumption of prior probability, of course, is one of the fundamental disputes to begin with, but you can't just assume at the outset that someone who is more willing to trust a set of scriptures is wrong in doing so, and pointing to potential contradictions isn't necessarily going to turn the tide of the conversation unless you first undermine the prior probability. Supposed but not actual contradictions, even if they are difficult to put together, are therefore very weak evidence against the coherence of a worldview when the person who holds that worldview is more sure of it than they are of the irresolvability of the supposed contradiction. That makes for people coming from very different standpoints evaluating the supposed contradictions very differently, and from within their worldview each seems to themselves to be right in how they do that. That's something I think not enough people on either side of such debates can see.
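To make that point concrete with a toy example (the numbers here are purely illustrative and my own, not drawn from anything above): suppose one person starts out 95% confident that the texts are coherent, thinks there's a 30% chance of running into an apparent contradiction like this even if they are coherent, and a 90% chance of running into one if they aren't. Bayes' theorem then gives a posterior confidence of (0.95 × 0.3) / (0.95 × 0.3 + 0.05 × 0.9), which is roughly 86%. Someone who started at 50% with the same likelihoods ends up around 25%. The arithmetic is trivial, but it shows how the very same apparent contradiction can rationally leave two people in very different places depending on the prior they brought to it.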
I've discovered the need to adopt a new way of speaking about people who are recently descended from Africans. We've learned in the last couple of decades that we ought to emphasize someone's personhood above any other characteristic, and thus it's thoroughly immoral to use any adjective in front of 'person'. We need to use predicate nouns instead. We no longer have sad people, for example. We simply have people with sadness. We no longer have short people. We have people with shortness. We don't want to define people with sadness as if their sadness is more important than their personhood, so we have a moral obligation to put the noun form after the word 'person'. Grammar does always indicate metaphysics, after all.
One sphere of language in which this lesson has never been properly applied is in the area of race. Why are we still talking about black people, for instance? Do we really want to define people solely in terms of their race? Do we really want to signal that their blackness is so central to who they are that we're going to pretend that people with blackness aren't people? If we call them black people, then we are treating their blackness as if it's a greater part of our conception of people with blackness than their personhood is. People with person-firstness have instructed us that we should never put disability-related adjectives in front of a noun or pronoun referring to a person, because we don't want them identified with that condition. But we've also learned from the same people that having a disability is not negative, which means this policy is not because disabilities are bad. Therefore, we ought to apply it to other cases when something is not bad but might wrongly be taken by someone to be bad, just as we would apply it to things that are genuinely bad. If race is not to be a negative, then I am not a white person. I'm a person with whiteness. It does make it a little awkward to speak of people with Asianness or people with Australian-first-people-ness (i.e. what used to be called aboriginalness). But it's worth the awkwardness of expression to avoid any chance of identifying them with the racial or ethnic group whose membership they possess.
Even worse, it's especially pernicious to say that someone is black (or African-American or whatever racial term we might choose). After all, using predicate adjectives amounts to making identity statements rather than merely ascribing a property to someone the way we would have thought that adjectives in English, even predicate adjectives, do. It's much preferable to say that someone has blackness than to say that she is black. People aren't anything except persons. I'm not philosophical. I have philosophicalness. Glenn Beck is not unfair to his political adversaries. He has unfairness to the people who have political adversariness with him. President Obama is not bad at speaking without a teleprompter. He has badness at speaking without a teleprompter. I shouldn't say that I am Christian. I'm a person who has Christianity. I shouldn't be identified with my faith. I should claim, rather, to possess the entirety of Christianity, as if it belongs to me. We need to avoid identifying people with any property ascribed to them other than personhood. It's much better to say that they possess the entirety of the thing that formerly we would have used to describe them.
For more explanation, please see here (except you can ignore the sections explaining how people with blindness and people with deafness have offendedness at the obviously-correct way to refer to them, and you certainly shouldn't read person-with-autism Jim Sinclair's reasons for disliking person-first language).
Jeremy Pierce is a philosophy professor, Uber/Lyft driver, and father of five.