So far, my century has only one move…

So far, my century has only one move, and it consists in this:

First, take a person. Do not consider what she/he says. Rather, attend to what she/he looks like.

Then, pick a superficial feature that is readily available to sight, like skin color. Assign the person membership in a group based on that feature. People are usually members of many groups at once. One can be, say, part of those-who-take-the-bus, those-who-were-born-on-a-Wednesday, those-who-speak-English, those-who-have-traveled-to-Turkey, those-who-passed-algebra, those-with-left-handed-mothers, those-who-listen-to-jazz, and so on. Disregard all these group memberships and focus only on the following axes, which reflect the current university trivium: race, class, and gender.

Within those axes, the only divisions that matter are white/non-white, upper/non-upper, and male/non-male. Here, be a staunch realist and insist that it is possible to detect and track robust distinctions. However, for sub-distinctions within non-white, non-upper, and non-male, be a staunch relativist and insist that it is impossible to detect and track robust distinctions.

The person so processed is now a spokesperson for the combination of the profile obtained, irrespective of whether she/he disowns that role or insists on representing one of the many other groups that she/he intersects with. So, henceforth, treat the person, not as a token, but as a type.

Once these steps have been followed (given the reliance on visual cues, this should only take a second), listen very quickly and superficially to the claim made by the person. It is important not to consider the claim, or any reason(s) offered in support of it. Only the topic needs to be noticed.

Next, compare the claim to the current demographic pie-chart of race, class, and gender for that topic. Ask yourself whether assenting to or dissenting from the claim would result in an increase or decrease of the slices for white, upper, and/or male. Refer to these guidelines to steer your judgement:

– If assenting to the claim would increase the area, dissent.

– If dissenting from the claim would increase the area, assent.

– If assenting to the claim would decrease the area, assent.

– If dissenting from the claim would decrease the area, dissent.

These guidelines allow one to judge claims pertaining to topics that one is ignorant of, provided one can forecast the relevant demographic increases or decreases. The ambit of activism thereby becomes unconstrained.

The legions who implement the foregoing steps hope to eventually bring about topic-specific pie-charts that match the pie-chart of society at large. This is their ideal. They all assume that, if such a demographic match is finally achieved, justice will ensue. Somehow.


The Jesuits used to say “Give me a child until he is seven…

The Jesuits used to say “Give me a child until he is seven and I do not care who has him afterward.” This actually reflects sound cognitive science. It is also the creed of cowards.

Some ideas are weak, so they can only enter defenseless minds. Other ideas are strong, so they do not fear opposition, like a boxing champion who seeks only worthy rivals.

Weak ideas need to shun the critical adversity of a capable mind, otherwise they would not spread. Strong ideas will persuade healthy adults, in time. Practically speaking, then, we can estimate the merit of one’s preferred ideas by asking: at what age do you want instructors to introduce those ideas into the schools? The younger the age, the weaker the idea (and the more cowardly the proponent).

My ideas are not taught in kindergartens. There are no rhymes that praise, say, a “Selfish Shellfish.” I therefore cannot benefit from such a head start. I teach adults only.

This contrasts with the current struggle to control the curriculum. In a state-run educational system, there is only one curriculum, so every group is vying for some precious air-time.

True, when you introduce an idea early, that idea will thereafter enjoy an unmatched advantage. Still, young adults are open to revising their beliefs. By that time, though, individuals are not so easily fooled or brainwashed. Candidate ideas must therefore knock on their front door and be let in, willingly, on account of their genuine rectitude.

This is a game-changer. It is, at any rate, the only game I am willing to play, for the cowardly method of spreading ideas is unbecoming to my soul.

What university now teaches

Conversation between A and B, version 1:

Person A: ‹ insert an object, act, or event › is ‹ insert a feel-good slogan or buzzword ›.

Person B: Why should anyone agree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›?

Person A: Because those who do not agree are ‹ insert a derogatory label ›.

Person B: Why are they ‹ derogatory label ›?

Person A: Because they disagree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›.

 

Conversation between A and B, version 2:

Person A: ‹ insert an object, act, or event › is ‹ insert a feel-good slogan or buzzword ›.

Person B: Why should anyone agree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›?

Person A: Cuz.

 

Version 1 is in no better evidential and/or rational standing than version 2.

If, upon graduating from college or university, all that one can do is devise more dialectically elaborate and rhetorically eloquent variants of version 1, then no matter what the diploma or the rector or the grandparents say, the “skills” that one has been taught are worthless.

I want to put two claims side by side…

I want to put two claims side by side:

One person’s need is a demand on another person’s energy/life.

One person’s need is not a demand on another person’s energy/life.

Which is correct? Based on classroom experience, what amazes me is the ubiquity of the following answer: the upper claim. What also amazes me is the speed of the verdict. Yet, what troubles me is how this ubiquity and rapidity unfold without the slightest hint of (or demand for) a justification.

When asked to pick, students are quick. When asked why, there is dead silence.

Unless I am missing something, the above juxtaposition does not present any reason(s) for why one ought to endorse either claim. Each claim might serve as a conclusion in an argument but, as things stand, we find no premises alongside them that supply any kind of support. That is okay: linguistic competence can be deployed without inference. Still, to endorse or privilege one claim over the other is to assume that the truth or falsity of the claims is self-evident. Since the truth or falsity of the claims is not self-evident, I would really like to see some arguments being made.

Ideally, a decision should be postponed until such arguments are given and compared. Only once an argument is actually made can it be evaluated. Until that work has been done, the endorsement of a conclusion means next to nothing. Feeling really strongly about something is a bit like going over a textual passage with a yellow highlighter: it lets others know what you like, but it leaves them unable to determine why you like it—or why they should like it too.

“I like pizza.” “Good for you. I like sushi.” Then what?

An argument can be good or bad, but a conclusion itself is neither good nor bad. To see this, consider the following claims, side by side:

I should cut off my leg.

I should not cut off my leg.

Which is correct? Unsurprisingly, the following answer is ubiquitously and rapidly given: the bottom one. Yet, what if we graft the following before the upper claim: “I have gangrene in my leg” and “The gangrene in my leg will soon kill me” and “There is no other way to remove the gangrene in my leg than by cutting off the leg” and (crucially) “I want to live” therefore “I should cut off my leg”?

What if we graft this string of claims before the bottom claim: “I have gangrene in my leg” and “The gangrene in my leg will soon kill me” and “There is no other way to remove the gangrene in my leg than by cutting off the leg” and “Once, there was an inscription on a rock that said that one should never cut off one’s limb under any circumstance” and “The rock with the inscription was found near where I was born” and “My parents raised me to believe in the sacred nature of the inscription” therefore “I should not cut off my leg”?

In an open contest between these two arguments, which conclusion would win out? I do not need a vote to know which conclusion I would endorse.

Admittedly, the assessment of an argument can be complicated, but amid this complexity there are howlers which reliably brand an argument as worthless. Chief among these howlers is assuming the very thing you ought to prove. In giving reasons for a claim, one cannot at any point employ that claim as a reason, since it is the very idea being called into question. To use a claim as both premise and conclusion is to say things twice, which is no better than saying them loudly. Both tactics have rhetorical merit (they get kids to move, for instance), but if one is aiming to address a healthy adult mind, then repetition brings no support. So, any viable support for a claim must not be circular. Most people serious about ideas already know this. Yet, somehow, when it comes to ethics, amnesia sets in. “I like the top proposition.” “Good for you. I like the bottom proposition.” Then what?

There is nothing magical that makes a principle automatically orient one’s conduct. Rather, principles provide guidance that is rendered possible by one’s actions. Apart from this real-world activity (and its success or failure, as the case may be), there is no reason to think that some propositions are inherently better suited to being principles. Hence, you cannot recognize a principle just by reading a formulation of it in isolation—much less determine whether it is a good principle just by re-reading that formulation. To ascertain whether a given principle has any merit, more is needed.

Once we demystify principles and their fallible working, we realize that a principle, like any other claim, stands in need of a justification.

Now, as mentioned, the two claims that I have juxtaposed at the start are not accompanied by any reasons. Even so, inclinations will kick in, so one is unlikely to stay indifferent to these incompatible views.

Well and good, but philosophy is not psychology.

Most people have one major appliance…

Most people have one major appliance, or at least will purchase one at some point in their lives. The purchase itself is an interesting event. The deliberations leading up to it can often take months or even years. Many factors have to be considered: price range, durability, functional options, fit with available space, mode of delivery, etc.

To get a handle on all these parameters, buyers typically shop around. And, if a previous brand was a let-down, switching to a competitor is almost assured. Name recognition carries some weight but, on the whole, consumer patterns are not beholden to much loyalty. The purchase of a durable household good is thus a solid case study in rational decision-making.

Now, how are religions chosen? They are not. Instead, the accidents of geography pretty much fix this from birth. Such a generalization of course admits of exceptions—because humans have free will. Still, knowing the womb that a given human comes out of is arguably the most reliable predictor of life-long religious affiliation.

Call me crazy, but I think that the choice of fundamental beliefs deserves more attention/deliberation than the choice of household appliances.

Is one person’s need a demand on another person’s energy/life?

The smallest unit of information is the “bit,” which is the computer geek’s short-hand for “binary digit” (actually, using “binary digit” in ordinary conversation might qualify you as a geek).

Binary means “based on two.” You might therefore think that 2 is the exemplary bit. However, 2 is a single digit, so it is not necessarily based on two. The idea of binary is meant to capture a contrast between two things. Interestingly, the exemplary bit is not 2 but 1. What makes 1 binary is that it is not 0. So, when the number 0 lurks in the background as an alternative to 1, then the presence of 1 suddenly carries information, because it tells you at least one thing, namely that it is not 0. Without this contrast, 1 is just 1, and as such cannot convey anything.

This may all sound abstract, but it is actually quite sensible. Imagine that you want to write something down. You have two kinds of paper before you: one stack of sheets is black and another is white. You also have two kinds of pens at your disposal: one uses black ink and the other uses white ink (Is there such a thing as white ink? If not, just switch from ink to paint). Which would you choose? Obviously, you have some leeway. There is no reason, for instance, why you should privilege black ink on white paper, since you could just as easily achieve everything that this conventional pair accomplishes by using white ink on black paper.

You would not, however, use black on black or white on white. Why is that?

To extract the philosophical lesson from this scenario, assume that the white and black tones match perfectly, that ink on the paper would not leave any relief, and so on. The point I want to make would survive such nitpicking (reliefs, for instance, are a species of difference), but I want to reach it without tedious detours. The take-away message is that, without some contrast, the sheet can convey no information. You might reply that the mere choice of, say, a white sheet can be significant. That is true. But, again, if it conveys the least bit of information, it is in virtue of being white and not black.
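To make that take-away a bit more concrete, here is a minimal sketch in Python (my own illustration, using the standard Shannon measure of information, which the passage itself does not invoke): a page written white-on-white is a stream with no contrast and therefore carries zero bits, whereas a page that mixes two tones can carry up to one bit per mark.

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Average information per symbol, in bits (Shannon entropy)."""
    counts = Counter(symbols)
    total = len(symbols)
    return sum((c / total) * math.log2(total / c) for c in counts.values())

# White ink on white paper: every mark is the same, so there is no contrast.
print(entropy_bits("wwwwwwww"))  # 0.0 -- the sheet conveys nothing

# Black ink on white paper (or the reverse): two contrasting marks.
print(entropy_bits("wbwbbwbw"))  # 1.0 -- one bit per mark
```

The arithmetic merely restates the point above: the information rides on the contrast, not on the marks themselves.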

Here’s another variation on the same idea. When the high-ranking mystics at the Vatican get together to elect a new Catholic Pope, they isolate themselves in some room. No one really knows what goes on in there, but at any rate, when they have chosen the person, they signal their decision to the outside world by lighting a fireplace, which in turn lets smoke come out through a chimney (I am not making this up). Journalists have their television cameras aimed at the chimney and thus immediately get the message: a new Pope has been chosen. Now, what if, instead of lighting a fire, the Catholic figures extinguished a fire that was already burning steadily? Clearly, the difference between lit/unlit would achieve the same effect.

Conveying “that” someone has been chosen is a simple message, whereas conveying “who” has been chosen would take a more complex code. Morse code, for instance, would be more than enough to accomplish this task, since it could use smoke/smokeless sequences to convey everything that the regular alphabet can. However, for any of this to even take place, you have to have a binary system with a minimum of contrast.
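As a purely hypothetical sketch (again in Python, with a made-up name standing in for whoever was chosen), here is how a smoke/smokeless contrast could carry the “who” and not merely the “that”, using ordinary 8-bit character codes rather than Morse proper:

```python
def to_smoke_signal(text):
    """Encode text as a smoke (1) / no-smoke (0) sequence, 8 bits per character."""
    return "".join(format(ord(ch), "08b") for ch in text)

def from_smoke_signal(signal):
    """Decode a smoke/no-smoke sequence back into text."""
    chunks = [signal[i:i + 8] for i in range(0, len(signal), 8)]
    return "".join(chr(int(bits, 2)) for bits in chunks)

name = "Pius"  # a made-up name, purely for illustration
signal = to_smoke_signal(name)
print(signal)                     # 01010000011010010111010101110011
print(from_smoke_signal(signal))  # Pius
```

Nothing in the encoding is richer than the lit/unlit difference itself; the added complexity lies only in how long the sequence of contrasts is allowed to run.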

The most prevalent ethical convictions are like a white color that has never been contrasted with a non-white one, or a stream of smoke that has never been interrupted. Despite (or because of) their prevalence, those beliefs are rarely juxtaposed with something genuinely different. Let me therefore propose the following. Consider this statement:

One person’s need is a demand on another person’s energy/life.

Seen from a certain perspective, this statement conveys a great deal of information. It contains many letters, for instance, and moreover strings those letters in a specific combination. In this sense, the statement is distinguished from all the other statements that are unlike it. However, all of this grammatical information is a means to an end, namely the conveyance of a single idea. The idea is that one person’s need is a demand on another person’s energy/life. What would be the contrast of this idea? Presumably, it would be this:

One person’s need is not a demand on another person’s energy/life.

All I have added here is the word “not.” This linguistic marker of negation makes the newest statement say the opposite of what the previous one said. Contrasting the two statements thus yields a binary opposition, which we could now convey with “1” or “0.” Indeed, one could generate the same opposing ideas, for example, by asking the following question:

Is one person’s need a demand on another person’s energy/life?

Some questions can be formulated in an open-ended fashion, but this question has been whittled in a way that lets it admit two clear answers: yes or no. Because of this, it could be answered by choosing a white or black sheet of paper, starting or interrupting a chain of chimney smoke, and so on.

None of this stuff about information constitutes ethical deliberation. However, the geeky lesson is that, if the opposite side is ever to get a hearing, it must first be acknowledged that there is an opposite side. Without that, one is left with an un-contrasted belief—like a white noise that one has heard from birth without interruption, and thus, never noticed.

Give me a lie-detector test…

Give me a lie-detector test and ask me: “Do you think that vast segments of the world population are wrong?” and I will answer: “Yes.” Truthfully. Truth is, many others will answer that too, with just as much honesty. We can’t all be right.

The difficulty of this situation is amplified when the wrongness at hand pertains to ethical convictions. To be wrong in such matters is not just to be “wrong,” it is to be “bad.” This distinction is a big deal: we correct people who are wrong, but we blame people who are bad.

How could one possibly cope with a belief in the vast wrongness of others? How can one step out of one’s home every morning and set foot in a world filled with bad people? Wouldn’t blaming the world almost across-the-board drive one mad? Maybe, but not necessarily. A lot turns on how ethical beliefs translate into practical conduct.

Here is a three-part way of understanding what humans do when they perform moral actions (the division can be found in the work of Aristotle, Kant, and elsewhere—but it is often forgotten in professional discussions of ethics, and is virtually unknown in lay circles). At the top, you have a general principle. Ideally, it can be stated as a proposition, preferably a clear one; say, “You ought to take off your hat when visiting someone’s house.” I know, this example is boring, but it will keep heads cool to stick with an uncontroversial example at this point, so that we can get the three-part division right. Despite the clarity of a given formulation, a lot of aspects are left open in a general principle. Note, for example, that there are no particular names mentioned: the principle does not say whether it is dealing with a visit to the house of Ted, Jehad, Sally, etc. Likewise, no particular times are mentioned: even with the same house, the principle does not say whether it pertains to a visit on a Monday, on Tuesday the 24th of November 2027, etc. This lack of specificity is what makes the principle general. The boon of this generality is that the principle can now apply to many situations, not just one.

We can only act in the real world and, in the real world, everything is here and now. Particular real-world situations are thus the second component of the three-part model. Call them cases. The general principle I just spoke of applies to particular cases. So, if you visit Sally’s house on Tuesday the 24th of November 2027 while wearing a hat, and if you hold the principle “You ought to take off your hat when visiting someone’s house,” then you should take off your hat.

Now, a principle does not magically work on its own; you have to apply it. Applying is a form of action. As such, the application of a principle is no more or less mysterious than any other action (say, moving a coffee mug to one’s lips). However, the question of when and where to act is more problematic. Deciding whether an action is appropriate in a given set of circumstances is a process called judging, so the faculty that figures this out is called judgement.

A moral agent has to judge when and how a general principle applies to a particular case. This, then, is the three-part model.

If you have only a principle, it means nothing, because it can make no tangible difference in the world. If you only have particular cases, they mean nothing, since they are brute situations that can be looked at any which way. And if you want to judge but have neither a re-applicable principle to guide you nor any particular cases to act on, your judgement will be like glue with nothing to stick together.

We are fallible creatures. So, for each of these three parts, we can err. I might think, for instance, that I ought to take off my hat when visiting someone’s house, yet in the throes of intense conversation I might fail to notice that I have entered Sally’s house. Or, I might notice the house yet forget that I have a hat on. Or, I might notice both my hat and the house yet forget that I have committed myself to endorsing the general principle. Or, I might take off my hat all the time, thereby contravening other principles I hold, say, that “I ought to keep my hat on in winter.” And so on.

Now that we are equipped with these distinctions, we are in a position to see how one can believe in the vast wrongness of others yet not let vast moral condemnation drive one mad: in matters of morality, the people around me uphold mistaken principles, but apply them with near 100% accuracy. They are trying in earnest to do the right thing—and in a way they are succeeding brilliantly. They spot the right cases and act in the manner specified by their adopted principles. I cannot condemn or blame them for being so consistent. It’s just that the principles that they are applying are bogus.