All hail the sacred ethnic pie-chart!

All hail the sacred ethnic pie-chart! When all groups are proportionately represented, justice will ensue.

(Somehow.)

All hail the sacred ethnic pie-chart! When all groups are proportionately represented, justice will ensue.

(Somehow.)

All hail the sacred ethnic pie-chart! When all groups are proportionately represented, justice will ensue.

(Somehow.)

All hail the sacred ethnic pie-chart! When all groups are proportionately represented, justice will ensue.

(Somehow.)

———

Fools are chanting this tune in droves, but the massive following does not make it any less mistaken.

A person-changing-her-mind is a fiction on a par with Bigfoot

A person-changing-her-mind is a fiction on a par with Bigfoot or the Loch Ness monster: it is something I have heard stories about and can readily envision, but never actually witnessed.

I teach the canons of proper argumentation, so I am well aware that the appeal of arguments comes from the prospect/promise that well-crafted instances will make people “change their minds.” Yet, by all regular inductive standards, this transformative power of argumentation occurs only infrequently, if at all.

I say this, not with any cynical intent, but in a strict empirical spirit: to say that arguments have the power to persuade is to make a statement open to either corroboration or falsification. By those standards, the idea of someone being “persuaded by an argument” is no different than other tenets deemed improbable, like the beasts of cryptozoological lore.

Call this the “non-persuasion induction.”

To sustain the salutary expectations traditionally placed on arguing, entrenched mindsets would have to routinely be changed by exposure to sound arguments. Alas, this is simply not so. By parity of reasoning with other fields of inquiry, the absence of success stories hardly justifies the faith placed in arguing. While there may be rare cases where one does alter one’s convictions upon being exposed to a sound argument, I have never actually seen such cases.

Now, my sample is admittedly limited. Even so, I do not think rare instances glimpsed by a few would gainsay a generalization like the non-persuasion induction.

One can of course engage in an activity like argumentation irrespective of the (low) odds of success. But one should do so with a sober awareness that, in this case, the persuasive virtues of argumentation simply do not obtain.

This may seem damning to philosophers who, like me, teach critical thinking. However, the truth is that I do not care to change my students’ minds. It suffices that I am moved by reasons. So, I have resolved to merely show students what an intelligent lifestyle looks like and let them decide whether this is a laudable model to emulate.

Today’s academics are obsessed with race, class, and gender.

Today’s academics are obsessed with race, class, and gender. Consider a call for papers that states its “eagerness” to “assemble a demographically diverse program” and to achieve that end by allowing “members of an underrepresented group” to “identify yourself as such.” Should I, as a professional philosopher, submit to such a conference?

The first thing to stress is the nature and scope of my question. There is no point in trying to change decisions that have already been made. Hence, I do not want to evaluate whether the organizers did the right thing, but rather whether I, as an individual, should submit.

Also, I do not see why the decision I reach should be universalized to range over everyone. Hence, I do not want to determine what some nondescript person should do, but rather what I, as an individual, should do.

Keeping these two caveats in mind, I want to reflect on the institution of blind peer-review. To assess whether the device of blind review is useful, we must first ask: what is the goal of an academic conference? Is the goal to achieve a more adequate representation of certain features of reality, or to achieve a more proportionate representation of certain segments of society?

As I understand it, removing names and other demographic details from submissions is meant to minimize biases and maximize objectivity. However, identifying the profile of submitters is like holding a marathon race with a beauty contest at the last kilometer.

It is unclear to me why, when the race/class/gender pie-chart of speakers is doctored to mirror society at large, epistemic progress somehow ensues. After all, I can easily envision a demographically varied line-up all singing the same tune. In fact, that is what tends to happen at academic conferences nowadays. Hence, letting underrepresented ideas (not groups) be heard seems more concordant with the call to not block the way of inquiry.

This kind of intellectual diversity, however, can already be determined from reading the content of a text, so profiling authors is needless.

So far, my century has only one move…

So far, my century has only one move, and it consists in this:

First, take a person. Do not consider what she/he says. Rather, consider what she/he looks like.

Then, pick a superficial feature that is readily available to sight, like skin color. Assign the person membership in a group based on that feature. People are usually members of many groups at once. One can be, say, part of those-who-take-the-bus, those-who-were-born-on-a-Wednesday, those-who-speak-English, those-who-have-traveled-to-Turkey, those-who-passed-algebra, those-with-left-handed-mothers, those-who-listen-to-jazz, and so on. Disregard all these group memberships and focus only on the following axes, which reflect the current university trivium: race, class, and gender.

Within those axes, the only divisions that matter are white/non-white, upper/non-upper, and male/non-male. Here, be a staunch realist and insist that it is possible to detect and track robust distinctions. However, for sub-distinctions within non-white, non-upper, and non-male, be a staunch relativist and insist that it is impossible to detect and track robust distinctions.

The person so processed is now a spokesperson for the profile obtained, irrespective of whether she/he disowns that role or insists on representing one of the many other groups that she/he intersects with. So, henceforth, treat the person, not as a token, but as a type.

Once these steps have been followed (given the reliance on visual cues, this should only take a second), listen very quickly and superficially to the claim made by the person. It is important not to consider the claim, or any reason(s) offered in support of it. Only the topic needs to be noticed.

Next, compare the claim to the current demographic pie-chart of race, class and gender for that topic. Ask yourself whether assenting or dissenting to the claim would result in an increase or decrease of the slices for white, upper, and/or male. Refer to these guidelines to steer your judgement:

– If assenting to the claim would increase the area, dissent.

– If dissenting from the claim would increase the area, assent.

– If assenting to the claim would decrease the area, assent.

– If dissenting from the claim would decrease the area, dissent.

These guidelines allow one to judge claims pertaining to topics that one is ignorant of, provided one can forecast the relevant demographic increases or decreases. The ambit of activism thereby becomes unconstrained.

The legions who implement the foregoing steps hope to eventually bring about topic-specific pie-charts that match the pie-chart of society at large. This is their ideal. They all assume that, if such a demographic match is finally achieved, justice will ensue. Somehow.

The Jesuits used to say “Give me a child until he is seven…

The Jesuits used to say “Give me a child until he is seven and I do not care who has him afterward.” This actually reflects sound cognitive science. It is also the creed of cowards.

Some ideas are weak, so they can only enter defenseless minds. Other ideas are strong, so they do not fear opposition, like a boxing champion who seeks only worthy rivals.

Weak ideas need to shun the critical adversity of a capable mind; otherwise, they would not spread. Strong ideas will persuade healthy adults, in time. Practically speaking, then, we can estimate the merit of one’s preferred ideas by asking: at what age do you want instructors to introduce those ideas into the schools? The younger the age, the weaker the idea (and the more cowardly the proponent).

My ideas are not taught in kindergartens. There are no rhymes that praise, say, a “Selfish Shellfish.” I therefore cannot benefit from such a head start. I teach adults only.

This contrasts with the current struggle to control the curriculum. In a state-run educational system, there is only one curriculum, so every group is vying for some precious air-time.

True, when you introduce an idea early, that idea will thereafter enjoy an unmatched advantage. Still, young adults are open to revising their beliefs. By that time, though, individuals are not so easily fooled or brainwashed. Candidate ideas must therefore knock on their front door and be let in, willingly, on account of their genuine rectitude.

This is a game-changer. It is, at any rate, the only game I am willing to play, for the cowardly method of spreading ideas is unbecoming to my soul.

What university now teaches

Conversation between A and B, version 1:

Person A: ‹ insert an object, act, or event › is ‹ insert a feel-good slogan or buzzword ›.

Person B: Why should anyone agree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›?

Person A: Because those who do not agree are ‹ insert a derogatory label ›.

Person B: Why are they ‹ derogatory label ›?

Person A: Because they disagree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›.

Conversation between A and B, version 2:

Person A: ‹ insert an object, act, or event › is ‹ insert a feel-good slogan or buzzword ›.

Person B: Why should anyone agree that ‹ object, act, or event › is ‹ feel-good slogan or buzzword ›?

Person A: Cuz.

Version 1 of the conversation is in no better evidential and/or rational standing than version 2. If, upon graduating from college or university, all that one can do is devise more dialectically elaborate and rhetorically eloquent variants of version 1, then no matter what the diploma or the rector or the grandparents say, the “skills” that one has been taught are worthless.

I want to put two claims side by side…

I want to put two claims side by side:

One person’s need is a demand on another person’s energy/life.

One person’s need is not a demand on another person’s energy/life.

Which is correct? Based on classroom experience, what amazes me is the ubiquity of the following answer: the upper claim. What also amazes me is the speed of the verdict. Yet, what troubles me is how this ubiquity and rapidity can unfold without the slightest hint of (or demand for) a justification.

When asked to pick, students are quick. When asked why, there is dead silence.

Unless I am missing something, the above juxtaposition does not present any reason(s) for why one ought to endorse either claim. Each claim might serve as a conclusion in an argument, but as things stand we find no premises alongside them that supply any kind of support. That is okay: linguistic competence can be deployed without inference. Still, to endorse or privilege one claim over the other is to assume that its truth or falsity is self-evident. Since the truth or falsity of the claims is not self-evident, I would really like to see some arguments being made.

Ideally, I would also like to see a decision postponed until such arguments are given and compared. Only when an argument is made can it be evaluated. Until that work has actually been done, the endorsement of a conclusion means next to nothing. Feeling really strongly about something and expressing that feeling by an endorsement is a bit like going over a textual passage with a yellow highlighter: it lets others know what you like, but it leaves them unable to determine why you like it—or why they should like it too. “I like pizza.” “Good for you. I like sushi.” Then what?

An argument can be good or bad, but a conclusion itself is not good or bad. Consider the following claims side by side:

I should cut off my leg.

I should not cut off my leg.

Which claim is correct? Unsurprisingly, the following answer is ubiquitously and rapidly given: the bottom one. Yet, what if we graft the following before the upper claim: “I have gangrene in my leg” and “The gangrene in my leg will soon kill me” and “There is no other way to remove the gangrene in my leg than by cutting it off” and (crucially) “I want to live” therefore “I should cut off my leg”?

What if we graft this string of claims before the bottom claim: “I have gangrene in my leg” and “The gangrene in my leg will soon kill me” and “There is no other way to remove the gangrene in my leg than by cutting it off” and “Once, there was an inscription on a rock that said that one should never cut off one’s limb under any circumstance” and “The rock with the inscription was found near where I was born” and “My parents raised me to believe in the sacred nature of the inscription” therefore “I should not cut off my leg”?

In an open contest between these two arguments, which conclusion would win out? I do not need a vote to know which conclusion I would endorse.

Admittedly, the assessment of an argument can be complicated, but amid this complexity there are howlers that reliably brand an argument as worthless. Chief among these howlers is assuming the very thing you ought to prove. In giving reasons for a claim, one cannot at any point employ that claim as a reason, since it is the very idea being called into question. To use a claim as both premise and conclusion is to say things twice, which is no better than saying them loudly. Both tactics have rhetorical merit (they get kids to move, for instance), but if one is aiming to address a healthy adult mind, then repetition brings no support. So, any viable support for a claim must not be circular. Most people serious about ideas already know this. Yet, somehow, when it comes to ethics, amnesia sets in. “I like the top proposition.” “Good for you. I like the bottom proposition.” Then what?

There is nothing magical that makes a principle automatically orient one’s conduct. Rather, principles provide guidance that is rendered possible by one’s actions. Apart from this real-world activity (and its success or failure, as the case may be), there is no reason to think that some propositions are inherently better suited to serve as principles. Hence, you cannot recognize a principle just by reading a formulation of it in isolation—much less determine whether it is a good principle just by re-reading that formulation. To ascertain whether a given principle has any merit, more is needed.

Once we demystify principles and their fallible working, we realize that a principle, like any other claim, stands in need of a justification.

Now, as mentioned, the two claims juxtaposed at the start do not come with any reasons in their support. Even so, inclinations will kick in, so one is unlikely to stay indifferent to these incompatible views.

Well and good, but philosophy is not psychology.