If free will technically doesn’t exist, is anything our fault?


Read this post, have a conversation, or don’t. It’s really not up to you anyway.

 

Want to get really philosophical? How about having the argument to end (or begin) all arguments: Do we really even have free will?

For context, a growing body of neuroscience research suggests that… we kind of don’t. Or so it would seem, given that measurable brain activity appears to set our actions in motion before we’re consciously aware of “deciding” on them. And that’s only one piece of the puzzle. This Atlantic article goes into more of the science:

The contemporary scientific image of human behavior is one of neurons firing, causing other neurons to fire, causing our thoughts and deeds, in an unbroken chain that stretches back to our birth and beyond. In principle, we are therefore completely predictable. If we could understand any individual’s brain architecture and chemistry well enough, we could, in theory, predict that individual’s response to any given stimulus with 100 percent accuracy.

Yes, indeed. When asked to take a math test, with cheating made easy, the group primed to see free will as illusory proved more likely to take an illicit peek at the answers. When given an opportunity to steal—to take more money than they were due from an envelope of $1 coins—those whose belief in free will had been undermined pilfered more. On a range of measures, Vohs told me, she and Schooler found that “people who are induced to believe less in free will are more likely to behave immorally.”

…but the article also makes plain that, to a certain degree, the same scientists who are disproving free will are in effect saying, “please do not act as if this truth we’re discovering is actually true.” They know that if we throw the premise of free will out the window, life fundamentally changes, and not necessarily for the better.

 

If your life is a series of reactions to the world that aren’t really up to you, can you be blamed for doing wrong?

 

How would thinking of the world this way rearrange how we think about people who commit crimes, or who are simply jerks? Or about people who are kind and generous?

Regarding “A Red Dot”: When does the punishment for a crime, even a terrible one, become too much?

A topic best depicted in the abstract.


 

If you truly want to be challenged emotionally and ethically, I suggest — though with the requisite warnings about content that’s troubling, difficult, and may put you in a head space you do not want to be in — listening to the Love + Radio episode, “A Red Dot”, an extended interview with a man describing what it’s like to live life on the sex offender registry.

This isn’t a gawking look at how awful people live. It’s an attempt to empathize with a person whom many will consider the hardest imaginable to empathize with. And it succeeds in that it doesn’t let him off the hook for making some very bad decisions, or for moments that suggest a lingering disturbance within him. But it also confronts us with the fact that a man can make a bad decision and continue paying the price for the rest of his life, no matter how he may learn, or grow, or change. It’s heavy stuff. I dare you to listen and not find yourself, at least at moments, feeling that empathy.

The tough question is, what can or should be done in this trickiest of situations?

 

If it’s acceptable to keep persecuting people after they’ve paid their debt, what are the limits to punishment?

 

Do we believe people can change enough to be forgiven, or at the very least be left to live their lives?

 

If we do, why is it OK to keep vilifying them? If we don’t, do they deserve what we put them through, or is there a better way to handle those we want to permanently ostracize?

Is it fair to sentence prisoners based on what they might do?

An altogether different sort of prisoner’s dilemma.

 

A bit late to this one, but FiveThirtyEight did a piece on using statistical modeling to aid in prison sentencing that will definitely spark debate.

There are more than 60 risk assessment tools in use across the U.S., and they vary widely. But in their simplest form, they are questionnaires — typically filled out by a jail staff member, probation officer or psychologist — that assign points to offenders based on anything from demographic factors to family background to criminal history. The resulting scores are based on statistical probabilities derived from previous offenders’ behavior. A low score designates an offender as “low risk” and could result in lower bail, less prison time or less restrictive probation or parole terms; a high score can lead to tougher sentences or tighter monitoring.

The risk assessment trend is controversial. Critics have raised numerous questions: Is it fair to make decisions in an individual case based on what similar offenders have done in the past? Is it acceptable to use characteristics that might be associated with race or socioeconomic status, such as the criminal record of a person’s parents? And even if states can resolve such philosophical questions, there are also practical ones: What to do about unreliable data? Which of the many available tools — some of them licensed by for-profit companies — should policymakers choose?

It’s almost as if they’re stealing my schtick right there in the article. But there’s an overriding question I think is the most interesting angle.
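To make the mechanism in that excerpt concrete, here is a minimal sketch of a point-based risk assessment of the kind described: a questionnaire assigns points to factors, and the total maps to a risk tier. Every factor name, weight, and cutoff below is invented for illustration; real instruments derive these values statistically from prior offenders’ outcomes, and the 60-plus tools in use vary widely.

```python
# Illustrative point-based risk score. All weights and cutoffs here are
# hypothetical -- real tools calibrate them empirically.

def risk_score(answers):
    """Sum the points assigned to each questionnaire answer."""
    weights = {
        "prior_convictions": 2,        # points per prior conviction
        "age_under_25": 3,             # flat points if offender is under 25
        "unstable_housing": 2,
        "family_criminal_history": 1,  # e.g. parents' criminal record
    }
    score = answers.get("prior_convictions", 0) * weights["prior_convictions"]
    for factor in ("age_under_25", "unstable_housing", "family_criminal_history"):
        if answers.get(factor, False):
            score += weights[factor]
    return score

def risk_tier(score, low_cutoff=3, high_cutoff=8):
    """Map a raw score to the tiers the article mentions."""
    if score <= low_cutoff:
        return "low"       # could mean lower bail, lighter supervision
    if score >= high_cutoff:
        return "high"      # could mean a tougher sentence, tighter monitoring
    return "moderate"

offender = {"prior_convictions": 2, "age_under_25": True}
print(risk_tier(risk_score(offender)))  # 2*2 + 3 = 7 -> "moderate"
```

Notice what the sketch makes obvious: the score says nothing about this individual’s future conduct. It only reports how people who answered similarly have behaved in the past, which is exactly the question the critics raise.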

 

Is it inherently wrong to sentence people on predicted behavior, even if using this more mathematical model is a net positive for society?

 

If we get a certain percentage of punitive imprisonments “wrong” now under judges’ subjective sentencing, but this system works “better” overall, which is more unfair?