Review: The Big Short – Is it wrong to profit from misfortune you’re powerless to prevent?

Featuring Steve Carell as Angry Guy and Ryan Gosling as Slick Dude.


 

The Big Short probably shouldn’t exist as a movie. As an explanation of exactly how and why the financial meltdown of 2008 happened, it’s fascinating, and does a reasonable job laying out the series of events. But if you’ve read enough news articles, or listened to some of the great podcasts from This American Life or Planet Money since these events unfolded, it’s not really offering a lot of new info. As a story about a few specific finance guys who saw it coming and took action, it’s compelling, but also packed to the gills with journalism and outright explaining disguised as drama, just to allow the audience to follow along.

What results feels like a cross between a Michael Moore movie (specific agenda and point of view, humorous fourth-wall-breaking style) and the most star-studded, entertaining dramatization to escape the confines of what could otherwise have been a talking-head documentary. Its script makes it fun while its facts make it depressing; it has a stylish tone and voice I enjoyed, but it never quite decides what type of movie it wants to be.

But that’s the film as an experience. Strangely, the movie seems only glancingly concerned with the moral questions involved. It clearly takes the stance of “The Big Banks are Evil,” which pretty much every non-rich person agrees with going in. The handful of traders and fund managers who saw the signs early enough to profit from it serve as our gateway into the story, a useful device for all the explaining the film has to do as they figure it all out. But while the movie also paints these people as our “heroes” — we follow their actions, we root for them to succeed — it pays only lip service to the fact that their success comes on the backs of millions of people losing their homes or jobs, and the entire globe suffering a huge financial disaster. There’s a lot of glee at them pulling it all off, and only a couple of quiet moments of realization about the implications. It’s so interested in using these characters to make a bigger point about “the system” that it brushes the possibly-more-nuanced character question under the rug in the process.

So.

 

If you know something terrible is going to happen, affecting millions of people, but stopping it is out of your control, is it wrong to take action to personally profit from that tragedy?

 

How would you feel about doing it?
Should it be legal or should the system be changed to prevent it?
Is it better that someone benefit than no one?
Would you feel obligated to use that profit for good?

How should self-driving cars handle potentially fatal accidents?


Turns out your answer depends a lot on whether you’re the car or the pedestrian.

 

Self-driving cars sound awesome. Less traffic, fewer accidents, more free mental bandwidth while commuting. But nothing is perfect, and some scientists are beginning to examine how automated cars should handle accidents:

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

So one could argue abstractly all day about what’s right; if you’re able to take yourself out of the equation, the math is what it is.
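To make “the math is what it is” concrete, here’s a minimal sketch in Python of the minimize-loss-of-life rule described above, with one hypothetical knob added: an occupant_weight that lets the car value its owner’s life more than a pedestrian’s. The Option class, the occupant_weight parameter, and the weighting numbers are all invented for illustration; only the 10-pedestrians-versus-1-occupant scenario comes from the quoted article.

```python
# Toy illustration only -- the Option class, the occupant_weight knob, and the
# numbers are hypothetical, made up to show how a "minimize loss of life" rule
# behaves. This is not how any real autonomous-vehicle system is programmed.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrian_deaths: int  # expected pedestrian fatalities if this option is chosen
    occupant_deaths: int    # expected occupant fatalities if this option is chosen

def choose(options, occupant_weight=1.0):
    """Pick the option with the lowest weighted expected death toll.

    occupant_weight = 1.0 counts every life equally (pure minimize-loss-of-life).
    occupant_weight > 1.0 means the car values its own occupant more highly.
    """
    return min(options, key=lambda o: o.pedestrian_deaths + occupant_weight * o.occupant_deaths)

# The scenario from the quoted dilemma: plow into the crowd of 10,
# or steer into a wall and kill the one occupant.
scenario = [
    Option("stay the course", pedestrian_deaths=10, occupant_deaths=0),
    Option("swerve into wall", pedestrian_deaths=0, occupant_deaths=1),
]

print(choose(scenario).name)                      # "swerve into wall": 1 death beats 10
print(choose(scenario, occupant_weight=20).name)  # "stay the course": the car now protects its owner
```

The arithmetic itself never changes; the only thing that moves is how heavily you weight the seat you happen to be sitting in, which is exactly what the questions below are poking at.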

 

If it were up to you to decide how autonomous cars handle accidents, what would you program them to do?

 

How does your answer change if:
a) you’re the first one driving one?
b) you’re also in charge of convincing other people to buy one?
c) everyone is required to drive one (and is that worth doing)?