Review: Sea of Rust – What will robots fight over once we’re gone?

Guys? Little help? Trying to maintain the primacy of the individual over here.

Lately I’ve been gravitating toward sci-fi stories, no matter the medium. The way good sci-fi focuses so clearly on asking an interesting question, then exploring the implications of the answers that come back… strikes some chord deep in my brain.

Looks at website description above.
Oh. Right.

Sea of Rust doesn’t strive for literary prose or nuanced character study. But it does explore a specific potential version of a post-humanity world with a surprising depth of thought and feeling.

In this version, humanity created AI, and AI destroyed humanity. In the aftermath, some AIs are individuals, former servants or laborers scrounging for survival in a robotic Mad Max-style future. And they live in fear of hive-mind-style OWIs — skyscraper-sized “One World Intelligences” fighting to be the one and only being left on earth. OWIs want to subsume every other mind in existence, or use their mind-linked automaton armies to wipe out anyone who still clings to independence.

What it means to be an individual, what it means for a machine to have a soul, the long-term purpose of any “thinking thing” in the universe: these are big questions for a fun genre book full of robot gunfights. Instead of stopping at Terminator’s Skynet, this book wonders what comes next when the artificial intelligences that outlive us start having conflicts among themselves.

What will the robots fight over once we’re all gone?

Is there anything essentially human they’d value enough to maintain in our absence?

Better apocalypse: AI takeover or climate catastrophe?

Terminators? Ha. Any AI worth its microchips knows: they never see the cute ones coming.

As computing power rises exponentially, the singularity approaches. In our lifetimes, it’s very possible an artificial super-intelligence could essentially become a god on earth — our fates bound to the hope that the new god we’ve created is a benevolent one.

Meanwhile, after centuries of man-made damage to the earth’s climate, temperatures rise and water reserves dwindle, while mass migration and wars over resources lie just over the horizon.

Which end-of-the-world scenario would you rather face, AI takeover or climate catastrophe?

Assume you are indeed going to live through it, not die immediately as it kicks off. (Nice try.)

Bonus question: if this toss-up is too easy a choice, which two doomsday scenarios would be harder to choose between? 

Should we ban AI-controlled weapons outright?

Hopefully no killer robots travel back from the future to prevent said ban.

And now for the flip side of the robots-replacing-humans coin. Not that I was going for an AI theme this week, but as it turns out, the world’s top AI scientists proposed an international ban on AI-controlled offensive weapons.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, and Professor Stephen Hawking, along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Time to have all the arguments we’ve had for years now about the ethics of drone warfare, with a new and exciting layer of sci-fi conjecture.

Assuming the nations and corporations of the world all comply, is there any argument against this ban?

If the world can’t agree on an outright ban, what does the new arms race look like?

If AI weapons do move forward, what regulations or limitations would you put in place to prevent disaster — or even apocalypse?

Review: AMC’s ‘Humans’ – What’s the point if robots are better than us?

They’re probably reading a story about a personified object like a train or something. How childish.

It’s been a big year for robots. Ultron was deliciously menacing and Spader-y; Terminators came back as they’re so fond of promising; and if you haven’t seen Ex Machina, you missed what is probably the best movie of the year (and which I may have to come back to in another post).

Those movies — and countless others — paint the robots as villains, as killers, as a sign of doom, the violent end of mankind. But what makes AMC’s new show Humans utterly compelling is how it subverts all of that. These robots don’t have a horrible agenda (mostly). They’re just really good at things. They do menial tasks we don’t want to do, efficiently and without complaint. They take care of the sick who need them. They’re hyper-aware of their surroundings, so they never hurt any humans, even by accident (unless they’re broken, in which case they are promptly repaired or replaced).

More than any other robot story I can remember, Humans brings to life how robots (or as the show calls them, “synths”) might end up being better than us not just at labor, but at the things we see as making us human — and as a result, taking our humanity away from us not by force, but by merit.

If robots can think and act more precisely, they can take over our most skilled professions, like surgery or scientific research — at which point, why bother trying to compete? They’ve stolen our ambition and aspiration. If they’re more patient, better listeners, and always make rational decisions about what’s best for us, could they be better parents than us at our most frazzled and frustrated? It might be better for the child if they take that away from us too. If a synth is totally loyal, physically perfect, and exists only for our happiness, then to an awkward lonely teen or an adult in an unhappy marriage, how could they not pose a tempting alternative to the messiness of real relationships?

The show is a nice mix of mystery and crime story, science fiction and human drama, which makes it extremely watchable week to week. But what makes it special, and why it deserves the most credit, is that it makes us consider how artificial intelligence might take over not suddenly and by force, but by a gradual superiority that leaves even us having to admit to ourselves: maybe they deserve it.

What would be the last things that only humans could do as robots get smarter and more capable?

What things that seem so central to your life now would you be happy to concede to a machine?

What would be the final leap they’d have to make before you could feel like you had a relationship with an artificially created life form?

Are some of the final things that make us distinctly human actually not so great after all, where we’d just be better off without them?