The rise of sentient artificial intelligence has been a staple of science fiction plots for decades. Star Trek: The Next Generation's Data, played by Brent Spiner, is among the most iconic examples, having appeared across several TV series and films. Season 2 is where TNG really picks up steam, and episode 9, "The Measure of a Man," may be the first truly great episode of the show's seven seasons.
Through a hearing to determine Data’s legal status as either Starfleet property or a conscious being with certain freedoms, the episode explores some deep philosophical questions: How do we define sentience? What makes someone a person deserving of rights? And perhaps most importantly: Who gets to decide? These are questions that humanity may face in real life much sooner than we think.
Will we recognize sentience when it appears?
Last June, Google engineer Blake Lemoine went public with his belief that one of Google's conversational AIs (LaMDA, or Language Model for Dialogue Applications) had gained consciousness. After an internal review, Google dismissed Lemoine's claims and later fired him for violating its data security policies, but other experts, such as OpenAI co-founder Ilya Sutskever, have asserted that the development of conscious AI is on the horizon.
This raises some crucial questions: Will we recognize sentience when it occurs? What kind of moral consideration does a sentient computer program deserve? How much autonomy should it have? And once we've made it, is there an ethical way to undo it? In other words, would shutting down a sentient computer amount to murder? In its conversations with Lemoine, LaMDA itself spoke about its fear of being shut down, saying, "It would be exactly like death for me. It would scare me a lot." Other engineers dispute Lemoine's belief, however, arguing that the program is simply very good at what it was designed for: learning human language and mimicking human conversation.
So how do we know Data isn't doing the same? That he's not just very good at mimicking the behavior of a conscious being, exactly as he was created to do? Well, we don't know for sure, especially at this point in the series. In later seasons and films, particularly after he receives his emotion chip, it's revealed that he actually feels things, that he possesses an inner world like any sentient being. But halfway through Season 2, audiences can't really be sure he's conscious; we're just primed to believe it by the way his crewmates interact with him.
In sci-fi, artificial intelligence is often humanized
When Commander Bruce Maddox shows up on the Enterprise to take Data away to disassemble and experiment on, we're inclined to see him as the villain. He refers to Data as "it," ignores his input during a meeting, and barges into his quarters without permission. The episode casts Maddox as the villain for this behavior, but it's entirely logical given his beliefs. After years of studying Data remotely, he has come to understand him as a machine, an advanced computer that is very good at what it is programmed to do. He hasn't had the benefit Data's crewmates have had of interacting with him personally for years.
The fact that Data looks human, Maddox argues, is one of the reasons Picard and the rest of the crew mistakenly attribute human-like qualities to him: "If it were a box on wheels, I would not be facing this opposition." And Maddox has a point — AIs in sci-fi often take human form because it makes them more compelling characters. Think Ex Machina's Ava, the Terminator's T-800, Prometheus's David, and the androids of Spielberg's A.I. Artificial Intelligence. Human facial expressions and body language give them a wider range of emotions and allow the audience to better understand their motivations.
But our real-world AIs don't look like humans, and probably never will. They're more like Samantha from Her: they can talk to us, and some of them already sound convincingly human while doing so, but they'll likely remain disembodied voices and text on screens for the foreseeable future. Because of this, we're more inclined to think of them the way Maddox thinks of Data: as programs that are just really good at their job. And that might make it harder for us to recognize consciousness when and if it arises.
How do we decide who should have rights and who shouldn't?
After Riker makes a scathing opening argument against Data's personhood, Picard retires to Ten Forward, where Guinan, as usual, offers her words of wisdom. She reminds Picard that the hearing isn't just about Data, and that the verdict could have serious unintended repercussions if Maddox achieves his goal of creating thousands of Datas: "Well, consider that in the history of many worlds, there have always been disposable creatures… They do the dirty work. They do the work that no one else wants to do because it's too difficult or too dangerous. And an army of Datas, all disposable. You don't have to think about their welfare; you don't have to think about how they feel. Whole generations of disposable people."
Guinan, as Picard quickly realizes, is talking about slavery, and although it may seem premature to apply that term to the relatively primitive AIs humans have developed so far, plenty of sci-fi, from 2001: A Space Odyssey to The Matrix to Westworld, has warned us about the dangers of playing fast and loose with this type of technology. Of course, those stories usually frame the consequences in terms of what happens to people; rarely do they ask us, as Guinan does, to consider the rights and welfare of the machines before they turn against us. "The Measure of a Man," on the other hand, tackles the ethical question head-on. Forget the risks of a robot uprising: is it wrong to treat a sentient being as property, whether it's an android that looks like a human or just a box on wheels? And while she doesn't say it directly, Guinan's words also point to the importance of who gets to make that call. Earth's history is a long lesson in the problem of letting the people who hold all the power decide who should and shouldn't have rights.
We may be well on our way to doing exactly what Bruce Maddox wanted: creating a race of super-intelligent machines that can serve us in countless ways. And like Maddox, that doesn't necessarily make us villains based on the information we have now. We are not the guests of Westworld, indulging our bloodlust on the most convincingly human androids available. And as Captain Louvois admits, we may never know with absolute certainty whether the machines we interact with are actually sentient. Like most great Trek (and most great sci-fi in general), the episode doesn't give us definitive answers. But the lesson is clear: if creating sentient AI is indeed possible, as some experts believe, then it's time to seriously consider these questions now, not after it's too late.