Soapbox

The Turing Test Is Dead

By Ravi Komatireddy, M.D., MCTI

The world of medicine has long been guided by the central maxim of the Hippocratic Oath, “Do no harm.” When we look at the growth of AI in health care, I say let’s also include a 21st-century addendum: “Do not deceive.”

In May 2018, Google showcased a demo of Google Duplex, an AI-based technology that engages in real-world conversations to carry out simple tasks. The demo brought to mind some of Steve Jobs’ famous product keynotes: Google Duplex booked a hair appointment over the phone with a stylist who had no idea she was speaking with a machine.

The demo was played on stage at Google I/O for an audience of tech bloggers. The technology deftly navigated the usual questions and nuances of human language, even mimicking very human pauses, “umms,” and other filler words.

But there’s something not right about watching someone be fooled. In this instance, no contract had been signed. No permission was given. With this type of AI voice assistant becoming more pervasive, all of us have at one point or another over the past few years felt compelled to sheepishly ask a customer service representative or chat interface, “Are you a human or a bot?”

This demonstration wasn’t just a test of a new technology; it was the Turing Test. Alan Turing was a true mathematical genius and a pioneer in the fields of computation and information theory. He also helped crack the Enigma code during WWII, allowing the Allies to pour baking soda on the grease fire that was the Nazi vision for the world.

The Turing Test, as Turing’s 1950 “imitation game” came to be known, rests on a simple idea: if a machine can interact with a human being without being detected as a machine, it has demonstrated artificial intelligence. The idea became encoded in our culture through countless pop culture touchpoints; there is hardly a science fiction book, TV show, or movie that doesn’t feature the stereotypical trope of a human-like AI machine that wants to become human.

The test has been modified over the years, but at its core it still requires that the human interacting with the machine cannot distinguish it from another human. This matters because it offers a way to measure the progress of machine learning and intelligence: if a machine is doing what it is programmed to do and can pass as a human, it should be able to function as one, or at least manage the intricacies of human communication.

Time for a New Test

The ethical implications of AI taking over tasks previously carried out by humans have been debated for years, a debate that has only accelerated with the recent introduction of technologies such as ChatGPT.

As AI technology has become more sophisticated, it’s easy to see how machines could replace human beings in many different roles. And the potential for AI to displace humans in the workforce has sparked concern among those who worry that advances in machine learning will lead to widespread unemployment. It’s hard to blame them at a time when trust in corporate leadership has fallen to all-time lows, and layoffs in the tech sector remind us of the brutal bottom-line considerations today’s economy imposes on even the most talented professionals.

While it is true that AI has the potential to make certain jobs obsolete, it’s also important to consider the ways in which it can augment human capabilities, rather than replace them. In the case of Google Duplex, for example, it’s easy to see how the technology could be used to augment the work of customer service representatives.

Imagine being able to delegate the more mundane tasks of booking appointments or answering frequently asked questions to a machine, freeing up human customer service reps to focus on more complex and nuanced interactions with customers.

However, there are also potential downsides to consider. For one thing, there’s the issue of transparency. As the Google Duplex demo showed, a machine can convincingly imitate human speech patterns and responses, leading to situations in which customers are unaware that they are interacting with a machine. Duplex deceived a human being into thinking it was another human being, and it did so by manipulating the nuances and inflections of human language in a way that felt uncomfortable and wrong. This is not the kind of interaction we should be striving for with our AI technology.

Instead, we should focus on creating machines that interact with us in a way that is open and transparent. This could involve developing more sophisticated chat interfaces that clearly disclose that they are machines, or creating AI assistants that engage with us in a way that is more natural and intuitive.

Ultimately, the goal of AI should not be to create machines that are indistinguishable from humans, but rather to create machines that are able to work alongside us and help us to achieve our goals in a way that is efficient, effective, and transparent. By focusing on these values, we can ensure that our AI technology is helping us to build a better future, rather than simply creating new ethical dilemmas for us to navigate.

Machines are tools, and human evolution is intimately linked to the use of tools. It’s also natural for humans to personify tools. Today, we are creating, using, and bonding with conversational machines in unique and interesting ways, and there are opportunities to use AI, particularly in health care, to improve our health and well-being. But successful implementation begins with centering ethics at the heart of the user experience. The world of medicine has long been guided by the central maxim of the Hippocratic Oath, “Do no harm.” I say, let’s also include a 21st-century addendum: “Do not deceive.”
