Earlier this week Google unveiled “Duplex,” an intelligent calling feature for its updated Google Assistant AI software.
In contrast to simply responding to user queries, Google Assistant with Duplex can go on “offense.”
In other words, it can make calls for you to perform certain tasks, such as setting up meetings and making appointments.
Duplex does this with the most human-sounding voice ever… Complete with “ums” and “ahs” to make the artificial intelligence (AI)-driven helper seem more realistic in phone conversations.
Indeed, Google Assistant seems so real that, during the demos, the humans on the other end of the line did not seem to know that they were talking to a robot.
It seems that, with Duplex, Google passed the Turing Test.
The video above of the Google Duplex demo (demon?) is incredible… The first demonstration is of an AI with a woman’s voice, and it is remarkably, incredibly realistic.
In the first call, the AI calls a hair salon… The woman at the hair salon who answers the phone clearly has no idea she’s talking to a robot.
The AI even uses a robot version of the common teen inflection of high rising terminal… Otherwise known as “upspeak.”
The AI adds pauses and human-like fillers such as, “Mmmm hmmm.”
The next demonstration is of an AI with a male voice. The AI calls a restaurant and a woman who seems to have an Asian accent answers the phone.
The AI understands everything, and navigates through a complex call with unexpected situations. The AI doesn’t make a reservation, but finds out about the restaurant wait times.
Potentially useful data for Google’s local search.
Google Duplex’s AI accuracy is spectacular to the casual observer… I’m sure Google used its most impressive AI calls as examples in this presentation, but I would love to hear Siri do this.
Siri has been publicly leapfrogged in this demo.
Google Duplex Security and Ethical Concerns
In contrast to Apple’s strategic focus on user safety, the performance of Google Duplex has obviously raised ethical concerns.
Indeed, alarm bells started ringing almost immediately with artificial intelligence experts and technology gurus criticizing Google.
Some are saying the company has a broken moral compass because the humans were unaware that they were talking to an AI.
While these demos were completely benign, there are easily imaginable negative scenarios in which an AI pretending to be a human with apparent authority could be used to do significant harm.
The ethical and security concerns of undetectable AI posing as humans are real…
And serious voicejacking and facejacking scenarios are easy to imagine, and therefore entirely real, given the pace of advances in applied artificial intelligence.
What Will Siri Do Next?
There is a bit of an AI arms race happening now… And Google just threw down the gauntlet with Duplex.
Siri has been lagging behind Google Assistant and it would seem to be losing more ground, at least in public demos such as this.
However, Apple recently hired John Giannandrea, Google’s former head of search (no small role) and of machine learning, who led the company’s AI pursuits, including voice recognition software.
Apple is clearly working on improving Siri… And there are other areas where Apple’s technology could be extremely useful to humans as AI becomes more pervasive in everyday life.
Apple Face ID As AI Detector
Apple already leads all of Silicon Valley in its efforts to protect user privacy.
As the best-selling smartphone in the U.S., the iPhone X was the first Apple device released with Face ID, a form of biometric authentication that is very difficult to spoof.
Face ID uses a facial recognition sensor that projects 30,000 infrared dots onto a user’s face while a corresponding software module evaluates the pattern to determine whether the user is who he or she claims to be.
The system uses machine learning to recognize a user’s face over time. A positive or negative result will confirm or deny access to the user’s iPhone.
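Apple has not published Face ID’s internals, but at a high level this kind of biometric matching can be pictured as comparing an enrolled face template against a fresh capture, both reduced to numeric feature vectors, and granting access only when they are similar enough. A minimal illustrative sketch in Python (the vectors, threshold, and similarity metric here are all invented for illustration, not Apple’s actual algorithm):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def face_match(enrolled, capture, threshold=0.95):
    """Accept only if the fresh capture closely matches the enrolled template."""
    return cosine_similarity(enrolled, capture) >= threshold

# Toy templates (real systems use high-dimensional embeddings)
enrolled  = [0.9, 0.1, 0.4, 0.8]
same_user = [0.88, 0.12, 0.41, 0.79]  # near-identical capture
impostor  = [0.1, 0.9, 0.7, 0.2]      # very different face

print(face_match(enrolled, same_user))  # True
print(face_match(enrolled, impostor))   # False
```

The “learns your face over time” behavior would correspond to gradually updating the enrolled template with accepted captures, which a real system would do on-device.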
The facial pattern is not accessible by Apple, in accordance with its privacy policies.
Apple reports that Face ID is 20x more difficult to fool than Touch ID, the fingerprint sensor that has been built into iPhones since the iPhone 5s was released. Touch ID has a 1 in 50,000 chance of unlocking for the wrong person, whereas Face ID’s odds are 1 in 1,000,000.
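The 20x figure follows directly from those two published false-match odds, as a quick sanity check shows:

```python
# Apple's stated false-match odds: 1 in N for a random other person
touch_id_odds = 50_000      # Touch ID: 1 in 50,000
face_id_odds = 1_000_000    # Face ID: 1 in 1,000,000

# How many times less likely is a false unlock with Face ID?
improvement = face_id_odds / touch_id_odds
print(improvement)  # 20.0
```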
Overall, even in its first iteration, Face ID is a system designed to recognize humans. Combined with Touch ID, the two verification systems could serve as a strong source of human verification to identify artificial intelligence robots posing as humans.
Face ID as Defense Against AI Robocalls
Listening to the demo above, it appears Google Duplex passes the Turing Test in its own way.
The Turing Test is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
This could be a problem in any number of situations where the artificial intelligence intends to confuse, mislead, or commit a crime… Basically, anytime it has some evil intent.
I could easily see this worsening the problem with robocalls…
However, perhaps Apple’s Face ID offers a solution.
With Face ID, Apple could allow iPhone users to verify who they are talking to before or during any call with another party.
As Google demonstrated, humans did not know they were talking with an AI.
However, since the demo, there have been calls for Google to require its AI to identify itself as such… Such as by saying “Hi, this is Google calling…” But this requires Google to police itself, which it may not do.
Apple could resolve this by requiring Face ID verification whenever you receive a call.
Indeed, this could eliminate robocalls from AIs or robocalling software altogether.
As long as Apple has access to your profile and can verify your face using the iPhone, the company could provide a form of proof to the parties on your call that you are human.
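The idea above amounts to a challenge-response scheme: the device performs a live biometric check, and a trusted party (Apple, in this scenario) attests the result to the other side of the call. No such system exists today; the sketch below is purely hypothetical, with every name and message shape invented for illustration. Note that only the pass/fail result would be shared, consistent with Apple keeping the facial pattern itself on-device:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HumanAttestation:
    """Hypothetical signed statement that a live biometric check passed."""
    caller_id: str
    verified_human: bool

def biometric_check(caller_id: str, face_id_passed: bool) -> HumanAttestation:
    """Stand-in for an on-device Face ID check. Only the result leaves
    the device, never the facial pattern itself."""
    return HumanAttestation(caller_id=caller_id, verified_human=face_id_passed)

def accept_call(attestation: Optional[HumanAttestation]) -> str:
    """Callee-side policy: flag any call lacking a human attestation."""
    if attestation is None or not attestation.verified_human:
        return "warn: caller not verified as human (possible AI/robocall)"
    return "ring through: caller verified as human"

# A human caller passes the check; an AI caller has no attestation at all.
print(accept_call(biometric_check("+1-555-0100", face_id_passed=True)))
print(accept_call(None))
```

In a real deployment the attestation would need to be cryptographically signed and bound to the call session to prevent replay, which is exactly the kind of infrastructure only a platform owner like Apple could provide.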
Of course, Google Assistant, Siri, Alexa and Cortana are probably the least of our worries when it comes to nefarious potential uses of AI and technology to spoof users into unknowingly talking to an AI.
Facejacking is a real thing…
Facejacking makes it possible for anyone with the technology to create a realistic video of anyone else talking and saying whatever the facejacker wants them to say… All with a few clicks.
But facial recognition and verification software, such as Face ID, could help mitigate the threat of facejacking.
With Apple Face ID, we could require users to verify themselves before anything we hear or see on a phone call or video is taken seriously.