By Taylor Armerding
May 17, 2017
Lyrebird itself, in a brief ethics statement on its website, acknowledges that its product “could potentially have dangerous consequences such as misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.”
The statement adds that “by releasing our technology publicly and making it available to everyone, we want to ensure that there will be no such risks.”
The technology is getting mixed responses from the security community. Bruce Schneier, CTO of IBM Resilient, author and encryption guru, told Scientific American that fake audio clips have become “the new reality.”
On his own blog, Schneier wrote: “Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows. I don't think we're ready for this.”
But that got a bit of pushback on his comment thread. One reader argued, “As a species ‘we are never ready’ for what comes along, we learn to adapt through experience, it's probably our strongest survival skill.”
Another commenter, noting that this concern is not new, cited a report from 2003 about a professor at Oregon Health & Science University’s OGI School of Science & Engineering questioning whether audiotapes periodically released by the late terrorist mastermind Osama bin Laden were real.
“Because voice transformation technologies are increasingly available, it is becoming harder to detect whether a voice has been faked,” said Jan van Santen, a mathematical psychologist at the university.
But, of course, the audio quality of those recordings was notoriously poor. The quality of voice imitation now, coming from Adobe’s VoCo, WaveNet from Alphabet (Google’s parent company) and Lyrebird, is orders of magnitude better, and is expected to improve further in the next year or two.
Still, authentication experts say voice can be a credible factor in confirming identity, as long as it is not the only factor.
“If the sole determination of identity is voice, we are in trouble,” said James Stickland, CEO of Veridium.
But, if it is one element of what he called “an ensemble” that includes possession (something you have, like a token) and knowledge (password), voice can still “play an integral role” in authentication.
If, as Schneier wrote, we are not ready for voice spoofing technology, that is because “most people still segment possession, knowledge-based and biometric authentication,” Stickland said. “The future of authentication combines all of these and more.”
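The “ensemble” approach Stickland describes can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and parameter names are invented for this example, not taken from any real product): access is granted only when at least two independent factors pass, so a spoofed voice by itself is not enough.

```python
# Hypothetical sketch of multi-factor authentication as an "ensemble":
# possession (something you have), knowledge (something you know),
# and biometrics (something you are). No single factor is sufficient.

def authenticate(has_valid_token: bool,
                 password_ok: bool,
                 voice_match_score: float,
                 voice_threshold: float = 0.9) -> bool:
    """Grant access only when at least two independent factors pass."""
    factors_passed = sum([
        has_valid_token,                       # possession: hardware token
        password_ok,                           # knowledge: password check
        voice_match_score >= voice_threshold,  # biometric: voice match
    ])
    return factors_passed >= 2

# A convincing voice spoof alone (high match score, but no token and
# no password) is rejected:
print(authenticate(False, False, 0.99))  # False

# Token plus password succeeds even when the voice sample is poor:
print(authenticate(True, True, 0.2))     # True
```

The point of the sketch is the threshold logic, not the factor implementations: because the voice score is only one vote among several, an attacker who clones a voice still needs to compromise a second, independent channel.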
Brett McDowell, executive director of the FIDO (Fast IDentity Online) Alliance, agrees that “voice recognition is vulnerable to a presentation attack; where the adversary records a sample of the targeted user's physical characteristics and uses that to produce an imposter copy or ‘spoof’ of that user's biometrics.”