Last month, a recorded message urging voters to "save your vote for the November election" sounded like the President's voice. In fact, it was a convincing AI clone, underscoring the growing sophistication of audio fakery.

AI-Generated Voice Clones Mimic Candidates, Prompting Concerns About Growing Threat of Audio-Fakery in Elections

Experimental Call Reveals Potential for Voice-Cloning Scams and Election Manipulation

In an experimental scenario, Rafe Pilling of Secureworks set up a call during which "Chris" expressed concern about voice-cloning techniques and their potential malicious use. Mr Pilling, seemingly understanding the concern, suggested arranging an interview.

However, the person on the other end was not Mr Pilling but a demonstration by Secureworks, showcasing an AI system that could make calls, respond to reactions, and imitate voices. While the demonstration was not flawless, it highlighted the capabilities of current AI systems, particularly in conversation.

The technology used a freely available commercial platform capable of making millions of phone calls per day, potentially revolutionizing call centers and surveys. Secureworks emphasized the security risks associated with the rapid deployment of conversational AIs on a large scale, foreseeing voice cloning as an alarming development in the realm of AI-powered scams.

As AI technologies advance, the efficiency and scalability they offer become increasingly impactful, potentially transforming existing operations, including fraudulent activities like phone scams.

Furthermore, concerns are growing over the potential impact of AI-generated voice clones on upcoming major elections in the UK, US, and India, with fears that these sophisticated fake voices could manipulate democratic outcomes. Senior politicians in various countries have already been targeted by audio deepfakes.

As audio deepfakes become more common and more challenging to verify than visual deepfakes, calls are increasing for social media firms to strengthen efforts against disinformation and for developers to consider potential misuse before launching voice cloning tools.


Voice Cloning Power and Safeguards in AI Evolution

Voice cloning is an emerging artificial intelligence technology with vast potential, enabling precise replication of a person's voice. The positive impact spans various domains, particularly in entertainment where overbooked voice-over artists can send voice samples for cloning, ensuring they are still compensated for the job.

Additionally, it facilitates language translation for actors, eliminating the need for foreign-language counterparts in film production. The technology's most significant potential benefit lies in the medical field, offering a lifeline to individuals with speech disabilities by creating artificial voices.

For patients facing larynx removal due to conditions like throat cancer, pre-surgery voice recordings can be cloned, preserving a familiar and personalized vocal identity.

While voice cloning presents exciting possibilities, precautions are crucial due to ethical, legal, and scam-related concerns. Safeguards such as the following should be implemented:

  1. Opt In/Opt Out Procedures: Adopt opt-in/opt-out consent procedures, similar to facial recognition, informing individuals about voice data collection, use, storage, and alternatives.
  2. Multi-Factor Authentication: Integrate multi-factor authentication for additional validation, sending codes to users' devices after the primary password or biometric entry.
  3. Liveness Detection: Implement liveness detection, a method common in facial recognition, to distinguish live voices from playback or spoof attempts, enhancing security in voice recognition authentication.
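The multi-factor authentication step above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the function names (`generate_otp`, `authenticate`) and the callback-based code delivery are assumptions made for the example. After the primary factor (a password or voiceprint) succeeds, a one-time code is sent to the user's device and must be typed back before access is granted.

```python
import hmac
import secrets

def generate_otp(length: int = 6) -> str:
    """Generate a random numeric one-time passcode."""
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def verify_otp(expected: str, submitted: str) -> bool:
    """Compare codes in constant time to avoid timing side channels."""
    return hmac.compare_digest(expected, submitted)

def authenticate(primary_ok: bool, send_code, prompt_code) -> bool:
    """Two-step login: primary factor first, then a device-delivered code.

    send_code(code)  -- delivers the code to the user's device (e.g. SMS)
    prompt_code()    -- asks the user to type the code back
    """
    if not primary_ok:
        return False  # primary password/biometric check already failed
    code = generate_otp()
    send_code(code)
    return verify_otp(code, prompt_code())

# Demo with stand-in delivery/prompt callbacks simulating one user:
delivered = {}
ok = authenticate(
    primary_ok=True,
    send_code=lambda c: delivered.update(code=c),
    prompt_code=lambda: delivered["code"],  # user enters the received code
)
print(ok)  # True: the submitted code matches the one delivered
```

Even if a cloned voice defeats the primary voiceprint check, the attacker still needs access to the victim's device to obtain the one-time code, which is why this layering is valuable against voice-cloning scams.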


