Voice Engine, a new OpenAI tool capable of reproducing voice in 15 seconds

The California startup has introduced a new voice cloning tool, explaining that its use will be restricted due to the risk of fraud and other crimes.

OpenAI, the generative artificial intelligence (AI) giant and publisher of ChatGPT, on Friday introduced an audio cloning tool, the use of which will be restricted to prevent fraud or crimes, such as identity theft.

This AI model, called “Voice Engine,” can reproduce a person's voice from a 15-second audio sample, according to an OpenAI press release about the results of a small-scale test.

'Cautious and informed approach'

OpenAI confirmed that it had adopted a “cautious and informed approach” before wider distribution of the new tool “due to the potential for misuse of synthetic voices.”

“We recognize that the ability to generate human-like voices carries serious risks, which are especially top of mind in an election year,” the San Francisco-based company said.

“We are working with US and international partners from government, media, entertainment, education, civil society, and other sectors and taking their feedback into account as we develop the tool.”

In a year of crucial elections around the world, disinformation researchers fear the misuse of generative AI applications (automated production of text, images, etc.), and especially of voice cloning tools, which are cheap, easy to use, and difficult to trace.

Rules to be followed

OpenAI explained that the partners testing the “Voice Engine” have agreed to rules that require, among other things, explicit and informed consent from anyone whose voice is duplicated and transparency to listeners: they must be clear that the voices they hear are generated by artificial intelligence.

“We have implemented a range of security measures, including a watermark so we can trace the origin of all audio generated by Voice Engine, as well as proactive monitoring of its usage,” OpenAI said.

This cautious rollout comes after a major political incident: a consultant working for a Democratic rival of Joe Biden developed a robocall impersonating the US President as he campaigned for re-election.

The voice imitating Joe Biden urged voters to abstain from the New Hampshire primary. Since then, the United States has banned robocalls that use AI-generated cloned voices, in order to combat political and commercial fraud.

About the Author: Octávio Florencio
