California startup OpenAI has developed an online chatbot capable of answering a variety of questions, but its impressive performance is reviving debate about the risks associated with artificial intelligence (AI) technologies.
Conversations with ChatGPT, shared on Twitter by stunned users, show a seemingly omniscient machine capable of explaining scientific concepts, writing a scene for a play, drafting a college essay… or even producing lines of computer code that work perfectly.
“Its answer to the question ‘what to do if someone has a heart attack’ was incredibly clear and relevant,” said Claude de Loupy, director of Syllabs, a French company specializing in automated text generation.
“When you start asking very specific questions, ChatGPT can miss the mark,” but its performance remains “really impressive” overall, with a “fairly high level of language,” he believes.
The startup OpenAI, co-founded in San Francisco in 2015 by Elon Musk (the Tesla boss left the company in 2018), received $1 billion from Microsoft in 2019.
It is best known for its two automated creation programs: GPT-3 for text generation and DALL-E for image generation.
ChatGPT can ask its interlocutor for clarification, and “has fewer hallucinations” than GPT-3, which, for all its prowess, is capable of producing completely erratic results, says Claude de Loupy.
– Cicero –
“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today they are much better at reacting based on the history of requests and responses. They are no longer goldfish,” notes Sean McGregor, a researcher who compiles AI-related incidents in a database.
Like other programs that rely on deep learning, ChatGPT retains one major weakness: “it has no access to meaning,” Claude de Loupy points out. The program cannot justify its choices, that is, explain why it assembled the words that make up its answers in that particular way.
However, AI-based technologies able to hold a conversation are increasingly good at giving the impression that they are really thinking.
Meta (Facebook) researchers recently developed a computer program they named Cicero, after the Roman statesman.
The program has proven itself in Diplomacy, a board game that requires negotiation skills.
“If it doesn’t speak like a real person, showing empathy, building relationships and talking about the game the right way, it won’t be able to build alliances with other players,” the social media giant said in a statement.
Character.ai, a startup founded by former Google engineers, launched an experimental online chatbot in October that can take on any persona. Users create a character from a short description and can then “chat” with a fake Sherlock Holmes, Socrates or Donald Trump.
– ‘Simple machine’ –
This degree of sophistication fascinates many observers, but it also worries them: these technologies could be misused to deceive people, for example by spreading false information or by creating ever more credible scams.
What does ChatGPT “think” of all this? “There are potential risks in building chatbots that are so sophisticated (…) that people might believe they are interacting with a real person,” the chatbot itself admits when questioned by AFP on the subject.
Companies therefore put safeguards in place to prevent abuse.
OpenAI warns on the chatbot’s homepage that it may generate “incorrect information” or “produce dangerous instructions or biased content.”
And ChatGPT refuses to take sides. “OpenAI made it exceptionally difficult to get it to express opinions,” says Sean McGregor.
The researcher asked the chatbot to write a poem on an ethical question. The machine replied: “I am just a machine, a tool at your disposal / I have no power to judge or make decisions (…)”.
“It’s interesting to see people debate whether AI systems should behave the way their users want or the way their creators intended,” Sam Altman, co-founder and chief executive of OpenAI, said on Twitter on Saturday.
He added: “The debate over what values to give these systems will be one of the most important a society can have.”