Imagine putting your name into ChatGPT to see what it knows about you, only for it to confidently — yet wrongly — claim that you had been jailed for 21 years for murdering members of your family.
Well, that’s exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI’s widely used AI-powered chatbot.
Not surprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined for its distressing claim, the BBC reported this week.
In its response to Holmen’s query about himself, the chatbot said he had “gained attention due to a tragic event.”
It went on: “He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son.”
The chatbot said the case “shocked the local community and the nation, and it was widely covered in the media due to its tragic nature.”
But nothing of the sort happened.
Understandably upset by the incident, Holmen told the BBC: “Some think that there is no smoke without fire — the fact that someone could read this output and believe it is true is what scares me the most.”
Digital rights group Noyb has filed the complaint on Holmen’s behalf, stating that ChatGPT’s response is defamatory and contravenes European data protection rules regarding accuracy of personal data. In its complaint, Noyb said that Holmen “has never been accused nor convicted of any crime and is a conscientious citizen.”
ChatGPT uses a disclaimer saying that the chatbot “can make mistakes,” and so users should “check important info.” But Noyb lawyer Joakim Söderberg said: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
While it’s not uncommon for AI chatbots to spit out erroneous information — such mistakes are known as “hallucinations” — the egregiousness of this particular error is shocking.
Another hallucination that hit the headlines last year involved Google’s Gemini AI tool, which suggested using glue to stick cheese to pizza. It also claimed that geologists had recommended that humans eat one rock per day.
The BBC points out that ChatGPT has updated its model since Holmen’s search last August, which means that it now trawls through recent news articles when creating its response. But that doesn’t mean that ChatGPT is now creating error-free answers.
The story highlights the need to check responses generated by AI chatbots, and not to trust their answers blindly. It also raises questions about the safety of text-based generative AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022.
Digital Trends has contacted OpenAI for a response to Holmen’s unfortunate experience and we will update this story when we hear back.