An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.
Former OpenAI policy researcher Miles Brundage criticized the company’s recent safety and alignment document published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps, rather than making “one giant leap,” saying that the process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.
Among the many criticisms leveled at AI technology like ChatGPT, experts worry that chatbots will give inaccurate information about health and safety (like the infamous incident in which Google’s AI search feature instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, which can contain sensitive personal data.
The release of the OpenAI document this week appears to be a response to these concerns. It implies that the development of the earlier GPT-2 model was “discontinuous,” that the model was not initially released due to “concerns about malicious applications,” and that the company will now move toward a principle of iterative development instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.
“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”
Brundage also criticized the company’s apparent approach to risk based on this document, writing that, “It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems.”
This comes at a time when OpenAI faces increasing scrutiny, including accusations that it prioritizes “shiny products” over safety.