
Even as OpenAI clings to its assertion that the only path to AGI runs through massive financial and energy expenditures, independent researchers are leveraging open-source technologies to match the performance of its most powerful models at a fraction of the price.
Last Friday, a joint team from Stanford University and the University of Washington announced that it had trained a math- and coding-focused large language model that performs as well as OpenAI's o1 and DeepSeek's R1 reasoning models, and that it cost just $50 in cloud compute credits to build. The team reportedly started with an off-the-shelf base model, then distilled Google's Gemini 2.0 Flash Thinking Experimental model into it. Distillation involves extracting the knowledge a larger AI model uses to complete a specific task and transferring it to a smaller one, typically by fine-tuning the smaller model on the larger model's outputs.
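For readers curious what that looks like in practice, here is a minimal, illustrative sketch of response-based distillation using Hugging Face's transformers and datasets libraries: a larger "teacher" model answers a handful of task prompts, and a smaller "student" model is fine-tuned on those answers. The model names and prompts below are stand-ins chosen for the example, not the Stanford and UW team's actual recipe (their teacher, Gemini 2.0 Flash Thinking, is only reachable through Google's API, not loadable locally).

```python
# Illustrative distillation sketch (not the teams' exact recipe): a larger
# "teacher" model answers task prompts, and a smaller "student" is fine-tuned
# on those answers. Model names and prompts below are stand-ins.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

TEACHER = "Qwen/Qwen2.5-7B-Instruct"    # stand-in for a stronger teacher model
STUDENT = "Qwen/Qwen2.5-0.5B-Instruct"  # stand-in for an off-the-shelf base model

prompts = [
    "Solve step by step: what is 12 * 17?",
    "Write a Python function that reverses a list without using reversed().",
]

# Step 1: collect the teacher's responses -- the "relevant information"
# for the target task.
teacher_tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER, device_map="auto")
traces = []
for p in prompts:
    inputs = teacher_tok(p, return_tensors="pt").to(teacher.device)
    out = teacher.generate(**inputs, max_new_tokens=256)
    answer = teacher_tok.decode(out[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
    traces.append({"text": p + "\n" + answer})

# Step 2: fine-tune the student on the teacher's traces with plain
# next-token prediction, transferring the behavior to the smaller model.
student_tok = AutoTokenizer.from_pretrained(STUDENT)
student = AutoModelForCausalLM.from_pretrained(STUDENT)

def tokenize(row):
    enc = student_tok(row["text"], truncation=True, max_length=1024)
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_ds = Dataset.from_list(traces).map(tokenize, remove_columns=["text"])
Trainer(
    model=student,
    args=TrainingArguments(output_dir="distilled-student",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           report_to="none"),
    train_dataset=train_ds,
).train()
```

The cheapness comes from the fact that the expensive reasoning has already been paid for by the teacher; the student only needs a brief supervised fine-tuning pass on a relatively small set of traces.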
What’s more, on Tuesday, researchers from Hugging Face released a competitor to OpenAI’s Deep Research and Google Gemini’s identically named Deep Research tools, dubbed Open Deep Research, which they developed in just 24 hours. “While powerful LLMs are now freely available in open-source, OpenAI didn’t disclose much about the agentic framework underlying Deep Research,” Hugging Face wrote in its announcement post. “So we decided to embark on a 24-hour mission to reproduce their results and open-source the needed framework along the way!” The system reportedly costs an estimated $20 in cloud compute credits and takes less than 30 minutes to train.
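Open Deep Research is built on Hugging Face's open-source smolagents library. As an illustration of what such an agent looks like at its simplest, the sketch below follows the library's quickstart pattern: a code-writing agent paired with a web-search tool and a hosted model. The tool and model choices here are illustrative defaults from smolagents (roughly v1.x) rather than the full Open Deep Research configuration, which layers on additional browsing and file-inspection tools.

```python
# Illustrative smolagents setup: a code-writing agent with a web-search tool.
# Assumed install: pip install smolagents duckduckgo-search
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# HfApiModel() calls a hosted model on the Hugging Face Inference API by default;
# pass a model_id to choose a specific one.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())

# The agent plans in Python, calls the search tool as needed, and returns an answer.
report = agent.run(
    "Summarize the most recent open-source alternatives to OpenAI's Deep Research."
)
print(report)
```

The "training" here is light by design: the heavy lifting is done by an existing LLM, and the framework's job is orchestration, which is why a working reproduction could be assembled in a day.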
Hugging Face’s model subsequently notched 55% accuracy on the General AI Assistants (GAIA) benchmark, which is used to test the capabilities of agentic AI systems. By comparison, OpenAI’s Deep Research scored between 67% and 73% accuracy, depending on the response methodology. Granted, the 24-hour model doesn’t perform quite as well as OpenAI’s offering, but it also didn’t take billions of dollars and the energy-generation capacity of a mid-sized European nation to train.
These efforts follow news from January that a team out of the University of California, Berkeley’s Sky Computing Lab managed to train its Sky-T1 reasoning model for around $450 in cloud compute credits. The team’s Sky-T1-32B-Preview model proved the equal of OpenAI’s early o1-preview release. As more of these open-source challengers to OpenAI’s industry dominance emerge, their mere existence calls into question whether the company’s plan to spend half a trillion dollars building AI data centers and energy production facilities is really the answer.