The excitement surrounding advanced artificial intelligence (AI) appears to be cooling. OpenAI’s latest language model, code-named Orion, is reportedly failing to deliver the kind of leap in capability that GPT-4 made over GPT-3. Reports from Bloomberg and The Information indicate that the model is underperforming expectations, particularly on coding tasks.
OpenAI is not alone in facing these challenges. Google’s Gemini model and Anthropic’s Claude 3.5 Opus are also said to be falling short of their projected performance levels. This pattern of diminishing returns suggests that simply making models bigger and feeding them more data may not be a sustainable strategy in the long term.
Margaret Mitchell, the chief ethics scientist at Hugging Face, argues that a shift in training methods may be necessary to achieve human-like levels of intelligence and versatility in AI. The industry’s heavy reliance on scaling up models and training data is proving costly, and companies are finding it increasingly difficult to source high-quality, human-generated datasets, particularly for language-related tasks.
The costs of building and operating state-of-the-art AI models are rising rapidly. Anthropic CEO Dario Amodei estimates that by 2027, a single frontier model could cost more than $10 billion to develop. With Opus and Gemini facing performance setbacks and limited progress, the AI industry’s rapid pace of advancement appears to be slowing.
As mathematics professor Noah Giansiracusa has noted, the era of swift AI advancement may have been fleeting and unsustainable. The industry now faces the challenge of finding new approaches to AI development that can deliver significant breakthroughs without runaway costs.
In conclusion, the hype around AI may be losing steam as companies confront the limits of current scaling strategies. A new direction is evidently needed to drive AI innovation forward in a sustainable and cost-effective manner.