Amazon is reportedly investing heavily in a large language model (LLM) codenamed “Olympus” with 2 trillion parameters, which would make it one of the largest models ever trained. The initiative aims to compete with leading models from OpenAI and Alphabet. The project is led by Rohit Prasad, former head of Alexa, and consolidates AI efforts across the company, uniting researchers from various divisions.
Amazon has previously trained smaller models, such as Titan, and has partnered with AI model startups including Anthropic and AI21 Labs, offering their models through Amazon Web Services (AWS). The company sees in-house models as a way to strengthen its AWS offerings for enterprise clients seeking top-performing models. There is no set release timeline, but Amazon is committed to advancing LLMs that power AI tools capable of generating human-like responses, despite the considerable computing power and cost involved.
In an earnings call, Amazon executives said they plan to increase investment in LLMs and generative AI while reducing investment in retail fulfillment and transportation. Amazon has declined to comment on the project, and the details remain unconfirmed.