Amazon is building a more “generalized and capable” large language model (LLM) to power Alexa, said Amazon CEO Andy Jassy during the company’s first-quarter earnings call yesterday. An LLM, the kind of model underpinning tools like ChatGPT, is a deep learning model that can recognize, summarize and generate text and other content based on patterns learned from enormous amounts of text data.
Jassy said that although Alexa already runs on an LLM, the tech giant is working on a more capable model. The Amazon executive believes the improved LLM will help Amazon work toward its goal of building “the world’s best personal assistant,” but acknowledged that doing so across many domains will be difficult.
“I think when people often ask us about Alexa, what we often share is that if we were just building a smart speaker, it would be a much smaller investment,” said Jassy during the call. “But we have a vision, which we have conviction about that we want to build the world’s best personal assistant. And to do that, it’s difficult. It’s across a lot of domains and it’s a very broad surface area. However, if you think about the advent of Large Language Models and generative AI, it makes the underlying models that much more effective such that I think it really accelerates the possibility of building that world’s best personal assistant.”
Jassy went on to say that he believes Amazon has a good starting point with Alexa, as it has “a couple of hundred million endpoints” in use across entertainment, shopping and smart homes. He also noted that there is a lot of involvement from third-party ecosystem partners.
“We’ve had a large language model underneath it, but we’re building one that’s much larger and much more generalized and capable,” Jassy said. “And I think that’s going to really rapidly accelerate our vision of becoming the world’s best personal assistant. I think there’s a significant business model underneath it.”
During the call, Jassy highlighted that Amazon has invested in AI and LLMs for years, and that while it has the resources to invest heavily in building LLMs, smaller companies don’t, which is why the company launched Bedrock earlier this month. Bedrock provides a way to build generative AI-powered apps via pretrained models from startups including AI21 Labs, Anthropic and Stability AI. Available in a “limited preview,” Bedrock also offers access to Titan FMs (foundation models), a family of models trained in-house by AWS.
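For a rough sense of what building on Bedrock looks like in practice, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes the bedrock-runtime invoke_model API and the amazon.titan-text-express-v1 model ID, which post-date the limited preview described in this article, so treat the exact names and request format as illustrative rather than what preview customers saw at the time.

```python
# Minimal sketch: invoking a foundation model through Amazon Bedrock with boto3.
# Assumes the bedrock-runtime API that became available after the limited preview
# described above; the model ID, prompt and parameters are illustrative.
import json

import boto3

# Bedrock's runtime client handles model invocation; the region must support Bedrock.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The request body format varies by model family; Titan text models expect "inputText".
body = json.dumps({
    "inputText": "Summarize the benefits of a smart home speaker in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.5},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # example Titan FM; AI21, Anthropic, etc. use other IDs
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a streaming object; read and decode it to get the JSON payload.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```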
Since its launch late last year, ChatGPT has dominated the internet and become increasingly popular. With all the hype surrounding ChatGPT, it’s no surprise that major tech companies are looking to incorporate LLM-based improvements into their own offerings to keep up with the fast-paced AI space. For instance, The Information reported yesterday that Apple is developing LLM-based improvements for Siri. It’s worth noting that Google is likely doing something similar for Assistant.
Amazon wasn’t the only company to bring up AI during its quarterly call with investors, as Alphabet, Microsoft and Meta emphasized their investments in large language models as well. Alphabet CEO Sundar Pichai said that Google would continue to incorporate AI to advance search, while Microsoft CEO Satya Nadella said the company would continue to invest in AI, noting that Microsoft has already seen an increase in usage for Bing after the search engine was updated with a ChatGPT integration. In addition, Meta CEO Mark Zuckerberg said the company will be investing in AI and will introduce new AI-related updates across its apps.
Amazon reported first-quarter earnings that beat expectations, initially sending shares surging, but the stock later reversed course after executives raised concerns about ongoing weakness in cloud growth. Revenue for the quarter increased 9.4% to $127.4 billion, while operating income came in at $4.8 billion.