AInsights: Everything You Need to Know About Google Gemini 1.5; Surpasses Anthropic and OpenAI in Performance, for Now


AInsights: Your executive-level insights on the latest in generative AI
Google is making up for lost time in the AI race by following the age-old Silicon Valley mantra: "move fast and break things."
Google recently released Gemini 1.5, and it's next level! This release dethrones Anthropic's brief reign as the leading foundation model. But by the time you read this, OpenAI will also have announced improvements to ChatGPT. It's now a game of perpetual leapfrogging, which benefits us as users but makes it difficult to keep up! Note: I'll follow up with a post about ChatGPT/DALL-E updates.
Here’s why it matters…
1 million tokens: It’s funny. I picture Dr. Evil raising his pinky to his mouth as he says, “1 million tokens.” Gemini 1.5 boasts a dramatically increased context window, with the ability to process up to 1 million tokens. Think of tokens as inputs, i.e., words or parts of words, in a single context window. This is a massive increase over previous models like Gemini 1.0 (32k tokens) and GPT-4 (128k tokens). It also surpasses Anthropic’s record 200,000-token context window.
A 1 million token context window allows Gemini 1.5 to understand and process huge amounts of data. This unlocks multimodal super-prompting and higher caliber outputs. A 1 million token context window can support extremely long books, documents, scripts, codebases, video/audio files, specifically:
1 hour of video 
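To put these context windows in perspective, here's a rough back-of-envelope sketch. It uses the common rule of thumb of roughly 4 characters (about 0.75 words) per token; real tokenizers vary by model, and the model labels in the table simply mirror the figures cited above, so treat the output as order-of-magnitude estimates, not exact capacities.

```python
# Back-of-envelope comparison of the context windows mentioned above,
# using a common heuristic: ~0.75 English words per token.
# Actual tokenization differs by model and by text.

WORDS_PER_TOKEN = 0.75  # rough rule of thumb, not a model spec

context_windows = {
    "Gemini 1.0":          32_000,
    "GPT-4":              128_000,
    "Anthropic (Claude)": 200_000,
    "Gemini 1.5":       1_000_000,
}

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

for model, tokens in context_windows.items():
    print(f"{model:>18}: {tokens:>9,} tokens ~ {approx_words(tokens):,} words")
```

By this estimate, a 1 million token window holds on the order of 750,000 words, enough for several very long books in a single prompt, which is what makes the multimodal "super-prompting" described above possible.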