Hey friends,
Today’s newsletter is packed. Between Amazon clawing back relevance, Stanford and NVIDIA building minute-long AI cartoons, and Google turning Gemini into a legit research assistant, it feels like we’re watching the next AI wave take shape — one that’s less about novelty, more about actual utility.
Let’s jump in.
After years of choppy, forgettable AI video clips, NVIDIA and Stanford just raised the bar with Test-Time Training — a new technique that stitches together consistent, minute-long animations.
Instead of a fixed hidden state, the model uses small neural networks as a kind of working memory, updating them on the fly as it generates, and it shows. In demos of Tom & Jerry-style shorts, scenes flow logically and characters stay recognizable. It's still early, but this is a real step toward actual AI filmmaking.
If you thought Amazon was sitting out the GenAI race, think again.
They just dropped:
Both models are live in Bedrock, and they’re cheap — Sonic clocks in at around 80% less than OpenAI’s equivalents. Combine that with their agentic browser tool (Nova Act) and Alexa+ rollout, and Amazon suddenly feels like a very real player again.
Google’s Gemini Advanced users now get a built-in AI research assistant with “Deep Research,” a new mode that synthesizes sources, adds audio summaries, and integrates cleanly with NotebookLM and AI Mode. It’s positioned less as a chatbot and more as an end-to-end knowledge worker.
Other new upgrades:
It’s one of the first AI tools that actually feels like it could replace some human workflows instead of just summarizing them.
Mira Murati’s startup, Thinking Machines, is now staffed nearly half by OpenAI alumni. New additions include:
No one knows exactly what they’re building yet — but with this roster, it’s either going to be amazing… or a very expensive ghost ship.
If this helped you cut through the noise, forward it to someone who’s been asking “What’s real in AI right now?”
Or hit reply and tell me: what’s one tool or idea from today’s newsletter you’re actually going to try?
Talk soon,
David