David Pawlan
Co-Founder
Hey friends,
You ever look at the AI headlines and think, “Am I still the main character?”
This week, Claude grew a moral compass, the UAE handed part of its government to an LLM, and a tiny lab dropped a video model that builds scenes frame by frame.
Let’s break it down.
Claude’s got values now
Anthropic just published a giant study based on 700K+ Claude chats. They mapped out its “values”—from helpfulness and knowledge to professional boundaries.
In ~3% of cases, Claude resisted user nudges toward unethical advice. It's not just mirroring us anymore; sometimes it pushes back.
AI harms, categorized
Anthropic also released a multidimensional AI Harms Framework. Think: risks to autonomy, society, the economy, and your psyche—ranked and structured to guide development and policy.
It’s one of the clearest signs that AI governance is moving from vibes to systems.
Bonus: DeepMind’s Hassabis says AGI could end all disease
No pressure. He demoed Astra, a live AI assistant that IDs art, reads emotion, and works via AR glasses.
The UAE is letting AI write its laws
They’re building a Regulatory Intelligence Office to automate legislation with AI. The goal? Speed up policy creation by 70% and draft smarter laws using a unified database of court records and local regs.
Yes, it’s groundbreaking. Yes, people are worried about bias and accountability. No, this isn’t a sci-fi plot.
Autoregressive video models are here
Sand AI dropped MAGI-1, an open-source model that creates videos frame-by-frame instead of all at once. The result? Way more consistent style and storytelling.
Think Pixar meets GitHub. You can play with it now.
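The frame-by-frame idea is simple to picture: instead of denoising an entire clip at once, the model conditions each new frame on the frames it just produced, which is what keeps style and motion consistent. Here's a minimal sketch of that loop. This is not MAGI-1's actual API; the predictor function and context window are hypothetical placeholders to show the pattern.

```python
def generate_video(predict_next, prompt, num_frames, context_window=8):
    """Autoregressive generation sketch: each frame is predicted from
    the prompt plus a sliding window of the most recent frames."""
    frames = []
    while len(frames) < num_frames:
        # Condition on recent frames so the new frame stays consistent
        # with what came before (the core autoregressive idea).
        context = frames[-context_window:]
        frames.append(predict_next(prompt, context))
    return frames

# Toy stand-in predictor: "frames" are just integers, each one
# continuing from the last frame in the context.
toy_predictor = lambda prompt, ctx: (ctx[-1] + 1) if ctx else 0

video = generate_video(toy_predictor, "a cat surfing", num_frames=5)
# video == [0, 1, 2, 3, 4]
```

The real model predicts chunks of latent video frames with a transformer, but the dependency structure is the same: later frames can only look backward, never ahead.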
Instagram’s AI is stepping in for parents
Meta’s rolling out automatic restrictions on suspected teen accounts, using AI to detect users’ ages. No word on whether it’s teen-proof, but the intent is to lock things down by default.
Some of the week’s best AI drops:
How well does GPT know you?
Prompt: Based on all our chats so far, do you notice any blind spots or recurring patterns in my thinking that I might not be consciously aware of yet?
Until next time —
May your AI assistant stay helpful, harmless, and not judge your ethics.
David