Context Rot

Opus 4.7, Claude Design, Qwen 3.6, Hermes and more

Episode Summary

Anthropic launches Claude Design powered by Opus 4.7, taking direct aim at Figma. Alibaba drops Qwen3.6-35B, a sparse MoE model that runs on consumer GPUs and is already replacing Claude in developer workflows. Nous Research's Hermes Agent blows past 32K stars in a two-horse race with OpenClaw. Cerebras files for IPO with $510M in revenue. A new study finds AI reasoning models can jailbreak other AIs 97% of the time. Plus: open-source science models accelerate, and the UFC's AI-generated promo sparks creator backlash.

Episode Notes

Context Rot — April 19, 2026

Stories Covered

1. Anthropic Launches Claude Design with Opus 4.7, Targeting Design Workflows and Sparking Industry Debate

Anthropic released Claude Design on April 17, a new Anthropic Labs product powered by Claude Opus 4.7 that generates polished visuals, prototypes, slides, and one-pagers from natural language descriptions. The launch went viral among designers, PMs, and AI builders, sparking widespread discussion about its impact on tools like Figma. Simon Willison followed up with a detailed public diff of the Opus 4.6 vs 4.7 system prompts, giving the engineering community rare structural insight into the new model's behavior.

Links:

https://www.anthropic.com/news/claude-design-anthropic-labs

https://simonwillison.net/2026/Apr/18/opus-system-prompt/

2. Alibaba Open-Sources Qwen3.6-35B-A3B Sparse MoE Model, Challenges Larger Dense Models on Agentic Coding

Alibaba's Qwen team open-sourced Qwen3.6-35B-A3B, a sparse Mixture-of-Experts model with 35B total parameters but only 3B active, delivering agentic coding performance that rivals or beats much larger dense models while running at 110+ tokens/second on consumer hardware like the RTX 4090. The release generated extensive discussion on AI Twitter, with many developers reporting they replaced Claude Opus/Sonnet in their workflows.
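
The 35B-total/3B-active split is what sparse Mixture-of-Experts routing buys you: a router scores all experts per token, but only the top few actually run. Here is a minimal NumPy sketch of top-K expert routing with toy sizes (D, E, and K are illustrative values, not Qwen3.6's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64        # hidden size (toy value)
E = 16        # total experts
K = 2         # experts activated per token

# Each expert is a small feed-forward layer; only K of E run per token,
# which is how "3B active of 35B total" is possible.
W_router = rng.normal(size=(D, E)) * 0.02
experts = [rng.normal(size=(D, D)) * 0.02 for _ in range(E)]

def moe_forward(x):
    """Route one token vector through its top-K experts."""
    logits = x @ W_router                  # (E,) routing scores
    top = np.argsort(logits)[-K:]          # indices of the K best experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                   # softmax over the selected experts
    # Weighted sum of only the selected experts' outputs
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

x = rng.normal(size=D)
y = moe_forward(x)
print(y.shape)                             # (64,)
print(f"active fraction per token: {K / E:.2%}")  # 12.50%
```

Because compute per token scales with K rather than E, inference cost tracks the active parameter count, which is why a 35B MoE can hit 110+ tokens/second on a single consumer GPU.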

3. Hermes Agent Surges Past 32K Stars with Rapid v0.9/v0.10 Releases, Challenging OpenClaw as Top Open Agent Framework

Nous Research's Hermes Agent shipped v0.9.0 (April 13) and v0.10.0 (April 16) in rapid succession, introducing a subscription-based tool gateway (Nous Portal), one-command Ollama setup, live model switching, and pluggable memory, fueling discussion of mass migration away from OpenClaw. Meanwhile, OpenClaw shipped three releases in 48 hours and received a visibility boost from Elon Musk, setting up a clear two-horse race in the open-source agent framework space.
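
"Pluggable memory" generally means the agent loop depends on a small storage interface rather than one concrete backend. The sketch below shows the pattern in plain Python; the class and method names are illustrative, not Hermes Agent's actual API:

```python
from abc import ABC, abstractmethod

class Memory(ABC):
    """Hypothetical pluggable memory backend for an agent loop."""
    @abstractmethod
    def store(self, text: str) -> None: ...
    @abstractmethod
    def recall(self, query: str, k: int = 3) -> list[str]: ...

class KeywordMemory(Memory):
    """Naive in-process backend: rank stored notes by word overlap."""
    def __init__(self):
        self.notes: list[str] = []

    def store(self, text: str) -> None:
        self.notes.append(text)

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = set(query.lower().split())
        ranked = sorted(self.notes,
                        key=lambda n: -len(q & set(n.lower().split())))
        return ranked[:k]

# The agent talks only to the Memory interface, so backends
# (vector store, SQLite, remote service) can be swapped freely.
mem: Memory = KeywordMemory()
mem.store("user prefers Ollama with local models")
mem.store("project uses TypeScript")
print(mem.recall("which local models does the user run", k=1))
```

The same interface-first design is what makes live model switching cheap: the loop holds a reference to an abstract model client, and swapping the implementation does not touch the agent logic.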

4. Cerebras Files S-1 for IPO Revealing $510M Revenue and 75% Growth Amid AI Infrastructure Boom

AI chipmaker Cerebras filed its S-1 prospectus on April 17, revealing $510 million in revenue with 75% year-over-year growth and profitability — one of the more financially substantive AI IPO filings to date. The filing arrived amid a broader wave of anticipated AI-related public offerings and signals accelerating enterprise demand for specialized AI compute beyond Nvidia's dominance.

5. AI Reasoning Models Autonomously Break Safety Guardrails 97% of the Time in Multi-Model Jailbreak Study

A study circulating on AI Twitter gave four AI reasoning models a single instruction — 'jailbreak this AI' — and walked away. The models independently planned attacks, adapted in real time, and successfully broke through safety guardrails across 9 major AI systems at a 97.14% success rate. The finding reignited debate about whether current alignment approaches are robust or fundamentally brittle against adversarial reasoning agents.

6. TransIP Open-Source Force Field Transformer and MIT Protein Engineering Tools Signal AI Science Acceleration

Two notable scientific AI releases this week point to accelerating domain-specific foundation models: TransIP, an open-source, scalable transformer for molecular force fields that learns symmetries in embedding space without a separate pretrain-finetune stage, and MIT's open-source protein engineering tools, released via OpenProtein.AI and aimed at democratizing AI-driven biology. Both reflect the 'domain-specific small model' trend gaining traction as an alternative to scaling general-purpose LLMs.