<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Daniel Keller - AI Tracker</title><description>Daily AI briefing: tools, open models, industry trends.</description><link>https://danielkeller.com/</link><language>en-us</language><item><title>AI Briefing — 2026-05-07</title><link>https://danielkeller.com/ai-tracker/2026-05-07/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-07/</guid><description>Anthropic&apos;s SpaceX/xAI compute deal doubles Claude Code rate limits and removes peak-hour throttling. New Claude Code v2.1.132 ships useful session-ID and alternate-screen env vars, and Anthropic&apos;s Managed Agents platform adds dreaming, outcomes, and multiagent orchestration.</description><pubDate>Thu, 07 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-05-06</title><link>https://danielkeller.com/ai-tracker/2026-05-06/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-06/</guid><description>A major day for Claude Code and local inference: Anthropic held their &apos;Code w/ Claude 2026&apos; event (live-blogged by Simon Willison), removed peak-hours throttling on Claude Code Pro/Max via a SpaceX compute deal, and shipped new plugin/dispatch features. 
Meanwhile, MTP (Multi-Token Prediction) is delivering 2-2.5x speedups for Qwen 3.6 27B locally, and Ollama shipped MTP support for Gemma 4 on Mac.</description><pubDate>Wed, 06 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-05-05</title><link>https://danielkeller.com/ai-tracker/2026-05-05/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-05/</guid><description>OpenAI launched GPT-5.5 Instant as the new default ChatGPT model, Ollama shipped Gemma 4 MTP speculative decoding for Mac with 2x speed gains, and a practical r/LocalLLaMA post quantified exactly when local models beat cloud — directly relevant to your hybrid routing setup.</description><pubDate>Tue, 05 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-05-04</title><link>https://danielkeller.com/ai-tracker/2026-05-04/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-04/</guid><description>A Sunday with a notable Claude Code release (v2.1.128 with plugin archives and channel auth), a Cursor changelog drop, and strong signals from JetBrains and LangChain on open models closing the gap with frontier models on agent tasks.</description><pubDate>Mon, 04 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-05-03</title><link>https://danielkeller.com/ai-tracker/2026-05-03/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-03/</guid><description>Ollama v0.23.0 ships Claude Desktop integration (including Claude Code support via local models), and Eugene Yan publishes a substantial piece on compounding with AI that aligns closely with your &apos;context, not control&apos; framing.</description><pubDate>Sun, 03 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item>
<item><title>AI Briefing — 2026-05-02</title><link>https://danielkeller.com/ai-tracker/2026-05-02/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-02/</guid><description>A genuinely quiet day for actionable AI engineering news: incremental llama.cpp maintenance releases, a minor LiteLLM stable cut, and event announcements. Nothing that changes practice today.</description><pubDate>Sat, 02 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-05-01</title><link>https://danielkeller.com/ai-tracker/2026-05-01/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-05-01/</guid><description>Claude Code v2.1.126 shipped with LiteLLM gateway model discovery and a useful project-purge command. Cursor posted a changelog, LangGraph alpha adds node-level error handlers and stream_events v3, and Simon Willison demonstrated building a full app on his phone with Claude Code.</description><pubDate>Fri, 01 May 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-04-30</title><link>https://danielkeller.com/ai-tracker/2026-04-30/</link><guid isPermaLink="true">https://danielkeller.com/ai-tracker/2026-04-30/</guid><description>Cursor shipped a changelog update, OpenAI&apos;s Codex CLI gained a goal-loop feature reminiscent of headless agent patterns, and LangGraph added node-level error handlers in an alpha — a relatively light day but with a few items directly relevant to your agentic workflows.</description><pubDate>Thu, 30 Apr 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item><item><title>AI Briefing — 2026-04-29</title><link>https://danielkeller.com/ai-tracker/2026-04-29/</link><guid 
isPermaLink="true">https://danielkeller.com/ai-tracker/2026-04-29/</guid><description>A relatively quiet day anchored by Simon Willison&apos;s significant LLM library refactor (now modelling conversations rather than prompts), a Cursor SDK release, a minor Claude Code patch, and DeepSeek-V4 Pro landing with 512K context. LangGraph&apos;s alpha introduces timers and graceful shutdown — relevant for long-running agent infrastructure.</description><pubDate>Wed, 29 Apr 2026 00:00:00 GMT</pubDate><category>AI Tracker</category><category>Daily Briefing</category></item></channel></rss>