<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Pydantic-Ai on Caktus Group</title><link>https://www.caktusgroup.com/tags/pydantic-ai/</link><description>Recent content in Pydantic-Ai on Caktus Group</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 27 Apr 2026 09:00:00 -0400</lastBuildDate><atom:link href="https://www.caktusgroup.com/tags/pydantic-ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Easily Stream LLM Responses with Django-Bolt and PydanticAI</title><link>https://www.caktusgroup.com/blog/2026/04/27/django-bolt-easy-pydanticai-streaming/</link><pubDate>Mon, 27 Apr 2026 09:00:00 -0400</pubDate><guid>https://www.caktusgroup.com/blog/2026/04/27/django-bolt-easy-pydanticai-streaming/</guid><description>&lt;p>I like how easy it is to create an async streaming endpoint from scratch with &lt;a href="https://bolt.farhana.li/" target="_blank" rel="noopener noreferrer">django-bolt&lt;/a> and &lt;a href="https://pydantic.dev/docs/ai/overview/" target="_blank" rel="noopener noreferrer">PydanticAI&lt;/a>. You can set it up with only a few commands.&lt;/p></description></item></channel></rss>