All Things AI Conference: Team Insights

A few of my colleagues attended the 2026 All Things Open AI Conference in Durham. I found myself with a major case of FOMO, so I thought I would pick the brains of Colin, Gerald, and honorary Cakti Peter.

It’s been a minute since we were all together for a big tech event in downtown Durham. What was your favorite part of having this conference right in our backyard at the Convention Center and Carolina Theatre?

Colin: I think it’s pretty unique to have such a big conference right here (even if it was a little cramped!), making it easier for our team to attend. It was fun to attend with Gerald, hear his perspective on the day’s sessions, and learn how he’ll use AI from a QA perspective. For example, he mentioned hearing someone talk about structuring a website to be more easily ingestible by AI, which I hadn’t thought about before!

Peter: Certainly the top perk is being able to walk there from my office. More broadly, it was really nice to see a crowd that felt like it reflected the Triangle area’s diversity. This was not a homogeneous conference, and I really appreciated that. Running into locals I know was another nice perk.

Gerald: It’s been a while since I’ve attended a conference, so this was a great opportunity to reignite that. I’m used to big tech conferences like this being held in much larger cities, so it was great not having to travel out of town. I second Peter’s comment on how nice it was to see such a diverse group of people coming together to learn about the future of AI.

We’ve spent a lot of time together at DjangoCon over the years. How did the energy at All Things AI compare for you? Does it feel like a different kind of ‘village’?

Colin: I think All Things AI covered a broader range than the technical focus I’m used to at DjangoCon. It was a good opportunity to reconnect with Jason Hibbets from my Code with Durham days while still seeing familiar faces like Peter and Calvin Hendryx-Parker. I think exposure to a variety of talks and a bigger community is helpful for me!

Peter: It was a different kind of village for sure, but not in a bad way. At DjangoCon, I can quickly find common ground with everyone there. Here, the crowd varied a lot more, but all the folks I chatted with were nice. It didn’t feel like there was as much spontaneous conversation and connection-forming happening here; the halls were too packed for it much of the time.

Gerald: It seemed like folks were having lots of short conversations about AI and the work they were doing in their own professions, similar to what I’ve seen at QA testing and Agile conferences I’ve attended in the past. I’m used to there being one or maybe two tracks at a conference. Personally, I think that allows for deeper connections because you usually end up seeing the same people from talk to talk, so there’s more opportunity to make personal connections with others. There were about eight tracks at All Things AI, which allowed for a variety of talks, so I rarely saw the same people twice. Not saying that’s a bad thing, just noting a difference compared to past conferences I’ve attended.

There were so many tracks this year, from AI Builders to Governance. Was there a specific talk or demo that made you rethink how we’re actually going to use AI in our daily work?

Colin: There were a lot of talks to choose from, but Whurley’s keynote, led by a speaking autonomous agent he coded during his flight over, was fun to see live. It was also interesting to hear Erkang Zheng discuss running a company with two autonomous agents during his closing keynote, which makes me wonder how that’s even possible! But I think I enjoyed the AI Deep Thoughts lightning talks the most. I liked Mayor Leo’s perspective on “augmented” intelligence to enhance quality of life, and Chisa Pennix-Brown’s “WTF AI” session was a highlight. Her interactive demonstration, where participants followed instructions on stage, was a fun way to illustrate how differently everyone interprets instructions, much like AI.

Peter: I think the talk that hit me hardest was the fireside chat with Igor Jablokov. He got me thinking a lot more about running models locally and using the cloud less, or at least using hosted versions of open models with good privacy policies. I’ve played with that already, but now I think I’m motivated enough to make it part of my routine rather than an experiment. Taylor Smith had some good information on this as well, focused on inference at scale, in her talk “Owning the Inference Layer: When and How to Run Your Own Models.” Dena Guvetis’s “Community is Infrastructure in the Age of AI” also shifted my thinking. She got me looking at the Django community a little differently: more aware of how much relies on it and how much positive energy it puts into the world.

Gerald: One of the talks that really piqued my interest was “Vibe Coding in Action: Transforming Coding Practices for the Future.” In this talk, I learned the difference between prompt engineering and context engineering, and how to use both to improve the accuracy and efficiency of AI agents. I was surprised to learn that sometimes less is actually more when it comes to engineering prompts. Providing too much information in a prompt can lead to bad outputs, or what some call context rot. Ignoring previous instructions, introducing contradictions, and sounding confident in wrong answers are a few signs of an AI agent experiencing context rot. I’ve experienced all of these at one point or another, so it was great to finally understand why.

If you had to pick just one ‘pro-tip’ or tool you saw this week, what’s the first thing you’re going to try out on Monday?

Colin: Writing agent skills. Calvin’s talk, “Orchestrate Agentic AI: Context, Checklists, and No-Miss Reviews”, and his GitHub examples were eye-opening in showing how useful skills can be and how easily they can be shared across a team. Oh, and LLM evaluations. I still don’t quite understand how they work, but I want to learn more!

Peter: I definitely echo Colin’s answer. Calvin Hendryx-Parker gave one heck of a talk when it comes to practical examples and giving the audience everything they need to put it into practice. Chris Dabatos mentioned Hume.ai, a voice AI platform that can read and portray emotion. It’s intriguing, but I’m not sure where I’ll use it just yet.

Gerald: William Hurley did an open Q&A session where anyone could ask anything about quantum computing or AI. I’ve been hearing about how quantum computing is a thing of the future and how it will drastically change things once we figure it out, but I never really understood how. He gave a practical breakdown of how computing currently works and how quantum computing could improve its efficiency exponentially. Figuring out how to scale and manage qubits is one of the things slowing us down from getting to that point. There were also a lot of questions from the audience about openclaw.ai and a discussion of all the things you can do with it. I’m eager to learn more about it; maybe I’ll look into how helpful it can be with unsubscribing me from the hundreds of spam emails in my inbox.