Why Environment Architecture Matters More Than Ever for AI Readiness
I’ve been an engineer for over 25 years, and one thing I’ve realized only recently is that I’ve always cared deeply about something most people barely talk about: the development environment itself. Not just the code. Not just the system design. The environment. Since I started writing software, I’ve had this strange, almost quiet joy…
HTTP Is the New MCP
There’s a growing push to invent new protocols so AI agents can interact with services and execute tasks on a user’s behalf. While the motivation is right, the direction often feels familiar in an uncomfortable way. It looks a bit like we’re trying to rebuild the web. Every time the industry introduces a new “universal…
Building my own language model: Transformer encoder (Part 4)
So far, we have turned raw token IDs into dense vectors that encode what a token is and where it sits in the sequence. That already feels like progress, but at this point the model still doesn’t understand anything. Each token only knows about itself. The real work happens next: the Transformer encoder. This is…
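To make the idea concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation inside the encoder that lets tokens stop only knowing about themselves. This is illustrative only (no learned projections, no multi-head split, made-up dimensions), not the actual code from this series:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention, with the learned query/key/value
    projections omitted for brevity: every token attends to every token."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarity, shape (seq, seq)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ x                               # each output row mixes all tokens

# three toy token embeddings of dimension 4
x = np.random.default_rng(0).normal(size=(3, 4))
out = self_attention(x)
print(out.shape)  # (3, 4): same shape as the input, but now context-aware
```

The key point: the output for each position is a weighted mix of every position, which is exactly how context starts flowing between tokens.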
Building my own language model: Embedding layers (Part 3)
It has been a while since I had time to work on this project. Actually, the code was written a while back, but I just did not have time to write about it. Interestingly enough, the initial code was co-created using GPT 4.1, and although it technically worked, the results were not great. Therefore, while…
Building my own language model: Data & Tokenizer (Part 2)
As per the plan for building my own language model, the first step is to find a dataset to train the model on and then build a tokenizer. Why do we need this? When interacting with an LLM, we typically use natural language, both as input and output. Neural nets, though, don’t understand words or sentences…
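As a toy illustration of that idea, here is a hypothetical whitespace tokenizer. A real model would use subword units such as BPE, but the principle is the same: text in, integer IDs out:

```python
# Hypothetical toy tokenizer for illustration only; the names build_vocab
# and encode are made up, not from the actual project code.
def build_vocab(corpus):
    """Assign a stable integer ID to every word seen in the corpus."""
    words = sorted({w for line in corpus for w in line.lower().split()})
    return {w: i for i, w in enumerate(words)}

def encode(text, vocab):
    """Turn a sentence into the list of IDs the neural net can consume."""
    return [vocab[w] for w in text.lower().split()]

corpus = ["the cat sat", "the dog sat"]
vocab = build_vocab(corpus)      # {'cat': 0, 'dog': 1, 'sat': 2, 'the': 3}
ids = encode("the cat", vocab)
print(ids)  # [3, 0]
```

A whitespace vocabulary breaks on unseen words, which is precisely why real tokenizers fall back to subword pieces.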
Building my own language model: Part 1
Many of us have been using ChatGPT and co. for a few years now. These LLMs are fascinating, and we can use them for many interesting tasks, the next big thing being agents. But one thing I always wanted to try is building my own language model, trained entirely on my local machine…
ACP Hello World
To complete the picture, in this blog post we are going to build a hello-world ACP application. As with the A2A demonstration, we will also create a simple server and client application to demonstrate the basic programming model of ACP. The ACP project does a good job in its quickstart guide: https://github.com/i-am-bee/acp
Server
This is…
A2A Hello World
Let’s explore how A2A works in practice. In this blog post I’m demonstrating the basic usage of A2A, without using any AI. 🙂 Please note, this is a purely technical view; the challenges of building agents are not necessarily technical in nature. Nonetheless, I hope this post helps you get a basic understanding of A2A…
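For a bit of flavor before the full walkthrough: discovery in A2A starts with an "agent card", a JSON document the server publishes so clients can learn what the agent can do. The sketch below builds such a card by hand; the field names follow my reading of the spec and the values are made up, so treat the exact schema as an assumption to verify:

```python
import json

# A minimal, hypothetical A2A-style agent card. In a real setup a client
# would fetch this JSON from the agent's well-known discovery endpoint
# before sending any tasks.
agent_card = {
    "name": "hello-world-agent",
    "description": "Replies with a greeting, no AI involved.",
    "url": "http://localhost:9999/",
    "capabilities": {"streaming": False},
    "skills": [{"id": "hello", "name": "Say hello"}],
}

# Round-trip through JSON, as a client deserializing the card would.
loaded = json.loads(json.dumps(agent_card))
print(loaded["name"])  # hello-world-agent
```

The point is simply that the card, not a proprietary handshake, is what tells a client which skills it can ask for.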
MCP, ACP, A2A
Here is a quick overview of the various protocols we hear and read so much about recently in the AI space. I don’t intend to go into the details, just a brief, human-readable, objective view.
MCP
Let’s start with MCP (Model Context Protocol), which I covered in one of my recent blog posts…
Exploring Agentic AI: MCP with BeeAI
The next step in understanding how to build a proper agentic system is to explore how an agent can be extended with tools. Tools are, in my view, the most powerful extension of an LLM, as they logically allow it to interact with the world: get additional context, take action. See my older blog post…
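As a rough illustration of what a tool declaration conveys, here is a hypothetical `get_weather` tool described the way MCP does it: a name, a human-readable description, and a JSON Schema for its input. The dispatcher below is a toy stand-in for a server's tool handler, not the MCP SDK:

```python
# Sketch of an MCP-style tool declaration. The field names follow the MCP
# spec's tool shape; the weather tool itself is invented for illustration.
tool = {
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def call_tool(name, arguments):
    """Toy dispatcher standing in for a real server's tool handler."""
    if name == "get_weather":
        return f"Sunny in {arguments['city']}"
    raise ValueError(f"unknown tool: {name}")

print(call_tool("get_weather", {"city": "Vienna"}))  # Sunny in Vienna
```

The description and schema are what the LLM actually "sees", which is why writing them clearly matters as much as implementing the handler.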