00:00:15
It's 2025, so you open your favorite LLM chat—ChatGPT, a cloud AI, or a desktop assistant. You quickly authorize your favorite website and add a prompt: “I want to go to Spain and spend a romantic weekend around the middle of July. Book me a spot in a five-star hotel. Just make sure the weather is great, please.” You wait a second for the assistant to do its magic, and it's done. You hear the sound of an incoming email: a booking confirmation for your next trip to Barcelona. Wouldn't that be amazing? Wouldn't it be even better if the website could run on Ruby on Rails?
00:01:09
To make this dream come true, today I'm going to tell you how to make Rails AI-ready by design with the Model Context Protocol.
00:01:20
Hi, I'm Paweł Strzałkowski. By day I'm the CTO of Visuality, a Ruby agency from Warsaw, Poland. I've been a web developer for well over 20 years, and I love adding a creative spark to whatever I do.
00:01:33
For example, last year in Toronto I gave a lightning talk at Rails World about using Rails as a multiplayer real-time game engine. Depending on whether your heart lies with code or business, you can find me on GitHub or LinkedIn. Let's get into it.
00:01:59
So let's start with a brief introduction to what the Model Context Protocol (MCP) is in general. The internet is full of applications, servers, and websites—you know them; you are the ones who build them. On the other hand, we have an explosion of LLM-backed solutions, from online assistants through desktop solutions to developer tooling.
00:02:20
There is a thick wall between LLMs and traditional websites because LLMs have no direct access to current, real-world data inside classical applications. A prompt asking to book a European romantic trip has little chance of succeeding if the model cannot access live data. Of course, we can work around this by building applications that translate between prompts and API endpoints, and that approach is used frequently.
00:03:03
However, at scale that approach creates an M×N integration problem: with M models and N solutions you end up having to program each integration many times. Add a model, and you program it N times; add a solution, and you do it M times. It quickly becomes unmanageable and feels like the wrong tool for the job—it doesn't feel AI-native. To make it work properly, a large language model needs to be given real, current context over an AI-native protocol.
00:03:56
Naming is hard, but if a model needs to be given context over a protocol, let's call it the Model Context Protocol—MCP. So what is MCP?
00:04:15
MCP is an open standard: open means it can be used and implemented by anyone. It was introduced by Anthropic in November 2024 and was so well received that it has since been backed by Google, OpenAI, and others worldwide. Many one-liners try to define MCP, but none are fully accurate. My attempt: MCP standardizes the way of providing additional context to LLMs, and that additional context includes dynamic, up-to-date, real-world data.
00:05:03
Let's look at the architecture of a system using MCP. The central piece is the MCP host—the user-facing application like ChatGPT or Google AI Studio. The host application communicates with the LLM using prompts and responses as usual. On the other hand, MCP servers provide additional context. This context is commonly provided over HTTP, which is what we'll demonstrate today. If an MCP server runs on the same machine as the MCP host, the context can be provided over standard input/output, but that's not today's subject.
00:06:01
The MCP host fetches context from MCP servers using MCP client libraries. Whenever it sends a prompt, that prompt is enriched with context from MCP servers. MCP servers expose context as primitives, and these primitives combine to form the server's entire context. The basic primitives are tools, resources, and prompts.
00:07:01
Tools are executable functionalities—what a server can do for you. For example, a tool can forecast weather for a given location, list your trips, or publish an article. Resources are data: images, logs, files—anything needed as additional context for the LLM. Prompts are standardized instructions that MCP hosts get from servers and show to the user, so the user can choose a kind of recipe or part of the conversation.
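To make the primitives concrete, here is a sketch in plain Ruby hashes of what a server's tool list, a resource, and a prompt might look like; the tool names and schemas are illustrative, not taken from a real server:

```ruby
# Illustrative sketch of an MCP "tools/list" result; tool names and
# schemas are invented for this example.
tools_list_result = {
  tools: [
    {
      name: "forecast_weather",                   # a tool: something the server can do
      description: "Forecast weather for a location",
      inputSchema: {
        type: "object",
        properties: { location: { type: "string" }, date: { type: "string" } },
        required: ["location"]
      }
    },
    {
      name: "list_trips",
      description: "List available trips",
      inputSchema: { type: "object", properties: {} }
    }
  ]
}

# Resources are data addressed by URI; prompts are named instruction templates.
resource = { uri: "file:///logs/app.log", name: "Application log", mimeType: "text/plain" }
prompt   = { name: "plan_romantic_trip", description: "Guide the user through booking a trip" }

tool_names = tools_list_result[:tools].map { |t| t[:name] }
```

The host loads all of these during initialization and advertises the tools to the model with every prompt.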
00:08:02
Let's go through a single use case: an MCP interaction between an MCP host (with an LLM), a travel agency application, and a weather forecast application. It starts during initialization: the MCP host fetches MCP primitives from the servers—for example, tools and resources from both the travel agency and the weather forecast MCP servers. These primitives are loaded into the host. A user says, “Book me a romantic European trip; just make sure the weather is good.” The MCP host sends that prompt along with the available tools and resources to the LLM. The LLM responds with a tool call—“list romantic trips.” The MCP host invokes the travel agency MCP server to list trips. With that list, the MCP host sends the prompt, the server response, and the tool information back to the LLM. Now, with additional context, the LLM requests weather checks for three different cities on a given date. The MCP host asks the weather MCP server three times to get forecasts. With the initial prompt, the list of trips, and the weather forecasts, the LLM can finalize the flow and issue a tool call to book the selected trip—say, to Barcelona. Because the host keeps the conversation history as short-term memory, the user can drive multi-step procedures in plain language while the LLM handles the complex orchestration logic.
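The orchestration loop above can be sketched in plain Ruby. The "LLM" here is a scripted stand-in that always requests the same tool calls, just to show the host's role of executing tool calls and feeding results back; all names are hypothetical:

```ruby
# Toy MCP host loop: send the prompt plus tool list to an "LLM",
# execute any tool calls it requests, feed results back, repeat.
# The fake LLM below is scripted; a real host would call a model API.

SERVERS = {
  "list_trips"    => ->(_args) { [{ city: "Barcelona" }, { city: "Paris" }] },
  "check_weather" => ->(args)  { { city: args[:city], forecast: "sunny" } },
  "book_trip"     => ->(args)  { "Booked a trip to #{args[:city]}" }
}

# Scripted "LLM": first lists trips, then checks weather, then books.
fake_llm_turns = [
  { tool: "list_trips",    args: {} },
  { tool: "check_weather", args: { city: "Barcelona" } },
  { tool: "book_trip",     args: { city: "Barcelona" } },
  { answer: "Your romantic weekend in Barcelona is booked!" }
]

context = [{ role: "user", content: "Book me a romantic European trip." }]
final_answer = nil

fake_llm_turns.each do |turn|
  if turn[:answer]                       # the model is done reasoning
    final_answer = turn[:answer]
  else                                   # the model requested a tool call
    result = SERVERS.fetch(turn[:tool]).call(turn[:args])
    context << { role: "tool", name: turn[:tool], content: result }
  end
end
```

Each tool result is appended to the context, which is exactly how the model "remembers" what the servers told it on earlier turns.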
00:09:33
How can we build such an AI-ready website? First, we need an up-to-date framework—let's use Rails—and we'll recreate the classic Rails application, but make it AI-native and ready.
00:10:02
It begins with rails new. It would be convenient to have an MCP mode option in Rails, but we can achieve the same result with a template. I applied a template that adds a few things to the framework: the official Ruby SDK for the Model Context Protocol, additional routes, a controller, and some configuration. I'll go over those changes in a few slides.
00:11:19
I used a repository template that will be available and linked via the QR code in the presentation. The template adds routes, a controller with a minor fix for some MCP host applications, and scaffolding for the application—the usual blog scaffold, but it also scaffolds MCP tools alongside the models. I extended the scaffold controller generator with additional templates for tools. Not all tools must be scaffolded; later we'll use an MCP tool generator to create custom tools.
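As a rough sketch of what the template wires up (the controller name and exact SDK calls here are my reconstruction, assuming the official `mcp` gem's server API; the real template may differ):

```ruby
# config/routes.rb -- route all MCP JSON-RPC traffic to one endpoint
post "/mcp", to: "mcp#handle"

# app/controllers/mcp_controller.rb -- hand the JSON-RPC body to the SDK
class McpController < ApplicationController
  skip_before_action :verify_authenticity_token  # MCP clients don't send CSRF tokens

  def handle
    server = MCP::Server.new(name: "blog", tools: [PostCreateTool])
    render json: server.handle_json(request.body.read)
  end
end
```

The key idea is that a single POST endpoint receives JSON-RPC requests and delegates them to the SDK's server object, which dispatches to the registered tool classes.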
00:12:05
Now let's start the server—the classic Rails server. It works as usual. Additionally, we can run another program called the MCP Inspector. Once you start building MCP servers, you'll use the inspector to validate them. You provide the URL of our new server’s /mcp endpoint, select HTTP as the transport, and connect. We're connected. Initially there are no tools, prompts, or resources, but having our Rails server act as an MCP server out of the box is promising.
00:13:07
To follow the classic Rails tutorial, we scaffold posts. We run rails generate scaffold for a Post with a title and a body. The scaffold generates additional files: a tool for each CRUD action. After running the migration, we create one post for reference.
00:14:41
With the scaffold complete, the template includes a simple rake task that lists all MCP tools defined in Rails. We have a tool for each CRUD action on Post, and each tool includes a description and an input schema. We can also view these tools in the MCP Inspector: connect to the server, list tools, and see the index tool that lists the last ten posts. We can update a post via the tool—functionally it looks like a normal form and API, but under the hood it works over MCP.
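The listing task can be imitated in plain Ruby with toy tool classes (a minimal stand-in, not the real generated code or the SDK's base class):

```ruby
# Minimal stand-in for an MCP tool base class, just to show how a
# "list all tools" rake-style task can introspect each tool's name,
# description, and input schema.
class Tool
  class << self
    attr_accessor :description, :input_schema
  end

  REGISTRY = []

  def self.inherited(subclass)
    super
    REGISTRY << subclass   # every tool class registers itself on definition
  end
end

class PostIndexTool < Tool
  self.description  = "List the last ten posts"
  self.input_schema = { type: "object", properties: {} }
end

class PostCreateTool < Tool
  self.description  = "Create a post"
  self.input_schema = {
    type: "object",
    properties: { title: { type: "string" }, body: { type: "string" } },
    required: ["title"]
  }
end

# What a listing task could print, one line per tool:
listing = Tool::REGISTRY.map { |t| "#{t.name}: #{t.description}" }
```

The real task walks the application's tool classes the same way: collect them, then print name, description, and schema for each.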
00:16:02
One of the generated tools is the post create tool, which includes the properties defined for ActiveRecord and the basic logic: it runs Post.new and post.save and handles basic errors. But we shouldn't stop with posts. Next, we scaffold comments. A comment references a post and has content. The tools are generated alongside the model: create, list, and show tools for comments. For create, you provide post_id because a comment needs to reference a post; for listing comments you can filter by post_id; and for showing a single comment you provide its ID.
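A simplified stand-in for the generated create tool's logic: build the record, save it, and report success or a validation error. ActiveRecord is replaced with an in-memory array so the sketch is self-contained:

```ruby
# Sketch of the create tool's core logic; in the real generated tool
# this would be Post.new(...) and post.save with ActiveRecord errors.
POSTS = []

def post_create(title:, body:)
  return { error: "Title can't be blank" } if title.to_s.strip.empty?

  post = { id: POSTS.size + 1, title: title, body: body }
  POSTS << post
  { ok: true, id: post[:id] }
end

ok  = post_create(title: "Octopus", body: "Eight arms, three hearts")
bad = post_create(title: "", body: "no title given")
```

The generated tool wraps exactly this shape in an MCP tool class, with the title/body properties exposed through the input schema.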
00:17:02
Hotwire wasn't in the original Rails tutorial, but I want to show it so you can see its power in production. I change the posts index to subscribe to a posts stream and create a partial that lists posts with their comments. I make sure posts broadcast to the posts stream when they change, so the page updates when their comments update. I expose posts with comments at the main route—Hotwire is added quickly and will be used later.
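A hedged sketch of that Hotwire wiring, assuming the usual turbo-rails helpers (the stream name and callbacks are my reconstruction, not the talk's exact code):

```ruby
# app/models/post.rb -- broadcast changes to a shared "posts" stream
class Post < ApplicationRecord
  has_many :comments
  broadcasts_to ->(_post) { "posts" }   # re-render this post on the index when it changes
end

# app/models/comment.rb -- touching the post re-broadcasts it with its comments
class Comment < ApplicationRecord
  belongs_to :post, touch: true
end
```

The index view then subscribes with `<%= turbo_stream_from "posts" %>`, so posts created or commented on over MCP appear on the page without a reload.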
00:17:54
Now it's time to use our MCP server with an LLM host. Many LLM applications can act as MCP hosts, including developer tools like Cursor or Claude Code, and more consumer-facing tools. I wanted to show a production-oriented demo that's accessible to non-developers, so I chose Claude Desktop. To run our server with Claude Desktop we need a public URL and a secure connection. That's straightforward with Rails: I updated the deploy configuration, used a domain and a VM on DigitalOcean, pushed to GitHub, and ran kamal setup. Within minutes it was deployed to production.
00:19:32
This is our page in production. I open Claude Desktop and ask it to write a post about a weird animal; it does so initially without using MCP because we haven't connected the server yet. To connect, I open the Claude Desktop configuration, manage connectors, and add a custom connector pointing to the /mcp endpoint. Now it's connected and lists all our tools. I ask it to create three short posts about interesting animals. The host runs our create tool and produces posts in real time: an octopus post, an odd second post, and a shrimp post. Then I ask the host to comment on two posts at once—the octopus and the shrimp—and it uses our comment tool to add comments. The LLM remembers which posts were created through the protocol, so subsequent actions like deleting or updating a post work as expected. I update a comment to make it more intense—what I call “LinkedIn-ready”—and it updates in place.
00:21:36
So that's how you turn your Rails server into version one of an MCP server in five minutes. But should MCP tools be only CRUD? No. The magic of Rails has always started with CRUD as the fastest path to a production-ready application, and it's fine to learn MCP through simple tools so you can later model larger, more complex functionality. Tools in general shouldn't be limited to CRUD; you should aggregate bigger functionality into tools when building complex systems.
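For instance, instead of exposing raw CRUD on trips, a travel server might expose one coarse-grained tool that wraps the whole workflow. A toy sketch, with all names and data hypothetical:

```ruby
# A coarse-grained tool that hides a multi-step workflow (search,
# filter, reserve) behind a single MCP-facing entry point, instead of
# forcing the LLM to chain several CRUD calls.
TRIPS = [
  { city: "Barcelona", stars: 5, price: 900 },
  { city: "Porto",     stars: 4, price: 600 }
]

def book_romantic_weekend(min_stars:, max_price:)
  trip = TRIPS.find { |t| t[:stars] >= min_stars && t[:price] <= max_price }
  return { error: "No matching trip" } unless trip

  { booked: true, city: trip[:city] }   # in a real app: payment, confirmation email, etc.
end

booking = book_romantic_weekend(min_stars: 5, max_price: 1000)
```

One well-described aggregated tool like this is usually easier for a model to use correctly than four low-level CRUD tools it has to sequence itself.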
00:22:18
If you want to build your own larger tools, there's a generator for that too: rails generate mcp_tool. For example, rails generate mcp_tool check_weather_tool can create a skeletal tool file with parameters and an input schema. All you need to do is implement the logic, such as calling a weather API. This gives you a single place to put your code and exposes a reusable MCP tool.
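What the generated skeleton plus a bit of logic might look like: the HTTP call is injected as a lambda so the sketch stays self-contained and runs without network access (the class shape and any weather API are assumptions, not the template's exact output):

```ruby
# Sketch of a check_weather tool: the generator would produce the class
# shell and input schema; the fetcher lambda stands in for a real HTTP
# client (e.g. Net::HTTP against a weather API).
class CheckWeatherTool
  INPUT_SCHEMA = {
    type: "object",
    properties: { city: { type: "string" }, date: { type: "string" } },
    required: ["city"]
  }

  def self.call(city:, date: nil, fetcher: DEFAULT_FETCHER)
    data = fetcher.call(city, date)
    "Weather in #{city}: #{data[:summary]}, #{data[:temp_c]}°C"
  end

  # Canned data; swap in a real API call when implementing the tool.
  DEFAULT_FETCHER = ->(_city, _date) { { summary: "sunny", temp_c: 24 } }
end

report = CheckWeatherTool.call(city: "Barcelona")
```

Keeping the fetcher injectable also makes the tool trivial to unit test before you point it at a live API.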
00:23:12
I deployed the weather tool, making it available to Claude Desktop. I asked the host to compare the weather at two Rails World venues—Amsterdam and Toronto. The conversation shows the tool calling the weather API for Amsterdam, then for Toronto, and finally the LLM composing a post comparing the two cities’ weather. The flow is seamless.
00:24:44
So by now we've generated an application, scaffolded models and tools, deployed with Kamal, and used a customer-facing application like Claude Desktop to exercise the MCP features.
00:25:07
Why is this so easy? I'd say it's about the flexibility of Ruby on Rails and the strength of the open-source community. I want to thank the creators and maintainers of Fast MCP, Action MCP, and the Ruby MCP client—I spent hours talking with them to learn how MCP interacts with Ruby and Rails. The official MCP Ruby SDK powers the template I showed, and I'm grateful to that team as well.
00:26:31
What's next? We were playing with a blog application, but the next steps are authorization—MCP recommends using OAuth, and rolling out OAuth is an application-wide decision, not just a template change. There are many more primitives beyond tools, resources, and prompts—things like completions, sampling, and more—and I leave it as an exercise for you to add them to Rails. I promise it'll be fun.