When I first heard about MCP servers, I’ll admit I didn’t immediately grasp the significance. On paper, they’re just a protocol — a way for large language models (LLMs) to talk to external systems and take actions instead of just generating text. But sitting in the audience at FabCon Europe 2025, watching Rui Romano demonstrate Microsoft’s new Power BI MCP server, the potential finally clicked.
This isn’t just about automation or AI integration. It’s about changing the way we build, document, and manage solutions in Power BI and Fabric — moving from manual implementation to context-driven design. Watching him work through real scenarios, I realized we’re not just getting a new tool for automation. We’re getting something that could fundamentally reshape how we think about Power BI development.
MCP is an open protocol that gives LLMs the ability to interact with external systems—to actually take actions, not just discuss them. Think of it as providing hands to go along with the brain. The AI can now open your Power BI model, read its structure, make changes and trigger operations. It becomes a collaborative partner that can work alongside you, not just advise you.
Microsoft’s Fabric MCP server packages up all the API specifications, schemas, and guidance for working with Power BI and Fabric into a structured format that LLMs can understand and use. It’s available now at aka.ms/FabricMCP as an open-source project. The Power BI-specific MCP server, which works with Desktop files and PBIP format, is entering private preview; in the meantime, Rui has shared similar foundational tools and scripts in his Agentic Power BI Development GitHub repo.
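To make the “hands for the brain” idea concrete, here is a minimal sketch of the message shape MCP uses under the hood: tool calls expressed as JSON-RPC 2.0 requests. The tool name and arguments below are hypothetical, not the actual Power BI MCP server’s surface — this only illustrates the protocol’s structure.

```python
import json

# Illustrative only: the shape of an MCP "tools/call" exchange (JSON-RPC 2.0).
# The tool name and arguments are hypothetical placeholders, not the real
# Power BI MCP server's tool surface.

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request asking an MCP server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

request = make_tool_call(1, "rename_table", {"old": "dim_customer", "new": "Customers"})
parsed = json.loads(request)
print(parsed["method"])          # tools/call
print(parsed["params"]["name"])  # rename_table
```

The point is that the LLM never touches Power BI directly: it emits structured requests like this, and the MCP server translates them into real operations against the model.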
The explanation and videos below are split into three parts, each building on the last. By the end, I think you’ll see what I mean about this being transformative—not in a buzzword way, but in a very practical, “this changes how I’ll approach my next project” way.
Part 1: Practical Automation with the Power BI MCP Server
The first and most obvious application of the MCP server is the necessary but tedious work of Power BI development, such as bulk renaming and applying consistent conventions across models. This is the kind of work where you know exactly what needs to happen, but it’ll take you half a day of mind-numbing clicking.
Rui’s demo shows how the MCP server makes this effortless. He manually renames one table following his preferred convention, then asks the AI to analyze that pattern and apply it everywhere else. The MCP server acts as the bridge—it gives the AI the ability to actually interact with Power BI Desktop, not just talk about it.
What struck me watching this wasn’t just the time savings. It was the realization that you’re essentially teaching the AI what “correct” looks like in your specific context. You’re not writing a script or setting up rules in some configuration file. You’re showing an example, and the AI generalizes from there.
The Power BI MCP Server already knows how to work with Power BI; your job is to teach it what to do in your particular situation. There’s an approval workflow built in, which is sensible – you probably don’t want an AI making unchecked changes to your models! But even with that human-in-the-loop step, what would take hours happens in minutes.
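In the demo, the AI infers the convention from a single manual example. As a rough, deterministic illustration of the kind of transformation it generalizes — assuming a hypothetical dim_/fact_ prefix convention, which is my example rather than Rui’s — the rule might look like this:

```python
import re

# Rough illustration of a naming convention the AI might generalize from one
# manual rename. The dim_/fact_ prefix rule here is a hypothetical example.

def apply_convention(raw_name: str) -> str:
    """Strip warehouse prefixes and convert snake_case to Title Case."""
    stripped = re.sub(r"^(dim|fact)_", "", raw_name)
    return " ".join(word.capitalize() for word in stripped.split("_"))

tables = ["dim_customer", "fact_internet_sales", "dim_date"]
renamed = [apply_convention(t) for t in tables]
print(renamed)  # ['Customer', 'Internet Sales', 'Date']
```

The difference, of course, is that with the MCP server you never write this function — you show one example and the AI derives and applies the rule, with your approval gate in the loop.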
Part 2: Encoding Your Organization’s Knowledge Through Context
The first demo was all about automation. Useful, but not revolutionary. The revolutionary part comes next. If the first demo was about connection, the second is about context. One of the challenges when using AI with complex data environments is helping it understand not just the commands, but the why and how behind them. Rui’s second demo beautifully illustrates how Markdown files can be used to provide that context — embedding documentation, explanations, and configuration details right alongside the MCP’s operations.
In the video, I talk about why this matters so much for enterprise environments. We’ve all seen what happens when automation runs without governance — and this approach is a clear step toward responsible AI integration. By documenting pipelines, datasets, and models in Markdown (and exposing that to the MCP via the LLM/client application), you’re effectively creating a living blueprint of your organizational and team language, processes, and methodologies.
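To show what this looks like in practice, here is a hypothetical fragment of such a context file — invented for illustration, not taken from Rui’s demo — of the kind you might place alongside a model for the AI to read:

```markdown
# Measure documentation guidelines (hypothetical example)

## Terminology
- "Revenue" always means invoiced amount net of returns, in EUR.
- Never use "Sales" and "Revenue" interchangeably in descriptions.

## Conventions
- Every measure gets a one-sentence business description aimed at end users.
- Time-intelligence variants use suffixes: ` YTD`, ` PY`, ` YoY %`.
```

Because it’s plain Markdown, this blueprint is version-controlled, reviewable in pull requests, and readable by humans and LLMs alike.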
It’s a developer-friendly, transparent way to ensure AI-driven actions are informed, consistent, and auditable. It allows us to move beyond simple pattern matching to tasks that require actual understanding: writing measure descriptions that make sense to business users, generating translations that sound natural, and refactoring old code to use modern patterns like calculation groups and user-defined functions.
For these tasks, the AI needs more than just the ability to interact with Power BI. It needs context about how your business thinks, how your organization talks about data, and what constitutes good practice in your world. When you provide business terminology, explain how your organization thinks about certain metrics, and describe the context in which these measures are used — suddenly the AI can generate content that actually helps people understand what they’re looking at.
This is what context engineering really means. You’re not just automating tasks—you’re codifying organizational knowledge in a way that an AI can apply consistently across all your work.
And refactoring—this one really resonated with me. How many models are sitting in production right now that you know could be better? Maybe they were built before you learned about calculation groups. Maybe they use patterns that made sense two years ago but don’t align with current best practices. You know they should be updated, but spending days refactoring working models is hard to justify.
What if refactoring wasn’t a multi-day project? What if you could describe the pattern you want, point to examples of what good looks like now, and have the AI do the tedious transformation work while you validate the results?
That’s not just saving time. That’s making it economically feasible to keep your entire Power BI estate aligned with current best practices. That’s a different proposition entirely.
Part 3: Creating a Digital Agent Developer
The final demo is the one that made me rethink what’s possible. Rui creates an entire semantic model and report from scratch, just by describing what he wants in a markdown specification and letting the AI handle the implementation.
Now the Markdown files used for context capture database schemas, naming conventions, modeling approaches, DAX patterns, report design standards — essentially everything you’d tell a junior developer if you were explaining “how we do things here.” It’s comprehensive context about what good looks like in your organization and its ways of working.
With that context, the AI doesn’t just generate random code. It creates a complete solution that already follows organizational standards. The tables are structured properly. The relationships are correct. The DAX follows established patterns. The naming is consistent. Even the report follows design conventions.
Think about what this means for how we work. Traditionally, you build something functional, then spend time bringing it up to standard—fixing naming inconsistencies, refactoring measures to follow best practices, ensuring everything aligns with organizational conventions. It’s rework, essentially. Necessary, but inefficient.
What if you could shift that effort earlier? Instead of building then refactoring, you describe what good looks like, and the implementation follows those standards from the start. Your time goes into thoughtful design and validation rather than tedious implementation and cleanup.
That’s a fundamentally different development model.
Rui also shows the consumer-facing side—a remote MCP server that lets business users query models using natural language, with the AI generating and executing DAX behind the scenes. I immediately thought of Power BI Copilot, but there’s a key difference: with the MCP approach, you control the context. You can centralize organizational terminology, standard definitions, common synonyms—the kind of foundational knowledge that should be consistent across your BI platform but currently can only be defined separately (and therefore potentially inconsistently) for each model.
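Behind the scenes, a tool like this ultimately needs to execute the generated DAX somewhere. One plausible route — my assumption about the plumbing, not something the demo confirmed — is the Power BI REST executeQueries endpoint. This sketch only builds the request body; the dataset ID and DAX query are placeholders and nothing is sent:

```python
import json

# Sketch of the kind of call a remote MCP tool might make behind the scenes.
# The Power BI REST executeQueries endpoint accepts DAX and returns rows;
# the dataset ID and query below are placeholders, and no request is sent.

DATASET_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
ENDPOINT = f"https://api.powerbi.com/v1.0/myorg/datasets/{DATASET_ID}/executeQueries"

def build_query_body(dax: str) -> dict:
    """Wrap a DAX query in the executeQueries request body."""
    return {
        "queries": [{"query": dax}],
        "serializerSettings": {"includeNulls": True},
    }

body = build_query_body("EVALUATE TOPN(5, 'Customers')")
print(json.dumps(body))
```

The interesting part isn’t the plumbing, though — it’s that the DAX itself is generated from natural language, using context you control centrally rather than per model.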
What This Actually Means for How We Work
I’ve been reflecting on this since FabCon, trying to articulate what feels different about this approach versus other “AI will change everything” announcements we’ve seen.
I think it comes down to this: we’re moving from being implementers to being context engineers — people who define what “good” looks like, capture it clearly, and enable the system to apply it consistently.
That’s work that compounds. Once you’ve defined your organizational patterns, every new solution benefits. You’re not starting from zero; you’re designing within a reusable framework of shared knowledge.
The valuable skill isn’t going to be knowing exactly how to write a particular DAX pattern or remembering the right TMDL syntax. The AI can handle that if you give it the right context. The valuable skill is understanding what good looks like—what makes a model maintainable, what makes measures understandable to users, what patterns fit which scenarios, how to structure solutions for your organization’s specific needs.
This is the stuff that usually lives in people’s heads, gets shared through code reviews and mentoring, slowly becomes “how we do things” through informal osmosis. You’re making that knowledge explicit, structured, reusable.
What Rui demonstrated at FabCon is a path toward something different: designing solutions where organizational standards and best practices are embedded from the start, because they’re part of the context that drives implementation.
That’s not about working faster. It’s about working differently.
If you’re working with Power BI and Fabric, I’d encourage you to think beyond the automation use cases. Think about what organizational knowledge you could codify. Think about what it would mean to design solutions with context rather than build solutions with code or through the UI.
I’m still processing the implications myself. But I’m increasingly convinced this is one of those shifts where the landscape looks very different on the other side.