
Reclaiming Your Intellectual Property in the Age of LLMs

How to stop renting your intelligence and start orchestrating a private knowledge layer

March 16, 2026 · 5 min read

Every time you paste a team’s context into a fresh chat window, you’re building a gilded cage. You feel like a power user. You aren't. You’re just a tenant on someone else’s land.

True agility requires data sovereignty. Your intellectual property—those hard-won coaching frameworks, the messy project context, and your unique delivery style—is your lifeblood as a practitioner. By locking that brilliance behind a specific vendor's interface, you’re making yourself a servant to a subscription model.

We think we’re building a second brain. In reality, we’re just grinding XP for Ganon’s army and paying for the privilege.

When a new model drops, or when you need to switch platforms for a specific task, your AI suddenly has coaching amnesia. It forgets the specific organizational dynamics you spent weeks teaching it. You start over. You paste the same context again. You lose hours. I’ve seen coaches spend four hours a week just re-contextualizing different threads. That’s four hours of "procrastivity" that could have been spent with actual humans.

You are trapped in the chat interface. It is time to break out.


The Coaching Amnesia Crisis

I’ve been that person—staring at a blinking cursor in a fresh Claude tab, trying to explain the "Engineering vs. Product" cold war for the tenth time. It’s exhausting.

You spend twenty minutes talking out the political dynamics of a leadership team using a vomit prompt. You detail the resistance from the engineering director. You outline the product manager's tendency to bypass WIP limits. You hit enter. The model gives you brilliant advice.

A week later, you go back to that thread and find the AI gets dumber as the conversation grows. Or you start a new thread and the model knows nothing. If you’re using O365 Copilot, you might even be cut off from further conversations and forced into a new contextless chat. You are back to zero.

This is coaching amnesia. It is a massive form of technical debt.

People defend the current state. They point to ChatGPT Projects or Claude Artifacts as the solution. They argue that these features provide memory. They are wrong. Those are features, not foundations: nicely decorated jail cells. Relying on one AI tool is like bringing a wooden sword to a boss fight. It works until it doesn't.

I talked about this last year at the Scrum Masters of the Universe and the Regional Scrum Gathering in Banff. My session, "6 Ways you’re using ChatGPT wrong and how to fix it!", was built on this exact premise: toolbelts, not tunnel vision. If you want predictable delivery in your advisory work, you need predictable context. You cannot predict what you cannot see.


The Lightbulb Moment

Fragmented data almost cost me my best ideas.

I was at a conference recently, trying to recall a specific idea I had for understanding "AI Writing Slop" smells. I had worked out the theory last year in a ChatGPT thread, but I couldn't find it. I knew I had discussed this exact topic in a LinkedIn message thread months prior. I could not access any of it. I was standing there, mid-conversation, feeling my own brain's external backup go offline because it was buried in a silo. It was frustrating and, frankly, embarrassing.

I tried OpenClaw, a tool that promised infinite memory, but its stability was a mess. I wanted to run local models to escape the cloud entirely, but I quickly realized that swapping a cloud model for a local model does not solve the core issue if your context is still fragmented. The model is just the engine. The data is the fuel.

I needed a universal translator.

I built a custom local setup. Now, I use my agents in place of the standard ChatGPT or Claude interfaces. They pull my past thinking from a multitude of sources. My AI actually knows me. It remembers a coaching conversation I had eight months ago and applies it to a problem I am solving today. This is what actual AI agility looks like.


The Science of Sovereignty

Why does model switching matter so much? Because different tasks require different cognitive engines. Period.

In 2026, we are seeing massive 1,000,000-token context windows. It's amazing. But even with that much room, conversation compaction still exists. Models still lose the thread of a nuanced coaching relationship over time.

If you are analyzing a year's worth of retrospective data, you need the massive context window of a flagship model. If you are doing rapid multimodal generation tomorrow, you might need a different specialized tool. If your data is locked in one platform, you cannot instantly switch. You are stuck.

It is for this reason that I treat data sovereignty as a mandatory requirement for modern delivery leaders. You have to separate the data from the model.

Enter MCP (Model Context Protocol).

Anthropic created MCP as an open standard. It is the bridge. MCP acts as the connection between your data and the AI. You get your data, the AI gets the context, and nobody gets locked in. Because it’s an open standard, a local server on your machine can feed your private context into any compliant model without migrating data.


Build Your Own Coaching Brain


I know what you’re thinking. "Fred, I have three backlogs to refine and a conflict-heavy retro on Friday. Why am I playing with databases?"

Because the time you spend re-explaining your context to a dumb bot is time you aren't spending with your humans. This is an investment in your future throughput.

Buckle up. It's going to get technical, but hey, your agentic IDE loves this stuff. You do not need to be a software engineer to do this. You just need to stop acting like a passive consumer of AI and start acting like an orchestrator.

1. Extract from the Walled Gardens

Walled gardens legally have to let you export your data. Start with LinkedIn. They email you a full export: posts, comments, articles. Next, go to your AI providers. Export your ChatGPT and Claude chat logs. Get it all onto your local machine. That data is your intellectual property. Reclaim it.
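As a concrete example, ChatGPT's export arrives as a zip containing a conversations.json file. Here is a minimal sketch of pulling the plain-text messages out of one conversation. The schema shown (a "mapping" of nodes, each holding a "message" whose "content" carries text "parts") is an assumption based on recent exports and may change without notice:

```python
import json

def extract_messages(conversation: dict) -> list[str]:
    """Pull plain-text message parts out of one exported conversation.

    Assumes the schema recent ChatGPT exports have used: each conversation
    has a "mapping" of nodes, and each node may hold a "message" whose
    "content" carries a list of text "parts".
    """
    texts = []
    for node in conversation.get("mapping", {}).values():
        message = node.get("message") or {}
        content = message.get("content") or {}
        for part in content.get("parts", []):
            if isinstance(part, str) and part.strip():
                texts.append(part.strip())
    return texts

# Tiny inline sample standing in for one entry of conversations.json
sample = {
    "title": "Retro patterns",
    "mapping": {
        "a": {"message": {"content": {"parts": ["How do I surface WIP-limit violations?"]}}},
        "b": {"message": {"content": {"parts": ["Start by charting aging work items."]}}},
    },
}

print(extract_messages(sample))
```

Run this over every conversation in the file and you have your raw material: years of your own thinking, finally on your own disk.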

2. Spin Up the Storage Layer

You need a place to put this data. I use Supabase, a database that stores your knowledge as "vectors." Think of these as clouds of meaning rather than just words.
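To see the shape of that storage layer without signing up for anything, here is a local stand-in using Python's built-in sqlite3. In Supabase you would use a pgvector column for the embedding; in this sketch the vector is simply stored as JSON. The table and column names are illustrative, not a prescribed schema:

```python
import json
import sqlite3

# Local stand-in for the Supabase table: same idea, zero setup.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE knowledge (
           id INTEGER PRIMARY KEY,
           source TEXT,        -- e.g. 'linkedin', 'chatgpt-export'
           content TEXT,       -- the original chunk of text
           embedding TEXT      -- JSON-encoded vector of floats
       )"""
)

def store_chunk(source: str, content: str, embedding: list[float]) -> None:
    """Insert one chunk of reclaimed knowledge with its vector."""
    conn.execute(
        "INSERT INTO knowledge (source, content, embedding) VALUES (?, ?, ?)",
        (source, content, json.dumps(embedding)),
    )

store_chunk("chatgpt-export", "Aging work items reveal hidden WIP.", [0.1, 0.7, 0.2])
rows = conn.execute("SELECT source, content FROM knowledge").fetchall()
print(rows)
```

Once the shape makes sense locally, pointing the same insert logic at a hosted Postgres table is a small step.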

This is why a coach cares: when you search, the system finds entries that are conceptually close to your query. If you need to find a specific retrospective pattern from 2022, you don't need to remember the exact wording. You just describe the "vibe" of the conflict, and the vector math finds it. It's like having a search engine for your own intuition. These queries come back in under 10 ms, faster than you can remember where you put your coffee.
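The "vector math" behind that search is mostly cosine similarity: two pieces of text whose vectors point in the same direction mean roughly the same thing. A toy version in plain Python (real embeddings have hundreds of dimensions, not three, and the vectors below are made up for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction, i.e. same 'meaning'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy 3-dimensional 'embeddings' for two stored coaching notes.
entries = {
    "2022 retro: engineering vs product standoff": [0.9, 0.1, 0.0],
    "Notes on WIP limits": [0.1, 0.9, 0.1],
}
# Embedding of the vague query "that conflict between two departments"
query = [0.8, 0.2, 0.1]

best = max(entries, key=lambda text: cosine(entries[text], query))
print(best)
```

Notice the query shares no keywords with the winning entry. That is the whole point: the "vibe" matches even when the wording doesn't.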

3. Encode Your Expertise

You cannot just dump text files into a database and expect an AI to understand them. Embedding, the process of turning text into those meaning-clouds, can be handled by Google's Gemini Embedding 2 model. You don't need to code this from scratch; tools like a simple Python script (which ChatGPT can write for you) handle the heavy lifting. This is where your second brain lives.
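One part of that heavy lifting worth understanding is chunking: embedding models work best on bite-sized passages, so long exports get split before encoding. A naive sketch (real pipelines often overlap chunks and split more cleverly; max_chars is an arbitrary illustrative limit):

```python
def chunk_text(text: str, max_chars: int = 500) -> list[str]:
    """Naive chunker: split on blank lines, then pack paragraphs into
    chunks no longer than max_chars. Each chunk gets its own embedding."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

# Three short paragraphs packed under an 80-character limit
doc = ("First coaching note about the retro.\n\n" * 3).strip()
result = chunk_text(doc, max_chars=80)
print(len(result), [len(c) for c in result])
```

Each resulting chunk is then sent to the embedding model, and the text plus its vector lands in the storage layer from step 2.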

4. Connect the Universal Translator

Finally, you run a local MCP server. This server sits on your machine and acts as the secure bridge. You ask a model a question. It asks your MCP server for context. Your MCP server queries Supabase, retrieves your past thinking, and hands it back.
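Under the hood, MCP traffic is JSON-RPC 2.0 messages. In practice you would build the server with an MCP SDK rather than by hand, but this toy dispatcher shows the request/response shape. The "search_context" method name and the search function are hypothetical stand-ins for this sketch:

```python
import json

def handle_rpc(raw_request: str, search) -> str:
    """Toy dispatcher illustrating the JSON-RPC 2.0 framing MCP uses.
    A real server would use an MCP SDK; 'search_context' is a
    hypothetical tool name invented for this sketch."""
    req = json.loads(raw_request)
    if req.get("method") == "search_context":
        result = search(req["params"]["query"])
    else:
        result = None
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})

# Stand-in for the vector query against the storage layer from step 2
def fake_search(query: str) -> list[str]:
    return [f"Coaching note matching: {query}"]

request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "search_context",
    "params": {"query": "engineering vs product standoff"},
})
print(handle_rpc(request, fake_search))
```

Swap fake_search for the real vector query and point a compliant client at the server, and any model you choose can draw on the same private context.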

A note on security: people often ask if this introduces risks. Here's the reality: whenever you send data to a cloud model as "context," you are transmitting it. The goal here isn't magic air-gapping; it's architectural control. By keeping the MCP server local and using Supabase Row Level Security (RLS), you ensure your data isn't sitting in a public training pool. You get the power of cloud AI with the security of local orchestration.


Stop Renting Your Intelligence

We did what worked. We needed an answer fast, so we opened a new tab and started talking. We traded long-term sovereignty for short-term convenience.

Y'all, this has to stop.

Every time you feed a brilliant coaching insight into a closed platform without a backup, you are enriching their product while impoverishing your own workflow.

The technical barrier to entry has vanished. The tools to build your own intelligence layer exist today.

Stop prompting in silos. Start orchestrating your context.

Try this Monday: Go to your LinkedIn settings and request your data archive. Download your ChatGPT export. Then, take this article and feed it into a tool like Google NotebookLM or Claude along with your data export. Ask it to outline the steps to make this setup a reality for your specific context.

Your data is your legacy. Stop giving it away for free.


