Five Layers. One Question.
This text is not meant to be a technical explainer. It is a place to think through something I keep noticing: a quiet but significant shift in how AI systems are being structured, and what that structure means for the people who use them. Read it as a snapshot, not a conclusion. A starting point for your own thinking, not a definitive answer.
Across the AI industry, a set of concepts has quietly moved from experimental to foundational. Prompts. Memory. Skills. Protocols. Agents. Every major platform (Anthropic, OpenAI, Google, Microsoft) is building the same layered architecture, each with its own terminology, each arriving with documentation, a blog post, a launch announcement. Most people file them away as things to explore later.
Anthropic's stack makes the clearest case study because the layers are most explicitly named. But the pattern is not Anthropic's. It belongs to the whole industry.
Taken together, these concepts describe something far more interesting than a feature set: a layered architecture. A set of deliberate decisions about where knowledge lives, who controls it, and how AI systems are permitted to act in the world.
The first layer: Prompts
A prompt is the most basic unit of interaction. You write something. The LLM responds. The conversation ends and nothing persists.
For most people, this is still how they use AI. They type a request, read the output, copy what they need, and close the tab. The tool is useful. The relationship is shallow.
The limitation is obvious in practice. Every new conversation starts from zero. You re-explain context you have already explained. You re-specify preferences you have already specified. The tool is highly capable, but it does not remember you, your work, your preferences, or your way of thinking. Each session begins as if you had never met.
A prompt is a one-time transaction. Powerful in the moment. Forgotten immediately after.
Example: You ask the LLM to write a competitor analysis in your brand voice. It does. Next week, you open a new conversation and start again from scratch, re-explaining the tone, the format, what matters to you.
The second layer: Projects
Projects are persistent workspaces. You store files, context, and instructions that the LLM carries across every conversation within that project. The tool now has a desk. Your documents are on it, your preferences are written into the walls.
This is a meaningful shift. The relationship between you and the tool becomes cumulative rather than transactional. You invest context once and benefit from it repeatedly.
But Projects are still personal. They live with you. They do not transfer easily to a colleague, a team, or another AI system.
Example: You create a Project for a client. You upload brand guidelines, past briefs, tone-of-voice documents. Every conversation inside that Project starts with the LLM already knowing the client. No re-explanation needed. You brief once. The context stays.
The third layer: MCP
MCP, the Model Context Protocol, is a connection standard. It is the pipe between the LLM and the outside world.
Without MCP, the LLM works with what you give it directly: text you paste in, files you upload, things you describe. With MCP, the LLM can reach into external systems (your Google Drive, your GitHub repository, your Notion workspace, your company database) and pull from them directly.
Anthropic introduced MCP in late 2024 and donated it to the Linux Foundation's Agentic AI Foundation in December 2025, making it an open standard rather than a proprietary layer. That decision matters. It means MCP is not a lock-in mechanism. It is infrastructure that any AI system can connect to.
MCP gives the LLM access. It does not tell the LLM what to do with that access.
Example: You connect the LLM to your Google Drive via MCP. You ask it to pull the three most recent competitor briefs and summarise the key themes. The LLM reaches into Drive, finds the files, reads them, and responds without you uploading anything manually. The connection does the retrieval. You keep your attention on the thinking.
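For readers who want to see the shape of the pipe: MCP is built on JSON-RPC 2.0, so a client's request to a server is just a structured message with a method and parameters. A minimal sketch of what a tool invocation looks like on the wire; the tool name "search_files" and its arguments are invented for illustration, not any real server's API:

```python
import json

def make_mcp_request(request_id: int, method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 message of the kind an MCP client sends to a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# A hypothetical call asking a Drive-backed MCP server to search for briefs.
request = make_mcp_request(
    request_id=1,
    method="tools/call",
    params={
        "name": "search_files",  # illustrative tool name
        "arguments": {"query": "competitor brief", "limit": 3},
    },
)
```

The point of the sketch is the division of labour: the protocol standardises how the request travels; what "search_files" actually does is entirely the server's business.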
The fourth layer: Skills
Skills are where the architecture becomes philosophically interesting.
A Skill is not access. It is not a connection. It is procedural knowledge, a set of instructions that encodes how to do something. When your request matches a Skill's criteria, the LLM loads that Skill automatically and applies it without you asking.
Anthropic introduced Skills in October 2025. By December, they had launched a Skills Directory, a marketplace of partner-built Skills from Notion, Figma, Canva, Atlassian, and others. Admins can deploy these Skills across entire organisations. Employees open the LLM and the workflows are already there, already active, already shaping how the LLM behaves.
The combination of MCP and Skills is where the real capability emerges. MCP gives the LLM access to Figma. The Figma Skill teaches the LLM how to work inside Figma: how to read a design, how to translate it into code, how to handle components and comments and projects with consistency. The access and the knowledge arrive together.
But notice what has also happened. Figma has encoded the workflow. Atlassian has encoded decades of teamwork methodology into a Skill. Notion has decided how the LLM should behave inside their product. The knowledge of how is no longer something you define. It is something you install.
Example: Your team installs the Atlassian Skill via your organisation's admin console. From that point on, when anyone asks the LLM to help structure a project or write a Jira ticket, the LLM automatically applies Atlassian's workflow logic: their best practices, their templates, their way of organising work. You did not design that workflow. Atlassian did.
The fifth layer: Subagents
Subagents are independent instances of the LLM, each with its own context window, its own system prompt, its own set of tool permissions. They are spun up to handle specific, isolated tasks in parallel, without interfering with the main conversation or each other.
If Skills are about shared knowledge, subagents are about distributed execution. You give one subagent the role of market researcher. Another handles technical analysis. A third reviews code for security issues. They work simultaneously, each operating within defined boundaries, each returning their output to the orchestrating system.
This is where AI stops feeling like a tool and starts feeling like a team, or at least like the infrastructure of one.
Example: You ask the LLM to produce a full competitive landscape report. Rather than doing this sequentially, the LLM spins up a market-researcher subagent to gather industry data, a technical-analyst subagent to review competitor product capabilities, and a synthesis subagent to combine the outputs into a structured document. Three streams of work run in parallel. The result arrives faster and with cleaner separation between research and judgment.
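Stripped of the models themselves, the orchestration pattern is plain fan-out and synthesis. A toy sketch in Python, where ordinary functions stand in for isolated model instances, each with its own role and inputs:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder "subagents": each function stands in for an isolated
# model instance with its own context and permissions.
def market_researcher(topic: str) -> str:
    return f"[market data on {topic}]"

def technical_analyst(topic: str) -> str:
    return f"[capability review of {topic}]"

def run_report(topic: str) -> str:
    # Fan out: research streams run in parallel, in isolated scopes.
    with ThreadPoolExecutor() as pool:
        market = pool.submit(market_researcher, topic)
        tech = pool.submit(technical_analyst, topic)
        findings = [market.result(), tech.result()]
    # Synthesis: combine the isolated outputs into one document.
    return "\n".join(["Competitive landscape"] + findings)

report = run_report("acme-widgets")
```

The structure is the point: no stream sees another's working notes, and the synthesis step is the only place the threads meet. That separation is what makes the output feel like a team's work rather than one long monologue.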
What the layers reveal together
Set side by side, the architecture tells a clear story.
Prompts are transactional. Projects are personal. MCP is connection. Skills are encoded knowledge. Subagents are distributed execution.
Each layer adds capability. Each layer also moves a decision further away from the person doing the work. At the prompt level, you control everything. At the subagent level, you are orchestrating systems that are themselves orchestrating other systems, and the knowledge embedded in each layer was largely written by someone else.
This is not inherently a problem. It is how all mature technology works. You do not write the operating system or design the typeface; you inherit the decisions of the people who built the infrastructure you rely on.
But it is worth being conscious of. Because the Skills that get built, the workflows that get encoded, the defaults that get shipped, these will shape how millions of people think and work with AI. Not because anyone forced them to, but because the path of least resistance is to install the Skill and let it run.
The question underneath all of this
Owning the machine does not mean building every layer from scratch. It means understanding the layers well enough to make intentional choices about which ones you inherit and which ones you define yourself.
If your team's entire way of working inside Notion is encoded by Notion's Skill, you have gained efficiency. You have also handed the workflow to Notion. If you build your own Skills, encoding your own methodology, your own creative logic, your own way of thinking, the knowledge stays with you. It becomes organisational memory rather than platform dependency.
The architecture is being built right now. The defaults are being set. The Skills Directory is filling up.
The question is not whether to use these layers. It is whether you are actively shaping them to reflect your own thinking and methodology, or simply installing what someone else has already decided for you.
This is part of an ongoing series of observations on AI, creative systems, and what it means to own the machine.
