As we integrate AI into more workflows, a common question is: how do AI systems find each other and related artifacts? AI Catalog addresses this by providing a shared, trusted discovery layer that gives different kinds of AI artifacts a common place to be found.
Let’s say you’re building an agentic workflow. It needs an agent that can reason over financial tasks, an MCP server with access to market data tools, your internal evaluation dataset, and an agent plugin with skills and prompts to analyze portfolio performance.
In theory, all of these things may already exist. In practice, finding them can be messy.
The MCP server you need might be listed in the official MCP registry, in a third-party MCP directory, or simply published as a repo on GitHub. The agent you’re looking for might live in an A2A-style index. The skills and prompts might sit in a skills directory or be bundled inside a plugin marketplace. Each of these surfaces is useful in its own right.
AI Catalog is not a replacement for any of them. It is a thin layer that sits above existing registries, marketplaces, and directories so that a single client, agent, or registry can find artifacts across all of them without learning each one’s bespoke discovery rules. The MCP registry still describes MCP servers. A skills directory still describes skills. An A2A index still describes agents. AI Catalog gives them a common way to be pointed at, identified, and trusted.
Diversity is good, but discovery is hard. If every artifact type has its own discovery mechanism, every client, registry, marketplace, and platform has to learn all of those mechanisms. That does not scale. And discovery is only half the problem. Once you can find something, you also need to decide whether to rely on it.
Why We Need a Discovery Standard
Discovery may sound small compared to model capabilities, agent reasoning, tool use, or multi-agent collaboration.
It is not small.
Before a client can invoke a tool, it has to know the tool exists. Before an agent can delegate work, it has to find the right counterpart. Before an enterprise can approve an AI service, it has to know who published it, where it came from, and what claims are attached to it.
Without shared discovery infrastructure, every protocol, platform, and registry has to answer the same questions:
- Where should metadata live?
- How should clients discover it?
- How should artifacts be identified?
- How should versions be represented?
- How should publisher information be attached?
- How should trust claims be expressed?
- How should large catalogs be organized?
- How should registries expose different kinds of artifacts?
These are ecosystem questions, not protocol-specific ones. That’s why a common catalog layer is important.
How AI Catalog Works
A healthy agentic AI ecosystem will not be made of one protocol, one registry, or one metadata format. AI Catalog does not replace the native metadata formats different communities already define. It provides a common discovery layer around them.
In other words, AI Catalog is a map. It is not trying to become the territory.
At its core, AI Catalog is a typed JSON container for AI artifacts. A catalog entry can tell you:
- here is an artifact
- here is its stable identifier
- here is its human-readable name
- here is what kind of artifact it is
- here is where its native metadata lives
- here is optional metadata about publisher, version, tags, trust, or provenance
A minimal catalog can be very simple:
```json
{
  "specVersion": "1.0",
  "entries": [
    {
      "identifier": "urn:example:skill:code-review",
      "displayName": "Code Review Assistant",
      "mediaType": "application/agentskill+zip",
      "url": "https://skills.example.com/code-review/skill.zip"
    },
    {
      "identifier": "urn:example:mcp:weather",
      "displayName": "Weather Service",
      "mediaType": "application/mcp-server-card+json",
      "url": "https://api.example.com/.well-known/mcp/server-card.json"
    },
    {
      "identifier": "urn:example:a2a:research",
      "displayName": "Research Assistant",
      "mediaType": "application/a2a-agent-card+json",
      "url": "https://agents.example.com/researchAssistant"
    }
  ]
}
```
The important design choice here is the mediaType field.
A catalog consumer does not have to retrieve every artifact and inspect it just to guess what it is. The catalog entry declares the artifact type up front.
For example, if a client understands application/mcp-server-card+json, it can retrieve that artifact and process the MCP server card using MCP-specific rules. If it understands application/a2a-agent-card+json, it can process the A2A agent card using that format.
If it does not understand a media type, the client can skip it, display it, index it, or pass it to another system that does.
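As a minimal sketch of that behavior, a client might dispatch on mediaType like this. The media type strings and entry fields come from the example catalog above; the handler functions are hypothetical, not part of the specification.

```python
from typing import Callable

# Map the media types this hypothetical client understands to handlers.
# The media type strings match the example catalog above.
HANDLERS: dict[str, Callable[[dict], str]] = {
    "application/mcp-server-card+json": lambda e: f"MCP server card at {e['url']}",
    "application/a2a-agent-card+json": lambda e: f"A2A agent card at {e['url']}",
}

def process(catalog: dict) -> tuple[list[str], list[str]]:
    """Handle entries whose media type we understand; skip the rest."""
    handled, skipped = [], []
    for entry in catalog["entries"]:
        handler = HANDLERS.get(entry["mediaType"])
        if handler:
            handled.append(handler(entry))
        else:
            # Unknown media type: skip it instead of failing the whole catalog.
            skipped.append(entry["identifier"])
    return handled, skipped
```

A client that only speaks MCP would process the server card entry and skip the skill and agent entries untouched, leaving them for other consumers.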
This keeps AI Catalog extensible. New artifact types can appear without requiring everyone to redesign the catalog format.
It also keeps native formats native. AI Catalog does not redefine an agent card. It does not rewrite a tool server card. It does not collapse metadata into one universal schema.
A universal schema sounds attractive at first. One format for everything! Nice and tidy. But in practice, that would require every community to agree on the internal shape of every artifact. That is slow and usually unrealistic.
Instead, AI Catalog lets each artifact keep its native format while providing a shared way to identify and discover it.
Separation of concerns remains undefeated.
Types of AI Catalogs
AI Catalog is designed with progressive complexity.
A small open source project might only need a static JSON file listing a few artifacts. A larger organization may need host identity, compliance attestations, signatures, and integration with enterprise registry infrastructure.
The specification supports this through minimal catalogs, discoverable catalogs, nested catalogs, and trusted catalogs.
Minimal Catalog
A minimal catalog is just a list of entries. Each entry includes an identifier, display name, media type, and either a URL or inline data.
This is enough for simple discovery. For example, an open source project could publish a small catalog that points to its agent card, tool server card, and evaluation dataset.
Discoverable Catalog
A discoverable catalog adds host information and can be published in predictable locations, such as a well-known URI, or advertised through link relations.
Instead of requiring every client to know a custom URL, the catalog can be placed somewhere predictable so clients, crawlers, agents, and registries can find it automatically.
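A sketch of that resolution step, assuming a well-known path for illustration (the exact path here is not taken from the specification):

```python
from urllib.parse import urlunsplit

# Illustrative only: the exact well-known path is an assumption,
# not a normative location from the specification.
WELL_KNOWN_PATH = "/.well-known/ai-catalog.json"

def catalog_url(host: str) -> str:
    """Build the predictable catalog location for a host.
    A client would then GET this URL and parse the catalog JSON."""
    return urlunsplit(("https", host, WELL_KNOWN_PATH, "", ""))
```

With a convention like this, a crawler only needs a domain name to check whether a publisher exposes a catalog at all.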
Nested Catalogs
AI Catalog also supports nested catalogs, meaning a catalog entry can point to another catalog.
For example, an enterprise can organize catalogs by department, team, product line, or region:
```
Enterprise Catalog
├── Finance Catalog
├── Engineering Catalog
└── Research Catalog
```
A publisher can also package related artifacts together:
```
Finance Workflow Package
├── Agent card
├── Tool server card
└── Evaluation dataset
```
A useful agentic workflow may depend on a combination of agents, tools, datasets, policies, documentation, and deployment metadata. Nested catalogs give publishers a way to describe those collections without inventing a separate packaging concept for every use case.
The design stays simple: a catalog can contain entries, and an entry can itself be another catalog.
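A consumer can exploit that recursive shape with a short traversal. In this sketch, the media type string for a catalog and the "data" field for inline sub-catalogs are illustrative assumptions, not normative names from the specification:

```python
# Assumed media type marking an entry as a nested catalog (illustrative).
CATALOG_TYPE = "application/ai-catalog+json"

def leaf_identifiers(catalog: dict) -> list[str]:
    """Collect identifiers of all leaf artifacts, descending into
    entries that are themselves catalogs (carried inline in "data")."""
    found = []
    for entry in catalog.get("entries", []):
        if entry.get("mediaType") == CATALOG_TYPE and "data" in entry:
            # The entry is itself a catalog: recurse into it.
            found.extend(leaf_identifiers(entry["data"]))
        else:
            found.append(entry["identifier"])
    return found
```

The same few lines handle a department hierarchy and a workflow package alike, because both are just catalogs containing entries.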
From Simple to Trusted Discovery
Finding artifacts is only part of the problem. As AI systems become more composable, clients also need to know whether they should rely on what they find.
A client may need to know:
- Who published this artifact?
- Is the publisher identity verifiable?
- Has this artifact been signed?
- What source or build process produced it?
- Are there compliance attestations?
- Has the artifact changed since it was reviewed?
- Can the artifact be tied back to a registry, source repository, or provenance statement?
This is where discovery becomes more than:
> Here is a list of things.
It becomes:
> Here is a list of things, plus information that helps you evaluate where they came from and whether you should rely on them.
AI Catalog introduces an optional Trust Manifest to carry this information.
The Trust Manifest sits alongside the artifact. It does not wrap or modify the artifact’s native format. This allows trust information to evolve at the catalog layer without forcing every protocol-specific schema to absorb the same trust fields.
A tool server card can stay focused on describing the tool server. An agent card can stay focused on describing the agent. A dataset descriptor can stay focused on describing the dataset.
The catalog can carry common discovery and trust metadata around all of them.
That distinction is important because discovery without trust only helps unsafe systems spread faster.
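One way a client could act on that information is a simple admission policy. The "trustManifest" and "publisher" field names below are illustrative assumptions for the sketch, not taken from the specification:

```python
# Publishers this hypothetical client's policy has approved (illustrative).
APPROVED_PUBLISHERS = {"example.com"}

def admit(entry: dict) -> bool:
    """Admit a catalog entry only when it carries trust information
    that names an approved publisher; reject by default otherwise."""
    manifest = entry.get("trustManifest")
    if not manifest:
        return False  # no trust information at all: reject by default
    return manifest.get("publisher") in APPROVED_PUBLISHERS
```

Real policies would go further, checking signatures, attestations, and provenance, but the shape is the same: the catalog carries the claims, and the consumer decides what to require.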
Why AI Registries and Marketplaces Need a Discovery Layer
Registries and marketplaces, such as those dedicated to MCP servers, skills, plugins, or agents, are where the discovery problem becomes especially visible.
A registry that only supports one artifact type can define one metadata model and one interaction pattern. However, agentic AI infrastructure is unlikely to stay that narrow.
Developers may want to search across many types of artifacts:
- “Show me all finance-related agents.”
- “Find tool servers published by this organization.”
- “List artifacts with provenance metadata.”
- “Find packages that include both an agent and supporting tools.”
- “Show me everything this domain publishes for AI clients.”
- “Filter for artifacts that support a specific protocol.”
- “Find the latest version of this capability.”
A common catalog format gives registry builders a shared foundation for these use cases.
It does not force every registry to have the same UI, ranking model, governance process, or approval workflow. Those choices can still vary. But it does provide common primitives:
- identifiers
- media types
- URLs
- versions
- publishers
- metadata
- trust manifests
- nested catalogs
That is the kind of boring infrastructure that makes ecosystems easier to build on. And boring infrastructure is often the most important kind.
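Those primitives are enough to sketch a cross-artifact search. The "tags" and "publisher" fields on entries are assumed optional metadata here, beyond the minimal example earlier:

```python
from typing import Iterable, Iterator, Optional

def search(entries: Iterable[dict],
           media_type: Optional[str] = None,
           tag: Optional[str] = None,
           publisher: Optional[str] = None) -> Iterator[dict]:
    """Yield entries matching all of the filters that were supplied.
    Fields not present on an entry simply never match."""
    for entry in entries:
        if media_type and entry.get("mediaType") != media_type:
            continue
        if tag and tag not in entry.get("tags", []):
            continue
        if publisher and entry.get("publisher") != publisher:
            continue
        yield entry
```

A registry would layer ranking, access control, and a UI on top, but the underlying query never has to know what kind of artifact each entry describes.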
A Shared Layer for an Open Ecosystem
The agentic AI ecosystem is still young. It is healthy that different communities are exploring different approaches to communication, tool use, agent interaction, packaging, identity, and deployment.
But as the ecosystem matures, some layers should become shared.
Discovery is one of those layers. Trust is another.
If we want agents, tools, and services to work together across an open ecosystem, we need more than interaction protocols.
We need maps. And those maps need to be discoverable, trustworthy, and shared.
Cataloging makes this diverse ecosystem navigable. It gives developers, registries, platforms, and enterprises a common way to find and evaluate AI artifacts while allowing those artifacts to remain rooted in their own communities and specifications.
That is the kind of work open foundations are meant to support: neutral, collaborative infrastructure that no single vendor needs to own and every participant can help shape.
AI Catalog is early, and the work is happening in the open. The next step is implementation experience: real catalogs, real artifacts, real registry experiments, and feedback from the people who will build, secure, publish, and consume these systems.
Get involved with the AI Catalog project via GitHub issues, PRs, and discussions, or by joining the AI Catalog working group, which brings together members from various AI protocol communities (MCP, A2A, and others).