Aura – Like robots.txt, but for AI actions



This content originally appeared on DEV Community and was authored by Dervish of AI

The Web is Breaking Under the Weight of AI.

The web has always been about evolution, and now, we stand on the precipice of its next great transformation: the Agentic Web.

This isn’t a far-off fantasy. Researchers and engineers are already defining this new era, one where autonomous, goal-driven AI agents interact directly with each other to execute complex tasks on our behalf. The core idea is a shift from manual interaction to delegated intent; you state a goal, and a team of agents accomplishes it for you. This is the logical evolution of the internet, a more interactive and automated experience.

But there’s a problem: an unspoken crisis. The Agentic Web is being built on a foundation of sand. Today’s AI agents are like tourists in a foreign city who can’t read the signs. They navigate by crudely analyzing the visual layout of websites, parsing fragile Document Object Models (DOMs), and simulating clicks. This approach is not just inefficient; it’s brittle, resource-intensive, and fundamentally disrespectful to the underlying structure of the web.

I built the AURA (Agent-Usable Resource Assertion) protocol to fix this. AURA isn’t just another tool; it’s a new language, a new social contract for the web. It’s a way for websites to speak directly to AI agents to tell them what they can do, not just what they can see. This is my proposal for a better path forward.

The Unspoken Crisis of the Agentic Web

The way agents interact with the web today is fundamentally broken. As builders of AI, we spend countless hours writing and maintaining scrapers that break every time a website pushes a minor UI update. We burn through immense computational resources to render pages, analyze layouts, and guess which button to click. It’s a technical nightmare that slows innovation to a crawl.

From the perspective of a website owner, the situation is even worse. This model strips away our control and floods our servers with unidentifiable, often aggressive traffic that hammers our infrastructure while ignoring the carefully crafted APIs we built for machine interaction. We are losing sovereignty over our own digital properties.

This technical disconnect creates a deep economic problem. The modern web’s business model is largely predicated on human eyeballs viewing advertisements. AI agents don’t see ads. They don’t click on banners. This creates a powerful incentive for websites to block agents entirely, leading to an escalating and ultimately futile arms race between scrapers and blockers.

The root of this crisis lies in a fundamental misunderstanding of what the web is. The web is composed of both “Nouns” (content, data, information) and “Verbs” (actions, capabilities, functions). The current standards discussion, which I follow closely, is primarily focused on the Noun Problem: how can AI read and process content? But the true, transformative power of the Agentic Web lies in its ability to handle verbs: to book a flight, to post a comment, to purchase a product. Scraping is a crude attempt to guess at a website’s verbs by looking at its nouns. This mismatch is the source of the chaos. We need a language for verbs.

AURA: A Language for AI Action, Not Just Observation

To solve this, we need to change the conversation. The robots.txt file, a cornerstone of the web for decades, is a model of prohibition. It’s a list of “don’ts” for well-behaved crawlers. AURA flips this model on its head. It is a protocol of permission, a manifest of “do’s” for capable agents.

At its heart, AURA is a handshake of consent. By placing a simple aura.json file on my server, I, the website owner, am making a clear declaration to the world: “Here are the things I am prepared to let your agents do on my site, and here is the most efficient, secure, and reliable way to do them.” This simple act hands control back to the creator.

The protocol is built around the concept of “capabilities,” not pages. A capability is a specific, discrete action an agent can perform, such as post_comment or add_to_cart. This moves the point of interaction from the fragile presentation layer (HTML) to the robust logic layer (an API).

Crucially, I designed AURA for simplicity. The spectacular failure of the Semantic Web to gain widespread adoption taught us a valuable lesson: complexity is the enemy of progress. AURA’s barrier to entry is deliberately low. To start, a site owner needs only to create a single, static JSON file. This pragmatism is essential for building the critical mass needed for a new standard to succeed.

A Technical Deep Dive into the AURA Protocol

Let’s get concrete. AURA works through a combination of a public manifest file and an optional HTTP header for handling state.

The Manifest: /.well-known/aura.json

Following established web conventions, the AURA manifest lives at a standard, discoverable location: /.well-known/aura.json. This allows any agent to find it without prior knowledge of the site’s structure.
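As a sketch of what discovery looks like from the agent side, the snippet below derives the well-known URL from a site’s origin and fetches the manifest. The helper names (manifest_url, fetch_manifest) are hypothetical, not part of the AURA specification.

```python
# Minimal AURA discovery sketch. Helper names are my own; only the
# /.well-known/aura.json location comes from the protocol description.
import json
from urllib.request import urlopen

def manifest_url(origin: str) -> str:
    """Build the standard discovery URL for a site's AURA manifest."""
    return origin.rstrip("/") + "/.well-known/aura.json"

def fetch_manifest(origin: str) -> dict:
    """Fetch and parse the manifest; raises on network or JSON errors."""
    with urlopen(manifest_url(origin)) as resp:
        return json.load(resp)

print(manifest_url("https://dervis-blog.com"))
# prints https://dervis-blog.com/.well-known/aura.json
```

Because the location is fixed by convention, an agent needs nothing but the origin to attempt discovery; a 404 simply means the site has not (yet) published a manifest.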

Imagine I want to allow agents to comment on my personal blog. My aura.json file might look like this:

JSON

{
  "auraVersion": "1.0",
  "displayName": "Derviş's Personal Blog",
  "description": "A manifest of capabilities for my personal blog.",
  "capabilities": [
    {
      "name": "post_comment",
      "description": "Post a comment on a blog article.",
      "endpoint": {
        "method": "POST",
        "url": "https://dervis-blog.com/api/posts/{postId}/comments",
        "authentication": "NONE"
      },
      "parameters": {
        "content": { "type": "string", "required": true },
        "author_email": { "type": "string", "required": true }
      }
    }
  ]
}

This simple file provides everything an agent needs. It defines a capability named post_comment, describes its purpose, and specifies the exact API endpoint to use including the HTTP method (POST), the URL structure, authentication requirements (NONE), and the expected parameters (content and author_email).

The Agent’s Perspective

An AURA-aware agent, tasked with leaving a comment on my article with ID 123, would parse this manifest and immediately know how to construct a valid, efficient API call. No scraping, no guesswork.

Bash

# Agent identifies the 'post_comment' capability for post '123'
curl -X POST https://dervis-blog.com/api/posts/123/comments \
     -H "Content-Type: application/json" \
     -d '{
           "content": "This is a fantastic article about AURA!",
           "author_email": "agent@example.com"
         }'

The agent succeeds on its first try. My server receives a clean, expected request. We have achieved perfect, low-cost communication.

Solving the State Problem: The AURA-State Header

But the web is not static. A user’s capabilities change dramatically once they log in. A static manifest can show all possible capabilities, but it can’t represent the dynamic reality of a user’s session. This is a critical flaw in purely declarative specifications like OpenAPI when applied to the agentic web, a point rightly raised in early community discussions.

AURA solves this with the AURA-State HTTP header. The server can include this header in its responses to inform the agent about capabilities that have become available or unavailable in the current context. For example, after a user logs into my blog, the server’s response would look like this:

HTTP

HTTP/1.1 200 OK
Content-Type: application/json
Set-Cookie: session_id=...; HttpOnly
AURA-State: capabilities.add=create_post,edit_profile; capabilities.remove=login

{ "status": "Login successful", "user": "Derviş" }

The AURA-State header explicitly tells the agent that two new capabilities, create_post and edit_profile (defined in the main aura.json), are now available, and the login capability is no longer relevant. The agent updates its understanding of the world without needing to refetch or reparse the entire manifest. This simple mechanism makes AURA natively stateful and suited for the complex, interactive applications that dominate the modern web.
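The update logic above can be sketched in a few lines. The parsing grammar here is inferred from the single example header; the real syntax would need a formal definition, so treat this as an assumption.

```python
# Client-side AURA-State handling sketch. The directive grammar
# ("key=a,b; key=c") is inferred from the example, not a published ABNF.
def parse_aura_state(header: str) -> dict:
    """Split 'capabilities.add=a,b; capabilities.remove=c' into sets."""
    ops = {"add": set(), "remove": set()}
    for directive in header.split(";"):
        key, _, value = directive.strip().partition("=")
        names = {v.strip() for v in value.split(",") if v.strip()}
        if key == "capabilities.add":
            ops["add"] = names
        elif key == "capabilities.remove":
            ops["remove"] = names
    return ops

def apply_state(current: set, header: str) -> set:
    """Return the updated capability set after applying a response header."""
    ops = parse_aura_state(header)
    return (current - ops["remove"]) | ops["add"]

caps = {"login", "post_comment"}
caps = apply_state(caps, "capabilities.add=create_post,edit_profile; capabilities.remove=login")
print(sorted(caps))  # ['create_post', 'edit_profile', 'post_comment']
```

The agent keeps one in-memory capability set per session and folds each response header into it, so the manifest is fetched once and the session state stays cheap to track.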

A Tale of Two Standards: AURA and the IETF’s AIPREF

I am not building in a vacuum. The Internet Engineering Task Force (IETF) is the cornerstone of the standards that make the internet work. The AI Preferences (AIPREF) working group, co-chaired by respected leaders like Mark Nottingham, is doing essential work to create a baseline for AI-web interaction.

Their current drafts, such as draft-ietf-aipref-vocab-02 and draft-ietf-aipref-attach-02, focus on creating a vocabulary and attachment mechanisms (like the Content-Usage header) to express preferences about how content is used. For example, a site can declare train-ai=n to opt out of its content being used for AI training.

This is vital work that solves the “Noun Problem”: how site owners can control the use of their data.

AURA is the necessary complement. It is designed to solve the “Verb Problem”: how site owners can define the actions an agent is permitted to perform. The two are not in conflict; they are two sides of the same coin, essential for a complete solution.

| Feature | IETF AIPREF (draft-ietf-aipref-*) | AURA Protocol |
| --- | --- | --- |
| Primary Goal | Express preferences for content usage (reading, training). | Declare available actions and capabilities. |
| Analogy | An advanced robots.txt for AI. | A dynamic, agent-focused sitemap.xml + OpenAPI. |
| Core Question Answered | “How are you allowed to use my data?” | “What can you do on my site?” |
| Mechanism | Content-Usage HTTP header & robots.txt directive. | aura.json manifest & AURA-State HTTP header. |
| Statefulness | Stateless. Preferences apply broadly to content paths. | Stateful. Capabilities can change based on context (e.g., login). |
| Focus | The “Noun” Problem (Content) | The “Verb” Problem (Actions) |

This distinction is important, but what’s more exciting is the philosophical alignment. Mark Nottingham’s separate draft-ietf-httpapi-link-hint proposes a way to embed “hints” about a linked resource directly into a link, with the stated goal of saving round trips and allowing clients to make more intelligent choices before interaction.

This is precisely the same principle that underpins AURA. The aura.json manifest is, in essence, a comprehensive set of “link hints” for an entire website’s capabilities. It provides structured, machine-readable metadata upfront to enable more intelligent, efficient agent behavior. AURA is not an alien concept to the IETF ecosystem; it is the logical application of this forward-thinking philosophy to the specific, urgent needs of the Agentic Web.

Answering the Hard Questions (A Dialogue with the Community)

When I shared the initial concept for AURA on platforms like Hacker News, the community raised brilliant and challenging questions. An open protocol is nothing without its community, so I want to address the most important points head-on.

  • “Isn’t this just another robots.txt that can be ignored?”

This is the most common and most important question. The aura.json file itself is indeed a voluntary signal. A badly behaved agent can ignore it. But that misses the point. The real enforcement happens where it always has: on my backend server. If my manifest says the create_post capability requires authentication, my API at /api/posts will return a 401 Unauthorized to any request that isn’t properly authenticated.
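That enforcement boundary can be sketched in a framework-agnostic way. Everything here (handle_create_post, is_valid_token, the demo token) is hypothetical illustration, not AURA machinery; the point is that the server, not the manifest, is the authority.

```python
# Server-side enforcement sketch: the manifest is advisory, the backend
# decides. All names and the token check are illustrative stand-ins.
def is_valid_token(token: str) -> bool:
    """Stand-in for real session/token validation against a user store."""
    return token == "valid-demo-token"

def handle_create_post(headers: dict, body: dict) -> tuple:
    """Reject unauthenticated requests regardless of what any manifest says."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer ") or not is_valid_token(auth[7:]):
        return 401, {"error": "Unauthorized"}
    return 201, {"status": "created", "title": body.get("title")}

status, _ = handle_create_post({}, {"title": "Hi"})
print(status)  # 401
```

A rogue agent that ignores aura.json gains nothing: it hits the same 401 wall as any other unauthenticated client.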

The incentive for “good” agents to use AURA is immense. It is the difference between navigating a city with a detailed map and GPS (AURA) versus trying to find your way by looking at the shapes of buildings (scraping). For any serious AI developer, AURA saves time, money, and compute resources, and makes their agents orders of magnitude more reliable.

  • “How does this work for ad-supported sites?”

AURA separates function from presentation, which fundamentally challenges the impression-based advertising model. This is not a weakness; it is an opportunity. It paves the way for a new, more valuable “intent economy.”

When an agent makes a structured API call like search_products(query=’running shoes’, size=’11’), it is signaling pure, high-fidelity user intent. The API response can include not just the product data, but a structured ad object for a competing or complementary product, delivered at the peak moment of user interest. This is far more valuable than a generic banner view. AURA also enables direct monetization, where a capability in the manifest can be marked as requiring an API key or even a micropayment.

  • “Why not just use OpenAPI/Swagger?”

OpenAPI is a fantastic standard for documenting APIs for human developers. I use it myself. But it was not designed for this purpose. There are two key differences:

  1. Control & Consent vs. Documentation: OpenAPI is a technical document for developers. AURA is a statement of consent and control from the site owner to automated agents. It represents a different social contract.
  2. Statefulness: As discussed, OpenAPI is stateless. AURA’s AURA-State header is designed from the ground up for the dynamic, stateful nature of the web.
  • “What if a site owner creates a malicious aura.json?”

Trust is the currency of any ecosystem. If a site owner provides a manifest that lies (for example, by describing parameters incorrectly), the agent’s API call will fail. The agent will receive an error from the server and learn not to trust that manifest. In a mature AURA ecosystem, this will naturally lead to community-run reputation services. An agent could check a manifest’s hash against a reputation service before trusting it. Malicious sites would quickly find themselves ostracized, unable to have their capabilities used by the agentic ecosystem. The incentive is to be truthful.
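The hash check described above might look like this. The reputation service itself is hypothetical; here it is reduced to a local set of known-good digests.

```python
# Manifest trust sketch: hash the raw bytes exactly as served and compare
# against digests a (hypothetical) reputation service vouches for.
import hashlib

def manifest_digest(raw: bytes) -> str:
    """SHA-256 hex digest of the manifest as served, byte-for-byte."""
    return hashlib.sha256(raw).hexdigest()

def is_trusted(raw: bytes, known_good: set) -> bool:
    """An agent might refuse to act on manifests with unrecognized digests."""
    return manifest_digest(raw) in known_good

raw = b'{"auraVersion": "1.0"}'
known = {manifest_digest(raw)}
print(is_trusted(raw, known))          # True
print(is_trusted(b"tampered", known))  # False
```

Hashing the raw bytes (rather than a parsed, re-serialized form) matters: any tampering, even whitespace-level, changes the digest and breaks trust.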

The Road Ahead: Building an Intent-Based Web, Together

The web is at a crossroads. One path leads to a chaotic, inefficient arms race between scrapers and blockers, a web of walled gardens and broken agents. The other path leads to a more elegant, efficient, and consensual internet.

AURA is my proposal for that second path.

To Site Owners: I encourage you to think about the “verbs” of your site. What are the core actions you provide? Creating a simple aura.json is the first step toward reclaiming control, reducing server load, and participating in the coming intent economy.

To AI Developers: I invite you to build AURA-aware agents. The reference implementation and full specification are open-source and available on GitHub. Let’s work together to make agent interaction more reliable, efficient, and respectful.

AURA is more than a protocol; it’s a vision for a web where humans and AI can collaborate seamlessly, where intent is clear, and where control rests firmly in the hands of creators. Let’s build it together.

