What’s Google’s WebMCP?

Google’s Chrome team just launched an early preview of something called the Web Model Context Protocol, or WebMCP. If you’re not deep in the developer world, that name probably doesn’t mean much yet. But it’s worth understanding, because it could change how the web works as AI agents become more common.

So what is it? WebMCP is a new standard that lets websites talk directly to AI agents in a structured way. Right now, when an AI agent tries to do something on a website, like book a flight or add an item to a cart, it basically takes a screenshot of the page and guesses which buttons to click. If the layout changes even a little, the whole thing breaks. It’s slow, expensive, and not very reliable.

WebMCP fixes that. Instead of forcing the AI to guess, a website can hand over what Google is calling a “Tool Contract,” a structured list that says “here’s exactly what you can do on this site, and here’s how to do it.” Think of it like giving someone a clear set of instructions instead of dropping them in a room and saying, “figure it out.”

There are two ways developers can set this up. A simple version works with standard HTML forms, so if your site already has clean forms, you’re most of the way there. A more advanced version uses JavaScript for complex tasks such as multi-step checkouts and customer support tickets. Both run through a new browser feature called navigator.modelContext.
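
Here’s a rough sketch of what the JavaScript path might look like. The navigator.modelContext entry point comes from the announcement, but the registerTool() call, the tool schema, and the /api/cart endpoint below are illustrative assumptions, not the finalized API.

```ts
// Hypothetical sketch of exposing a site capability as a WebMCP tool.
// The registerTool() name and tool shape are assumptions based on the
// early preview; the real API surface may change as the proposal evolves.
interface WebMCPTool {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>;
  execute(input: Record<string, unknown>): Promise<unknown>;
}

interface ModelContext {
  registerTool(tool: WebMCPTool): void;
}

const modelContext = (navigator as unknown as { modelContext?: ModelContext })
  .modelContext;

modelContext?.registerTool({
  name: "add_to_cart",
  description: "Add a product to the shopping cart by SKU.",
  inputSchema: {
    type: "object",
    properties: {
      sku: { type: "string" },
      quantity: { type: "number" },
    },
    required: ["sku"],
  },
  // The agent calls this directly instead of guessing which buttons to click.
  async execute(input) {
    const response = await fetch("/api/cart", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(input),
    });
    return response.json();
  },
});
```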

The performance improvements are hard to ignore. Early numbers show roughly a 67% drop in computing costs compared to the old screenshot approach, and task accuracy jumps to around 98%.

This isn’t just a Google thing, either. Microsoft engineers helped build it, and it’s being developed through the W3C, the organization that sets web standards. That kind of backing suggests this could become a real industry standard, not just a Chrome-only feature.

For now, it’s only available in Chrome 146 Canary behind a testing flag, and developers can sign up for the early preview to get access to documentation and demos. No timeline yet on support from Firefox or Safari, but Edge is likely close behind, given Microsoft’s involvement.

Why should marketers and site owners care? SEO expert Dan Petrovic called it “the biggest shift in technical SEO since structured data.” As AI agents take on more of the browsing, searching, and purchasing that people do online, websites will need to communicate with those agents clearly. WebMCP is shaping up to be the way they do that.

It’s still early. This is a developer preview, not a finished product. But the direction is clear: the web is being rebuilt with AI agents in mind, and the sites that get ahead of it will be better positioned when it goes mainstream.


OpenAI Starts Testing Ads in ChatGPT

OpenAI has begun testing advertisements inside ChatGPT for logged-in adult users in the U.S. The test, called the “OpenAI Ad Pilot Program,” is limited to users on the free tier and the $8/month ChatGPT Go subscription, while Plus, Pro, Business, Enterprise, and Education plans remain ad-free.

Ads will appear below chatbot responses, clearly labeled as sponsored and visually separated from the AI’s answers. OpenAI says ads will not influence how ChatGPT generates its responses; answers are still optimized for whatever the system deems most helpful to the query. Ad targeting is based on conversation topics, past chats, and prior ad interactions, with advertisers receiving only aggregate performance data such as impressions and clicks.

Privacy guardrails are in place: advertisers won’t have access to individual conversations, chat histories, or personal details. Ads won’t be shown to users under 18 or around sensitive topics like health, mental health, or politics. Users can dismiss ads, manage personalization settings, and delete ad data.

Despite a minimum buy-in of $200,000, the pilot has already drawn investment from major agency holding companies, including Omnicom Media, WPP, and Dentsu. Omnicom alone has secured placements for more than 30 clients across apparel, automotive, beauty, CPG, hospitality, retail, QSR, technology, and telecommunications. Early ad mockups from OpenAI have featured scenarios like trip planning, with lodging ads appearing during travel-related conversations.

The move has already prompted competitive positioning from rivals. Anthropic, maker of the Claude chatbot, has highlighted that its product will remain ad-free, including in a recent Super Bowl commercial.

This is a significant shift in how AI platforms monetize. Until now, chatbots like ChatGPT have relied almost entirely on paid subscriptions and API fees for revenue, leaving the massive free user base as a cost center. Ads change that equation by monetizing free sessions directly. It’s the same playbook that built Google and Meta: free product, massive audience, sell the access.

For marketers, it opens a new channel where ads reach users at the moment of intent, mid-conversation, mid-decision. Whether that translates into meaningful ROI at a $60 CPM remains to be seen, but the early agency interest suggests the industry is taking it seriously.

It’s too early for most brands to jump in, with a $200,000 minimum buy-in limiting the pilot to major advertisers. But as ChatGPT advertising opens up to clients of all sizes, it’s something we’ll want to test. Chatbots like ChatGPT have already captured a meaningful share of search activity, and where users go, ad dollars tend to follow.


Sources:

  1. Ad Age — ChatGPT starts serving ads, drawing early interest from major agencies
  2. ADWEEK — ChatGPT Gets Ads: Omnicom, WPP, and Dentsu Line Up Brands for OpenAI Pilot
  3. MediaPost — OpenAI Begins Testing ChatGPT Ads With Omnicom, WPP, Others
  4. The Keyword — OpenAI starts testing ads in ChatGPT for U.S. users
  5. Skift — OpenAI Launches Ads Pilot for ChatGPT, Travel Expected to Participate
  6. Yahoo Finance / Proactive Investors — OpenAI begins testing ads in ChatGPT, draws early attention from advertisers and analysts

Does JavaScript Impact AI Crawler Accessibility?

Modern websites increasingly rely on JavaScript to deliver content, but this creates a significant problem for AI crawlers trying to index and learn from that content. Many AI systems are missing crucial information simply because they can’t properly read JavaScript-heavy sites.

How Modern Websites Work

Traditional websites sent complete HTML documents to browsers. Everything you needed to see was included in that initial download: all the text, structure, and content, ready to display immediately.

Today’s websites work differently. Many use JavaScript frameworks like React, Vue, or Angular to build pages dynamically in your browser. When you first visit these sites, the server sends minimal HTML along with JavaScript code. Your browser then executes that code to fetch and display the actual content.
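
To make that concrete, here is roughly what a client-rendered page looks like in a React-style framework. The component and API route are hypothetical; the point is that the article text only exists after the browser runs this code.

```tsx
// Illustrative client-rendered React component (names and routes are made up).
// The server ships a near-empty HTML shell; the real content is fetched and
// inserted by the browser, so anything that skips JavaScript never sees it.
import { useEffect, useState } from "react";

export default function ArticlePage({ slug }: { slug: string }) {
  const [body, setBody] = useState<string | null>(null);

  useEffect(() => {
    // Content only appears after this request completes in the browser.
    fetch(`/api/articles/${slug}`)
      .then((res) => res.json())
      .then((data) => setBody(data.body));
  }, [slug]);

  // This placeholder is all a non-rendering crawler can ever observe.
  if (!body) return <p>Loading…</p>;
  return <article>{body}</article>;
}
```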

This approach enables responsive, app-like experiences on the web. Content updates without page refreshes, interactions feel instant, and the user experience is generally smoother. However, it creates accessibility challenges for AI crawlers.

The Crawler Problem

AI crawlers collect web content to train AI models. When you ask ChatGPT or Claude a question, they draw from knowledge these crawlers gathered by visiting websites.

Basic crawlers operate simply: they request a webpage, read the HTML response, extract the content, and move on. This process is fast and efficient. The issue arises when a website requires JavaScript execution to display its content. If the crawler doesn’t run that JavaScript, it receives an empty or nearly empty page.
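
As a simplified sketch, a non-rendering crawler does little more than this: download the raw HTML and keep whatever text is already in it, never executing scripts. The URL and the crude tag stripping are illustrative.

```ts
// Rough sketch of a basic, non-rendering crawler (illustrative only).
async function crawl(url: string): Promise<string> {
  const response = await fetch(url);
  const html = await response.text();

  // Naively strip scripts and markup. For a client-rendered page this often
  // leaves little more than "Loading…" because the content was never in the HTML.
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

crawl("https://example.com/article").then((text) => console.log(text));
```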

Many crawlers skip JavaScript execution entirely. Even crawlers capable of running JavaScript often limit this functionality because rendering is computationally expensive and slow. Processing one JavaScript-heavy page can take as long as processing dozens of static HTML pages.

The Two Versions Problem

JavaScript-heavy sites effectively exist in two versions simultaneously. Human visitors with modern browsers see the full, intended website with all its content and features. AI crawlers often see something completely different—perhaps just a loading indicator, a bare HTML framework, or minimal placeholder content.

This discrepancy means AI models train on incomplete or entirely missing information from these sites. Your detailed articles, product information, or business services might be invisible to AI systems, not because the content doesn’t exist, but because the crawler couldn’t access it.

Real Impact on Visibility

This technical limitation has practical consequences. When someone asks an AI assistant for recommendations or information in your field, your JavaScript-heavy site might not appear in the response. The AI simply doesn’t know your content exists.

A competitor using traditional HTML rendering could gain an advantage not through better content, but through better crawler accessibility. Similarly, if you’ve published expert content on a specialized topic, that expertise won’t contribute to AI understanding of the field if crawlers never accessed it.

As AI-powered search and recommendations become more common, crawler accessibility increasingly affects online visibility and discoverability.

Current Crawler Capabilities

Not all AI crawlers handle JavaScript the same way. Modern crawlers like GPTBot (from OpenAI), ClaudeBot (from Anthropic), and Google-Extended (Google’s AI training crawler) have some JavaScript rendering capability. These crawlers have evolved alongside traditional search engine crawlers, which faced similar challenges.

However, even advanced crawlers face constraints. The computational cost of rendering JavaScript means these crawlers may limit which sites they fully render, how often they render JavaScript, or how long they wait for content to load. Sites that don’t require JavaScript rendering remain easier and faster to crawl.

Solutions for Better Accessibility

Several approaches can improve how AI crawlers access your content:

Server-Side Rendering (SSR) executes JavaScript on your server before sending content to visitors or crawlers. The crawler receives a complete HTML document with all content already rendered. Frameworks like Next.js and Nuxt.js provide built-in SSR capabilities.
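
For example, a server-rendered page in Next.js (pages router) might look roughly like this; the route and data source are illustrative, and frameworks offer several variations on the pattern.

```tsx
// Sketch of server-side rendering with Next.js (pages router).
// The data source is hypothetical; what matters is that the HTML sent to a
// crawler already contains the article text, with no JavaScript execution needed.
import type { GetServerSideProps } from "next";

type Props = { title: string; body: string };

export const getServerSideProps: GetServerSideProps<Props> = async () => {
  // Runs on the server for every request, before any HTML is sent.
  const res = await fetch("https://example.com/api/articles/launch-notes");
  const article = await res.json();
  return { props: { title: article.title, body: article.body } };
};

export default function ArticlePage({ title, body }: Props) {
  // Rendered to complete HTML on the server.
  return (
    <article>
      <h1>{title}</h1>
      <p>{body}</p>
    </article>
  );
}
```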

Static Site Generation (SSG) pre-builds complete HTML pages during your deployment process. Every page exists as a full HTML document before any visitor or crawler requests it. No JavaScript execution is required to access the content.
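
A static-generation version of the same hypothetical page is nearly identical, except the data fetch runs once at build time instead of on every request:

```tsx
// Sketch of static site generation with Next.js (pages router).
// The page is rendered to HTML at build time, so every visitor and crawler
// receives a complete document from the start.
import type { GetStaticProps } from "next";

type Props = { body: string };

export const getStaticProps: GetStaticProps<Props> = async () => {
  // Runs during the build, not per request (data source is illustrative).
  const res = await fetch("https://example.com/api/articles/launch-notes");
  const article = await res.json();
  return { props: { body: article.body } };
};

export default function ArticlePage({ body }: Props) {
  return <article>{body}</article>;
}
```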

Progressive Enhancement structures sites so core content exists in the initial HTML, with JavaScript adding enhanced functionality. Crawlers can access essential information from the HTML while users with JavaScript-enabled browsers get the full experience.
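
A small sketch of that idea: the search form and result list below are assumed to already exist in the server-sent HTML, and this script only layers filtering on top, so a crawler that never runs it still gets the content.

```ts
// Progressive-enhancement sketch: the markup already exists in the HTML;
// this script only upgrades it. Element IDs are illustrative.
const form = document.querySelector<HTMLFormElement>("#search-form");
const items = Array.from(
  document.querySelectorAll<HTMLLIElement>("#results li"),
);

form?.addEventListener("submit", (event) => {
  event.preventDefault(); // Without JavaScript, the form still submits normally.
  const query =
    new FormData(event.currentTarget as HTMLFormElement)
      .get("q")
      ?.toString()
      .toLowerCase() ?? "";
  for (const item of items) {
    // Hide results that don't match; the full list stays in the HTML.
    item.hidden = !(item.textContent ?? "").toLowerCase().includes(query);
  }
});
```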

Dynamic Serving delivers different versions of pages based on the requesting client. You can serve fully-rendered HTML to identified crawlers while maintaining JavaScript-powered experiences for regular users. This approach requires careful implementation to avoid search engine penalties for cloaking.
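
One way to sketch dynamic serving is an Express-style middleware: requests whose user agent matches a known crawler get pre-rendered HTML, while everyone else gets the normal JavaScript app. The bot pattern and the renderPage() helper are placeholders for whatever a site actually uses.

```ts
// Sketch of dynamic serving: pre-rendered HTML for identified crawlers,
// the regular client-side app for everyone else. Simplified for illustration.
import express from "express";

const app = express();
const BOT_PATTERN = /GPTBot|ClaudeBot|Google-Extended|Googlebot/i;

app.get("*", async (req, res, next) => {
  const userAgent = req.get("user-agent") ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Hypothetical helper that returns fully rendered HTML for this URL.
    const html = await renderPage(req.path);
    res.type("html").send(html);
    return;
  }
  next(); // Fall through to the normal JavaScript app for human visitors.
});

// Placeholder for whatever pre-rendering setup the site actually uses.
async function renderPage(path: string): Promise<string> {
  return `<html><body><h1>Rendered content for ${path}</h1></body></html>`;
}

app.use(express.static("dist")); // Serves the client-side app bundle.
app.listen(3000);
```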

The Broader Implications

Website architecture decisions now affect both human user experience and AI system comprehension. Sites must consider two distinct audiences with different technical capabilities and requirements.

Businesses invisible to AI crawlers may find themselves increasingly absent from AI-generated recommendations, summaries, and search results. As users rely more heavily on AI assistants for information discovery, this absence becomes a competitive disadvantage.

The technical choices you make about JavaScript usage directly influence whether AI systems can learn from and reference your content. Understanding this relationship enables informed decisions about balancing modern web development practices with content accessibility.

Future Developments

The situation will continue evolving. Crawler technology will improve, making JavaScript rendering more feasible at scale. Web development standards may emerge specifically addressing AI crawler accessibility. The development community is actively working on solutions to these challenges.

For now, the most effective approach balances modern user experience with crawler accessibility. Sites can deliver excellent interactive experiences while ensuring their content remains accessible to AI systems that are increasingly shaping how people discover information online.