What’s Google’s WebMCP?


Google’s Chrome team just launched an early preview of something called the Web Model Context Protocol, or WebMCP. If you’re not deep in the developer world, that name probably doesn’t mean much yet. But it’s worth understanding, because it could change how the web works as AI agents become more common.

So what is it? WebMCP is a new standard that lets websites talk directly to AI agents in a structured way. Right now, when an AI agent tries to do something on a website, like book a flight or add an item to a cart, it basically takes a screenshot of the page and guesses which buttons to click. If the layout changes even a little, the whole thing breaks. It’s slow, expensive, and not very reliable.

WebMCP fixes that. Instead of forcing the AI to guess, a website can hand over what Google is calling a “Tool Contract,” a structured list that says “here’s exactly what you can do on this site, and here’s how to do it.” Think of it like giving someone a clear set of instructions instead of dropping them in a room and saying, “figure it out.”

There are two ways developers can set this up. A simple version works with standard HTML forms, so if your site already has clean forms, you’re most of the way there. A more advanced version uses JavaScript for complex tasks such as multi-step checkouts and customer support tickets. Both run through a new browser feature called navigator.modelContext.
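Since the API only exists behind a flag in Chrome Canary, here is a runnable sketch of what registering a tool through navigator.modelContext might look like. The API shape, the tool name, and the cart logic are all illustrative assumptions (the actual preview API may differ), and the browser's registry is mocked so the example runs anywhere:

```typescript
// Illustrative only: the exact navigator.modelContext API surface is not
// final. A "Tool Contract" is, roughly, a named capability with a schema
// describing its inputs and a callback the agent can invoke.
type Tool = {
  name: string;
  description: string;
  inputSchema: object;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

// Minimal stand-in for the browser's registry, so this runs outside Canary.
const modelContext = {
  tools: new Map<string, Tool>(),
  registerTool(tool: Tool) {
    this.tools.set(tool.name, tool);
  },
};

// The site declares exactly what an agent may do, and how.
modelContext.registerTool({
  name: "add-to-cart",
  description: "Add a product to the shopping cart by SKU.",
  inputSchema: {
    type: "object",
    properties: { sku: { type: "string" }, quantity: { type: "number" } },
    required: ["sku"],
  },
  async execute(args) {
    // A real site would call its cart API here; this is hypothetical.
    return { ok: true, sku: args.sku, quantity: args.quantity ?? 1 };
  },
});
```

The point of the contract is that the agent never has to infer anything from pixels: it reads the schema, fills in the arguments, and calls the tool.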

The performance improvements are hard to ignore. Early numbers show roughly a 67% drop in computing costs compared to the old screenshot approach, and task accuracy jumps to around 98%.

This isn’t just a Google thing, either. Microsoft engineers helped build it, and it’s being developed through the W3C, the organization that sets web standards. That kind of backing suggests this could become a real industry standard, not just a Chrome-only feature.

For now, it’s only available in Chrome 146 Canary behind a testing flag, and developers can sign up for the early preview to get access to documentation and demos. No timeline yet on support from Firefox or Safari, but Edge is likely close behind, given Microsoft’s involvement.

Why should marketers and site owners care? SEO expert Dan Petrovic called it “the biggest shift in technical SEO since structured data.” As AI agents take on more of the browsing, searching, and purchasing that people do online, websites will need to communicate with those agents clearly. WebMCP is shaping up to be the way they do that.

It’s still early. This is a developer preview, not a finished product. But the direction is clear: the web is being rebuilt with AI agents in mind, and the sites that get ahead of it will be better positioned when it goes mainstream.


OpenAI Starts Testing Ads in ChatGPT


OpenAI has begun testing advertisements inside ChatGPT for logged-in adult users in the U.S. The test, called the “OpenAI Ad Pilot Program,” is limited to users on the free tier and the $8/month ChatGPT Go subscription, while Plus, Pro, Business, Enterprise, and Education plans remain ad-free.

Ads will appear below chatbot responses, clearly labeled as sponsored and visually separated from the AI’s answers. OpenAI says ads will not influence how ChatGPT generates its responses. Answers are still optimized based on what the system deems most helpful to the query. Ad targeting is based on conversation topics, past chats, and prior ad interactions, with advertisers only receiving aggregate performance data like impressions and clicks.

Privacy guardrails are in place: advertisers won’t have access to individual conversations, chat histories, or personal details. Ads won’t be shown to users under 18 or around sensitive topics like health, mental health, or politics. Users can dismiss ads, manage personalization settings, and delete ad data.

Despite a minimum buy-in of $200,000, the pilot has already drawn investment from major agency holding companies, including Omnicom Media, WPP, and Dentsu. Omnicom alone has secured placements for more than 30 clients across apparel, automotive, beauty, CPG, hospitality, retail, QSR, technology, and telecommunications. Early ad mockups from OpenAI have featured scenarios like trip planning, with lodging ads appearing during travel-related conversations.

The move has already prompted competitive positioning from rivals. Anthropic, maker of the Claude chatbot, has highlighted that its product will remain ad-free, including in a recent Super Bowl commercial.

This is a significant shift in how AI platforms monetize. Until now, chatbots like ChatGPT have relied almost entirely on paid subscriptions and API fees to generate revenue, leaving the massive free user base as a cost center. Ads change that equation by monetizing free sessions directly. It’s the same playbook that built Google and Meta: free product, massive audience, sell the access.

For marketers, it opens a new channel where ads reach users at the moment of intent, mid-conversation, mid-decision. Whether that translates into meaningful ROI at a $60 CPM remains to be seen, but the early agency interest suggests the industry is taking it seriously.
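For context on that figure, CPM is the cost per thousand impressions, so the budgeting math is simple. A minimal sketch (the impression volumes are hypothetical; only the $60 CPM comes from the reporting):

```typescript
// CPM (cost per mille) = dollars per 1,000 impressions.
function campaignCost(impressions: number, cpm: number): number {
  return (impressions / 1000) * cpm;
}

// At the reported $60 CPM, one million impressions would run:
const cost = campaignCost(1_000_000, 60); // 60000 → $60,000
```

For comparison, a $60 CPM sits well above typical display rates, which is why the ROI question matters: advertisers are paying a premium for that moment of intent.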

It’s too early for most brands to jump in, with a $200,000 minimum buy-in limiting the pilot to major advertisers. But as ChatGPT advertising opens up to clients of all sizes, it’s something we’ll want to test. Chatbots like ChatGPT have already captured a meaningful share of search activity, and where users go, ad dollars tend to follow.


Sources:

  1. Ad Age — ChatGPT starts serving ads, drawing early interest from major agencies
  2. ADWEEK — ChatGPT Gets Ads: Omnicom, WPP, and Dentsu Line Up Brands for OpenAI Pilot
  3. MediaPost — OpenAI Begins Testing ChatGPT Ads With Omnicom, WPP, Others
  4. The Keyword — OpenAI starts testing ads in ChatGPT for U.S. users
  5. Skift — OpenAI Launches Ads Pilot for ChatGPT, Travel Expected to Participate
  6. Yahoo Finance / Proactive Investors — OpenAI begins testing ads in ChatGPT, draws early attention from advertisers and analysts

What If Everyone’s Wrong About Email Send Times?


There’s a piece of advice that gets repeated constantly in email marketing circles. Send your newsletter on Tuesday, Wednesday, or Thursday. Avoid Mondays because people are catching up from the weekend. Avoid Fridays because people are checked out. Never send on the weekend.

It sounds reasonable. And it’s probably true that those mid-week days perform well on average.

But here’s the thing about “on average.” If everyone follows the same advice, everyone is competing for the same inbox space at the same time.

So what happens if you go the other way?

The Case for Testing Friday

We started wondering whether Friday might actually be an opportunity. Not because Friday is inherently a great day for email, but because most people assume it isn’t. That assumption clears the field.

If your competitors are all sending Tuesday through Thursday, your Friday newsletter might be the only one sitting in the inbox when your subscriber finally takes a breath at the end of the week.

It’s a theory. But theories need data.

What We Found

We started tracking our Friday sends against our mid-week sends across two different sites with different audience sizes. The results were close, which in itself is interesting.

For one site, Friday open rates came in slightly higher than mid-week sends on average. For the other, open rates were essentially identical. Neither finding is a slam dunk for Friday, but neither supports the conventional wisdom against it either.

Where Friday stood out more clearly was in clicks. Our most recent Friday send had the highest click count of any send we tracked across both sites. One data point doesn’t prove anything, but it’s worth noting.

The unsubscribe numbers also favored Friday. Fewer people unsubscribed on Friday sends than on mid-week sends. Again, small sample size, but directionally encouraging.

What This Actually Means

We’re not saying Friday is the best day to send email. We don’t have enough data to say that yet, and your audience might behave completely differently from ours.

What we are saying is that the conventional wisdom deserves to be tested rather than accepted. Every business has a different audience. Your subscribers might be checking email at different times, in different contexts, with different habits than the average subscriber on which industry benchmarks are based.

The only way to know what works for your list is to test it yourself.
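When you do test, it helps to check whether a day-of-week difference is bigger than normal noise before acting on it. Here's a sketch of a two-proportion z-test for comparing open rates; all the numbers are hypothetical, not our actual results:

```typescript
// Two-proportion z-test: is the Friday open rate genuinely different from
// the mid-week open rate, or within random variation?
function openRateZScore(
  opensA: number, sendsA: number,
  opensB: number, sendsB: number,
): number {
  const pA = opensA / sendsA;
  const pB = opensB / sendsB;
  // Pooled proportion under the null hypothesis (no real difference).
  const pooled = (opensA + opensB) / (sendsA + sendsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / sendsA + 1 / sendsB));
  return (pA - pB) / se;
}

// Hypothetical: Friday 460 opens of 2,000 sends vs. mid-week 430 of 2,000.
const z = openRateZScore(460, 2000, 430, 2000);
// |z| < 1.96 → not significant at the 95% level; keep collecting data.
```

With samples this size, a 1.5-point open-rate gap isn't conclusive either way, which is exactly why a single send rarely settles the question.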

Your Audience Is Unique

The guides that say “send on Tuesday” are based on aggregated data across enormous numbers of emails and industries. They’re a reasonable starting point. They are not a prescription.

A B2B software company selling to developers operates differently from a local service business. A weekly newsletter with a loyal audience behaves differently than a promotional email going to a cold list. The right send time for one is not necessarily the right send time for the other.

Start with the conventional wisdom if you need a starting point. But then test it. Look at your own open rates, click rates, and unsubscribe patterns by day of the week. Let your data tell you what your audience actually does, not what the average audience does.

You might find that Tuesday works best for you. You might find that Friday is your sweet spot. Either answer is the right answer, as long as it comes from your own data.

We’ll keep testing and share what we find.

Google Just Shook Up Merchant Center — And A Whole Lot of Retailers Got Caught in the Fallout


February 10, 2026

Alright, so if you run a Google Merchant Center account and somewhere around January 8–12 you looked at your dashboard and thought “well, that ain’t right” — I want you to know you’re not crazy, and you’re definitely not alone.

A whole bunch of retailers watched their approved products drop off a cliff practically overnight. Products that had been running just fine for months suddenly flipped to “Limited” or “Pending” status. No warning, no changes on our end. Just… gone.

So what the heck happened? Buckle up, because Google had itself quite a week.

Google Went and Changed Everything at Once

On January 11, Google CEO Sundar Pichai got up at the National Retail Federation conference in New York and announced what honestly amounts to the biggest shakeup of Google’s shopping infrastructure in years. And I mean years.

Here’s the rundown:

They launched something called the Universal Commerce Protocol (UCP). It’s basically a new open standard that lets AI agents handle shopping for people, from finding a product to checkout. Google built it with Shopify, Etsy, Wayfair, Target, and Walmart, and over 20 other big names signed on, including Best Buy, Mastercard, Visa, and Home Depot. That’s not a small deal. (TechCrunch, Axios)

They added dozens of new data attributes to Merchant Center. These go way beyond your usual product titles and descriptions. We’re talking answers to common product questions, compatible accessories, substitutes — all designed to feed into AI-powered shopping through Gemini, AI Mode, and their new Business Agent. (Chain Store Age, Constellation Research)

Speaking of the Business Agent, that went live on January 12 with Lowe’s, Michaels, Poshmark, and Reebok. It’s basically a branded AI chatbot that shows up right in Google Search results and can answer customer questions in the retailer’s voice. You can activate it through Merchant Center. (Search Engine World)

They rolled out AI Mode Checkout and Direct Offers, letting shoppers buy stuff without ever leaving the AI conversation in Search or Gemini. Plus a new ads pilot where brands can serve up exclusive deals based on what someone’s chatting about. (Google Blog)

And they launched Gemini Enterprise for Customer Experience, a full Google Cloud suite that provides retailers with AI-powered shopping agents and customer service tools. (Google Blog — Sundar Pichai’s NRF Remarks)

That is a lot of stuff to drop in one weekend. And here’s the thing: when Google makes changes this big on the backend, they don’t just flip a switch and leave everything else alone. Their systems go back through, recrawl, and re-evaluate product feeds. Every. Single. One.

Oh, and Five Days Before That…

On January 6, Google announced another big change: starting in March 2026, if you sell products both online and in-store and the details differ between channels (price, availability, condition, whatever), you’re going to need separate product IDs for each version. Online attributes become the default.

They started emailing affected merchants right away to flag products that needed updating. (Search Engine Land, Search Engine Roundtable, Google Merchant Center Help)
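Concretely, the separate-ID requirement means a product whose details differ between channels carries two feed entries instead of one. A hypothetical sketch of what that could look like (IDs and values invented; check Google's product data specification for the exact attribute requirements):

```text
id             title        price      availability   condition
sku123-online  Widget Pro   24.99 USD  in_stock       new
sku123-store   Widget Pro   27.99 USD  in_stock       new
```

The online entry's attributes become the default, so if you only maintain one entry, Google assumes the online details apply everywhere.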

Now, enforcement doesn’t kick in until March. But the prep work and backend processing for this change? That was happening right alongside everything else in early January.

And Then Their Own Crawl System Broke

Here’s where it really gets fun.

While all this was going on, Google’s automatic import feature — you know, the one that promises to update your product data every 24 hours — was flat-out not working right.

Emmanuel Flossie, a Google Shopping Specialist and Google Ads Diamond Product Expert (so, not just some random guy), tested it on January 13 and found product data that hadn’t been touched since January 4. Nine days. Not 24 hours — nine days. He went public with it and noted this has actually been a problem for over five years. (PPC Land, Google Merchant Center Community)

So let me get this straight: Google rolls out the biggest commerce infrastructure change in years, their crawl systems are already running behind, and then they re-evaluate everybody’s product feeds at the same time? Yeah. That’s gonna cause some problems.

Why Products Ended Up in “Limited” or “Pending”

When Google does a mass re-evaluation like this, a few things happen under the hood:

They re-crawl your product landing pages to ensure your feed data matches what’s actually on your website. They re-run automated policy enforcement against your titles, descriptions, images, and landing page content. And they validate your data against any new or updated specifications.

Most of the time, this is invisible. Products get re-approved and you never notice. But when the automated systems are running hot — as they often do during big platform transitions — stuff that was perfectly fine yesterday can get flagged today.

Google’s own help documentation says products can land in “Pending” status when they haven’t been crawled yet and Google can’t verify the data. (Google Merchant Center Help) Well, when your crawl system is nine days behind, there’s gonna be a whole lot of unverified products sitting in limbo.

And for retailers in categories that touch Google’s “sensitive” policy areas, such as health products, financial services, and yes, religious products, the automated enforcement can be especially aggressive. Products that were approved for months can suddenly get hit with policy flags that don’t make a lick of sense.

So What Can You Do About It?

Look, I’m not going to sugarcoat it; dealing with Google’s automated systems when they get it wrong is about as fun as a flat tire on I-80. But here’s what’s actually helped:

Check your Merchant Center diagnostics. Go to Products, then Needs Attention, and figure out exactly what Google is flagging. A policy violation needs a different fix than a price mismatch or a crawl error.

Don’t rely on automatic imports. Seriously, just don’t. Set up scheduled feed uploads through your e-commerce platform — whether that’s Shopify, WooCommerce, Cart.com, or whatever you’re using. You want Google to get your data straight from the source, on your schedule.

Make sure your landing pages match your feed. Google’s re-crawl is checking for consistency. If your prices, availability, or product details don’t match between your feed and your actual pages, that will cause issues.

Contact Google Support. If you’re seeing policy flags on products that have been approved forever and nothing has changed, use the Help icon in Merchant Center to contact a real person. A human reviewer can usually sort out what the automated system got wrong.

Keep an eye on things. Many retailers are seeing products return to Approved status in early February. If you’re seeing a gradual recovery, Google’s systems may be catching up and self-correcting as the re-crawl works through the backlog.

The Big Picture

Listen, I get what Google is trying to do here. AI-powered shopping is coming whether we like it or not, and the Universal Commerce Protocol, Business Agent, and all these new tools are genuinely impressive. They’re building the infrastructure for a world where AI agents do the shopping for people, and that’s going to change retail in a big way.

But the rollout created real pain for real businesses. When you’re a small or mid-size retailer and 60–70% of your approved products disappear overnight — even temporarily — that hits your bottom line. That’s real revenue walking out the door.

So here’s my takeaway, and I think it’s a good one to tape to your monitor: when Google announces big infrastructure changes, expect turbulence in your Merchant Center account in the days that follow. Keep your feeds fresh, your landing pages buttoned up, and your Google Support contacts handy.

We’re all figuring this out together. And if you’re dealing with this right now, just know — it’s not you. It’s Google being Google.


Why Is Direct Traffic Suddenly Increasing on My Website?


You’re checking your Google Analytics and notice something alarming: direct traffic has spiked, and your bounce rate is climbing with it. Before you panic or start installing plugins, take a breath. The answer is almost always hiding in your data — you just need to know where to look.

If you’re using GA4, it automatically filters out many known bots. So in most cases, an inflated bounce rate tied to direct traffic is more about how engagement tracking is configured or what’s slipping through the cracks than it is about your website being broken. Let’s walk through how to diagnose the problem.

Start With Your GA4 Configuration

Before diving into the data, make sure your GA4 setup isn’t part of the problem. There are a few common configuration issues that can inflate direct traffic or skew your bounce rate.

Internal Traffic Filters

Your own visits can show up as direct traffic if you haven’t excluded them. To check this:

  1. Go to Admin → Data Streams → Configure tag settings → Define internal traffic
  2. Make sure your office, home, and VPN IP addresses are listed
  3. Then go to Admin → Data Settings → Data Filters and confirm the filter is set to Active, not just Testing

Referral Exclusions

When users pass through payment gateways, SSO providers, or other redirect-heavy services and return to your site, GA4 can re-classify them as new direct sessions. These often bounce because the user already completed their action. Check this under Admin → Data Streams → Configure tag settings → List unwanted referrals and add any services that are part of your normal user flow.

Engaged Session Settings

GA4 defines a “bounce” differently than Universal Analytics did. A bounced session in GA4 is one that wasn’t “engaged,” meaning it didn’t last at least 10 seconds, didn’t include 2 or more page views, and didn’t trigger a conversion event. If your site is content-heavy but users tend to read quickly and leave, that 10-second threshold might be too aggressive. You can adjust it under Admin → Data Streams → Configure tag settings → Adjust session timeout.
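That three-part definition is easier to see in code. Here's a simplified model of GA4's engaged-session logic (an illustrative paraphrase, not Google's implementation):

```typescript
// A GA4 session is "engaged" if ANY of the three conditions holds;
// a bounce is simply a session that is not engaged.
type Session = { durationSec: number; pageViews: number; conversions: number };

function isEngaged(s: Session): boolean {
  return s.durationSec >= 10 || s.pageViews >= 2 || s.conversions >= 1;
}

function bounceRate(sessions: Session[]): number {
  const bounced = sessions.filter((s) => !isEngaged(s)).length;
  return bounced / sessions.length;
}
```

Notice how a reader who spends 9 seconds on one page and leaves counts as a bounce, even if they got exactly what they came for. That's the scenario the adjustable timer is meant to address.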

Missing UTM Parameters

This one is easy to overlook. If you’re running email campaigns, posting on social media, or running ads without proper UTM parameters on your links, that traffic gets dumped into the direct bucket. It might have engagement patterns that differ significantly from your direct traffic, pulling your overall numbers in unexpected directions.
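Tagging is mechanical once you have a helper for it. A minimal sketch using the standard URL API (the domain, campaign names, and parameter values here are placeholders):

```typescript
// Append UTM parameters so GA4 attributes the session to the campaign
// instead of lumping it into "direct".
function withUtm(
  url: string,
  params: { source: string; medium: string; campaign: string },
): string {
  const u = new URL(url);
  u.searchParams.set("utm_source", params.source);
  u.searchParams.set("utm_medium", params.medium);
  u.searchParams.set("utm_campaign", params.campaign);
  return u.toString();
}

const tagged = withUtm("https://example.com/sale", {
  source: "newsletter",
  medium: "email",
  campaign: "spring_promo",
});
// → https://example.com/sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_promo
```

Run every email, social, and paid link through something like this and the "direct" bucket shrinks to traffic that is genuinely unattributed.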

Dig Into the Data With Explore Reports

Once you’ve confirmed your configuration is solid, it’s time to investigate the traffic itself. GA4’s Explore reports let you slice the data in ways that standard reports can’t, and they’re essential for spotting bot traffic.

Setting Up Your Exploration

  1. Go to Explore in the left sidebar and create a new blank exploration
  2. Add Session default channel group and Landing page + query string as dimensions
  3. Add Sessions, Engaged sessions, Bounce rate, and Engagement rate as metrics
  4. In the Tab Settings, add a filter: Session default channel group exactly matches Direct

A quick but important note: make sure you’re using Session default channel group, not just “Default channel group.” The version without “Session” is event-scoped and will return far fewer results, sometimes dramatically so. In one case, using the wrong dimension showed only 10 sessions, even though the actual number was over 142,000.

What to Look For: Landing Pages

With your exploration filtered to direct traffic, set Landing page as your row dimension and sort by sessions in descending order. You’re looking for:

  • URLs you don’t recognize or that don’t exist on your site, which can indicate spam or ghost hits
  • A single page absorbing a disproportionate share of all direct sessions
  • Pages with near-100% bounce rates and almost zero engaged sessions

What to Look For: Devices and Screen Resolution

Add Device category and Screen resolution as row dimensions alongside your landing page. Sort by sessions and look for:

  • Screen resolutions like 1024×768, 800×600, or (not set) appearing with unusually high session counts
  • A single resolution driving the vast majority of your direct traffic
  • Any resolution with a 100% or near-100% bounce rate and zero engaged sessions

The resolution 1024×768 is the default viewport size for headless browsers and automation tools like Selenium, and it’s rarely used by real humans today. If you see tens of thousands of sessions from this resolution, you’re almost certainly looking at bot traffic.

What to Look For: Engagement Patterns

Check the overall engagement rate for your direct traffic. Real human traffic, even from disinterested visitors, doesn’t produce a perfect 100% bounce rate across tens of thousands of sessions. You’d always expect at least some percentage to engage. If your engagement rate is below 1–2% and your session counts are high, that’s a strong signal of automated traffic.
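Put together, the resolution and engagement signals make a workable screening rule. Here's a sketch of that heuristic; the volume and engagement thresholds are judgment calls on our part, not GA4 rules, so tune them to your traffic:

```typescript
// One row per (resolution) slice of direct traffic, as exported from
// a GA4 exploration.
type Slice = {
  resolution: string;
  sessions: number;
  engagedSessions: number;
};

// Resolutions commonly reported by headless browsers and automation tools.
const suspectResolutions = new Set(["1024x768", "800x600", "(not set)"]);

function looksLikeBotTraffic(s: Slice): boolean {
  const engagementRate = s.engagedSessions / s.sessions;
  return (
    suspectResolutions.has(s.resolution) &&
    s.sessions >= 1000 &&       // high volume, and
    engagementRate < 0.02       // under ~2% engagement
  );
}
```

Requiring all three conditions matters: plenty of real visitors bounce, and a handful of sessions at an odd resolution is noise. It's the combination at scale that points to automation.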

A Real-World Example

Here’s what this looks like in practice. A recent investigation into a site’s GA4 data revealed the following over a 90-day period:

  • Over 140,000 total direct sessions with fewer than 800 engaged sessions, a 99%+ bounce rate
  • The vast majority of those sessions came from a single screen resolution: 1024×768 on desktop
  • Every single one of those sessions had a 100% bounce rate with zero engaged sessions
  • Over 95% of all direct traffic was concentrated in this one resolution
  • All of it was hitting the homepage exclusively

This pattern is a textbook indicator of automated bot traffic:

  1. 1024×768 is the default viewport size for headless browsers and automation tools
  2. Real human traffic doesn’t produce a perfect 100% bounce rate 
  3. All traffic landing exclusively on the homepage via direct is the most common behavior for bots that simply load a URL without navigating the site
  4. A single resolution accounting for 95%+ of all direct traffic is not a natural distribution

When the 1024×768 resolution was filtered out of the data, the results shifted dramatically:

  • Engagement rate jumped from under 1% to over 10%
  • Bounce rate dropped from over 99% to under 90%
  • Total bounce rate across all channels fell by nearly 17 percentage points
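The before-and-after comparison is easy to reproduce on your own export. A sketch with hypothetical numbers shaped like the case above (your real rows will come from the GA4 exploration):

```typescript
type Row = { resolution: string; sessions: number; engaged: number };

// Engagement rate across all rows, optionally excluding one resolution.
function engagementRate(rows: Row[], exclude?: string): number {
  const kept = rows.filter((r) => r.resolution !== exclude);
  const sessions = kept.reduce((n, r) => n + r.sessions, 0);
  const engaged = kept.reduce((n, r) => n + r.engaged, 0);
  return engaged / sessions;
}

// Hypothetical direct-traffic export, dominated by one suspect slice.
const data: Row[] = [
  { resolution: "1024x768", sessions: 135000, engaged: 0 },
  { resolution: "1920x1080", sessions: 5000, engaged: 700 },
  { resolution: "390x844", sessions: 2000, engaged: 100 },
];

engagementRate(data);             // under 1% with the bots included
engagementRate(data, "1024x768"); // over 10% once they're excluded
```

That single exclusion is the whole story: the site's real visitors were behaving normally all along, buried under the automated volume.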

What to Do About It

If your investigation points to bot traffic, here’s the recommended path forward:

Short Term: Clean Up Your Reports

In GA4, you can create filters or custom audiences that exclude the offending screen resolution so your reports reflect real user behavior. This doesn’t stop the bots, but it gives you clean data to work with while you address the root cause.

Medium Term: Check Your Server Logs

Contact your hosting provider and ask them to check for high-volume requests from specific IP ranges or user agents that hit your homepage. The server logs will show you exactly which IPs are responsible, and you can cross-reference those against known cloud hosting providers and bot networks. From there, you can block the offending traffic while allowing legitimate bots, such as search engine crawlers, through.
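If you have log access yourself, a first pass is just counting requests per client IP. A sketch assuming common log format, where the line starts with the IP (adjust the parsing to your server's actual format):

```typescript
// Count requests per client IP and keep only high-volume offenders.
function topIps(logLines: string[], threshold: number): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of logLines) {
    const ip = line.split(" ")[0]; // common log format begins with the IP
    counts.set(ip, (counts.get(ip) ?? 0) + 1);
  }
  return new Map([...counts].filter(([, n]) => n >= threshold));
}
```

Cross-reference the survivors against cloud-provider IP ranges before blocking anything, so legitimate crawlers like Googlebot stay untouched.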

Long Term: Implement Bot Mitigation

If you’re behind a CDN like Cloudflare, you can tighten your firewall rules or enable bot management features to challenge suspicious traffic before it ever reaches your site. This prevents the traffic from being recorded in GA4 in the first place, which is the cleanest solution.

The Bigger Picture

A sudden spike in direct traffic with a high bounce rate isn’t always a sign that something is wrong with your website or your analytics setup. Sometimes it’s just bots. The key is knowing how to investigate systematically: start with your configuration, dig into the data with the right dimensions and filters, and follow the evidence.

Once you’ve cleaned up the bot traffic, you’ll have a much clearer picture of how your real visitors are behaving. And from there, you can make informed decisions about what actually needs to be optimized.

Links Matter, But They’re No Longer in the Top Three


Backlinks have long been treated as essential to your website’s search rankings. For well over a decade, they were touted as one of the top three ranking factors.

However, Gary Illyes of Google recently confirmed that links still matter but are no longer among the top three ranking factors. High-quality content is the number one factor for ranking in Google Search. It is so important that sites with no backlinks at all can still rank high in the search results. You can read more about Google’s view on backlinks here.

Holiday Shopping is Here


The holiday shopping season is here. Google released its Four Ways to Prepare for the Holiday Season guide back in August, but it’s a great resource to revisit. It’s full of information that applies to more than just merchants selling products on Google Shopping.

The guide provides valuable insight into what Google is prioritizing this holiday season: enticing gift ideas, effective Performance Max (PMax) Google Ads strategies to expand your customer base, and the ever-popular fast and free shipping.