Do the LLMs Know Who You Are? | VSSL Agency

May 13, 2026

by Tim Peacock

AEO

Your buyers stopped Googling. A lot of them are asking ChatGPT, Perplexity, Claude, and Google’s AI Overviews instead, and the AI is answering without sending them to your site. The page-one ranking you spent years earning now competes to be a citation inside someone else’s answer. If the model doesn’t quote you, you weren’t in the conversation.

The hard part is that most marketing teams have no visibility into where they actually stand. You can’t see what ChatGPT said about your category yesterday. You can’t tell whether Perplexity is citing your competitor’s docs and skipping yours. So we built a tool that gives you a straight answer.

The VSSL AEO Scanner

The AEO Scanner is a free 60-second audit that grades how visible your site is to AI answer engines. Drop in a URL and you get back:

  • A 0–100 AEO score and tier
  • Four pillar scores covering AI visibility, content and answerability, technical foundations, and entity and authority
  • Your top 3 priority actions, ranked by impact
  • A 30-day fix list you can hand a developer
  • A side-by-side benchmark against two of your closest competitors
  • A schema diff showing what’s missing and what to add first

No form. No sales call. No email gate. Run it on your site, then run it on the competitors who keep showing up in the answers you wish you owned.

What the Score Actually Measures

AEO is not SEO with new keywords. Traditional SEO optimizes for ranking on a results page where a human clicks through. AEO optimizes for being the source an AI cites when it answers the question directly — often without the user clicking anything. That shift changes what matters: schema markup and structured data, content written to be extracted and quoted rather than ranked, factual claims a model can lift with confidence, and the off-site citation footprint that tells an LLM you’re a credible source in the first place.

The scanner checks all four. To make it concrete, here’s a recent sample run on stripe.com: an overall score of 72/100 in the “Strong” tier, with AI visibility at 86, technical foundations at 71, entity authority at 68, and content answerability flagged as the weak link at 64. Even a brand as well-indexed as Stripe has meaningful gaps. The interesting question isn’t whether you have gaps — you do — it’s which ones are costing you the most pipeline.

From Diagnosis to Fix: The Answer Visibility Framework

A score tells you where you stand. It doesn’t tell you what to do on Monday morning. The scanner’s four pillars map directly to a four-layer model we use on every AEO engagement, so the same diagnostic that grades your site also points to the workstream that fixes it.

Layer 1: Foundation

The technical structure that makes your content machine-readable in the first place. Schema markup, structured data, clean HTML semantics, crawlability for AI agents specifically. If a model can’t parse your page, nothing else matters. Deliverable: a schema audit and the markup to fix it.
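The scanner’s internals aren’t public, but the first step of any schema audit can be sketched in a few lines: pull the JSON-LD blocks out of a page’s HTML and list which `@type` entities they declare. Here is a minimal stdlib sketch (the function names and sample page are our own, not the scanner’s implementation):

```python
import json
from html.parser import HTMLParser


class JSONLDExtractor(HTMLParser):
    """Collects the text inside <script type="application/ld+json"> blocks."""

    def __init__(self):
        super().__init__()
        self._buf = None      # accumulates the current JSON-LD block, if any
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "script" and self._buf is not None:
            self.blocks.append("".join(self._buf))
            self._buf = None

    def handle_data(self, data):
        if self._buf is not None:
            self._buf.append(data)


def schema_types(html: str) -> list[str]:
    """Return the @type of each valid JSON-LD entity found in the page."""
    parser = JSONLDExtractor()
    parser.feed(html)
    types = []
    for block in parser.blocks:
        try:
            doc = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed markup is itself an audit finding
        items = doc if isinstance(doc, list) else [doc]
        types.extend(str(i.get("@type", "?")) for i in items if isinstance(i, dict))
    return types


page = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Organization", "name": "Example Co"}
</script>
</head><body><h1>Example</h1></body></html>
"""
print(schema_types(page))  # → ['Organization']
```

A page that returns an empty list here has no structured data for an AI crawler to lean on, which is exactly the kind of Layer 1 gap the audit is meant to surface.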

Layer 2: Answerability

Content written to be extracted and quoted, not just ranked. That means direct answers to the questions buyers are actually asking, factual claims phrased in ways a model can lift with confidence, and a content hierarchy that signals what’s important. Most B2B sites were written for humans scanning a page — they need a rewrite for machines summarizing one. Deliverable: prioritized content rewrites.

Layer 3: Authority

The off-site citation footprint that tells an LLM you’re a credible source. Review platforms, Reddit and community mentions, third-party coverage, the entity signals that connect your brand across the web. A site with perfect schema and weak external citations will still get skipped for a competitor the model has seen vouched for in more places. Deliverable: a citation strategy.

Layer 4: Tracking

Measurement that ties answer-engine performance back to traffic, leads, and pipeline. AEO is too new for most teams to have working dashboards, which means it’s easy to either over-credit it or quietly ignore it. Neither is useful. Deliverable: HubSpot reporting wired to the metrics that actually matter.
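One concrete piece of that wiring is separating AI-referred sessions from search and everything else. A hedged sketch of the classification step follows; the referrer hostnames are illustrative assumptions (the real list shifts as products launch and rename), and production reporting would live in your analytics or HubSpot pipeline rather than a script:

```python
from urllib.parse import urlparse

# Illustrative hostname map, not an authoritative list.
AI_REFERRERS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}


def classify_session(referrer: str) -> str:
    """Label a session by referrer: an AI engine name, 'search', or 'other'."""
    host = urlparse(referrer).hostname or ""
    if host in AI_REFERRERS:
        return AI_REFERRERS[host]
    if host.endswith("google.com") or host.endswith("bing.com"):
        return "search"
    return "other"


sessions = [
    "https://chat.openai.com/",
    "https://www.google.com/search?q=aeo",
    "https://perplexity.ai/search/abc",
    "",  # direct traffic has no referrer
]
print([classify_session(r) for r in sessions])
# → ['ChatGPT', 'search', 'Perplexity', 'other']
```

Grouping leads and pipeline by that label is what lets you say whether answer-engine visibility is producing revenue or just impressions.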

The layers stack in that order on purpose. Authority work doesn’t compound if the foundation is broken. Tracking doesn’t help if there’s nothing to measure. When the scanner returns your top 3 priority actions, they’re ranked against this model — fix the layer that’s holding the rest back first, then move up.

Why This Matters Right Now

Three things have shifted in the last 18 months, and most B2B marketing programs haven’t caught up:

ChatGPT now drives more B2B research sessions than Google among buyers under 40, across software, SaaS, and professional services. That’s a new top of funnel that doesn’t show up in your analytics.

Google’s AI Overviews appear on roughly 80% of B2B queries, and click-through rates have fallen 30–40% on the queries where they show. Ranking position one used to be the prize. Now it’s a footnote inside someone else’s answer.

And in our own scans, 78% of mid-market B2B sites have at least one critical gap — schema, content structure, or entity signals — that determines whether AI can actually quote them. Most of these are fixable in a sprint, not a quarter. But you have to know which ones to fix.

How Teams Are Using It

The pattern we see most often: run the scan on your own site, then run it on two or three competitors. The gap between the scores tells you where the opportunity is. The 30-day fix list tells you where to start.

From there, some teams take the report and ship the fixes in-house. Others book a 20-minute call to walk through the results with a strategist before deciding whether a deeper engagement makes sense. Either way, the scan is the starting point — and it costs you nothing but a minute.

Run Your Scan

aeo.vsslagency.com →

60 seconds. Real diagnostic, not a lead magnet. If the LLMs don’t know who you are yet, you’ll know exactly what to fix by the time you finish your coffee.