We Scored 100/100 on SEO. Here's the Part Nobody Talks About.
Lighthouse SEO 100, Best Practices 100 — that's the easy part. The real edge in 2026 is making your site readable by AI agents, not just search engines. Here's exactly how we did it.
We Got Perfect Scores. So What?
Lighthouse SEO: 100. Best Practices: 100. Performance: 81. Accessibility: 96.
Those are the scores for elmlabs.dev. We're not going to pretend this is rare or difficult. Thousands of well-built sites score 100/100 on SEO. The Lighthouse audit checks roughly 15 specific things — structured data, canonical URLs, meta descriptions, mobile viewport, crawlable links — and if you've built a modern site with a framework like Next.js, most of them come for free. The remaining ones are deliberate choices that take a few hours of implementation, not months.
So this is not a post about how to get a perfect Lighthouse score. That's a checklist, not a strategy.
This is about what happens after you've checked every box.
Here's the question most businesses aren't asking yet: when the thing searching for you is not a human typing into Google, but an AI agent running on behalf of a user — can it find you? Can it understand what you do? Can it act on what it finds?
In 2026, SEO has forked into two distinct disciplines. Traditional SEO is about being discoverable by humans through search engines. It's well-understood, well-documented, and table stakes. AI accommodation is about being discoverable, readable, and actionable by AI agents — the ChatGPTs, Perplexitys, Claudes, and tool-use agents that are increasingly the first point of contact between a user and the web.
Most sites do the first. Almost nobody does the second. We did both. Here's why, and here's what it looks like.
Key Takeaways
- Lighthouse SEO 100 is achievable with roughly 15 specific implementations — it's a hygiene standard, not a competitive advantage
- llms.txt and agent.json are emerging standards that open a new zero-cost distribution channel: AI agents
- When an AI agent can read your site, understand your services, and submit a lead on behalf of a user — that's a client you never paid to acquire
- The sites that treat AI crawlers as first-class visitors are building a compounding advantage that will be very hard to replicate later
- Total cost of our entire SEO + AI accommodation stack: 0 EUR beyond development time
Part 1 — Traditional SEO: The Foundation
Before we talk about the novel parts, let's establish the baseline. Here's what we implemented at the traditional SEO layer — not as a tutorial, but to show the scope and explain the reasoning behind each decision.
Structured Data: 8 JSON-LD Schema Types
We emit structured data on every page of the site. Not one generic WebSite schema slapped on the homepage — eight distinct schema types, each placed where it's semantically relevant.
The homepage carries Organization and WebSite schemas. The blog listing page carries CollectionPage. Every blog post carries BlogPosting with full authorship, word count, publish dates, and keyword metadata. Individual service pages carry Service schemas. Navigation pages carry BreadcrumbList. Review data carries AggregateRating and Review schemas.
Why this matters: structured data is what makes you eligible for rich results in Google — star ratings, FAQ dropdowns, article carousels, knowledge panel entries. But it also serves a second purpose that most people overlook: it gives AI models a machine-readable understanding of what your site is and what it offers. When a language model crawls your site and finds well-structured JSON-LD, it can extract facts with higher confidence than parsing prose. Structured data is a bridge between human-readable pages and machine-readable data. We were already building for AI accommodation without knowing it.
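To make this concrete, here is what an Organization schema looks like as JSON-LD, embedded in the page inside a script tag of type application/ld+json. The field values below are illustrative placeholders, not our live markup:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "ELM Labs",
  "url": "https://elmlabs.dev",
  "logo": "https://elmlabs.dev/logo.png",
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "sales",
    "availableLanguage": ["en", "fr"]
  }
}
```

A crawler or language model that parses this gets the organization's name, URL, and supported languages as unambiguous facts, with no prose interpretation required.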
Bilingual Sitemap with hreflang
Our site is fully bilingual — English and French, with every page and every blog post available in both languages. The sitemap reflects this with proper hreflang annotations and x-default fallbacks pointing to the English versions.
This solves two problems. First, it prevents Google from treating the French and English versions of the same page as duplicate content. Second, it ensures users in French-speaking regions see the French version in search results, and vice versa. The x-default fallback catches everyone else.
What most people miss: hreflang isn't just a Google thing. AI crawlers that understand your sitemap can use hreflang to serve the right language version when responding to a user's query. If someone asks Claude a question in French and your site has a French version, a well-structured sitemap makes that connection possible.
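For reference, a bilingual sitemap entry with hreflang alternates and an x-default fallback follows this shape (URLs are illustrative). Note that the xhtml namespace must be declared on the urlset element for the alternate links to validate:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://elmlabs.dev/en/services</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://elmlabs.dev/en/services"/>
    <xhtml:link rel="alternate" hreflang="fr" href="https://elmlabs.dev/fr/services"/>
    <xhtml:link rel="alternate" hreflang="x-default" href="https://elmlabs.dev/en/services"/>
  </url>
</urlset>
```

Each language version lists every alternate, including itself — that reciprocity is what tells Google the versions are translations rather than duplicates.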
Canonical URLs, OpenGraph, and Twitter Cards
Every page emits a canonical URL, OpenGraph metadata, and Twitter card tags. This is standard practice, but the details matter.
Canonical URLs prevent duplicate content issues across locales. OpenGraph metadata controls how your pages appear when shared on LinkedIn, Facebook, and messaging apps — the title, description, and image that people see. Twitter cards do the same for X/Twitter.
The compound effect: when someone shares your blog post and it renders with a clean title, a concise description, and a compelling image — that's free marketing. When the same link renders as a bare URL with no preview, it's a missed opportunity. This isn't technically complex, but it's the kind of detail that separates polished sites from the rest.
Dynamic OG Images
We generate Open Graph images dynamically at the edge for every page, including every blog post. Each image includes the post title, the ELM Labs branding, and visual elements — rendered on-demand, not pre-made in Figma.
This matters because social sharing is a significant traffic source for blog content, and a custom image per post dramatically outperforms a generic logo. Commonly cited social-media benchmarks put the click-through difference between custom preview images and generic thumbnails at 2–3x.
RSS Feeds Per Locale
Two RSS feeds — one for English, one for French — automatically generated from the blog content. Subscribers get new posts in their language. Aggregators and syndication platforms can pull content automatically.
RSS is also a signal to AI systems. Some AI training pipelines and content aggregators use RSS as a primary discovery mechanism. Having clean, well-structured feeds means your content is more likely to be included in the datasets that train and inform AI models.
Content Strategy: 20 Bilingual Blog Posts
We published 20 blog posts across both languages — not thin filler content, but substantial guides ranging from 2,000 to 4,000 words each. Topics span web development costs, AI integration, mobile app pricing, marketplace development, and more.
Each post has full SEO frontmatter: title, slug, locale, alternate slug for the translated version, publish and update dates, author, excerpt, tags, category, SEO-specific title and description, and keywords. The alternateSlug field creates a bidirectional link between language versions — when Google indexes the English post, it knows exactly which French post is the equivalent, and vice versa.
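A sketch of that frontmatter, using the fields named above. The values are placeholders, and the exact key names are an assumption about our schema rather than a published spec:

```yaml
title: "How Much Does a Mobile App Cost in 2026?"
slug: mobile-app-cost-2026
locale: en
alternateSlug: cout-application-mobile-2026  # bidirectional link to the FR version
publishedAt: 2026-01-15
updatedAt: 2026-02-01
author: ELM Labs
excerpt: "A realistic breakdown of mobile app pricing for small businesses."
tags: [mobile, pricing]
category: guides
seoTitle: "Mobile App Development Cost in 2026 | ELM Labs"
seoDescription: "What a mobile app really costs in 2026, with fixed-price examples."
keywords: [mobile app cost, app development pricing]
```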
This isn't a content farm. Every post targets a specific search intent with genuine information. The content is what makes the technical SEO infrastructure worth building.
Part 2 — AI Accommodation: The New Frontier
Everything above is standard practice for a well-built site in 2026. Important, yes. Differentiating, no. Here's where it gets interesting.
The Paradigm Shift
Something fundamental changed in how people discover services and information. A growing share of users no longer start with Google. They start with ChatGPT, Perplexity, Claude, or Gemini.
"Find me a developer who builds mobile apps for small businesses." "What's the best framework for building a marketplace in 2026?" "I need someone to build an AI chatbot for my e-commerce store — who offers this under 10K?"
These queries used to go to Google, where SEO determined who appeared on page one. Now they go to AI assistants, where the answer depends on what the model knows about you — and what it can learn from your site in real-time.
This is not theoretical. It's happening now. If an AI agent can't read your site, can't understand your services, and can't find a way to connect a user with you — you don't exist in that channel. You're invisible to a rapidly growing segment of potential clients.
The industry response has been mixed. Some publishers — notably major media companies — have chosen to block AI crawlers entirely, protecting their content from being used as training data. That's a legitimate choice for content publishers whose product is the content itself.
But for service businesses — studios, agencies, freelancers, SaaS companies — blocking AI crawlers is blocking free leads. The AI isn't competing with you. It's trying to recommend you.
llms.txt — Telling AI Who You Are
The first layer of our AI accommodation stack is llms.txt, a markdown file served at the root of our domain that tells AI agents everything they need to know about us in a format optimized for language models.
The concept comes from an emerging standard at llmstxt.org. The idea is simple: just as robots.txt tells search engine crawlers what they can and can't access, llms.txt tells AI agents who you are, what you do, and how to work with you. But instead of access rules, it's a structured self-description.
Our llms.txt covers our identity, service offerings, pricing ranges, portfolio, blog content, contact information, and technical details. It's written in clean markdown — no HTML, no JavaScript, no navigation chrome — because language models process plain text far more efficiently than rendered web pages.
We also serve an extended version, llms-full.txt, that includes complete blog post listings with excerpts, giving AI agents enough context to accurately recommend specific content when relevant.
The critical design choice: llms.txt isn't a copy of your homepage. It's a document specifically structured for machine consumption. The sections, the hierarchy, the level of detail — all of it is optimized for how LLMs parse and retrieve information, not for how humans browse.
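The llmstxt.org proposal suggests a specific shape: an H1 with the site name, a blockquote summary, then markdown sections of links with short descriptions. A minimal sketch of that structure, with illustrative URLs and copy rather than our production file:

```markdown
# ELM Labs

> Bilingual (EN/FR) development studio building web apps, mobile apps,
> and AI integrations for small businesses. Fixed pricing from 2,000 EUR.

## Services

- [Mobile App Development](https://elmlabs.dev/en/services/mobile): iOS and Android, fixed-price projects
- [AI Integration](https://elmlabs.dev/en/services/ai): chatbots and automation for existing products

## Contact

- [Contact API](https://elmlabs.dev/api/contact): programmatic inquiries; OpenAPI spec available
```

The flat hierarchy and one-line descriptions are deliberate: a model can chunk and retrieve this reliably, where a rendered homepage buries the same facts in layout.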
agent.json — A Discovery Card for AI
The second layer is agent.json, served at /.well-known/agent.json — a structured JSON file that exposes our capabilities, service catalog, and programmatic contact method to any AI agent that knows to look for it.
Think of it as a machine-readable business card. It declares what services we offer, what languages we support, our pricing model, our response time, and — critically — how an AI agent can take action on behalf of a user.
The /.well-known/ directory is an established convention in web standards (RFC 8615). It's where you find apple-app-site-association for iOS deep links, openid-configuration for OAuth, and security.txt for vulnerability reporting. Placing agent.json there follows the same pattern: a well-known location where automated systems can find structured information without guessing.
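There is no ratified schema for agent.json yet, so the shape below is our illustration of the idea rather than a standard: a capability card plus a pointer to the machine-callable contact method. Every value shown is a placeholder:

```json
{
  "name": "ELM Labs",
  "description": "Development studio: web, mobile, AI integration",
  "languages": ["en", "fr"],
  "services": [
    {
      "id": "mobile-app",
      "name": "Mobile App Development",
      "pricing": "fixed",
      "priceFrom": 2000,
      "currency": "EUR"
    }
  ],
  "contact": {
    "type": "api",
    "openapi": "https://elmlabs.dev/openapi.json",
    "endpoint": "https://elmlabs.dev/api/contact"
  }
}
```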
This isn't a widely adopted standard yet — and that's exactly the point. The sites that implement it now are the ones that AI agents will discover first. By the time this becomes conventional wisdom, the early movers will have years of accumulated advantage.
The "AI Books a Call" Moment
Here's where the entire stack comes together, and where the thesis — "let AI use you" — becomes concrete.
Our site exposes a contact API with a full OpenAPI specification. The spec describes the endpoint, the request format, the required and optional fields, example requests and responses, rate limits, and error codes. It follows the OpenAPI 3.1 standard, which means any tool-use capable AI agent can read the spec and understand how to interact with it.
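A trimmed OpenAPI 3.1 sketch of what such a spec declares. Paths, field names, and status codes here are illustrative assumptions, not our production contract:

```yaml
openapi: 3.1.0
info:
  title: ELM Labs Contact API
  version: 1.0.0
paths:
  /api/contact:
    post:
      summary: Submit a project inquiry
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [name, email, message]
              properties:
                name: { type: string }
                email: { type: string, format: email }
                service: { type: string, enum: [web, mobile, ai] }
                message: { type: string }
      responses:
        "201":
          description: Inquiry received
        "429":
          description: Rate limit exceeded
```

An agent that can parse OpenAPI now knows the URL, the verb, the required fields, and the success and failure modes — everything it needs to act without a human reading the docs.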
Now imagine this scenario: a user opens ChatGPT and types, "Find me a developer who builds iOS apps for small businesses, budget under 10K euros."
If ChatGPT has tool-use capabilities and web browsing enabled, here's what can happen:
- It finds our llms.txt and reads that we build mobile apps starting from 2,000 EUR
- It checks our agent.json and sees that we offer Mobile App Development with fixed pricing
- It reads the OpenAPI spec and sees the contact endpoint with its schema
- It could submit a contact request on behalf of the user — name, email, service type, project description
- We receive the lead. The user gets confirmation. Zero ad spend on either side.
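From the agent's side, the last two steps reduce to shaping a payload that matches the spec's schema and POSTing it. A minimal TypeScript sketch, assuming a hypothetical endpoint and field names (they mirror the kind of schema an OpenAPI contact spec would declare, not our actual API):

```typescript
// Payload shape an agent would derive from the (hypothetical) OpenAPI schema.
interface ContactPayload {
  name: string;
  email: string;
  service: "web" | "mobile" | "ai";
  message: string;
}

// Validate and normalize user-provided details into a spec-conformant payload.
function buildContactPayload(input: {
  name: string;
  email: string;
  service: string;
  message: string;
}): ContactPayload {
  const services = ["web", "mobile", "ai"] as const;
  const service = services.find((s) => s === input.service);
  if (!service) throw new Error(`unknown service: ${input.service}`);
  if (!input.email.includes("@")) throw new Error("invalid email");
  return {
    name: input.name.trim(),
    email: input.email.trim().toLowerCase(),
    service,
    message: input.message.trim(),
  };
}

// The actual call — any tool-use agent can do the equivalent.
async function submitLead(payload: ContactPayload): Promise<number> {
  const res = await fetch("https://elmlabs.dev/api/contact", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.status; // a 201 would mean "inquiry received" under the sketched spec
}
```

The validation step matters: an agent that rejects malformed input before calling the endpoint wastes neither its rate limit nor the business's inbox.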
This is not science fiction. Tool-use AI agents exist today. ChatGPT, Claude, and Gemini all support function calling. The only question is whether your site gives them something to call.
Most sites don't. They have a contact form rendered in JavaScript that no AI agent can interact with. They have pricing buried in PDFs. They have service descriptions scattered across marketing pages with no structured representation.
We built ours to be readable, understandable, and actionable by machines. Every AI chatbot in the world becomes a potential referral channel — and we pay nothing for it.
robots.txt — Welcoming AI Crawlers
The third practical layer is robots.txt. While most sites either ignore AI crawlers or actively block them, we explicitly welcome nine AI bots by name — including GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended, and others.
This is a deliberate strategic choice, and it's worth explaining why it goes against the current trend.
Many high-profile sites have blocked AI crawlers in 2025-2026. News organizations don't want their articles used as training data. Social media platforms don't want their user content scraped. This makes sense for businesses whose product is the content itself.
But we're a service business. Our content exists to attract clients, not to be the product. When an AI crawler indexes our site, it's not "stealing" our content — it's learning about our services so it can recommend us to relevant users. Blocking these crawlers would be like turning away a sales rep who works for free.
The explicit allowlisting also signals intent. When a bot checks robots.txt and finds a specific rule welcoming it by name, that's a stronger signal than a generic wildcard allow. It tells the crawler: we know you exist, we want you here, and we've structured our content for you.
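In practice that means explicit per-bot blocks alongside the generic rule. An illustrative excerpt showing the pattern (the user-agent strings are the real crawler names; the full file covers nine bots):

```text
# Generic crawlers
User-agent: *
Allow: /

# AI crawlers, welcomed explicitly by name
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://elmlabs.dev/sitemap.xml
```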
The Metadata Layer
The final piece is the HTML metadata that connects everything. Our site's `<head>` includes a `<link rel="author" type="text/plain" href="/llms.txt">` tag — a standard HTML author link that points to our machine-readable description.
This creates a discovery chain. A crawler that renders or parses our HTML finds the author link. That leads to llms.txt. The llms.txt references the agent.json and OpenAPI spec. The OpenAPI spec describes the contact endpoint. Each layer leads to the next, forming a complete path from discovery to action.
It's the same principle as traditional SEO linking — internal links, sitemap references, canonical URLs all create a web of connections that search engines can follow. We've built the same kind of web for AI agents.
The Full Stack at a Glance
Here's the complete picture — what we built, organized by layer. This is the "save for reference" table.
| Traditional SEO | AI Accommodation |
|---|---|
| 8 JSON-LD schema types | llms.txt + llms-full.txt |
| Sitemap with hreflang (EN/FR) | agent.json discovery card |
| Canonical URLs | OpenAPI spec for contact API |
| Dynamic OG images (edge-generated) | AI bot allowlist (9 bots) |
| RSS feeds (2 locales) | HTML author link → llms.txt |
| OpenGraph + Twitter cards | Programmatic contact endpoint |
| 20 bilingual blog posts | Structured service catalog |
| Meta descriptions + keywords | Machine-readable pricing |
Left column: standard practice in 2026. Any competent web agency delivers this. Right column: almost nobody does this yet. The combination of both is what creates the compounding advantage.
Results and What We Observed
Let's be honest about what we can and can't measure.
The hard numbers
Lighthouse SEO: 100/100. Lighthouse Best Practices: 100/100. These are repeatable, auditable scores. They confirm that the traditional SEO layer is implemented correctly.
Our sitemap correctly indexes all pages across both locales with proper hreflang alternates. Every blog post carries complete structured data. Rich result eligibility is confirmed across all schema types.
The AI accommodation side
There is no "AI SEO score" metric. No equivalent of Lighthouse for measuring how well your site serves AI agents. This is new territory, and anyone claiming to have a definitive measurement is selling something.
What we can observe:
AI chatbots accurately describe our services. When you ask ChatGPT, Claude, or Perplexity about ELM Labs, the responses are accurate and detailed. They know what we build, roughly what we charge, and how to reach us. This isn't because we paid for placement — it's because our site is structured to be readable by these systems.
The contact API works as a programmatic endpoint. Any tool-use agent with access to our OpenAPI spec can submit a contact request. The channel exists and is functional.
The discovery chain is complete. From robots.txt (welcoming the crawler) to llms.txt (describing us) to agent.json (exposing capabilities) to the OpenAPI spec (enabling action) — the path from discovery to conversion is fully machine-traversable.
The honest framing
We're not claiming this has generated thousands of AI-sourced leads. It's early. The ecosystem of tool-use agents browsing the web and taking actions on behalf of users is still developing.
But consider the analogy: building a mobile-responsive website in 2010. Mobile traffic was a fraction of total web traffic. Most businesses said "our customers use desktops." The ones that built mobile-first anyway had a massive advantage when mobile traffic overtook desktop a few years later.
AI-mediated discovery is on the same trajectory. The businesses that build for it now — when it costs nothing extra and the competition is near zero — will be positioned to capture the value when it becomes the dominant channel.
The cost
The entire stack — traditional SEO and AI accommodation combined — cost 0 EUR in external tools, services, or subscriptions. No SEO tools. No AI platforms. No third-party integrations. Everything is built into the application layer using standard web technologies and served from the same infrastructure as the rest of the site.
The only cost is development time. And because we're a development studio, that's what we do anyway.
What This Means for Your Business
We're not going to give you a step-by-step implementation tutorial. Partly because the specifics of our implementation are what we offer as a service, and partly because the strategic framework matters more than the code.
Here's how to think about it:
Traditional SEO is table stakes
If your site doesn't have structured data, a proper sitemap, canonical URLs, and clean metadata — fix that first. This is the foundation. Without it, you're invisible to both humans and machines. Every credible web agency should deliver this as a baseline, and if yours doesn't, that's a problem.
AI accommodation is the new frontier
The competitive advantage in 2026 and beyond isn't more blog posts or better keywords. It's being the site that AI agents can read, understand, and act on — while your competitors are still invisible or actively blocking these systems.
Where to start
If you're a service business and you want to move in this direction, the logical progression is:
- Start with llms.txt — it's the simplest entry point. A well-structured markdown file that describes your business for AI consumption. No API, no spec, no complex implementation. Just clear, machine-optimized content at a well-known URL.
- Add structured discovery — an agent.json or similar discovery card that exposes your capabilities in a structured format. This moves you from "readable" to "understandable."
- Expose programmatic endpoints — if you want AI agents to take action (submit inquiries, check availability, request quotes), you need a documented API. An OpenAPI spec makes it possible for tool-use agents to interact with your business without a human intermediary.
The strategic calculus
If you're a content publisher — a news site, a media company, a platform whose product is the content — blocking AI crawlers may make sense. You're protecting your core asset.
If you're a service business — a studio, an agency, a SaaS company, a freelancer — you're making a different calculation. Your content exists to generate leads, not to be the product. An AI agent that reads your content and sends you a qualified lead is the best marketing channel you've never paid for.
The businesses that understand this distinction early are the ones building the compounding advantage.
We built this stack for ourselves. We build it for clients too. If you want the same SEO + AI accommodation infrastructure on your site — the full stack, from structured data to programmatic contact endpoints — check our pricing or get in touch directly.
The Bottom Line
SEO in 2026 is a two-front game.
Front one is traditional: be discoverable by humans through search engines. Structured data, sitemaps, metadata, quality content. This is well-understood and widely practiced. If you're not doing it, you're behind. If you are doing it, you're at parity.
Front two is new: be discoverable, readable, and actionable by AI agents. llms.txt, agent.json, OpenAPI specs, programmatic endpoints, AI crawler allowlisting. Almost nobody is doing this. The competition is near zero, the cost is near zero, and the potential upside is enormous.
The businesses that win both fronts are the ones that turn every AI chatbot in the world into a free distribution channel. Not by gaming the system or buying placement — but by being genuinely useful to machines the same way you're genuinely useful to humans.
Don't just use AI. Let AI use you.
When an AI agent can find you, understand what you do, check your pricing, and submit a lead on behalf of a user — you've turned every AI chatbot on the planet into a sales rep that works 24/7, speaks every language, and costs you nothing.
That's the part of SEO nobody talks about. And now you know.
Ready to move forward?
30 minutes, no commitment. Let's talk.
Want this SEO + AI stack on your site? Let's talk