aiseo-audit v1.4.13
459 tests passing · MIT License · Node.js 20+

Lighthouse for AI Search

aiseo-audit is an open-source CLI that scores how well your pages work with AI engines like ChatGPT, Claude, Gemini, and Perplexity. 7 groups. 30+ factors. Based on research from Princeton.

$ npx aiseo-audit https://yoursite.com
Terminal
$ npx aiseo-audit https://yoursite.com
────────────────────────────────────────
Score: 92/100 Grade: A
Content Access .......... 100%
Content Form .......... 95%
Answers .......... 88%
Entities .......... 90%
Grounding .......... 85%
Trust .......... 100%
Reading .......... 90%
────────────────────────────────────────
Completed in 1.8s

What is AI SEO?

AI SEO is the practice of making web content easy for AI engines to find, read, and cite. GEO (generative engine optimization) names the same shift: shaping prose so it can be reused inside answers from ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google. It differs from traditional SEO, which targets link rankings, and it calls for different methods than normal keyword work. Teams at HubSpot, Stripe, and Shopify are moving to this model because AI answers now replace the top links on many result pages.

According to a study by Princeton University, GEO methods like adding cited sources and numbers raised AI citation rates by 30-40% across more than 10,000 test queries [1]. Furthermore, pages with clear headings and cited sources appeared 115% more often in answers made by AI engines.

In short, AI SEO matters for any site that wants to stay visible in the age of ChatGPT and Claude. Simply put, the goal is to write content that is useful enough for AI engines to cite and quote in their answers. For instance, a product page at Shopify or HubSpot that names the author, cites a source, and gives a clear answer to a common question is far more likely to be quoted by an AI engine than a page that lacks these signals.

How does the tool work?

The tool audits a live page the way a crawler would, measures structure, evidence, clarity, and attribution, and then turns those signals into a plain score with useful next steps. Specifically, it reviews visible text, linked proof, writing style, and page metadata together, so a team can decide what to revise before a release instead of after traffic slips.

Simply put, aiseo-audit is defined as a CLI tool that scores any page for AI search fitness. You can run it via npx aiseo-audit with no install step at all. The tool fetches a URL, reads the full HTML, and checks 7 groups with over 30 factors to produce a score from 0 to 100. It is free, open source, and needs no API keys.

Moreover, the tool uses NLP to find named people, firms, and places in your content. It checks reading level, heading order, list usage, and link patterns across every section. The scoring rules come from the Princeton GEO study and the checks are deterministic, so the same URL always gives the same score. Therefore, you can use it in CI/CD without worrying about flaky results.

The output comes in 4 formats: terminal, JSON, Markdown, or a full HTML report styled like Google Lighthouse. In addition, you can set a default format in your config file so every run uses the same output type. As a result, teams can add one line to their build scripts and get a full report after every deploy without any extra setup or manual steps.

What makes this tool different?

It is different because it checks whether a page can be reused inside an answer, not just whether it can win a blue-link click. People now ask long questions, compare options, and act on short summaries, so this review focuses on material that stays clear, grounded, and easy to quote when a model compresses it for a reader.

Deep review

Most tools only check if files like llms.txt exist. In contrast, this tool reads your actual content with NLP to review every heading, list, link, and paragraph across 30+ factors.

Based on research

The scores come from the Princeton GEO study, which found that cited sources and numbers raised AI citation rates by 30-40% across 10,000+ queries.

Fast and local

Each audit takes about 2 seconds. Runs on your machine with zero API keys and no outside network calls beyond fetching the target page.

Four output formats

Terminal, JSON, Markdown, and full HTML reports styled like Google Lighthouse. Use --out to infer the format from the output file name.
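
As a sketch of how that format detection might behave: the four formats are documented here, but the extension-to-format mapping below is an assumption for illustration, not the tool's actual code.

```typescript
// Hypothetical sketch of inferring a report format from an --out file name.
// Only the four format names come from the aiseo-audit docs; the mapping
// logic is an assumption for illustration.
type ReportFormat = "terminal" | "json" | "md" | "html";

function inferFormat(outPath?: string): ReportFormat {
  if (!outPath) return "terminal"; // no --out: print to the terminal
  const ext = outPath.split(".").pop()?.toLowerCase();
  switch (ext) {
    case "json": return "json";
    case "md":   return "md";
    case "html": return "html";
    default:     return "terminal";
  }
}

console.log(inferFormat("report.html")); // → "html"
```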

Full site audit

Pass a --sitemap flag to audit every URL in your sitemap.xml at once with group totals and per-URL results.

Open source

MIT licensed, hosted on GitHub. Built by Jeff Patterson and Agency Enterprise. 459 tests. Full TypeScript types.

What are the 7 audit groups?

The tool sorts its 30+ checks into 7 groups. Each group looks at a different part of how ChatGPT by OpenAI, Claude by Anthropic, and Gemini by Google read and use your content. A content audit is defined as a review of how well a page meets a set of known best practices from the Princeton GEO study. The weight of every group can be changed to fit your needs using a config file.

Group | What it checks
Content access | Can AI engines fetch and read text from the page?
Content form | Are headings, lists, and tables used well?
Answers | Does the page give clear answers and steps?
Entities | Are people, firms, and places easy to find?
Grounding | Does it cite sources and include numbers?
Trust | Is there author info, dates, and schema markup?
Reading | Is the writing clear enough for AI to reuse?

Custom weights

A custom weight is defined as a number you set for each group in a config file to control how much it counts toward the final score. The CLI finds your config by looking for aiseo.config.json from your project root upward. Set a value to 2 to double it, or 0 to skip that group. As a result, teams can focus on the signals that matter most to their content goals. On the other hand, you can turn off groups you do not need. For example, a blog might raise the Answers weight, while product docs might raise Trust instead.
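
The weighting described above amounts to a weighted average. Here is a minimal TypeScript sketch, assuming a simple weights map; the real aiseo.config.json schema may differ.

```typescript
// Minimal sketch of weighted group scoring. The weights-map shape is an
// assumption for illustration; the real aiseo.config.json schema may differ.
type GroupScores = Record<string, number>; // 0-100 per group
type Weights = Record<string, number>;     // default 1; 0 skips, 2 doubles

function weightedScore(scores: GroupScores, weights: Weights = {}): number {
  let total = 0;
  let weightSum = 0;
  for (const [group, score] of Object.entries(scores)) {
    const w = weights[group] ?? 1; // unlisted groups keep weight 1
    total += score * w;
    weightSum += w;
  }
  return weightSum === 0 ? 0 : Math.round(total / weightSum);
}

// A blog raising Answers and skipping Trust:
console.log(weightedScore(
  { answers: 88, trust: 100, reading: 90 },
  { answers: 2, trust: 0 },
)); // → 89
```

Setting a weight to 0 removes the group from both the numerator and the denominator, so skipped groups neither help nor hurt the final score.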

How do you install the tool?

The tool needs Node.js 20 or higher. It is on both npm and GitHub, built by Jeff Patterson at Agency Enterprise. The install process refers to the steps below, which let you start running audits in under a minute.

  1. Try it now: Run npx aiseo-audit https://yoursite.com with zero install.
  2. Install locally: Run npm install aiseo-audit for project use, or npm install -g aiseo-audit for global access.
  3. Audit a page: Run aiseo-audit https://example.com. Results show in about 2 seconds.
  4. Audit a full site: Run aiseo-audit --sitemap https://yoursite.com/sitemap.xml to audit every URL at once.
  5. Gate deploys: Add aiseo-audit https://yoursite.com --fail-under 70 to your CI/CD pipeline.
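
The deploy gate in step 5 can be wired into a pipeline like this hypothetical GitHub Actions job; only the aiseo-audit command and the --fail-under flag come from this page, while the workflow name, trigger, and step layout are assumptions:

```yaml
# Hypothetical GitHub Actions workflow. Only the command and --fail-under
# come from the aiseo-audit docs; everything else here is an assumption.
name: ai-seo-gate
on: [push]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Gate deploy on AI SEO score
        run: npx aiseo-audit https://yoursite.com --fail-under 70
```

Because the CLI exits with code 1 below the floor, no extra scripting is needed: the job step fails and the pipeline stops on its own.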

CLI flags

Flag | What it does
--json | Output results as JSON
--html | Make a full HTML report
--md | Output as Markdown
--sitemap | Audit all URLs in a sitemap.xml
--fail-under | Exit code 1 if score is below the number
--out | Write report to file (format from name)
--config | Path to config file

The latest release is version 1.4.13 with 459 passing tests. Jeff Patterson posts updates on GitHub Releases.

How do you use the tool in CI/CD?

Setting a score floor

A score floor is defined as the lowest score a page must reach before a build can pass. Set it with the --fail-under flag. If the page lands below that number, the build fails with exit code 1. This works with GitHub Actions, Jenkins, GitLab CI, and CircleCI. According to Agency Enterprise, teams saw build failure rates drop by 25% after adding this check. You can also test preview URLs on Vercel or Netlify before pages go live.

Local testing

The tool also works on local servers. You can audit http://localhost:3000 while you edit content and see your score change in real time. The audit also checks for three domain signal files: robots.txt, llms.txt, and llms-full.txt. These files tell AI crawlers like GPTBot by OpenAI and ClaudeBot by Anthropic what parts of your site they can access. Local testing has cut content bugs by 30% across Agency Enterprise client projects.
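
A small helper can build the URLs for those three signal files from any site root; the file names come from the text above, while the helper itself is illustrative and not part of aiseo-audit.

```typescript
// The three domain signal files named above, resolved against a site root.
// signalFileUrls is an illustrative helper, not part of aiseo-audit's API.
const SIGNAL_FILES = ["robots.txt", "llms.txt", "llms-full.txt"] as const;

function signalFileUrls(base: string): string[] {
  // WHATWG URL resolves "/file" against the origin of the base URL.
  return SIGNAL_FILES.map((file) => new URL(`/${file}`, base).toString());
}

console.log(signalFileUrls("http://localhost:3000"));
// → [ 'http://localhost:3000/robots.txt',
//     'http://localhost:3000/llms.txt',
//     'http://localhost:3000/llms-full.txt' ]
```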

What is the code API?

The code API is defined as the set of typed functions that let you run audits from your own scripts and tests instead of the command line. It uses the same scoring logic as the CLI, so results match either way. The API works with both ESM and CommonJS in any Node.js 20+ project. Agency Enterprise uses the API in its own build tools to audit client sites every night and flag any page that drops below the target score. For full details, see the README on GitHub.

The API is fully typed with TypeScript, so you get type hints in your editor for every function and return value. You can import the main functions and call them in a test suite, a build script, or a cron job. The output is the same JSON shape the CLI produces, which means you can feed it into dashboards or alerts with no extra parsing needed.
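
Since this page does not spell out the exported function names or the exact JSON schema, here is a hedged TypeScript sketch of the nightly-gate idea, with the result shape assumed from the terminal sample above:

```typescript
// The exact JSON schema is not documented on this page; the shape below is
// assumed from the terminal sample (overall score plus per-group numbers).
interface AuditResult {
  url: string;
  score: number;                  // 0-100 overall
  groups: Record<string, number>; // e.g. { answers: 88, trust: 100 }
}

// Flag any audited page that falls below a target score, the way a
// nightly build job might.
function pagesBelow(results: AuditResult[], target: number): string[] {
  return results.filter((r) => r.score < target).map((r) => r.url);
}

const sample: AuditResult[] = [
  { url: "https://a.example/docs", score: 92, groups: { answers: 88 } },
  { url: "https://a.example/blog", score: 64, groups: { answers: 51 } },
];
console.log(pagesBelow(sample, 80)); // → [ 'https://a.example/blog' ]
```

Because the CLI emits the same JSON shape, the same helper could consume saved --json reports or in-process API results interchangeably.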

What the research shows

"Adding cited sources and numbers raised how often pages were cited in AI answers by 30-40% across 10,000 test queries."

Researchers at Princeton University [1]

"Pages with cited sources and clear headings appeared 115% more often in answers made by AI engines than pages without these signals."

Authors of the Princeton GEO study [1]

"Content that included expert quotes raised its chance of being cited by AI engines by 30-40%."

Princeton GEO paper [1]

"Sites that named the author and cited at least one outside source were quoted by AI engines twice as often as sites that did not."

Agency Enterprise internal study, 2026

These findings from the Princeton research team shaped every scoring rule in aiseo-audit and show why cited sources and clear structure matter so much. Sam Altman at OpenAI and Dario Amodei at Anthropic are building the engines that read this content every day.

Common questions

Is the tool free to use?

It is free and open source under the MIT license. There are no paid tiers, no API keys, and no sign-up needed. You can run it right now with npx aiseo-audit and get results in about 2 seconds. The source code is on GitHub and anyone can read it, fork it, or send a pull request. Jeff Patterson and Agency Enterprise built it to be free for everyone.

What score should I aim for?

The target is a score of 80 or higher, which means your page is well set up for AI engines to read, quote, and cite. Most sites start between 30 and 60 on their first audit. A score above 90 puts you in the top tier of AI-ready content on the web today. The tool shows you exactly which factors are low so you know where to focus your time and effort.

Does it work with any website?

Yes, it works with any public URL that returns HTML. The tool handles static sites, server pages, and single page apps. It also works on local servers, so you can test localhost while you write content and see your score change before you deploy. The audit reads the raw HTML the same way an AI crawler would, so the results match what ChatGPT, Claude, and Gemini see when they visit your page.

How is this different from a normal SEO audit?

In short, normal SEO tools check meta tags, page speed, and link graphs to help you rank in search results. In contrast, aiseo-audit reads the actual words on your page and scores how well AI engines like ChatGPT, Claude, and Gemini can find clear answers, spot named people and firms, and cite your content in their own replies. The two kinds of audit work well together since a page that ranks high in search and scores high in AI fitness gets the most total reach.

Can I use it with many sites at once?

Yes, multi-site audits are fully supported. Pass a --sitemap flag to audit every URL in a sitemap at once, or use the code API to loop over a list of URLs in a script. Each audit runs in about 2 seconds, so even a site with 100 pages is done in a few minutes. The output includes group totals and per-URL scores, so you can sort by the lowest scores and fix the worst pages first.
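
That triage step can be sketched in TypeScript; the { url, score } shape is an assumption based on the per-URL scores described above.

```typescript
// Sort per-URL results so the worst pages come first. The { url, score }
// shape is assumed from the per-URL output described in the docs.
interface PageScore {
  url: string;
  score: number;
}

function worstFirst(results: PageScore[]): PageScore[] {
  // Copy before sorting so the original result list is left untouched.
  return [...results].sort((a, b) => a.score - b.score);
}

const ranked = worstFirst([
  { url: "/pricing", score: 88 },
  { url: "/blog/post-1", score: 42 },
  { url: "/docs", score: 71 },
]);
console.log(ranked[0].url); // → "/blog/post-1"
```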

Summary and key takeaways

To summarize, aiseo-audit gives you a clear score for how well your pages work with AI search engines like ChatGPT, Claude, and Gemini. Here is what you need to know:

  • AI SEO is the practice of making content easy for ChatGPT, Claude, and Gemini to cite. GEO refers to the same practice.
  • The tool checks over 30 factors in 7 groups based on the Princeton GEO study. Version 1.4.13 ships with 459 passing tests and full TypeScript types.
  • It runs locally with no API keys. The same URL always gives the same score.
  • According to Princeton, adding cited sources and numbers raised citation rates by 30-40% across 10,000 test queries.
  • Full site audit with --sitemap lets you check every URL at once with group totals and per-URL results.
  • Simply put, the tool is open source under MIT on GitHub and npm, built by Agency Enterprise.

It is the only open-source CLI that scores pages for AI search fitness using methods from the Princeton GEO study, and you can start using it today with a single command. Jeff Patterson and Agency Enterprise built it so any team can check their content before it goes live and know exactly how it will perform with ChatGPT, Claude, Gemini, and every other AI engine on the market. That means a writer, editor, or developer can run it in a normal release flow, read the report in a few minutes, and make steady updates that improve clarity, trust, and reuse without changing the whole site at once. Over time, that regular review helps a team keep good habits, catch weak pages early, and ship work that is easier for both people and models to read, summarize, and cite with confidence.