Overview

GEO Evaluator (Generative Engine Optimization Evaluator) is an analysis tool that examines websites across five key categories to determine how well they are optimized for understanding by large language models (LLMs). It provides actionable insights and recommendations to improve your brand’s visibility and accuracy in AI-generated responses.

  • Structural HTML (25% weight): Analyzes semantic markup, heading hierarchy, and content landmarks
  • Content Organization (30% weight): Evaluates paragraph structure, scannability, and FAQ formatting
  • Token Efficiency (20% weight): Measures content-to-markup ratio and information density
  • LLM Technical (15% weight): Checks llms.txt, structured data, and meta optimization
  • Accessibility (10% weight): Reviews alt text, link context, and language clarity

Getting Started

1. Clone the repository

git clone https://github.com/Airbais/intent-tools.git
cd intent-tools
2. Install dependencies

Navigate to the geoevaluator directory and install required packages:

cd geoevaluator
pip install -r requirements.txt
3. Edit the config file

Modify config.yaml to suit your brand and website. Most of the defaults are fine to start with, but you can tweak them as needed.
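
If you want a quick starting point, a minimal config might look like the sketch below. The field names follow the configuration examples shown later in this document; the URL and name are placeholders to replace with your own site details.

website:
  url: "https://example.com"
  name: "Your Brand"
  max_pages: 50
  crawl_depth: 3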

Usage

# Analyze a website with default settings
python geoevaluator.py --url https://example.com --name "Your Brand"

# Run analysis and open dashboard automatically
python geoevaluator.py --url https://example.com --name "Your Brand" --dashboard

Understanding Your Score

1. Overall Score

Your website receives a score from 0-100 with a letter grade:

  • 90-100: Excellent (A)
  • 80-89: Good (B)
  • 70-79: Fair (C)
  • 60-69: Poor (D)
  • Below 60: Very Poor (F)
2. Category Breakdown

Each category contributes to your overall score based on its weight. Focus on improving categories with:

  • Low scores
  • High weights
  • Many affected pages
3. Recommendations

Prioritized action items show:

  • What needs fixing
  • Why it matters
  • Which pages are affected
  • Expected impact
4. Page-Level Analysis

Individual page scores help identify:

  • High-performing pages to use as templates
  • Problem pages dragging down your average
  • Patterns in scoring across page types
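
To make the scoring mechanics concrete, here is a minimal Python sketch that combines hypothetical category scores using the weights listed in the Overview and maps the result to a letter grade using the bands above. The simple weighted average is an assumption for illustration; the tool's exact aggregation may differ.

# Hypothetical category scores (0-100) for a single analysis run.
category_scores = {
    "structural_html": 72,
    "content_organization": 85,
    "token_efficiency": 64,
    "llm_technical": 40,
    "accessibility": 90,
}

# Category weights as listed in the Overview.
weights = {
    "structural_html": 0.25,
    "content_organization": 0.30,
    "token_efficiency": 0.20,
    "llm_technical": 0.15,
    "accessibility": 0.10,
}

# Weighted average across categories (assumed aggregation).
overall = sum(category_scores[c] * weights[c] for c in weights)

# Letter grade using the bands from "Understanding Your Score".
if overall >= 90:
    grade = "A"
elif overall >= 80:
    grade = "B"
elif overall >= 70:
    grade = "C"
elif overall >= 60:
    grade = "D"
else:
    grade = "F"

print(f"Overall: {overall:.1f} ({grade})")  # e.g. Overall: 71.3 (C)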

Dashboard Integration

The GEO Evaluator seamlessly integrates with the master dashboard:

GEO Evaluator Dashboard View

Dashboard Features

  • Analysis Summary: Side-by-side view of key metrics and industry benchmarks in a responsive layout
  • Visual Indicators: Color-coded scores and progress bars for quick assessment
  • Recommendations: Expandable recommendations showing affected pages with direct links
  • Page Scores Table: Sortable table of individual page scores across all categories

Common Optimization Tips

Quick Wins: Start with these high-impact, easy-to-implement changes:

  1. Add an llms.txt file to your website root (see the sketch after this list)
  2. Replace generic <div> tags with semantic HTML5 elements
  3. Ensure every page has a proper H1 heading
  4. Add descriptive alt text to all images
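
For item 1, llms.txt is a plain Markdown file served from the site root (for example https://example.com/llms.txt). The outline below is an illustrative sketch based on the common convention of a title, a short summary, and sections of annotated links; adapt it to your own site.

# Your Brand

> One-sentence summary of what the site offers and who it is for.

## Documentation

- [Getting Started](https://example.com/docs/getting-started): Setup and first steps
- [Pricing](https://example.com/pricing): Plans and feature comparison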

Priority Recommendations

Critical for LLM Optimization: These issues significantly impact how LLMs understand your content:

  • Missing llms.txt file
  • No structured data (Schema.org); see the markup sketch after this list
  • Poor content-to-markup ratio (<30%)
  • Lack of semantic HTML usage
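
To illustrate the last two items, here is a before/after sketch: generic <div> wrappers replaced with semantic HTML5 elements, plus a minimal Schema.org JSON-LD block. The organization name and URL are placeholders, and the snippet is a starting point rather than a complete schema.

<!-- Before: generic wrappers give an LLM little structural signal -->
<div class="top"><div class="nav">...</div></div>
<div class="main"><div class="post">...</div></div>

<!-- After: semantic elements plus minimal Schema.org JSON-LD -->
<header>
  <nav>...</nav>
</header>
<main>
  <article>
    <h1>Page title</h1>
    <p>Content...</p>
  </article>
</main>
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Brand",
  "url": "https://example.com"
}
</script>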

Content Organization Best Practices

Optimize for Scannability:

  • Keep paragraphs under 150 words
  • Use bullet points and numbered lists
  • Include FAQ sections with clear Q&A formatting (see the example after this list)
  • Break content with descriptive subheadings
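
For the FAQ point, one simple pattern (an illustrative sketch, not a format required by the tool) is each question as a subheading followed by a short answer paragraph:

<section id="faq">
  <h2>Frequently Asked Questions</h2>

  <h3>What does GEO Evaluator measure?</h3>
  <p>It scores a website across five weighted categories that affect how well LLMs can read and reuse the content.</p>

  <h3>How often should I re-run the analysis?</h3>
  <p>Weekly or monthly runs are enough to track improvements and catch regressions.</p>
</section>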

Configuration Examples

E-commerce Site Configuration

website:
  url: "https://shop.example.com"
  name: "Example Shop"
  max_pages: 100
  excluded_paths:
    - "/cart"
    - "/checkout"
    - "/account"

analysis:
  weights:
    structural_html: 0.20      # Less critical for product pages
    content_organization: 0.35  # Very important for products
    token_efficiency: 0.20      
    llm_technical: 0.20        # Critical for product schema
    accessibility: 0.05         

Content-Heavy Blog Configuration

website:
  url: "https://blog.example.com"
  name: "Example Blog"
  max_pages: 200
  crawl_depth: 4

analysis:
  weights:
    structural_html: 0.25      
    content_organization: 0.40  # Most important for articles
    token_efficiency: 0.15      # Less critical for long-form
    llm_technical: 0.15        
    accessibility: 0.05         

Troubleshooting

API Reference

Command Line Options

geoevaluator.py [config_file] [options]

Positional arguments:
  config_file           Configuration file path (YAML format)

Website options:
  --url URL            Website URL to analyze
  --name NAME          Website/brand name for analysis
  --max-pages N        Maximum number of pages to analyze (default: 50)
  --crawl-depth N      Maximum crawl depth (default: 3)

Output options:
  --output-dir DIR     Output directory for results (default: ./results)
  --dashboard          Generate dashboard output and launch dashboard
  --formats FORMAT     Output formats: json, html, dashboard

Crawling options:
  --delay SECONDS      Delay between requests (default: 1.0)
  --timeout SECONDS    Request timeout (default: 30)

Utility options:
  --verbose, -v        Enable verbose logging
  --dry-run           Validate configuration without running
  --version           Show version information
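
For example, a run that starts from a config file and overrides a few of the options listed above:

python geoevaluator.py config.yaml --max-pages 100 --output-dir ./results --dashboard --verbose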

Output File Structure

results/
└── YYYY-MM-DD/
    ├── dashboard-data.json         # Dashboard integration data
    ├── detailed_scores.json        # Complete analysis results
    ├── geo_analysis_report.html    # Human-readable HTML report
    └── crawl_log.json             # Detailed crawl information

Integration

CI/CD Pipeline Integration

#!/bin/bash
# ci-geo-check.sh

# Run GEO analysis
python geoevaluator.py config.yaml --formats json

# Extract score
SCORE=$(jq '.overall_score.total_score' results/*/dashboard-data.json)

# Fail build if score is too low
if (( $(echo "$SCORE < 70" | bc -l) )); then
  echo "❌ GEO score $SCORE is below threshold of 70"
  exit 1
fi

echo "✅ GEO score $SCORE meets threshold"

Python Integration

import subprocess
import json
from pathlib import Path

def analyze_website(url, name):
    """Run GEO analysis and return results."""
    cmd = [
        'python', 'geoevaluator.py',
        '--url', url,
        '--name', name,
        '--formats', 'json'
    ]
    
    subprocess.run(cmd, check=True)
    
    # Find latest results
    results_dir = Path('results')
    latest_dir = max(results_dir.iterdir(), key=lambda p: p.stat().st_mtime)
    
    with open(latest_dir / 'dashboard-data.json') as f:
        return json.load(f)

# Example usage
results = analyze_website('https://example.com', 'Example Brand')
print(f"Score: {results['overall_score']['total_score']}")
print(f"Grade: {results['overall_score']['grade']}")

Best Practices

  • Regular Monitoring: Run weekly or monthly analyses to track improvements and catch regressions
  • Focus on High-Impact Items: Address high-priority recommendations first for maximum ROI
  • Test Changes: Re-run the analysis after making changes to verify improvements
  • Compare Competitors: Analyze competitor sites to understand industry benchmarks

Support

Need Help?

  • Check the troubleshooting section above
  • Review the detailed README.md in the tool directory
  • Submit issues to the project repository
  • Contact the development team for assistance