
What Your AuditTags Report Actually Means (And What To Do Next)

A technical interpretation guide explaining severity levels, health scores, engine modes, and consent findings in your AuditTags report. Learn how to read, prioritize, and act on your diagnostic results.

AuditTags Engineering
Analytics Diagnostics & Verification Systems
12 min read

An AuditTags report tells you what your tracking implementation is doing—not what it should be doing, and not what your configuration files say it does. This distinction matters. Configuration can look correct while behavior fails silently.

This guide explains how to interpret every section of your report, what the severity levels mean in business terms, and how to decide what to fix first.

What an AuditTags Report Is—and Is Not

An AuditTags report is a diagnostic snapshot. The engine loads your site in a real browser, executes JavaScript, monitors network requests, and observes what actually happens when pages render. It captures evidence of tracking behavior at a specific moment in time.

What the report provides:

  • Detection of GA4 measurement IDs, GTM containers, and consent management platforms
  • Evidence of tracking requests that fired (or failed to fire)
  • Identification of patterns that indicate data quality issues
  • Severity-ranked findings with fix guidance

What the report does not provide:

  • Continuous monitoring (each report reflects one scan)
  • Automatic fixes or code changes
  • Legal compliance certification
  • Guarantee that all possible issues were detected

The report is a technical diagnostic. It tells you what the engine observed. Interpreting that observation for your specific business context—and deciding what to do about it—remains your responsibility.

Severity Levels Explained

Findings are ranked by severity. This ranking reflects the likely impact on your data, not the difficulty of fixing the issue.

P0 — Critical

P0 findings indicate tracking is broken in a way that corrupts data or creates significant risk. These are not theoretical concerns—they represent active problems affecting your analytics right now.

Trigger patterns and what they mean:

  • CSP blocking analytics scripts: Your Content-Security-Policy prevents GA4/GTM from loading. No data reaches Google.
  • Tags firing before consent: Tracking requests detected before user interacted with consent banner. Data collected may be invalid.
  • Tags firing when consent denied: Tracking requests detected after user explicitly refused consent. Tags are ignoring user preference.
  • Consent default missing: CMP present but no gtag('consent', 'default') configured. Tags fire in full tracking mode before consent obtained.
  • dataLayer after GTM: GTM container loads before dataLayer exists. Early events are lost.

Action required: P0 findings demand immediate attention. Revenue data, attribution, or consent behavior is provably compromised. Fix before making business decisions based on affected data.
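Two of the P0 patterns ("consent default missing" and "dataLayer after GTM") share one fix: ordering in the page head. A minimal sketch using Google's documented Consent Mode API follows; the all-denied defaults are an illustrative policy choice, not a recommendation, and globalThis stands in for window so the sketch also runs outside a browser.

```javascript
// Runs in <head>, before the GTM container snippet.
// globalThis stands in for window so the sketch also runs outside a browser.
const w = globalThis;

// 1. Create the dataLayer first so no early events are lost.
w.dataLayer = w.dataLayer || [];
function gtag() { w.dataLayer.push(arguments); }

// 2. Queue a consent default before any tag can fire (Consent Mode v2 keys).
gtag('consent', 'default', {
  analytics_storage: 'denied',
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
});

// 3. Only after both of the above should the GTM loader snippet appear.
```

If this block runs after the GTM loader instead of before it, both P0 patterns above will reappear on the next scan even though the code itself is correct.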

P1 — High Priority

P1 findings indicate degraded data quality. Tracking fires, but the data is unreliable—inflated, incomplete, or inconsistent.

Trigger patterns and what they mean:

  • Multiple GA4 measurement IDs: Two or more GA4 IDs detected. Every event fires once per ID, inflating metrics by at least 2x.
  • Missing purchase event: Shopify store detected but no GA4 purchase event fired. Revenue tracking is broken.
  • Missing add_to_cart event: No add-to-cart tracking. Funnel analysis and remarketing audiences are incomplete.
  • Consent signals inconsistent: CMP detected but Google consent signals missing or don't match consent state.
  • Checkout blocked: Engine could not reach checkout page. Checkout tracking cannot be verified.

Action recommended: P1 findings should be prioritized within your normal development cycle. Business decisions based on this data are compromised, but the tracking infrastructure is not completely broken.
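The "multiple GA4 measurement IDs" pattern can be reproduced by hand from a report's evidence section: collect the GA4 collect-endpoint URLs and count distinct tid parameters. A hypothetical sketch; the function name and input shape are assumptions, though the /g/collect endpoint and tid parameter are standard GA4 behavior:

```javascript
// Extract distinct GA4 measurement IDs from captured request URLs.
// GA4 hits go to a /g/collect endpoint with the ID in the tid parameter.
function distinctGa4Ids(requestUrls) {
  const ids = new Set();
  for (const url of requestUrls) {
    const match = url.match(/[?&]tid=(G-[A-Z0-9]+)/);
    if (match) ids.add(match[1]);
  }
  return [...ids];
}
```

More than one ID in the result is the duplicate-tracking signal: each page_view and purchase is being reported once per ID.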

P2 — Medium Priority

P2 findings indicate suboptimal configuration. Core tracking works, but data granularity or accuracy is reduced.

Trigger patterns and what they mean:

  • Missing view_item event: Product page views not tracked. Product performance analysis is limited.
  • Missing begin_checkout event: Checkout initiation not tracked. Funnel analysis is incomplete.
  • Duplicate pixels (non-GA4): Facebook, TikTok, or other pixels firing multiple times. Platform-specific metrics inflated.
  • Timing anomalies: Events fire, but timing suggests potential race conditions.

Action situational: P2 findings are worth fixing if you rely on granular funnel analysis or platform-specific attribution. They can be deferred if core revenue tracking is your priority.

Info

Info findings are observations, not issues. They provide context about what was detected.

Examples: "Server-side GTM suspected," "CMP detected: OneTrust," "Shopify platform detected."

No action required. These entries help you understand what the engine found on your site.

How the Health Score Works

The health score is built from four components that together total a maximum of 100 points:

  • GA4 Setup (max 30): At least one GA4 property detected
  • GTM Setup (max 30): At least one GTM container detected
  • Findings (max 30): Base 30, reduced by finding severity
  • Tracking Activity (max 10): Network requests to analytics endpoints observed

Findings reduce the "Findings" component based on severity:

  • P0: -15 points per finding
  • P1: -10 points per finding
  • P2: -5 points per finding

A site with GA4, GTM, no findings, and active tracking receives 100. A site missing GTM but otherwise clean receives 70.
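That arithmetic can be sketched as a single function. The field names are assumptions; the weights and deductions come from the tables above:

```javascript
// Recompute the health score from the four components described above.
function healthScore({ hasGa4, hasGtm, findings, trackingActive }) {
  const deduction = { P0: 15, P1: 10, P2: 5 };
  const findingsPenalty = findings.reduce(
    (sum, f) => sum + (deduction[f.severity] || 0), 0);
  return (hasGa4 ? 30 : 0)
       + (hasGtm ? 30 : 0)
       + Math.max(0, 30 - findingsPenalty)  // Findings component floors at 0
       + (trackingActive ? 10 : 0);
}
```

A clean site scores 30 + 30 + 30 + 10 = 100; the same site without GTM scores 70, matching the examples above.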

Interpreting the score:

  • 90–100 (Healthy): No critical issues detected. Core tracking appears functional.
  • 70–89 (Needs Attention): P1 or P2 findings present. Data quality is degraded.
  • 0–69 (Critical): P0 findings present or multiple P1 issues. Tracking is unreliable.

The label is determined by both score and finding presence. A score of 75 with a P0 finding still displays "Critical" because the P0 takes precedence.
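A sketch of that precedence rule, with assumed names:

```javascript
// Label follows the score bands, except that any P0 finding forces "Critical".
function healthLabel(score, findings) {
  if (findings.some((f) => f.severity === 'P0')) return 'Critical';
  if (score >= 90) return 'Healthy';
  if (score >= 70) return 'Needs Attention';
  return 'Critical';
}
```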

The health score is a summary metric—useful for quick assessment, not a substitute for reading the findings.

What Engine Mode Means

Your report includes an engineMode field. This tells you how the scan executed.

browser

The engine ran in full browser mode. Puppeteer launched Chromium, loaded your pages, executed JavaScript, and captured network traffic. This is the expected mode for most scans.

Full browser mode provides the most complete results. All checks that require JavaScript execution and user flow simulation were available.

static_degraded

The browser launched but encountered instability during execution. Some data was captured before the issue occurred.

This typically happens on sites with very heavy JavaScript bundles or aggressive anti-bot measures. The engine preserves whatever network data it captured before the problem.

What this means for your results: Checks that depend on full page execution may not have run. Detection results (GA4 IDs, GTM containers) are usually still accurate because they're captured early. Findings related to user flows or late-firing events may be incomplete.

static_fallback

The browser could not launch at all. The engine fell back to static HTML analysis.

This mode uses HTTP requests and HTML parsing. It can detect script tags and basic configuration, but cannot observe runtime behavior.

What this means for your results: The scan provides limited insight. Use it as a starting point, but consider re-scanning if infrastructure issues were temporary.

Severity Is Never Downgraded

The engine does not reduce finding severity based on mode. A duplicate GA4 detected in static_degraded mode is still P1—the same as in full browser mode.

What changes is coverage. In degraded modes, fewer checks run. A "clean" report in degraded mode means "no issues found in the checks that could run," not "no issues exist."

If your scan ran in a degraded mode and shows few findings, the results may be incomplete rather than reassuring.
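When consuming report JSON programmatically, the safe pattern is to branch on engineMode before trusting a low finding count. A hypothetical sketch: only the engineMode field and its three values are documented above; everything else is an assumption:

```javascript
// Decide whether "few findings" can be read as "healthy" for a given mode.
function coverageCaveat(report) {
  switch (report.engineMode) {
    case 'browser':
      return null; // full coverage: findings can be read at face value
    case 'static_degraded':
      return 'partial coverage: user-flow and late-firing checks may not have run';
    case 'static_fallback':
      return 'static HTML only: runtime behavior was not observed; consider re-scanning';
    default:
      return 'unknown engine mode';
  }
}
```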

How to Read Consent Findings

AuditTags detects consent-related patterns by observing network requests at different stages: before consent interaction, after consent granted, and after consent denied.

What consent findings represent:

  • Technical evidence that tracking requests occurred (or didn't occur) at specific moments
  • Pattern matching against expected consent behavior
  • Signals that warrant further investigation

What consent findings do not represent:

  • Legal compliance determination
  • GDPR/CCPA audit certification
  • Definitive proof of violation

When the engine reports "tags fire before consent," it means network requests to tracking endpoints were observed before the user interacted with the consent banner. This is a technical observation. Whether that observation constitutes a legal issue depends on jurisdiction, consent banner configuration, tag purposes, and factors the engine cannot assess.
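The underlying check can be reproduced from the evidence timestamps. A hypothetical sketch of the pattern; the field names and input shape are assumptions:

```javascript
// Flag tracking requests observed before the first consent-banner interaction.
function firedBeforeConsent(requests, consentInteractionAt) {
  return requests.filter(
    (r) => r.isTracking && r.timestamp < consentInteractionAt);
}
```

Anything this returns is the technical observation behind the finding: a tracking hit that left the browser before the user had a chance to decide.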

Consent findings are technical risk indicators. They tell you where to look. They do not tell you whether you're compliant—that determination requires qualified legal counsel with knowledge of your specific implementation and regulatory context.

If you see consent-related findings, investigate the technical behavior, then consult appropriate expertise for compliance assessment.

What To Do After Receiving Your Report

1. Read the P0 findings first

If P0 findings exist, they represent active data corruption or significant risk. Understand what each one means before proceeding.

2. Identify root causes

Multiple findings often share a single root cause. Duplicate GA4 from a rogue app might cause both "multiple measurement IDs" and "purchase event duplicated." Fixing the root cause resolves both.

3. Fix and verify

Implement fixes in a staging environment when possible. After deploying, verify the fix is live. Browser caching, CDN caching, and GTM publishing delays can make fixes appear ineffective when they're simply not deployed yet.
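One quick way to confirm a consent-ordering fix actually deployed is to inspect the live HTML directly. A hypothetical spot-check: the string matching is a simplification, though the gtm.js loader URL is GTM's standard one:

```javascript
// Verify the consent default appears before the GTM loader in deployed HTML.
function consentDefaultBeforeGtm(html) {
  const consentAt = html.indexOf("gtag('consent', 'default'");
  const gtmAt = html.indexOf('googletagmanager.com/gtm.js');
  return consentAt !== -1 && gtmAt !== -1 && consentAt < gtmAt;
}
```

Run it against a fresh fetch of the page (not a cached copy) to distinguish "fix not deployed" from "fix ineffective".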

4. Re-scan after fixes

A new scan confirms whether fixes resolved the issues. The engine will observe current behavior, not cached results. Allow 5-10 minutes after deployment for CDN propagation before re-scanning.

5. Know when not to re-scan

If you fixed a specific issue and want to verify just that fix, a re-scan is appropriate. If you're iterating rapidly on multiple changes, wait until the implementation stabilizes before using a scan credit.

Why AuditTags Doesn't Change Every Week

The engine is intentionally stable. Web tracking behavior changes slowly—GA4's collection endpoint, GTM's loading patterns, and consent mode specifications don't shift frequently.

Engine changes introduce risk. A modified detection pattern might produce different results for the same site, creating false positives or masking real issues. We update the engine when external changes (platform updates, new tracking patterns, browser behavior changes) require it—not on a fixed schedule.

Stable diagnostics mean your results are comparable over time. A finding that appeared in December and persists in January represents the same underlying issue, not a detection artifact.

When we do update the engine, release notes document what changed and why.


Your AuditTags report is a diagnostic tool. It surfaces what your tracking implementation is actually doing, ranks issues by impact, and provides evidence to guide fixes. The interpretation—deciding what matters for your business and how urgently to address it—remains in your hands.

If a finding is unclear or you're uncertain how to proceed, the evidence section of each finding includes the specific network requests, DOM states, or console messages the engine observed. That raw evidence can inform conversations with developers, agencies, or platform support.