RomantiCode
Resource · 2026-05-06

AI code audit checklist for AI-generated apps

AI-generated code can run without being maintainable, reviewable, or production-ready. Use this checklist before you clean up, refactor, or launch.

Quick summary

This checklist covers 8 areas to review before cleaning up, refactoring, or shipping AI-generated code:

  1. Architecture and ownership
  2. Dependencies and configuration
  3. Authentication and permissions
  4. Data flow and API boundaries
  5. Error handling and edge cases
  6. Tests and manual verification
  7. Documentation and onboarding
  8. Cleanup priority plan

Use it as a starting point. Then generate an AI Code Audit Report with LegacyDoc AI to fill in the architecture map, module summaries, and cleanup priorities automatically.

Note: This checklist is for understanding and preparing your codebase — not a substitute for a professional security audit. Items marked "review" or "inspect" mean you should look at them carefully, not that a tool has automatically verified them.

01

Architecture and ownership

  • Architecture map exists
  • Each folder/file has a clear purpose
  • No orphaned modules
  • Entry point documented
  • No oversized files (300+ lines)
02

Dependencies and configuration

  • Dependencies listed and up to date
  • No unused packages
  • Env vars documented (not hardcoded)
  • Dev/prod config separated
  • Third-party services documented
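One pattern that covers the env-var items: load and validate all configuration in a single module, so every variable is documented in one place and a missing one fails fast at startup. A minimal sketch; the variable names (DATABASE_URL, SESSION_SECRET, PORT) are examples, not a required schema.

```javascript
// Sketch: centralize env vars so they are documented and validated in
// one place. The variable names below are examples only.
function loadConfig(env = process.env) {
  const required = ["DATABASE_URL", "SESSION_SECRET"];
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return {
    databaseUrl: env.DATABASE_URL,
    sessionSecret: env.SESSION_SECRET,
    // Optional vars get explicit, documented defaults.
    port: Number(env.PORT || 3000),
    isProduction: env.NODE_ENV === "production",
  };
}
```

Call loadConfig() once at startup so a missing variable surfaces immediately instead of as a runtime error deep inside a request.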
03

Authentication and permissions

  • Auth flows documented
  • Secrets in env vars (not in code)
  • Access control rules documented
  • All protected routes actually require auth
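For the last item, it helps to have one auth guard that every protected route goes through, so "does this route require auth?" becomes a one-line check. A minimal Express-style sketch; it assumes your session layer sets req.session.userId on login, so adapt the names to your stack.

```javascript
// Sketch of an Express-style auth guard. Assumes the session layer
// populates req.session.userId -- adapt the names to your stack.
function requireAuth(req, res, next) {
  if (req.session && req.session.userId) {
    return next();
  }
  return res.status(401).json({ error: "Authentication required" });
}
```

Auditing then reduces to grepping your route definitions for the ones missing requireAuth.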
04

Data flow and API boundaries

  • API routes and inputs/outputs documented
  • Data entry/exit points clear
  • No undocumented external API calls
  • Error handling consistent
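For consistent error handling at the API boundary, one approach is a single helper that defines the error shape, plus a wrapper that funnels uncaught route errors through it. A minimal Express-style sketch; the field names and status codes are assumptions, not a standard.

```javascript
// Sketch: one error shape for the whole API, plus a wrapper that routes
// uncaught errors through it. Field names and codes are examples.
function apiError(code, message, details) {
  return { error: { code, message, ...(details ? { details } : {}) } };
}

function asyncRoute(handler) {
  return async (req, res) => {
    try {
      await handler(req, res);
    } catch (err) {
      // Log err here in real code; never leak internals to the client.
      res.status(500).json(apiError("internal_error", "Unexpected error"));
    }
  };
}
```

With every route wrapped in asyncRoute, "error handling consistent" becomes a structural property rather than a per-route promise.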
05

Error handling and edge cases

  • Errors handled consistently
  • Loading and empty states handled
  • Edge cases covered
  • Plan for external service failures
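A plan for external service failures usually means timeouts and retries somewhere. This is a minimal retry sketch; the attempt count and delay are placeholder values, and real code should also cap total time and only retry idempotent operations.

```javascript
// Sketch: retry a flaky external call with a fixed delay. Attempt count
// and delay are placeholders; real code should also set timeouts and
// avoid retrying non-idempotent calls.
async function withRetry(fn, { attempts = 3, delayMs = 200 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastErr;
}
```

Wrapping third-party calls this way also makes them easy to find when you document external dependencies.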
06

Tests and manual verification

  • Automated tests OR manual test plan
  • Happy path documented
  • Known bugs documented
  • Deployment/smoke test plan
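A smoke test can be as small as one request against a health endpoint after deploy. A minimal sketch using Node 18+'s built-in fetch; the /health path is an assumption, so point it at whatever endpoint your app exposes.

```javascript
// Sketch: fail loudly if the deployed app's health endpoint is down.
// Assumes Node 18+ (built-in fetch) and that a /health route exists.
async function smokeTest(baseUrl) {
  const res = await fetch(`${baseUrl}/health`);
  if (!res.ok) {
    throw new Error(`Health check failed with status ${res.status}`);
  }
  return true;
}
```

Run it in CI right after deploy; a failing exit code blocks the rollout.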
07

Documentation and onboarding

  • README explains project
  • Key functions have JSDoc / comments
  • New dev can understand the project in under an hour
  • AI-ready context file exists
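For the JSDoc item, this is roughly the level of comment the checklist asks for: purpose, typed parameters, and return value. The function itself is a made-up example.

```javascript
/**
 * Calculates an order total including a flat tax rate.
 *
 * @param {Array<{price: number, qty: number}>} items - Line items.
 * @param {number} taxRate - Tax as a fraction, e.g. 0.2 for 20%.
 * @returns {number} Total including tax.
 */
function orderTotal(items, taxRate) {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  return subtotal * (1 + taxRate);
}
```

Comments at this level double as context for AI tools, which read JSDoc types when suggesting edits.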
08

Cleanup priority plan

  • Highest-risk areas identified
  • Fix-now vs. fix-later items separated
  • Safe-to-refactor parts identified
  • Plan documented for team / AI tools

Printable / copyable checklist

Copy the checklist below into your README or project notes, or share it with your team or AI tools.

ai-code-audit-checklist.md
# AI Code Audit Checklist

## 1. Architecture and ownership
- [ ] Architecture map exists
- [ ] Each folder/file has a clear purpose
- [ ] No orphaned modules
- [ ] Entry point documented
- [ ] No oversized files (300+ lines)

## 2. Dependencies and configuration
- [ ] Dependencies listed and up to date
- [ ] No unused packages
- [ ] Env vars documented (not hardcoded)
- [ ] Dev/prod config separated
- [ ] Third-party services documented

## 3. Authentication and permissions
- [ ] Auth flows documented
- [ ] Secrets in env vars (not in code)
- [ ] Access control rules documented
- [ ] All protected routes actually require auth

## 4. Data flow and API boundaries
- [ ] API routes and inputs/outputs documented
- [ ] Data entry/exit points clear
- [ ] No undocumented external API calls
- [ ] Error handling consistent

## 5. Error handling and edge cases
- [ ] Errors handled consistently
- [ ] Loading and empty states handled
- [ ] Edge cases covered
- [ ] Plan for external service failures

## 6. Tests and manual verification
- [ ] Automated tests OR manual test plan
- [ ] Happy path documented
- [ ] Known bugs documented
- [ ] Deployment/smoke test plan

## 7. Documentation and onboarding
- [ ] README explains project
- [ ] Key functions have JSDoc / comments
- [ ] New dev can understand the project in under an hour
- [ ] AI-ready context file exists

## 8. Cleanup priority plan
- [ ] Highest-risk areas identified
- [ ] Fix-now vs. fix-later items separated
- [ ] Safe-to-refactor parts identified
- [ ] Plan documented for team / AI tools

How to use this checklist with Claude Code, Cursor, or Codex

A six-step loop you can run inside VS Code.

  1. Generate an AI Code Audit Report with LegacyDoc AI to get the architecture map, module summaries, and cleanup priorities automatically.
  2. Open your AI coding tool (Claude Code, Cursor, or Codex) inside the same project.
  3. Share the generated context pack with the AI tool — paste it into the chat or include it as a context file.
  4. Ask the AI tool to walk through this checklist one section at a time, using the project context.
  5. For each item flagged as missing or weak, create a small, reviewable cleanup task.
  6. Tackle tasks in priority order. Re-run the audit report after major changes to update the context.

How LegacyDoc AI helps with this checklist

LegacyDoc AI is a local VS Code extension that generates documentation, architecture maps, and AI-ready context packs from your codebase. It can help you check off several items on this list automatically.

It does not replace a professional security audit, does not automatically fix code, and does not guarantee production readiness. It helps you understand, document, and prepare — so you can make informed decisions.

See what the output looks like

Sample audit report

Curious what an actual audit report looks like before running it on your own code? We put together a text-based example for a fictional vibe-coded app — project overview, architecture map, areas to inspect, cleanup priorities, and AI-ready review notes.

Browse the sample audit report

FAQ

What is an AI code audit checklist?

A structured list of things to review before cleaning up, refactoring, or shipping AI-generated code. It covers architecture, dependencies, documentation, tests, and cleanup priorities.

Is AI-generated code production-ready?

Not automatically. AI-generated code can work without being maintainable, secure, or well-documented. A review and cleanup process is usually needed before production use.

How do I review AI-generated code?

Start by understanding the architecture, then check documentation, dependencies, error handling, and test coverage. Use this checklist as a starting point, and generate a context pack to share with your team or AI tools.

Can AI audit its own code?

AI tools like Claude Code or Cursor can help review code, but they need context about your project to do it well. Generating a context pack first makes AI-assisted review much more effective.

Generate an AI Code Audit Report

Let LegacyDoc AI generate the architecture map, module summaries, and context pack for your codebase.