New Project Guide

Start with AI-ready context from day one


README first, .faf second, code third.

Why This Order Matters

README First

Forces you to articulate purpose before writing code. A human-readable definition you can't fake.

project.faf Second

AI-readable version of your README context. Structured YAML format that machines can parse.
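For illustration only, a project.faf might carry context like the sketch below. The field names here are hypothetical, not the authoritative schema; the actual file generated by faf init may differ:

```yaml
# Illustrative sketch only - field names are hypothetical,
# not the real project.faf schema.
project:
  name: my-app
  what: CLI tool that converts CSV exports to JSON
  why: Manual conversion was error-prone and slow
  who: Data analysts on the reporting team
stack:
  language: TypeScript
  runtime: Node.js
```

The point is the shape: the same 6 Ws from your README, but in structured YAML that machines can parse.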

Code Third

Implementation follows definition. AI has context to help from the start. No documentation debt.

README Generation Tools

Don't have a README yet? We've got you covered with two easy options:

🌐 Web Tool: The 6 Ws

Fill out a simple form answering who/what/why/where/when/how. Copy the generated README.

faf.one/6ws →

⌨️ CLI Tool

Interactive command-line tool that asks the 6 Ws and generates a README for you.

faf readme

Asks questions, writes README.md instantly

The 3 Magic Tools

Simple, powerful, championship-grade workflow.

1. Initial Extraction

faf init OR faf git

When: First step - choose based on context
Why: Extract initial context from your project or any GitHub repo.
Decision: Choose your path

Have a GitHub repo URL?

npx faf-cli git https://github.com/facebook/react

⏱️ 2 seconds → 30-50% score. No clone needed!

OR

Working locally?

faf init

Extracts from your local files (README, package.json, etc.)

2. Auto-Enhance

faf auto

When: After initial extraction
Why: Turbo-Cat discovers 153+ formats. Grows from 30% → 80%.
Auto-enhance your context:
faf auto

What happens:

  • ✨ Turbo-Cat discovers 153+ format types
  • 📊 Auto-fills detected stack slots
  • 🚀 Grows score from 30% → 80%
  • ⚡ Zero questions asked - fully automated

3. The Last 10-20%

faf go

When: When score is 80-90% (optional if already 100%)
Why: Usually fills 1-2 missing 6 Ws (who/what/why). Stack detection is robust - blocker is almost always human context.
The final 10-20% (optional if already 100%):
faf go

Interactive polish:

  • 💡 Usually fills 1-2 missing 6 Ws (who/what/why/where/when/how)
  • 🎯 Stack detection is robust - blocker is human context
  • 🏆 80-90% → 100% Trophy
  • ⚡ Skip if faf auto already got you to 100%!

Score Progression

Step  Command              Score     What Happens
1     faf init or faf git  30-50%    Initial extraction from README + package files
2     faf auto             80%       Turbo-Cat auto-discovers formats and fills slots
3     faf go               100% 🏆   Interactive polish to championship grade

AI-Specific Context Files

project.faf works with ALL AIs. But each AI also has its own prose version:

CLAUDE.md

For Anthropic Claude

faf bi-sync

GEMINI.md

For Google Gemini

faf gemini

project.faf

Universal (all AIs)

Always generated

💡 Tip: project.faf is the source of truth. AI-specific files are generated from it.
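Putting the tip into practice: build project.faf up with the three Magic Tools, then regenerate the AI-specific files from it. A sketch using only the commands documented on this page (output omitted; your scores will vary):

```shell
# Illustrative session - assumes faf-cli is installed locally.

faf init      # 1. extract initial context from local files
faf auto      # 2. auto-enhance with Turbo-Cat format discovery
faf go        # 3. answer 1-2 questions to reach 100%

faf bi-sync   # regenerate CLAUDE.md from project.faf
faf gemini    # regenerate GEMINI.md from project.faf
```

Edit project.faf, not the generated files, and re-run the sync commands after changes.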

Common Mistakes

Wrong: Skipping the Magic Tools

# Just use faf init
faf init
# Stop here, start coding
# Score: 30% (incomplete)

Problem: Missing 70% of potential context. faf auto and faf go are free wins!

Wrong: Using faf init on GitHub Repos

git clone https://github.com/facebook/react
cd react
faf init  # Slow! Wrong tool!

Problem: Use faf git instead - no clone needed, 2 seconds.

Right: All 3 Magic Tools

# 1. Initial extraction
faf init  # or: faf git 

# 2. Auto-enhance
faf auto

# 3. Polish
faf go

# Result: 100% 🏆

Result: Championship-grade AI context in 3 commands.

Quick Reference

💡 What Blocks 100%?

With a good README, faf auto can often reach 100% without needing faf go. And faf git on well-documented repos can score 100% on its own!

When stuck at 80-90%: It's very often 1-2 missing 6 Ws (who/what/why/where/when/how). Tech stack detection is robust - the blocker is almost always human context.

faf go asks 1-2 targeted questions like "Who is this for?" or "Why did you build this?" and you're at 100%.

The Decision Tree

  • Have a GitHub URL? → faf git (might be 100% already!)
  • Working locally? → faf init then faf auto
  • Want to polish? → faf go (if not already 100%)

One-Liner

"faf init → faf auto → faf go = 100% 🏆"

Next Steps