Start Here
New to the CCA-F? Read our Complete CCA-F Certification Guide for a full exam overview. Then check What is CCA-F? for background and bookmark the CCA-F Cheat Sheet for quick reference during your studies.
Introduction
Thirty days is enough to pass the Claude Certified Architect - Foundations (CCA-F) exam, but only if you study smart. The CCA-F covers five domains spanning agentic architecture, tool design, MCP configuration, Claude Code mastery, prompt engineering, and production reliability. That is a wide surface area, and unfocused studying is the number-one reason candidates fail.
This plan assumes you can dedicate 1-2 hours per day on weekdays and 2-4 hours on weekends for hands-on projects. It totals roughly 50-60 hours of study time. Every day has a specific focus, every week ends with a practice exam, and by Day 30 you will have covered every testable concept at least twice.
The philosophy behind this plan is simple: active recall beats passive reading. Every study session includes something you build, test, or practice. Passive reading of documentation feels productive but produces poor exam results. The candidates who pass on their first attempt are the ones who build working code, take practice tests under timed conditions, and review their mistakes systematically.
The plan is organized around four weekly themes:
- Week 1: Foundations and Agentic Architecture (Domain 1)
- Week 2: Tool Design, MCP, and Claude Code (Domains 2 and 3)
- Week 3: Prompt Engineering and Context Management (Domains 4 and 5)
- Week 4: Practice Exams, Targeted Review, and Exam Day
This sequencing is intentional. Domain 1 (Agentic Architecture) is the heaviest domain on the exam and provides the conceptual foundation for everything else. Domains 2 and 3 (Tool Design, MCP, Claude Code) build directly on Domain 1 concepts. Domains 4 and 5 (Prompt Engineering, Reliability) complete the picture with production-readiness topics. Week 4 is dedicated entirely to practice exams because testing yourself is the highest-leverage activity for score improvement.
Adjust the timeline based on your experience. If you already build production Claude applications daily, you can compress this to two weeks. If you have never called the Claude API, add foundation work before starting. The prerequisites section below will help you calibrate.
Before You Start: Prerequisites Check
Before committing to a 30-day timeline, honestly assess where you stand. The CCA-F is not a beginner certification. It expects hands-on familiarity with Claude's API, agentic patterns, and development tooling. Answer these four questions:
- Can you build a basic Claude API application? You should be comfortable making API calls, handling responses, managing conversation history, and working with tool_use responses.
- Have you used Claude Code? You do not need to be an expert, but you should have installed it, run it in a project, and seen how CLAUDE.md files work.
- Are you familiar with MCP concepts? You should know what the Model Context Protocol is, even if you have not configured servers from scratch.
- Do you understand agentic AI patterns at a conceptual level? Concepts like agent loops, multi-agent orchestration, and tool calling should not be completely foreign.
Timeline Adjustment by Experience Level
| Experience Level | Self-Assessment | Recommended Timeline |
|---|---|---|
| Experienced Claude Developer | Yes to all 4 questions. Build with Claude daily. | 2-3 weeks (compress plan) |
| Intermediate Developer | Yes to 2-3 questions. Some Claude API experience. | 30 days (follow plan as-is) |
| AI-Curious Developer | Yes to 1 question. General AI/ML background but new to Claude. | 6-8 weeks (add foundation work) |
| Complete Beginner | No to all. New to both AI and Claude. | 8-12 weeks (complete Anthropic Academy first) |
If you answered yes to all four questions, this 30-day plan is calibrated perfectly for you. If you are new to Claude but have general programming experience, add 2-4 weeks of foundation work first: complete the Anthropic Academy courses, build a few small projects with the Claude API, and get comfortable with Claude Code before starting Day 1.
Foundation Work Resources
If you need foundation work before starting this plan, complete these in order: (1) Anthropic Academy "Claude 101" course, (2) Build 3-5 small Claude API projects, (3) Install and use Claude Code for a real task, (4) Read the MCP documentation. Then return to Day 1.
Study Plan Overview
Here is the complete 30-day plan at a glance. Each week builds on the previous one, and the domains are sequenced so that foundational concepts come first.
30-Day Study Plan Overview
| Week | Theme | Domains Covered | Hours/Week | Key Deliverable |
|---|---|---|---|---|
| Week 1 (Days 1-7) | Foundations & Agentic Architecture | Domain 1: Agentic Architecture | 10-14 hours | Working multi-agent orchestrator |
| Week 2 (Days 8-14) | Tool Design, MCP & Claude Code | Domain 2: Tool Design & MCP, Domain 3: Claude Code | 10-14 hours | Fully configured project with CLAUDE.md + MCP |
| Week 3 (Days 15-21) | Prompt Engineering & Context Management | Domain 4: Prompt Engineering, Domain 5: Reliability | 10-14 hours | Structured extraction pipeline with validation |
| Week 4 (Days 22-30) | Practice Exams & Final Review | All 5 Domains | 12-16 hours | Consistent 75%+ on practice tests |
The plan front-loads the hardest and most heavily weighted domain (Agentic Architecture) in Week 1 while you are freshest. Weeks 2 and 3 cover the remaining four domains. Week 4 is entirely dedicated to practice exams and targeted review, which is where most of your score improvement will come from.
Week 1: Foundations & Agentic Architecture (Days 1-7)

Week 1 focuses entirely on Domain 1: Agentic Architecture. This is the largest and most conceptually dense domain on the exam. You will learn the agentic loop, multi-agent orchestration patterns, session management, error handling, and the critical distinction between programmatic enforcement and prompt-based guidance.
Day 1: Orientation & Anthropic Academy
Time: 1.5-2 hours
Your first day is about getting oriented. Do not dive into technical details yet. Instead, build a mental map of the entire exam.
- Complete "Claude 101" on Anthropic's Skilljar platform. Even if you have used Claude before, this course ensures you have the same vocabulary the exam uses.
- Complete "AI Fluency Framework" on Skilljar. This covers Anthropic's approach to AI safety and responsible development, which appears throughout the exam.
- Read the CCA-F exam guide end to end. Note the five domains, their weightings, and the specific topics listed under each.
- Take a diagnostic practice test on Preporato in untimed mode. Do not study first. The goal is to identify your weak areas so you can allocate time wisely over the next 29 days.
- Record your baseline score for each domain. You will compare this to your Week 1 practice test to measure progress.
After Day 1, you should have a clear picture of what the exam covers and where your gaps are. Adjust the remaining days if your diagnostic reveals unexpected weaknesses.
Study tip for Day 1: Create a spreadsheet or document to track your scores. Record your diagnostic score by domain. You will update this after every practice test. Watching your scores improve over 30 days is motivating and helps you allocate study time where it matters most.
Days 2-3: Agentic Loop Design
Time: 1.5-2 hours per day
The agentic loop is the foundation of everything else on the exam. If you understand this deeply, many other questions become straightforward.
Day 2 - Theory:
- Complete the Anthropic Academy "Agent Skills" course. This is the single most important preparatory resource for Domain 1.
- Study the core agentic loop pattern:
  - Call Claude with tools available
  - Check `stop_reason`: if `"tool_use"`, execute the tool and continue the loop; if `"end_turn"`, terminate
  - Append tool results to the conversation history before the next API call
  - The loop continues until Claude decides it is done (not until you decide)
- Understand why this matters: The exam tests whether you know that the model controls loop termination through `stop_reason`, not the developer through arbitrary iteration caps.
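The loop pattern described above can be sketched in a few lines of Python. To keep the sketch self-contained and testable, the API call is injected as a callable and responses are plain dicts; the real Anthropic SDK returns typed objects with the same fields (`stop_reason`, `content` blocks), so treat this as a minimal illustration of the control flow, not production code:

```python
def run_agent_loop(create_message, messages, tools, execute_tool):
    """Minimal agentic loop: a while loop controlled by stop_reason.
    The model, not a fixed iteration cap, decides when to stop."""
    while True:
        response = create_message(messages=messages, tools=tools)
        # Always append the assistant turn to history before acting on it.
        messages.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] != "tool_use":
            return messages  # "end_turn": the model says it is done
        # Execute every requested tool and feed the results back.
        results = [
            {"type": "tool_result", "tool_use_id": block["id"],
             "content": execute_tool(block["name"], block["input"])}
            for block in response["content"] if block["type"] == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Note what is absent: no `for _ in range(5)` cap and no string matching on Claude's text output. Termination is driven entirely by `stop_reason`, which is exactly the distinction the exam tests.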
Day 3 - Hands-On:
- Build a simple single-agent loop that uses 2-3 tools (for example: a file reader, a calculator, and a web search tool)
- Implement proper conversation history management: Every assistant message and every tool result must be appended to the messages array
- Test edge cases: What happens when a tool fails? What happens when the model calls a tool that does not exist? What happens when you hit the context window limit?
- Key exam concept: The agentic loop is a while loop controlled by stop_reason, not a for loop with a fixed iteration count. Arbitrary caps are an anti-pattern.
Why this matters so much for the exam: At least 3-5 questions on a typical CCA-F exam directly test your understanding of the agentic loop. These questions present scenarios where a developer has implemented the loop incorrectly (e.g., using string matching on Claude's output to decide when to stop, or limiting the loop to exactly 5 iterations) and ask you to identify the problem. If you have built a working loop yourself, these questions are straightforward. If you have only read about the concept, they can be tricky because the wrong answers often sound plausible.
Build It, Don't Just Read It
The CCA-F tests applied knowledge, not just theory. Building a working agentic loop on Day 3 will cement the concepts far better than reading documentation alone. Use the Claude API directly rather than a framework so you understand what happens under the hood.
Days 4-5: Multi-Agent Orchestration
Time: 1.5-2 hours per day
Multi-agent orchestration is one of the most heavily tested topics. The exam expects you to know specific patterns, their tradeoffs, and common mistakes.
Day 4 - Hub-and-Spoke Architecture:
- Study the hub-and-spoke pattern: A coordinator agent manages multiple specialized subagents. The coordinator decides which subagent to invoke, what context to provide, and how to synthesize results.
- Critical concept - Context Isolation: Subagents have ISOLATED context. They do NOT inherit the coordinator's conversation history. This is the most commonly missed concept on the exam.
- Passing context explicitly: Because subagents are isolated, you must pass complete findings explicitly in prompts. If the coordinator learns something relevant to a subagent's task, it must include that information in the subagent's prompt.
- Parallel execution: Multiple subagent Task calls can be made in a single coordinator response. This is the mechanism for parallel execution in multi-agent systems.
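The three ideas above (isolation, explicit context passing, parallel dispatch) fit in one small sketch. Here `run_subagent` is an assumed callable wrapping whatever agent invocation you use; the key detail is that each subagent gets a freshly built message list containing only what the coordinator explicitly includes:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_subagents(run_subagent, tasks, shared_findings):
    """Hub-and-spoke dispatch: each subagent gets a FRESH message list
    (context isolation), with relevant findings passed explicitly."""
    def make_prompt(task):
        # Explicit context passing: the subagent sees ONLY what is written
        # here, never the coordinator's conversation history.
        return [{"role": "user",
                 "content": f"Known findings: {shared_findings}\n\nYour task: {task}"}]
    with ThreadPoolExecutor() as pool:  # parallel dispatch
        return list(pool.map(lambda t: run_subagent(make_prompt(t)), tasks))
```

If the coordinator learned something a subagent needs and it is missing from `shared_findings`, the subagent simply will not know it — there is no hidden inheritance to fall back on.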
Day 5 - Hands-On:
- Build a coordinator that dispatches 2 research subagents in parallel. For example: one subagent researches pricing for a product while another researches reviews. The coordinator synthesizes both results.
- Verify context isolation: Confirm that your subagents cannot see the coordinator's earlier conversation turns. If they can, your architecture is wrong.
- Test explicit context passing: Have the coordinator pass specific findings from one subagent to another when needed.
- Practice the exam's language: The exam uses specific terms like "hub-and-spoke," "context isolation," "explicit context passing," and "parallel dispatch." Use these terms in your notes.
Key anti-patterns to memorize:
- Assuming subagents inherit coordinator context (they do not)
- Using a single monolithic agent instead of specialized subagents
- Over-decomposing into too many tiny subagents (narrow decomposition anti-pattern)
- Not passing necessary context explicitly between agents
Study exercise for Days 4-5: After building your multi-agent coordinator, try deliberately breaking context isolation. Have the coordinator reference information from its conversation history in a subagent prompt WITHOUT explicitly passing it. Observe how the subagent fails or hallucinates. This experiential learning is far more memorable than reading about context isolation in documentation. Then fix it by passing the information explicitly and observe the difference in subagent performance.
Day 6: Session & Error Handling
Time: 1.5-2 hours
Day 6 covers session management, error handling, and the single most important mental model on the entire exam: programmatic enforcement versus prompt-based guidance.
Session Management:
- `--resume <session-name>`: Resume a named session to continue previous work. Sessions persist conversation history and tool state.
- `fork_session`: Create a branch from an existing session. Useful for exploring alternatives without losing the original session state.
- When to use each: Resume for continuation, fork for exploration. The exam tests whether you know the difference.
Programmatic Enforcement vs. Prompt-Based Guidance:
This is THE key mental model for the CCA-F. Every architecture decision can be evaluated through this lens:
- Programmatic enforcement: Hard constraints implemented in code. The model CANNOT violate them. Examples: tool availability restrictions, output schema validation via tool_use, file system permissions, API rate limits.
- Prompt-based guidance: Soft constraints expressed in natural language. The model SHOULD follow them but CAN violate them. Examples: "Do not modify test files," "Always explain your reasoning," "Use formal tone."
- The exam principle: Use programmatic enforcement for safety-critical constraints and prompt-based guidance for preferences and style. Never rely on prompt-based guidance alone for security or correctness.
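The contrast is easiest to see in code. In this hypothetical sketch, the allow-list check is enforcement (the call physically cannot proceed), while the system prompt is guidance (the model usually complies, but nothing stops it from deviating):

```python
# Programmatic enforcement: a hard constraint implemented in code.
# The allow-list and tool names here are illustrative.
ALLOWED_TOOLS = {"read_file", "run_tests"}

def enforce_tool_call(tool_name):
    """Reject any tool not on the allow-list. The model CANNOT
    violate this constraint, no matter what it generates."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not on the allow-list")
    return True

# Prompt-based guidance: a soft constraint in natural language.
# The model SHOULD follow it but CAN violate it.
SYSTEM_PROMPT = "Do not modify test files. Always explain your reasoning."
```

If the constraint is safety-critical (file deletion, payments, secrets), it belongs in the first category; if it is a style or workflow preference, the second is appropriate.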
Anti-Patterns to Memorize:
- Natural language parsing for loop termination (use stop_reason instead)
- Arbitrary iteration caps (let the model control the loop)
- Narrow decomposition (over-splitting tasks into too many subagents)
- Escalation based on sentiment (escalate on: human request, ambiguous policy, no progress -- NOT on user frustration or negative sentiment)
Error Handling Patterns:
- Structured errors with an `isRetryable` flag: transient errors (network timeouts, rate limits) are retryable; validation errors (bad input, missing permissions) are not
- Never silently swallow errors. Always propagate structured error context back to the agent.
- Crash recovery using manifest files: track operation progress so work can resume after failures
Putting it all together: By the end of Day 6, you should be able to look at any agentic system design and evaluate it through the lens of programmatic enforcement vs. prompt-based guidance. Ask yourself: "Is this constraint enforced by code or by a natural language instruction? If it is safety-critical, is code enforcement sufficient?" This mental model will help you answer 20-30% of the exam questions because it applies across all five domains.
Study exercise for Day 6: Take your multi-agent coordinator from Day 5 and add both types of enforcement. Add a programmatic constraint (e.g., the coordinator can only invoke subagents from an allowed list) and a prompt-based constraint (e.g., "Always summarize findings in bullet points"). Observe how the programmatic constraint is impossible to violate while the prompt-based one occasionally is. This visceral understanding is what the exam tests.
Day 7: Week 1 Review & Practice
Time: 2-4 hours
Day 7 is your first checkpoint. This is where you measure progress and identify remaining gaps before moving to Week 2.
- Review your notes and flashcards for all Domain 1 concepts
- Take a practice test on Preporato focusing on Agentic Architecture questions
- Compare your score to Day 1 baseline. You should see meaningful improvement in Domain 1.
- Create flashcards for any concepts you still find shaky
- Read: CCA-F Exam Domains Complete Breakdown to preview Weeks 2 and 3
Week 1 Checkpoint
If you can explain hub-and-spoke orchestration, context isolation, the agentic loop's stop_reason mechanism, and the difference between programmatic enforcement and prompt-based guidance from memory, you are on track. If any of these feel fuzzy, spend an extra day on them before moving to Week 2.
Week 2: Tool Design, MCP & Claude Code (Days 8-14)

Week 2 covers Domains 2 and 3. These domains are tightly connected: tool design principles apply whether you are defining tools via the API or configuring them through MCP and Claude Code. Mastering this week's material will also reinforce the agentic architecture concepts from Week 1.
Days 8-9: Tool Design Principles
Time: 1.5-2 hours per day
Tool design is where architecture meets implementation. The exam tests both your understanding of design principles and your ability to apply them in realistic scenarios.
Day 8 - Tool Description & Routing:
- Tool descriptions are the PRIMARY routing mechanism. Claude uses tool descriptions to decide which tool to call. A vague description leads to incorrect tool selection. A precise description leads to reliable routing.
- What to include in tool descriptions:
- What the tool does (clear, specific purpose)
- Input formats and expected types
- Example inputs and outputs
- Edge cases and boundary conditions
- What the tool does NOT do (to prevent incorrect routing)
- Tool count guidelines: 4-5 tools per agent is the sweet spot. Once you exceed 18 tools, selection accuracy degrades significantly. If you need more tools, split into specialized subagents.
- `tool_choice` parameter:
  - `auto`: Claude decides whether and which tool to call (default, most common)
  - `any`: Claude MUST call a tool (forces tool use)
  - named (e.g., `{"type": "tool", "name": "specific_tool"}`): Claude MUST call the specified tool
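Both ideas can be shown in the Messages API shapes. The tool below is made up for illustration; notice how the description states purpose, input format, and explicitly what the tool does NOT do, because the description is what routes Claude's selection:

```python
# An illustrative tool definition in the Anthropic Messages API shape.
lookup_tool = {
    "name": "lookup_order",
    "description": (
        "Look up a single order by its numeric order ID and return status, "
        "items, and shipping info. Input: order_id (integer). "
        "Does NOT search orders by customer name; use a search tool for that."
    ),
    "input_schema": {
        "type": "object",
        "properties": {"order_id": {"type": "integer"}},
        "required": ["order_id"],
    },
}

# The three tool_choice forms the exam expects you to know:
choice_auto  = {"type": "auto"}                          # Claude decides
choice_any   = {"type": "any"}                           # must call SOME tool
choice_named = {"type": "tool", "name": "lookup_order"}  # must call THIS tool
```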
Day 9 - Error Handling in Tools:
- Structured error responses: Every tool should return structured errors, not raw exception messages. Include:
  - Error type (validation, permission, network, timeout)
  - `isRetryable` flag: `true` for transient errors (network timeout, rate limit), `false` for validation errors (bad input, missing field)
  - Human-readable message
  - Suggested remediation (when applicable)
- Why structured errors matter for the exam: The exam specifically tests whether you know to include the `isRetryable` flag. This flag allows the agentic loop to automatically retry transient failures without human intervention while stopping on validation errors.
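A minimal sketch of such an error builder — the set of error types treated as transient is an assumption for the example, not an exam-mandated taxonomy:

```python
def tool_error(error_type, message, remediation=None):
    """Build a structured tool error. Transient error types are marked
    retryable so the agentic loop can retry them automatically;
    validation-style errors are not, so the loop stops instead."""
    transient = {"network", "timeout", "rate_limit"}  # illustrative set
    return {
        "error_type": error_type,
        "isRetryable": error_type in transient,
        "message": message,
        "remediation": remediation,
    }
```

A tool wrapping a flaky HTTP call would return `tool_error("timeout", ...)` so the loop retries, while bad input returns `tool_error("validation", ...)` with a remediation hint so the model can fix its own request.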
Day 9 - Hands-On Build:
- Build a set of 4-5 tools with comprehensive descriptions, structured error handling, and clear boundaries. Suggested tool set:
- A file reader tool with clear description of supported formats
- A data lookup tool with input validation
- A calculation tool with explicit numeric type requirements
- A formatting/output tool with format specification
- A validation tool that checks inputs against rules
- Test tool routing: Give Claude ambiguous requests and observe which tool it selects. If it picks the wrong tool, improve the tool description until routing is reliable.
- Test error handling: Deliberately send bad inputs to your tools and verify that structured errors (with isRetryable flags) are returned correctly.
- Test tool count limits: If time permits, add 15+ tools and observe how selection accuracy changes. This demonstrates the 18-tool degradation threshold firsthand.
Common Exam Trap
The exam often presents scenarios where a tool description is vague or ambiguous and asks what the most likely consequence is. The answer is almost always "incorrect tool selection" or "unreliable routing." Tool descriptions are not documentation for humans; they are routing instructions for Claude.
Days 10-11: MCP Deep Dive
Time: 1.5-2 hours per day
The Model Context Protocol (MCP) is Anthropic's standard for connecting Claude to external tools and data sources. The exam tests configuration, security, and architectural decisions around MCP.
Day 10 - MCP Configuration:
- Complete the Anthropic Academy "Introduction to MCP" course. This is essential preparation for MCP questions.
- Two configuration files:
  - `.mcp.json` (project-level): Stored in the project root, committed to version control. Contains project-specific MCP server configurations that all team members share.
  - `~/.claude.json` (user-level): Stored in the home directory, NOT committed to VCS. Contains personal MCP server configurations, API keys, and user-specific settings.
- Environment variable expansion: Use `${GITHUB_TOKEN}` syntax in MCP configurations. The variable is expanded at runtime from the user's environment.
- Security rule: NEVER commit secrets to `.mcp.json`. Use environment variable references instead. This is a frequently tested security concept.
- MCP Resources: MCP servers can expose content catalogs (not just tools). Resources provide structured data that Claude can access without a tool call, like documentation, database schemas, or configuration files.
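For reference, a project-level `.mcp.json` might look like the sketch below. The server package name is illustrative; the point to internalize is that the secret appears only as a `${GITHUB_TOKEN}` reference, with the actual value living in the user's environment, never in the committed file:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```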
Day 11 - Hands-On:
- Configure an MCP server for a real project. Choose something practical: a GitHub MCP server, a database MCP server, or a file system MCP server.
- Set up both configuration files: Create a `.mcp.json` with project-level settings and update `~/.claude.json` with user-level settings.
- Test environment variable expansion: Verify that `${GITHUB_TOKEN}` or similar variables resolve correctly at runtime.
- Explore MCP Resources: If your chosen server exposes resources, examine how Claude discovers and uses them.
- Read: CCA-F Exam Domains Complete Breakdown for Domain 2 details
Key exam concepts for MCP:
- `.mcp.json` is for project/team; `~/.claude.json` is for user/personal
- Environment variables for secrets, never hardcoded values
- MCP servers provide tools AND resources
- MCP standardizes the tool interface so Claude does not need provider-specific integrations
Common exam question pattern for MCP: The exam frequently presents a scenario where a developer has placed secrets directly in .mcp.json and committed it to version control, then asks what the security issue is and how to fix it. The correct answer always involves environment variable expansion (${VARIABLE_NAME}) and storing actual secret values in the user-level configuration or environment. Another common pattern shows team instructions in ~/.claude.json (user-level) and asks why other team members are not seeing them. The answer: team instructions belong in .mcp.json (project-level, committed to VCS).
Days 12-13: Claude Code Configuration
Time: 1.5-2 hours per day
Claude Code is Anthropic's CLI tool for agentic coding. The exam tests configuration hierarchy, skill definitions, operational modes, and CI/CD integration. This is one of the most practical domains on the exam.
Day 12 - CLAUDE.md Hierarchy & Configuration:
- Complete the Anthropic Academy "Claude Code Developer Training" course.
- CLAUDE.md hierarchy (three levels):
  - User-level (`~/.claude/CLAUDE.md`): Personal preferences that apply to ALL projects. Examples: coding style preferences, editor settings, personal workflow habits.
  - Project-level (`./CLAUDE.md` in project root): Project-specific instructions committed to VCS. Examples: project architecture, coding standards, testing requirements, deployment procedures.
  - Directory-level (`./src/CLAUDE.md` in subdirectories): Context for specific directories. Example: "This directory contains React components; use functional components with hooks."
- Critical exam trap: Team-wide instructions placed in user-level CLAUDE.md. This is WRONG because user-level is personal and not shared via VCS. Team instructions belong in project-level CLAUDE.md. The exam specifically tests this.
- `.claude/rules/` directory: Supports glob-pattern-based rules that apply only to matching files. Example: a rule that only activates when editing `.test.ts` files. More targeted than CLAUDE.md instructions.
Day 13 - Skills, Modes & CI/CD:
- Claude Code Skills:
  - `context: fork` - Fork the current context for parallel exploration
  - `allowed-tools` - Restrict which tools are available in a context
  - `argument-hint` - Provide hints for tool arguments
  - Skills are defined in CLAUDE.md or rules files and extend Claude Code's capabilities for specific tasks
- Operational Modes:
  - Plan Mode: Claude analyzes and plans but does NOT execute changes. Use for understanding codebases, planning refactors, and reviewing before action.
  - Direct Execution Mode: Claude analyzes AND executes changes. The default mode for most development tasks.
  - When to use each: Plan Mode for high-risk changes, unfamiliar codebases, or when you want to review before execution. Direct Execution for routine tasks with well-understood scope.
- CI/CD Integration:
  - `-p` flag: Non-interactive mode for CI/CD pipelines. Claude reads from stdin or arguments, executes, and exits without prompting.
  - `--output-format json`: Machine-readable output for pipeline processing. Parse results programmatically instead of reading human-formatted text.
  - Together: `claude -p "Run tests and report results" --output-format json` enables fully automated CI/CD integration.
- Build: Set up CLAUDE.md at all three levels for a real or practice project. Include project architecture notes, coding standards, and directory-specific context. Verify that Claude Code respects the hierarchy.
Detailed build exercise for Day 13:
Create a sample project with this structure:
```
my-project/
├── CLAUDE.md              (project-level)
├── .claude/
│   └── rules/             (glob-pattern rules)
└── src/
    ├── CLAUDE.md          (directory-level for src)
    └── components/
        └── CLAUDE.md      (directory-level for components)
```
In the project-level CLAUDE.md, include: project architecture overview, tech stack, testing requirements, and deployment procedures. In the src/CLAUDE.md, include: "Source code lives here. Use TypeScript. All functions must have JSDoc comments." In src/components/CLAUDE.md, include: "React functional components only. Use hooks, not class components. Each component gets its own file."
Then add a .claude/rules/ file with a glob pattern for test files (e.g., *.test.ts) that says: "Always use describe/it blocks. Mock external dependencies. Test edge cases."
Run Claude Code in different directories and verify it picks up the appropriate CLAUDE.md instructions. This exercise takes 30-45 minutes but cements the hierarchy concept permanently.
Hands-On Practice is Essential
Do not just read about Claude Code configuration. Actually set it up. Create the files, run Claude Code, and verify the behavior. The exam presents realistic scenarios that are much easier to answer if you have done the configuration yourself.
Day 14: Week 2 Review & Practice
Time: 3-4 hours
Day 14 is your second major checkpoint. This practice test covers everything from Weeks 1 and 2.
- Take a full-length practice test on Preporato under timed conditions. Simulate real exam pressure.
- Review every wrong answer with the provided explanations. Do not just read the right answer; understand WHY the other options are wrong.
- Update your notes for weak areas. If you missed MCP configuration questions, go back to Day 10-11 notes. If tool design tripped you up, revisit Days 8-9.
- Create a "mistake log" tracking the specific concepts you get wrong. You will use this log extensively in Week 4.
Target score: 70%+ overall. If you are below 65%, consider spending an extra day or two on your weakest domain before moving to Week 3.
Master These Concepts with Practice
Our CCA-F practice bundle includes:
- 6 full practice exams (390+ questions)
- Detailed explanations for every answer
- Domain-by-domain performance tracking
30-day money-back guarantee
Week 3: Prompt Engineering & Context Management (Days 15-21)

Week 3 covers Domains 4 and 5. These domains deal with how you communicate with Claude (prompt engineering, structured output) and how you ensure reliability in production (context management, error propagation, human review). Many candidates underestimate these domains because they feel "softer" than architecture, but they carry significant exam weight.
Days 15-16: Prompt Engineering
Time: 1.5-2 hours per day
Prompt engineering for the CCA-F is not about clever tricks. It is about systematic, reliable techniques that produce consistent results in production systems.
Day 15 - Core Principles:
- Explicit criteria over vague guidance: "Respond in JSON with keys: name, age, city" beats "Give me a structured response." The exam rewards specificity and punishes ambiguity.
- System prompts for persistent instructions: Place role definitions, output format requirements, and behavioral constraints in the system prompt. These persist across the entire conversation.
- User messages for per-turn input: Dynamic content, specific queries, and turn-specific context go in user messages.
- The clarity test: If a human could interpret your prompt two different ways, Claude might too. Eliminate ambiguity.
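These placement rules map directly onto the request body: persistent instructions go in the `system` field, per-turn content in `messages`. The model name below is illustrative:

```python
# System prompt: role + persistent output-format constraint.
# User message: the dynamic, per-turn input.
request = {
    "model": "claude-sonnet-4-5",  # illustrative model name
    "max_tokens": 1024,
    "system": (
        "You are a support-ticket classifier. "
        "Always respond in JSON with keys: category, urgency."
    ),
    "messages": [
        {"role": "user", "content": "Ticket: 'I was charged twice this month.'"}
    ],
}
```

On the next turn only `messages` grows; the system prompt keeps applying without being repeated.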
Day 16 - Few-Shot Prompting:
- 2-4 examples is the sweet spot for few-shot prompting. Fewer may be ambiguous; more wastes context window space.
- Include reasoning in examples: Do not just show input and output. Show the reasoning process. This teaches Claude the HOW, not just the WHAT.
- Example format for code review:
  Location: src/auth/login.ts, line 42
  Issue: SQL injection vulnerability
  Severity: Critical
  Fix: Use parameterized queries instead of string concatenation
- Consistency in format: All few-shot examples must use the same format. Inconsistency confuses the model.
- Read: How to Pass CCA-F on Your First Attempt for exam strategy tips
Key exam concepts for prompt engineering:
- Explicit is always better than implicit
- Few-shot examples include reasoning, not just input/output pairs
- System prompts for persistent context, user messages for dynamic content
- Format consistency across all examples
Study exercise for Days 15-16: Take a real task (e.g., "classify customer support tickets by category and urgency") and write three versions of the prompt:
- Vague version: "Classify these tickets into categories and urgency levels."
- Explicit version: "Classify each ticket into exactly one category (billing, technical, account, shipping) and one urgency level (critical, high, medium, low). Output as JSON."
- Few-shot version: The explicit version plus 3 examples with reasoning: "This ticket mentions 'charged twice' which indicates a billing issue. The customer says 'need refund immediately' which suggests high urgency but not system-down critical."
Run all three through the Claude API and compare output consistency. The vague version will produce inconsistent formats. The explicit version will produce consistent formats but occasionally questionable classifications. The few-shot version will produce both consistent formats AND better classifications. This exercise demonstrates WHY the exam emphasizes these techniques.
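The few-shot version of this exercise can be assembled programmatically, which also makes format consistency trivial to enforce. The tickets, reasoning strings, and helper below are illustrative:

```python
# Each example pairs input, REASONING, and output in one consistent format.
EXAMPLES = [
    {"ticket": "I was charged twice and need a refund immediately.",
     "reasoning": ("'charged twice' indicates billing; 'immediately' suggests "
                   "high urgency, but not system-down critical."),
     "output": '{"category": "billing", "urgency": "high"}'},
    {"ticket": "The app crashes on login for all of our users.",
     "reasoning": "A crash blocking every user is technical and critical.",
     "output": '{"category": "technical", "urgency": "critical"}'},
]

def build_few_shot_prompt(ticket):
    """Explicit criteria first, then examples with reasoning, then the input."""
    lines = [("Classify the ticket into exactly one category "
              "(billing, technical, account, shipping) and one urgency "
              "(critical, high, medium, low). Output as JSON."), ""]
    for ex in EXAMPLES:
        lines += [f"Ticket: {ex['ticket']}",
                  f"Reasoning: {ex['reasoning']}",
                  f"Output: {ex['output']}", ""]
    lines.append(f"Ticket: {ticket}")
    return "\n".join(lines)
```

Because every example flows through the same template, the format cannot drift between examples — one of the consistency requirements from Day 16.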
Days 17-18: Structured Output & Batch API
Time: 1.5-2 hours per day
Structured output and the Batch API represent two distinct but complementary approaches to reliable, production-grade Claude integration. The exam tests both.
Day 17 - Structured Output via tool_use:
- How it works: Define a tool with a JSON schema describing the desired output structure. Claude "calls" the tool with structured data matching the schema. You extract the structured data from the tool call.
- Key distinction: JSON schemas ensure SYNTACTIC correctness (valid JSON, correct types, required fields present). They do NOT guarantee SEMANTIC correctness (the values might be wrong, hallucinated, or contextually inappropriate).
- Handling uncertainty in schemas:
  - Nullable fields: Allow `null` for fields where data might not be available. Better than forcing Claude to guess.
  - "unclear" enum value: Add "unclear" as an option in enum fields so Claude can express uncertainty instead of making something up.
  - "other" + detail field: For classification tasks, include an "other" option with a freeform detail field for edge cases that do not fit predefined categories.
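All three uncertainty techniques can appear in a single tool `input_schema`. This sketch reuses the job-posting fields from the Day 18 build exercise; the exact field names are illustrative:

```python
# Tool input_schema that lets the model express uncertainty rather than guess.
job_posting_schema = {
    "type": "object",
    "properties": {
        "job_title": {"type": "string"},
        "salary_range": {                 # nullable: the data may be absent
            "type": ["object", "null"],
            "properties": {"min": {"type": "number"},
                           "max": {"type": "number"}},
        },
        "experience_level": {             # "unclear" admits uncertainty
            "enum": ["junior", "mid", "senior", "lead", "unclear"]
        },
        "remote_policy": {                # "other" catches edge cases...
            "enum": ["remote", "hybrid", "onsite", "other"]
        },
        "remote_policy_detail": {         # ...with a freeform detail field
            "type": ["string", "null"]
        },
    },
    "required": ["job_title", "experience_level", "remote_policy"],
}
```

Without these escape hatches, a schema that demands a value for every field effectively forces the model to hallucinate when the source document is silent.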
-
Validation retry pattern:
- First attempt: Claude generates structured output
- Validation fails: Send Claude the original prompt + the failed output + specific error messages
- Second attempt: Claude corrects the output based on the error feedback
- Key rule: Retry for FORMAT errors (missing field, wrong type). Do NOT retry for ABSENT data (if the information is not in the source, retrying will not help and may cause hallucination).
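The retry pattern can be sketched as follows; `fake_claude` is a stub standing in for a real API call, and the validation rules are illustrative:

```python
def validate(output):
    """Return a list of FORMAT error messages (empty list means valid).
    Absent data (a null field) is NOT a format error and is never retried."""
    errors = []
    for field in ("category", "urgency"):
        if field not in output:
            errors.append(f"missing required field: {field}")
    if "urgency" in output and output["urgency"] not in (
            "critical", "high", "medium", "low", "unclear"):
        errors.append(f"invalid urgency value: {output['urgency']!r}")
    return errors

def extract_with_retry(call_claude, prompt, max_retries=1):
    """call_claude(prompt, errors) stands in for a real API call."""
    output = call_claude(prompt, errors=None)
    for _ in range(max_retries):
        errors = validate(output)
        if not errors:
            break
        # Retry with the failed output's specific error messages attached.
        output = call_claude(prompt, errors=errors)
    return output

# Stub: the first attempt has a format error; the retry (which would include
# the error messages in the prompt) comes back corrected.
def fake_claude(prompt, errors=None):
    if errors:
        return {"category": "billing", "urgency": "high"}
    return {"category": "billing", "urgency": "URGENT"}

result = extract_with_retry(fake_claude, "classify this ticket")
print(result)  # {'category': 'billing', 'urgency': 'high'}
```

Notice that `validate` never flags a `null` field as an error: that is the Key rule encoded in code, keeping format retries separate from absent data.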
Day 18 - Batch API:
- Core characteristics:
  - 50% cost savings compared to real-time API calls
  - 24-hour processing window (no latency SLA)
  - No multi-turn conversations (each request is independent)
  - `custom_id` field for tracking and matching results to requests
- When to use Batch API: Large-scale processing where latency is not critical: data extraction, content classification, bulk summarization, document analysis.
- When NOT to use Batch API: Interactive applications, real-time chat, time-sensitive workflows.
- Multi-instance review pattern: Use Batch API to have multiple Claude instances review the same content independently. Compare results for consensus. Critical: each instance does NOT have access to the generator's context (context isolation again).
- Build: A structured data extraction pipeline with validation. This is the most comprehensive hands-on project in the plan. Here is the suggested implementation:
- Define a tool schema for extracting structured data from job postings (or similar documents). Include fields like: job_title (string), company (string), salary_range (nullable object with min/max), experience_level (enum: junior/mid/senior/lead/unclear), remote_policy (enum: remote/hybrid/onsite/other), and if "other" is selected, a detail field for explanation.
- Implement the extraction by sending document text to Claude with the tool definition. Claude "calls" the tool with structured data.
- Add validation: Check that required fields are present, enums contain valid values, and salary ranges are logical (min less than max).
- Implement the retry pattern: When validation fails, send Claude the original document + the failed output + specific error messages. Observe how Claude corrects format errors on retry.
- Test the "do not retry absent data" rule: Send a job posting that does not mention salary. Verify that the salary_range field is null (not hallucinated). If you retry asking for salary, observe that Claude may hallucinate a number. This demonstrates why you should NOT retry for absent data.
- Process a batch: Send 5-10 documents through the pipeline and verify consistent results.
This project touches structured output, validation retry, nullable fields, uncertainty handling, and the absent-data principle, covering 5+ exam concepts in a single exercise.
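To extend the pipeline to the batch step, a sketch of how batch requests are shaped (the document texts and model name are illustrative; with the Anthropic SDK, the list would be submitted through the Message Batches endpoint):

```python
# Illustrative documents keyed by an ID of your choosing.
documents = {
    "job-001": "Senior ML engineer, remote, salary $150k-$180k...",
    "job-002": "Junior frontend developer, hybrid, Berlin office...",
}

requests = [
    {
        "custom_id": doc_id,  # lets you match each result back to its document
        "params": {
            "model": "claude-sonnet-4-5",  # illustrative model name
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": text}],
        },
    }
    for doc_id, text in documents.items()
]

# With the Anthropic SDK this list would be submitted and then polled
# within the 24-hour window, roughly:
#   batch = client.messages.batches.create(requests=requests)

# custom_id values must be unique so results map cleanly back to inputs.
assert len({r["custom_id"] for r in requests}) == len(requests)
```

Because each batch request is independent (no multi-turn conversations), the `custom_id` is your only link between a result and the document it came from. Treat it as required, not optional.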
Days 19-20: Context Management & Reliability
Time: 1.5-2 hours per day
Context management and reliability are the operational backbone of production Claude applications. These concepts appear throughout the exam, often in scenario-based questions.
Day 19 - Context Window Management:
- Lost-in-the-middle problem: Claude is better at recalling information from the beginning and end of the context window than from the middle. This has direct architectural implications:
- Place critical information (case facts, key constraints, task definitions) at the BEGINNING of the context
- Place recent conversation turns at the END
- Trim verbose tool outputs in the middle
- Periodically generate summaries and place them at the top
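The four ordering rules above can be sketched as a simple context assembler (the section labels and trimming threshold are illustrative choices, not exam-mandated values):

```python
def assemble_context(critical_facts, summary, middle_turns, recent_turns,
                     max_middle_chars=500):
    """Order context to work with, not against, lost-in-the-middle:
    critical facts and the running summary go FIRST, recent turns go LAST,
    and verbose material in the middle gets trimmed."""
    middle = [t[:max_middle_chars] for t in middle_turns]  # trim tool output
    return "\n\n".join(
        ["KEY FACTS:\n" + "\n".join(critical_facts),
         "SUMMARY SO FAR:\n" + summary]
        + middle
        + recent_turns
    )

context = assemble_context(
    critical_facts=["Customer is on the Pro plan", "Refund policy: 30 days"],
    summary="Customer reports a double charge on 2025-03-01.",
    middle_turns=["<tool output: 2,000-line invoice dump>"],
    recent_turns=["User: So can I get the refund or not?"],
)
```

The point is not this exact helper but the habit: decide deliberately where each piece of information lands in the window instead of appending everything chronologically.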
- Context degradation strategies: As conversations grow, context quality degrades. Mitigation:
- Scratchpad pattern: Have Claude maintain a running summary of key facts in a structured scratchpad. This prevents information loss as older turns scroll out of the effective window.
- Subagent delegation: Fork context-heavy tasks to subagents. Each subagent gets a fresh, focused context window instead of inheriting a bloated conversation history.
- `/compact` command: In Claude Code, compact the conversation to reclaim context space while preserving essential information.
- Crash recovery with manifest files: For long-running operations, maintain a manifest file tracking:
- Which subtasks have been completed
- Current state and progress
- Pending work items
- If a crash occurs, the agent can read the manifest and resume from the last completed checkpoint instead of starting over.
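A minimal sketch of manifest-based recovery (filenames and task names are illustrative):

```python
import json
import os
import tempfile

def load_manifest(path, all_tasks):
    """Resume from the last checkpoint if a manifest exists, else start fresh."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"completed": [], "pending": list(all_tasks)}

def checkpoint(manifest, task, path):
    """Mark a task done and persist, so a crash never loses completed work."""
    manifest["pending"].remove(task)
    manifest["completed"].append(task)
    with open(path, "w") as f:
        json.dump(manifest, f)

# Simulate a run that crashes after the first task, then a resumed run.
path = os.path.join(tempfile.mkdtemp(), "manifest.json")
tasks = ["extract", "validate", "summarize"]

m = load_manifest(path, tasks)
checkpoint(m, "extract", path)        # ...crash happens here...

resumed = load_manifest(path, tasks)  # resumes instead of starting over
print(resumed["pending"])             # ['validate', 'summarize']
```

The key design choice is writing the manifest after every completed subtask, not once at the end; the cost is one small file write per step, and the payoff is that a crash at any point loses at most one unit of work.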
Day 20 - Error Propagation & Human Review:
- Error propagation principles:
- Structured context: Always propagate errors with full context: what was attempted, what failed, what the error was, and whether it is retryable.
- Never silent failures: A silently swallowed error is worse than a crash. Every error must be surfaced, logged, or escalated.
- Cascading failure prevention: When a subagent fails, the coordinator must decide: retry, escalate, or proceed without that result. Never let one failure crash the entire pipeline.
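All three principles fit in one small dispatch wrapper; the subagent here is a failing stub, and the error-to-retryability mapping is an illustrative assumption:

```python
def run_subagent(task):
    """Stand-in subagent that fails; real code would dispatch to Claude."""
    raise TimeoutError("search API did not respond")

def dispatch(task):
    """Never swallow errors: propagate what was attempted, what failed,
    the error itself, and whether a retry could help."""
    try:
        return {"ok": True, "task": task, "result": run_subagent(task)}
    except TimeoutError as e:
        # Transient: worth retrying
        return {"ok": False, "task": task, "error": str(e), "isRetryable": True}
    except ValueError as e:
        # Bad input: retrying the same call cannot help
        return {"ok": False, "task": task, "error": str(e), "isRetryable": False}

outcome = dispatch("find pricing data")
# The coordinator can now decide: retry, escalate, or proceed without it.
if not outcome["ok"] and outcome["isRetryable"]:
    outcome = dispatch(outcome["task"])  # one bounded retry, not a crash
```

Note what never happens here: the exception is neither silently swallowed nor allowed to crash the whole pipeline. The coordinator always receives a structured result it can act on.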
- Human review strategies:
- Stratified sampling: Do not review a random sample of outputs. Stratify by category, difficulty, or confidence level. A 97% overall accuracy can mask a 40% accuracy rate on a specific category.
- Type-specific weakness detection: High aggregate accuracy does not mean the system works well for all input types. Break down accuracy by category and investigate outliers.
- When to escalate to humans: The agent should escalate when:
- A human explicitly requests it
- Policy is ambiguous (no clear rule applies)
- No progress is being made (agent is stuck in a loop)
- NOT sentiment-based escalation: The exam specifically tests that escalation should NOT be triggered by user frustration or negative sentiment. This is a common trap.
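The escalation rules above reduce to a tiny decision function; the stuck-loop threshold of 3 turns is an illustrative assumption:

```python
def should_escalate(user_requested_human, policy_is_ambiguous,
                    turns_without_progress, sentiment):
    """Escalate on the three tested conditions. The sentiment parameter is
    accepted but deliberately IGNORED: frustration alone is not a trigger."""
    if user_requested_human:
        return True
    if policy_is_ambiguous:
        return True
    if turns_without_progress >= 3:  # illustrative "stuck in a loop" threshold
        return True
    return False

print(should_escalate(False, False, 0, sentiment="angry"))  # False
print(should_escalate(False, True, 0, sentiment="happy"))   # True
```

Writing the rule as code makes the exam trap obvious: an angry user with a clear policy and steady progress does not escalate, while a calm user facing policy ambiguity does.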
Day 21: Week 3 Review & Practice
Time: 3-4 hours
Day 21 is your third checkpoint. By now you have covered all five domains at least once.
- Take a full-length timed practice test on Preporato covering all domains
- Pay special attention to Domains 4 and 5 questions, as these are the most recently studied
- Review your mistake log from Day 14 and add new entries
- Cross-reference weak areas across all three practice tests so far. Recurring mistakes indicate concepts that need deeper study in Week 4.
Target score: 72%+ overall. If any single domain is below 65%, flag it for intensive review in Week 4.
Week 3 self-assessment questions: Before moving to Week 4, you should be able to answer all of these from memory:
- What is the difference between syntactic and semantic correctness in structured output?
- When should you retry a failed extraction versus accepting null/unclear values?
- What are the three characteristics of the Batch API (cost, latency, conversation support)?
- What is the lost-in-the-middle problem and how do you mitigate it?
- What three conditions should trigger human escalation? What should NOT trigger escalation?
- How does stratified sampling improve human review compared to random sampling?
- What is the scratchpad pattern and when do you use it?
If you can answer all seven confidently, move to Week 4. If not, spend Day 22 reviewing the specific topics you are unsure about before starting practice exams.
Week 4: Practice Exams & Final Review (Days 22-30)
Week 4 is where your score improves the most. Research consistently shows that practice testing is the single most effective study technique. You will spend this week taking practice exams, reviewing mistakes, and doing targeted remediation.
Days 22-23: Full Practice Exams
Time: 2-3 hours per day
Take practice tests 4 and 5 under realistic exam conditions.
- Simulate exam conditions: Set a timer for 120 minutes. No notes, no documentation, no breaks. Close all other tabs and applications.
- Practice test 4 (Day 22): Take the full exam. When finished, score it but do NOT review answers yet.
- Practice test 5 (Day 23): Take another full exam. Score it.
- Review both tests together: Go through every wrong answer from both tests. For each wrong answer:
- Write down the correct answer and WHY it is correct
- Identify which concept you misunderstood
- Categorize the mistake: was it a knowledge gap, a misreading, or a trick question?
Mistake Categories & Remediation
| Mistake Type | Description | Remediation |
|---|---|---|
| Knowledge Gap | Did not know the concept | Go back to the relevant day in Weeks 1-3 and re-study |
| Misreading | Knew the concept but misread the question | Practice reading questions carefully. Underline key words. |
| Trick Question | Fell for a common distractor | Review the 7 anti-patterns. The exam loves testing these. |
| Time Pressure | Ran out of time on later questions | Practice time management. Allocate ~2 min per question. |
| Overthinking | Knew the answer but talked yourself out of it | Trust your first instinct. Change answers only with clear reason. |
Days 24-26: Targeted Weakness Review
Time: 1.5-2 hours per day
These three days are for deep remediation of your weakest areas. Use your mistake log to guide study.
Day 24 - Domain Remediation:
- Identify which domains you scored below 72% on across all practice tests
- Re-study the specific technical concepts from those domains
- Do not re-read everything; focus on the specific topics you keep getting wrong
- Create new flashcards for persistent trouble spots
Day 25 - Anti-Pattern Deep Dive:
The exam loves anti-pattern questions. Memorize all seven and understand why each is wrong:
- Natural language parsing for termination: Use `stop_reason`, not string matching on Claude's output
- Arbitrary iteration caps: Let the model control the loop via `stop_reason`
- Narrow decomposition: Do not split tasks into too many tiny subagents
- Assuming context inheritance: Subagents have isolated context
- Vague tool descriptions: Descriptions are routing mechanisms, not documentation
- Sentiment-based escalation: Escalate on policy ambiguity and lack of progress, not frustration
- Silent error swallowing: Always propagate errors with structured context
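Anti-patterns 1 and 2 share one correct answer, so it is worth seeing the shape of the loop they break. A hedged sketch, with canned responses standing in for real `messages.create()` calls:

```python
# Fake responses stand in for successive API calls in an agentic loop.
fake_responses = iter([
    {"stop_reason": "tool_use", "content": "calling search tool"},
    {"stop_reason": "tool_use", "content": "calling summarize tool"},
    {"stop_reason": "end_turn", "content": "Here is the final answer."},
])

steps = 0
while True:
    response = next(fake_responses)
    steps += 1
    if response["stop_reason"] == "end_turn":  # model-controlled termination
        break
    # stop_reason == "tool_use": execute the tool, append the result, loop.
    # Anti-patterns avoided: no string matching on content ("DONE",
    # "TASK COMPLETE") and no arbitrary `for _ in range(5)` iteration cap.

print(steps)  # 3
```

If an exam option terminates the loop by searching Claude's text for a magic phrase, or caps it at a hard-coded iteration count, it is wrong; the `stop_reason` field is the termination signal.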
Day 26 - Wrong Question Review:
- Go back through EVERY wrong answer from ALL practice tests (tests 1-5)
- Re-answer each question without looking at the answer first
- If you get it wrong again, that concept needs additional study
- If you get it right, confirm you understand the reasoning, not just the answer
The "teach it" technique for Day 26: For every concept you got wrong more than once, explain it out loud as if you were teaching it to a colleague who has never heard of it. Use concrete examples. If you cannot explain it clearly, you do not truly understand it. This technique (called the Feynman method) is the most effective way to identify and fill knowledge gaps.
For example, try explaining context isolation: "When a coordinator dispatches a subagent, the subagent starts with a blank conversation history. It cannot see what the coordinator discussed earlier. So if the coordinator learned something important from Subagent A, and now wants Subagent B to use that information, the coordinator must explicitly include that information in Subagent B's prompt. If it does not, Subagent B will either not have the information or might hallucinate something wrong."
If you can give an explanation that clear and specific for every key concept, you are exam-ready.
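The same explanation can be "taught" in code, which is a good Feynman-method exercise in itself. A minimal sketch of explicit context passing (the helper name and fact strings are illustrative):

```python
def build_subagent_prompt(task, facts_to_pass):
    """Subagents start with a blank conversation history, so anything they
    need from earlier steps must be written into their prompt explicitly."""
    context = "\n".join(f"- {fact}" for fact in facts_to_pass)
    return f"Known facts from earlier steps:\n{context}\n\nYour task: {task}"

# The coordinator learned this from Subagent A...
learned = ["The customer's plan is Pro",
           "The charge on 2025-03-01 was duplicated"]

# ...so it must pass it explicitly when dispatching Subagent B.
prompt_b = build_subagent_prompt("Draft the refund email.", learned)
print("Pro" in prompt_b)  # True
```

If you can both explain the concept aloud and sketch it in a few lines like this, you understand it well enough for any scenario question the exam can ask.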
Days 27-28: Final Practice Exam
Time: 2-3 hours per day
Day 27 - Practice Test 6:
- Take the hardest practice test available under full exam conditions
- This should be your most challenging test; scoring well on your hardest test builds confidence for the real exam
- Score it immediately
- Target: 75%+ overall, no domain below 70%
Day 28 - Final Review:
- Review all wrong answers from test 6
- Quick review of any remaining weak spots
- Finalize your mental models and key concepts
Not Scoring 70%+ Consistently?
If you are scoring below 70% on practice tests at this point, seriously consider delaying your exam by 1-2 weeks. Use the extra time to focus exclusively on your weakest domains. It is better to delay than to fail and have to retake. Review the CCA-F Exam Format & Structure to recalibrate your expectations.
Day 29: Light Review
Time: 1 hour maximum
Day 29 is about consolidation, not new learning. Your brain needs time to organize everything you have studied.
- Review the CCA-F Cheat Sheet only. Do not open any other study materials.
- Scan the 5 mental models:
- Programmatic enforcement vs. prompt-based guidance
- Context isolation in multi-agent systems
- stop_reason controls loop termination
- Tool descriptions as routing mechanisms
- Structured errors with isRetryable flags
- Scan the 7 anti-patterns one final time
- Do NOT study new material. If you discover a gap today, do not panic-study. You have covered enough. Trust your preparation.
- Get a good night's sleep
Day 30: Exam Day
Time: Exam duration + brief warm-up
Today is the day. You are prepared.
- Morning: Light breakfast, hydrate, brief physical activity
- 30 minutes before: Quick scan of your cheat sheet (key facts only, not deep study)
- During the exam:
- Read each question carefully. Underline key words.
- Eliminate obviously wrong answers first
- When stuck between two options, apply the programmatic vs. prompt-based mental model
- Do not spend more than 3 minutes on any single question. Flag it and move on.
- Trust your preparation
- After the exam: Regardless of result, you have built real expertise in Claude architecture. That knowledge is valuable whether or not you pass on the first attempt.
Exam Day Strategy
The most common exam day mistake is changing answers. Research shows your first instinct is usually correct. Only change an answer if you have a specific, articulable reason, not just a vague feeling. Read our How to Pass CCA-F on Your First Attempt guide for more exam strategies.
Adjusting the Plan
Not everyone fits the 30-day timeline. Here is how to adapt this plan to your situation.
For Experienced Claude Developers (Compress to 2 Weeks)
If you build production Claude applications daily and answered "yes" to all four prerequisite questions, you can compress this plan significantly:
- Days 1-3: Skim Weeks 1-2 material. Focus only on exam-specific terminology and anti-patterns you might not use daily.
- Days 4-5: Take 2 full practice tests. Identify gaps.
- Days 6-10: Targeted study on gaps only. Skip topics where you scored 85%+.
- Days 11-12: Take 2 more practice tests under exam conditions.
- Days 13-14: Light review and exam day.
The risk of compressing: you might have blind spots in areas you do not use daily (Batch API, MCP Resources, specific CLAUDE.md hierarchy rules). Practice tests will reveal these.
Even experienced developers should not skip practice tests. The exam uses specific terminology and tests specific anti-patterns that may not come up in daily work. A developer who builds excellent Claude applications might still miss questions about the CLAUDE.md hierarchy or the Batch API's 24-hour processing window simply because those are not part of their daily workflow.
For Complete Beginners (Expand to 8 Weeks)
If you are new to Claude and answered "no" to most prerequisite questions, you need more time:
- Weeks 1-2 (Foundation): Complete ALL Anthropic Academy courses. Build 5+ small Claude API projects. Install and use Claude Code daily.
- Weeks 3-4 (Foundation): Study MCP documentation. Build an MCP server. Set up Claude Code for a real project. Configure CLAUDE.md hierarchy.
- Weeks 5-8: Follow this 30-day plan as written (it maps to your Weeks 5-8).
The extra four weeks ensure you have the hands-on experience the exam assumes. Do not rush foundation work just to save time. A strong foundation makes the exam-specific material much easier.
For Part-Time Study (Expand to 6 Weeks)
If you can only study 30-45 minutes per day instead of 1-2 hours:
- Weeks 1-2: Cover Week 1 material (Agentic Architecture)
- Weeks 3-4: Cover Week 2 material (Tool Design, MCP, Claude Code)
- Week 5: Cover Week 3 material (Prompt Engineering, Context Management)
- Week 6: Cover Week 4 material (Practice Exams, Review)
Each topic gets more calendar time but the same total hours. The tradeoff: more time between related topics means more review is needed to maintain retention. Use flashcards daily to combat forgetting.
Part-time study tips:
- Use spaced repetition: Review flashcards from previous weeks for 10 minutes at the start of each study session. This prevents forgetting Week 1 material while studying Week 5 content.
- Commute time: If you commute, use audio recordings of key concepts or review flashcards on your phone. Even 15 minutes of daily review adds up to 10+ hours over 6 weeks.
- Weekend deep dives: Save hands-on build projects for weekends when you have longer blocks of time. Weekday sessions can focus on reading, flashcard review, and short practice question sets.
- Accountability: Tell someone about your exam date. External accountability significantly increases completion rates for self-study plans.
Resources by Week
Key Resources by Week
| Week | Primary Resources | Practice Activities | Read on Preporato |
|---|---|---|---|
| Week 1 | Anthropic Academy: Claude 101, AI Fluency, Agent Skills | Build single-agent loop, Build multi-agent coordinator | CCA-F Exam Domains Breakdown, What is CCA-F? |
| Week 2 | Anthropic Academy: Intro to MCP, Claude Code Training | Configure MCP server, Set up CLAUDE.md hierarchy | Complete CCA-F Guide, Exam Format & Structure |
| Week 3 | Claude API Documentation, Batch API Documentation | Build structured extraction pipeline | How to Pass CCA-F, Cheat Sheet |
| Week 4 | Preporato Practice Tests (4-6 full exams) | Full timed simulations, Targeted weakness review | All articles for weak domains |
How to use resources effectively:
- Anthropic Academy courses are your primary study material for Weeks 1 and 2. Complete them in order, take notes, and do the exercises. Do not just watch the videos; pause and build along with them.
- Practice tests on Preporato are your primary study tool for Week 4 and your progress measurement tool throughout. Take at minimum 6 full-length tests over the 30 days. More is better.
- Official documentation is your reference material. Do not try to read it cover to cover. Instead, use it to look up specific concepts when practice test questions reveal gaps.
- The CCA-F Cheat Sheet is your daily companion. Review it at the start of each study session to warm up and at the end to consolidate. By Week 4, you should have the entire cheat sheet memorized.
Additional resources used throughout all four weeks:
- Preporato CCA-F Practice Tests - Take at least 6 full-length practice tests during the 30 days
- CCA-F Cheat Sheet - Your daily quick-reference companion
- Claude API Documentation - Official reference for API-specific questions
- MCP Documentation - Official reference for MCP concepts
- Claude Code Documentation - Official reference for Claude Code configuration
Frequently Asked Questions
Conclusion
The CCA-F exam covers a lot of ground, but it is a passable exam with the right preparation strategy. This 30-day plan gives you a structured path through all five domains, with hands-on projects to cement understanding and practice tests to measure progress.
The most important takeaways from this plan:
- Study the mental models, not just the facts. Programmatic enforcement vs. prompt-based guidance is a lens that helps you answer dozens of questions, not just one.
- Build things. Reading about agentic loops is not the same as building one. The hands-on projects are not optional.
- Take practice tests seriously. Simulate exam conditions. Review every wrong answer. Track your mistakes. Week 4's practice exams are where your score improves the most.
- Know the anti-patterns cold. The exam tests what NOT to do as aggressively as it tests what TO do.
- Trust the process. If you follow this plan consistently, you will be prepared by Day 30.
Start your preparation today. Take the diagnostic practice test on Preporato to establish your baseline, then begin Day 1 tomorrow.
Here is what your study journey should produce by Day 30:
- A working agentic loop with proper stop_reason handling and conversation history management
- A multi-agent coordinator with context isolation and explicit context passing
- A fully configured project with CLAUDE.md at all three levels, MCP configuration, and Claude Code skills
- A structured data extraction pipeline with validation retry and uncertainty handling
- 6+ practice test scores showing consistent improvement from your Day 1 baseline
- A mistake log documenting every wrong answer, its root cause, and the correct concept
These artifacts are not just exam preparation. They are production-ready patterns you can apply immediately in your work. The CCA-F is designed to certify practical expertise, and this study plan is designed to build it.
For a complete exam overview, read our Claude Certified Architect Complete Guide. For quick reference during study sessions, bookmark the CCA-F Cheat Sheet. For exam strategy and test-taking tips, see How to Pass CCA-F on Your First Attempt.
Ready to Pass the CCA-F Exam?
Join thousands who passed with Preporato practice tests
