Free Claude Certified Architect - Foundations (CCA-F) Practice Questions
Test your knowledge with 20 free exam-style questions
CCA-F Exam Facts
- Questions: 65
- Passing score: 720/1000
- Duration: 130 min
Frequently Asked Questions
Why try these free sample questions?
These 20 sample questions let you experience the exact format, difficulty, and question styles you'll encounter on exam day. Use them to identify knowledge gaps and decide whether our full practice exam package is right for your preparation strategy.
How realistic are the questions?
Our questions mirror the actual exam format, difficulty level, and topic distribution. Each question includes a detailed explanation to help you understand the concepts.
What does the full package include?
The full package includes 6 complete practice exams with 390+ unique questions, detailed explanations, progress tracking, and lifetime access.
Are the questions up to date?
Yes! Our CCA-F practice questions are regularly updated to reflect the latest exam objectives and question formats. All questions align with the current 2026 exam blueprint.
Sample CCA-F Practice Questions
Browse all 20 free Claude Certified Architect - Foundations practice questions below.
You are building a customer support resolution agent that autonomously handles ticket classification, response generation, and escalation. The agent uses Claude with tool calling in an agentic loop. How should the loop determine when to stop executing?
- Continue the loop when stop_reason is 'tool_use' and terminate when stop_reason is 'end_turn'
- Set a maximum iteration count of 5 and terminate regardless of task completion status
- Use a sentiment analysis tool to detect when the customer seems satisfied and terminate the loop at that point
- Parse Claude's natural language output for phrases like 'I'm done' or 'task complete' to signal loop termination
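The stop_reason-driven loop described in the first option can be sketched as follows. This is a minimal illustration, not a real integration: `FakeClient` and the scripted tool names stand in for an actual Messages API client and real tool execution.

```python
# Minimal sketch of a stop_reason-driven agentic loop.
# FakeClient stands in for a real Messages API client; in production you
# would call the API and execute the tools the model requests.

class FakeResponse:
    def __init__(self, stop_reason, content):
        self.stop_reason = stop_reason
        self.content = content

class FakeClient:
    """Returns two tool_use turns, then a final end_turn answer."""
    def __init__(self):
        self._script = [
            FakeResponse("tool_use", "classify_ticket"),
            FakeResponse("tool_use", "draft_response"),
            FakeResponse("end_turn", "Ticket resolved."),
        ]
    def create(self, messages):
        return self._script.pop(0)

def run_agent_loop(client, messages):
    turns = 0
    while True:
        response = client.create(messages)
        turns += 1
        if response.stop_reason == "tool_use":
            # Execute the requested tool and feed the result back.
            messages.append({"role": "user", "content": f"result of {response.content}"})
            continue
        # 'end_turn' means the model considers the task finished.
        return response.content, turns

final, turns = run_agent_loop(FakeClient(), [{"role": "user", "content": "Ticket #42"}])
print(final, turns)  # → Ticket resolved. 3
```

The key point is that termination is driven by the API's structured signal, not by string-matching the model's prose.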
Your multi-agent research system uses a hub-and-spoke architecture where a coordinator agent dispatches tasks to specialized subagents (market research, patent search, competitor analysis). A subagent needs context about the overall research objective. What is the correct way to provide this context?
- Have each subagent call the coordinator via a tool to request any context it needs during execution
- Store the research objective in a shared database that all subagents query at the start of their execution
- Rely on automatic context inheritance — subagents receive the coordinator's full conversation history by default
- Pass complete findings and the research objective explicitly in the prompt sent to each subagent
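Because subagents start with isolated, empty context, explicit passing (the last option) might look like the sketch below; the field layout and wording of the prompt template are illustrative.

```python
# Sketch of explicit context passing in a hub-and-spoke dispatch.
# The coordinator embeds the research objective and relevant findings
# directly in each subagent's prompt; nothing is inherited automatically.

def build_subagent_prompt(objective: str, findings: list[str], task: str) -> str:
    findings_block = "\n".join(f"- {f}" for f in findings) or "- (none yet)"
    return (
        f"Overall research objective:\n{objective}\n\n"
        f"Findings so far:\n{findings_block}\n\n"
        f"Your task:\n{task}"
    )

prompt = build_subagent_prompt(
    objective="Assess the competitive landscape for product X",
    findings=["Competitor A filed patent US-1234 in 2023"],
    task="Search for related patents filed since 2020",
)
```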
You are designing a customer support agent that must comply with your company's refund policy: refunds over $500 require manager approval and cannot be issued automatically. You need to enforce this rule reliably. What is the BEST implementation approach?
- Implement a programmatic check in the refund tool that rejects amounts over $500 and returns an escalation instruction
- Have Claude self-evaluate each refund decision using a confidence score and escalate when confidence is below 80%
- Train the model with fine-tuning examples that demonstrate refusing refunds over $500
- Add a system prompt instruction: 'Never issue refunds over $500 without manager approval'
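The programmatic check from the first option can be sketched like this; the function name, return shape, and escalation message are illustrative, but the principle is that the tool itself enforces the limit regardless of what the model decides.

```python
# Sketch of enforcing the refund policy inside the tool implementation,
# so the $500 rule holds deterministically, not via prompt instructions.

REFUND_LIMIT = 500

def process_refund(amount: float) -> dict:
    if amount > REFUND_LIMIT:
        # Code-level rejection with an escalation instruction for the agent.
        return {
            "status": "escalated",
            "detail": f"Refunds over ${REFUND_LIMIT} require manager approval.",
        }
    return {"status": "refunded", "amount": amount}

assert process_refund(750)["status"] == "escalated"
assert process_refund(120)["status"] == "refunded"
```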
Your multi-agent research system coordinator needs to dispatch three independent research tasks simultaneously: patent search, market analysis, and competitor review. Each task goes to a specialized subagent. What is the MOST efficient way to structure this dispatch?
- Send each task sequentially, waiting for one subagent to complete before dispatching the next
- Break each research task into multiple micro-tasks (e.g., 15 sub-tasks total) for maximum granularity
- Have the coordinator emit all three Task tool calls in a single response for parallel execution
- Create a shared message queue where subagents pick up tasks at their own pace
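On the orchestration side, executing the three Task calls emitted in a single response might look like this sketch; `run_subagent` is a stand-in for actually dispatching a subagent, and a thread pool is just one way to run independent calls concurrently.

```python
# Sketch of concurrently executing independent subagent tasks that the
# coordinator emitted as multiple Task tool calls in one response.

from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    return f"{task}: done"  # placeholder for a real subagent invocation

tasks = ["patent search", "market analysis", "competitor review"]

with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_subagent, tasks))  # preserves task order
```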
Your customer support agent uses Claude's self-reported confidence scores to decide when to escalate to a human agent. During a production review, you discover that the agent occasionally processes complex policy edge-case tickets without escalating, despite producing incorrect resolutions. What is the MOST likely root cause?
- Self-reported confidence scores are poorly calibrated and should not be used as the primary escalation mechanism
- The system prompt needs more explicit instructions about when to report low confidence
- The model needs to be fine-tuned on edge-case examples to improve its confidence calibration
- The confidence threshold is set too high and should be lowered from 90% to 70%
You are building a customer support resolution agent that handles billing inquiries. The agent uses tool calls to look up account details and process refunds. During testing, you notice the agent sometimes stops mid-conversation without completing the resolution. What is the MOST likely cause of this behavior?
- The agentic loop terminates on stop_reason='end_turn' instead of continuing only when stop_reason='tool_use'
- The agent's temperature setting is too high, causing unpredictable behavior
- The system prompt is too long, causing the model to lose track of instructions
- The API rate limit is being hit, causing the agent to silently fail
You are designing a multi-agent research system where a coordinator dispatches tasks to specialized subagents for market analysis, competitor research, and financial data collection. A junior engineer proposes that subagents should automatically inherit the coordinator's full conversation context for efficiency. What is the primary problem with this approach?
- Subagents in a hub-and-spoke architecture have isolated context by design; there is no automatic context inheritance, so complete findings must be explicitly passed in prompts
- Sharing context between agents creates a security vulnerability that exposes sensitive data
- It would exceed the API's maximum token limit for every subagent call
- The coordinator would need to wait for all subagents to finish before starting new ones
Your customer support agent handles refund requests. Company policy requires that refunds over $500 must always be escalated to a human reviewer. A developer implements this rule as a strongly worded instruction in the system prompt: 'You MUST ALWAYS escalate refunds over $500 to a human reviewer. Never process these directly.' What is the BEST assessment of this implementation?
- The developer should lower the model temperature to 0 to ensure deterministic compliance with the escalation rule
- The instruction should include few-shot examples of escalation scenarios to improve compliance
- When errors have financial consequences, prompts are not sufficient; the refund amount check should be enforced programmatically in the tool implementation or orchestration layer
- This is sufficient because Claude reliably follows strongly worded instructions in system prompts
You are configuring MCP servers for your development team's Claude Code setup. The team uses a shared GitHub integration and a project-specific database tool. A developer asks where to place the MCP configuration so all team members have access to these tools. What is the correct configuration approach?
- Place the configuration in each developer's ~/.claude.json file and share it via a wiki page
- Configure the MCP servers in the CI/CD pipeline environment so they are available during automated runs only
- Place the configuration in the project's .mcp.json file, which is version-controlled and shared across the team, using environment variable references like ${GITHUB_TOKEN} for secrets
- Add the MCP server URLs directly to the project's CLAUDE.md file as instructions for Claude to connect to them
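A project-scoped `.mcp.json` along the lines of the correct option might look roughly like this; the server name, command, and args are illustrative, and `${GITHUB_TOKEN}` is resolved from each developer's environment rather than committed to the repository.

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  }
}
```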
Your team uses Claude Code for a large monorepo with both a Python backend and a React frontend. You want Claude to follow different coding standards depending on which part of the codebase is being edited. Backend files should use specific Python conventions, while frontend files should follow your React style guide. What is the BEST way to configure this?
- Create separate CLAUDE.md files in the backend/ and frontend/ subdirectories, each containing the relevant coding standards for that part of the codebase
- Put all coding standards in the root CLAUDE.md file with clear section headers for backend and frontend
- Use environment variables to switch between backend and frontend configuration profiles
- Create a custom MCP server that serves different coding standards based on the file path being edited
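The per-directory layout from the first option would look roughly like this (directory names are illustrative); the intent is that guidance in a subdirectory's CLAUDE.md applies when working on files in that part of the tree.

```
monorepo/
├── CLAUDE.md           # repo-wide guidance
├── backend/
│   └── CLAUDE.md       # Python conventions for backend code
└── frontend/
    └── CLAUDE.md       # React style guide for frontend code
```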
A customer support resolution agent uses an agentic loop to handle incoming tickets. The current implementation checks if the model's response contains the phrase 'case resolved' to decide whether to continue looping or terminate. What is the BEST way to fix this loop termination logic?
- Add additional keyword patterns like 'issue fixed' and 'problem solved' to the string matching logic to cover more termination phrases
- Terminate the loop when stop_reason is 'end_turn' and continue looping when stop_reason is 'tool_use', using the API's built-in structured signals
- Set a maximum iteration count of 10 and terminate the loop when the count is reached, regardless of the model's output
- Instruct the model in the system prompt to always end its final message with a special XML tag like </case_complete> and parse for that tag
A multi-agent research system uses a coordinator that dispatches tasks to three specialized subagents: a web researcher, a database analyst, and a summarizer. The team discovers that the summarizer subagent is producing summaries that reference data it was never given, seemingly drawing from the web researcher's findings. What is the MOST likely cause of this behavior?
- The subagents share a common message history by default in the multi-agent framework, allowing cross-agent context leakage
- The coordinator is inadvertently including the web researcher's results in the context when dispatching tasks to the summarizer
- The summarizer's model has been fine-tuned on outputs from previous runs of the web researcher, creating implicit knowledge transfer
- The summarizer is using its general training knowledge to fill in gaps, and the overlap with the web researcher's findings is coincidental
A development team wants their multi-agent research system coordinator to dispatch tasks to three subagents simultaneously rather than sequentially to reduce total latency. What is the correct approach to enable parallel subagent execution?
- Configure a thread pool in the system prompt and instruct the coordinator to use asynchronous execution
- Deploy each subagent as a separate microservice with its own API endpoint and have the coordinator make HTTP calls in parallel
- Have the coordinator emit multiple Task tool calls in a single response, which the orchestration framework executes concurrently
- Create a pipeline where each subagent's output feeds into the next, using streaming to overlap computation
A customer support agent is designed to escalate cases to a human when the customer is likely dissatisfied. The team implements this by having the agent assess customer sentiment and escalate whenever it detects negative sentiment or frustration. What is the PRIMARY problem with this escalation strategy?
- The latency of performing sentiment analysis on each message will slow down the agent's response time and degrade the customer experience
- Customer frustration does not correlate with case complexity, so sentiment-based escalation floods human agents with cases the AI could resolve while missing genuinely complex cases from calm customers
- Negative sentiment detection will trigger escalation too late because by the time a customer is frustrated, the experience is already damaged beyond recovery
- Sentiment analysis requires a specialized NLP model and cannot be done reliably by a general-purpose language model like Claude
A team building a customer support agent considers two approaches for ensuring the agent never offers refunds above $50 without manager approval: (A) adding a strong instruction in the system prompt saying 'Never approve refunds above $50 without manager approval', or (B) implementing a programmatic check in the orchestration layer that intercepts refund tool calls and routes those above $50 to a manager queue. Which approach is BEST for production deployment?
- Approach A is sufficient because Claude reliably follows system prompt instructions, making the programmatic check unnecessary overhead
- Use both approaches together but make Approach A the primary control and Approach B as a secondary logging mechanism
- Approach A combined with few-shot examples in the system prompt showing the model correctly refusing high refunds, which makes the behavior virtually guaranteed
- Approach B, the programmatic check, because system behavior guarantees require programmatic enforcement rather than prompt-based guidance alone
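Approach B can be sketched as an interceptor sitting between the model's tool calls and the tools themselves; the names (`manager_queue`, `execute_tool`, `issue_refund`) are illustrative, and a real system would persist the queue rather than use an in-memory list.

```python
# Sketch of an orchestration-layer interceptor that inspects refund tool
# calls before execution and reroutes large ones to a manager queue.

REFUND_APPROVAL_LIMIT = 50
manager_queue = []

def execute_tool(name: str, args: dict) -> dict:
    return {"tool": name, "status": "executed", **args}

def dispatch_tool_call(name: str, args: dict) -> dict:
    if name == "issue_refund" and args.get("amount", 0) > REFUND_APPROVAL_LIMIT:
        manager_queue.append(args)  # route to human review instead of executing
        return {"status": "pending_manager_approval"}
    return execute_tool(name, args)

assert dispatch_tool_call("issue_refund", {"amount": 200})["status"] == "pending_manager_approval"
assert dispatch_tool_call("issue_refund", {"amount": 20})["status"] == "executed"
```

The model never gets the opportunity to approve a large refund directly, which is what "programmatic enforcement" means in practice.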
A financial services company is building a customer support agent that handles account inquiries, transaction disputes, and loan applications. When the agent encounters an ambiguous refund policy that could be interpreted multiple ways, what is the BEST escalation strategy?
- Instruct the agent to pick the interpretation most favorable to the customer and proceed without escalation
- Have the agent ask the customer a series of clarifying questions to determine which policy interpretation applies
- Escalate to a human agent whenever the applicable policy has multiple valid interpretations, providing the customer context and relevant policy sections
- Have the agent analyze customer sentiment and escalate only when negative sentiment exceeds a threshold score
A development team is using Claude Code to generate a microservices application. They want Claude Code to produce API endpoint handlers that always validate input against a JSON schema before processing. The team notices that sometimes Claude generates handlers without validation. What is the MOST effective way to enforce this requirement?
- Use programmatic enforcement by wrapping generated handlers in a validation middleware layer that rejects unvalidated requests at runtime
- Set an iteration cap of 3 attempts per handler and pick the best version that includes validation
- Add few-shot examples showing handlers with validation and instruct Claude to follow the same pattern
- Add a note in the prompt saying 'Please remember to include input validation in all handlers'
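The runtime-enforcement option can be sketched as a wrapper around generated handlers; a production system would use a full JSON Schema validator, so the required-key check and the handler below are simplified stand-ins.

```python
# Sketch of a validation middleware that rejects requests missing required
# fields before any generated handler runs, regardless of how the handler
# itself was written.

def with_validation(required_fields):
    def decorator(handler):
        def wrapper(request: dict):
            missing = [f for f in required_fields if f not in request]
            if missing:
                return {"status": 400, "error": f"missing fields: {missing}"}
            return handler(request)
        return wrapper
    return decorator

@with_validation(["user_id", "amount"])
def create_payment(request: dict) -> dict:
    return {"status": 200, "payment_for": request["user_id"]}

assert create_payment({"user_id": "u1", "amount": 10})["status"] == 200
assert create_payment({"user_id": "u1"})["status"] == 400
```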
A research team has built a multi-agent system where a coordinator agent delegates tasks to three specialist subagents: a literature search agent, a data analysis agent, and a report writing agent. The literature search agent finds relevant papers, but the report writing agent produces a report that doesn't reference any of the found papers. What is the MOST likely cause?
- The coordinator did not explicitly pass the literature search agent's findings to the report writing agent's prompt
- The report writing agent's system prompt needs to explicitly instruct it to include academic citations
- The literature search agent and report writing agent are running in parallel, causing a race condition
- The report writing agent's language model has a token limit that prevents it from processing the paper references
A development team wants every developer using Claude Code in their shared repository to automatically receive the same project-specific instructions and conventions. Where should these instructions be stored?
- In environment variables set in the CI/CD pipeline configuration
- In a README.md file at the project root that Claude Code will automatically detect
- In each developer's ~/.claude/CLAUDE.md file with identical content
- In the project's .claude/CLAUDE.md file committed to version control
An engineering team uses Claude to extract structured data from legal contracts. They define a JSON schema with fields for parties, effective_date, termination_clause, and governing_law. The extraction works well for standard contracts, but for contracts where the governing law is not explicitly stated, Claude sometimes guesses a jurisdiction based on the parties' locations. What schema modification BEST handles this edge case?
- Make the governing_law field nullable so Claude can return null when the information is not explicitly stated in the contract
- Increase the prompt's emphasis on accuracy by adding 'Only extract information explicitly stated in the document'
- Add more few-shot examples showing contracts with explicit governing law clauses
- Add a post-processing step that flags any governing_law value not found verbatim in the source document
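The nullable-field fix from the first option can be expressed in JSON Schema by allowing both `"string"` and `"null"` types; the schema below is an illustrative sketch of the fields named in the question.

```python
# Sketch of a contract-extraction schema where governing_law is nullable,
# giving the model explicit permission to return null rather than guess.

contract_schema = {
    "type": "object",
    "properties": {
        "parties": {"type": "array", "items": {"type": "string"}},
        "effective_date": {"type": "string"},
        "termination_clause": {"type": "string"},
        # Nullable: "not stated in the document" is a valid answer.
        "governing_law": {"type": ["string", "null"]},
    },
    "required": ["parties", "effective_date", "termination_clause", "governing_law"],
}
```

Keeping the field in `required` while making its type nullable forces the model to address governing law explicitly, either with a stated jurisdiction or with null.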
