Free AWS Certified Generative AI Developer - Professional (AIP-C01) Practice Questions
Test your knowledge with 20 free exam-style questions
AIP-C01 Exam Facts
- Questions: 65
- Passing score: 720/1000
- Duration: 130 min
Frequently Asked Questions
These 20 sample questions let you experience the exact format, difficulty, and question styles you'll encounter on exam day. Use them to identify knowledge gaps and decide if our full practice exam package is right for your preparation strategy.
Our questions mirror the actual exam format, difficulty level, and topic distribution. Each question includes detailed explanations to help you understand the concepts.
The full package includes 7 complete practice exams with 455+ unique questions, detailed explanations, progress tracking, and lifetime access.
Yes! Our AIP-C01 practice questions are regularly updated to reflect the latest exam objectives and question formats. All questions align with the current 2026 exam blueprint.
Sample AIP-C01 Practice Questions
Browse all 20 free AWS Certified Generative AI Developer - Professional practice questions below.
A financial services company needs to build a document analysis application that extracts key information from loan applications. The solution must use a foundation model from Amazon Bedrock and comply with data residency requirements that mandate all data remains in the eu-central-1 region. Which approach ensures compliance while optimizing for cost?
- Use Amazon Bedrock in eu-central-1 with the Anthropic Claude 3 Sonnet model and enable model invocation logging to an S3 bucket in the same region.
- Deploy a self-hosted open-source LLM on EC2 instances in eu-central-1 using SageMaker endpoints for inference.
- Implement Amazon Textract in eu-central-1 for document processing instead of using foundation models.
- Use Amazon Bedrock in us-east-1 with cross-region replication to eu-central-1 for compliance reporting.
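Keeping every call inside eu-central-1 comes down to pinning both the client region and the model ID. A minimal sketch of the compliant option, assuming the Anthropic messages payload format for Claude 3 models (the exact model identifier available in your account may differ):

```python
import json

# Region and model ID are pinned so no request leaves eu-central-1.
# The model ID below is illustrative; confirm the identifier enabled
# in your account via the Bedrock console.
REGION = "eu-central-1"
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_invoke_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build the kwargs for a bedrock-runtime invoke_model call using
    the Anthropic messages payload that Claude 3 models expect."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"modelId": MODEL_ID, "body": json.dumps(body)}

# At runtime the client itself is created against the pinned region:
#   client = boto3.client("bedrock-runtime", region_name=REGION)
#   response = client.invoke_model(**build_invoke_request("Summarize ..."))
```

Model invocation logging to an S3 bucket in the same region is then enabled at the account level, keeping the audit trail resident as well.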
A development team is building a RAG (Retrieval-Augmented Generation) application using Amazon Bedrock Knowledge Bases with OpenSearch Serverless as the vector store. They need to index 500,000 technical documents with an average size of 50KB each. The application requires sub-second query response times and must support metadata filtering by document type, date, and department. What is the MOST effective chunking strategy for this use case?
- Store entire documents as single chunks without splitting to preserve complete context.
- Use sentence-level chunking with each sentence as a separate chunk to maximize retrieval precision.
- Use fixed-size chunking with 512 tokens per chunk and 50-token overlap, storing metadata with each chunk.
- Use semantic chunking based on paragraph boundaries with variable chunk sizes up to 2048 tokens.
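The fixed-size option can be sketched in a few lines. This toy version operates on a pre-tokenized sequence (real pipelines would use the embedding model's tokenizer); each chunk starts `size - overlap` tokens after the previous one:

```python
def chunk_fixed(tokens: list[str], size: int = 512, overlap: int = 50) -> list[list[str]]:
    """Split a token sequence into fixed-size chunks with overlap.
    The overlap repeats the tail of each chunk at the head of the next
    so that sentences straddling a boundary stay retrievable."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

# Metadata (document type, date, department) would be attached per chunk,
# e.g. {"text": " ".join(chunk), "metadata": {"department": "risk"}}.
```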
An e-commerce company is implementing a chatbot using Amazon Bedrock with the Claude 3.5 Sonnet model. The chatbot must access real-time inventory data from a DynamoDB table and current promotional pricing from an API Gateway endpoint. The solution must minimize latency and support concurrent user sessions. Which implementation approach is MOST appropriate?
- Configure Bedrock Agents with action groups that invoke Lambda functions to query DynamoDB and API Gateway.
- Pre-fetch all inventory and pricing data into the prompt context for each user session using Bedrock Knowledge Bases.
- Use prompt engineering to include DynamoDB query syntax in the system prompt and parse SQL-like queries from the model's response.
- Implement a custom orchestration layer using Step Functions to coordinate Bedrock API calls with data retrieval from DynamoDB and API Gateway.
A healthcare provider is developing a clinical decision support system using Amazon Bedrock. The system must prevent the model from providing medical diagnoses, medication dosage recommendations, or contradicting established clinical guidelines. The solution must also log all blocked responses for compliance auditing. Which approach provides the MOST comprehensive protection?
- Fine-tune the Bedrock model on examples of appropriate clinical decision support responses to teach it proper boundaries.
- Add detailed instructions in the system prompt explicitly telling the model not to provide diagnoses or medication recommendations.
- Implement Bedrock Guardrails with content filters, denied topics for medical diagnoses and medications, and word filters for clinical terminology, with all configurations logged to CloudWatch Logs.
- Deploy a custom Lambda function that uses Amazon Comprehend Medical to detect medical entities in responses and blocks them before returning to users.
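A hedged sketch of the guardrail configuration the correct option describes. The kwargs below follow the shape of the Bedrock `create_guardrail` API as documented for boto3 (field names like `topicPolicyConfig` should be verified against your SDK version); the topic names and word list are illustrative:

```python
def build_guardrail_config() -> dict:
    """Assemble kwargs for bedrock.create_guardrail covering denied
    topics and word filters; guardrail interventions can then be
    logged to CloudWatch Logs for the compliance audit trail."""
    return {
        "name": "clinical-decision-support-guardrail",
        "description": "Blocks diagnoses and dosage recommendations",
        "topicPolicyConfig": {
            "topicsConfig": [
                {
                    "name": "medical-diagnosis",
                    "definition": "Providing a medical diagnosis for a patient",
                    "type": "DENY",
                },
                {
                    "name": "medication-dosage",
                    "definition": "Recommending medication dosages",
                    "type": "DENY",
                },
            ]
        },
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "dosage"}, {"text": "prescribe"}]
        },
        "blockedInputMessaging": "This request is outside the scope of this tool.",
        "blockedOutputsMessaging": "The response was blocked by policy.",
    }

# bedrock = boto3.client("bedrock")
# bedrock.create_guardrail(**build_guardrail_config())
```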
A global enterprise is deploying a multi-lingual customer service application using Amazon Bedrock. The application must support 20 languages, process 10,000 requests per minute during peak hours, and maintain sub-2-second response times. Cost optimization is a priority. Which architecture BEST meets these requirements?
- Implement request queuing with SQS and batch process requests every 30 seconds to reduce the number of Bedrock API calls.
- Deploy 20 separate SageMaker endpoints, one for each language, with model-specific fine-tuning to optimize language performance.
- Use exclusively On-Demand Bedrock invocations with aggressive prompt caching to reduce token costs.
- Use Provisioned Throughput for the base load of 5,000 RPM and On-Demand mode for burst traffic, with CloudWatch metrics triggering Auto Scaling for Provisioned Throughput adjustments.
A retail company is building a product recommendation system using Amazon Bedrock. The system needs to analyze customer purchase history stored in Amazon DynamoDB and product catalogs in Amazon S3. The solution must generate personalized recommendations in real-time with sub-second latency while maintaining cost efficiency. Which architecture BEST meets these requirements?
- Use Amazon Bedrock Knowledge Bases to index the product catalog from S3, implement a Lambda function to retrieve customer history from DynamoDB, and use Bedrock Agents with action groups to orchestrate the recommendation generation with Claude 3 Haiku for fast inference.
- Load all product catalogs and customer history into the Bedrock prompt context for each request, using Claude 3 Opus for maximum recommendation quality.
- Deploy a custom fine-tuned model on SageMaker with real-time endpoints, caching all product embeddings in ElastiCache for fast retrieval.
- Use Amazon Personalize for recommendations instead of foundation models, as it's specifically designed for this use case.
A legal firm is implementing a document analysis system using Amazon Bedrock Knowledge Bases. The system must process contracts ranging from 10 pages to 500 pages, extract key clauses, and answer questions about specific terms. Documents contain tables, numbered lists, and cross-references between sections. Which chunking strategy provides the BEST balance between retrieval accuracy and context preservation?
- Chunk documents by individual sentences to maximize retrieval precision for specific legal terms and definitions.
- Store entire documents as single chunks to preserve complete context, using document-level embeddings for retrieval.
- Implement hierarchical chunking that preserves document structure with parent-child relationships, using semantic chunking for paragraphs within sections while maintaining section headers as metadata for each chunk.
- Use fixed-size chunking with 1024 tokens and 200-token overlap to ensure consistent chunk sizes across all document types.
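The hierarchical approach can be illustrated with a toy parser. This sketch treats lines starting with `#` as section headers; each paragraph becomes a child chunk carrying its section title as metadata plus a `parent_id` linking back to the enclosing section (real contracts would need a proper document parser, and Knowledge Bases offers hierarchical chunking natively):

```python
def hierarchical_chunks(document: str) -> list[dict]:
    """Toy hierarchical chunker: child chunks are paragraphs, each
    annotated with its section header and a parent_id so retrieval
    can expand a hit back to its parent section for full context."""
    chunks = []
    section = "preamble"
    parent_id = 0
    for line in document.splitlines():
        if line.startswith("#"):
            section = line.lstrip("# ").strip()
            parent_id += 1
        elif line.strip():
            chunks.append({
                "text": line.strip(),
                "metadata": {"section": section, "parent_id": parent_id},
            })
    return chunks
```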
A software development team is building an AI coding assistant using Amazon Bedrock. The assistant must support multiple programming languages, understand project context from a Git repository, and provide code suggestions that follow the team's coding standards stored in markdown files. Which combination of services and techniques provides the MOST effective solution? (Select TWO)
- Store all repository code in the system prompt for each request to ensure the model has complete project context.
- Implement a Bedrock Agent with action groups that can execute Git commands through Lambda functions to analyze repository structure, file history, and recent changes for context-aware suggestions.
- Configure Amazon Bedrock Knowledge Bases with an S3 data source containing the Git repository code and coding standards documents, using appropriate metadata filters for language and file type.
- Use Amazon Q Developer directly instead of building a custom solution, as it's specifically designed for code assistance with IDE integration.
- Fine-tune a foundation model on the team's codebase to create a specialized code generation model that inherently understands their patterns.
A healthcare research organization is using Amazon Bedrock to analyze medical literature and generate research summaries. The application must ensure that generated content is grounded in the source documents and does not include fabricated citations or statistics. The solution must also provide traceability to original sources. Which approach provides the MOST reliable grounding and citation accuracy?
- Use Claude 3 Opus with temperature set to 0 to ensure deterministic, factual outputs that don't include creative elaborations.
- Fine-tune the foundation model on the organization's medical literature corpus to ensure it only generates content about known sources.
- Implement Bedrock Knowledge Bases with contextual grounding checks enabled in Guardrails, configure citation extraction in the retrieval response, and use a post-processing Lambda function to verify that all cited sources exist in the retrieved context before returning results.
- Add explicit instructions in the system prompt telling the model to only cite sources from the provided context and never fabricate references.
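The post-processing verification step is simple to sketch. Assuming citations are emitted in a bracketed form like `[doc-12]` (the format is an assumption; any stable convention works), the check flags every cited ID that is absent from the retrieved context:

```python
import re

def verify_citations(answer: str, retrieved_sources: set[str]) -> list[str]:
    """Return the citation IDs in `answer` (e.g. '[doc-12]') that do NOT
    appear in the retrieved context. A non-empty result means the model
    fabricated at least one reference and the answer should be rejected
    or regenerated before being returned to the researcher."""
    cited = re.findall(r"\[(doc-\d+)\]", answer)
    return [c for c in cited if c not in retrieved_sources]
```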
A financial services company is deploying a customer-facing chatbot using Amazon Bedrock. The chatbot handles account inquiries and must be protected against prompt injection attacks that could trick it into revealing other customers' information or performing unauthorized actions. Which combination of security controls provides the MOST comprehensive protection? (Select TWO)
- Design the agent architecture to separate user context from system instructions using Lambda pre-processing, validate all user inputs against allow-lists, and implement IAM-scoped tool permissions that limit what actions the chatbot can perform regardless of instructions.
- Implement rate limiting on the chatbot API to prevent attackers from attempting multiple injection variations.
- Use a smaller, less capable model that is less likely to follow complex injection instructions.
- Store all customer data in encrypted format so that even if the model is manipulated, it cannot reveal readable information.
- Implement Bedrock Guardrails with content filters for harmful content, denied topics for account manipulation attempts, and custom word filters for prompt injection patterns.
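The allow-list validation half of the answer can be sketched as a pre-processing gate. The intent names and the character pattern below are illustrative assumptions; the point is that anything outside the allow-list never reaches the prompt:

```python
import re

# Illustrative allow-list: only known account-inquiry intents whose free
# text matches a conservative character set are forwarded to the model.
ALLOWED_INTENTS = {"balance", "recent_transactions", "card_status"}
SAFE_TEXT = re.compile(r"^[\w\s.,?'-]{1,200}$")

def validate_input(intent: str, text: str) -> bool:
    """Reject anything outside the allow-list before it reaches the
    model, so injected instructions, template syntax, or control
    characters never enter the prompt context."""
    return intent in ALLOWED_INTENTS and bool(SAFE_TEXT.match(text))
```

IAM-scoped tool permissions then bound the blast radius: even a successful injection can only invoke actions the execution role permits.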
A company is building a document processing application using Amazon Bedrock. The application needs to extract information from scanned PDFs containing both text and images. The solution must handle documents up to 100 pages and process them within 30 seconds. Which approach provides the MOST efficient solution?
- Convert the PDF pages to images and use Amazon Rekognition for text detection, then process the results with Amazon Bedrock.
- Use Amazon Bedrock with Claude 3.5 Sonnet's native vision capabilities to process the PDF pages directly, implementing parallel page processing with Lambda.
- Use Amazon Textract to extract text from the PDFs, then pass the extracted text to Amazon Bedrock for processing.
- Store the PDFs in S3 and use Amazon Bedrock Knowledge Bases with default parsing to automatically extract and index the content.
A machine learning team needs to implement semantic chunking for their RAG application using Amazon Bedrock Knowledge Bases. The documents contain technical specifications with complex tables and code snippets. The team wants to preserve the semantic meaning of content while ensuring efficient retrieval. Which chunking strategy is MOST appropriate?
- Use hierarchical chunking with parent chunks of 2000 tokens and child chunks of 500 tokens.
- Use fixed-size chunking with 1000 tokens and 200-token overlap to handle all content types uniformly.
- Use no chunking and store each document as a single unit to preserve complete context.
- Use semantic chunking with custom Lambda-based parsing to handle tables and code blocks as atomic units.
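The custom-parsing option hinges on treating code blocks (and similarly tables) as atomic units. A minimal sketch, splitting markdown at blank lines but never inside a fenced code block:

```python
def split_preserving_code(markdown: str) -> list[str]:
    """Split markdown into chunks at blank lines, but keep fenced code
    blocks together as single atomic chunks so a snippet is never
    split across retrieval units. Tables could be handled the same way
    by detecting pipe-delimited rows."""
    chunks: list[str] = []
    buffer: list[str] = []
    in_code = False

    def flush():
        if buffer:
            chunks.append("\n".join(buffer))
            buffer.clear()

    for line in markdown.splitlines():
        if line.startswith("```"):
            if not in_code:
                flush()                 # close any prose chunk first
                in_code = True
                buffer.append(line)
            else:
                buffer.append(line)     # closing fence ends the code chunk
                in_code = False
                flush()
        elif not line.strip() and not in_code:
            flush()                     # blank line separates prose chunks
        else:
            buffer.append(line)
    flush()
    return chunks
```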
A development team is implementing a customer support chatbot using Amazon Bedrock Agents. The agent needs to access customer order data from a DynamoDB table and check shipping status from an external REST API. The solution must handle authentication securely and support concurrent user sessions. Which implementation approach is MOST appropriate?
- Create two action groups with Lambda functions: one for DynamoDB access using IAM role permissions and another for the REST API using AWS Secrets Manager for API credentials.
- Create a single action group with one Lambda function that handles both DynamoDB and REST API calls, storing API credentials in Lambda environment variables.
- Configure the Bedrock Agent to return control to the calling application for data fetching, then pass the results back in the next conversation turn.
- Use Bedrock Knowledge Bases to ingest order data from DynamoDB and configure a custom connector for the REST API.
A financial services company needs to implement content filtering for their Amazon Bedrock application to prevent the model from discussing competitor products or providing specific investment advice. The solution must log all filtered responses for compliance auditing. Which configuration provides the MOST comprehensive protection?
- Configure Bedrock Guardrails with denied topics for competitor discussions and investment advice, enable word filters for competitor brand names, and configure CloudWatch Logs for guardrail intervention logging.
- Fine-tune the foundation model with examples that avoid competitor discussions and investment advice to ensure the model learns appropriate responses.
- Implement content filters at the HIGH sensitivity level for all harmful content categories to catch any potentially problematic responses.
- Add detailed instructions in the system prompt prohibiting competitor discussions and investment advice, and implement post-processing Lambda functions to scan responses.
A company is deploying a high-traffic generative AI application using Amazon Bedrock. The application processes 50,000 requests per hour during peak times and requires consistent sub-3-second response times. The company wants to optimize costs while maintaining performance. Which architecture BEST meets these requirements?
- Use Provisioned Throughput for baseline capacity, implement semantic caching with ElastiCache, and configure prompt optimization to reduce token usage.
- Deploy a custom model on Amazon SageMaker with auto-scaling endpoints and use CloudFront for edge caching of responses.
- Use cross-region inference to distribute load across multiple regions and implement request queuing with Amazon SQS.
- Use on-demand pricing with aggressive retry logic and implement response caching at the application layer using exact string matching.
A company is building a customer-facing chatbot using Amazon Bedrock that must maintain conversation context across user sessions that may span multiple days. Users should be able to return and continue previous conversations seamlessly. Which session management approach provides the BEST user experience?
- Use Amazon ElastiCache to store conversation history for fast retrieval during active sessions.
- Store full conversation history in DynamoDB with user IDs, implement conversation summarization for older exchanges to manage context window limits, and retrieve relevant history when users return.
- Use Bedrock's built-in session management to automatically persist conversations between sessions.
- Store conversation state in browser local storage and send it with each request when users return.
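The summarization step in the correct option keeps the context window bounded as conversations grow over days. A sketch of the idea, using a naive concatenation as a stand-in for the summary (in practice you would ask the model itself to summarize the older exchanges before storing the result back to DynamoDB):

```python
def build_context(history: list[dict], max_turns: int = 6) -> list[dict]:
    """Keep the most recent turns verbatim and collapse everything older
    into a single synthetic message, so returning users get continuity
    without the full transcript re-entering the context window."""
    if len(history) <= max_turns:
        return history
    older, recent = history[:-max_turns], history[-max_turns:]
    summary_text = "Summary of earlier conversation: " + " ".join(
        turn["content"][:40] for turn in older
    )
    return [{"role": "user", "content": summary_text}] + recent
```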
A development team needs to implement function calling with Amazon Bedrock to allow the model to request specific actions. The functions include querying databases and calling external APIs with varying response times. Which implementation pattern handles function calling MOST effectively?
- Include function definitions in the system prompt and parse model outputs for function call requests, then execute functions and return results in the next conversation turn.
- Use Bedrock Agents with action groups that define available functions through OpenAPI schemas, implement Lambda functions for each action with appropriate timeout configurations based on expected response times.
- Use separate Bedrock invocations for function determination and response generation to separate concerns.
- Pre-execute all potential functions before invoking Bedrock and provide results in the prompt context.
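With Bedrock Agents, each action group's functions are described to the model through an OpenAPI schema. A hedged sketch of such a schema for one of the functions in the scenario — the path, operation ID, and parameters are illustrative:

```python
def order_status_schema() -> dict:
    """Illustrative OpenAPI 3.0 schema for an agent action group. The
    agent reads this to decide when to call the backing Lambda function
    and how to fill in the orderId path parameter."""
    return {
        "openapi": "3.0.0",
        "info": {"title": "Order API", "version": "1.0.0"},
        "paths": {
            "/orders/{orderId}/status": {
                "get": {
                    "operationId": "getOrderStatus",
                    "description": "Look up the shipping status of an order",
                    "parameters": [{
                        "name": "orderId",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }],
                    "responses": {
                        "200": {
                            "description": "Current order status",
                            "content": {
                                "application/json": {"schema": {"type": "object"}}
                            },
                        }
                    },
                }
            }
        },
    }
```

The Lambda behind a slow external API gets a longer timeout than the one backing a fast database query, which is what the "appropriate timeout configurations" in the answer refers to.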
A financial company is deploying Amazon Bedrock for generating investment reports. The reports must include proper disclaimers and risk warnings, and must never provide specific buy/sell recommendations. Which guardrail configuration ensures compliance?
- Implement post-processing that scans outputs and rejects any reports containing investment terminology.
- Train compliance officers to review all generated reports before distribution to catch any violations.
- Use a model fine-tuned on compliant financial reports to ensure outputs naturally follow compliance guidelines.
- Configure Bedrock Guardrails with denied topics for investment recommendations, add word filters for action words like 'buy' and 'sell' in recommendation contexts, and include mandatory disclaimer text in system prompts that the model must include.
A company is implementing Amazon Bedrock for processing customer feedback at scale. They need to extract sentiment, key themes, and action items from thousands of feedback entries daily. Cost efficiency is critical. Which architecture BEST balances accuracy with cost?
- Use Claude 3 Opus for maximum accuracy in understanding nuanced customer feedback sentiment.
- Process each feedback entry in real-time as it arrives to provide immediate insights to customer service teams.
- Use batch inference with Amazon Bedrock for processing feedback in scheduled batches, select a cost-efficient model like Claude 3 Haiku for the extraction task, and implement output schemas to ensure consistent, parseable results.
- Use Amazon Comprehend for sentiment analysis as it's specifically designed for this use case.
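Batch inference jobs in Bedrock read JSONL records from S3. A hedged sketch of preparing those records — the `recordId`/`modelInput` envelope follows the documented batch format as I understand it, and the prompt template is illustrative (verify the exact `modelInput` shape for your chosen model in the Bedrock documentation):

```python
import json

def build_batch_records(feedback_entries: list[str]) -> str:
    """Serialize feedback entries as JSONL for a Bedrock batch inference
    job, using the Anthropic messages payload (suitable for a
    cost-efficient model like Claude 3 Haiku) and a prompt that asks for
    a consistent, parseable JSON output schema."""
    prompt_template = (
        "Extract sentiment (positive/neutral/negative), key themes, and "
        "action items as JSON from this feedback: "
    )
    lines = []
    for i, entry in enumerate(feedback_entries):
        record = {
            "recordId": f"feedback-{i:06d}",
            "modelInput": {
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 300,
                "messages": [
                    {"role": "user", "content": prompt_template + entry}
                ],
            },
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# The JSONL file is uploaded to S3 and referenced when creating the
# batch job (bedrock.create_model_invocation_job in boto3).
```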
A startup is building a RAG-based question-answering system using Amazon Bedrock Knowledge Bases. During testing, they notice the system sometimes retrieves irrelevant documents that reduce answer quality. Which optimization MOST effectively improves retrieval relevance?
- Add more documents to the knowledge base to improve the chances of finding relevant matches.
- Increase the top-k parameter to retrieve more documents, giving the model more context to work with.
- Use a larger embedding model to generate more semantically rich document representations.
- Implement query expansion using the foundation model to generate multiple query variations, rerank retrieved results using a cross-encoder model, and tune similarity thresholds to filter low-relevance matches.
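The three techniques in the correct option compose into a simple pipeline. This sketch uses stand-ins for the model calls: hard-coded query variations where a foundation model would generate them, and a lexical term-overlap score where a cross-encoder reranking model would score query-document pairs:

```python
def expand_query(query: str) -> list[str]:
    """Stand-in for model-generated query variations; in practice these
    would come from a prompt like 'rewrite this query three ways'."""
    return [query, f"What is {query}?", f"Explain {query} in detail"]

def rerank(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Toy cross-encoder stand-in: score each document by term overlap
    with the query and keep the top_k. A real system would call a
    reranking model here, then drop results below a tuned similarity
    threshold to filter out low-relevance matches."""
    q_terms = set(query.lower().split())

    def score(doc: str) -> int:
        return len(q_terms & set(doc.lower().split()))

    return sorted(documents, key=score, reverse=True)[:top_k]
```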