
Amazon Bedrock for AIP-C01: Complete Guide to AWS GenAI Services [2026]

Preporato Team · February 3, 2026 · 22 min read · AIP-C01

Amazon Bedrock is the centerpiece of the AWS Certified Generative AI Developer - Professional (AIP-C01) exam. As AWS's fully managed foundation model service, Bedrock provides access to leading AI models and production-ready features that form the foundation of enterprise GenAI applications. This comprehensive guide covers every Bedrock feature you need to master for exam success.

Exam Weight: 80%+ Bedrock Content

Amazon Bedrock concepts appear across all five AIP-C01 exam domains. Questions frequently test your understanding of Knowledge Bases, Agents, Guardrails, model selection, and the Converse API. This guide focuses exclusively on Bedrock features critical to passing the exam.

What is Amazon Bedrock?

Amazon Bedrock is a fully managed service that provides API access to foundation models (FMs) from Amazon and leading AI companies. Unlike self-hosted models, Bedrock eliminates infrastructure management while providing enterprise-grade security, privacy, and compliance.

Key Value Propositions:

  • No Infrastructure Management: Models run on AWS-managed infrastructure
  • Private & Secure: Your data never leaves your AWS account, never trains models
  • Multiple Providers: Access Claude, Llama, Titan, Mistral, and more through unified APIs
  • Enterprise Features: Built-in guardrails, knowledge bases, agents, and fine-tuning
  • Pay-Per-Use: Token-based pricing with no upfront commitments

Exam Quick Facts

  • Role: Core AIP-C01 service
  • Cost: On-demand, roughly $0.0001-$0.02 per 1K tokens depending on model
  • Exam coverage: 60-70% of exam questions
  • Scope: Know ALL Bedrock features
  • Content: Updated quarterly
  • Format: APIs, Console, SDKs

Preparing for AIP-C01? Practice with 455+ exam questions

Foundation Models in Bedrock

Understanding model selection is critical for AIP-C01. The exam tests your ability to choose appropriate models based on use case, cost, performance, and capability requirements.

Core Topics
  • Nova Micro: Text-only, lowest latency, cost-optimized
  • Nova Lite: Multimodal (text + image), fast processing
  • Nova Pro: Multimodal with complex reasoning
  • Nova Premier: Most capable Nova tier, for complex reasoning tasks
  • All Nova models trained on responsible AI principles
  • Native function calling and tool use support
  • Up to 300K token context windows
Skills Tested
  • Select appropriate Nova model tier for use cases
  • Understand Nova multimodal capabilities
  • Compare Nova vs Claude vs Llama trade-offs
Example Question Topics
  • Which Nova model is best for real-time customer support chatbots?
  • When should you choose Nova Pro over Nova Lite for document analysis?

Exam Model Selection Strategy

Quick Decision Framework:

  • Lowest cost, simple tasks: Amazon Titan Text Lite, Claude Haiku 4.5, Nova Micro
  • Best for coding & agents: Claude Sonnet 4.5, Nova Pro
  • Frontier reasoning: Claude Opus 4.5 (with extended thinking)
  • Open-source requirement: Llama 3.1/3.2
  • Embeddings for RAG: Titan Embeddings V2, Cohere Embed
  • Image generation: Titan Image Generator, Stability AI
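The decision framework above can be sketched as a simple routing helper. The model IDs below are illustrative placeholders, not guaranteed identifiers; always confirm the exact model IDs available in your region in the Bedrock console.

```python
# Map task profiles to candidate Bedrock model IDs.
# NOTE: IDs are illustrative -- verify exact identifiers in your region.
MODEL_BY_TASK = {
    'simple':    'amazon.nova-micro-v1:0',        # lowest cost, text-only
    'coding':    'anthropic.claude-sonnet-4-5',   # coding & agents
    'reasoning': 'anthropic.claude-opus-4-5',     # frontier reasoning
    'embedding': 'amazon.titan-embed-text-v2:0',  # RAG embeddings
}

def select_model(task: str) -> str:
    """Return a model ID for the task profile, defaulting to the cheap tier."""
    return MODEL_BY_TASK.get(task, MODEL_BY_TASK['simple'])
```

Defaulting unknown tasks to the cheapest tier mirrors the exam's cost-optimization guidance: escalate to a larger model only when the use case demands it.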

Bedrock Converse API

The Converse API is Bedrock's unified interface for all chat-based model interactions. It provides a consistent request/response format across different foundation models, simplifying multi-model applications.

Key Features:

  • Unified Interface: Same API structure for Claude, Llama, Titan, Mistral
  • Message History: Built-in conversation context management
  • Tool Use: Native function calling across supported models
  • Streaming: Real-time response streaming for low-latency UX
  • System Prompts: Consistent system message handling

Converse API vs InvokeModel API

| Feature | Converse API | InvokeModel API |
| --- | --- | --- |
| Interface | Unified across models | Model-specific formats |
| Conversation History | Built-in management | Manual implementation |
| Tool Use | Native support | Model-dependent |
| Streaming | ConverseStream | InvokeModelWithResponseStream |
| Best For | Chat applications | Custom integrations |
| Exam Focus | HIGH | Medium |
# Converse API Example (Exam-relevant pattern)
import boto3

bedrock = boto3.client('bedrock-runtime')

response = bedrock.converse(
    modelId='anthropic.claude-sonnet-4-5-20250101-v1:0',
    messages=[
        {
            'role': 'user',
            'content': [{'text': 'Explain RAG architecture in 3 sentences.'}]
        }
    ],
    system=[{'text': 'You are an AWS solutions architect.'}],
    inferenceConfig={
        'maxTokens': 500,
        'temperature': 0.7,
        'topP': 0.9
    }
)

print(response['output']['message']['content'][0]['text'])

Exam Focus: Tool Use with Converse

The AIP-C01 exam frequently tests tool use (function calling) through the Converse API. Understand how to define tools, handle tool use requests, and return tool results. This is critical for Bedrock Agents implementation.
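The tool-use loop can be sketched with two small helpers that follow the Converse API's documented `toolUse`/`toolResult` message shapes. The `get_order_status` tool name and its schema are hypothetical examples, not part of any AWS API:

```python
# Hypothetical tool definition for the Converse API tool-use flow.
TOOL_CONFIG = {
    'tools': [{
        'toolSpec': {
            'name': 'get_order_status',  # illustrative tool name
            'description': 'Look up the status of a customer order by ID.',
            'inputSchema': {
                'json': {
                    'type': 'object',
                    'properties': {'order_id': {'type': 'string'}},
                    'required': ['order_id'],
                }
            }
        }
    }]
}

def extract_tool_use(response):
    """Return the first toolUse block from a Converse response, or None."""
    if response.get('stopReason') != 'tool_use':
        return None
    for block in response['output']['message']['content']:
        if 'toolUse' in block:
            return block['toolUse']
    return None

def tool_result_message(tool_use, result):
    """Build the follow-up user message that carries the tool result back."""
    return {
        'role': 'user',
        'content': [{
            'toolResult': {
                'toolUseId': tool_use['toolUseId'],
                'content': [{'json': result}],
            }
        }]
    }
```

In practice you pass `toolConfig=TOOL_CONFIG` to `converse`, check for `stopReason == 'tool_use'`, run the requested function yourself, then append the `toolResult` message and call `converse` again.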

Amazon Bedrock Knowledge Bases

Knowledge Bases enable Retrieval-Augmented Generation (RAG) by connecting foundation models to your organization's data. This is the most heavily tested Bedrock feature on the AIP-C01 exam.

Architecture Overview:

  1. Data Sources: S3 buckets, Confluence, SharePoint, Salesforce, web crawlers
  2. Chunking: Documents split into manageable pieces
  3. Embedding: Chunks converted to vectors using embedding models
  4. Vector Store: Vectors stored in OpenSearch Serverless, Aurora pgvector, Pinecone, or S3 Vectors
  5. Retrieval: Semantic search finds relevant chunks at query time
  6. Generation: Foundation model generates response using retrieved context
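At query time, steps 5 and 6 collapse into a single RetrieveAndGenerate call. A minimal sketch, assuming a `bedrock-agent-runtime` client; the knowledge base ID and model ARN are placeholders you substitute from your own account:

```python
def query_knowledge_base(client, kb_id: str, model_arn: str, question: str) -> str:
    """Single-call RAG: retrieve relevant chunks, then generate a grounded
    answer. `client` is a boto3 'bedrock-agent-runtime' client."""
    response = client.retrieve_and_generate(
        input={'text': question},
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': kb_id,   # placeholder, e.g. 'KB12345'
                'modelArn': model_arn,      # ARN of the generation model
            },
        },
    )
    return response['output']['text']
```

Usage would be `query_knowledge_base(boto3.client('bedrock-agent-runtime'), kb_id, model_arn, 'What is our refund policy?')`; the full response also carries citations for source attribution.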
Core Topics
  • Amazon S3 (primary source)
  • Confluence Cloud and Data Center
  • SharePoint Online
  • Salesforce
  • Web Crawler for public websites
  • Custom data connectors
  • Supported formats: PDF, TXT, MD, HTML, DOC, CSV
Skills Tested
  • Configure S3 data source with proper IAM
  • Set up Confluence/SharePoint connectors
  • Implement incremental sync strategies
  • Handle multi-format document ingestion
Example Question Topics
  • How do you configure a Knowledge Base to sync documents from both S3 and Confluence?
  • What IAM permissions are required for the Knowledge Base service role to access S3?

NEW: S3 Vectors (2025)

S3 Vectors eliminates the need for a separate vector database. Vectors are stored directly in S3 with built-in indexing. Benefits:

  • Zero vector database management
  • Automatic scaling
  • Lower cost for moderate workloads
  • Native S3 security and compliance
  • Best for applications under 1M vectors

Knowledge Base Query Flow:

Amazon Bedrock Knowledge Base Query Flow - RAG pipeline from user query to generated response

Knowledge Base Retrieval Strategies

| Strategy | Use Case | Pros | Cons |
| --- | --- | --- | --- |
| Semantic Search | General Q&A | High relevance | May miss exact matches |
| Hybrid Search | Mixed queries | Best of both | More complex |
| Metadata Filtering | Structured data | Precise control | Requires metadata |
| Multi-query | Complex questions | Better coverage | Higher latency |
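Hybrid search and metadata filtering can be combined in one Retrieve call. A minimal sketch, assuming a `bedrock-agent-runtime` client and a hypothetical `department` metadata key on your documents:

```python
def retrieve_chunks(client, kb_id: str, query: str, department: str):
    """Hybrid search with a metadata filter via the Retrieve API.
    `client` is a boto3 'bedrock-agent-runtime' client."""
    response = client.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={'text': query},
        retrievalConfiguration={
            'vectorSearchConfiguration': {
                'numberOfResults': 5,
                'overrideSearchType': 'HYBRID',  # 'SEMANTIC' or 'HYBRID'
                'filter': {
                    # Only return chunks whose metadata matches this value
                    'equals': {'key': 'department', 'value': department},
                },
            }
        },
    )
    return response['retrievalResults']
```

Separating retrieval from generation like this is useful when you want to rerank or post-process chunks before passing them to a model yourself.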

Amazon Bedrock Agents

Bedrock Agents enable autonomous AI systems that can plan, execute actions, and interact with external tools and APIs. This is the second most tested feature on AIP-C01.

Agent Capabilities:

  • Autonomous Planning: Break complex tasks into steps
  • Tool Invocation: Call Lambda functions, APIs, Knowledge Bases
  • Multi-turn Conversations: Maintain context across interactions
  • Action Groups: Define available tools and their parameters
  • Knowledge Base Integration: Ground responses in enterprise data
Core Topics
  • Foundation model selection for agents
  • Action groups and Lambda functions
  • OpenAPI schema definitions
  • Agent instructions and system prompts
  • Session management and state
  • Return of control patterns
  • Agent aliases and versioning
Skills Tested
  • Design agent action groups
  • Write effective agent instructions
  • Configure Lambda function integration
  • Implement return of control for HITL (human-in-the-loop)
  • Version and deploy agents safely
Example Question Topics
  • How do you configure an agent to query a database, process results, and send email notifications?
  • What is the difference between return of control and standard agent execution?

Agent Components:

  1. Foundation Model: Powers reasoning and planning (Claude recommended)
  2. Instructions: System prompt defining agent behavior
  3. Action Groups: Tools the agent can invoke
  4. Knowledge Bases: Data sources for grounded responses
  5. Guardrails: Safety controls for inputs and outputs
# Agent Invocation Pattern
import boto3

bedrock_agent_runtime = boto3.client('bedrock-agent-runtime')

response = bedrock_agent_runtime.invoke_agent(
    agentId='AGENT_ID',
    agentAliasId='ALIAS_ID',
    sessionId='unique-session-id',
    inputText='Book a flight from NYC to LAX for next Friday'
)

# Handle streaming response
for event in response['completion']:
    if 'chunk' in event:
        print(event['chunk']['bytes'].decode())

Exam Trap: Agent vs Knowledge Base

Know when to use Agents vs Knowledge Bases alone:

  • Knowledge Base only: Simple Q&A, document search, no actions needed
  • Agent + Knowledge Base: Complex tasks requiring actions + data retrieval
  • Agent without KB: Action execution without enterprise data grounding

Amazon Bedrock Guardrails

Guardrails provide content filtering, topic restrictions, and safety controls for GenAI applications. This is heavily tested in Domain 3 (AI Safety, Security, and Governance - 20%).

Core Topics
  • Content filters: Hate, violence, sexual, misconduct
  • Denied topics: Block specific subjects
  • Word filters: Explicit word blocking
  • PII detection and redaction
  • Regex patterns for custom filtering
  • Contextual grounding: Reduce hallucinations
  • Input and output guardrails
Skills Tested
  • Configure multi-layer content filtering
  • Implement PII detection strategies
  • Design topic restriction policies
  • Apply guardrails to agents and models
  • Test guardrail effectiveness
Example Question Topics
  • How do you configure guardrails to detect and redact credit card numbers in agent responses?
  • A healthcare app needs to block discussions of drug prices. How should guardrails be configured?

Guardrail Configuration Layers:

| Layer | Purpose | Configuration |
| --- | --- | --- |
| Content Filters | Block harmful content | Set filter strengths (LOW/MEDIUM/HIGH) |
| Denied Topics | Block specific subjects | Define topic descriptions |
| Word Filters | Block explicit words | Add word lists |
| PII Filters | Protect sensitive data | Enable PII types, set action (BLOCK/ANONYMIZE) |
| Contextual Grounding | Reduce hallucinations | Set grounding threshold |

PII Filter Actions

| Action | Behavior | Use Case |
| --- | --- | --- |
| BLOCK | Reject entire request/response | Strict compliance environments |
| ANONYMIZE | Replace PII with placeholders | Allow processing without exposure |
| Allow | Pass through unchanged | Non-sensitive applications |
# Creating a Guardrail
import boto3

bedrock = boto3.client('bedrock')  # control-plane client, not 'bedrock-runtime'

guardrail_response = bedrock.create_guardrail(
    name='customer-support-guardrail',
    description='Guardrail for customer support chatbot',
    contentPolicyConfig={
        'filtersConfig': [
            {'type': 'HATE', 'inputStrength': 'HIGH', 'outputStrength': 'HIGH'},
            {'type': 'VIOLENCE', 'inputStrength': 'HIGH', 'outputStrength': 'HIGH'}
        ]
    },
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'competitor-discussion',
                'definition': 'Discussion about competitor products or services',
                'type': 'DENY'
            }
        ]
    },
    sensitiveInformationPolicyConfig={
        'piiEntitiesConfig': [
            {'type': 'CREDIT_DEBIT_CARD_NUMBER', 'action': 'ANONYMIZE'},
            {'type': 'US_SOCIAL_SECURITY_NUMBER', 'action': 'BLOCK'}
        ]
    },
    # Required: messages returned to the user when the guardrail intervenes
    blockedInputMessaging='Sorry, I cannot help with that request.',
    blockedOutputsMessaging='Sorry, I cannot provide that response.'
)

Exam Strategy: Guardrails Application

Guardrails can be applied at three levels:

  1. Model invocation: Direct InvokeModel or Converse calls
  2. Knowledge Base queries: Applied during RetrieveAndGenerate
  3. Agent execution: Applied to agent inputs and outputs

The exam tests understanding of where to apply guardrails in complex architectures.
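A sketch of level 1, attaching an existing guardrail to a Converse call; the same `guardrailConfig` shape also applies to streaming via `converse_stream`:

```python
def converse_with_guardrail(client, model_id, guardrail_id, guardrail_version, text):
    """Model-invocation-level guardrail application.
    `client` is a boto3 'bedrock-runtime' client."""
    return client.converse(
        modelId=model_id,
        messages=[{'role': 'user', 'content': [{'text': text}]}],
        guardrailConfig={
            'guardrailIdentifier': guardrail_id,    # guardrail ID or ARN
            'guardrailVersion': guardrail_version,  # e.g. 'DRAFT' or '1'
        },
    )
```

For Knowledge Bases and Agents, the equivalent configuration lives on the `RetrieveAndGenerate` request and the agent definition respectively, so the filtering happens server-side at each level.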

Amazon Bedrock Flows

Bedrock Flows (previously called Prompt Flows) enable visual orchestration of multi-step GenAI workflows without code.

Flow Components:

  • Input Node: Receives user input
  • Prompt Node: Sends prompts to foundation models
  • Knowledge Base Node: Queries Knowledge Bases
  • Condition Node: Branching logic
  • Lambda Node: Custom code execution
  • Output Node: Returns final response

Example Flow: Document Processing Pipeline

Bedrock Flows Document Processing Pipeline - Textract, Claude, Nova, Lambda routing

This pipeline shows a typical document processing workflow:

  1. User uploads document to S3
  2. Textract extracts text from PDFs, images, or scanned documents
  3. Claude summarizes the extracted content
  4. Nova classifies the document by category
  5. Lambda routes to specialized handlers based on classification
  6. Bedrock generates final response
  7. Output stored in S3 and returned to user

Master These Concepts with Practice

Our AIP-C01 practice bundle includes:

  • 7 full practice exams (455+ questions)
  • Detailed explanations for every answer
  • Domain-by-domain performance tracking

30-day money-back guarantee

Model Customization and Fine-Tuning

While Bedrock provides powerful base models, some use cases require customization. The exam tests your understanding of when and how to customize models.

Bedrock Customization Options

| Method | Use Case | Data Required | Cost |
| --- | --- | --- | --- |
| Continued Pre-training | Domain adaptation | Large unlabeled corpus | High |
| Fine-tuning | Task specialization | Labeled examples | Medium |
| Prompt Engineering | Quick optimization | No training data | Low |
| RAG (Knowledge Bases) | Enterprise data | Documents | Low-Medium |

Fine-Tuning Considerations:

  • Only available for select models (Titan, some Llama variants)
  • Requires labeled training data in specific formats
  • Creates a custom model version in your account
  • Higher inference costs than base models
  • Data never leaves your account
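When fine-tuning does apply, the job is started through the `bedrock` control-plane client. A minimal sketch: the job name, model name, base model, S3 paths, and hyperparameter values below are all illustrative, and the exact set of supported hyperparameters varies by base model.

```python
def start_fine_tuning_job(client, role_arn: str, bucket: str):
    """Sketch of launching a Bedrock fine-tuning job.
    `client` is a boto3 'bedrock' (control-plane) client; names and
    S3 paths are placeholders."""
    return client.create_model_customization_job(
        jobName='support-style-ft-001',            # illustrative
        customModelName='support-style-custom',    # illustrative
        roleArn=role_arn,                          # needs S3 read/write access
        baseModelIdentifier='amazon.titan-text-lite-v1',  # must support fine-tuning
        customizationType='FINE_TUNING',           # vs 'CONTINUED_PRE_TRAINING'
        trainingDataConfig={'s3Uri': f's3://{bucket}/train.jsonl'},
        outputDataConfig={'s3Uri': f's3://{bucket}/output/'},
        hyperParameters={'epochCount': '2', 'learningRate': '0.00001'},
    )
```

The resulting custom model appears in your account and is invoked like any other model, typically via provisioned throughput.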

Exam Focus: Fine-Tuning vs RAG

Choose Fine-Tuning when:

  • Need to change model behavior/style
  • Have consistent task-specific patterns
  • Want to encode domain knowledge in weights

Choose RAG when:

  • Data changes frequently
  • Need citations/source attribution
  • Want to keep model general-purpose
  • Have limited labeled data

Bedrock Pricing and Cost Optimization

Cost optimization is tested in Domain 4 (Operational Efficiency - 12%). Understanding Bedrock pricing is essential.

Pricing Models:

| Model | On-Demand | Provisioned Throughput |
| --- | --- | --- |
| Claude Sonnet 4.5 | $3.00/M input, $15.00/M output | Monthly commitment |
| Claude Haiku 4.5 | $0.80/M input, $4.00/M output | Monthly commitment |
| Titan Text | $0.15/M input, $0.20/M output | Monthly commitment |
| Nova Micro | $0.035/M input, $0.14/M output | Monthly commitment |

Cost Optimization Best Practices:

  1. Start with smaller models: Use Haiku/Nova Micro for simple tasks
  2. Implement caching: Cache common queries and responses
  3. Optimize prompts: Shorter prompts = lower costs
  4. Use batch inference: For non-real-time workloads
  5. Monitor usage: Set CloudWatch alarms on token consumption
  6. Provisioned throughput: For predictable high-volume workloads
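Practices 1 and 3 pay off in direct proportion to token counts, which makes back-of-the-envelope cost math worth practicing. A quick estimator using the on-demand rates from the pricing table above (always verify against current AWS pricing before relying on these numbers):

```python
# On-demand prices per million tokens as (input, output), taken from the
# pricing table above -- verify against current AWS pricing.
PRICES_PER_M = {
    'claude-sonnet-4-5': (3.00, 15.00),
    'claude-haiku-4-5':  (0.80, 4.00),
    'titan-text':        (0.15, 0.20),
    'nova-micro':        (0.035, 0.14),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated on-demand cost in USD for one workload."""
    in_rate, out_rate = PRICES_PER_M[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

For example, a daily workload of 10M input and 2M output tokens costs roughly $16 on Haiku versus $60 on Sonnet at these rates, which is why routing simple tasks to smaller models is the first optimization lever.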

Bedrock Security and Compliance

Security is tested across multiple domains, particularly Domain 3 (20%) and throughout integration scenarios.

IAM Policy Example:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku*",
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet*"
            ]
        }
    ]
}

Exam Security Scenarios

Common security scenarios tested:

  • Restricting model access by IAM policies
  • Enabling VPC endpoints for private Bedrock access
  • Configuring CloudTrail for model invocation auditing
  • Encrypting Knowledge Base data with customer-managed KMS keys
  • Implementing guardrails for PII protection

Monitoring and Troubleshooting

Domain 5 (Testing, Validation, and Troubleshooting - 11%) tests your ability to monitor and debug Bedrock applications.

Key CloudWatch Metrics:

  • InvocationLatency: Model response time
  • InvocationClientErrors: 4xx errors (client issues)
  • InvocationServerErrors: 5xx errors (service issues)
  • InvocationThrottles: Rate limit hits
  • InputTokenCount: Tokens consumed in requests
  • OutputTokenCount: Tokens generated in responses
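These metrics live in the `AWS/Bedrock` CloudWatch namespace with a `ModelId` dimension. A sketch of pulling average latency for one model over the last hour:

```python
from datetime import datetime, timedelta, timezone

def avg_invocation_latency(cloudwatch, model_id: str, hours: int = 1):
    """Average Bedrock InvocationLatency (ms) for one model over recent
    hours, or None if no datapoints. `cloudwatch` is a boto3 'cloudwatch'
    client."""
    now = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace='AWS/Bedrock',
        MetricName='InvocationLatency',
        Dimensions=[{'Name': 'ModelId', 'Value': model_id}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,               # 5-minute buckets
        Statistics=['Average'],
    )
    points = resp['Datapoints']
    return sum(p['Average'] for p in points) / len(points) if points else None
```

The same call shape works for the token-count metrics, which is the basis for the CloudWatch alarms recommended in the cost section.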

Common Issues and Solutions:

| Issue | Cause | Solution |
| --- | --- | --- |
| Irrelevant RAG responses | Poor chunking | Adjust chunk size, add overlap |
| Agent tool failures | Schema mismatch | Validate OpenAPI schema |
| High latency | Model selection | Use faster model variant |
| Throttling errors | Rate limits | Implement retry with backoff |
| Guardrail blocks valid content | Overly strict config | Tune filter thresholds |
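The "retry with backoff" fix for throttling can be sketched dependency-free; production code would catch `botocore.exceptions.ClientError` and check `err.response['Error']['Code'] == 'ThrottlingException'` instead of matching the message text as this sketch does:

```python
import random
import time

def invoke_with_backoff(call, max_retries=5, base_delay=0.5):
    """Retry `call` on throttling with exponential backoff plus jitter.
    Matching on the exception message keeps this sketch free of boto3;
    real code should inspect the botocore error code."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception as exc:
            if 'Throttling' not in str(exc) or attempt == max_retries - 1:
                raise  # non-throttling error, or retries exhausted
            # Exponential backoff: 0.5s, 1s, 2s, ... plus random jitter
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Usage: `invoke_with_backoff(lambda: bedrock.converse(...))`. The jitter prevents synchronized retry storms when many clients are throttled at once.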


Frequently Asked Questions

How much of the AIP-C01 exam covers Amazon Bedrock?

Amazon Bedrock concepts appear in 70-80% of exam questions. While the exam also covers SageMaker, Comprehend, and other AI services, Bedrock is the primary focus. Master all Bedrock features covered in this guide.

Next Steps

After mastering Bedrock fundamentals, continue your AIP-C01 preparation:

  1. Complete our AIP-C01 Complete Guide for full exam coverage
  2. Build hands-on projects using Knowledge Bases, Agents, and Guardrails
  3. Take Preporato practice exams to test your Bedrock knowledge
  4. Review AWS documentation for latest Bedrock features and updates

Ready to Pass AIP-C01?

Preporato offers 7 full-length practice exams specifically designed for the AWS Certified Generative AI Developer - Professional exam. Our questions cover all Bedrock features with detailed explanations referencing official AWS documentation. 92% of our students pass on their first attempt.



Last updated: February 3, 2026
