Free AWS Certified Solutions Architect - Associate (SAA-C03) Practice Questions
Test your knowledge with 20 free exam-style questions
SAA-C03 Exam Facts
- Questions: 65
- Passing score: 720/1000
- Duration: 130 min
Frequently Asked Questions
Why start with free sample questions?
These 20 sample questions let you experience the exact format, difficulty, and question styles you'll encounter on exam day. Use them to identify knowledge gaps and decide whether our full practice exam package is right for your preparation strategy.
How closely do the questions match the real exam?
Our questions mirror the actual exam format, difficulty level, and topic distribution. Each question includes detailed explanations to help you understand the concepts.
What does the full package include?
The full package includes 6 complete practice exams with 390+ unique questions, detailed explanations, progress tracking, and lifetime access.
Are the questions up to date?
Yes! Our SAA-C03 practice questions are regularly updated to reflect the latest exam objectives and question formats. All questions align with the current 2026 exam blueprint.
Sample SAA-C03 Practice Questions
Browse all 20 free AWS Certified Solutions Architect - Associate practice questions below.
A financial services company operates a multi-tier application across 50 VPCs in us-east-1. They need to implement centralized egress filtering for all internet-bound traffic using third-party firewall appliances. The solution must minimize operational overhead and support automatic failover. Which architecture provides the MOST scalable solution?
- Configure AWS Network Firewall in each VPC and use AWS Firewall Manager to centrally manage policies.
- Deploy NAT Gateways in each VPC and route traffic through a central inspection VPC using VPC Peering.
- Use AWS Transit Gateway with a dedicated inspection VPC containing Gateway Load Balancer endpoints connected to firewall appliances.
- Deploy a Network Load Balancer in a central VPC and configure all spoke VPCs to route traffic through the NLB to firewall EC2 instances.
A media streaming company has a Lambda function that processes video uploads and stores metadata in an RDS MySQL database. During peak hours, the function experiences throttling errors and database connection timeouts. The database shows high CPU usage from connection management, and CloudWatch Logs show 'Too many connections' errors. Which solution will resolve these issues with the LEAST code changes?
- Implement an Amazon SQS queue between Lambda and the database to throttle connection requests.
- Create an Amazon RDS Proxy and configure the Lambda function to connect through the proxy endpoint.
- Implement connection pooling logic in the Lambda function code using a library like mysql2/promise with pool configuration.
- Increase the RDS instance size to provide more memory and CPU for connection handling.
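For context on the RDS Proxy option above: moving a Lambda function behind a proxy typically means changing only the connection endpoint, since the proxy pools and multiplexes connections on the function's behalf. A minimal sketch (the endpoint, user, and database names are illustrative; the actual driver call is left out because it needs a live database):

```python
import os

# Hypothetical RDS Proxy endpoint; a proxy exposes a stable DNS name
# that fronts the RDS MySQL instance and manages the connection pool.
PROXY_ENDPOINT = os.environ.get(
    "DB_PROXY_ENDPOINT", "video-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"
)

def connection_params():
    """Build the settings a MySQL driver (e.g. pymysql) would receive.
    Versus connecting to the instance directly, only the host changes."""
    return {
        "host": PROXY_ENDPOINT,  # proxy endpoint instead of the instance endpoint
        "port": 3306,
        "user": os.environ.get("DB_USER", "app"),
        "database": "video_metadata",  # illustrative schema name
        "connect_timeout": 5,
    }
```

This is why the proxy route tends to require the fewest code changes: the handler's query logic stays untouched.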
A healthcare organization runs a HIPAA-compliant application on Amazon EKS. The application consists of multiple microservices, where each service requires access to different AWS resources: the patient service needs DynamoDB access, the imaging service needs S3 access, and the billing service needs access to both SQS and Secrets Manager. Security requirements mandate that each service can access only its required AWS resources and nothing more. Which solution provides the MOST granular and secure permissions model?
- Attach an IAM role to the EKS worker nodes with permissions for all required services (DynamoDB, S3, SQS, Secrets Manager) and rely on application-level access control.
- Store AWS access keys in Kubernetes Secrets and mount them as environment variables in each pod, with different keys for each microservice.
- Create separate IAM roles for each microservice. Use Kubernetes service accounts with IAM Roles for Service Accounts (IRSA) to assign each pod its specific role.
- Use EKS Pod Identities to assign IAM roles directly to pods, with separate roles for each microservice's requirements.
An e-commerce company uses Amazon Aurora MySQL for their product catalog database. The application experiences highly variable traffic: during normal hours it uses minimal resources, but during flash sales traffic can spike to 100x normal levels within minutes. The company wants to optimize costs while ensuring the database can handle sudden traffic spikes without performance degradation. Which Aurora configuration meets these requirements MOST cost-effectively?
- Use Aurora Serverless v1 with automatic pause enabled and scaling configured for the expected peak load.
- Use Aurora Provisioned with 15 read replicas in an Auto Scaling group that scales based on CPU utilization.
- Use Aurora Provisioned with a single large instance (db.r6g.16xlarge) to handle peak load and maintain it at all times.
- Deploy Aurora Serverless v2 with a minimum capacity of 0.5 ACUs and maximum capacity of 128 ACUs configured to scale automatically based on workload.
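The Serverless v2 option above maps to a small set of cluster parameters. A hedged sketch (the cluster identifier is illustrative; with boto3 this dict would be passed to `rds.create_db_cluster(**serverless_v2_params)`, omitted here because it requires AWS credentials):

```python
# Illustrative Aurora Serverless v2 cluster parameters mirroring the
# scaling range named in the option: 0.5 ACUs at idle, 128 ACUs at peak.
serverless_v2_params = {
    "DBClusterIdentifier": "catalog-cluster",  # placeholder name
    "Engine": "aurora-mysql",
    "ServerlessV2ScalingConfiguration": {
        "MinCapacity": 0.5,    # low idle floor keeps normal-hours cost minimal
        "MaxCapacity": 128.0,  # ceiling available for flash-sale spikes
    },
}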
A data analytics company has an AWS Lambda function that processes large CSV files uploaded to S3. The processing involves data transformation, validation, and loading into Amazon Redshift. During testing, some files are taking 14 minutes to process, approaching Lambda's maximum execution time, and the team is concerned about timeout failures in production. Which architectural change will BEST address this issue while maintaining a serverless approach?
- Split large CSV files client-side before uploading to S3, ensuring no single file triggers more than 10 minutes of processing.
- Increase the Lambda function timeout to 15 minutes and allocate maximum memory (10,240 MB) to speed up processing.
- Migrate the processing logic from Lambda to an ECS Fargate task triggered by S3 event notifications.
- Use AWS Step Functions to orchestrate the workflow, breaking the processing into smaller Lambda functions for chunked processing, with state management between steps.
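The Step Functions option above can be sketched as an Amazon States Language definition: a Map state fans chunks of the CSV out to a short-lived worker function so no single invocation approaches the 15-minute limit. All function names and ARNs below are placeholders:

```python
import json

# Illustrative state machine: split the file, process chunks in parallel,
# then load the results into Redshift. ARNs are hypothetical.
state_machine = {
    "StartAt": "SplitFile",
    "States": {
        "SplitFile": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:split-csv",
            "Next": "ProcessChunks",
        },
        "ProcessChunks": {
            "Type": "Map",              # iterates over the chunk list in parallel
            "ItemsPath": "$.chunks",
            "MaxConcurrency": 10,
            "Iterator": {
                "StartAt": "TransformChunk",
                "States": {
                    "TransformChunk": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:111122223333:function:transform-chunk",
                        "End": True,
                    }
                },
            },
            "Next": "LoadToRedshift",
        },
        "LoadToRedshift": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:load-redshift",
            "End": True,
        },
    },
}
definition_json = json.dumps(state_machine)
```

State between steps lives in the execution's input/output rather than in any one function, which is what keeps the approach serverless.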
A developer needs to implement an AWS Lambda function in AWS account A that accesses an Amazon Simple Storage Service (Amazon S3) bucket in AWS account B. As a Solutions Architect, which of the following will you recommend to meet this requirement?
- Create an IAM role for the AWS Lambda function that grants access to the Amazon S3 bucket. Set the IAM role as the AWS Lambda function's execution role. Make sure that the bucket policy also grants access to the AWS Lambda function's execution role
- The Amazon S3 bucket owner should make the bucket public so that it can be accessed by the AWS Lambda function in the other AWS account
- Create an IAM role for the AWS Lambda function that grants access to the Amazon S3 bucket. Set the IAM role as the Lambda function's execution role and that would give the AWS Lambda function cross-account access to the Amazon S3 bucket
- AWS Lambda cannot access resources across AWS accounts. Use Identity federation to work around this limitation of Lambda
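The first option above pairs the function's execution role in account A with a bucket policy in account B; the bucket-policy half might look like this (account IDs, role name, and bucket name are placeholders):

```python
# Illustrative bucket policy attached to account B's bucket, granting
# read access to account A's Lambda execution role. This complements the
# identity-based policy on the execution role itself in account A.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/lambda-exec-role"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::account-b-bucket",     # ListBucket applies here
                "arn:aws:s3:::account-b-bucket/*",   # GetObject applies to objects
            ],
        }
    ],
}
```

Cross-account access needs both halves to allow the request, which is why the option granting only the execution role is insufficient.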
An organization wants to delegate access to a set of users from the development environment so that they can access some resources in the production environment which is managed under another AWS account. As a solutions architect, which of the following steps would you recommend?
- Create new IAM user credentials for the production environment and share these credentials with the set of users from the development environment
- It is not possible to access cross-account resources
- Create a new IAM role with the required permissions to access the resources in the production environment. The users can then assume this IAM role while accessing the resources from the production environment
- Both IAM roles and IAM users can be used interchangeably for cross-account access
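The assume-role option above hinges on a trust policy in the production account. A minimal sketch (both account IDs and the role name are placeholders):

```python
# Illustrative trust policy for a role in the production account,
# allowing principals in the development account to assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},  # dev account
            "Action": "sts:AssumeRole",
        }
    ],
}
# A developer would then obtain temporary credentials via STS, e.g. with
# boto3 (not executed here, requires AWS credentials):
#   sts.assume_role(
#       RoleArn="arn:aws:iam::333333333333:role/prod-access",
#       RoleSessionName="dev-user-session",
#   )
```

Because the credentials are temporary and scoped to the role's permissions, no long-lived production credentials are ever shared.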
A financial services company is looking to move its on-premises IT infrastructure to AWS Cloud. The company has multiple long-term server-bound licenses across the application stack, and the CTO wants to continue utilizing those licenses after moving to AWS. As a solutions architect, which of the following would you recommend as the MOST cost-effective solution?
- Use Amazon EC2 on-demand instances
- Use Amazon EC2 dedicated hosts
- Use Amazon EC2 reserved instances (RI)
- Use Amazon EC2 dedicated instances
A retail company maintains an AWS Direct Connect connection to AWS and has recently migrated its data warehouse to AWS. The data analysts at the company query the data warehouse using a visualization tool. The average size of a query returned by the data warehouse is 60 megabytes and the query responses returned by the data warehouse are not cached in the visualization tool. Each webpage returned by the visualization tool is approximately 600 kilobytes. Which of the following options offers the LOWEST data transfer egress cost for the company?
- Deploy the visualization tool on-premises. Query the data warehouse directly over an AWS Direct Connect connection at a location in the same AWS region
- Deploy the visualization tool in the same AWS region as the data warehouse. Access the visualization tool over a Direct Connect connection at a location in the same region
- Deploy the visualization tool in the same AWS region as the data warehouse. Access the visualization tool over the internet at a location in the same region
- Deploy the visualization tool on-premises. Query the data warehouse over the internet at a location in the same AWS region
An Internet of Things (IoT) company would like to have a streaming system that performs real-time analytics on the ingested IoT data. Once the analytics are complete, the company would like to send notifications back to the mobile applications of the IoT device owners. As a solutions architect, which of the following AWS technologies would you recommend to send these notifications to the mobile applications?
- Amazon Kinesis with Amazon Simple Email Service (Amazon SES)
- Amazon Kinesis with Amazon Simple Queue Service (Amazon SQS)
- Amazon Kinesis with Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS) with Amazon Simple Notification Service (Amazon SNS)
A company has a web application that runs on Amazon EC2 instances in an Auto Scaling group. The application connects to an Amazon RDS for MySQL DB instance. A solutions architect needs to design a solution to protect the database credential. The solution must rotate the credentials automatically every 30 days. Which solution meets these requirements?
- Store the database credentials in an encrypted S3 bucket. Use an Amazon EventBridge rule to trigger a rotation function.
- Store the database credentials in AWS Secrets Manager. Configure automatic rotation for the secret.
- Embed the credentials in the application code and use AWS KMS to encrypt the code.
- Store the database credentials in AWS Systems Manager Parameter Store as a SecureString. Use an AWS Lambda function to rotate the credentials.
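For the Secrets Manager option above, the 30-day requirement maps directly onto the secret's rotation rules. A hedged sketch (secret name and Lambda ARN are placeholders; with boto3 the dict would go to `secretsmanager.rotate_secret(**rotation_params)`, omitted since it needs AWS credentials):

```python
# Illustrative rotation configuration: Secrets Manager invokes the
# rotation Lambda on the schedule below, with no application changes.
rotation_params = {
    "SecretId": "prod/mysql/app-credentials",  # placeholder secret name
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation",
    "RotationRules": {"AutomaticallyAfterDays": 30},  # matches the 30-day requirement
}
```

Secrets Manager's built-in rotation is what distinguishes it here from Parameter Store, which would require wiring up the rotation Lambda and schedule yourself.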
A company plans to migrate a legacy application to AWS. The application requires a shared file system that supports the SMB protocol and integrates with Microsoft Active Directory. Which storage service should the solutions architect recommend?
- Amazon EFS
- Amazon EBS Multi-Attach
- Amazon S3
- Amazon FSx for Windows File Server
A solutions architect is designing a disaster recovery strategy for a mission-critical application. The application must have an RPO of less than 15 minutes and an RTO of less than 1 hour. The application runs on EC2 instances and uses an RDS database. Which strategy is the MOST cost-effective solution that meets these requirements?
- Pilot Light
- Warm Standby
- Backup and Restore
- Multi-Site Active/Active
A company needs to store object data in Amazon S3. The data is accessed frequently for the first 30 days, then infrequently for the next 60 days, and rarely after 90 days. The data must remain immediately accessible at all times. Which S3 storage class is the MOST cost-effective?
- S3 Intelligent-Tiering
- S3 Standard-IA
- S3 Glacier Deep Archive
- S3 Standard
An application running on Amazon EC2 instances needs to access an Amazon DynamoDB table in the same Region. The company's security policy prohibits traffic from traversing the public internet. How should the solutions architect configure the network?
- Establish a VPC peering connection to the DynamoDB service VPC.
- Use an AWS Direct Connect connection to DynamoDB.
- Configure a NAT gateway in a public subnet.
- Create a VPC gateway endpoint for DynamoDB and update the route table.
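The gateway-endpoint option above can be sketched as the parameters such an endpoint needs (VPC, route table, and Region values are placeholders; with boto3 the dict would be passed to `ec2.create_vpc_endpoint(**endpoint_params)`):

```python
# Illustrative gateway endpoint for DynamoDB: traffic to the service is
# routed via a prefix-list entry added to the listed route tables, so it
# never leaves the AWS network or traverses the public internet.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0abc123",                              # placeholder VPC ID
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",   # Region-specific service name
    "RouteTableIds": ["rtb-0def456"],                    # placeholder route table
}
```

Gateway endpoints for DynamoDB and S3 also carry no hourly or data-processing charge, unlike NAT gateways.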
A financial services company has a web application with an application tier running in the U.S. and Europe. The database tier consists of a MySQL database running on Amazon EC2 in us-west-1. Users are directed to the closest application tier using Route 53 latency-based routing. The users in Europe have reported poor performance when running queries. Which changes should a Solutions Architect make to the database tier to improve performance?
- Migrate the database to Amazon Redshift. Use AWS DMS to synchronize data. Configure applications to use the Redshift data warehouse for queries.
- Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in one of the European Regions.
- Create an Amazon RDS Read Replica in one of the European regions. Configure the application tier in Europe to use the read replica for queries.
- Migrate the database to an Amazon Aurora global database in MySQL compatibility mode. Configure the application tier in Europe to use the local reader endpoint.
A surveying team is using a fleet of drones to collect images of construction sites. The surveying team's laptops lack the built-in storage and compute capacity to transfer and process the images. While the team has Amazon EC2 instances for processing and Amazon S3 buckets for storage, network connectivity is intermittent and unreliable. The images need to be processed to evaluate the progress of each construction site. What should a solutions architect recommend?
- Cache the images locally on a hardware appliance pre-installed with AWS Storage Gateway to process the images when connectivity is restored.
- Process and store the images using AWS Snowball Edge devices.
- During intermittent connectivity to EC2 instances, upload images to Amazon SQS.
- Configure Amazon Kinesis Data Firehose to create multiple delivery streams aimed separately at the S3 buckets for storage and the EC2 instances for processing the images.
A company is working with a strategic partner that has an application that must be able to send messages to one of the company’s Amazon SQS queues. The partner company has its own AWS account. How can a Solutions Architect provide least privilege access to the partner?
- Update the permission policy on the SQS queue to grant the sqs:SendMessage permission to the partner’s AWS account.
- Update the permission policy on the SQS queue to grant all permissions to the partner’s AWS account.
- Create a cross-account role with access to all SQS queues and use the partner's AWS account in the trust document for the role.
- Create a user account and grant the sqs:SendMessage permission for Amazon SQS. Share the credentials with the partner company.
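The least-privilege path here is a resource-based policy on the queue itself; a minimal sketch (account IDs, Region, and queue name are placeholders):

```python
# Illustrative SQS queue policy: the partner account may only send
# messages to this one queue, and nothing else.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # partner account
            "Action": "sqs:SendMessage",                              # single action only
            "Resource": "arn:aws:sqs:us-east-1:111122223333:orders-queue",
        }
    ],
}
```

Scoping both the action and the resource is what makes this least privilege, versus granting all SQS permissions or sharing user credentials.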
A Solutions Architect has deployed an application on several Amazon EC2 instances across three private subnets. The application must be made accessible to internet-based clients with the least amount of administrative effort. How can the Solutions Architect make the application available on the internet?
- Create an Amazon Machine Image (AMI) of the instances in the private subnet and launch new instances from the AMI in public subnets. Create an Application Load Balancer and add the public instances to the ALB.
- Create a NAT gateway in a public subnet. Add a route to the NAT gateway to the route tables of the three private subnets.
- Create an Application Load Balancer and associate three private subnets from the same Availability Zones as the private instances. Add the private instances to the ALB.
- Create an Application Load Balancer and associate three public subnets from the same Availability Zones as the private instances. Add the private instances to the ALB.
A fintech company is modernizing its payments processing system to adopt a serverless microservices architecture. The company wants to decouple its services and implement an event-driven architecture to support a publish/subscribe (pub/sub) model. The system needs to notify multiple downstream services when payment events occur, ensuring scalability and low operational overhead. Which solution will meet these requirements MOST cost-effectively?
- Use Amazon Kinesis Data Firehose to deliver payment events to multiple S3 buckets. Configure downstream services to poll the buckets for event processing.
- Configure an Amazon SNS topic to receive payment events from an AWS Lambda function. Set up multiple subscribers, such as Lambda functions, to process the events.
- Configure an Amazon EventBridge rule to capture payment events and route them to multiple AWS Lambda functions that handle downstream processing.
- Use Amazon MQ as a message broker to enable publish/subscribe communication between the payment microservices and the downstream services.
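To illustrate the SNS fan-out option above: the producer publishes one event and SNS delivers a copy to every subscriber, so downstream services never poll. A hedged sketch (topic ARN and field names are illustrative; the actual `sns.publish(**params)` call is omitted since it needs AWS credentials):

```python
import json

def build_publish_params(event_type, payment_id, amount):
    """Build the arguments a payments producer would pass to sns.publish()."""
    return {
        "TopicArn": "arn:aws:sns:us-east-1:111122223333:payment-events",  # placeholder
        "Message": json.dumps({"paymentId": payment_id, "amount": amount}),
        "MessageAttributes": {
            # Subscribers can apply filter policies on this attribute,
            # so each downstream service receives only relevant events.
            "eventType": {"DataType": "String", "StringValue": event_type},
        },
    }

params = build_publish_params("payment.settled", "pay_123", 42.50)
```

Message attributes plus subscription filter policies are what let one topic serve many downstream services without extra routing code.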