AWS Certified Solutions Architect - Associate (SAA-C03)

- 556 exam-style questions
- Detailed explanations and references
- Simulation and custom modes
- Custom exam settings to drill down into specific topics
- 180-day access period
- Pass or money back guarantee
What is in the package
The tone and tenor of the questions mimic the actual exam. Along with detailed explanations and exam tips, we have extensively referenced AWS documentation to bring you up to speed on every domain area tested on the SAA-C03 exam.
Complete SAA-C03 domains coverage
CertVista SAA-C03 is organized into the same four domains that the exam covers.
1. Design Secure Architectures
Designing secure architectures is covered in domain one. This domain reviews designing secure access to AWS resources, application tiers, and data security options.
2. Design Resilient Architectures
Domain two covers designing resilient architectures. This domain focuses on multi-tier solutions, highly available and fault-tolerant architectures, AWS services used as decoupling mechanisms, and appropriate resilient storage options.
3. Design High-Performing Architectures
Domain three, which is about designing high-performing architectures, will guide you through identifying elastic and scalable compute and storage solutions, and selecting high-performing network and database solutions.
4. Design Cost-Optimized Architectures
The fourth and final domain focuses on designing cost-optimized architectures. This is where you'll look at identifying cost-effective storage, compute, database solutions, and designing cost-optimized network architectures.
CertVista's Solutions Architect - Associate question bank features hundreds of exam-style questions meticulously designed to mirror the actual SAA-C03 certification exam. Practice with a variety of question formats, including multiple-choice, multiple-response, and complex scenario-based questions focused on real-world AWS architectural challenges. Our exam engine familiarizes you with the testing interface and conditions, ensuring you can confidently approach your certification day.
Every CertVista SAA-C03 question includes comprehensive explanations and direct references. These explanations break down the core AWS architectural principles, link to relevant official AWS documentation, and clarify common design and implementation misconceptions. You'll understand why the correct answer best meets the architectural requirements posed in the question and why alternative solutions fall short.
CertVista provides two powerful study modes tailored for SAA-C03 preparation. Use Custom Mode for targeted practice on specific AWS architectural domains, perfect for reinforcing knowledge in areas like designing secure, resilient, high-performing, or cost-optimized architectures. Switch to Simulation Mode to experience the complete 130-minute exam environment with realistic time constraints and question weighting, effectively building your test-taking endurance and confidence.
Leverage the CertVista analytics dashboard for insights into your AWS Certified Solutions Architect - Associate preparation journey. Monitor your performance across all four exam domains: Design Secure Architectures, Design Resilient Architectures, Design High-Performing Architectures, and Design Cost-Optimized Architectures. Pinpoint knowledge gaps, refine your study plan for maximum efficiency, and gain the confidence to know exactly when you're prepared to pass the SAA-C03 exam.
What's in the SAA-C03 exam
The SAA-C03 certification exam is intended for individuals with a solutions architect role and one or more years of experience designing highly available, cost-efficient, fault-tolerant, and scalable distributed systems. The AWS Certified Solutions Architect - Associate certification validates your ability to design and implement systems on AWS, highlighting your capacity to incorporate resiliency, high performance, security, and cost optimization.
During the exam, you'll be asked to complete tasks such as evaluating the design and implementation of multi-tiered and highly available architectures, scaling compute and storage solutions, identifying cost-effective solutions, and more. The AWS Certified Solutions Architect - Associate certification validates that you have the knowledge, depth, and experience to do the following:
- Design solutions incorporating AWS services to meet current business requirements and projected needs.
- Design architectures that are secure, resilient, high-performing, and cost-optimized.
- Review existing solutions and determine improvements.
What are the questions like on the exam?
There are two types of questions on the exam: multiple choice, which has one correct response and three incorrect responses (distractors), and multiple response, which has two or more correct responses out of five or more response options.
During the exam, you will be asked to choose the best answer for scenarios that involve designing and implementing systems on AWS, demonstrating your ability to incorporate resiliency, high performance, security, and cost optimization. The questions tend to be longer and more complex than you might expect from an associate-level certification exam. Most questions involve lengthy scenarios, usually several sentences to a couple of paragraphs.
Most of the answer choices will be several sentences long as well. So, take your time as you're reading through these longer questions, and be sure to process every word you read in detail. Be on the lookout for repeated sentences across all of the possible answers with just a word or two change.
Those one or two words can make all the difference when determining which answer is correct and which might be a distractor. Always do your best to eliminate these distractors as early as possible so you can focus more on the plausible answers and select the best answer to each question.
Sample SAA-C03 questions
Get a taste of the AWS Certified Solutions Architect - Associate exam with our carefully curated sample questions below. These questions mirror the actual exam's style, complexity, and subject matter, giving you a realistic preview of what to expect. Each question comes with comprehensive explanations, relevant AWS documentation references, and valuable test-taking strategies from our expert instructors.
While these sample questions provide excellent study material, we encourage you to try our free demo for the complete exam preparation experience. The demo features our state-of-the-art test engine that simulates the real exam environment, helping you build confidence and familiarity with the exam format. You'll experience timed testing, question marking, and review capabilities – just like the actual AWS certification exam.
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon CloudWatch metrics.
What should the company do to obtain access to customer accounts in the most secure way?
Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the company’s account.
Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store customer access and secret keys in a secrets management system.
Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and CloudWatch permissions.
The most secure way for an external service to obtain programmatic access to a customer’s AWS account, especially for a read-only purpose such as describing Amazon EC2 instances and retrieving Amazon CloudWatch metrics, is to allow the external service to assume an IAM role that the customer defines in their own account. The main advantage of cross-account roles is that the company never receives or stores long-term credentials; instead, it obtains temporary credentials through AWS STS when it needs to make API calls. This configuration follows the principle of least privilege and ensures that the permission scope is limited to exactly what the external service requires.
An example scenario would be for the customer to create an IAM role with the necessary read-only permissions for EC2 and CloudWatch, along with a trust policy that trusts the company’s AWS account ID. Whenever the feature in the monitoring service needs to access the customer’s resources, it calls the "AssumeRole" API on the customer’s role using AWS STS. This approach helps reduce risk since no static credentials are shared or stored, and the permissions can be enforced and audited in the customer account.
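A minimal boto3 sketch of this pattern is shown below; the role ARN and session name are hypothetical placeholders, and the customer's role must include a trust policy for the company's account for the AssumeRole call to succeed.

```python
import boto3

# Hypothetical customer role ARN for illustration only.
CUSTOMER_ROLE_ARN = "arn:aws:iam::111122223333:role/MonitoringReadOnlyRole"

def get_customer_session():
    """Assume the customer's read-only role and return a session built on temporary credentials."""
    sts = boto3.client("sts")
    response = sts.assume_role(
        RoleArn=CUSTOMER_ROLE_ARN,
        RoleSessionName="infra-monitoring",
    )
    creds = response["Credentials"]  # temporary, auto-expiring credentials from AWS STS
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

session = get_customer_session()
ec2 = session.client("ec2")
cloudwatch = session.client("cloudwatch")
instances = ec2.describe_instances()                      # read-only EC2 call
metrics = cloudwatch.list_metrics(Namespace="AWS/EC2")    # read-only CloudWatch call
```

Because the credentials returned by AssumeRole expire automatically, the company never stores anything long-lived for the customer account.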
Storing and encrypting long-term credentials (such as access keys) in one account for multiple customers is generally discouraged because those credentials could be misused if compromised, and rotating them is more challenging. Using cross-account roles with AssumeRole not only reduces this management overhead but also ensures credential transparency, accountability, and easy rotation (each organization can simply update or revoke the trust when needed).
When you see a question about third-party or cross-account access, remember that relying on IAM roles and AWS STS is considered best practice for security. Long-term credentials pose unnecessary security risks.
An application uses an Amazon RDS for MySQL DB instance. The RDS database is running low on disk space. A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the least amount of effort?
Change the RDS database instance storage type to Provisioned IOPS
Enable storage autoscaling in RDS
Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance
Increase the RDS database instance size
Enabling storage autoscaling in Amazon RDS is the most straightforward way to address low disk space without incurring downtime. By allowing RDS to automatically increase storage capacity as needed, you avoid manual interventions such as taking snapshots, restoring from backups, or changing the instance storage type. Simply modifying the configuration to enable storage autoscaling takes care of unexpected increases in data growth and helps ensure that the database remains available and performant.
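As a rough sketch, enabling storage autoscaling on an existing instance amounts to setting a maximum allocated storage value; the instance identifier and limit below are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# MaxAllocatedStorage is the ceiling (in GiB) up to which RDS may automatically
# grow the instance's allocated storage as the database fills up.
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",  # hypothetical instance name
    MaxAllocatedStorage=1000,                  # enables storage autoscaling up to 1,000 GiB
    ApplyImmediately=True,
)
```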
Other approaches either do not directly address the underlying storage issue or require more complex, time-consuming procedures that could disrupt application availability. Backing up the database and restoring it would involve additional steps and potential downtime, and changing the instance storage type or instance class alone does not guarantee additional disk space without further action.
Make sure you look for the least disruptive method to solve capacity problems. Look for built-in RDS features like autoscaling to avoid maintenance overhead and potential downtime.
A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for customers to use for self-service purposes.
Which solution will meet these requirements?
Create AWS Config items for the customers.
Create AWS Service Catalog products for the customers.
Create AWS Systems Manager templates for the customers.
Create AWS CloudFormation templates for the customers.
Using AWS Service Catalog is the best way to centrally manage and deploy a standard set of solutions and tools that customers can consume for self-service purposes. AWS Service Catalog allows administrators to create, organize, and govern commonly used IT services, and it presents them as products that other users can provision on their own. This structure ensures that deployments remain consistent and aligns with the company's best practices and security requirements. Additionally, it provides fine-grained access control, centralized governance, and the ability to track provisioning and usage, which are key benefits that other services do not directly offer for product-based self-service.
For instance, a consulting firm building a standardized data processing solution could define a product in AWS Service Catalog with an underlying AWS CloudFormation template that includes an Amazon S3 bucket and an Amazon EMR cluster. The company’s customers can then select this product from the catalog and quickly launch a pre-approved, repeatable setup without having to manage the CloudFormation template themselves or worry about adjustable configurations that might cause inconsistencies.
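A simplified boto3 sketch of registering such a product is shown below; the product name, owner, and template URL are hypothetical, and in practice you would also add the product to a portfolio and grant customers access to it.

```python
import boto3

servicecatalog = boto3.client("servicecatalog")

# Hypothetical product definition backed by a CloudFormation template stored in S3.
response = servicecatalog.create_product(
    Name="DataProcessingStack",
    Owner="Consulting Co.",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Info": {
            # Template that defines, for example, an S3 bucket and an EMR cluster.
            "LoadTemplateFromURL": "https://s3.amazonaws.com/example-bucket/data-processing.yaml"
        },
        "Type": "CLOUD_FORMATION_TEMPLATE",
    },
)
product_id = response["ProductViewDetail"]["ProductViewSummary"]["ProductId"]
print(product_id)
```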
Remember that although AWS CloudFormation templates, AWS Systems Manager templates, or AWS Config could play a role in parts of the solution, the unique benefit of AWS Service Catalog is that it packages these capabilities in a self-service product model while adding a management and governance layer.
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the least effort?
Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
Search CloudTrail logs with Amazon Athena queries to identify the errors.
Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Using Amazon Athena to query AWS CloudTrail logs offers a straightforward way to find and analyze Access Denied or Unauthorized errors without requiring you to write extensive custom code or manage additional infrastructure. Because Athena can query log data directly in Amazon S3 with standard SQL statements, you significantly reduce operational overhead. In contrast, approaches that involve creating custom scripts or building dashboards typically require more time and effort to deploy and maintain, such as writing and updating code or configuring data visualization tools. By leveraging Athena’s integration with AWS CloudTrail and S3, you can rapidly pinpoint errors related to Identity and Access Management (IAM) permissions and speed up incident analysis and troubleshooting.
For example, you could store CloudTrail logs in S3, configure an AWS Glue crawler to make these logs queryable as a table in Athena, and run SQL queries to filter on event names or error messages containing “AccessDenied” or “UnauthorizedOperation.” Instantly returning query results in the Athena console allows you to zero in on any issues with minimal setup time.
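For illustration, a query along these lines could be submitted with boto3; the database, table name, and results location below are hypothetical and assume a CloudTrail table has already been defined in the Glue Data Catalog.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical CloudTrail table; standard CloudTrail tables expose errorcode/errormessage columns.
query = """
SELECT eventtime, eventsource, eventname, errorcode, errormessage, useridentity.arn
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # hypothetical results bucket
)
print(response["QueryExecutionId"])
```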
When an exam question asks for the “least effort” or “minimal management overhead,” think of managed AWS services like Athena. They often remove the need to manage and update custom deployments while remaining flexible enough for on-demand analytics.
A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database administrator recently failed over the application's Amazon Aurora PostgreSQL database writer instance as part of a scaling exercise. The failover resulted in 3 minutes of downtime for the application.
Which solution will reduce the downtime for scaling exercises with the least operational overhead?
Set up an Amazon RDS Proxy for the database. Update the application to use the proxy endpoint.
Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the application to use the secondary cluster's writer endpoint.
Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
Using Amazon RDS Proxy to manage connections for the Aurora PostgreSQL database reduces downtime during failover and scaling events because the proxy handles the underlying database connectivity for the application. When changes occur to the database writer instance, the application’s existing connections remain open through the proxy, minimizing the failover window and requiring far less manual intervention. This results in a much faster switchover and helps avoid dropping or reinitializing client connections.
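As a sketch of the application-side change, the application simply connects to the proxy endpoint instead of the cluster writer endpoint; the endpoint, database name, and credentials below are hypothetical, and psycopg2 is just one example PostgreSQL driver.

```python
import psycopg2  # assumes the psycopg2 PostgreSQL driver is installed

# Hypothetical RDS Proxy endpoint. During a writer failover, the proxy re-routes
# traffic to the new writer while the client connection stays open.
conn = psycopg2.connect(
    host="my-app-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="example-password",  # in practice, fetch this from AWS Secrets Manager
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```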
Other approaches do not match the needs of reducing downtime with minimal operational overhead. Simply creating more read replicas does not minimize writer failover downtime—read replicas primarily serve read requests but do not automatically reduce the failover delay for write operations. Manually setting up a secondary cluster and shifting endpoints requires more operational overhead and is prone to errors if failover procedures are not automated. Using an ElastiCache cluster can offload some read traffic to the cache layer, but it does not address the database failover downtime for write queries.
Ensure you understand how Amazon RDS Proxy works to keep client connections intact and minimize downtime. Familiarity with failover strategies for Amazon Aurora and knowing how proxy-based architectures seamlessly mask failover events is often tested in architecture design questions.
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora database cluster that extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Which solution will provide the most fault tolerance?
Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
A multi-Region deployment that uses Amazon Aurora Global Database provides the highest level of fault tolerance with minimal downtime. Aurora Global Database can replicate data across Regions with minimal lag while offering fast local reads and near-instantaneous recovery if a Region becomes unavailable. By deploying the web tier and application tier in the second Region, you have a fully functional stack available for failover. Using Amazon Route 53 health checks with a failover routing policy ensures that traffic is redirected to the healthy Regional endpoint when failures are detected. This setup helps you promote the secondary database cluster to the primary role quickly if the primary Region goes down, reducing downtime significantly.
Other approaches, such as using AWS DMS or stretching an Auto Scaling group across Regions, are either less efficient or not feasible. For instance, AWS DMS is often used for data migration or continuous replication scenarios, but it would not provide the same near real-time failover capabilities that Aurora Global Database offers. Furthermore, an Auto Scaling group operates within a single Region; you would typically set up a separate Auto Scaling group in each Region instead of stretching one group across Regions.
Example:
Imagine you have a streaming platform that primarily serves users in North America. As your user base expands to Europe, deploying your web and application tiers in a new European Region alongside an Aurora Global Database secondary cluster will allow users there to enjoy low-latency connections. If the primary North American Region comes under stress or experiences an outage, Amazon Route 53 health checks detect the Regional failure and promptly redirect traffic to the European deployment, where the secondary Aurora cluster can quickly be promoted to primary.
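A boto3 sketch of the Route 53 failover routing piece might look like the following; the hosted zone ID, health check ID, ALB DNS names, and alias hosted zone IDs are hypothetical placeholders you would replace with your own values.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hypothetical hosted zone for example.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "hc-primary-id",  # hypothetical health check on the primary Region
                    "AliasTarget": {
                        "HostedZoneId": "Z35SXDOTRQ7X7K",  # placeholder ELB alias zone ID
                        "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-eu",
                    "Failover": "SECONDARY",
                    "AliasTarget": {
                        "HostedZoneId": "Z32O12XQLNTSW2",  # placeholder ELB alias zone ID
                        "DNSName": "secondary-alb-456.eu-west-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            },
        ]
    },
)
```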
When planning global expansions with minimal downtime, focus on solutions that not only replicate data across Regions but can also fail over rapidly. Aurora Global Database and Route 53 failover are a common pattern in architecting highly available and fault-tolerant applications.
A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases. The company wants to use AWS Cloud solutions to increase security and reduce operational overhead for the databases.
Which solution will meet these requirements?
Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.
Migrate the databases to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.
Migrate the databases to Amazon RDS. Configure encryption at rest.
Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
A fully managed DB service that supports encryption at rest provides a secure and straightforward way to handle transactional workloads while minimizing administrative efforts. By using Amazon RDS with encryption enabled, it is possible to protect sensitive data on disk and in automated backups without needing to manage the underlying database infrastructure. This approach offloads patching, backups, scaling, and high availability to AWS, thereby reducing operational overhead.
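A minimal boto3 sketch of creating an encrypted instance is shown below; the identifiers are hypothetical, and note that encryption must be enabled when the instance is created (an existing unencrypted instance is typically encrypted by restoring from an encrypted copy of a snapshot).

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="transactions-db",   # hypothetical instance name
    DBInstanceClass="db.m6g.large",
    Engine="mysql",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # let RDS manage the master password in Secrets Manager
    StorageEncrypted=True,          # encryption at rest via AWS KMS
    KmsKeyId="alias/aws/rds",       # default AWS managed key; a customer managed key also works
)
```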
On the other hand, storing databases on Amazon EC2 requires more manual management of backups, patching, and resilience. Using Amazon CloudWatch Logs primarily provides observability rather than comprehensive data encryption. Storing data in Amazon S3 with Amazon Macie helps discover and protect sensitive data in object storage, but this does not address the requirement of hosting transactional databases with reduced overhead—and Macie is not a database solution.
Remember that encryption at rest in Amazon RDS integrates with AWS KMS to automatically encrypt DB instances, snapshots, backups, and read replicas, significantly reducing the effort of maintaining security compliance.
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.
Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
Using AWS Lambda for processing provides automatic scaling, which allows the application to handle varying concurrency without having to manually provision or manage servers. Storing the photos in Amazon S3 is cost-effective and ensures virtually unlimited scalability. Retaining Amazon DynamoDB for the metadata is also ideal because DynamoDB offers consistent performance at any scale, especially when properly using features like auto scaling.
A solution that stores both the photos and the metadata in DynamoDB can become costly and is not the recommended practice for storing large objects such as photos. Simply increasing the number of Amazon EC2 instances and using Provisioned IOPS volumes can help with performance but does not natively handle highly variable traffic without additional services or elasticity configurations. Amazon Kinesis Data Firehose is designed for continuous streaming ingestion and delivery, which does not match the image processing requirement as effectively as a serverless approach with Lambda and S3.
In a real-world scenario, this strategy provides a scalable architecture for image processing. For instance, the Lambda function can be triggered each time an object is put into an S3 bucket. The function can then apply the desired transformations or frames to the uploaded photo. DynamoDB remains well-suited for storing metadata such as user preferences, timestamps, and relevant attributes about the photo frames.
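A skeleton of such a Lambda function is sketched below; the bucket layout, table name, key schema, and apply_frame helper are hypothetical placeholders for the actual image-processing logic.

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
metadata_table = dynamodb.Table("PhotoMetadata")  # hypothetical table name

def handler(event, context):
    """Triggered by S3 object-created events; applies the frame the user requested."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Look up which frame the user requested for this photo (hypothetical key schema).
        item = metadata_table.get_item(Key={"photoKey": key}).get("Item", {})
        frame = item.get("frame", "none")

        # Download the image, apply the frame, and write the result back under a
        # separate prefix (configure the trigger to exclude this prefix to avoid re-invocation).
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        framed = apply_frame(original, frame)  # hypothetical image-processing helper
        s3.put_object(Bucket=bucket, Key=f"processed/{key}", Body=framed)
```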
For test takers, remember that AWS Lambda and Amazon S3 are a common pairing for serverless file processing tasks that can seamlessly scale up and down without upfront server provisioning. DynamoDB is appropriate for storing JSON-like metadata with low-latency reads and writes. When you see a question about unpredictable or spiky traffic, serverless architectures can quickly adapt to demand while preventing over-provisioning.
A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution.
Which storage solution meets these requirements MOST cost-effectively?
Amazon Elastic Block Store (Amazon EBS)
Amazon OpenSearch Service (Amazon Elasticsearch Service)
Amazon S3
Amazon Elastic File System (Amazon EFS)
In this scenario, a web-based application must read and store a large repository of text documents—totaling 900 TB. Amazon Simple Storage Service (Amazon S3) provides virtually unlimited scalability at a more cost-effective rate compared to alternatives, such as Amazon EBS or Amazon EFS. Employing an object-based storage solution is ideal for this workload because it can handle high throughput and deliver the necessary durability and availability. By contrast, a service like Amazon OpenSearch Service (Amazon Elasticsearch Service) is well-suited for search and analytics but would be prohibitively expensive to store all 900 TB of data long-term.
Using Amazon S3 also offers flexible storage classes (e.g., S3 Standard, S3 Intelligent-Tiering, or S3 Glacier) that can further optimize storage costs, especially for data that is accessed infrequently. This combination of scalability and tiered pricing makes Amazon S3 the most cost-effective choice for hosting and delivering large volumes of static data.
Example
A common approach is to serve static files—such as large text documents—from an S3 bucket using presigned URLs or by fronting S3 with Amazon CloudFront as a content delivery network to reduce latency and offload traffic from origin servers. The combination of S3, CloudFront, and EC2 ensures that the solution can auto scale when needed while minimizing overall costs.
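For example, generating a time-limited presigned URL for a single document is a short boto3 call; the bucket and key below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# The presigned URL grants temporary read access to one object
# without making the bucket public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-docs-bucket", "Key": "reports/annual-report.txt"},
    ExpiresIn=3600,  # URL is valid for one hour
)
print(url)
```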
Review various Amazon S3 storage classes and ensure you understand how storage class transitions can help balance performance requirements and budget constraints.
A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data. The company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and dynamic data. The company is using its own domain name registered with Amazon Route 53.
What should a solutions architect do to meet these requirements?
Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Configure Route 53 to route traffic to the CloudFront distribution.
Create an Amazon CloudFront distribution that has the S3 bucket as an origin. Create an AWS Global Accelerator standard accelerator that has the ALB and the CloudFront distribution as endpoints. Create a custom domain name that points to the accelerator DNS name. Use the custom domain name as an endpoint for the web application.
Create an Amazon CloudFront distribution that has the ALB as an origin. Create an AWS Global Accelerator standard accelerator that has the S3 bucket as an endpoint. Create two domain names. Point one domain name to the CloudFront DNS name for dynamic content. Point the other domain name to the accelerator DNS name for static content. Use the domain names as endpoints for the web application.
Create an Amazon CloudFront distribution that has the S3 bucket and the ALB as origins. Configure Route 53 to route traffic to the CloudFront distribution.
Serving both static and dynamic content through a single Amazon CloudFront distribution—where static content is sourced from Amazon S3 and dynamic content from the Application Load Balancer (ALB)—is a best practice for optimizing performance and reducing latency. CloudFront’s global network of edge locations helps cache and accelerate the delivery of both types of content to end users. By configuring multiple origins (S3 for static assets and ALB for dynamic requests) within the same CloudFront distribution, you can use path-based routing to cache static assets while allowing pass-through of dynamic requests to the ALB. Amazon Route 53 can then provide DNS resolution, typically via an alias record, that points to the CloudFront distribution. This approach simplifies your architecture and reduces cost by eliminating the need to introduce AWS Global Accelerator for this scenario.
Using AWS Global Accelerator is valuable in specific use cases—such as non-HTTP/HTTPS traffic or multi-region active-active ALB deployments—but for a standard web application serving static and dynamic data over HTTP/HTTPS, a single CloudFront distribution is typically sufficient to provide edge caching, global reach, and lower latency. Creating multiple domain names or separate endpoints would overcomplicate the setup and does not offer significant benefits over CloudFront alone in this common web-serving pattern.
Example
Assume you have an S3 bucket named “my-app-static-content” and an ALB with a DNS name of “my-app-alb-123456789.us-east-1.elb.amazonaws.com.” You can configure CloudFront with an origin for the S3 bucket to handle requests for paths like “/static/” and another origin for the ALB to handle “/api/” or “/dynamic/.” Then, you create a behavior in CloudFront that routes “/static/” to the S3 origin and another behavior that routes all other paths to the ALB origin. Finally, use Route 53 to create an alias record pointing to the CloudFront distribution, for example, “app.example.com → d1234abcdef.cloudfront.net.”
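A trimmed boto3 sketch of such a distribution is shown below; the bucket, ALB DNS name, and cache-policy IDs are placeholders (the IDs shown correspond to the managed CachingOptimized and CachingDisabled policies, which you should verify in your account), and the path pattern uses a wildcard to match everything under /static/.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

S3_ORIGIN_ID = "static-s3"
ALB_ORIGIN_ID = "dynamic-alb"
CACHING_OPTIMIZED = "658327ea-f89d-4fab-a63d-7e88639e58f6"  # managed CachingOptimized policy (verify)
CACHING_DISABLED = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad"   # managed CachingDisabled policy (verify)

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),
        "Comment": "Static content from S3, dynamic content from the ALB",
        "Enabled": True,
        "Origins": {
            "Quantity": 2,
            "Items": [
                {
                    "Id": S3_ORIGIN_ID,
                    "DomainName": "my-app-static-content.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                },
                {
                    "Id": ALB_ORIGIN_ID,
                    "DomainName": "my-app-alb-123456789.us-east-1.elb.amazonaws.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "https-only",
                    },
                },
            ],
        },
        # Default behavior passes everything that does not match /static/* to the ALB.
        "DefaultCacheBehavior": {
            "TargetOriginId": ALB_ORIGIN_ID,
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_DISABLED,
        },
        # Path-based behavior caches static assets served from the S3 origin.
        "CacheBehaviors": {
            "Quantity": 1,
            "Items": [
                {
                    "PathPattern": "/static/*",
                    "TargetOriginId": S3_ORIGIN_ID,
                    "ViewerProtocolPolicy": "redirect-to-https",
                    "CachePolicyId": CACHING_OPTIMIZED,
                }
            ],
        },
    }
)
```

In practice you would also restrict the S3 origin with origin access control and attach your custom domain and certificate to the distribution before pointing the Route 53 alias record at it.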
Look for key AWS services—CloudFront for caching and distribution, ALB for dynamic application traffic, S3 for static object storage, and Route 53 for DNS—when you see questions about global performance optimization. Understand how path-based routing in CloudFront lets you configure multiple origins in one distribution.