AWS Certified SysOps Administrator - Associate (SOA-C02)

- 475 exam-style questions
- Detailed explanations and references
- Simulation and custom modes
- Custom exam settings to drill down into specific topics
- 180-day access period
- Pass or money back guarantee
What is in the package
The content, tone, and tenor of the questions mimic the actual exam. Along with the detailed explanations and the exam-taker tips, we have extensively referenced AWS documentation to get you up to speed on all domain areas tested for the SOA-C02 exam.
Please consider this course the final pit stop so you can cross the finish line with absolute confidence and get AWS Certified! Trust our process; you are in good hands.
Complete SOA-C02 domains coverage
Preparing for any certification is a studious process. We created the CertVista SOA-C02 practice exams to align with the real exam blueprint so that you can study with confidence and pass the AWS Certified SysOps Administrator – Associate exam.
Monitoring, Logging, and Remediation
This domain tests your ability to implement and manage monitoring and logging solutions. It focuses on using AWS services like Amazon CloudWatch (Metrics, Alarms, Logs, Events), AWS CloudTrail, and AWS Config effectively. You'll need to demonstrate skills in monitoring environment health, detecting potential issues proactively, collecting and analyzing logs for troubleshooting and auditing, and setting up automated responses or remediation actions based on defined events or thresholds.
Reliability and Business Continuity
Here, the focus shifts to ensuring systems are resilient, highly available, and recoverable. This section assesses your understanding of implementing scalability using services like EC2 Auto Scaling and Elastic Load Balancing, achieving fault tolerance through strategies like Multi-AZ deployments, establishing robust backup and restore procedures using AWS Backup, S3 Versioning, RDS Snapshots, and comprehending disaster recovery approaches and metrics (RPO/RTO) within the AWS Cloud.
Deployment, Provisioning, and Automation
This domain validates your skills in deploying applications and infrastructure efficiently and consistently using automation. Key areas include infrastructure as code practices with AWS CloudFormation, using AWS Systems Manager for configuration management, patching, and task automation, leveraging services like AWS Elastic Beanstalk for application deployment, and understanding different deployment strategies (e.g., blue/green, canary, rolling) and their operational implications.
Security and Compliance
Security is paramount, and this domain covers implementing and managing security controls effectively within AWS. It assesses your knowledge of AWS Identity and Access Management (IAM) for managing users, groups, roles, and policies following least privilege principles, configuring network security elements like Security Groups and Network ACLs, implementing data encryption using services like AWS KMS, managing secrets securely, and understanding how AWS services align with common compliance frameworks.
Networking and Content Delivery
This section evaluates your knowledge of core AWS networking concepts and content delivery mechanisms. Topics include configuring and managing Amazon Virtual Private Clouds (VPCs), including subnets, route tables, internet gateways, and NAT gateways. It also covers VPC connectivity options like peering, AWS Direct Connect, and VPN, DNS management using Amazon Route 53, and optimizing content delivery globally using Amazon CloudFront.
Cost and Performance Optimization
Finally, this domain tests your ability to manage AWS resources efficiently from both a cost and performance perspective. It includes understanding AWS pricing models, utilizing tools like AWS Cost Explorer and AWS Budgets for tracking and alerting, identifying and implementing cost-saving strategies (e.g., Reserved Instances, Savings Plans, resource right-sizing), monitoring key performance metrics, and tuning resource configurations for optimal performance based on workload requirements.
What's in the SOA-C02 exam
Knowing the logistics of the SOA-C02 exam can help reduce test-day anxiety. The exam format is a mix of question types, including multiple choice (one correct answer) and multiple response (two or more correct answers). Historically, the SOA-C02 also incorporated exam labs, which required you to perform specific tasks directly within the AWS Management Console or using the AWS CLI. This practical component was a significant differentiator.
NOTE: As of March 28, 2023, the AWS Certified SysOps Administrator - Associate exam will not include exam labs until further notice. This removal of exam labs is temporary, but we will evaluate the exam labs and make improvements to provide an optimal candidate experience. With this change, the exam will consist of 50 scored and 15 unscored multiple-choice and multiple-response questions, with an exam time of 130 minutes.
The exam is scored on a scale from 100 to 1000, with a passing score set at 720. Remember this is a scaled score, not a simple percentage, meaning question difficulty influences the points awarded. The standard cost is typically $150 USD, although this can vary. You can take the exam in English, Japanese, Korean, or Simplified Chinese at a designated testing center or through online proctoring.
Tackling the exam labs
The exam labs are a defining feature of the SOA-C02, demanding genuine hands-on skill. You will face scenarios requiring specific configuration or troubleshooting actions within a live, timed AWS environment. Simply reading documentation won't suffice; practical experience is essential. You must be comfortable performing common operational tasks like configuring CloudWatch alarms, modifying security groups, troubleshooting connectivity, deploying resources with CloudFormation, and navigating IAM policies. Use the AWS Free Tier or other accounts to practice these activities extensively. While our practice exams are primarily question-based, they rigorously test the underlying knowledge needed to successfully navigate these labs. Familiarity with both the AWS console and CLI is non-negotiable.
Your path to SOA-C02 success
Passing the SOA-C02 requires a strategic blend of theoretical knowledge and practical application. Start by gaining significant hands-on experience with the relevant AWS services across all exam domains. Supplement this with diligent study of official AWS resources, including documentation, whitepapers (especially the Well-Architected Framework), and FAQs. Always thoroughly review AWS's official SOA-C02 exam guide to understand the objectives and scope.
Finally, integrate CertVista's high-quality practice exams into your preparation. This step is crucial for gauging your readiness. Our questions are specifically tailored for the SOA-C02, covering all domains comprehensively and mimicking the style and difficulty of real exam questions. They are invaluable for identifying your weak areas, building confidence, and testing your knowledge for both the multiple-choice sections and the practical labs.
Sample SOA-C02 questions
Get a taste of the AWS Certified SysOps Administrator - Associate exam with our carefully curated sample questions below. These questions mirror the actual exam's style, complexity, and subject matter, giving you a realistic preview of what to expect. Each question comes with comprehensive explanations, relevant AWS documentation references, and valuable test-taking strategies from our expert instructors.
While these sample questions provide excellent study material, we encourage you to try our free demo for the complete exam preparation experience. The demo features our state-of-the-art test engine that simulates the real exam environment, helping you build confidence and familiarity with the exam format. You'll experience timed testing, question marking, and review capabilities – just like the actual AWS certification exam.
A SysOps administrator is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server.
Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes. All other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both primary and secondary servers.
Which next step should be taken to configure Route 53?
Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 HTTP health check.
Create an A record for each server. Associate the records with the Route 53 TCP health check.
Create an alias record for each server with evaluate target health set to yes. Associate the records with the Route 53 TCP health check.
Create an A record for each server. Associate the records with the Route 53 HTTP health check.
The scenario requires configuring Amazon Route 53 failover routing for two on-premises web servers (primary active, secondary passive). Failover should be triggered based on the HTTP status code returned by the primary server. Since the servers are on-premises, they have specific IP addresses. The appropriate Route 53 record type to point a domain name to an IP address is an A record (for IPv4) or an AAAA record (for IPv6). Alias records are typically used to point to AWS resources like ELBs or CloudFront distributions.
The failover logic depends on checking the health of the primary server by verifying if it returns HTTP status codes 2xx or 3xx. This necessitates configuring a Route 53 HTTP health check (or HTTPS if applicable), as TCP health checks only verify connectivity on a specific port without checking the HTTP response status. This HTTP health check must then be associated with the A record corresponding to the primary server. Route 53 will monitor the health of the primary server using this health check. If the health check passes (returns 2xx or 3xx), Route 53 directs traffic to the primary server's A record. If the health check fails, Route 53 directs traffic to the secondary server's A record, assuming the secondary record is configured as the failover target.
Therefore, the correct approach involves creating standard A records for both the primary and secondary on-premises server IP addresses and associating the Route 53 HTTP health check with the primary server's A record.
When configuring Route 53 health checks for failover, ensure the record type matches the target (A/AAAA for IPs, Alias for AWS resources) and the health check type matches the protocol and validation method required (TCP, HTTP/S, String Matching). For failover routing, associate the health check with the primary record.
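To make the correct configuration concrete, here is a minimal boto3 sketch of an HTTP health check plus failover A records. The IP addresses, domain name, and hosted zone ID are hypothetical placeholders, not values from the question:

```python
import boto3

route53 = boto3.client("route53")

# HTTP health check against the primary server; it passes on 2xx/3xx responses
health_check = route53.create_health_check(
    CallerReference="primary-web-check-001",  # hypothetical unique reference
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",  # hypothetical primary server IP
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Failover A records: primary carries the health check, secondary is the fallback
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                    "HealthCheckId": health_check["HealthCheck"]["Id"],
                },
            },
            {
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],  # hypothetical secondary IP
                },
            },
        ]
    },
)
```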
An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A SysOps administrator must ensure that the application can read, write, and delete messages from the SQS queues.
Which solution will meet these requirements in the most secure manner?
Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues.
Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Export the IAM user's access key and secret access key as environment variables on the EC2 instance.
Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows sqs:* permissions to the appropriate queues.
Create an IAM user with an IAM policy that allows the sqs:SendMessage permission, the sqs:ReceiveMessage permission, and the sqs:DeleteMessage permission to the appropriate queues. Embed the IAM user's credentials in the application's configuration.
To securely grant permissions to applications running on Amazon EC2 instances, AWS recommends using IAM roles. Instead of creating and distributing AWS access keys, you can create an IAM role that has the necessary permissions and attach this role to the EC2 instance. Applications running on the instance can then retrieve temporary security credentials from the instance metadata service, eliminating the need to store long-term credentials on the instance itself. This approach significantly enhances security by avoiding the risks associated with managing static access keys, such as accidental exposure or the need for manual rotation.
This solution correctly implements this best practice by creating an IAM role associated with the EC2 instance. Furthermore, it adheres to the principle of least privilege by attaching an IAM policy that grants only the specific permissions required for the application to interact with Amazon SQS: sqs:SendMessage, sqs:ReceiveMessage, and sqs:DeleteMessage for the designated queues. Granting overly broad permissions, such as sqs:*, would violate the least privilege principle and potentially allow unintended actions. Storing IAM user credentials, either as environment variables or embedded in application configuration, is less secure than using IAM roles, as these credentials could be compromised if the instance is accessed inappropriately.
When dealing with permissions for AWS services running on EC2 instances, always look for solutions that utilize IAM roles attached to the instance profile. This is the most secure and standard AWS practice. Also, pay close attention to the permissions being granted; solutions adhering to the principle of least privilege (granting only necessary permissions) are preferable to those granting wildcard (*) permissions.
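As an illustration of what that least-privilege policy looks like in practice, here is a minimal boto3 sketch that attaches an inline policy to an instance role. The role name, policy name, queue ARN, and account ID are all hypothetical:

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: only the three SQS actions, scoped to one queue (hypothetical ARN)
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:app-queue",
        }
    ],
}

# Attach the policy inline to the role associated with the EC2 instance profile
iam.put_role_policy(
    RoleName="app-instance-role",          # hypothetical role name
    PolicyName="sqs-least-privilege",
    PolicyDocument=json.dumps(policy),
)
```

With the role attached to the instance profile, the application picks up temporary credentials automatically through the instance metadata service; no access keys ever touch the instance.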
A SysOps administrator needs to configure an Amazon S3 bucket to host a web application. The SysOps administrator has created the S3 bucket and has copied the static files for the web application to the S3 bucket.
The company has a policy that all S3 buckets must not be public.
What should the SysOps administrator do to meet these requirements?
Configure static website hosting in the S3 bucket. Use Amazon Route 53 to create a DNS CNAME to point to the S3 website endpoint.
Create an Application Load Balancer (ALB). Change the protocol to HTTPS in the ALB listener configuration. Forward the traffic to the S3 bucket.
Create an accelerator in AWS Global Accelerator. Set up a listener configuration for port 443. Set the endpoint type to forward the traffic to the S3 bucket.
Create an Amazon CloudFront distribution. Configure the S3 bucket as an origin with an origin access identity (OAI). Give the OAI the s3:GetObject permission in the S3 bucket policy.
To host a static website using Amazon S3 while ensuring the bucket remains private, the recommended approach is to use Amazon CloudFront with an Origin Access Identity (OAI). An OAI is a special CloudFront user that you associate with your distribution. You then configure the S3 bucket policy to grant this OAI permission (s3:GetObject) to read the objects in your bucket. This configuration allows CloudFront to fetch the content from the private S3 bucket and serve it to users, while direct access to the S3 bucket remains restricted, thereby adhering to the policy that the bucket must not be public. Users will access the web application through the CloudFront distribution's domain name.
Configuring static website hosting directly on the S3 bucket requires making the bucket's objects public, which violates the stated security policy. While an Application Load Balancer (ALB) can route traffic, it is not designed to directly serve static content from an S3 bucket as an origin in this manner; ALBs typically forward traffic to resources like EC2 instances or Lambda functions. Similarly, AWS Global Accelerator enhances availability and performance but does not directly serve content from a private S3 bucket; it usually directs traffic to endpoints like ALBs, NLBs, or EC2 instances.
Remember that serving static content from a private S3 bucket is a common requirement. The standard and most secure AWS solution for this involves using CloudFront with Origin Access Identity (OAI) or the newer Origin Access Control (OAC). If a question requires an S3 bucket to remain private while serving its content, look for CloudFront in the options.
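For reference, the bucket policy granting the OAI read access could be applied with a boto3 sketch like the one below. The OAI ID and bucket name are hypothetical, and new deployments would typically use Origin Access Control (OAC) instead:

```python
import json
import boto3

s3 = boto3.client("s3")

oai_id = "E2EXAMPLE1OAI"       # hypothetical OAI ID from the CloudFront distribution
bucket = "my-web-app-bucket"   # hypothetical bucket name

# Grant only s3:GetObject, and only to the CloudFront OAI principal
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```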
A company runs an application on hundreds of Amazon EC2 instances in three Availability Zones. The application calls a third-party API over the public internet. A SysOps administrator must provide the third party with a list of static IP addresses so that the third party can allow traffic from the application.
Which solution will meet these requirements?
Add a NAT gateway in the public subnet of each Availability Zone. Make the NAT gateway the default route of all private subnets in those Availability Zones.
Allocate one Elastic IP address in each Availability Zone. Associate the Elastic IP address with all the instances in the Availability Zone.
Update the main route table to send the traffic to the internet through an Elastic IP address that is assigned to each instance.
Place the instances behind a Network Load Balancer (NLB). Send the traffic to the internet through the private IP address of the NLB.
The requirement is to provide a static set of source IP addresses for outbound traffic originating from EC2 instances spread across multiple Availability Zones, likely residing in private subnets, destined for a third-party API on the public internet. The standard AWS solution for this scenario is to use NAT gateways. By placing a NAT gateway in a public subnet within each Availability Zone used by the application and configuring the route tables of the private subnets in each respective AZ to direct internet-bound traffic (0.0.0.0/0) to the NAT gateway, you achieve the desired outcome. Each NAT gateway is automatically assigned an Elastic IP address upon creation, which provides a stable, static public IP address. Instances within the private subnets will initiate outbound connections, and their traffic will be routed through the NAT gateway in their AZ. The source IP address observed by the third-party API will be the Elastic IP address of the corresponding NAT gateway. This setup ensures high availability because if one AZ fails, instances in other AZs can still route traffic through their respective NAT gateways. The list of static IPs to provide to the third party would consist of the Elastic IP addresses associated with each NAT gateway.
Associating an Elastic IP address directly with each instance is not feasible or scalable for hundreds of instances, and it doesn't address the need for instances in private subnets. Furthermore, a single Elastic IP cannot be associated with multiple instances simultaneously. Updating the main route table to route traffic through an EIP assigned to each instance is also incorrect; instances route based on their subnet's route table, and assigning EIPs to potentially private instances is generally not the intended architecture. A Network Load Balancer (NLB) is used for distributing incoming traffic to targets and does not provide outbound internet connectivity with a static source IP for the instances behind it.
When you need instances in private subnets to initiate outbound connections to the internet with a static source IP address, think of NAT Gateways. For high availability, deploy a NAT gateway in each Availability Zone used by your application and configure routing accordingly. The static IPs will be the Elastic IPs associated with the NAT gateways.
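A minimal boto3 sketch of the per-AZ setup follows; the subnet and route table IDs are hypothetical, and you would repeat these steps once per Availability Zone:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate the Elastic IP that becomes this AZ's static egress address
eip = ec2.allocate_address(Domain="vpc")

# Create the NAT gateway in the AZ's public subnet (hypothetical subnet ID)
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicAAA",
    AllocationId=eip["AllocationId"],
)
ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat["NatGateway"]["NatGatewayId"]]
)

# Point the AZ's private route table at the NAT gateway for internet-bound traffic
ec2.create_route(
    RouteTableId="rtb-0privateAAA",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)

# This is one of the static IPs you hand to the third party
print("Static egress IP for this AZ:", eip["PublicIp"])
```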
A company manages its multi-account environment by using AWS Organizations. The company needs to automate the creation of daily incremental backups of any Amazon Elastic Block Store (Amazon EBS) volume that is marked with a Lifecycle: Production tag in one of its primary AWS accounts.
The company wants to prevent users from using Amazon EC2 * permissions to delete any of these production snapshots.
What should a SysOps administrator do to meet these requirements?
Create a daily Amazon Machine Image (AMI) of every production EC2 instance within the AWS account by using Amazon Data Lifecycle Manager.
Create a daily snapshot of all EBS volumes by using Amazon Data Lifecycle Manager. Specify Lifecycle as the tag key. Specify Production as the tag value.
Create a daily snapshot of all EBS volumes by using AWS Backup. Specify Lifecycle as the tag key. Specify Production as the tag value.
Associate a service control policy (SCP) with the account to deny users the ability to delete EBS snapshots. Create an Amazon EventBridge rule with a 24-hour cron schedule. Configure EBS Create Snapshot as the target. Target all EBS volumes with the specified tags.
To meet the requirements of automating daily incremental backups for specific EBS volumes based on tags and preventing accidental or malicious deletion, AWS Backup provides a comprehensive solution. You can create a backup plan in AWS Backup configured with a daily schedule. Within this plan, you define a resource assignment that targets resources based on specific tags, in this case, the Lifecycle: Production tag applied to EBS volumes. AWS Backup automatically handles the creation of incremental snapshots according to the schedule.
Furthermore, AWS Backup includes a feature called Backup Vault Lock. By applying Vault Lock in compliance mode to the backup vault where the snapshots (recovery points) are stored, you can make the backups immutable for their retention period. This prevents deletion of the recovery points, even by users with administrative privileges or broad permissions like ec2:*, fulfilling the requirement to protect the production snapshots from deletion.
Using Amazon Data Lifecycle Manager (DLM) could automate snapshot creation based on tags, but it doesn't offer the same robust, built-in protection against deletion as AWS Backup Vault Lock. Creating AMIs is not the specific requirement; the focus is on EBS volume backups. While using an SCP to deny ec2:DeleteSnapshot combined with an EventBridge rule for creation is a possible approach, AWS Backup offers a more integrated and purpose-built solution for managing and protecting backups, including features like Vault Lock specifically designed for immutability.
When backup requirements include not only automation based on tags but also strong protection against deletion (immutability), consider AWS Backup with Vault Lock. While DLM is excellent for lifecycle management (creation/retention), AWS Backup provides a broader backup management framework with enhanced protection features.
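The pieces fit together in three API calls; here is a boto3 sketch, assuming the vault already exists and using hypothetical names, schedule, retention values, and role ARN:

```python
import boto3

backup = boto3.client("backup")

# Daily backup plan writing into a (pre-existing, hypothetical) vault
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ebs-production",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "production-vault",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Select EBS volumes by the Lifecycle: Production tag (hypothetical role ARN)
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "production-ebs-volumes",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "Lifecycle",
                "ConditionValue": "Production",
            }
        ],
    },
)

# Vault Lock: recovery points become undeletable for their retention period;
# after the 3-day cooling-off window the lock itself becomes immutable (compliance mode)
backup.put_backup_vault_lock_configuration(
    BackupVaultName="production-vault",
    MinRetentionDays=35,
    ChangeableForDays=3,
)
```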
A company hosts a Windows-based file server on a fleet of Amazon EC2 instances across multiple Availability Zones. The current setup does not allow application servers to access files simultaneously from the EC2 fleet.
Which solution will allow this access in the most operationally efficient way?
Create an Amazon Elastic Block Store (Amazon EBS) volume that has EBS Multi-Attach enabled. Create an Auto Scaling group for the Windows file server. Use a script in the file server's user data to attach the SharedFileAccess tag to the EBS volume during launch.
Create an Amazon FSx for Windows File Server Multi-AZ file system. Copy the files to the Amazon FSx file system. Adjust the connections from the application servers to use the share that the Amazon FSx file system exposes.
Create two Amazon FSx for Windows File Server file systems. Configure Distributed File System (DFS) replication between the file systems. Copy the files to the Amazon FSx file systems. Adjust the connections from the application servers to use the shares that the Amazon FSx file systems expose.
Create an Amazon Elastic File System (Amazon EFS) Multi-AZ file system. Copy the files to the EFS file system. Connect the EFS file system to mount points on the application servers.
The scenario requires a shared file storage solution accessible simultaneously by multiple Windows-based application servers across different Availability Zones, replacing an existing setup on EC2 instances. Amazon FSx for Windows File Server is a fully managed file storage service built on Windows Server that provides native support for the Server Message Block (SMB) protocol, which is standard for Windows file sharing. Creating an FSx for Windows File Server Multi-AZ file system directly addresses the requirements. This service provides a highly available and durable file system that spans multiple AZs. Application servers can connect to the file share using standard Windows networking mechanisms without needing complex configurations or custom scripts. AWS manages the file server infrastructure, including patching, backups, and failover, making it the most operationally efficient solution among the choices.
Enabling EBS Multi-Attach allows attaching a single EBS volume to multiple EC2 instances within the same Availability Zone, but it requires a cluster-aware file system and does not provide a native SMB share accessible across AZs without significant additional configuration. Creating two separate FSx file systems and configuring DFS Replication adds operational complexity compared to using a native Multi-AZ FSx file system, which handles high availability automatically. Amazon EFS provides shared file storage using the NFS protocol, which is not native to Windows environments; while Windows can connect via an NFS client, FSx for Windows File Server offers a more seamless and optimized experience for Windows workloads using the native SMB protocol.
When you need shared file storage for Windows-based applications on AWS, especially requiring simultaneous access and high availability across Availability Zones, Amazon FSx for Windows File Server (particularly the Multi-AZ option) is typically the most appropriate and operationally efficient managed service. Remember the protocol difference: FSx for Windows uses SMB, while EFS uses NFS.
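For a sense of how little configuration the managed option requires, here is a boto3 sketch of creating the Multi-AZ file system. The subnet IDs and Active Directory ID are hypothetical, and the capacity figures are arbitrary example values:

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows File Server: one subnet per AZ, AWS manages failover
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=512,   # GiB (example value)
    StorageType="SSD",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],  # hypothetical subnets in two AZs
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # hypothetical AWS Managed AD
        "DeploymentType": "MULTI_AZ_1",        # standby file server in the second AZ
        "PreferredSubnetId": "subnet-0aaa",
        "ThroughputCapacity": 32,              # MB/s (example value)
    },
)
```

Application servers then map the SMB share exposed by the file system's DNS name, exactly as they would against a self-managed Windows file server.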
A company has deployed an application on Amazon EC2 instances in a single VPC. The company has placed the EC2 instances in a private subnet in the VPC.
The EC2 instances need access to Amazon S3 buckets that are in the same AWS Region as the EC2 instances. A SysOps administrator must provide the EC2 instances with access to the S3 buckets without requiring any changes to the EC2 instances or the application. The EC2 instances must not have access to the internet.
Which solution will meet these requirements?
Create an S3 gateway endpoint that uses the default gateway endpoint policy. Associate the private subnet with the gateway endpoint.
Create an S3 interface endpoint. Associate the EC2 instances with the interface endpoint.
Configure a NAT gateway. Associate the private subnet with the NAT gateway.
Configure a proxy EC2 instance. Update the private subnet route tables to route traffic through the proxy EC2 instance. Configure the proxy to route all S3 requests to the target S3 bucket.
To allow EC2 instances in a private subnet to access Amazon S3 buckets within the same AWS Region privately, without traversing the internet, and without requiring modifications to the instances or applications, the appropriate solution is to use an S3 gateway endpoint. A gateway VPC endpoint provides a target for a specific route in your route table for traffic destined for supported AWS services, in this case, Amazon S3. When you create an S3 gateway endpoint and associate it with the route table used by the private subnet, traffic from the EC2 instances destined for S3 in that Region is automatically routed through the endpoint over the AWS private network. This requires no changes on the EC2 instances themselves, as the standard S3 DNS names resolve correctly within the VPC to use the endpoint path. This solution inherently prevents internet access for this traffic path and satisfies the requirement that instances should not have general internet access.
An S3 interface endpoint uses AWS PrivateLink and provides an ENI in the subnet, which might require DNS configuration or application changes unless Private DNS is enabled; however, the gateway endpoint is specifically designed for this VPC-to-S3 scenario and integrates seamlessly via route tables. A NAT gateway provides general outbound internet access, which is explicitly forbidden by the requirements. Configuring a proxy instance adds operational overhead and requires modifying the instances or applications to use the proxy, violating another requirement.
Remember the difference between gateway endpoints and interface endpoints. Gateway endpoints (for S3 and DynamoDB) modify route tables and are simpler for providing access from within a VPC to these services in the same region. Interface endpoints (for most other services, including S3) use ENIs and PrivateLink, offering broader connectivity options but potentially requiring DNS considerations.
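The gateway endpoint itself is a single API call; Route 53 and route table plumbing are handled for you. A minimal boto3 sketch with hypothetical VPC, Region, and route table IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: traffic to S3 in this Region stays on the AWS network.
# Omitting PolicyDocument leaves the default (full access) endpoint policy in place.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0example",                      # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # match your Region
    RouteTableIds=["rtb-0privateAAA"],         # route table of the private subnet
)
```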
A company has a public web application that experiences rapid traffic increases after advertisements appear on local television. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The Auto Scaling group is not keeping up with the traffic surges after an advertisement runs. The company often needs to scale out to 100 EC2 instances during the traffic surges.
The instance startup times are lengthy because of a boot process that creates machine-specific data caches that are unique to each instance. The exact timing of when the advertisements will appear on television is not known. A SysOps administrator must implement a solution so that the application can function properly during the traffic surges.
Which solution will meet these requirements?
Use predictive scaling for the Auto Scaling group.
Increase the value of the instance warmup time in the scaling policy.
Start 100 instances. Allow the boot process to finish running. Store this data on the instance store volume before stopping the instances.
Create a warm pool. Keep enough instances in the Stopped state to meet the increased demand.
The core challenge is the need for rapid scaling in response to unpredictable traffic surges, combined with long instance startup times due to the boot process creating data caches. Auto Scaling group warm pools are specifically designed to address this scenario. A warm pool maintains a set of pre-initialized EC2 instances, which have already completed the time-consuming boot process. These instances can be kept in either a Running or Stopped state. When the Auto Scaling group needs to scale out due to a traffic surge, it can pull instances from the warm pool much faster than launching new instances from scratch. Using instances from the warm pool significantly reduces the time required to get new capacity online and handling application traffic. Configuring the warm pool to keep enough instances in the Stopped state ensures that the required capacity (up to 100 instances) can be rapidly brought into service when the unpredictable surges occur, while minimizing costs compared to keeping them running.
Predictive scaling relies on historical patterns and may not react effectively to sudden, unpredictable surges caused by external events like television advertisements whose exact timing is unknown. Increasing the instance warmup time parameter merely informs Auto Scaling about the lengthy boot process; it does not accelerate the scaling response itself. Manually starting instances and attempting to store boot data on instance store volumes is not a viable or automated solution, as instance store data is ephemeral and lost upon instance stop.
Look for Auto Scaling group warm pools when questions describe applications with long initialization times that need to scale out quickly in response to traffic increases. Warm pools bridge the gap between needing capacity fast and instances taking a long time to become ready.
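Configuring a warm pool is a single call against the Auto Scaling group; a minimal boto3 sketch with a hypothetical group name and sizing:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized instances parked in the Stopped state;
# they have already completed the lengthy boot/cache-building process
autoscaling.put_warm_pool(
    AutoScalingGroupName="web-asg",  # hypothetical Auto Scaling group
    PoolState="Stopped",
    MinSize=100,                     # enough warm capacity for the surge
)
```

Stopped warm-pool instances incur only EBS and EIP charges, so holding 100 of them is far cheaper than keeping 100 instances running around the clock.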
A company hosts an internal application on Amazon EC2 On-Demand Instances behind an Application Load Balancer (ALB). The instances are in an Amazon EC2 Auto Scaling group. Employees use the application to provide product prices to potential customers. The Auto Scaling group is configured with a dynamic scaling policy and tracks average CPU utilization of the instances.
Employees have noticed that sometimes the application becomes slow or unresponsive. A SysOps administrator finds that some instances are experiencing a high CPU load. The Auto Scaling group cannot scale out because the company is reaching the EC2 instance service quota.
The SysOps administrator needs to implement a solution that provides a notification when the company reaches 70% or more of the EC2 instance service quota.
Which solution will meet these requirements in the most operationally efficient manner?
Create an Amazon CloudWatch alarm. Configure the alarm with a threshold of 70% for the CPUUtilization metric for the EC2 instances. Configure the alarm to publish an Amazon Simple Notification Service (Amazon SNS) notification when the alarm enters ALARM state.
Create an AWS Lambda function that lists the EC2 instances, counts the EC2 instances, and compares the total number against the applied quota value by using the Amazon CloudWatch Metrics API. Configure the Lambda function to publish an Amazon Simple Notification Service (Amazon SNS) notification if the quota utilization is equal to or greater than 70%. Create an Amazon EventBridge rule to invoke the Lambda function.
Create an AWS Lambda function that lists the EC2 instances, counts the EC2 instances, and compares the total number against the applied quota value by using the Service Quotas API. Configure the Lambda function to publish an Amazon Simple Notification Service (Amazon SNS) notification if the quota utilization is equal to or greater than 70%. Create an Amazon EventBridge rule to invoke the Lambda function.
Use the Service Quotas console to create an Amazon CloudWatch alarm for the EC2 instances. Configure the alarm with quota utilization equal to or greater than 70%. Configure the alarm to publish an Amazon Simple Notification Service (Amazon SNS) notification when the alarm enters ALARM state.
The requirement is to proactively monitor the usage of the EC2 instance service quota and receive a notification when it reaches 70%. AWS Service Quotas integrates directly with Amazon CloudWatch, allowing you to create alarms based on the utilization of your service quotas. This native integration provides the most operationally efficient solution. You can navigate to the Service Quotas console, select the relevant EC2 instance quota (e.g., 'Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances'), and create a CloudWatch alarm directly from there.
Within the alarm configuration, you can set the threshold to 70% of the quota value and specify an Amazon Simple Notification Service (Amazon SNS) topic as the action to take when the alarm state changes to ALARM. This approach avoids the need to write, deploy, and maintain custom code, leveraging built-in AWS capabilities.
Monitoring EC2 CPU utilization does not address the service quota limit directly. While high CPU might trigger scaling attempts, the fundamental block is the instance quota, not instance performance. Creating a custom Lambda function to periodically check the instance count against the quota value obtained via the Service Quotas API (or potentially CloudWatch metrics) and then publish to SNS is a viable but less operationally efficient approach compared to using the direct integration between Service Quotas and CloudWatch alarms. The built-in method requires less setup and ongoing maintenance.
When you need to monitor AWS service quota usage, always check for direct integration with CloudWatch Alarms via the Service Quotas console first. This is often the simplest and most efficient method compared to building custom monitoring solutions with Lambda.
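Under the hood, the Service Quotas console creates a CloudWatch alarm on the AWS/Usage namespace using the SERVICE_QUOTA metric math function. A boto3 sketch of the equivalent alarm, using a hypothetical SNS topic ARN and the Standard On-Demand vCPU usage metric (the dimension against which EC2's On-Demand quota is measured):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-ondemand-quota-70pct",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
    EvaluationPeriods=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    Threshold=70,
    Metrics=[
        {
            # Raw usage reported by AWS for the On-Demand Standard instance quota
            "Id": "usage",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Usage",
                    "MetricName": "ResourceCount",
                    "Dimensions": [
                        {"Name": "Service", "Value": "EC2"},
                        {"Name": "Resource", "Value": "vCPU"},
                        {"Name": "Type", "Value": "Resource"},
                        {"Name": "Class", "Value": "Standard/OnDemand"},
                    ],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {
            # Metric math: usage as a percentage of the applied quota
            "Id": "utilization",
            "Expression": "usage * 100 / SERVICE_QUOTA(usage)",
            "Label": "Quota utilization (%)",
            "ReturnData": True,
        },
    ],
)
```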
A SysOps administrator wants to use AWS Systems Manager Patch Manager to automate the process of patching Amazon EC2 Windows instances. The SysOps administrator wants to ensure that patches are auto-approved 2 days after the release date for development instances. Patches also must be auto-approved 5 days after the release date for production instances. Maintenance must occur only during a 2-hour window for all instances.
Which solution will meet these requirements?
Use tags to identify development instances and production instances. In Patch Manager, create two patch groups and two patch baselines. Specify an auto-approval delay in each of the patch baselines. Create a single maintenance window.
Use tags to identify development instances. In Patch Manager, create one patch group and one patch baseline. Specify auto-approval delays in the patch baseline. Add development instances to the new patch group. Use predefined Patch Manager patch baselines for all remaining instances. Create a single maintenance window.
Use tags to identify development instances and production instances. In Patch Manager, create two patch groups and one patch baseline. Add an auto-approval delay to each patch group. Create a single maintenance window.
Use tags to identify development instances and production instances. In Patch Manager, create two patch groups and one patch baseline. Create two separate maintenance windows, each with an auto-approval delay.
AWS Systems Manager Patch Manager uses patch baselines to define which patches are approved for installation on your instances and includes rules for auto-approval. Patch groups are used to associate instances with a specific patch baseline, typically using tags. Maintenance Windows define recurring schedules during which patching operations can occur.
To meet the requirements, you need different patching rules (specifically, different auto-approval delays) for development and production instances. This necessitates creating two separate patch baselines: one for development with a 2-day auto-approval delay, and another for production with a 5-day auto-approval delay. You then use tags to identify the development and production instances. These tags are used to create two corresponding patch groups (e.g., Patch Group: Development and Patch Group: Production). Each patch group is then associated with its respective patch baseline (Development patch group linked to the 2-day delay baseline, Production patch group linked to the 5-day delay baseline). Finally, a single maintenance window is required to orchestrate the patching process for all instances within the specified 2-hour timeframe. This maintenance window targets both development and production instances (identified via tags or resource groups) and registers a task (like AWS-RunPatchBaseline) to perform the patching. When the task executes, Patch Manager applies the appropriate baseline based on the patch group membership of each targeted instance.
Configuring only one patch baseline would not allow for different auto-approval delays. Auto-approval delays are properties of the patch baseline, not the patch group or maintenance window. While tags are used to identify instances, they don't directly define the patching rules. Requiring two maintenance windows contradicts the requirement for a single window.
Remember the roles of different Patch Manager components: Patch Baselines define what patches and when they are auto-approved (including delays). Patch Groups link instances (via tags) to a specific baseline. Maintenance Windows define when the patching operation runs.
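A boto3 sketch of the baseline and patch-group wiring follows. The baseline names and the classification filter are hypothetical choices; only the ApproveAfterDays values come from the scenario:

```python
import boto3

ssm = boto3.client("ssm")

# One baseline per environment, differing only in the auto-approval delay
for env, delay in [("Development", 2), ("Production", 5)]:
    baseline = ssm.create_patch_baseline(
        Name=f"windows-{env.lower()}",  # hypothetical baseline name
        OperatingSystem="WINDOWS",
        ApprovalRules={
            "PatchRules": [
                {
                    "PatchFilterGroup": {
                        "PatchFilters": [
                            {
                                "Key": "CLASSIFICATION",
                                "Values": ["CriticalUpdates", "SecurityUpdates"],
                            }
                        ]
                    },
                    "ApproveAfterDays": delay,  # 2 days for dev, 5 days for prod
                }
            ]
        },
    )
    # Link the baseline to the patch group whose tag the instances carry
    ssm.register_patch_baseline_for_patch_group(
        BaselineId=baseline["BaselineId"],
        PatchGroup=env,
    )
```

A single maintenance window then runs AWS-RunPatchBaseline against all tagged instances during the 2-hour window; each instance picks up the baseline registered for its patch group.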