
DOP-C02 Exam Questions - Online Test



Your success in the Amazon Web Services DOP-C02 exam is our sole target, and we develop all our DOP-C02 braindumps in a way that facilitates the attainment of this target. Not only is our DOP-C02 study material the best you can find, it is also the most detailed and the most up to date. DOP-C02 practice exams for the Amazon Web Services DOP-C02 exam are written to the highest standards of technical accuracy.

Check DOP-C02 free dumps before getting the full version:

NEW QUESTION 1
A production account has a requirement that any Amazon EC2 instance that has been logged in to manually must be terminated within 24 hours. All applications in the production account are using Auto Scaling groups with the Amazon CloudWatch Logs agent configured.
How can this process be automated?

  • A. Create a CloudWatch Logs subscription to an AWS Step Functions application. Configure an AWS Lambda function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a second Lambda function once a day that will terminate all instances with this tag.
  • B. Create an Amazon CloudWatch alarm that will be invoked by the login event. Send the notification to an Amazon Simple Notification Service (Amazon SNS) topic that the operations team is subscribed to, and have them terminate the EC2 instance within 24 hours.
  • C. Create an Amazon CloudWatch alarm that will be invoked by the login event. Configure the alarm to send to an Amazon Simple Queue Service (Amazon SQS) queue. Use a group of worker instances to process messages from the queue, which then schedules an Amazon EventBridge rule to be invoked.
  • D. Create a CloudWatch Logs subscription to an AWS Lambda function. Configure the function to add a tag to the EC2 instance that produced the login event and mark the instance to be decommissioned. Create an Amazon EventBridge rule to invoke a daily Lambda function that terminates all instances with this tag.

Answer: D

Explanation:
"You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems. When log events are sent to the receiving service, they are Base64 encoded and compressed with the gzip format." See https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Subscriptions.html

NEW QUESTION 2
A company's security policies require the use of security-hardened AMIs in production environments. A DevOps engineer has used EC2 Image Builder to create a pipeline that builds the AMIs on a recurring schedule.
The DevOps engineer needs to update the launch templates of the company's Auto Scaling groups. The Auto Scaling groups must use the newest AMIs during the launch of Amazon EC2 instances.
Which solution will meet these requirements with the MOST operational efficiency?

  • A. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Systems Manager Run Command document that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
  • B. Configure an Amazon EventBridge rule to receive new AMI events from Image Builder. Target an AWS Lambda function that updates the launch templates of the Auto Scaling groups with the newest AMI ID.
  • C. Configure the launch template to use a value from AWS Systems Manager Parameter Store for the AMI ID. Configure the Image Builder pipeline to update the Parameter Store value with the newest AMI ID.
  • D. Configure the Image Builder distribution settings to update the launch templates with the newest AMI ID. Configure the Auto Scaling groups to use the newest version of the launch template.

Answer: C

Explanation:
✑ The most operationally efficient solution is to use AWS Systems Manager Parameter Store1 to store the AMI ID and reference it in the launch template2. This way, the launch template does not need to be updated every time a new AMI is created by Image Builder. Instead, the Image Builder pipeline can update the Parameter Store value with the newest AMI ID3, and the Auto Scaling group can launch instances using the latest value from Parameter Store.
✑ The other solutions require updating the launch template or creating a new version of it every time a new AMI is created, which adds complexity and overhead. Additionally, using EventBridge rules and Lambda functions or Run Command documents introduces additional dependencies and potential points of failure.
References: 1: AWS Systems Manager Parameter Store 2: Using AWS Systems Manager parameters instead of AMI IDs in launch templates 3: Update an SSM parameter with Image Builder
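
For illustration, a minimal boto3 sketch of this pattern, assuming hypothetical template and parameter names (launch templates can reference an SSM parameter through the resolve:ssm: prefix):

```python
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# One-time setup: the launch template resolves the AMI ID from Parameter Store.
ec2.create_launch_template_version(
    LaunchTemplateName="app-asg-template",  # hypothetical template name
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": "resolve:ssm:/golden-ami/latest"},
)

# Invoked after each Image Builder run (for example, from a Lambda target):
def update_ami_parameter(new_ami_id: str) -> None:
    ssm.put_parameter(
        Name="/golden-ami/latest",  # hypothetical parameter name
        Value=new_ami_id,
        Type="String",
        Overwrite=True,
    )
```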

NEW QUESTION 3
A growing company manages more than 50 accounts in an organization in AWS Organizations. The company has configured its applications to send logs to Amazon CloudWatch Logs.
A DevOps engineer needs to aggregate logs so that the company can quickly search the logs to respond to future security incidents. The DevOps engineer has created a new AWS account for centralized monitoring.
Which combination of steps should the DevOps engineer take to make the application logs searchable from the monitoring account? (Select THREE.)

  • A. In the monitoring account, download an AWS CloudFormation template from CloudWatch to use in Organizations. Use CloudFormation StackSets in the organization's management account to deploy the CloudFormation template to the entire organization.
  • B. Create an AWS CloudFormation template that defines an IAM role. Configure the role to allow logs.amazonaws.com to perform the logs:Link action if the aws:ResourceAccount property is equal to the monitoring account ID. Use CloudFormation StackSets in the organization's management account to deploy the CloudFormation template to the entire organization.
  • C. Create an IAM role in the monitoring account. Attach a trust policy that allows logs.amazonaws.com to perform the iam:CreateSink action if the aws:PrincipalOrgID property is equal to the organization ID.
  • D. In the organization's management account, enable the logging policies for the organization.
  • E. Use CloudWatch Observability Access Manager in the monitoring account to create a sink. Allow logs to be shared with the monitoring account. Configure the monitoring account data selection to view the Observability data from the organization ID.
  • F. In the monitoring account, attach the CloudWatchLogsReadOnlyAccess AWS managed policy to an IAM role that can be assumed to search the logs.

Answer: BCF

Explanation:
✑ To aggregate logs from multiple accounts in an organization, the DevOps engineer needs to create a cross-account subscription1 that allows the monitoring account to receive log events from the sharing accounts.
✑ To enable cross-account subscription, the DevOps engineer needs to create an IAM role in each sharing account that grants permission to CloudWatch Logs to link the log groups to the destination in the monitoring account2. This can be done using a CloudFormation template and StackSets3 to deploy the role to all accounts in the organization.
✑ The DevOps engineer also needs to create an IAM role in the monitoring account that allows CloudWatch Logs to create a sink for receiving log events from other accounts4. The role must have a trust policy that specifies the organization ID as a condition.
✑ Finally, the DevOps engineer needs to attach the
CloudWatchLogsReadOnlyAccess policy5 to an IAM role in the monitoring account that can be used to search the logs from the cross-account subscription.
References: 1: Cross-account log data sharing with subscriptions 2: Create an IAM role for CloudWatch Logs in each sharing account 3: AWS CloudFormation StackSets 4: Create an IAM role for CloudWatch Logs in your monitoring account 5: CloudWatchLogsReadOnlyAccess policy
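
For illustration, a hedged sketch of the sink setup in the monitoring account using the CloudWatch Observability Access Manager (OAM) API, assuming a hypothetical organization ID:

```python
import json
import boto3

oam = boto3.client("oam")  # CloudWatch Observability Access Manager

sink = oam.create_sink(Name="org-logs-sink")

# Allow every account in the organization to link its log groups to this sink.
oam.put_sink_policy(
    SinkIdentifier=sink["Arn"],
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": "o-example123"},  # hypothetical org ID
                "ForAllValues:StringEquals": {"oam:ResourceTypes": "AWS::Logs::LogGroup"},
            },
        }],
    }),
)
```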

NEW QUESTION 4
An application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). A DevOps engineer is using AWS CodeDeploy to release a new version. The deployment fails during the AllowTraffic lifecycle event, but a cause for the failure is not indicated in the deployment logs.
What would cause this?

  • A. The appspec.yml file contains an invalid script that runs in the AllowTraffic lifecycle hook.
  • B. The user who initiated the deployment does not have the necessary permissions to interact with the ALB.
  • C. The health checks specified for the ALB target group are misconfigured.
  • D. The CodeDeploy agent was not installed on the EC2 instances that are part of the ALB target group.

Answer: C

Explanation:
This failure is typically due to incorrectly configured health checks in Elastic Load Balancing for the Classic Load Balancer, Application Load Balancer, or Network Load Balancer used to manage traffic for the deployment group. To resolve the issue, review and correct any errors in the health check configuration for the load balancer. https://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-deployments.html#troubleshooting-deployments-allowtraffic-no-logs
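
For illustration, a minimal boto3 sketch for reviewing and correcting a target group's health check, assuming a hypothetical target group ARN and health check path:

```python
import boto3

elbv2 = boto3.client("elbv2")

tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-tg/abc123"  # hypothetical

# Inspect why targets are failing; unhealthy targets block the AllowTraffic event.
for target in elbv2.describe_target_health(TargetGroupArn=tg_arn)["TargetHealthDescriptions"]:
    print(target["Target"]["Id"], target["TargetHealth"])

# Example correction: point the health check at a path the application actually serves.
elbv2.modify_target_group(
    TargetGroupArn=tg_arn,
    HealthCheckPath="/healthz",
    HealthCheckIntervalSeconds=15,
    HealthyThresholdCount=2,
)
```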

NEW QUESTION 5
A company is developing an application that will generate log events. The log events consist of five distinct metrics every one-tenth of a second and produce a large amount of data. The company needs to configure the application to write the logs to Amazon Timestream. The company will configure a daily query against the Timestream table.
Which combination of steps will meet these requirements with the FASTEST query performance? (Select THREE.)

  • A. Use batch writes to write multiple log events in a single write operation.
  • B. Write each log event as a single write operation.
  • C. Treat each log as a single-measure record.
  • D. Treat each log as a multi-measure record.
  • E. Configure the memory store retention period to be longer than the magnetic store retention period.
  • F. Configure the memory store retention period to be shorter than the magnetic store retention period.

Answer: ADF

Explanation:
✑ Option A is correct because using batch writes to write multiple log events in a single write operation is a recommended practice for optimizing the performance and cost of data ingestion in Timestream. Batch writes can reduce the number of network round trips and API calls, and can also take advantage of parallel processing by Timestream. Batch writes can also improve the compression ratio of data in the memory store and the magnetic store, which can reduce the storage costs and improve the query performance1.
✑ Option B is incorrect because writing each log event as a single write operation is not a recommended practice for optimizing the performance and cost of data ingestion in Timestream. Writing each log event as a single write operation would increase the number of network round trips and API calls, and would also reduce the compression ratio of data in the memory store and the magnetic store. This would increase the storage costs and degrade the query performance1.
✑ Option C is incorrect because treating each log as a single-measure record is not a recommended practice for optimizing the query performance in Timestream. Treating each log as a single-measure record would result in creating multiple records for each timestamp, which would increase the storage size and the query latency. Moreover, treating each log as a single-measure record would require using joins to query multiple measures for the same timestamp, which would add complexity and overhead to the query processing2.
✑ Option D is correct because treating each log as a multi-measure record is a recommended practice for optimizing the query performance in Timestream. Treating each log as a multi-measure record would result in creating a single record for each timestamp, which would reduce the storage size and the query latency. Moreover, treating each log as a multi-measure record would allow querying multiple measures for the same timestamp without using joins, which would simplify and speed up the query processing2.
✑ Option E is incorrect because configuring the memory store retention period to be longer than the magnetic store retention period is not a valid option in Timestream. The memory store retention period must always be shorter than or equal to the magnetic store retention period. This ensures that data is moved from the memory store to the magnetic store before it expires out of the memory store3.
✑ Option F is correct because configuring the memory store retention period to be shorter than the magnetic store retention period is a valid option in Timestream. The memory store retention period determines how long data is kept in the memory store, which is optimized for fast point-in-time queries. The magnetic store retention period determines how long data is kept in the magnetic store, which is optimized for fast analytical queries. By configuring these retention periods appropriately, you can balance your storage costs and query performance according to your application needs3.
References:
✑ 1: Batch writes
✑ 2: Multi-measure records vs. single-measure records
✑ 3: Storage
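
For illustration, a minimal sketch that combines the three winning options, assuming hypothetical database, table, and metric names:

```python
import boto3

tsw = boto3.client("timestream-write")

# Option F: memory store retention shorter than magnetic store retention.
tsw.update_table(
    DatabaseName="app_logs", TableName="events",  # hypothetical names
    RetentionProperties={"MemoryStoreRetentionPeriodInHours": 24,
                         "MagneticStoreRetentionPeriodInDays": 365},
)

# Option D: each log event becomes ONE multi-measure record carrying all five metrics.
def to_record(event: dict) -> dict:
    return {
        "Time": str(event["ts_ms"]),
        "MeasureName": "app_metrics",
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": name, "Value": str(event[name]), "Type": "DOUBLE"}
            for name in ("m1", "m2", "m3", "m4", "m5")  # hypothetical metric names
        ],
    }

# Option A: batch up to 100 records per WriteRecords call.
def write_batch(events: list) -> None:
    tsw.write_records(
        DatabaseName="app_logs",
        TableName="events",
        CommonAttributes={"Dimensions": [{"Name": "service", "Value": "web"}],
                          "TimeUnit": "MILLISECONDS"},
        Records=[to_record(e) for e in events],
    )
```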

NEW QUESTION 6
A company is examining its disaster recovery capability and wants the ability to switch over its daily operations to a secondary AWS Region. The company uses AWS CodeCommit as a source control tool in the primary Region.
A DevOps engineer must provide the capability for the company to develop code in the secondary Region. If the company needs to use the secondary Region, developers can add an additional remote URL to their local Git configuration.
Which solution will meet these requirements?

  • A. Create a CodeCommit repository in the secondary Region. Create an AWS CodeBuild project to perform a Git mirror operation of the primary Region's CodeCommit repository to the secondary Region's CodeCommit repository. Create an AWS Lambda function that invokes the CodeBuild project. Create an Amazon EventBridge rule that reacts to merge events in the primary Region's CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
  • B. Create an Amazon S3 bucket in the secondary Region. Create an AWS Fargate task to perform a Git mirror operation of the primary Region's CodeCommit repository and copy the result to the S3 bucket. Create an AWS Lambda function that initiates the Fargate task. Create an Amazon EventBridge rule that reacts to merge events in the CodeCommit repository. Configure the EventBridge rule to invoke the Lambda function.
  • C. Create an AWS CodeArtifact repository in the secondary Region. Create an AWS CodePipeline pipeline that uses the primary Region's CodeCommit repository for the source action. Create a cross-Region stage in the pipeline that packages the CodeCommit repository contents and stores the contents in the CodeArtifact repository when a pull request is merged into the CodeCommit repository.
  • D. Create an AWS Cloud9 environment and a CodeCommit repository in the secondary Region. Configure the primary Region's CodeCommit repository as a remote repository in the AWS Cloud9 environment. Connect the secondary Region's CodeCommit repository to the AWS Cloud9 environment.

Answer: A

Explanation:
The best solution to meet the disaster recovery capability and allow developers to switch over to a secondary AWS Region for code development is option A. This involves creating a CodeCommit repository in the secondary Region and setting up an AWS CodeBuild project to perform a Git mirror operation of the primary Region’s CodeCommit repository to the secondary Region’s repository. An AWS Lambda function is then created to invoke the CodeBuild project. Additionally, an Amazon EventBridge rule is configured to react to merge events in the primary Region’s CodeCommit repository and invoke the Lambda function12. This setup ensures that the secondary Region’s repository is always up-to-date with the primary repository, allowing for a seamless transition in case of a disaster recovery event1.
References:
✑ AWS CodeCommit User Guide on resilience and disaster recovery1.
✑ AWS Documentation on monitoring CodeCommit events in Amazon EventBridge and Amazon CloudWatch Events2.
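
For illustration, a minimal sketch of the Lambda function that the EventBridge rule invokes, assuming a hypothetical CodeBuild project whose buildspec runs git clone --mirror and git push --mirror:

```python
import boto3

codebuild = boto3.client("codebuild")

def handler(event, context):
    # Fired by the EventBridge rule on CodeCommit merge events in the primary Region.
    ref = event.get("detail", {}).get("referenceFullName", "refs/heads/main")
    codebuild.start_build(
        projectName="codecommit-cross-region-mirror",  # hypothetical project name
        environmentVariablesOverride=[
            {"name": "SOURCE_REF", "value": ref, "type": "PLAINTEXT"},
        ],
    )
```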

NEW QUESTION 7
A global company manages multiple AWS accounts by using AWS Control Tower. The company hosts internal applications and public applications.
Each application team in the company has its own AWS account for application hosting. The accounts are consolidated in an organization in AWS Organizations. One of the AWS Control Tower member accounts serves as a centralized DevOps account with CI/CD pipelines that application teams use to deploy applications to their respective target AWS accounts. An IAM role for deployment exists in the centralized DevOps account.
An application team is attempting to deploy its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in an application AWS account. An IAM role for deployment exists in the application AWS account. The deployment is through an AWS CodeBuild project that is set up in the centralized DevOps account. The CodeBuild project uses an IAM service role for CodeBuild. The deployment is failing with an Unauthorized error during attempts to connect to the cross-account EKS cluster from CodeBuild.
Which solution will resolve this error?

  • A. Configure the application account's deployment IAM role to have a trust relationship with the centralized DevOps account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the application account's deployment IAM role to have the required access to the EKS cluster. Configure the EKS cluster aws-auth ConfigMap to map the role to the appropriate system permissions.
  • B. Configure the centralized DevOps account's deployment IAM role to have a trust relationship with the application account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the centralized DevOps account's deployment IAM role to allow the required access to CodeBuild.
  • C. Configure the centralized DevOps account's deployment IAM role to have a trust relationship with the application account. Configure the trust relationship to allow the sts:AssumeRoleWithSAML action. Configure the centralized DevOps account's deployment IAM role to allow the required access to CodeBuild.
  • D. Configure the application account's deployment IAM role to have a trust relationship with the AWS Control Tower management account. Configure the trust relationship to allow the sts:AssumeRole action. Configure the application account's deployment IAM role to have the required access to the EKS cluster. Configure the EKS cluster aws-auth ConfigMap to map the role to the appropriate system permissions.

Answer: A

Explanation:
In the source AWS account, the IAM role used by the CI/CD pipeline should have permissions to access the source code repository, build artifacts, and any other resources required for the build process. In the destination AWS accounts, the IAM role used for deployment should have permissions to access the AWS resources required for deploying the application, such as EC2 instances, RDS databases, S3 buckets, etc. The exact permissions required will depend on the specific resources being used by the application. the IAM role used for deployment in the destination accounts should also have permissions to assume the IAM role for deployment in the centralized DevOps account. This is typically done using an IAM role trust policy that allows the destination account to assume the DevOps account role.
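
For illustration, a minimal sketch of the cross-account call from CodeBuild, assuming hypothetical account, role, and cluster names:

```python
import boto3

# Runs inside the CodeBuild project in the centralized DevOps account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/eks-deploy",  # hypothetical app-account role
    RoleSessionName="codebuild-eks-deploy",
)["Credentials"]

# Use the assumed role for EKS calls; the same role must also be mapped in the
# cluster's aws-auth ConfigMap (for example, to a scoped RBAC group).
eks = boto3.client(
    "eks",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(eks.describe_cluster(name="app-cluster")["cluster"]["endpoint"])  # hypothetical cluster
```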

NEW QUESTION 8
An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running.
All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted.
How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?

  • A. Add a DeletionPolicy attribute to the S3 bucket resource, with the value Delete, forcing the bucket to be removed when the stack is deleted.
  • B. Add a custom resource with an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when RequestType is Delete.
  • C. Identify the resource that was not deleted. Manually empty the S3 bucket and then delete it.
  • D. Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.

Answer: B

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-s3-custom-resources/
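
For illustration, a minimal sketch of the custom resource handler, assuming the bucket name is passed in as a resource property (the cfnresponse module is available to Lambda functions defined inline in a CloudFormation template):

```python
import boto3
import cfnresponse  # provided for Lambda functions defined inline in a template

def handler(event, context):
    try:
        if event["RequestType"] == "Delete":
            bucket = boto3.resource("s3").Bucket(event["ResourceProperties"]["BucketName"])
            bucket.objects.all().delete()          # S3 buckets must be empty before deletion
            bucket.object_versions.all().delete()  # also clear versions if versioning was on
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```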

NEW QUESTION 9
A company runs its container workloads in AWS App Runner. A DevOps engineer manages the company's container repository in Amazon Elastic Container Registry (Amazon ECR).
The DevOps engineer must implement a solution that continuously monitors the container repository. The solution must create a new container image when the solution detects an operating system vulnerability or language package vulnerability.
Which solution will meet these requirements?

  • A. Use EC2 Image Builder to create a container image pipeline. Use Amazon ECR as the target repository. Turn on enhanced scanning on the ECR repository. Create an Amazon EventBridge rule to capture an Inspector2 finding event. Use the event to invoke the image pipeline. Re-upload the container to the repository.
  • B. Use EC2 Image Builder to create a container image pipeline. Use Amazon ECR as the target repository. Enable Amazon GuardDuty Malware Protection on the container workload. Create an Amazon EventBridge rule to capture a GuardDuty finding event. Use the event to invoke the image pipeline.
  • C. Create an AWS CodeBuild project to create a container image. Use Amazon ECR as the target repository. Turn on basic scanning on the repository. Create an Amazon EventBridge rule to capture an ECR image action event. Use the event to invoke the CodeBuild project. Re-upload the container to the repository.
  • D. Create an AWS CodeBuild project to create a container image. Use Amazon ECR as the target repository. Configure AWS Systems Manager Compliance to scan all managed nodes. Create an Amazon EventBridge rule to capture a configuration compliance state change event. Use the event to invoke the CodeBuild project.

Answer: A

Explanation:
The solution that meets the requirements is to use EC2 Image Builder to create a container image pipeline, use Amazon ECR as the target repository, turn on enhanced scanning on the ECR repository, create an Amazon EventBridge rule to capture an Inspector2 finding event, and use the event to invoke the image pipeline. Re-upload the container to the repository.
This solution will continuously monitor the container repository for vulnerabilities using enhanced scanning, which is a feature of Amazon ECR that provides detailed information and guidance on how to fix security issues found in your container images. Enhanced scanning uses Inspector2, a security assessment service that integrates with Amazon ECR and generates findings for any vulnerabilities detected in your images. You can use Amazon EventBridge to create a rule that triggers an action when an Inspector2 finding event occurs. The action can be to invoke an EC2 Image Builder pipeline, which is a service that automates the creation of container images. The pipeline can use the latest patches and updates to build a new container image and upload it to the same ECR repository, replacing the vulnerable image.
The other options are not correct because they do not meet all the requirements or use services that are not relevant for the scenario.
Option B is not correct because it uses Amazon GuardDuty Malware Protection, which is a feature of GuardDuty that detects malicious activity and unauthorized behavior on your AWS accounts and resources. GuardDuty does not scan container images for vulnerabilities, nor does it integrate with Amazon ECR or EC2 Image Builder.
Option C is not correct because it uses basic scanning on the ECR repository, which only provides a summary of the vulnerabilities found in your container images. Basic scanning does not use Inspector2 or generate findings that can be captured by Amazon EventBridge. Moreover, basic scanning does not provide guidance on how to fix the vulnerabilities.
Option D is not correct because it uses AWS Systems Manager Compliance, which is a feature of Systems Manager that helps you monitor and manage the compliance status of your AWS resources based on AWS Config rules and AWS Security Hub standards. Systems Manager Compliance does not scan container images for vulnerabilities, nor does it integrate with Amazon ECR or EC2 Image Builder.
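
For illustration, a hedged sketch of the rule and the pipeline invocation, assuming a hypothetical pipeline ARN:

```python
import boto3

events = boto3.client("events")
imagebuilder = boto3.client("imagebuilder")

# Rule that matches enhanced-scanning findings surfaced by Amazon Inspector.
events.put_rule(
    Name="ecr-enhanced-scan-findings",
    EventPattern='{"source": ["aws.inspector2"], "detail-type": ["Inspector2 Finding"]}',
)

# A Lambda target for the rule would then rebuild the image.
def handler(event, context):
    imagebuilder.start_image_pipeline_execution(
        imagePipelineArn="arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/app-container"  # hypothetical
    )
```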

NEW QUESTION 10
A company detects unusual login attempts in many of its AWS accounts. A DevOps engineer must implement a solution that sends a notification to the company's security team when multiple failed login attempts occur. The DevOps engineer has already created an Amazon Simple Notification Service (Amazon SNS) topic and has subscribed the security team to the SNS topic.
Which solution will provide the notification with the LEAST operational effort?

  • A. Configure AWS CloudTrail to send log management events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.
  • B. Configure AWS CloudTrail to send log management events to an Amazon S3 bucket. Create an Amazon Athena query that returns a failure if the query finds failed logins in the logs in the S3 bucket. Create an Amazon EventBridge rule to periodically run the query. Create a second EventBridge rule to detect when the query fails and to send a message to the SNS topic.
  • C. Configure AWS CloudTrail to send log data events to an Amazon CloudWatch Logs log group. Create a CloudWatch Logs metric filter to match failed ConsoleLogin events. Create a CloudWatch alarm that is based on the metric filter. Configure an alarm action to send messages to the SNS topic.
  • D. Configure AWS CloudTrail to send log data events to an Amazon S3 bucket. Configure an Amazon S3 event notification for the s3:ObjectCreated event type. Filter the event type by ConsoleLogin failed events. Configure the event notification to forward to the SNS topic.

Answer: C

Explanation:
The correct answer is C. Configuring AWS CloudTrail to send log data events to an Amazon CloudWatch Logs log group and creating a CloudWatch logs metric filter to match failed ConsoleLogin events is the simplest and most efficient way to monitor and alert on failed login attempts. Creating a CloudWatch alarm that is based on the metric filter and configuring an alarm action to send messages to the SNS topic will ensure that the security team is notified when multiple failed login attempts occur. This solution requires the least operational effort compared to the other options.
Option A is incorrect because it involves configuring AWS CloudTrail to send log management events instead of log data events. Log management events are used to track changes to CloudTrail configuration, such as creating, updating, or deleting a trail. Log data events are used to track API activity in AWS accounts, such as login attempts. Therefore, option A will not capture the failed ConsoleLogin events.
Option B is incorrect because it involves creating an Amazon Athena query and two Amazon EventBridge rules to monitor and alert on failed login attempts. This is a more complex and costly solution than using CloudWatch logs and alarms. Moreover, option B relies on the query returning a failure, which may not happen if the query is executed successfully but does not find any failed logins.
Option D is incorrect because it involves configuring AWS CloudTrail to send log data events to an Amazon S3 bucket and configuring an Amazon S3 event notification for the s3:ObjectCreated event type. This solution will not work because the s3:ObjectCreated event type does not allow filtering by ConsoleLogin failed events. The event notification will be triggered for any object created in the S3 bucket, regardless of the event type. Therefore, option D will generate a lot of false positives and unnecessary notifications. References:
✑ AWS CloudTrail Log File Examples
✑ Creating CloudWatch Alarms for CloudTrail Events: Examples
✑ Monitoring CloudTrail Log Files with Amazon CloudWatch Logs
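
For illustration, a minimal boto3 sketch of the metric filter and alarm, assuming a hypothetical log group name (the filter pattern follows the documented failed-ConsoleLogin example):

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Matches the failed-console-login pattern from the CloudTrail/CloudWatch Logs docs.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # hypothetical log group name
    filterName="FailedConsoleLogins",
    filterPattern='{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }',
    metricTransformations=[{
        "metricName": "ConsoleLoginFailures",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="MultipleFailedConsoleLogins",
    MetricName="ConsoleLoginFailures",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=3,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-team"],  # the existing SNS topic
)
```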

NEW QUESTION 11
A company runs a workload on Amazon EC2 instances. The company needs a control that requires the use of Instance Metadata Service Version 2 (IMDSv2) on all EC2 instances in the AWS account. If an EC2 instance does not prevent the use of Instance Metadata Service Version 1 (IMDSv1), the EC2 instance must be terminated.
Which solution will meet these requirements?

  • A. Set up AWS Config in the account. Use a managed rule to check EC2 instances. Configure the rule to remediate the findings by using AWS Systems Manager Automation to terminate the instance.
  • B. Create a permissions boundary that prevents the ec2:RunInstances action if the ec2:MetadataHttpTokens condition key is not set to a value of required. Attach the permissions boundary to the IAM role that was used to launch the instance.
  • C. Set up Amazon Inspector in the account. Configure Amazon Inspector to activate deep inspection for EC2 instances. Create an Amazon EventBridge rule for an Inspector2 finding. Set an AWS Lambda function as the target to terminate the instance.
  • D. Create an Amazon EventBridge rule for the EC2 instance launch successful event. Send the event to an AWS Lambda function to inspect the EC2 metadata and to terminate the instance.

Answer: B

Explanation:
To implement a control that requires the use of IMDSv2 on all EC2 instances in the account, the DevOps engineer can use a permissions boundary. A permissions boundary is a policy that defines the maximum permissions that an IAM entity can have. The DevOps engineer can create a permissions boundary that prevents the ec2:RunInstances action if the ec2:MetadataHttpTokens condition key is not set to a value of required. This condition key enforces the use of IMDSv2 on EC2 instances. The DevOps engineer can attach the permissions boundary to the IAM role that was used to launch the instance. This way, any attempt to launch an EC2 instance without using IMDSv2 will be denied by the permissions boundary.
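
For illustration, a minimal sketch of such a permissions boundary, assuming a hypothetical role name:

```python
import json
import boto3

iam = boto3.client("iam")

# Deny launching any instance unless IMDSv2 tokens are required.
boundary = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringNotEquals": {"ec2:MetadataHttpTokens": "required"}},
        },
    ],
}

policy = iam.create_policy(
    PolicyName="require-imdsv2-boundary",
    PolicyDocument=json.dumps(boundary),
)
iam.put_role_permissions_boundary(
    RoleName="instance-launch-role",  # hypothetical role used to launch instances
    PermissionsBoundary=policy["Policy"]["Arn"],
)
```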

NEW QUESTION 12
A DevOps engineer is building an application that uses an AWS Lambda function to query an Amazon Aurora MySQL DB cluster. The Lambda function performs only read queries. Amazon EventBridge events invoke the Lambda function.
As more events invoke the Lambda function each second, the database's latency increases and the database's throughput decreases. The DevOps engineer needs to improve the performance of the application.
Which combination of steps will meet these requirements? (Select THREE.)

  • A. Use Amazon RDS Proxy to create a proxy. Connect the proxy to the Aurora cluster reader endpoint. Set a maximum connections percentage on the proxy.
  • B. Implement database connection pooling inside the Lambda code. Set a maximum number of connections on the database connection pool.
  • C. Implement the database connection opening outside the Lambda event handler code.
  • D. Implement the database connection opening and closing inside the Lambda event handler code.
  • E. Connect to the proxy endpoint from the Lambda function.
  • F. Connect to the Aurora cluster endpoint from the Lambda function.

Answer: ACE

Explanation:
To improve the performance of the application, the DevOps engineer should use Amazon RDS Proxy, implement the database connection opening outside the Lambda event handler code, and connect to the proxy endpoint from the Lambda function. References:
✑ Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure1. By using Amazon RDS Proxy, the DevOps engineer can reduce the overhead of opening and closing connections to the database, which can improve latency and throughput2.
✑ The DevOps engineer should connect the proxy to the Aurora cluster reader endpoint, which allows read-only connections to one of the Aurora Replicas in the DB cluster3. This can help balance the load across multiple read replicas and improve performance for read-intensive workloads4.
✑ The DevOps engineer should implement the database connection opening outside the Lambda event handler code, which means using a global variable to store the database connection object5. This can enable connection reuse across multiple invocations of the Lambda function, which can reduce latency and improve performance.
✑ The DevOps engineer should connect to the proxy endpoint from the Lambda function, which is a unique URL that represents the proxy. This can allow the Lambda function to access the database through the proxy, which can provide benefits such as connection pooling, load balancing, failover handling, and enhanced security.
✑ The other options are incorrect because connection pooling inside the Lambda code (option B) duplicates what RDS Proxy already provides, opening and closing the connection inside the event handler (option D) adds overhead to every invocation, and connecting directly to the Aurora cluster endpoint (option F) bypasses the proxy entirely.
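
For illustration, a minimal sketch of options C and E together, assuming the PyMySQL package is bundled with the deployment artifact and the proxy endpoint is passed in through environment variables:

```python
import os
import pymysql  # assumption: PyMySQL is packaged with the function

# Opened once per execution environment, outside the handler, and reused across invocations.
connection = pymysql.connect(
    host=os.environ["PROXY_ENDPOINT"],  # the RDS Proxy endpoint, not the cluster endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="app",
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute("SELECT status FROM orders WHERE id = %s", (event["order_id"],))
        return cursor.fetchone()
```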

NEW QUESTION 13
A company is testing a web application that runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The company uses a blue/green deployment process with immutable instances when deploying new software.
During testing, users are being automatically logged out of the application at random times. Testers also report that when a new version of the application is deployed, all users are logged out. The development team needs a solution to ensure users remain logged in across scaling events and application deployments.
What is the MOST operationally efficient way to ensure users remain logged in?

  • A. Enable smart sessions on the load balancer and modify the application to check for an existing session.
  • B. Enable session sharing on the load balancer and modify the application to read from the session store.
  • C. Store user session information in an Amazon S3 bucket and modify the application to read session information from the bucket.
  • D. Modify the application to store user session information in an Amazon ElastiCache cluster.

Answer: D

Explanation:
https://aws.amazon.com/caching/session-management/
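
For illustration, a minimal sketch of ElastiCache-backed session management, assuming the redis-py package and a hypothetical ElastiCache for Redis endpoint:

```python
import json
import uuid
import redis  # assumption: redis-py is available to the application

sessions = redis.Redis(host="sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)  # hypothetical

def create_session(user_id):
    token = str(uuid.uuid4())
    # TTL keeps abandoned sessions from accumulating; any instance can read the session.
    sessions.setex(f"session:{token}", 3600, json.dumps({"user_id": user_id}))
    return token

def load_session(token):
    data = sessions.get(f"session:{token}")
    return json.loads(data) if data else None
```

Because the session lives outside the instances, neither scaling events nor blue/green replacement of the fleet logs users out.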

NEW QUESTION 14
A company has deployed a critical application in two AWS Regions. The application uses an Application Load Balancer (ALB) in both Regions. The company has Amazon Route 53 alias DNS records for both ALBs.
The company uses Amazon Route 53 Application Recovery Controller to ensure that the application can fail over between the two Regions. The Route 53 ARC configuration includes a routing control for both Regions. The company uses Route 53 ARC to perform quarterly disaster recovery (DR) tests.
During the most recent DR test, a DevOps engineer accidentally turned off both routing controls. The company needs to ensure that at least one routing control is turned on at all times.
Which solution will meet these requirements?

  • A. In Route 53 ARC, create a new assertion safety rule. Apply the assertion safety rule to the two routing controls. Configure the rule with the ATLEAST type with a threshold of 1.
  • B. In Route 53 ARC, create a new gating safety rule. Apply the gating safety rule to the two routing controls. Configure the rule with the OR type with a threshold of 1.
  • C. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53::HealthCheck resource type. Specify the ARNs of the two routing controls as the target resource. Create a new readiness check for the resource set.
  • D. In Route 53 ARC, create a new resource set. Configure the resource set with an AWS::Route53RecoveryReadiness::DNSTargetResource resource type. Add the domain names of the two Route 53 alias DNS records as the target resource. Create a new readiness check for the resource set.

Answer: A

Explanation:
The correct solution is to create a new assertion safety rule in Route 53 ARC and apply it to the two routing controls. An assertion safety rule is a type of safety rule that ensures that a minimum number of routing controls are always enabled. The ATLEAST type of assertion safety rule specifies the minimum number of routing controls that must be enabled for the rule to evaluate as healthy. By setting the threshold to 1, the rule ensures that at least one routing control is always turned on. This prevents the scenario where both routing controls are accidentally turned off and the application becomes unavailable in both Regions.
The other solutions are incorrect because they do not use safety rules to prevent both routing controls from being turned off. A gating safety rule is a type of safety rule that prevents routing control state changes that violate the rule logic. The OR type of gating safety rule specifies that one or more routing controls must be enabled for the rule to evaluate as healthy. However, this rule does not prevent a user from turning off both routing controls manually. A resource set is a collection of resources that are tested for readiness by Route 53 ARC. A readiness check is a test that verifies that all the resources in a resource set are operational. However, these concepts are not related to routing control states or safety rules. Therefore, creating a new resource set and a new readiness check will not ensure that at least one routing control is turned on at all times. References:
✑ Routing control in Amazon Route 53 Application Recovery Controller
✑ Viewing and updating routing control states in Route 53 ARC
✑ Creating a control panel in Route 53 ARC
✑ Creating safety rules in Route 53 ARC
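
For illustration, a minimal sketch of the assertion safety rule, assuming hypothetical control panel and routing control ARNs (the Route 53 ARC configuration API is served from us-west-2):

```python
import boto3

arc = boto3.client("route53-recovery-control-config", region_name="us-west-2")

arc.create_safety_rule(
    AssertionRule={
        "Name": "at-least-one-region-on",
        "ControlPanelArn": "arn:aws:route53-recovery-control::123456789012:controlpanel/example",  # hypothetical
        "AssertedControls": [
            "arn:aws:route53-recovery-control::123456789012:controlpanel/example/routingcontrol/region-a",
            "arn:aws:route53-recovery-control::123456789012:controlpanel/example/routingcontrol/region-b",
        ],  # hypothetical routing control ARNs
        # ATLEAST with a threshold of 1: a state change that would turn off the last
        # enabled routing control is rejected.
        "RuleConfig": {"Type": "ATLEAST", "Threshold": 1, "Inverted": False},
        "WaitPeriodMs": 5000,
    }
)
```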

NEW QUESTION 15
A company is launching an application. The application must use only approved AWS services. The account that runs the application was created less than 1 year ago and is assigned to an AWS Organizations OU.
The company needs to create a new Organizations account structure. The account structure must have an appropriate SCP that supports the use of only services that are currently active in the AWS account.
The company will use AWS Identity and Access Management (IAM) Access Analyzer in the solution.
Which solution will meet these requirements?

  • A. Create an SCP that allows the services that IAM Access Analyzer identifies. Create an OU for the account. Move the account into the new OU. Attach the new SCP to the new OU. Detach the default FullAWSAccess SCP from the new OU.
  • B. Create an SCP that denies the services that IAM Access Analyzer identifies. Create an OU for the account. Move the account into the new OU. Attach the new SCP to the new OU.
  • C. Create an SCP that allows the services that IAM Access Analyzer identifies. Attach the new SCP to the organization's root.
  • D. Create an SCP that allows the services that IAM Access Analyzer identifies. Create an OU for the account. Move the account into the new OU. Attach the new SCP to the management account. Detach the default FullAWSAccess SCP from the new OU.

Answer: A

Explanation:
To meet the requirements of creating a new Organizations account structure with an appropriate SCP that supports the use of only services that are currently active in the AWS account, the company should use the following solution:
✑ Create an SCP that allows the services that IAM Access Analyzer identifies. IAM Access Analyzer is a service that helps identify potential resource-access risks by analyzing resource-based policies in the AWS environment. IAM Access Analyzer can also generate IAM policies based on access activity in the AWS CloudTrail logs. By using IAM Access Analyzer, the company can create an SCP that grants only the permissions that are required for the application to run, and denies all other services. This way, the company can enforce the use of only approved AWS services and reduce the risk of unauthorized access12
✑ Create an OU for the account. Move the account into the new OU. An OU is a container for accounts within an organization that enables you to group accounts that have similar business or security requirements. By creating an OU for the account, the company can apply policies and manage settings for the account as a group. The company should move the account into the new OU to make it subject to the policies attached to the OU3
✑ Attach the new SCP to the new OU. Detach the default FullAWSAccess SCP from the new OU. An SCP is a type of policy that specifies the maximum permissions for an organization or organizational unit (OU). By attaching the new SCP to the new OU, the company can restrict the services that are available to all accounts in that OU, including the account that runs the application. The company should also detach the default FullAWSAccess SCP from the new OU, because this policy allows all actions on all AWS services and might override or conflict with the new SCP45
The other options are not correct because they do not meet the requirements or follow best practices. Creating an SCP that denies the services that IAM Access Analyzer identifies is not a good option because it might not cover all possible services that are not approved or required for the application. A deny policy is also more difficult to maintain and update than an allow policy. Creating an SCP that allows the services that IAM Access Analyzer identifies and attaching it to the organization’s root is not a good option because it might affect other accounts and OUs in the organization that have different service requirements or approvals. Creating an SCP that allows the services that IAM Access Analyzer identifies and attaching it to the management account is not a valid option because SCPs cannot be attached directly to accounts, only to OUs or roots.
References:
✑ 1: Using AWS Identity and Access Management Access Analyzer - AWS Identity and Access Management
✑ 2: Generate a policy based on access activity - AWS Identity and Access Management
✑ 3: Organizing your accounts into OUs - AWS Organizations
✑ 4: Service control policies - AWS Organizations
✑ 5: How SCPs work - AWS Organizations
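
For illustration, a minimal sketch of the account structure changes, assuming a hypothetical root ID, account ID, and Access Analyzer output (FullAWSAccess has the fixed policy ID p-FullAWSAccess):

```python
import json
import boto3

org = boto3.client("organizations")

# Allow-list SCP built from the services IAM Access Analyzer observed in CloudTrail.
scp = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Action": ["ec2:*", "s3:*", "cloudwatch:*"],  # hypothetical analyzer output
                   "Resource": "*"}],
}

ou = org.create_organizational_unit(ParentId="r-abc1", Name="app-ou")  # hypothetical root ID
org.move_account(AccountId="111122223333", SourceParentId="r-abc1",
                 DestinationParentId=ou["OrganizationalUnit"]["Id"])

policy = org.create_policy(Name="approved-services",
                           Description="Allow-list generated from IAM Access Analyzer",
                           Type="SERVICE_CONTROL_POLICY", Content=json.dumps(scp))
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                  TargetId=ou["OrganizationalUnit"]["Id"])
org.detach_policy(PolicyId="p-FullAWSAccess",
                  TargetId=ou["OrganizationalUnit"]["Id"])
```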

NEW QUESTION 16
A company has an application that is using a MySQL-compatible Amazon Aurora Multi-AZ DB cluster as the database. A cross-Region read replica has been created for disaster recovery purposes. A DevOps engineer wants to automate the promotion of the replica so it becomes the primary database instance in the event of a failure.
Which solution will accomplish this?

  • A. Configure a latency-based Amazon Route 53 CNAME with health checks so it points to both the primary and replica endpoints. Subscribe an Amazon SNS topic to Amazon RDS failure notifications from AWS CloudTrail and use that topic to invoke an AWS Lambda function that will promote the replica instance as the primary.
  • B. Create an Aurora custom endpoint to point to the primary database instance. Configure the application to use this endpoint. Configure AWS CloudTrail to run an AWS Lambda function to promote the replica instance and modify the custom endpoint to point to the newly promoted instance.
  • C. Create an AWS Lambda function to modify the application's AWS CloudFormation template to promote the replica, apply the template to update the stack, and point the application to the newly promoted instance. Create an Amazon CloudWatch alarm to invoke this Lambda function after the failure event occurs.
  • D. Store the Aurora endpoint in AWS Systems Manager Parameter Store. Create an Amazon EventBridge event that detects the database failure and runs an AWS Lambda function to promote the replica instance and update the endpoint URL stored in AWS Systems Manager Parameter Store. Code the application to reload the endpoint from Parameter Store if a database connection fails.

Answer: D

Explanation:
EventBridge is needed to detect the database failure. Lambda is needed to promote the replica as it's in another Region (manual promotion, otherwise). Storing and updating the endpoint in Parameter store is important in updating the application. Look at High Availability section of Aurora FAQ: https://aws.amazon.com/rds/aurora/faqs/
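
For illustration, a minimal sketch of the promotion Lambda function, assuming hypothetical cluster and parameter names:

```python
import boto3

rds = boto3.client("rds", region_name="us-west-2")  # the DR Region hosting the replica
ssm = boto3.client("ssm", region_name="us-west-2")

def handler(event, context):
    # Invoked by the EventBridge rule that detects the primary cluster failure.
    rds.promote_read_replica_db_cluster(DBClusterIdentifier="app-replica")  # hypothetical ID
    endpoint = rds.describe_db_clusters(DBClusterIdentifier="app-replica")[
        "DBClusters"][0]["Endpoint"]
    # The application re-reads this parameter when a database connection fails.
    ssm.put_parameter(Name="/app/db-endpoint", Value=endpoint, Type="String", Overwrite=True)
```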

NEW QUESTION 17
A DevOps engineer is designing an application that integrates with a legacy REST API. The application has an AWS Lambda function that reads records from an Amazon Kinesis data stream. The Lambda function sends the records to the legacy REST API.
Approximately 10% of the records that the Lambda function sends from the Kinesis data stream have data errors and must be processed manually. The Lambda function event source configuration has an Amazon Simple Queue Service (Amazon SQS) dead-letter queue as an on-failure destination. The DevOps engineer has configured the Lambda function to process records in batches and has implemented retries in case of failure.
During testing, the DevOps engineer notices that the dead-letter queue contains many records that have no data errors and that have already been processed by the legacy REST API. The DevOps engineer needs to configure the Lambda function's event source options to reduce the number of errorless records that are sent to the dead-letter queue.
Which solution will meet these requirements?

  • A. Increase the retry attempts.
  • B. Configure the setting to split the batch when an error occurs.
  • C. Increase the concurrent batches per shard.
  • D. Decrease the maximum age of records.

Answer: B

Explanation:
This solution will meet the requirements because it will reduce the number of errorless records that are sent to the dead-letter queue. When you configure the setting to split the batch when an error occurs, Lambda will retry only the records that caused the error, instead of retrying the entire batch. This way, the records that have no data errors and have already been processed by the legacy REST API will not be retried and sent to the dead-letter queue unnecessarily.
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
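
For illustration, a minimal sketch of the event source option, assuming a hypothetical mapping UUID and queue ARN:

```python
import boto3

lambda_client = boto3.client("lambda")

# Retry only the half of the batch that contains the failing record
# instead of retrying the whole batch.
lambda_client.update_event_source_mapping(
    UUID="14e0db71-0000-0000-0000-000000000000",  # hypothetical event source mapping ID
    BisectBatchOnFunctionError=True,
    MaximumRetryAttempts=2,
    DestinationConfig={"OnFailure": {
        "Destination": "arn:aws:sqs:us-east-1:123456789012:dlq"}},  # hypothetical DLQ ARN
)
```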

NEW QUESTION 18
A DevOps engineer is planning to deploy a Ruby-based application to production. The application needs to interact with an Amazon RDS for MySQL database and should have automatic scaling and high availability. The stored data in the database is critical and should persist regardless of the state of the application stack.
The DevOps engineer needs to set up an automated deployment strategy for the application with automatic rollbacks. The solution also must alert the application team when a deployment fails.
Which combination of steps will meet these requirements? (Select THREE.)

  • A. Deploy the application on AWS Elastic Beanstalk. Deploy an Amazon RDS for MySQL DB instance as part of the Elastic Beanstalk configuration.
  • B. Deploy the application on AWS Elastic Beanstalk. Deploy a separate Amazon RDS for MySQL DB instance outside of Elastic Beanstalk.
  • C. Configure a notification email address that alerts the application team in the AWS Elastic Beanstalk configuration.
  • D. Configure an Amazon EventBridge rule to monitor AWS Health events. Use an Amazon Simple Notification Service (Amazon SNS) topic as a target to alert the application team.
  • E. Use the immutable deployment method to deploy new application versions.
  • F. Use the rolling deployment method to deploy new application versions.

Answer: BDE

Explanation:
For deploying a Ruby-based application with requirements for interaction with an Amazon RDS for MySQL database, automatic scaling, high availability, and data persistence, the following steps will meet the requirements:
✑ B. Deploy the application on AWS Elastic Beanstalk. Deploy a separate Amazon RDS for MySQL DB instance outside of Elastic Beanstalk. This approach ensures that the database persists independently of the Elastic Beanstalk environment, which can be torn down and recreated without affecting the database123.
✑ E. Use the immutable deployment method to deploy new application versions. Immutable deployments provide a zero-downtime deployment method that ensures that if any part of the deployment process fails, the environment is rolled back to the original state automatically4.
✑ D. Configure an Amazon EventBridge rule to monitor AWS Health events. Use an Amazon Simple Notification Service (Amazon SNS) topic as a target to alert the application team. This setup allows for automated monitoring and alerting of the application team in case of deployment failures or other health events56.
References:
✑ AWS Elastic Beanstalk documentation on deploying Ruby applications1.
✑ AWS documentation on application auto-scaling7.
✑ AWS documentation on automated deployment strategies with automatic rollbacks and alerts456.
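
For illustration, a minimal sketch of options E and D, assuming a hypothetical environment name and SNS topic ARN:

```python
import boto3

eb = boto3.client("elasticbeanstalk")
events = boto3.client("events")

# Option E: switch the environment to immutable deployments with automatic rollback.
eb.update_environment(
    EnvironmentName="ruby-app-prod",  # hypothetical environment name
    OptionSettings=[{"Namespace": "aws:elasticbeanstalk:command",
                     "OptionName": "DeploymentPolicy", "Value": "Immutable"}],
)

# Option D: route AWS Health events to the SNS topic the application team watches.
events.put_rule(
    Name="eb-health-events",
    EventPattern='{"source": ["aws.health"], "detail-type": ["AWS Health Event"]}',
)
events.put_targets(
    Rule="eb-health-events",
    Targets=[{"Id": "app-team-sns",
              "Arn": "arn:aws:sns:us-east-1:123456789012:app-team"}],  # hypothetical topic
)
```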

NEW QUESTION 19
......
