
AWS-Certified-DevOps-Engineer-Professional Exam Questions - Online Test



Q1. You need to process long-running jobs once and only once. How might you do this?

A. Use an SNS queue and set the visibility timeout to be long enough for jobs to process.

B. Use an SQS queue and set the reprocessing timeout to be long enough for jobs to process.

C. Use an SQS queue and set the visibility timeout to be long enough for jobs to process.

D. Use an SNS queue and set the reprocessing timeout to be long enough for jobs to process.

Answer: C

Explanation:

The visibility timeout defines how long, after a successful receive request, SQS hides a message from other consumers. Setting it longer than the job's processing time prevents another worker from receiving, and therefore duplicating, an in-flight job. (SNS is a pub/sub notification service and has no queues, so the SNS options are invalid; "reprocessing timeout" is not an SQS setting.)

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
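The once-only behavior can be sketched with a toy in-memory model of the visibility timeout. This is illustrative only (the `ToyQueue` class and a logical clock are inventions for the sketch, not part of the SQS API):

```python
class ToyQueue:
    """Minimal in-memory model of SQS visibility-timeout semantics.

    Illustrative only; real SQS is accessed via an SDK such as boto3.
    """

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = {}         # message id -> body
        self.invisible_until = {}  # message id -> logical time it reappears
        self.clock = 0

    def send(self, msg_id, body):
        self.messages[msg_id] = body

    def receive(self):
        # Return the first visible message and hide it for the timeout window.
        for msg_id in self.messages:
            if self.invisible_until.get(msg_id, 0) <= self.clock:
                self.invisible_until[msg_id] = self.clock + self.visibility_timeout
                return msg_id
        return None  # nothing visible right now

    def delete(self, msg_id):
        # A worker deletes the message only after finishing the job.
        self.messages.pop(msg_id, None)

    def tick(self, n=1):
        self.clock += n
```

If the timeout is shorter than the job, a second `receive()` call sees the same message before the first worker deletes it, and the job runs twice; a timeout longer than the job keeps it hidden until deletion.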

Q2. What is the maximum supported single-volume throughput on EBS?

A. 320 MiB/s

B. 160 MiB/s

C. 40 MiB/s

D. 640 MiB/s

Answer: A

Explanation:

The maximum throughput for a Provisioned IOPS (PIOPS) EBS volume is 320 MiB/s.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

Q3. Which of these configuration or deployment practices is a security risk for RDS?

A. Storing SQL function code in plaintext

B. Non-Multi-AZ RDS instance

C. Having RDS and EC2 instances exist in the same subnet

D. RDS in a public subnet 

Answer: D

Explanation:

Placing RDS in a public subnet makes your database directly addressable from the internet, exposing it to scanning and attack, which is a security risk.

DB instances deployed within a VPC can be configured to be accessible from the Internet or from EC2 instances outside the VPC. If a VPC security group specifies a port access such as TCP port 22, you would not be able to access the DB instance because the firewall for the DB instance provides access only via the IP addresses specified by the DB security groups the instance is a member of and the port defined when the DB instance was created.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html
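A quick way to audit for this risk is to flag instances whose `PubliclyAccessible` attribute is set. A minimal sketch, assuming a boto3 RDS client (the function name is illustrative):

```python
def publicly_accessible_instances(rds_client):
    """Return identifiers of RDS instances flagged PubliclyAccessible.

    rds_client is assumed to be a boto3 RDS client; any object exposing
    describe_db_instances() with the same response shape will work.
    """
    flagged = []
    for db in rds_client.describe_db_instances()["DBInstances"]:
        if db.get("PubliclyAccessible"):
            flagged.append(db["DBInstanceIdentifier"])
    return flagged
```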

Q4. You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?

A. Some of the new jobs coming in are malformed and unprocessable.

B. The routing tables changed and none of the workers can process events anymore.

C. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue.

D. The scaling metric is not functioning correctly. 

Answer: A

Explanation:

The IAM Role must be fine: if it were broken, no jobs would be processed at all, since the workers could never read the queue. The same reasoning rules out the routing table change. The scaling metric is working, because the instance count increased as queue depth grew (more messages entering than exiting). The only remaining explanation is that some of the recent messages are malformed and unprocessable.

Reference:

https://github.com/andrew-templeton/cloudacademy/blob/fca920b45234bbe99cc0e8efb9c65134884dd489/questions/null
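The key mechanic behind answer A is that a worker deletes a message only after successfully processing it; a malformed message is never deleted and reappears after its visibility timeout, tying up workers indefinitely. A sketch of that loop, with `receive`, `delete`, and `process` as caller-supplied stand-ins for the real SQS calls:

```python
def work_once(receive, delete, process):
    """Process at most one message; delete it only on success.

    receive/delete/process are caller-supplied callables standing in for
    the SQS operations. A malformed message raises, is never deleted, and
    reappears after its visibility timeout -- so a batch of unprocessable
    jobs keeps every worker busy without draining the queue.
    """
    msg = receive()
    if msg is None:
        return "idle"
    try:
        process(msg)
    except ValueError:
        return "requeued"  # left in the queue; will be redelivered
    delete(msg)
    return "done"
```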

Q5. You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those few minutes. What is a good approach?

A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.

B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.

C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region. Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.

Answer: A

Explanation:

There is no such thing as a cross-region ELB, a cross-region Auto Scaling Group, or a DynamoDB Multi-Region table. The only workable option is cross-region replication with an ELB and ASG in each region, plus a Route53 Latency DNS Record with DNS Failover.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
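The DNS half of answer A can be expressed as one latency record set per region with `EvaluateTargetHealth` enabled, so Route53 stops answering with an unhealthy region's ELB. A hedged sketch of building the `ChangeBatch` for the `ChangeResourceRecordSets` API (record name, endpoint values, and the `api-` identifier prefix are illustrative):

```python
def latency_failover_change_batch(record_name, endpoints):
    """Build a Route53 ChangeBatch with one latency alias record per region.

    endpoints maps region -> (elb_hosted_zone_id, elb_dns_name). With
    EvaluateTargetHealth set, Route53 drops an unhealthy region's ELB from
    answers, giving DNS failover on top of latency-based routing.
    """
    changes = []
    for region, (zone_id, dns_name) in sorted(endpoints.items()):
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "SetIdentifier": "api-{}".format(region),  # illustrative id
                "Region": region,
                "AliasTarget": {
                    "HostedZoneId": zone_id,
                    "DNSName": dns_name,
                    "EvaluateTargetHealth": True,
                },
            },
        })
    return {"Changes": changes}
```

The returned dict would be passed as the `ChangeBatch` argument to a boto3 Route53 client's `change_resource_record_sets` call.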

Q6. You are building a Ruby on Rails application for internal, non-production use which uses IV|ySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?

A. AWS CloudFormation

B. AWS OpsWorks

C. AWS ELB + EC2 with CLI Push

D. AWS Elastic Beanstalk 

Answer: D

Explanation:

Elastic Beanstalk's primary mode of operation exactly supports this use case out of the box. It is simpler than all the other options for this question.

With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html

QUESTION NO: 65

What is the scope of AWS IAM?

A. Global

B. Availability Zone

C. Region

D. Placement Group

Answer: A

Q7. When thinking of AWS Elastic Beanstalk's model, which is true?

A. Applications have many deployments, deployments have many environments.

B. Environments have many applications, applications have many deployments.

C. Applications have many environments, environments have many deployments.

D. Deployments have many environments, environments have many applications. 

Answer: C

Explanation:

Applications group logical services. Environments belong to Applications, and typically represent different deployment stages (dev, stage, prod, and so forth). Deployments belong to Environments, and are pushes of code bundles for the environment to run.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

Q8. You are getting a lot of empty receive requests when using Amazon SQS. This is making a lot of unnecessary network load on your instances. What can you do to reduce this load?

A. Subscribe your queue to an SNS topic instead.

B. Use as long of a poll as possible, instead of short polls.

C. Alter your visibility timeout to be shorter.

D. Use sqsd on your EC2 instances.

Answer: B

Explanation:

One benefit of long polling with Amazon SQS is the reduction of the number of empty responses, when there are no messages available to return, in reply to a ReceiveMessage request sent to an Amazon SQS queue. Long polling allows the Amazon SQS service to wait until a message is available in the queue before sending a response.

Reference:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
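In code, long polling is just a nonzero `WaitTimeSeconds` on the receive call. A minimal sketch, assuming a boto3 SQS client and a hypothetical queue URL:

```python
def receive_with_long_poll(sqs_client, queue_url, wait_seconds=20):
    """Receive messages using long polling.

    WaitTimeSeconds > 0 (max 20) tells SQS to hold the request open until
    a message arrives instead of returning an empty response immediately,
    which is what eliminates the wasteful empty receives. sqs_client is
    assumed to be a boto3 SQS client.
    """
    response = sqs_client.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=wait_seconds,  # long poll; 0 would be a short poll
    )
    return response.get("Messages", [])
```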

Q9. You want to pass queue messages that are 1GB each. How should you achieve this?

A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.

B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.

C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.

D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.

Answer: B

Explanation:

You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and retrieving messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java.

Reference:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/s3-messages.html
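The Extended Client Library is Java-only, but the pattern it implements — park the body in S3, send only a small pointer through the queue — can be sketched in Python. A hedged sketch assuming boto3 S3/SQS clients; the key prefix and pointer field names are illustrative, not the library's wire format:

```python
import json
import uuid

def send_large_message(s3_client, sqs_client, bucket, queue_url, payload):
    """Send an oversized payload by parking it in S3 (the pattern the
    SQS Extended Client Library for Java implements).

    The queue message carries only a small JSON pointer; the consumer
    fetches the real body from S3 and deletes both when done.
    """
    key = "sqs-payloads/{}".format(uuid.uuid4())  # illustrative prefix
    s3_client.put_object(Bucket=bucket, Key=key, Body=payload)
    pointer = json.dumps({"s3Bucket": bucket, "s3Key": key})
    sqs_client.send_message(QueueUrl=queue_url, MessageBody=pointer)
    return key
```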

Q10. Which EBS volume type is best for high performance NoSQL cluster deployments?

A. io1

B. gp1

C. standard

D. gp2

Answer: A

Explanation:

io1 volumes, or Provisioned IOPS (PIOPS) SSDs, are best for: Critical business applications that require sustained IOPS performance, or more than 10,000 IOPS or 160 MiB/s of throughput per volume, like large database workloads, such as MongoDB.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
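Requesting an io1 volume means passing `VolumeType="io1"` and an explicit `Iops` value to EC2's CreateVolume. A sketch that builds those parameters, assuming a boto3 EC2 client would consume them; the 50:1 IOPS-to-size ratio check reflects the documented io1 limit, but verify the current figure against the EBS volume-types documentation:

```python
def io1_volume_params(size_gib, iops, availability_zone):
    """Build keyword arguments for ec2_client.create_volume to request a
    Provisioned IOPS (io1) volume.

    The ratio check mirrors the documented io1 limit of 50 IOPS per GiB
    (an assumption to verify against current AWS docs).
    """
    if iops > size_gib * 50:
        raise ValueError("io1 IOPS may not exceed 50x the size in GiB")
    return {
        "VolumeType": "io1",
        "Size": size_gib,
        "Iops": iops,
        "AvailabilityZone": availability_zone,
    }
```

The returned dict would be splatted into the call, e.g. `ec2_client.create_volume(**io1_volume_params(200, 8000, "us-east-1a"))`.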