AWS-Certified-Solutions-Architect-Professional Exam Questions - Online Test



Q1. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC. Unfortunately, this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there's no documentation for it. What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? Choose 3 answers 

A. A VM Import of the current virtual machine 

B. An Internet Gateway to allow a VPN connection 

C. Entries in Amazon Route 53 that allow the Instance to resolve its dependencies' IP addresses 

D. An IP address space that does not conflict with the one on-premises 

E. An Elastic IP address on the VPC instance 

F. An AWS Direct Connect link between the VPC and the network housing the internal services 

Answer: A, D, F 

A VM Import moves the application without reconfiguring it, a non-conflicting address space lets it route back to the on-premises network unchanged, and Direct Connect provides the path to the internal services.
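Option D is easy to sanity-check programmatically. Below is a minimal sketch using Python's standard ipaddress module; the two CIDR ranges are hypothetical stand-ins for the on-premises network and the proposed VPC.

```python
import ipaddress

# Hypothetical ranges: the on-premises network and a candidate VPC CIDR.
on_prem = ipaddress.ip_network("10.0.0.0/16")
candidate_vpc = ipaddress.ip_network("10.1.0.0/16")

if candidate_vpc.overlaps(on_prem):
    print("CIDR conflict: traffic back to on-premises services cannot route cleanly")
else:
    print("No overlap: the VPC can route to the internal services over Direct Connect")
```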

Q2. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS. During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage. The pilot is considered a success, and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100k sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements. To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements? 

A. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage 

B. Keep the current architecture, but upgrade RDS storage to 3TB and 10k provisioned IOPS 

C. Ingest data into a DynamoDB table and move old data to a Redshift cluster 

D. Add an SQS queue to the ingestion layer to buffer writes to the RDS Instance 

Answer: C

At 100k sensors the backend must absorb roughly 1,700 writes per second and retain on the order of 100TB over two years; DynamoDB scales the ingest path, and Redshift holds the historical data.
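The numbers in the question are enough for a quick sizing check. The short calculation below uses only figures stated above (100k sensors, 1KB per minute, two years of retention):

```python
sensors = 100_000
bytes_per_minute = 1 * 1024                     # 1KB per sensor per minute

writes_per_sec = sensors / 60                   # peak write rate to the ingest path
gb_per_day = sensors * bytes_per_minute * 60 * 24 / 1024**3
tb_two_years = gb_per_day * 365 * 2 / 1024

print(f"~{writes_per_sec:,.0f} writes/sec")     # ~1,667 writes/sec
print(f"~{gb_per_day:,.0f} GB/day")             # ~137 GB/day
print(f"~{tb_two_years:,.0f} TB over 2 years")  # ~98 TB, far beyond 3TB of RDS storage
```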

Q3. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC. Your servers on-premises will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways. Which of the following objectives would you achieve by implementing an IPsec tunnel as outlined above? Choose 4 answers 

A. Peer identity authentication between VPN gateway and customer gateway. 

B. End-to-end identity authentication. 

C. Data integrity protection across the Internet. 

D. End-to-end protection of data in transit. 

E. Data encryption across the Internet. 

F. Protection of data in transit over the Internet. 

Answer: A, C, E, F 
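For reference, the three pieces the question describes (customer gateway, VPN gateway, IPsec VPN connection) map directly onto EC2 API calls. A hedged boto3 sketch, where the public IP and ASN are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises VPN endpoint (placeholder values).
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.12", BgpAsn=65000)

# VPN gateway: the AWS-side anchor, attached to the VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")

# The IPsec VPN connection tying the two together.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```

Authentication and encryption happen between these two gateways, not between the end hosts, which is why the end-to-end options (B and D) are not achieved.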

Q4. Your company policies require encryption of sensitive data at rest. You are considering the possible options for protecting data while storing it at rest on an EBS data volume attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? Choose 3 answers 

A. Implement third party volume encryption tools 

B. Implement SSL/TLS for all services running on the server 

C. Encrypt data inside your applications before storing it on EBS 

D. Encrypt data using native data encryption drivers at the file system level 

E. Do nothing as EBS volumes are encrypted by default 

Answer: A, C, D 

SSL/TLS (option B) protects data in transit, not at rest, and EBS volumes were not encrypted by default (option E); third-party volume encryption, application-level encryption, and file-system encryption drivers all encrypt the data before or as it lands on the volume.
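The question predates native EBS encryption; on current AWS the simplest route is to request an encrypted volume outright. A minimal boto3 sketch, with the Availability Zone and size as placeholder values:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a volume encrypted at rest by EBS/KMS (placeholder AZ and size).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                 # GiB
    VolumeType="gp3",
    Encrypted=True,
)
print(volume["VolumeId"])
```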

Q5. Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection. In addition to running your application in multiple regions, which option will support this application's requirements? 

A. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region. 

B. Serve user content from S3, CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster. 

C. Serve user content from S3, CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences with SQS workers for propagating updates to each table. 

D. Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences with SQS workers for propagating DynamoDB updates. 

Answer: C

Route 53 latency-based routing sends each user to the nearest region, a local DynamoDB table keeps preference reads fast, and SQS workers propagate preference changes to every region's table.
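A minimal sketch of the SQS worker described in option C: it drains preference-change messages and applies them to the region-local DynamoDB table. The queue URL, table name, and message shape are all hypothetical.

```python
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/pref-updates"  # placeholder

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("UserPreferences")  # placeholder table name

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                               MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)            # long poll
    for msg in resp.get("Messages", []):
        pref = json.loads(msg["Body"])                        # assumed JSON item
        table.put_item(Item=pref)                             # upsert into local replica
        sqs.delete_message(QueueUrl=QUEUE_URL,
                           ReceiptHandle=msg["ReceiptHandle"])
```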

Q6. Your customer wishes to deploy an enterprise application to AWS, which will consist of several web servers, several application servers, and a small (50GB) Oracle database. Information is stored both in the database and the filesystems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database. Which backup architecture will meet these requirements? 

A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs, and supplement with file-level backup to S3 using traditional enterprise backup software to provide file-level restore. 

B. Backup RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore. 

C. Backup RDS using a Multi-AZ Deployment. Backup the EC2 instances using AMIs, and supplement by copying filesystem data to S3 to provide file level restore. 

D. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots, and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file level restore. 

Answer: A

Automated RDS backups provide point-in-time database recovery, AMIs cover whole-server restores, and enterprise backup software against S3 provides file-level restores within the two-hour window; Glacier retrievals (option D) can take longer than the RTO allows, and Multi-AZ (option C) is availability, not backup.
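The two automated halves of option A are roughly one API call each. A hedged boto3 sketch with placeholder identifiers:

```python
import boto3

# Automated daily backups (and point-in-time recovery) for the RDS instance.
rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="prod-oracle",   # placeholder
    BackupRetentionPeriod=7,              # days of automated backups to keep
    ApplyImmediately=True,
)

# Whole-server backup of an EC2 instance as an AMI.
ec2 = boto3.client("ec2")
ec2.create_image(
    InstanceId="i-0123456789abcdef0",     # placeholder
    Name="app-server-backup",
    NoReboot=True,                        # image the instance without stopping it
)
```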

Q7. Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past few months resulting in significant financial losses. Your CIO is strongly considering moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs? 

A. Create an EBS-backed private AMI which includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. 

B. Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection. 

C. Create an EBS-backed private AMI which includes a fresh install of your application. Setup a script in your data center to backup the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload. 

D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection. 

Answer: A

A 200GB database cannot be dumped and pushed through a 20Mbps link every hour, so option C misses the RPO; asynchronous replication ships only the transaction deltas, and a CloudFormation-deployable stack meets the 4-hour RTO at modest standby cost.
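The bandwidth math behind that answer, using only the figures from the question:

```python
db_gb = 200          # database size from the question
link_mbps = 20       # Internet connection from the question

# One full copy of the database over the link, in hours.
transfer_hours = db_gb * 8 * 1000 / link_mbps / 3600
print(f"~{transfer_hours:.0f} hours per full 200GB transfer")   # ~22 hours
# Hourly full backups (option C) therefore cannot keep up with a 1-hour RPO,
# while asynchronous replication only ships the ongoing transaction deltas.
```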

Q8. You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing. To make sure you keep within budget you would like to implement a way for administrators in the Master account to have access to stop, delete and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal. 

A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account. 

B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts. 

C. Link the accounts using Consolidated Billing. This will give IAM Users in the Master account access to resources in the Dev and Test accounts. 

D. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access. 

Answer: D

Cross-account roles live in the accounts being accessed (Dev and Test); granting those roles full Admin permissions and trusting the Master account lets Master-account IAM users assume them to stop, delete, or terminate resources.
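A hedged sketch of option D in boto3, run inside the Dev or Test account: the role carries full admin rights, and its trust policy names the Master account (a placeholder ID here) as the principal allowed to assume it.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the Master account (placeholder ID) may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="MasterAdminAccess",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Full admin rights via the AWS-managed policy.
iam.attach_role_policy(RoleName="MasterAdminAccess",
                       PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess")
```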

Q9. Your company produces customer-commissioned, one-of-a-kind skiing helmets, combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA, across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time? 

A. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an auto-scaling group of G2 instances in a placement group. 

B. Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & metadata. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). 

C. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an auto-scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization). 

D. Use AWS Data Pipeline to manage movement of data & metadata and assessments. Use an auto-scaling group of G2 instances in a placement group. 

Answer: A

SWF coordinates workflows that mix human and automated steps, and G2 instances in a placement group deliver CUDA-capable GPUs with low-latency networking; C3 instances have no GPUs, and Data Pipeline is not designed around human tasks.
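The low-latency piece of option A is a cluster placement group; a one-call boto3 sketch with a placeholder name (the G2 instances would then be launched into it by the Auto Scaling group):

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups co-locate instances for low-latency, high-bandwidth
# networking between the GPU nodes.
ec2.create_placement_group(GroupName="cuda-assessment-cluster",
                           Strategy="cluster")
```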

Q10. You are designing a data leak prevention solution for your VPC environment. You want your VPC instances to be able to access software depots and distributions on the Internet for product updates. The depots and distributions are accessible via third party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet. Which of the following options would you consider? 

A. Implement security groups and configure outbound rules to only permit traffic to software depots. 

B. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes. 

C. Implement network access control lists to allow specific destinations, with an implicit deny all rule. 

D. Move all your instances into private VPC subnets. Remove default routes from all routing tables and add specific routes to the software depots and distributions only. 

Answer: B

Security groups and network ACLs filter by IP, protocol, and port, not by URL, and third-party CDN addresses change frequently; only a proxy can enforce URL-based egress rules.
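One concrete step from option B is removing the default route so instances can only reach the Internet through the proxy. A boto3 sketch with a placeholder route table ID:

```python
import boto3

ec2 = boto3.client("ec2")

# Delete the catch-all 0.0.0.0/0 route; outbound traffic must now go via the
# proxy, which enforces the URL-based rules.
ec2.delete_route(RouteTableId="rtb-0123456789abcdef0",
                 DestinationCidrBlock="0.0.0.0/0")
```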