
AWS-Certified-Solutions-Architect-Professional Exam Questions - Online Test




Amazon AWS-Certified-Solutions-Architect-Professional free demo questions below:

NEW QUESTION 1

A company is migrating a legacy application from an on-premises data center to AWS. The application uses MongoDB as a key-value database. According to the company's technical guidelines, all Amazon EC2 instances must be hosted in a private subnet without an internet connection. In addition, all connectivity between applications and databases must be encrypted. The database must be able to scale based on demand.
Which solution will meet these requirements?

  • A. Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the instance endpoint to connect to Amazon DocumentDB.
  • B. Create new Amazon DynamoDB tables for the application with on-demand capacity. Use a gateway VPC endpoint for DynamoDB to connect to the DynamoDB tables.
  • C. Create new Amazon DynamoDB tables for the application with on-demand capacity. Use an interface VPC endpoint for DynamoDB to connect to the DynamoDB tables.
  • D. Create new Amazon DocumentDB (with MongoDB compatibility) tables for the application with Provisioned IOPS volumes. Use the cluster endpoint to connect to Amazon DocumentDB.

Answer: A

Explanation:
A is the correct answer because it uses Amazon DocumentDB (with MongoDB compatibility), which preserves the application's MongoDB API, scales based on demand, and supports encryption in transit and at rest. Amazon DocumentDB is a fully managed document database service that is designed to be compatible with the MongoDB API and is optimized for storing, indexing, and querying JSON data. It supports encryption in transit using TLS and encryption at rest using AWS Key Management Service (AWS KMS). DocumentDB storage scales automatically as demand grows, up to 64 TiB per cluster. To connect to Amazon DocumentDB, you can use the instance endpoint, which connects to a specific instance in the cluster, or the cluster endpoint, which connects to the cluster's primary instance.
References:
https://docs.aws.amazon.com/documentdb/latest/developerguide/what-is.html
https://docs.aws.amazon.com/documentdb/latest/developerguide/security.encryption.html
https://docs.aws.amazon.com/documentdb/latest/developerguide/limits.html
https://docs.aws.amazon.com/documentdb/latest/developerguide/connecting.html
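
The encryption-in-transit requirement can be sketched as a connection-string builder. This is a minimal illustration, not part of the exam material: the cluster endpoint, credentials, and CA bundle filename below are hypothetical placeholders, and the query parameters follow the commonly documented DocumentDB TLS pattern.

```python
from urllib.parse import quote_plus

def documentdb_uri(user: str, password: str, endpoint: str, port: int = 27017) -> str:
    """Build a mongodb:// URI that enforces TLS for Amazon DocumentDB."""
    return (
        f"mongodb://{quote_plus(user)}:{quote_plus(password)}@{endpoint}:{port}/"
        "?tls=true"                     # encrypt application-to-database traffic
        "&tlsCAFile=global-bundle.pem"  # CA bundle downloaded from AWS (placeholder path)
        "&replicaSet=rs0"
        "&readPreference=secondaryPreferred"
        "&retryWrites=false"            # DocumentDB does not support retryable writes
    )

# Hypothetical cluster endpoint; a MongoDB driver such as pymongo would
# consume this URI via MongoClient(uri).
uri = documentdb_uri("appuser", "p@ssw0rd",
                     "my-cluster.cluster-abc123.us-east-1.docdb.amazonaws.com")
```
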

NEW QUESTION 2

A company is running an application in the AWS Cloud. Recent application metrics show inconsistent
response times and a significant increase in error rates. Calls to third-party services are causing the delays. Currently, the application calls third-party services synchronously by directly invoking an AWS Lambda function.
A solutions architect needs to decouple the third-party service calls and ensure that all the calls are eventually completed.
Which solution will meet these requirements?

  • A. Use an Amazon Simple Queue Service (Amazon SQS) queue to store events and invoke the Lambda function.
  • B. Use an AWS Step Functions state machine to pass events to the Lambda function.
  • C. Use an Amazon EventBridge rule to pass events to the Lambda function.
  • D. Use an Amazon Simple Notification Service (Amazon SNS) topic to store events and invoke the Lambda function.

Answer: A

Explanation:
Using an SQS queue to store events and invoke the Lambda function will decouple the third-party service calls and ensure that all the calls are eventually completed. SQS allows you to store messages in a queue and process them asynchronously, which eliminates the need for the application to wait for a response from the third-party service. The messages will be stored in the SQS queue until they are processed by the Lambda function, even if the Lambda function is currently unavailable or busy. This will ensure that all the calls are eventually completed, even if there are delays or errors.
AWS Step Functions state machines can also be used to pass events to the Lambda function, but it would require additional management and configuration to set up the state machine, which would increase operational overhead.
Amazon EventBridge rule can also be used to pass events to the Lambda function, but it would not provide the same level of decoupling and reliability as SQS.
Using an Amazon Simple Notification Service (Amazon SNS) topic to store events and invoke the Lambda function is similar to using SQS, but SNS is a publish-subscribe messaging service while SQS is a queue service. SNS is designed to push messages to multiple subscribers, whereas SQS durably queues messages for a single consumer, so SQS is more appropriate for this use case.
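
The decoupling described above can be sketched as an SQS-triggered Lambda handler. This is a hedged sketch: `call_third_party` is a stand-in for the real third-party call, and the partial-batch response shown assumes the event source mapping is configured with `ReportBatchItemFailures`.

```python
import json

def call_third_party(payload: dict) -> None:
    """Stand-in for the real third-party service call; assumed to raise on error."""
    if payload.get("fail"):
        raise RuntimeError("third-party service error")

def handler(event: dict, context=None) -> dict:
    """Process a batch of SQS records; report only the failed message IDs.

    Messages whose IDs are returned in batchItemFailures remain in the
    queue and are retried later, so every call is eventually completed.
    """
    failures = []
    for record in event.get("Records", []):
        try:
            call_third_party(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```
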
References:
AWS SQS
AWS Step Functions
AWS EventBridge
AWS SNS

NEW QUESTION 3

A live-events company is designing a scaling solution for its ticket application on AWS. The application has high peaks of utilization during sale events. Each sale event is a one-time event that is scheduled.
The application runs on Amazon EC2 instances that are in an Auto Scaling group. The application uses PostgreSQL for the database layer.
The company needs a scaling solution to maximize availability during the sale events. Which solution will meet these requirements?

  • A. Use a predictive scaling policy for the EC2 instances. Host the database on an Amazon Aurora PostgreSQL Serverless v2 Multi-AZ DB instance with automatically scaling read replicas. Create an AWS Step Functions state machine to run parallel AWS Lambda functions to pre-warm the database before a sale event. Create an Amazon EventBridge rule to invoke the state machine.
  • B. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon RDS for PostgreSQL Multi-AZ DB instance with automatically scaling read replicas. Create an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger read replica before a sale event. Fail over to the larger read replica. Create another EventBridge rule that invokes another Lambda function to scale down the read replica after the sale event.
  • C. Use a predictive scaling policy for the EC2 instances. Host the database on an Amazon RDS for PostgreSQL Multi-AZ DB instance with automatically scaling read replicas. Create an AWS Step Functions state machine to run parallel AWS Lambda functions to pre-warm the database before a sale event. Create an Amazon EventBridge rule to invoke the state machine.
  • D. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon Aurora PostgreSQL Multi-AZ DB cluster. Create an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger Aurora Replica before a sale event. Fail over to the larger Aurora Replica. Create another EventBridge rule that invokes another Lambda function to scale down the Aurora Replica after the sale event.

Answer: D

Explanation:
The correct answer is D. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon Aurora PostgreSQL Multi-AZ DB cluster. Create an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger Aurora Replica before a sale event. Fail over to the larger Aurora Replica. Create another EventBridge rule that invokes another Lambda function to scale down the Aurora Replica after the sale event.
This solution will meet the requirements of maximizing availability during the sale events. A scheduled scaling policy for the EC2 instances will allow the application to scale up and down according to the predefined schedule of the sale events. Hosting the database on an Amazon Aurora PostgreSQL Multi-AZ DB cluster will provide high availability and durability, as well as compatibility with PostgreSQL. Creating an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger Aurora Replica before a sale event will ensure that the database can handle the increased read traffic during the peak periods. Failing over to the larger Aurora Replica will make it the primary instance, which will also improve the write performance of the database. Creating another EventBridge rule that invokes another Lambda function to scale down the Aurora Replica after the sale event will reduce the cost and resources of the database.
Reference: [3], section “Scaling Amazon Aurora MySQL and PostgreSQL with Aurora Auto Scaling”
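
Because each sale is scheduled in advance, the scale-out and scale-in times can be computed from the event window. The sketch below only builds the parameter dictionaries; the group name, capacities, and the 30-minute warm-up lead are illustrative assumptions, and an SDK call such as boto3's `put_scheduled_update_group_action` would consume each dict.

```python
from datetime import datetime, timedelta

def sale_event_actions(group: str, sale_start: datetime, sale_end: datetime,
                       peak_capacity: int, baseline: int) -> list[dict]:
    """Build one scale-out and one scale-in scheduled action for a sale event."""
    warm_up = sale_start - timedelta(minutes=30)  # scale out before the sale opens
    return [
        {"AutoScalingGroupName": group, "ScheduledActionName": "sale-scale-out",
         "StartTime": warm_up.isoformat(), "DesiredCapacity": peak_capacity},
        {"AutoScalingGroupName": group, "ScheduledActionName": "sale-scale-in",
         "StartTime": sale_end.isoformat(), "DesiredCapacity": baseline},
    ]
```
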

NEW QUESTION 4

A company provides auction services for artwork and has users across North America and Europe. The company hosts its application on Amazon EC2 instances in the us-east-1 Region. Artists upload photos of their work as large-size, high-resolution image files from their mobile phones to a centralized Amazon S3 bucket created in the us-east-1 Region. The users in Europe are reporting slow performance for their image uploads.
How can a solutions architect improve the performance of the image upload process?

  • A. Redeploy the application to use S3 multipart uploads.
  • B. Create an Amazon CloudFront distribution and point to the application as a custom origin.
  • C. Configure the buckets to use S3 Transfer Acceleration.
  • D. Create an Auto Scaling group for the EC2 instances and create a scaling policy.

Answer: C

Explanation:
S3 Transfer Acceleration uses the Amazon CloudFront global network of edge locations to accelerate the transfer of data to and from S3 buckets. By enabling S3 Transfer Acceleration on the centralized S3 bucket, the users in Europe will experience faster uploads because their data is routed through the closest CloudFront edge location.
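
The acceleration endpoint has a fixed, documented shape, which is why enabling it requires no application redesign. The bucket and key below are hypothetical; with boto3 the equivalent is passing `Config(s3={"use_accelerate_endpoint": True})` to the client.

```python
def accelerate_url(bucket: str, key: str) -> str:
    """Virtual-hosted-style URL that routes the transfer through edge locations.

    Assumes Transfer Acceleration is enabled on the bucket and that the
    bucket name is DNS-compliant (no dots).
    """
    return f"https://{bucket}.s3-accelerate.amazonaws.com/{key}"
```
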

NEW QUESTION 5

A solutions architect is designing an AWS account structure for a company that consists of multiple teams. All the teams will work in the same AWS Region. The company needs a VPC that is connected to the on-premises network. The company expects less than 50 Mbps of total traffic to and from the on-premises network.
Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)

  • A. Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to each AWS account.
  • B. Create an AWS CloudFormation template that provisions a VPC and the required subnets. Deploy the template to a shared services account. Share the subnets by using AWS Resource Access Manager.
  • C. Use AWS Transit Gateway along with an AWS Site-to-Site VPN for connectivity to the on-premises network. Share the transit gateway by using AWS Resource Access Manager.
  • D. Use AWS Site-to-Site VPN for connectivity to the on-premises network.
  • E. Use AWS Direct Connect for connectivity to the on-premises network.

Answer: BD

NEW QUESTION 6

A company is refactoring its on-premises order-processing platform in the AWS Cloud. The platform includes a web front end that is hosted on a fleet of VMs, RabbitMQ to connect the front end to the backend, and a Kubernetes cluster to run a containerized backend system to process the orders. The company does not want to make any major changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
  • B. Create a custom AWS Lambda runtime to mimic the web server environment. Create an Amazon API Gateway API to replace the front-end web servers. Set up Amazon MQ to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.
  • C. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up Amazon MQ to replace the on-premises messaging queue. Install Kubernetes on a fleet of different EC2 instances to host the order-processing backend.
  • D. Create an AMI of the web server VM. Create an Amazon EC2 Auto Scaling group that uses the AMI and an Application Load Balancer. Set up an Amazon Simple Queue Service (Amazon SQS) queue to replace the on-premises messaging queue. Configure Amazon Elastic Kubernetes Service (Amazon EKS) to host the order-processing backend.

Answer: A

Explanation:
https://aws.amazon.com/about-aws/whats-new/2020/11/announcing-amazon-mq-rabbitmq/

NEW QUESTION 7

A company has developed APIs that use Amazon API Gateway with Regional endpoints. The APIs call AWS Lambda functions that use API Gateway authentication mechanisms. After a design review, a solutions architect identifies a set of APIs that do not require public access.
The solutions architect must design a solution to make the set of APIs accessible only from a VPC. All APIs need to be called with an authenticated user.
Which solution will meet these requirements with the LEAST amount of effort?

  • A. Create an internal Application Load Balancer (ALB). Create a target group. Select the Lambda function to call. Use the ALB DNS name to call the API from the VPC.
  • B. Remove the DNS entry that is associated with the API in API Gateway. Create a hosted zone in Amazon Route 53. Create a CNAME record in the hosted zone. Update the API in API Gateway with the CNAME record. Use the CNAME record to call the API from the VPC.
  • C. Update the API endpoint from Regional to private in API Gateway. Create an interface VPC endpoint in the VPC. Create a resource policy, and attach it to the API. Use the VPC endpoint to call the API from the VPC.
  • D. Deploy the Lambda functions inside the VPC. Provision an EC2 instance, and install an Apache server. From the Apache server, call the Lambda functions. Use the internal CNAME record of the EC2 instance to call the API from the VPC.

Answer: C

Explanation:
This solution requires the least amount of effort: update the API endpoint to private in API Gateway, create an interface VPC endpoint in the VPC, and then create a resource policy and attach it to the API. The set of APIs becomes accessible only from the VPC while the existing authentication mechanism stays intact. Reference:
https://aws.amazon.com/api-gateway/features/
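
The resource policy in this answer can be sketched as a deny-unless-from-VPC-endpoint document. The API ARN and endpoint ID are hypothetical placeholders; the statements follow the commonly documented pattern for private APIs, with caller authentication still enforced separately by API Gateway.

```python
import json

def private_api_policy(api_arn: str, vpce_id: str) -> str:
    """Return a resource policy that restricts invocation to one VPC endpoint."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # explicitly deny any call that does not arrive via the VPC endpoint
                "Effect": "Deny", "Principal": "*", "Action": "execute-api:Invoke",
                "Resource": api_arn,
                "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
            },
            {   # allow calls that do come through the endpoint
                "Effect": "Allow", "Principal": "*", "Action": "execute-api:Invoke",
                "Resource": api_arn,
            },
        ],
    }
    return json.dumps(policy)
```
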

NEW QUESTION 8

A company is migrating mobile banking applications to run on Amazon EC2 instances in a VPC. Backend service applications run in an on-premises data center.
The data center has an AWS Direct Connect connection into AWS. The applications that run in the VPC need to resolve DNS requests to an on-premises Active Directory domain that runs in the data center.
Which solution will meet these requirements with the LEAST administrative overhead?

  • A. Provision a set of EC2 instances across two Availability Zones in the VPC as caching DNS servers to resolve DNS queries from the application servers within the VPC.
  • B. Provision an Amazon Route 53 private hosted zone. Configure NS records that point to on-premises DNS servers.
  • C. Create DNS endpoints by using Amazon Route 53 Resolver. Add conditional forwarding rules to resolve DNS namespaces between the on-premises data center and the VPC.
  • D. Provision a new Active Directory domain controller in the VPC with a bidirectional trust between this new domain and the on-premises Active Directory domain.

Answer: C

NEW QUESTION 9

A company hosts an application on AWS. The application reads and writes objects that are stored in a single Amazon S3 bucket. The company must modify the application to deploy the application in two AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Set up an Amazon CloudFront distribution with the S3 bucket as an origin. Deploy the application to a second Region. Modify the application to use the CloudFront distribution. Use AWS Global Accelerator to access the data in the S3 bucket.
  • B. Create a new S3 bucket in a second Region. Set up bidirectional S3 Cross-Region Replication (CRR) between the original S3 bucket and the new S3 bucket. Configure an S3 Multi-Region Access Point that uses both S3 buckets. Deploy a modified application to both Regions.
  • C. Create a new S3 bucket in a second Region. Deploy the application in the second Region. Configure the application to use the new S3 bucket. Set up S3 Cross-Region Replication (CRR) from the original S3 bucket to the new S3 bucket.
  • D. Set up an S3 gateway endpoint with the S3 bucket as an origin. Deploy the application to a second Region. Modify the application to use the new S3 gateway endpoint. Use S3 Intelligent-Tiering on the S3 bucket.

Answer: B

NEW QUESTION 10

A solutions architect at a large company needs to set up network security for outbound traffic to the internet from all AWS accounts within an organization in AWS Organizations. The organization has more than 100 AWS accounts, and the accounts route to each other by using a centralized AWS Transit Gateway. Each account has both an internet gateway and a NAT gateway for outbound traffic to the internet. The company deploys resources only into a single AWS Region.
The company needs the ability to add centrally managed rule-based filtering on all outbound traffic to the internet for all AWS accounts in the organization. The peak load of outbound traffic will not exceed 25 Gbps in each Availability Zone.
Which solution meets these requirements?

  • A. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Create an Auto Scaling group of Amazon EC2 instances that run an open-source internet proxy for rule-based filtering across all Availability Zones in the Region. Modify all default routes to point to the proxy's Auto Scaling group.
  • B. Create a new VPC for outbound traffic to the internet. Connect the existing transit gateway to the new VPC. Configure a new NAT gateway. Use an AWS Network Firewall firewall for rule-based filtering. Create Network Firewall endpoints in each Availability Zone. Modify all default routes to point to the Network Firewall endpoints.
  • C. Create an AWS Network Firewall firewall for rule-based filtering in each AWS account. Modify all default routes to point to the Network Firewall firewalls in each account.
  • D. In each AWS account, create an Auto Scaling group of network-optimized Amazon EC2 instances that run an open-source internet proxy for rule-based filtering. Modify all default routes to point to the proxy's Auto Scaling group.

Answer: B

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/
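
A minimal sketch of the centrally managed rule-based filtering in the chosen answer: a stateful rule group that deny-lists outbound domains. The structure mirrors the `RulesSourceList` shape accepted by Network Firewall's CreateRuleGroup API; the domain list is a hypothetical example, and boto3's `create_rule_group` would consume the dict as its `RuleGroup` parameter.

```python
def domain_denylist_rule_group(domains: list[str]) -> dict:
    """Describe a stateful rule group that blocks traffic to listed domains."""
    return {
        "RulesSource": {
            "RulesSourceList": {
                "Targets": domains,                      # e.g. [".example.com"]
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"], # match SNI and Host header
                "GeneratedRulesType": "DENYLIST",        # drop traffic to these domains
            }
        }
    }
```
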

NEW QUESTION 11

A weather service provides high-resolution weather maps from a web application hosted on AWS in the
eu-west-1 Region. The weather maps are updated frequently and stored in Amazon S3 along with static HTML content. The web application is fronted by Amazon CloudFront.
The company recently expanded to serve users in the us-east-1 Region, and these new users report that viewing their respective weather maps is slow from time to time.
Which combination of steps will resolve the us-east-1 performance issues? (Choose two.)

  • A. Configure the AWS Global Accelerator endpoint for the S3 bucket in eu-west-1. Configure endpoint groups for TCP ports 80 and 443 in us-east-1.
  • B. Create a new S3 bucket in us-east-1. Configure S3 Cross-Region Replication to synchronize from the S3 bucket in eu-west-1.
  • C. Use Lambda@Edge to modify requests from North America to use the S3 Transfer Acceleration endpoint in us-east-1.
  • D. Use Lambda@Edge to modify requests from North America to use the S3 bucket in us-east-1.
  • E. Configure the AWS Global Accelerator endpoint for us-east-1 as an origin on the CloudFront distribution. Use Lambda@Edge to modify requests from North America to use the new origin.

Answer: BD

Explanation:
https://aws.amazon.com/about-aws/whats-new/2016/04/transfer-files-into-amazon-s3-up-to-300-percent-faster/

NEW QUESTION 12

A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MYSQL, and Oracle databases. There are many dependent services hosted either in the same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration.
Which tools or services should solutions architect use to plan the cloud migration? (Choose three.)

  • A. AWS Application Discovery Service
  • B. AWS SMS
  • C. AWS X-Ray
  • D. AWS Cloud Adoption Readiness Tool (CART)
  • E. Amazon Inspector
  • F. AWS Migration Hub

Answer: ADF

NEW QUESTION 13

A company needs to build a disaster recovery (DR) solution for its ecommerce website. The web application is hosted on a fleet of t3.large Amazon EC2 instances and uses an Amazon RDS for MySQL DB instance. The EC2 instances are in an Auto Scaling group that extends across multiple Availability Zones.
In the event of a disaster, the web application must fail over to the secondary environment with an RPO of 30 seconds and an RTO of 10 minutes.
Which solution will meet these requirements MOST cost-effectively?

  • A. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Recover the EC2 instances from the latest EC2 backup. Use an Amazon Route 53 geolocation routing policy to automatically fail over to the DR Region in the event of a disaster.
  • B. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create a cross-Region read replica for the DB instance. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the EC2 instances at the minimum capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster. Increase the desired capacity of the Auto Scaling group.
  • C. Set up a backup plan in AWS Backup to create cross-Region backups for the EC2 instances and the DB instance. Create a cron expression to back up the EC2 instances and the DB instance every 30 seconds to the DR Region. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Manually restore the backed-up data on new instances. Use an Amazon Route 53 simple routing policy to automatically fail over to the DR Region in the event of a disaster.
  • D. Use infrastructure as code (IaC) to provision the new infrastructure in the DR Region. Create an Amazon Aurora global database. Set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region. Run the Auto Scaling group of EC2 instances at full capacity in the DR Region. Use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster.

Answer: B

Explanation:
The company should use infrastructure as code (IaC) to provision the new infrastructure in the DR Region, create a cross-Region read replica for the DB instance, set up AWS Elastic Disaster Recovery to continuously replicate the EC2 instances to the DR Region, run the EC2 instances at the minimum capacity in the DR Region, use an Amazon Route 53 failover routing policy to automatically fail over to the DR Region in the event of a disaster, and increase the desired capacity of the Auto Scaling group after failover. This solution will meet the requirements most cost-effectively because AWS Elastic Disaster Recovery (AWS DRS) is a service that minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. AWS DRS enables RPOs of seconds and RTOs of minutes. AWS DRS continuously replicates data from the source servers to a staging area subnet in the DR Region, where it uses low-cost storage and minimal compute resources to maintain ongoing replication. In the event of a disaster, AWS DRS automatically converts the servers to boot and run natively on AWS and launches recovery instances on AWS within minutes. By using AWS DRS, the company can save costs by removing idle recovery site resources and paying for the full disaster recovery site only when needed. By creating a cross-Region read replica for the DB instance, the company can have a standby copy of its primary database in a different AWS Region. By using infrastructure as code (IaC), the company can provision the new infrastructure in the DR Region in an automated and consistent way. By using an Amazon Route 53 failover routing policy, the company can route traffic to a resource that is healthy or to another resource when the first resource becomes unavailable.
The other options are not correct because:
Using AWS Backup to create cross-Region backups for the EC2 instances and the DB instance would not meet the RPO and RTO requirements. AWS Backup is a service that enables you to centralize and automate data protection across AWS services, in your account and across accounts. However, AWS Backup does not provide continuous replication or fast recovery; it creates backups at scheduled intervals and requires manual restoration. Creating backups every 30 seconds would also incur high costs and consume significant network bandwidth.
References:
https://aws.amazon.com/disaster-recovery/
https://docs.aws.amazon.com/drs/latest/userguide/what-is-drs.html
https://aws.amazon.com/cloudformation/
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
https://aws.amazon.com/backup/
https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html
https://aws.amazon.com/data-exchange/
https://docs.aws.amazon.com/redshift/latest/dg/datashare-overview.html
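
The Route 53 failover routing in the chosen answer boils down to a PRIMARY/SECONDARY record pair. The sketch below only builds the record-set dictionaries; the names, DNS targets, and health check ID are hypothetical, and boto3's `change_resource_record_sets` would consume them.

```python
def failover_pair(name: str, primary_dns: str, secondary_dns: str,
                  health_check_id: str) -> list[dict]:
    """Describe PRIMARY and SECONDARY record sets for failover routing.

    Route 53 answers with the primary target while its health check
    passes, and switches to the secondary target when it fails.
    """
    common = {"Name": name, "Type": "CNAME", "TTL": 60}
    return [
        {**common, "SetIdentifier": "primary", "Failover": "PRIMARY",
         "HealthCheckId": health_check_id,  # fail over when this check fails
         "ResourceRecords": [{"Value": primary_dns}]},
        {**common, "SetIdentifier": "secondary", "Failover": "SECONDARY",
         "ResourceRecords": [{"Value": secondary_dns}]},
    ]
```
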

NEW QUESTION 14

A company has an application in the AWS Cloud. The application runs on a fleet of 20 Amazon EC2 instances. The EC2 instances are persistent and store data on multiple attached Amazon Elastic Block Store (Amazon EBS) volumes.
The company must maintain backups in a separate AWS Region. The company must be able to recover the EC2 instances and their configuration within 1 business day, with loss of no more than 1 day's worth of data. The company has limited staff and needs a backup solution that optimizes operational efficiency and cost. The company already has created an AWS CloudFormation template that can deploy the required network configuration in a secondary Region.
Which solution will meet these requirements?

  • A. Create a second CloudFormation template that can re-create the EC2 instances in the secondary Region. Run daily multivolume snapshots by using an AWS Systems Manager Automation runbook. Copy the snapshots to the secondary Region. In the event of a failure, launch the CloudFormation templates, restore the EBS volumes from snapshots, and transfer usage to the secondary Region.
  • B. Use Amazon Data Lifecycle Manager (Amazon DLM) to create daily multivolume snapshots of the EBS volumes. In the event of a failure, launch the CloudFormation template and use Amazon DLM to restore the EBS volumes and transfer usage to the secondary Region.
  • C. Use AWS Backup to create a scheduled daily backup plan for the EC2 instances. Configure the backup task to copy the backups to a vault in the secondary Region. In the event of a failure, launch the CloudFormation template, restore the instance volumes and configurations from the backup vault, and transfer usage to the secondary Region.
  • D. Deploy EC2 instances of the same size and configuration to the secondary Region. Configure AWS DataSync daily to copy data from the primary Region to the secondary Region. In the event of a failure, launch the CloudFormation template and transfer usage to the secondary Region.

Answer: C

Explanation:
Using AWS Backup to create a scheduled daily backup plan for the EC2 instances will enable taking snapshots of the EC2 instances and their attached EBS volumes. Configuring the backup task to copy the backups to a vault in the secondary Region will maintain backups in a separate Region. In the event of a failure, launching the CloudFormation template will deploy the network configuration in the secondary Region, restoring the instance volumes and configurations from the backup vault will recover the EC2 instances and their data, and transferring usage to the secondary Region will resume operations.
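
The backup plan in the chosen answer can be sketched as a rule with a cross-Region copy action. The vault names, ARN, schedule, and retention below are hypothetical; the dict mirrors the `BackupPlan` shape accepted by AWS Backup's CreateBackupPlan API, which boto3's `create_backup_plan` would consume.

```python
def daily_cross_region_plan(vault: str, dr_vault_arn: str) -> dict:
    """Describe a daily backup rule that copies each backup to a DR-Region vault."""
    return {
        "BackupPlanName": "daily-ec2-dr",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": vault,
            "ScheduleExpression": "cron(0 3 * * ? *)",  # every day at 03:00 UTC
            "Lifecycle": {"DeleteAfterDays": 7},
            "CopyActions": [{
                "DestinationBackupVaultArn": dr_vault_arn,  # vault in the DR Region
                "Lifecycle": {"DeleteAfterDays": 7},
            }],
        }],
    }
```
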

NEW QUESTION 15

A company has an organization in AWS Organizations. The company is using AWS Control Tower to deploy a landing zone for the organization. The company wants to implement governance and policy enforcement. The company must implement a policy that will detect Amazon RDS DB instances that are not encrypted at rest in the company’s production OU.
Which solution will meet this requirement?

  • A. Turn on mandatory guardrails in AWS Control Tower. Apply the mandatory guardrails to the production OU.
  • B. Enable the appropriate guardrail from the list of strongly recommended guardrails in AWS Control Tower. Apply the guardrail to the production OU.
  • C. Use AWS Config to create a new mandatory guardrail. Apply the rule to all accounts in the production OU.
  • D. Create a custom SCP in AWS Control Tower. Apply the SCP to the production OU.

Answer: B

Explanation:
AWS Control Tower provides a set of "strongly recommended guardrails" that can be enabled to implement governance and policy enforcement. One of these is a detective guardrail that flags Amazon RDS DB instances that are not encrypted at rest. By enabling this guardrail and applying it to the production OU, the company can enforce encryption for RDS instances in the production environment.
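Under the hood, Control Tower detective guardrails are implemented as AWS Config rules. A minimal sketch of the equivalent standalone check, using the AWS managed rule identifier RDS_STORAGE_ENCRYPTED (the ConfigRuleName below is an assumption):

```python
# Sketch: the detective check behind this guardrail corresponds to the AWS
# managed Config rule RDS_STORAGE_ENCRYPTED. The rule name is illustrative.
def rds_encryption_config_rule() -> dict:
    return {
        "ConfigRuleName": "detect-unencrypted-rds",
        "Description": "Flags RDS DB instances whose storage is not encrypted at rest.",
        "Source": {
            "Owner": "AWS",  # AWS managed rule, not a custom Lambda evaluator
            "SourceIdentifier": "RDS_STORAGE_ENCRYPTED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::RDS::DBInstance"]},
    }


rule = rds_encryption_config_rule()
# Would be deployed with boto3.client("config").put_config_rule(ConfigRule=rule)
```

With Control Tower, enabling the guardrail on the OU deploys and manages this check for you; the sketch only shows what a hand-rolled equivalent would look like.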

NEW QUESTION 16

A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company's Node.js API servers on Amazon EC2 instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume.
The number of sensors the company has deployed in the field has increased over time and is expected to grow significantly. The API servers are consistently overloaded, and RDS metrics show high write latency.
Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-efficient? (Select TWO.)

  • A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume's IOPS
  • B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas
  • C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data
  • D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load
  • E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance

Answer: CE

Explanation:
Option C is correct because leveraging Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data resolves the issues permanently and enables growth as new sensors are provisioned. Amazon Kinesis Data Streams is a serverless streaming data service that simplifies the capture, processing, and storage of data streams at any scale. Kinesis Data Streams can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latency. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda can be triggered by Kinesis Data Streams events and process the data records in real time, and it scales automatically based on the incoming data volume. By using Kinesis Data Streams and Lambda, the company can reduce the load on the API servers and improve the performance and scalability of the data ingestion and processing layer.
Option E is correct because re-architecting the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance resolves the issues permanently and enables growth as new sensors are provisioned. Amazon DynamoDB is a fully managed key-value and document database that delivers single-digit millisecond performance at any scale. DynamoDB supports auto scaling, which automatically adjusts read and write capacity based on actual traffic patterns, and on-demand capacity mode, which instantly accommodates up to double the previous peak traffic on a table. By using DynamoDB instead of an RDS MySQL DB instance, the company can eliminate the high write latency and improve the scalability and performance of the database tier.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html
https://docs.aws.amazon.com/streams/latest/dev/introduction.html
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
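A minimal sketch of the Lambda consumer described in option C, assuming a hypothetical SensorReadings DynamoDB table and a payload shape of sensor_id/timestamp/value; the table object is injectable so the handler can be exercised without AWS access:

```python
import base64
import json


def lambda_handler(event, context, table=None):
    """Consume a batch of Kinesis records and persist readings to DynamoDB.

    The table name and the payload fields (sensor_id, timestamp, value) are
    illustrative assumptions, not details from the question. `table` is
    injectable so the handler can be tested without AWS credentials.
    """
    if table is None:
        import boto3  # resolved only when running inside Lambda
        table = boto3.resource("dynamodb").Table("SensorReadings")

    written = 0
    with table.batch_writer() as batch:
        for record in event["Records"]:
            # Kinesis delivers the producer payload base64-encoded
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            batch.put_item(Item={
                "sensor_id": payload["sensor_id"],   # partition key
                "timestamp": payload["timestamp"],   # sort key
                "value": str(payload["value"]),      # stringified for the sketch
            })
            written += 1
    return {"written": written}
```

Because Lambda polls the stream and scales per shard, the API servers are no longer on the write path for sensor data.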

NEW QUESTION 17

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data from an external vendor API, stores data in an Amazon DynamoDB global table, and retrieves data from the DynamoDB global table. The API key for the vendor's API is stored in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key Management Service (AWS KMS). The company has deployed its own API into a single AWS Region.
A solutions architect needs to change the API components of the company's API to ensure that the components can run across multiple Regions in an active-active configuration.
Which combination of changes will meet this requirement with the LEAST operational overhead? (Choose three.)

  • A. Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.
  • B. Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.
  • C. Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region's replicated secret, select the appropriate KMS key.
  • D. Create a new AWS managed KMS key in each in-scope Region. Convert an existing key to a multi-Region key. Use the multi-Region key in other Regions.
  • E. Create a new Secrets Manager secret in each in-scope Region. Copy the secret value from the existing Region to the new secret in each in-scope Region.
  • F. Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda function that is deployed in each Region as the backend for the multi-Region API.

Answer: ABC

Explanation:
The combination of changes that will meet the requirement with the least operational overhead are:
• A. Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue answer routing policy.
• B. Create a new KMS multi-Region customer managed key. Create a new KMS customer managed replica key in each in-scope Region.
• C. Replicate the existing Secrets Manager secret to other Regions. For each in-scope Region’s replicated secret, select the appropriate KMS key.
These changes will enable the company to run its API in an active-active configuration across multiple Regions while minimizing the complexity and cost of managing the secrets and keys.
• A. This change allows the company to use Route 53 to distribute traffic across multiple Regional API endpoints based on the availability and latency of each endpoint, improving the performance and availability of the API for global customers.
• B. This change allows the company to use KMS multi-Region keys, which are KMS keys in different Regions that can be used interchangeably. This simplifies the encryption and decryption of secrets across Regions, because the same key material and key ID can be used in any Region.
• C. This change allows the company to use Secrets Manager replication, which replicates the encrypted secret data and metadata across the specified Regions. This ensures that the secrets are consistent and accessible in every Region and that any update made to the primary secret is propagated to the replica secrets automatically.
References:
Creating a regional API endpoint - Amazon API Gateway
Multivalue answer routing policy - Amazon Route 53
Multi-Region keys in AWS KMS - AWS Key Management Service
Creating multi-Region keys - AWS Key Management Service
Replicate an AWS Secrets Manager secret to other AWS Regions
How to replicate secrets in AWS Secrets Manager to multiple Regions | AWS Security Blog
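A hedged sketch of change C: when calling Secrets Manager's ReplicateSecretToRegions API, each target Region is paired with the ID of the replica KMS key created in change B. All Region names, key IDs, and the secret name below are placeholders:

```python
# Sketch: build the AddReplicaRegions parameter, pairing each target Region
# with its replica of the multi-Region customer managed key. IDs are placeholders.
def replica_regions(region_to_kms_key: dict) -> list:
    return [
        {"Region": region, "KmsKeyId": key_id}
        for region, key_id in sorted(region_to_kms_key.items())
    ]


replicas = replica_regions({
    "eu-west-1": "mrk-1234examplekeyid",       # replica key in eu-west-1
    "ap-southeast-2": "mrk-1234examplekeyid",  # replica key in ap-southeast-2
})
# Would be passed to boto3.client("secretsmanager").replicate_secret_to_regions(
#     SecretId="vendor/api-key", AddReplicaRegions=replicas)
```

Because multi-Region replica keys share the same key ID (the `mrk-` prefix), each replicated secret can be decrypted locally in its Region without cross-Region KMS calls.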

NEW QUESTION 18

A company runs an application in an on-premises data center. The application gives users the ability to upload media files. The files persist in a file server. The web application has many users. The application server is overutilized, which causes data uploads to fail occasionally. The company frequently adds new storage to the file server. The company wants to resolve these challenges by migrating the application to AWS.
Users from across the United States and Canada access the application. Only authenticated users should have the ability to access the application to upload files. The company will consider a solution that refactors the application, and the company needs to accelerate application development.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use AWS Application Migration Service to migrate the application server to Amazon EC2 instances. Create an Auto Scaling group for the EC2 instances. Use an Application Load Balancer to distribute the requests. Modify the application to use Amazon S3 to persist the files. Use Amazon Cognito to authenticate users.
  • B. Use AWS Application Migration Service to migrate the application server to Amazon EC2 instances. Create an Auto Scaling group for the EC2 instances. Use an Application Load Balancer to distribute the requests. Set up AWS IAM Identity Center (AWS Single Sign-On) to give users the ability to sign in to the application. Modify the application to use Amazon S3 to persist the files.
  • C. Create a static website for uploads of media files. Store the static assets in Amazon S3. Use AWS AppSync to create an API. Use AWS Lambda resolvers to upload the media files to Amazon S3. Use Amazon Cognito to authenticate users.
  • D. Use AWS Amplify to create a static website for uploads of media files. Use Amplify Hosting to serve the website through Amazon CloudFront. Use Amazon S3 to store the uploaded media files. Use Amazon Cognito to authenticate users.

Answer: D

Explanation:
The company should use AWS Amplify to create a static website for uploads of media files, use Amplify Hosting to serve the website through Amazon CloudFront, use Amazon S3 to store the uploaded media files, and use Amazon Cognito to authenticate users. This solution meets the requirements with the least operational overhead because AWS Amplify is a complete solution that lets frontend web and mobile developers easily build, ship, and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as use cases evolve, and no cloud expertise is needed. By using AWS Amplify, the company can refactor the application to a serverless architecture that reduces operational complexity and costs. AWS Amplify offers the following features and benefits:
• Amplify Studio: A visual interface that enables you to build and deploy a full-stack app quickly, including the frontend UI and backend.
• Amplify CLI: A local toolchain that enables you to configure and manage an app backend with just a few commands.
• Amplify Libraries: Open-source client libraries that enable you to build cloud-powered mobile and web apps.
• Amplify UI Components: An open-source design system with cloud-connected components for building feature-rich apps fast.
• Amplify Hosting: Fully managed CI/CD and hosting for fast, secure, and reliable static and server-side rendered apps.
By using AWS Amplify to create a static website for uploads of media files, the company can leverage Amplify Studio to visually build a pixel-perfect UI and connect it to a cloud backend in clicks. By using Amplify Hosting to serve the website through Amazon CloudFront, the company can easily deploy its web app to the fast, secure, and reliable AWS content delivery network (CDN), with hundreds of points of presence globally. By using Amazon S3 to store the uploaded media files, the company benefits from a highly scalable, durable, and cost-effective object storage service that can handle any amount of data. By using Amazon Cognito to authenticate users, the company can add user sign-up, sign-in, and access control to its web app with a fully managed service that scales to support millions of users.
The other options are not correct because:
• Using AWS Application Migration Service to migrate the application server to Amazon EC2 instances would not refactor the application or accelerate development. AWS Application Migration Service (AWS MGN) enables you to migrate physical servers, virtual machines (VMs), or cloud servers from any source infrastructure to AWS without requiring agents or specialized tools. However, this would not address the overutilization and upload failures, and it would not reduce operational overhead or costs compared to a serverless architecture.
• Creating a static website for uploads of media files and using AWS AppSync to create an API would not be as simple or fast as using AWS Amplify. AWS AppSync enables you to create flexible APIs for securely accessing, manipulating, and combining data from one or more data sources. However, this would require more configuration and management than using Amplify Studio and Amplify Hosting, and it would not by itself provide authentication features like Amazon Cognito.
• Setting up AWS IAM Identity Center (AWS Single Sign-On) to give users the ability to sign in to the application would not be as suitable as using Amazon Cognito. IAM Identity Center enables you to centrally manage SSO access and user permissions across multiple AWS accounts and business applications. It is designed for managing workforce access to resources, not for authenticating end users of web or mobile apps.
References:
https://aws.amazon.com/amplify/
https://aws.amazon.com/s3/
https://aws.amazon.com/cognito/
https://aws.amazon.com/mgn/
https://aws.amazon.com/appsync/
https://aws.amazon.com/single-sign-on/

NEW QUESTION 19

An online retail company is migrating its legacy on-premises .NET application to AWS. The application runs on load-balanced frontend web servers, load-balanced application servers, and a Microsoft SQL Server database.
The company wants to use AWS managed services where possible and does not want to rewrite the application. A solutions architect needs to implement a solution to resolve scaling issues and minimize licensing costs as the application scales.
Which solution will meet these requirements MOST cost-effectively?

  • A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer for the web tier and for the application tier. Use Amazon Aurora PostgreSQL with Babelfish turned on to replatform the SQL Server database.
  • B. Create images of all the servers by using AWS Database Migration Service (AWS DMS). Deploy Amazon EC2 instances that are based on the on-premises imports. Deploy the instances in an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon DynamoDB as the database tier.
  • C. Containerize the web frontend tier and the application tier. Provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon RDS for SQL Server to host the database.
  • D. Separate the application functions into AWS Lambda functions. Use Amazon API Gateway for the web frontend tier and the application tier. Migrate the data to Amazon S3. Use Amazon Athena to query the data.

Answer: A

Explanation:
Deploying the web tier and the application tier on Amazon EC2 instances in Auto Scaling groups behind Application Load Balancers resolves the scaling issues without rewriting the .NET application. Replatforming the Microsoft SQL Server database to Amazon Aurora PostgreSQL with Babelfish turned on minimizes licensing costs as the application scales, because Aurora PostgreSQL carries no SQL Server license fees. Babelfish for Aurora PostgreSQL understands the SQL Server wire protocol (TDS) and T-SQL, so the application can keep using its existing SQL Server drivers and queries with few or no code changes. This combination uses AWS managed services where possible, avoids a rewrite, and is the most cost-effective of the options.
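Because Babelfish for Aurora PostgreSQL accepts connections over the SQL Server TDS port (1433 by default), option A's replatforming typically reduces to a host-name change in the .NET connection string. The endpoints, database name, and credentials below are placeholders, not values from the question:

```
# Before (on-premises SQL Server; all values are placeholders)
Server=sqlserver.corp.example.com,1433;Database=AppDb;User Id=app;Password=...;

# After (Babelfish TDS endpoint of the Aurora PostgreSQL cluster)
Server=my-cluster.cluster-abc123example.us-east-1.rds.amazonaws.com,1433;Database=AppDb;User Id=app;Password=...;
```

The application continues to use its existing SQL Server client driver; only the server endpoint changes.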

NEW QUESTION 20

A global manufacturing company plans to migrate the majority of its applications to AWS. However, the company is concerned about applications that need to remain within a specific country or in the company's central on-premises data center because of data regulatory requirements or requirements for latency of single-digit milliseconds. The company also is concerned about the applications that it hosts in some of its factory sites, where limited network infrastructure exists.
The company wants a consistent developer experience so that its developers can build applications once and deploy on premises, in the cloud, or in a hybrid architecture.
The developers must be able to use the same tools, APIs, and services that are familiar to them. Which solution will provide a consistent hybrid experience to meet these requirements?

  • A. Migrate all applications to the closest AWS Region that is compliant. Set up an AWS Direct Connect connection between the central on-premises data center and AWS. Deploy a Direct Connect gateway.
  • B. Use AWS Snowball Edge Storage Optimized devices for the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds. Retain the devices on premises. Deploy AWS Wavelength to host the workloads in the factory sites.
  • C. Install AWS Outposts for the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds. Use AWS Snowball Edge Compute Optimized devices to host the workloads in the factory sites.
  • D. Migrate the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds to an AWS Local Zone. Deploy AWS Wavelength to host the workloads in the factory sites.

Answer: C

Explanation:
Installing AWS Outposts for the applications that have data regulatory requirements or requirements for latency of single-digit milliseconds provides a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. AWS Outposts allows customers to run some AWS services locally and connect to a broad range of services available in the local AWS Region, giving developers the consistent experience the company wants. Using AWS Snowball Edge Compute Optimized devices to host the workloads in the factory sites provides local compute and storage resources for locations with limited network infrastructure. AWS Snowball Edge devices can run Amazon EC2 instances and AWS Lambda functions locally and sync data with AWS when network connectivity is available.

NEW QUESTION 21
......

P.S. Dumps-hub.com is now offering a 100% pass guarantee on AWS-Certified-Solutions-Architect-Professional dumps! All AWS-Certified-Solutions-Architect-Professional exam questions have been updated with correct answers: https://www.dumps-hub.com/AWS-Certified-Solutions-Architect-Professional-dumps.html (483 New Questions)