
AWS-Certified-Solutions-Architect-Professional Exam Questions - Online Test



Free Amazon AWS-Certified-Solutions-Architect-Professional demo questions below:

NEW QUESTION 1

A research company is running daily simulations in the AWS Cloud to meet high demand. The simulations run on several hundred Amazon EC2 instances that are based on Amazon Linux 2. Occasionally, a simulation gets stuck and requires a cloud operations engineer to solve the problem by connecting to an EC2 instance through SSH.
Company policy states that no EC2 instance can use the same SSH key and that all connections must be logged in AWS CloudTrail.
How can a solutions architect meet these requirements?

  • A. Launch new EC2 instances, and generate an individual SSH key for each instance. Store the SSH key in AWS Secrets Manager. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the GetSecretValue action. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
  • B. Create an AWS Systems Manager document to run commands on EC2 instances to set a new unique SSH key. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement to run the Systems Manager document. Instruct the engineers to run the document to set an SSH key and to connect through any SSH client.
  • C. Launch new EC2 instances without setting up any SSH key for the instances. Set up EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the engineers to connect to the instance by using a browser-based SSH client from the EC2 console.
  • D. Set up AWS Secrets Manager to store the EC2 SSH key. Create a new AWS Lambda function to create a new SSH key and to call AWS Systems Manager Session Manager to set the SSH key on the EC2 instance. Configure Secrets Manager to use the Lambda function for automatic rotation once daily. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.

Answer: C
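Answer C hinges on the ec2-instance-connect:SendSSHPublicKey action, which pushes a short-lived public key to the instance and is recorded as an API call in AWS CloudTrail. A minimal sketch of the engineers' IAM policy, built as a plain Python dict; the instance ARN and OS user below are illustrative assumptions, not values from the question:

```python
import json

# Hypothetical account/region values for illustration only.
INSTANCE_ARN = "arn:aws:ec2:us-east-1:123456789012:instance/*"

def instance_connect_policy(os_user="ec2-user"):
    """Build an IAM policy document allowing EC2 Instance Connect pushes.

    Each SendSSHPublicKey call uploads an ephemeral public key (valid for
    about 60 seconds), so no instance keeps a long-lived shared SSH key,
    and every connection attempt is logged in AWS CloudTrail.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "ec2-instance-connect:SendSSHPublicKey",
                "Resource": INSTANCE_ARN,
                # Restrict which OS user the engineers may connect as.
                "Condition": {"StringEquals": {"ec2:osuser": os_user}},
            }
        ],
    }

print(json.dumps(instance_connect_policy(), indent=2))
```

This is a sketch of the policy shape, not a drop-in document; the resource scope would normally be narrowed to the simulation instances by tag or ID.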

NEW QUESTION 2

A software-as-a-service (SaaS) provider exposes APIs through an Application Load Balancer (ALB). The ALB connects to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that is deployed in the
us-east-1 Region. The exposed APIs use a few non-standard REST methods: LINK, UNLINK, LOCK, and UNLOCK.
Users outside the United States are reporting long and inconsistent response times for these APIs. A solutions architect needs to resolve this problem with a solution that minimizes operational overhead.
Which solution meets these requirements?

  • A. Add an Amazon CloudFront distribution. Configure the ALB as the origin.
  • B. Add an Amazon API Gateway edge-optimized API endpoint to expose the APIs. Configure the ALB as the target.
  • C. Add an accelerator in AWS Global Accelerator. Configure the ALB as the origin.
  • D. Deploy the APIs to two additional AWS Regions: eu-west-1 and ap-southeast-2. Add latency-based routing records in Amazon Route 53.

Answer: C

Explanation:
Adding an accelerator in AWS Global Accelerator improves API performance for both local and global users. Global Accelerator routes traffic over the AWS global network to the optimal regional endpoint based on health, client location, and routing policies, and configuring the ALB as the accelerator's endpoint connects the accelerator to the load balancer that exposes the APIs. Because Global Accelerator operates at the network layer and passes traffic through unmodified, it supports non-standard REST methods such as LINK, UNLINK, LOCK, and UNLOCK, which CloudFront and API Gateway do not fully support.
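The accelerator-to-ALB wiring can be sketched as a plain data structure: an accelerator owns listeners, each listener owns per-Region endpoint groups, and the ALB is registered as an endpoint. The names and ARN below are placeholders for illustration:

```python
# Illustrative model of a Global Accelerator configuration; the ARN and
# names are placeholders, not real resources.
accelerator = {
    "Name": "saas-api-accelerator",
    "IpAddressType": "IPV4",
    "Listeners": [
        {
            # Global Accelerator forwards TCP bytes unmodified, which is
            # why non-standard HTTP methods (LINK, UNLINK, LOCK, UNLOCK)
            # pass straight through to the ALB.
            "Protocol": "TCP",
            "PortRanges": [{"FromPort": 443, "ToPort": 443}],
            "EndpointGroups": [
                {
                    "Region": "us-east-1",
                    "Endpoints": [
                        {
                            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:"
                                          "123456789012:loadbalancer/app/api-alb/abc123",
                            "Weight": 128,
                        }
                    ],
                }
            ],
        }
    ],
}

listener = accelerator["Listeners"][0]
print(listener["Protocol"], listener["EndpointGroups"][0]["Region"])
```

The two static anycast IP addresses the accelerator provides are what shorten the path for users far from us-east-1: traffic enters the AWS network at the nearest edge location instead of crossing the public internet.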

NEW QUESTION 3

A company recently deployed an application on AWS. The application uses Amazon DynamoDB. The company measured the application load and configured the RCUs and WCUs on the DynamoDB table to
match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The application load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.
A solutions architect needs to implement a solution to minimize the cost of the table. Which solution will meet these requirements?

  • A. Use AWS Application Auto Scaling to increase capacity during the peak period. Purchase reserved RCUs and WCUs to match the average load.
  • B. Configure on-demand capacity mode for the table.
  • C. Configure DynamoDB Accelerator (DAX) in front of the table. Reduce the provisioned read capacity to match the new peak load on the table.
  • D. Configure DynamoDB Accelerator (DAX) in front of the table. Configure on-demand capacity mode for the table.

Answer: A

Explanation:
This solution meets the requirements by using Application Auto Scaling to automatically increase capacity during the weekly peak period, which absorbs the doubled load. Purchasing reserved RCUs and WCUs sized to the average load minimizes the cost of the table for the rest of the week, when the load is close to the average. DAX would not help because the workload is write-heavy (DAX is a read cache), and on-demand capacity mode is more expensive than reserved capacity for a load pattern this predictable.
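The cost reasoning can be made concrete with back-of-the-envelope arithmetic. The capacity numbers and unit prices below are invented for illustration; only the shape of the comparison matters:

```python
# Hypothetical workload: average load 1,000 WCUs, doubling for a 4-hour
# peak once a week. Prices are made-up illustrative figures, not AWS rates.
HOURS_PER_WEEK = 168
PEAK_HOURS = 4
AVG_WCU, PEAK_WCU = 1_000, 2_000
PRICE_PROVISIONED = 0.00065   # $ per WCU-hour (illustrative)
PRICE_RESERVED = 0.00040      # $ per WCU-hour, reserved (illustrative)

# Current setup: provision for the peak all week long.
flat_peak = PEAK_WCU * HOURS_PER_WEEK * PRICE_PROVISIONED

# Answer A: reserve the average, auto scale only the 4-hour burst.
reserved_base = AVG_WCU * HOURS_PER_WEEK * PRICE_RESERVED
burst = (PEAK_WCU - AVG_WCU) * PEAK_HOURS * PRICE_PROVISIONED
autoscaled = reserved_base + burst

print(f"provision-for-peak: ${flat_peak:.2f}/week")
print(f"reserved + auto scaling: ${autoscaled:.2f}/week")
assert autoscaled < flat_peak  # reserving the average is far cheaper
```

The gap comes from paying peak rates for 168 hours versus paying them for only 4; the reserved baseline covers the other 164 hours at the lower committed rate.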

NEW QUESTION 4

A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity.
Which solution will meet these requirements MOST cost-effectively?

  • A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.
  • B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.
  • C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.
  • D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.

Answer: B

Explanation:
Amazon ECS with the Fargate launch type runs containers serverlessly: there are no EC2 container hosts to patch or scale and no Kubernetes control plane to operate, which minimizes operational complexity. Because the minimum and maximum load are known, auto scaled ECS services on Fargate handle the variable load cost-effectively. EKS adds a per-cluster control-plane charge and more operational overhead for the same outcome, and Elastic Beanstalk is not a serverless container architecture.
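Option B's shape can be sketched as the key fields of one ECS service per environment. The names, counts, and ARN are illustrative assumptions, not values from the question:

```python
# Illustrative ECS-on-Fargate service definition for one environment.
# All names and ARNs are placeholders.
def fargate_service(env, min_tasks, max_tasks):
    """Return the key fields of an auto scaled Fargate service.

    launchType FARGATE means AWS runs the container hosts: no EC2
    instances to patch or scale, which is what "serverless containers"
    buys in operational terms.
    """
    return {
        "serviceName": f"webapp-{env}",
        "launchType": "FARGATE",
        "taskDefinition": f"webapp-{env}:1",
        "desiredCount": min_tasks,
        # Service auto scaling bounds match the known min/max load.
        "autoScaling": {"minCapacity": min_tasks, "maxCapacity": max_tasks},
        "loadBalancers": [
            {"targetGroupArn": f"arn:aws:elasticloadbalancing:placeholder:targetgroup/{env}"}
        ],
    }

services = [fargate_service("production", 4, 20), fargate_service("testing", 1, 4)]
for svc in services:
    print(svc["serviceName"], svc["launchType"])
```

Keeping production and testing as two separate services (or clusters) behind separate load balancers is what isolates the environments without duplicating any host management.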

NEW QUESTION 5

A solutions architect has implemented a SAML 2.0 federated identity solution with their company's on-premises identity provider (IdP) to authenticate users' access to the AWS environment. When the solutions architect tests authentication through the federated identity web portal, access to the AWS environment is granted. However, when test users attempt to authenticate through the federated identity web portal, they are not able to access the AWS environment.
Which items should the solutions architect check to ensure identity federation is properly configured? (Select THREE)

  • A. The IAM user's permissions policy has allowed the use of SAML federation for that user.
  • B. The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.
  • C. Test users are not in the AWSFederatedUsers group in the company's IdP.
  • D. The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
  • E. The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs.
  • F. The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.

Answer: BDF
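Item B, the most commonly missed check, can be sketched as the federated role's trust policy, which must name the SAML provider as the principal. The account ID and provider name below are placeholders:

```python
import json

# Placeholder account and provider identifiers for illustration.
SAML_PROVIDER_ARN = "arn:aws:iam::123456789012:saml-provider/CorpIdP"

# Trust policy for a federated IAM role (item B): the SAML provider is
# the principal, and the assertion's audience must be AWS sign-in.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": SAML_PROVIDER_ARN},
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

# Item D: the portal then calls AssumeRoleWithSAML with the provider ARN,
# the role ARN, and the base64 SAML assertion (argument names only).
call_shape = ("RoleArn", "PrincipalArn", "SAMLAssertion")
print(json.dumps(trust_policy, indent=2))
print(call_shape)
```

If the trust policy names the wrong principal, or the IdP's assertions (item F) map test users to no role at all, federation works for one user and silently fails for others, which matches the symptom in the question.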

NEW QUESTION 6

A company is running an application in the AWS Cloud. The core business logic is running on a set of Amazon EC2 instances in an Auto Scaling group. An Application Load Balancer (ALB) distributes traffic to the EC2 instances. Amazon Route 53 record api.example.com is pointing to the ALB.
The company's development team makes major updates to the business logic. The company has a rule that when changes are deployed, only 10% of customers can receive the new logic during a testing window. A customer must use the same version of the business logic during the testing window.
How should the company deploy the updates to meet these requirements?

  • A. Create a second ALB, and deploy the new logic to a set of EC2 instances in a new Auto Scaling group. Configure the ALB to distribute traffic to the EC2 instances. Update the Route 53 record to use weighted routing, and point the record to both of the ALBs.
  • B. Create a second target group that is referenced by the ALB. Deploy the new logic to EC2 instances in this new target group. Update the ALB listener rule to use weighted target groups. Configure ALB target group stickiness.
  • C. Create a new launch configuration for the Auto Scaling group. Specify the launch configuration to use the AutoScalingRollingUpdate policy, and set the MaxBatchSize option to 10. Replace the launch configuration on the Auto Scaling group. Deploy the changes.
  • D. Create a second Auto Scaling group that is referenced by the ALB. Deploy the new logic on a set of EC2 instances in this new Auto Scaling group. Change the ALB routing algorithm to least outstanding requests (LOR). Configure ALB session stickiness.

Answer: B

Explanation:
The company should create a second target group that is referenced by the ALB, deploy the new logic to EC2 instances in this new target group, update the ALB listener rule to use weighted target groups, and configure ALB target group stickiness. Weighted target groups let a single listener rule distribute traffic across multiple target groups; the weight assigned to each target group determines the percentage of requests routed to it (for example, two target groups with a weight of 10 each receive half the requests). By deploying the new logic to a second target group, the company can run two versions of its business logic in parallel, and the listener rule weights control how much traffic each version receives. Target group stickiness binds a user's session to a specific target group for a configured duration, ensuring that a customer uses the same version of the business logic throughout the testing window.
The other options are not correct because:
  • Creating a second ALB and deploying the new logic to a new Auto Scaling group would not be as cost-effective or simple as using weighted target groups. A second ALB incurs additional charges and requires more configuration and management, and Route 53 weighted routing cannot guarantee that a customer stays on the same version during the testing window, because DNS caching affects how requests are routed.
  • Creating a new launch configuration and replacing it on the Auto Scaling group would not allow gradual traffic shifting between versions. A launch configuration is a template that an Auto Scaling group uses to launch EC2 instances (AMI ID, instance type, key pair, security groups, block device mapping); replacing it affects all instances in the group, not just 10% of customers.
  • Creating a second Auto Scaling group and changing the ALB routing algorithm to least outstanding requests (LOR) would not allow controlled traffic shifting. LOR routes each request to the target with the fewest outstanding requests and takes no account of customer sessions or weights.
References:
  • https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html
  • https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
  • https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#rou
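The behavior described above for option B — a weighted split plus stickiness — can be simulated in a few lines of plain Python. The 90/10 split and client IDs are illustrative:

```python
import random

# Two target groups with relative weights: 90% old logic, 10% new logic.
WEIGHTS = {"tg-old": 90, "tg-new": 10}
sticky_table = {}  # client_id -> target group, emulating stickiness

def route(client_id, rng):
    """Pick a target group; sticky clients keep their first assignment."""
    if client_id not in sticky_table:
        groups, weights = zip(*WEIGHTS.items())
        sticky_table[client_id] = rng.choices(groups, weights=weights)[0]
    return sticky_table[client_id]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
assignments = {c: route(c, rng) for c in range(10_000)}

# A customer always sees the same version during the testing window...
assert all(route(c, rng) == assignments[c] for c in range(10_000))
# ...and roughly 10% of customers land on the new logic.
share_new = sum(v == "tg-new" for v in assignments.values()) / len(assignments)
print(f"share on new logic: {share_new:.1%}")
```

This is the essential property of answer B: weights shape the population split, while stickiness pins each individual customer to one version.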

NEW QUESTION 7

A company is running a data-intensive application on AWS. The application runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The application reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
A solutions architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.
Which solution will provide the LARGEST overall cost reduction while meeting these requirements?

  • A. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • B. Migrate the data from the existing shared file system to a large Amazon Elastic Block Store (Amazon EBS) volume with Multi-Attach enabled. Attach the EBS volume to each of the instances by using a user data script in the Auto Scaling group launch template. Use the EBS volume as the shared storage for the duration of the job. Detach the EBS volume when the job is complete.
  • C. Migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Standard storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using batch loading. Use the new file system as the shared storage for the duration of the job. Delete the file system when the job is complete.
  • D. Migrate the data from the existing shared file system to an Amazon S3 bucket. Before the job runs each month, use AWS Storage Gateway to create a file gateway with the data from Amazon S3. Use the file gateway as the shared storage for the job. Delete the file gateway when the job is complete.

Answer: A

Explanation:
S3 Intelligent-Tiering automatically moves the rarely accessed data to lower-cost tiers between runs, and creating an FSx for Lustre file system from the S3 bucket with lazy loading imports file metadata up front while fetching file contents only when the job first reads them — a good fit for a job that reads only a subset of the 200 TB. Because the file system exists only for the 72-hour run and is deleted afterward, the always-on storage instances are eliminated, giving the largest overall cost reduction. See
https://aws.amazon.com/blogs/storage/new-enhancements-for-moving-data-between-amazon-fsx-for-lustre-and
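The monthly workflow in answer A can be sketched as the parameters of the temporary file system linked to the S3 bucket. The bucket name and sizing are invented, and the field names approximate the FSx CreateFileSystem request shape; treat this as a sketch to check against the current API reference, not a verified call:

```python
# Sketch of the monthly job lifecycle for answer A. Placeholder values.
def monthly_filesystem_request(bucket="s3://research-shared-data"):
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": 240_000,  # GiB; sized to hold the 200 TB data set
        "LustreConfiguration": {
            # Lazy loading: file metadata is imported up front, but file
            # contents are fetched from S3 only when the job first reads them.
            "ImportPath": bucket,
            "DeploymentType": "SCRATCH_2",  # short-lived scratch file system
        },
    }

req = monthly_filesystem_request()
print(req["LustreConfiguration"]["ImportPath"])
# After the 72-hour run: export results back to S3, then delete the
# file system so no file-system charges accrue between monthly jobs.
```

The lifecycle — create before the run, delete after — is the cost lever: roughly 72 hours of high-performance storage per month instead of 720+.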

NEW QUESTION 8

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.
Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.
Which solution meets these requirements MOST cost-effectively?

  • A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.
  • B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
  • C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
  • D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.

Answer: D

Explanation:
Concurrency Scaling automatically adds transient cluster capacity when bursts of read queries arrive and releases it when the burst ends, so the main cluster keeps serving read and write queries at all times with no resize downtime. Each cluster also accrues up to one hour of free Concurrency Scaling credit for every 24 hours it runs, which typically covers occasional bursts at no extra cost. An elastic or classic resize changes the size of the main cluster, briefly interrupts connections, and bills for the added nodes for as long as they are attached, while Amazon EMR is a separate big data platform that cannot simply absorb Redshift SQL workloads.
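Concurrency Scaling (option D) is enabled per workload management (WLM) queue by setting the queue's concurrency scaling mode to auto. A sketch of the parameter-group WLM JSON, built as Python data; the queue and user group names are illustrative:

```python
import json

# Illustrative WLM configuration for a Redshift parameter group: the
# audit team's read queries land in a queue with Concurrency Scaling
# enabled, so bursts spin up transient capacity instead of resizing
# the main cluster.
wlm_config = [
    {
        "name": "audit-reads",
        "user_group": ["audit-team"],
        "concurrency_scaling": "auto",   # burst capacity on demand
        "query_concurrency": 5,
    },
    {
        "name": "default",
        "concurrency_scaling": "off",    # writes stay on the main cluster
        "query_concurrency": 5,
    },
]

print(json.dumps(wlm_config))
```

Routing only the audit team's queries to the scaling-enabled queue keeps the burst capacity (and any charges beyond the free credits) scoped to the workload that actually needs it.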

NEW QUESTION 9

A company has millions of objects in an Amazon S3 bucket. The objects are in the S3 Standard storage class. All the S3 objects are accessed frequently. The number of users and applications that access the objects is increasing rapidly. The objects are encrypted with server-side encryption with AWS KMS Keys (SSE-KMS).
A solutions architect reviews the company's monthly AWS invoice and notices that AWS KMS costs are increasing because of the high number of requests from Amazon S3. The solutions architect needs to optimize costs with minimal changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create a new S3 bucket that has server-side encryption with customer-provided keys (SSE-C) as the encryption type. Copy the existing objects to the new S3 bucket. Specify SSE-C.
  • B. Create a new S3 bucket that has server-side encryption with Amazon S3 managed keys (SSE-S3) as the encryption type. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Specify SSE-S3.
  • C. Use AWS CloudHSM to store the encryption keys. Create a new S3 bucket. Use S3 Batch Operations to copy the existing objects to the new S3 bucket. Encrypt the objects by using the keys from CloudHSM.
  • D. Use the S3 Intelligent-Tiering storage class for the S3 bucket. Create an S3 Intelligent-Tiering archive configuration to transition objects that are not accessed for 90 days to S3 Glacier Deep Archive.

Answer: B

Explanation:
With SSE-S3, Amazon S3 manages the encryption keys itself, so object reads and writes no longer generate AWS KMS API calls and the KMS request charges disappear entirely. S3 Batch Operations performs the one-time copy as a managed job, so the migration requires minimal operational effort and no application changes beyond pointing at the new bucket. SSE-C would force every client to supply keys on each request, CloudHSM adds significant management overhead, and Intelligent-Tiering addresses storage-class costs rather than KMS request costs. (If SSE-KMS had to be retained, S3 Bucket Keys could instead reduce KMS request costs by up to 99%: https://docs.aws.amazon.com/fr_fr/kms/latest/developerguide/services-s3.html)
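The per-object change in option B comes down to a single header on the copy: requesting SSE-S3 (AES256) instead of SSE-KMS. A sketch of the copy request an S3 Batch Operations job would issue for each object, as a plain data structure; the bucket and key names are placeholders:

```python
# Placeholder bucket/key names; the dict mirrors the arguments of an
# S3 CopyObject call as plain data.
def sse_s3_copy_request(src_bucket, dst_bucket, key):
    return {
        "CopySource": {"Bucket": src_bucket, "Key": key},
        "Bucket": dst_bucket,
        "Key": key,
        # AES256 = SSE-S3: S3 manages the keys, so reads and writes of
        # the object no longer call AWS KMS or incur KMS request charges.
        "ServerSideEncryption": "AES256",
    }

req = sse_s3_copy_request("legacy-kms-bucket", "new-sse-s3-bucket", "data/object-0001")
print(req["ServerSideEncryption"])
```

S3 Batch Operations takes a manifest of the millions of objects and applies this copy to each one, so no custom migration code has to be written or operated.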

NEW QUESTION 10

A large mobile gaming company has successfully migrated all of its on-premises infrastructure to the AWS Cloud. A solutions architect is reviewing the environment to ensure that it was built according to the design and that it is running in alignment with the Well-Architected Framework.
While reviewing previous monthly costs in Cost Explorer, the solutions architect notices that the creation and subsequent termination of several large instance types account for a high proportion of the costs. The solutions architect finds out that the company's developers are launching new Amazon EC2 instances as part of their testing and that the developers are not using the appropriate instance types.
The solutions architect must implement a control mechanism to limit the instance types that only the developers can launch.
Which solution will meet these requirements?

  • A. Create a desired-instance-type managed rule in AWS Config. Configure the rule with the instance types that are allowed. Attach the rule to an event to run each time a new EC2 instance is launched.
  • B. In the EC2 console, create a launch template that specifies the instance types that are allowed. Assign the launch template to the developers' IAM accounts.
  • C. Create a new IAM policy. Specify the instance types that are allowed. Attach the policy to an IAM group that contains the IAM accounts for the developers.
  • D. Use EC2 Image Builder to create an image pipeline for the developers and assist them in the creation of a golden image.

Answer: C

Explanation:
An IAM policy can restrict users to specific instance types by using the ec2:InstanceType condition key on the ec2:RunInstances action; attaching the policy to the developers' IAM group enforces the control before launch. AWS Config can only detect non-compliant instances after they are launched, and launch templates and golden images do not prevent a developer from choosing a different instance type. See https://blog.vizuri.com/limiting-allowed-aws-instance-type-with-iam-policy
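A sketch of the IAM policy behind answer C, built as a plain dict; the allow-list of instance types is illustrative:

```python
import json

ALLOWED_TYPES = ["t3.micro", "t3.small"]  # illustrative allow-list

# Deny RunInstances for any instance type outside the allow-list. An
# explicit Deny with StringNotEquals on ec2:InstanceType is the usual
# pattern because it overrides any other Allow the developers may hold.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {"ec2:InstanceType": ALLOWED_TYPES}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to the developers' IAM group, this blocks a launch of any large instance type at the API level, which is exactly the preventive control the question asks for.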

NEW QUESTION 11

A company has developed a web application. The company is hosting the application on a group of Amazon EC2 instances behind an Application Load Balancer. The company wants to improve the security posture of the application and plans to use AWS WAF web ACLs. The solution must not adversely affect legitimate traffic to the application.
How should a solutions architect configure the web ACLs to meet these requirements?

  • A. Set the action of the web ACL rules to Count. Enable AWS WAF logging. Analyze the requests for false positives. Modify the rules to avoid any false positives. Over time, change the action of the web ACL rules from Count to Block.
  • B. Use only rate-based rules in the web ACLs, and set the throttle limit as high as possible. Temporarily block all requests that exceed the limit. Define nested rules to narrow the scope of the rate tracking.
  • C. Set the action of the web ACL rules to Block. Use only AWS managed rule groups in the web ACLs. Evaluate the rule groups by using Amazon CloudWatch metrics with AWS WAF sampled requests or AWS WAF logs.
  • D. Use only custom rule groups in the web ACLs, and set the action to Allow. Enable AWS WAF logging. Analyze the requests for false positives. Modify the rules to avoid any false positives. Over time, change the action of the web ACL rules from Allow to Block.

Answer: A

Explanation:
Running new rules in Count mode first records which requests would have been blocked without affecting legitimate traffic; after the logs have been analyzed and false positives tuned away, the rules can safely be switched to Block. See https://aws.amazon.com/premiumsupport/knowledge-center/waf-analyze-count-action-rules/
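In AWS WAF (WAFv2), a managed rule group is put into Count mode with an override action on the web ACL rule. A sketch of the relevant rule fields as plain data; the rule and metric names are placeholders, while AWSManagedRulesCommonRuleSet is a real managed rule group:

```python
# Sketch of a WAFv2 web ACL rule running a managed rule group in Count
# mode. Field names follow the WAFv2 data model.
rule = {
    "Name": "common-rules-count-first",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }
    },
    # Count instead of Block while tuning: matches are logged and
    # counted, but legitimate traffic is never rejected.
    "OverrideAction": {"Count": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "common-rules-count-first",
    },
}

# After analyzing the logs for false positives, flip to enforcement:
rule["OverrideAction"] = {"None": {}}  # the rule group's Block actions now apply
print(rule["OverrideAction"])
```

The two-phase flip — Count while observing, None (enforce) once tuned — is exactly the rollout described in answer A.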

NEW QUESTION 12

A financial services company in North America plans to release a new online web application to its customers on AWS. The company will launch the application in the us-east-1 Region on Amazon EC2 instances. The application must be highly available and must dynamically scale to meet user traffic. The company also wants to implement a disaster recovery environment for the application in the us-west-1 Region by using active-passive failover.
Which solution will meet these requirements?

  • A. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB.
  • B. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks to ensure high availability between Regions.
  • C. Create a VPC in us-east-1 and a VPC in us-west-1. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in that VPC. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in the us-east-1 VPC. Place the Auto Scaling group behind the ALB. Set up the same configuration in the us-west-1 VPC. Create an Amazon Route 53 hosted zone. Create separate records for each ALB. Enable health checks and configure a failover routing policy for each record.
  • D. Create a VPC in us-east-1 and a VPC in us-west-1. Configure VPC peering. In the us-east-1 VPC, create an Application Load Balancer (ALB) that extends across multiple Availability Zones in both VPCs. Create an Auto Scaling group that deploys the EC2 instances across the multiple Availability Zones in both VPCs. Place the Auto Scaling group behind the ALB. Create an Amazon Route 53 hosted zone. Create a record for the ALB.

Answer: C

Explanation:
Only option C configures a Route 53 failover routing policy, which implements the required active-passive failover between Regions. Option B is almost identical but creates plain records with health checks, which does not designate a primary and a passive secondary. The peering-based options are invalid because an ALB cannot extend across VPCs or Regions.
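The active-passive pair in answer C comes down to two Route 53 records. A sketch as plain data; the DNS names, health check ID, and record name are placeholders:

```python
# Illustrative Route 53 failover record pair: us-east-1 is PRIMARY and
# serves all traffic while its health check passes; us-west-1 is
# SECONDARY and receives traffic only on failover. Names/IDs are
# placeholders.
def failover_record(role, alb_dns, health_check_id=None):
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": f"app-{role.lower()}",
        "Failover": role,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {"DNSName": alb_dns, "EvaluateTargetHealth": True},
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

records = [
    failover_record("PRIMARY", "use1-alb.elb.amazonaws.com", "hc-placeholder"),
    failover_record("SECONDARY", "usw1-alb.elb.amazonaws.com"),
]
print([r["Failover"] for r in records])
```

The health check on the primary is what triggers the flip: when it fails, Route 53 starts answering queries with the secondary Region's ALB.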

NEW QUESTION 13

A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB object storage to an Amazon S3 bucket. One hundred scientists are using this object storage to store their work-related documents. Each scientist has a personal folder on the object store. All the scientists are members of a single IAM user group.
The research center's compliance officer is worried that scientists will be able to access each other's work. The research center has a strict obligation to report on which scientist accesses which documents.
The team that is responsible for these reports has little AWS experience and wants a ready-to-use solution that minimizes operational overhead.
Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Create an identity policy that grants the user read and write access. Add a condition that specifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on the scientists' IAM user group.
  • B. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket. Store the trail output in another S3 bucket. Use Amazon Athena to query the logs and generate reports.
  • C. Enable S3 server access logging. Configure another S3 bucket as the target for log delivery. Use Amazon Athena to query the logs and generate reports.
  • D. Create an S3 bucket policy that grants read and write access to users in the scientists' IAM user group.
  • E. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket and write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatch connector to query the logs and generate reports.

Answer: AB

Explanation:
This combination uses an identity policy that grants each user read and write access to their own personal folder in the S3 bucket. Adding a condition that requires the S3 paths to be prefixed with ${aws:username} ensures that each scientist can access only their own folder, and applying the policy to the scientists' IAM user group simplifies permission management for all one hundred scientists. Configuring a trail with AWS CloudTrail to capture all object-level events in the S3 bucket records every action performed on the S3 objects, and storing the trail output in another S3 bucket secures and archives the log files. Amazon Athena, a serverless interactive query service that analyzes data in S3 using standard SQL, then lets the reporting team query the logs and generate the access reports with minimal operational overhead.
References:
  • Identity-based policies
  • Policy variables
  • IAM groups
  • Object-level logging
  • Creating a trail that applies to all regions
  • What is Amazon Athena?
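Action A's per-scientist isolation relies on the ${aws:username} IAM policy variable, which resolves to the caller's user name at evaluation time. A sketch of the identity policy as a dict; the bucket name is a placeholder:

```python
import json

BUCKET = "research-docs"  # placeholder bucket name

# One policy attached to the whole IAM group: the ${aws:username}
# variable resolves per caller, so each scientist can reach only the
# prefix that matches their own user name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OwnFolderObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            # Doubled braces in the f-string emit a literal ${aws:username}.
            "Resource": f"arn:aws:s3:::{BUCKET}/${{aws:username}}/*",
        },
        {
            "Sid": "ListOwnFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": "${aws:username}/*"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Because the variable is substituted per request, a single group-level policy scales to all one hundred scientists without per-user policy maintenance.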

NEW QUESTION 14

A company has a solution that analyzes weather data from thousands of weather stations. The weather stations send the data over an Amazon API Gateway REST API that has an AWS Lambda function integration. The Lambda function calls a third-party service for data pre-processing. The third-party service gets overloaded and fails the pre-processing, causing a loss of data.
A solutions architect must improve the resiliency of the solution. The solutions architect must ensure that no data is lost and that data can be processed later if failures occur.
What should the solutions architect do to meet these requirements?

  • A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the queue as the dead-letter queue for the API.
  • B. Create two Amazon Simple Queue Service (Amazon SQS) queues: a primary queue and a secondary queue. Configure the secondary queue as the dead-letter queue for the primary queue. Update the API to use a new integration to the primary queue. Configure the Lambda function as the invocation target for the primary queue.
  • C. Create two Amazon EventBridge event buses: a primary event bus and a secondary event bus. Update the API to use a new integration to the primary event bus. Configure an EventBridge rule to react to all events on the primary event bus. Specify the Lambda function as the target of the rule. Configure the secondary event bus as the failure destination for the Lambda function.
  • D. Create a custom Amazon EventBridge event bus. Configure the event bus as the failure destination for the Lambda function.

Answer: C

Explanation:
This option decouples the API from the Lambda function and uses EventBridge as an event-driven service that can handle failures gracefully. By using two event buses, one for normal events and one for failed events, the solution ensures that no data is lost and that data can be processed later if failures occur. The primary event bus receives the data from the weather stations through the API integration and triggers the Lambda function through a rule. The Lambda function can then call the third-party service for data pre-processing. If the third-party service fails, the Lambda function returns an error to EventBridge, which routes the failed event to the secondary event bus through the function's failure destination. The secondary event bus can then deliver the failed events to another service, such as Amazon S3 or Amazon SQS, for troubleshooting or reprocessing.
References:
Using Amazon EventBridge with AWS Lambda
Using multiple event buses
Using failure destinations
Using dead-letter queues
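The failure-destination pattern from option C can be sketched in miniature with plain Python (no AWS dependencies; the two lists stand in for the EventBridge event buses, and all names are illustrative):

```python
# Minimal sketch of the failure-destination pattern from option C.
# Two in-memory "event buses" stand in for EventBridge.

primary_bus = []    # receives raw weather-station events from the API
secondary_bus = []  # receives events whose processing failed

def third_party_preprocess(event):
    """Stand-in for the third-party service; fails when overloaded."""
    if event.get("overload"):
        raise RuntimeError("third-party service overloaded")
    return {**event, "preprocessed": True}

def lambda_handler(event):
    """Stand-in for the Lambda function triggered by the EventBridge rule."""
    try:
        return third_party_preprocess(event)
    except RuntimeError:
        # EventBridge would route the failed invocation record to the
        # configured failure destination; here we append to the secondary
        # bus directly so the event is retained for reprocessing.
        secondary_bus.append(event)
        return None

# The API integration publishes to the primary bus; the rule invokes the handler.
for evt in [{"station": "A1"}, {"station": "B2", "overload": True}]:
    primary_bus.append(evt)
    lambda_handler(evt)

print(len(secondary_bus))  # the failed event is retained, not lost
```

The key property the pattern provides is that a third-party outage moves events to the secondary bus instead of dropping them.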

NEW QUESTION 15

A company uses an AWS CodeCommit repository. The company must store a backup copy of the data that is in the repository in a second AWS Region.
Which solution will meet these requirements?

  • A. Configure AWS Elastic Disaster Recovery to replicate the CodeCommit repository data to the second Region.
  • B. Use AWS Backup to back up the CodeCommit repository on an hourly schedule. Create a cross-Region copy in the second Region.
  • C. Create an Amazon EventBridge rule to invoke AWS CodeBuild when the company pushes code to the repository. Use CodeBuild to clone the repository. Create a zip file of the content. Copy the file to an S3 bucket in the second Region.
  • D. Create an AWS Step Functions workflow on an hourly schedule to take a snapshot of the CodeCommit repository. Configure the workflow to copy the snapshot to an S3 bucket in the second Region.

Answer: B

Explanation:
AWS Backup is a fully managed service that makes it easy to centralize and automate the creation, retention, and restoration of backups across AWS services. It provides a way to schedule automatic backups for CodeCommit repositories on an hourly basis. Additionally, it also supports cross-Region replication, which allows you to copy the backups to a second Region for disaster recovery.
By using AWS Backup, the company can set up an automatic and regular backup schedule for the CodeCommit repository, ensuring that the data is regularly backed up and stored in a second Region. This can provide a way to recover quickly from any disaster event that might occur.
Reference:
AWS Backup documentation: https://aws.amazon.com/backup/
AWS Backup for AWS CodeCommit announcement:
https://aws.amazon.com/about-aws/whats-new/2020/07/aws-backup-now-supports-aws-codecommit-repositorie
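A backup plan along the lines of option B might look like the following (an illustrative AWS Backup plan document; the vault names, schedule, and retention values are placeholders, not part of the question):

```json
{
  "BackupPlanName": "codecommit-hourly",
  "Rules": [
    {
      "RuleName": "hourly-with-cross-region-copy",
      "TargetBackupVaultName": "primary-vault",
      "ScheduleExpression": "cron(0 * * * ? *)",
      "Lifecycle": { "DeleteAfterDays": 35 },
      "CopyActions": [
        {
          "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault"
        }
      ]
    }
  ]
}
```

The `CopyActions` entry is what produces the cross-Region copy: each hourly recovery point is replicated to the vault in the second Region.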

NEW QUESTION 16

A company is using Amazon OpenSearch Service to analyze data. The company loads data into an OpenSearch Service cluster with 10 data nodes from an Amazon S3 bucket that uses S3 Standard storage. The data resides in the cluster for 1 month for read-only analysis. After 1 month, the company deletes the index that contains the data from the cluster. For compliance purposes, the company must retain a copy of all input data.
The company is concerned about ongoing costs and asks a solutions architect to recommend a new solution. Which solution will meet these requirements MOST cost-effectively?

  • A. Replace all the data nodes with UltraWarm nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.
  • B. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Transition the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy.
  • C. Reduce the number of data nodes in the cluster to 2. Add UltraWarm nodes to handle the expected capacity. Configure the indexes to transition to UltraWarm when OpenSearch Service ingests the data. Add cold storage nodes to the cluster. Transition the indexes from UltraWarm to cold storage. Delete the input data from the S3 bucket after 1 month by using an S3 Lifecycle policy.
  • D. Reduce the number of data nodes in the cluster to 2. Add instance-backed data nodes to handle the expected capacity. Transition the input data from S3 Standard to S3 Glacier Deep Archive when the company loads the data into the cluster.

Answer: B

Explanation:
By reducing the number of data nodes in the cluster to 2 and adding UltraWarm nodes to handle the expected capacity, the company can reduce the cost of running the cluster. Additionally, configuring the indexes to transition to UltraWarm when OpenSearch Service ingests the data will ensure that the data is stored in the most cost-effective manner. Finally, transitioning the input data to S3 Glacier Deep Archive after 1 month by using an S3 Lifecycle policy will ensure that the data is retained for compliance purposes, while also reducing the ongoing costs.
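The 1-month transition in option B could be expressed as an S3 Lifecycle configuration similar to this (the rule ID and prefix are illustrative placeholders):

```json
{
  "Rules": [
    {
      "ID": "archive-input-data-after-1-month",
      "Status": "Enabled",
      "Filter": { "Prefix": "weather-input/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```

After 30 days the input objects move to S3 Glacier Deep Archive, which satisfies the compliance retention requirement at a much lower storage price than S3 Standard.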

NEW QUESTION 17

A company built an application based on AWS Lambda deployed in an AWS CloudFormation stack. The last production release of the web application introduced an issue that resulted in an outage lasting several minutes. A solutions architect must adjust the deployment process to support a canary release.
Which solution will meet these requirements?

  • A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
  • B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
  • C. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
  • D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.

Answer: A

Explanation:
https://aws.amazon.com/blogs/compute/implementing-canary-deployments-of-aws-lambda-functions-with-alias-
https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html
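The canary shift in option A can be sketched with the AWS CLI (the function name, alias name, and version numbers are illustrative placeholders): update-alias with --routing-config sends a weighted fraction of invocations to the new version while the rest stay on the stable one.

```shell
# Publish the new code as an immutable version (returns, say, version 2).
aws lambda publish-version --function-name my-function

# Point the "live" alias at version 1, routing 10% of invocations to version 2.
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 1 \
  --routing-config AdditionalVersionWeights={"2"=0.10}

# After validating the canary, promote version 2 and clear the extra weight.
aws lambda update-alias \
  --function-name my-function \
  --name live \
  --function-version 2 \
  --routing-config AdditionalVersionWeights={}
```

If the canary misbehaves, rolling back is a single update-alias call that removes the additional version weight, which is why this approach avoids the multi-minute outage described in the question.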

NEW QUESTION 18

A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the application's database. The DB cluster contains one small primary instance and three larger replica instances. The application runs on an AWS Lambda function. The application makes many short-lived connections to the database's replica instances to perform read-only operations.
During periods of high traffic, the application becomes unreliable and the database reports that too many connections are being established. The frequency of high-traffic periods is unpredictable.
Which solution will improve the reliability of the application?

  • A. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
  • B. Increase the max_connections setting on the DB cluster's parameter group. Reboot all the instances in the DB cluster. Update the Lambda function to connect to the DB cluster endpoint.
  • C. Configure instance scaling for the DB cluster to occur when the DatabaseConnections metric is close to the max_connections setting. Update the Lambda function to connect to the Aurora reader endpoint.
  • D. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to the proxy endpoint.

Answer: A
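RDS Proxy improves reliability here because it pools and multiplexes the many short-lived Lambda connections, so the replicas see a small number of long-lived sessions instead of one session per invocation. The effect can be sketched in plain Python (no AWS dependencies; the pool below is a stand-in for the proxy and all names are illustrative):

```python
# Sketch of why a connection pool (the role RDS Proxy plays) helps:
# many short-lived clients share a few long-lived database connections.

class FakeDatabase:
    """Stand-in for an Aurora replica; counts the sessions it has opened."""
    def __init__(self):
        self.opened = 0
    def connect(self):
        self.opened += 1
        return f"conn-{self.opened}"

class Pool:
    """Stand-in for RDS Proxy: hands out pooled connections, not new ones."""
    def __init__(self, db, size):
        self.free = [db.connect() for _ in range(size)]
    def acquire(self):
        return self.free.pop()
    def release(self, conn):
        self.free.append(conn)

db_direct = FakeDatabase()
for _ in range(1000):            # 1,000 Lambda invocations, direct connections:
    db_direct.connect()          # each opens (and abandons) a new session

db_pooled = FakeDatabase()
pool = Pool(db_pooled, size=5)   # the proxy maintains a small warm pool
for _ in range(1000):            # the same 1,000 invocations through the pool
    conn = pool.acquire()
    pool.release(conn)

print(db_direct.opened, db_pooled.opened)  # 1000 vs 5
```

The direct pattern exhausts max_connections under unpredictable traffic spikes; the pooled pattern keeps the session count constant regardless of invocation volume.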

NEW QUESTION 19

A company is building a call center by using Amazon Connect. The company’s operations team is defining a disaster recovery (DR) strategy across AWS Regions. The contact center has dozens of contact flows, hundreds of users, and dozens of claimed phone numbers.
Which solution will provide DR with the LOWEST RTO?

  • A. Create an AWS Lambda function to check the availability of the Amazon Connect instance and to send a notification to the operations team in case of unavailability. Create an Amazon EventBridge rule to invoke the Lambda function every 5 minutes. After notification, instruct the operations team to use the AWS Management Console to provision a new Amazon Connect instance in a second Region. Deploy the contact flows, users, and claimed phone numbers by using an AWS CloudFormation template.
  • B. Provision a new Amazon Connect instance with all existing users in a second Region. Create an AWS Lambda function to check the availability of the Amazon Connect instance. Create an Amazon EventBridge rule to invoke the Lambda function every 5 minutes. In the event of an issue, configure the Lambda function to deploy an AWS CloudFormation template that provisions contact flows and claimed numbers in the second Region.
  • C. Provision a new Amazon Connect instance with all existing contact flows and claimed phone numbers in a second Region. Create an Amazon Route 53 health check for the URL of the Amazon Connect instance. Create an Amazon CloudWatch alarm for failed health checks. Create an AWS Lambda function to deploy an AWS CloudFormation template that provisions all users. Configure the alarm to invoke the Lambda function.
  • D. Provision a new Amazon Connect instance with all existing users and contact flows in a second Region. Create an Amazon Route 53 health check for the URL of the Amazon Connect instance. Create an Amazon CloudWatch alarm for failed health checks. Create an AWS Lambda function to deploy an AWS CloudFormation template that provisions claimed phone numbers. Configure the alarm to invoke the Lambda function.

Answer: D

Explanation:
Option D provisions a new Amazon Connect instance with all existing users and contact flows in a second Region. It also sets up an Amazon Route 53 health check for the URL of the Amazon Connect instance, an Amazon CloudWatch alarm for failed health checks, and an AWS Lambda function to deploy an AWS CloudFormation template that provisions claimed phone numbers. This option allows for the fastest recovery time because all the necessary components are already provisioned and ready to go in the second Region. In the event of a disaster, the failed health check will trigger the AWS Lambda function to deploy the CloudFormation template to provision the claimed phone numbers, which is the only missing component.
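The detection side of option D could be sketched as a CloudFormation fragment like the following (the Connect domain name, Lambda ARN, and alarm thresholds are illustrative placeholders, not values from the question):

```yaml
# Illustrative fragment: Route 53 health check on the Connect instance URL,
# with a CloudWatch alarm that invokes the failover Lambda function.
Resources:
  ConnectHealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTPS
        FullyQualifiedDomainName: example.my.connect.aws
        ResourcePath: /
        Port: 443

  FailoverAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      Namespace: AWS/Route53
      MetricName: HealthCheckStatus
      Dimensions:
        - Name: HealthCheckId
          Value: !Ref ConnectHealthCheck
      Statistic: Minimum
      Period: 60
      EvaluationPeriods: 3
      Threshold: 1
      ComparisonOperator: LessThanThreshold
      AlarmActions:
        - arn:aws:lambda:us-west-2:123456789012:function:deploy-claimed-numbers
```

HealthCheckStatus reports 1 while the endpoint is healthy, so the alarm fires after three consecutive failed minutes and triggers the Lambda function that deploys the claimed-phone-number template.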

NEW QUESTION 20

A company is creating a centralized logging service running on Amazon EC2 that will receive and analyze logs from hundreds of AWS accounts. AWS PrivateLink is being used to provide connectivity between the client services and the logging service.
In each AWS account with a client, an interface endpoint has been created for the logging service and is available. The logging service runs on EC2 instances behind a Network Load Balancer (NLB); the instances and the NLB are deployed in different subnets. The clients are unable to submit logs by using the VPC endpoint.
Which combination of steps should a solutions architect take to resolve this issue? (Select TWO.)

  • A. Check that the NACL is attached to the logging service subnets to allow communications to and from the NLB subnets.
  • B. Check that the NACL is attached to the NLB subnets to allow communications to and from the logging service subnets running on EC2 instances.
  • C. Check that the NACL is attached to the logging service subnets to allow communications to and from the interface endpoint subnets.
  • D. Check that the NACL is attached to the interface endpoint subnets to allow communications to and from the logging service subnets running on EC2 instances.
  • E. Check the security group for the logging service running on the EC2 instances to ensure it allows ingress from the NLB subnets.
  • F. Check the security group for the logging service running on EC2 instances to ensure it allows ingress from the clients.
  • G. Check the security group for the NLB to ensure it allows ingress from the interface endpoint subnets.

Answer: AC
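A starting point for the checks named in the answer is to inspect the NACL and security group configuration with the AWS CLI (the subnet and security group IDs are placeholders):

```shell
# Show the NACL associated with a logging service subnet, including its
# inbound and outbound rule entries.
aws ec2 describe-network-acls \
  --filters "Name=association.subnet-id,Values=subnet-0123456789abcdef0"

# Show the security group attached to the logging service instances,
# including its ingress rules.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0
```

Remember that NACLs are stateless, so return traffic must be allowed explicitly in both directions, whereas security groups are stateful and only the ingress rule needs to admit the incoming connection.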

NEW QUESTION 21
......

P.S. Surepassexam now are offering 100% pass ensure AWS-Certified-Solutions-Architect-Professional dumps! All AWS-Certified-Solutions-Architect-Professional exam questions have been updated with correct answers: https://www.surepassexam.com/AWS-Certified-Solutions-Architect-Professional-exam-dumps.html (483 New Questions)