
AWS-Certified-Solutions-Architect-Professional Exam Questions - Online Test



Free AWS-Certified-Solutions-Architect-Professional Demo Online For Amazon Certification:

NEW QUESTION 1

A company is running an application on Amazon EC2 instances in the AWS Cloud. The application is using a MongoDB database with a replica set as its data tier. The MongoDB database is installed on systems in the company's on-premises data center and is accessible through an AWS Direct Connect connection to the data center environment.
A solutions architect must migrate the on-premises MongoDB database to Amazon DocumentDB (with MongoDB compatibility).
Which strategy should the solutions architect choose to perform this migration?

  • A. Create a fleet of EC2 instances. Install MongoDB Community Edition on the EC2 instances, and create a database. Configure continuous synchronous replication with the database that is running in the on-premises data center.
  • B. Create an AWS Database Migration Service (AWS DMS) replication instance. Create a source endpoint for the on-premises MongoDB database by using change data capture (CDC). Create a target endpoint for the Amazon DocumentDB database. Create and run a DMS migration task.
  • C. Create a data migration pipeline by using AWS Data Pipeline. Define data nodes for the on-premises MongoDB database and the Amazon DocumentDB database. Create a scheduled task to run the data pipeline.
  • D. Create a source endpoint for the on-premises MongoDB database by using AWS Glue crawlers. Configure continuous asynchronous replication between the MongoDB database and the Amazon DocumentDB database.

Answer: B

Explanation:
https://aws.amazon.com/getting-started/hands-on/move-to-managed/migrate-mongodb-to-documentdb/

NEW QUESTION 2

A company gives users the ability to upload images from a custom application. The upload process invokes an AWS Lambda function that processes and stores the image in an Amazon S3 bucket. The application invokes the Lambda function by using a specific function version ARN.
The Lambda function accepts image processing parameters by using environment variables. The company often adjusts the environment variables of the Lambda function to achieve optimal image processing output. The company tests different parameters and publishes a new function version with the updated environment variables after validating results. This update process also requires frequent changes to the custom application to invoke the new function version ARN. These changes cause interruptions for users.
A solutions architect needs to simplify this process to minimize disruption to users. Which solution will meet these requirements with the LEAST operational overhead?

  • A. Directly modify the environment variables of the published Lambda function version. Use the $LATEST version to test image processing parameters.
  • B. Create an Amazon DynamoDB table to store the image processing parameters. Modify the Lambda function to retrieve the image processing parameters from the DynamoDB table.
  • C. Directly code the image processing parameters within the Lambda function and remove the environment variables. Publish a new function version when the company updates the parameters.
  • D. Create a Lambda function alias. Modify the client application to use the function alias ARN. Reconfigure the Lambda alias to point to new versions of the function when the company finishes testing.

Answer: D

Explanation:
A Lambda function alias allows you to point to a specific version of a function and also can be updated to point to a new version of the function without modifying the client application. This way, the company can test different versions of the function with different environment variables and, once the optimal parameters are found, update the alias to point to the new version, without the need to update the client application.
By using this approach, the company can simplify the process of updating the environment variables, minimize disruption to users, and reduce the operational overhead.
Reference:
AWS Lambda documentation: https://aws.amazon.com/lambda/
AWS Lambda Aliases documentation: https://docs.aws.amazon.com/lambda/latest/dg/aliases-intro.html
AWS Lambda versioning and aliases documentation: https://aws.amazon.com/blogs/compute/versioning-aliases-in-aws-lambda/
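The alias mechanism is easy to picture as a level of indirection. The sketch below is a local Python simulation of that indirection, not the AWS API; the version numbers and the SHARPEN variable are made-up stand-ins for the company's image processing parameters:

```python
# Simulate Lambda alias routing: the client always invokes the alias ARN,
# while the alias is re-pointed to new function versions behind the scenes.
aliases = {}   # alias name -> published version
versions = {}  # version -> environment variables frozen at publish time

def publish_version(version, env):
    """Freeze a set of environment variables into an immutable version."""
    versions[version] = dict(env)

def update_alias(name, version):
    """Re-point the alias; clients using the alias need no change."""
    aliases[name] = version

def invoke_via_alias(name):
    """Client-side call path: alias -> version -> that version's env vars."""
    return versions[aliases[name]]

publish_version("1", {"SHARPEN": "0.5"})
update_alias("live", "1")

# Later: new parameters are validated, a new version is published,
# and only the alias is updated -- the client keeps calling "live".
publish_version("2", {"SHARPEN": "0.8"})
update_alias("live", "2")

print(invoke_via_alias("live"))  # {'SHARPEN': '0.8'}
```

In practice the same effect comes from `aws lambda update-alias --function-name <fn> --name live --function-version <n>`, run once per validated release, while the client keeps invoking the alias ARN.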

NEW QUESTION 3

A company runs applications in hundreds of production AWS accounts. The company uses AWS Organizations with all features enabled and has a centralized backup operation that uses AWS Backup.
The company is concerned about ransomware attacks. To address this concern, the company has created a new policy that all backups must be resilient to breaches of privileged-user credentials in any production account.
Which combination of steps will meet this new requirement? (Select THREE.)

  • A. Implement cross-account backup with AWS Backup vaults in designated non-production accounts.
  • B. Add an SCP that restricts the modification of AWS Backup vaults.
  • C. Implement AWS Backup Vault Lock in compliance mode.
  • D. Configure the backup frequency, lifecycle, and retention period to ensure that at least one backup always exists in the cold tier.
  • E. Configure AWS Backup to write all backups to an Amazon S3 bucket in a designated non-production account. Ensure that the S3 bucket has S3 Object Lock enabled.
  • F. Implement least privilege access for the IAM service role that is assigned to AWS Backup.

Answer: ABC

NEW QUESTION 4

A company's CISO has asked a Solutions Architect to re-engineer the company's current CI/CD practices to make sure patch deployments to its applications can happen as quickly as possible with minimal downtime if vulnerabilities are discovered. The company must also be able to quickly roll back a change in case of errors.
The web application is deployed in a fleet of Amazon EC2 instances behind an Application Load Balancer. The company is currently using GitHub to host the application source code, and has configured an AWS CodeBuild project to build the application. The company also intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing CodeBuild project.
What CI/CD configuration meets all of the requirements?

  • A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for in-place deployment. Monitor the newly deployed code, and, if there are any issues, push another code update.
  • B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for blue/green deployments. Monitor the newly deployed code, and, if there are any issues, trigger a manual rollback using CodeDeploy.
  • C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a pipeline for test and production stacks. Monitor the newly deployed code, and, if there are any issues, push another code update.
  • D. Configure CodePipeline with a deploy stage using AWS OpsWorks and in-place deployments. Monitor the newly deployed code, and, if there are any issues, push another code update.

Answer: B

NEW QUESTION 5

A solutions architect has an operational workload deployed on Amazon EC2 instances in an Auto Scaling group. The VPC architecture spans two Availability Zones (AZs), with a subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises environment, and connectivity cannot be interrupted. The maximum size of the Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect wants to adopt the new AZ without adding additional IPv4 address space and without service downtime. Which solution will meet these requirements?

  • A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
  • B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
  • C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
  • D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.

Answer: A

Explanation:
https://repost.aws/knowledge-center/vpc-ip-address-range
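The arithmetic behind option A can be checked with Python's standard ipaddress module: halving each existing /24 yields /25 subnets that still sit inside the original 10.0.0.0/23, so three AZs can be covered without new address space. The CIDRs below come from the question; everything else is plain subnet math:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/23")
az1 = ipaddress.ip_network("10.0.0.0/24")
az2 = ipaddress.ip_network("10.0.1.0/24")

# Halve each /24: the first half of AZ1's range becomes the new AZ1 subnet,
# the second half the new AZ2 subnet, and half of the old AZ2 range serves AZ3.
az1_halves = list(az1.subnets(prefixlen_diff=1))
az2_halves = list(az2.subnets(prefixlen_diff=1))

for net in az1_halves + az2_halves:
    print(net)  # 10.0.0.0/25, 10.0.0.128/25, 10.0.1.0/25, 10.0.1.128/25

# Each /25 has 128 addresses; AWS reserves 5 per subnet, leaving 123 usable,
# which comfortably exceeds the Auto Scaling group's maximum of 20 instances.
assert all(half.subnet_of(vpc) for half in az1_halves + az2_halves)
```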

NEW QUESTION 6

A company is deploying a distributed in-memory database on a fleet of Amazon EC2 instances. The fleet consists of a primary node and eight worker nodes. The primary node is responsible for monitoring cluster health, accepting user requests, distributing user requests to worker nodes, and sending an aggregate response back to a client. Worker nodes communicate with each other to replicate data partitions.
The company requires the lowest possible networking latency to achieve maximum performance. Which solution will meet these requirements?

  • A. Launch memory optimized EC2 instances in a partition placement group.
  • B. Launch compute optimized EC2 instances in a partition placement group.
  • C. Launch memory optimized EC2 instances in a cluster placement group.
  • D. Launch compute optimized EC2 instances in a spread placement group.

Answer: C

NEW QUESTION 7

A company needs to audit the security posture of a newly acquired AWS account. The company’s data security team requires a notification only when an Amazon S3 bucket becomes publicly exposed. The company has already established an Amazon Simple Notification Service (Amazon SNS) topic that has the data security team's email address subscribed.
Which solution will meet these requirements?

  • A. Create an S3 event notification on all S3 buckets for the isPublic event. Select the SNS topic as the target for the event notifications.
  • B. Create an analyzer in AWS Identity and Access Management Access Analyzer. Create an Amazon EventBridge rule for the event type “Access Analyzer Finding” with a filter for “isPublic: true.” Select the SNS topic as the EventBridge rule target.
  • C. Create an Amazon EventBridge rule for the event type “Bucket-Level API Call via CloudTrail” with a filter for “PutBucketPolicy.” Select the SNS topic as the EventBridge rule target.
  • D. Activate AWS Config and add the cloudtrail-s3-dataevents-enabled rule. Create an Amazon EventBridge rule for the event type “Config Rules Re-evaluation Status” with a filter for “NON_COMPLIANT.” Select the SNS topic as the EventBridge rule target.

Answer: B

Explanation:
IAM Access Analyzer evaluates resource-based policies and generates a finding when a resource, such as an S3 bucket, becomes publicly accessible. https://docs.aws.amazon.com/ja_jp/AmazonS3/latest/userguide/access-control-block-public-access.html
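The EventBridge half of option B reduces to an event pattern that selects only public findings. The sketch below shows such a pattern plus a minimal hand-rolled matcher that mimics EventBridge's subset-matching for this one case; the field names follow the "Access Analyzer Finding" detail type but should be verified against the real event schema:

```python
# An EventBridge-style pattern that fires only on public-access findings.
pattern = {
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
    "detail": {"isPublic": [True]},
}

def matches(pattern, event):
    """Minimal EventBridge-like matcher: every pattern key must exist in the
    event, and the event's value must be one of the listed pattern values."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True

public_finding = {
    "source": "aws.access-analyzer",
    "detail-type": "Access Analyzer Finding",
    "detail": {"isPublic": True, "resourceType": "AWS::S3::Bucket"},
}
private_finding = {
    "source": "aws.access-analyzer",
    "detail-type": "Access Analyzer Finding",
    "detail": {"isPublic": False},
}

print(matches(pattern, public_finding))   # True
print(matches(pattern, private_finding))  # False
```

Only the public finding matches, so the SNS topic (and the security team's inbox) is notified exactly when a bucket becomes exposed.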

NEW QUESTION 8

A company runs an unauthenticated static website (www.example.com) that includes a registration form for users. The website uses Amazon S3 for hosting and uses Amazon CloudFront as the content delivery network with AWS WAF configured. When the registration form is submitted, the website calls an Amazon API Gateway API endpoint that invokes an AWS Lambda function to process the payload and forward the payload to an external API call.
During testing, a solutions architect encounters a cross-origin resource sharing (CORS) error. The solutions architect confirms that the CloudFront distribution origin has the Access-Control-Allow-Origin header set to www.example.com.
What should the solutions architect do to resolve the error?

  • A. Change the CORS configuration on the S3 bucket. Add rules for CORS to the Allowed Origin element for www.example.com.
  • B. Enable the CORS setting in AWS WAF. Create a web ACL rule in which the Access-Control-Allow-Origin header is set to www.example.com.
  • C. Enable the CORS setting on the API Gateway API endpoint. Ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com.
  • D. Enable the CORS setting on the Lambda function. Ensure that the return code of the function has the Access-Control-Allow-Origin header set to www.example.com.

Answer: C

Explanation:
CORS errors occur when a web page hosted on one domain tries to make a request to a server hosted on another domain. In this scenario, the registration form hosted on the static website is trying to make a request to the API Gateway API endpoint hosted on a different domain, which is causing the error. To resolve this error, the Access-Control-Allow-Origin header needs to be set to the domain from which the request is being made. In this case, the header is already set to www.example.com on the CloudFront distribution origin. Therefore, the solutions architect should enable the CORS setting on the API Gateway API endpoint and ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com. This will allow the API endpoint to respond to requests from the static website without a CORS error.
https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cors-errors/
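Concretely, option C means every response the API returns to the browser must carry the CORS headers. The helper below is a hypothetical Lambda proxy-style response builder, not code from the question; note that browsers compare the full origin, so the header value includes the scheme:

```python
import json

# Assumed origin of the static site; browsers match scheme + host exactly.
ALLOWED_ORIGIN = "https://www.example.com"

def cors_response(status, body):
    """Build a Lambda proxy integration response that satisfies the
    browser's CORS check for requests coming from the static site."""
    return {
        "statusCode": status,
        "headers": {
            "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
            "Access-Control-Allow-Methods": "OPTIONS,POST",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps(body),
    }

resp = cors_response(200, {"registered": True})
print(resp["headers"]["Access-Control-Allow-Origin"])  # https://www.example.com
```

With API Gateway's built-in CORS setting enabled, the preflight OPTIONS response is handled for you; the function still has to include the header on its own responses, as above.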

NEW QUESTION 9

A company is running a web application in a VPC. The web application runs on a group of Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is using AWS WAF.
An external customer needs to connect to the web application. The company must provide IP addresses to all external customers.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Replace the ALB with a Network Load Balancer (NLB). Assign an Elastic IP address to the NLB.
  • B. Allocate an Elastic IP address. Assign the Elastic IP address to the ALB. Provide the Elastic IP address to the customer.
  • C. Create an AWS Global Accelerator standard accelerator. Specify the ALB as the accelerator's endpoint. Provide the accelerator's IP addresses to the customer.
  • D. Configure an Amazon CloudFront distribution. Set the ALB as the origin. Ping the distribution's DNS name to determine the distribution's public IP address. Provide the IP address to the customer.

Answer: C

Explanation:
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-accelerators.alb-accelerator.html
Option A is wrong: AWS WAF does not support association with a Network Load Balancer. https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html
Option B is wrong: an ALB does not support an Elastic IP address. https://aws.amazon.com/elasticloadbalancing/features/

NEW QUESTION 10

A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store large, important documents within the application with the following requirements:
* 1. The data must be highly durable and available.
* 2. The data must always be encrypted at rest and in transit.
* 3. The encryption key must be managed by the company and rotated periodically.
Which of the following solutions should the solutions architect recommend?

  • A. Deploy AWS Storage Gateway in file gateway mode. Use Amazon EBS volume encryption with an AWS KMS key to encrypt the Storage Gateway volumes.
  • B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
  • C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
  • D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption with an AWS KMS key to encrypt the data.

Answer: B
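For reference, the bucket policy in option B is typically built from two Deny statements: one that rejects requests made without TLS (via the aws:SecureTransport condition key) and one that rejects uploads that do not specify SSE-KMS. The sketch below expresses such a policy as a Python dict; the bucket name is a placeholder:

```python
import json

bucket = "example-important-documents"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Reject any request made over plain HTTP.
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # Reject uploads that are not encrypted with SSE-KMS.
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}

# The document is attached to the bucket as JSON, e.g. via put-bucket-policy.
policy_json = json.dumps(policy, indent=2)
```

Key rotation then lives entirely in AWS KMS: a customer managed key with automatic rotation enabled satisfies the third requirement without touching the bucket policy.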

NEW QUESTION 11

A company that provisions job boards for a seasonal workforce is seeing an increase in traffic and usage. The backend services run on a pair of Amazon EC2 instances behind an Application Load Balancer with Amazon DynamoDB as the datastore. Application read and write traffic is slow during peak seasons.
Which option provides a scalable application architecture to handle peak seasons with the LEAST development effort?

  • A. Migrate the backend services to AWS Lambda. Increase the read and write capacity of DynamoDB.
  • B. Migrate the backend services to AWS Lambda. Configure DynamoDB to use global tables.
  • C. Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling.
  • D. Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.

Answer: C

Explanation:
Option C is correct because using Auto Scaling groups for the backend services allows the company to scale the number of EC2 instances up or down based on demand and traffic. This way, the backend services can handle more requests during peak seasons without compromising performance or availability. Using DynamoDB auto scaling allows the company to adjust the provisioned read and write capacity of the table or index automatically based on the actual traffic patterns. This way, the table or index can handle sudden increases or decreases in workload without throttling or overprovisioning.
Option A is incorrect because migrating the backend services to AWS Lambda may require significant development effort to rewrite the code and test the functionality. Moreover, increasing the read and write capacity of DynamoDB manually may not be efficient or cost-effective, as it does not account for the variability of the workload. The company may end up paying for unused capacity or experiencing throttling if the workload exceeds the provisioned capacity.
Option B is incorrect because migrating the backend services to AWS Lambda may require significant development effort to rewrite the code and test the functionality. Moreover, configuring DynamoDB to use global tables may not be necessary or beneficial for the company, as global tables are mainly used for replicating data across multiple AWS Regions for fast local access and disaster recovery. Global tables do not automatically scale the provisioned capacity of each replica table; they still require manual or auto scaling settings.
Option D is incorrect because using Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB may introduce additional complexity and latency to the application architecture. Amazon SQS is a message queue service that decouples and coordinates the components of a distributed system. AWS Lambda is a serverless compute service that runs code in response to events. Using these services may require significant development effort to integrate them with the backend services and DynamoDB. Moreover, they may not improve the read performance of DynamoDB, which may also be affected by high traffic.
References:
  • Auto Scaling groups
  • DynamoDB auto scaling
  • AWS Lambda
  • DynamoDB global tables
  • AWS Lambda vs EC2: Comparison of AWS Compute Resources - Simform
  • Managing throughput capacity automatically with DynamoDB auto scaling - Amazon DynamoDB
  • AWS Aurora Global Database vs. DynamoDB Global Tables
  • Amazon Simple Queue Service (SQS)
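DynamoDB auto scaling works by target tracking: it adjusts provisioned capacity so that consumed capacity stays near a target utilization, within a configured minimum and maximum. The function below is a deliberately simplified model of that calculation (the real service adds CloudWatch alarm evaluation, cooldowns, and scale-in limits); the numbers are illustrative only:

```python
import math

def target_tracking(consumed, target_utilization, min_cap, max_cap):
    """Simplified target tracking: provision enough capacity that consumed
    units sit at the target utilization, clamped to the configured bounds."""
    desired = math.ceil(consumed / target_utilization)
    return max(min_cap, min(max_cap, desired))

# Peak season: 400 consumed RCUs at a 50% utilization target -> 800 provisioned.
print(target_tracking(400, 0.50, min_cap=5, max_cap=1000))   # 800
# Off season: 21 consumed RCUs -> scales back down to 42.
print(target_tracking(21, 0.50, min_cap=5, max_cap=1000))    # 42
# Demand beyond the configured maximum is clamped at max_cap.
print(target_tracking(2000, 0.50, min_cap=5, max_cap=1000))  # 1000
```

This is why option C needs the least development effort: the scaling policy, not application code, absorbs the seasonal peaks.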

NEW QUESTION 12

A company has five development teams that have each created five AWS accounts to develop and host applications. To track spending, the development teams log in to each account every month, record the current cost from the AWS Billing and Cost Management console, and provide the information to the company's finance team.
The company has strict compliance requirements and needs to ensure that resources are created only in AWS Regions in the United States. However, some resources have been created in other Regions.
A solutions architect needs to implement a solution that gives the finance team the ability to track and consolidate expenditures for all the accounts. The solution also must ensure that the company can create resources only in Regions in the United States.
Which combination of steps will meet these requirements in the MOST operationally efficient way? (Select THREE.)

  • A. Create a new account to serve as a management account. Create an Amazon S3 bucket for the finance team. Use AWS Cost and Usage Reports to create monthly reports and to store the data in the finance team's S3 bucket.
  • B. Create a new account to serve as a management account. Deploy an organization in AWS Organizations with all features enabled. Invite all the existing accounts to the organization. Ensure that each account accepts the invitation.
  • C. Create an OU that includes all the development teams. Create an SCP that allows the creation of resources only in Regions that are in the United States. Apply the SCP to the OU.
  • D. Create an OU that includes all the development teams. Create an SCP that denies the creation of resources in Regions that are outside the United States. Apply the SCP to the OU.
  • E. Create an IAM role in the management account. Attach a policy that includes permissions to view the Billing and Cost Management console. Allow the finance team users to assume the role. Use AWS Cost Explorer and the Billing and Cost Management console to analyze cost.
  • F. Create an IAM role in each AWS account. Attach a policy that includes permissions to view the Billing and Cost Management console. Allow the finance team users to assume the role.

Answer: BCE

Explanation:
AWS Organizations is a service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. By creating a management account and inviting all the existing accounts to join the organization, the solutions architect can track and consolidate expenditures for all the accounts using AWS Cost Management tools such as AWS Cost Explorer and AWS Budgets.
An organizational unit (OU) is a group of accounts within an organization that can be used to apply policies and simplify management. A service control policy (SCP) is a type of policy that you can use to manage permissions in your organization. By creating an OU that includes all the development teams and applying an SCP that allows the creation of resources only in Regions that are in the United States, the solutions architect can ensure that the company meets its compliance requirements and avoids unwanted charges from other Regions.
An IAM role is an identity with permission policies that determine what the identity can and cannot do in AWS. By creating an IAM role in the management account and allowing the finance team users to assume it, the solutions architect can give them access to view the Billing and Cost Management console without sharing credentials or creating additional users.
References:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
https://docs.aws.amazon.com/aws-cost-management/latest/userguide/what-is-costmanagement.html
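The SCP from option C is commonly written as a Deny on any request whose aws:RequestedRegion falls outside the allowed set. The sketch below shows one plausible shape of that policy as a Python dict; the NotAction exemption list for global services is an assumption and should be tuned to the company's needs:

```python
import json

US_REGIONS = ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideUSRegions",
            "Effect": "Deny",
            # Global services (IAM, Organizations, Support, ...) are signed
            # in us-east-1; exempting them avoids breaking account access.
            "NotAction": ["iam:*", "organizations:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": US_REGIONS}
            },
        }
    ],
}

# The SCP is attached to the OU that contains the development accounts.
scp_json = json.dumps(scp, indent=2)
```

Because SCPs apply a permission boundary across every principal in the OU, including administrators, no per-account IAM changes are needed to enforce the Region restriction.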

NEW QUESTION 13

A company has introduced a new policy that allows employees to work remotely from their homes if they connect by using a VPN. The company is hosting internal applications with VPCs in multiple AWS accounts. Currently, the applications are accessible from the company's on-premises office network through an AWS Site-to-Site VPN connection. The VPC in the company's main AWS account has peering connections established with VPCs in other AWS accounts.
A solutions architect must design a scalable AWS Client VPN solution for employees to use while they work from home.
What is the MOST cost-effective solution that meets these requirements?

  • A. Create a Client VPN endpoint in each AWS account. Configure the required routing that allows access to internal applications.
  • B. Create a Client VPN endpoint in the main AWS account. Configure the required routing that allows access to internal applications.
  • C. Create a Client VPN endpoint in the main AWS account. Provision a transit gateway that is connected to each AWS account. Configure the required routing that allows access to internal applications.
  • D. Create a Client VPN endpoint in the main AWS account. Establish connectivity between the Client VPN endpoint and the AWS Site-to-Site VPN.

Answer: C

Explanation:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/scenario-peered.html

NEW QUESTION 14

A company has application services that have been containerized and deployed on multiple Amazon EC2 instances with public IPs. An Apache Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for PostgreSQL. The company expects a significant increase of orders on its platform when a new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the product release?

  • A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
  • B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
  • C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
  • D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.

Answer: D

Explanation:
The correct answer is D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Option D meets the requirements of the scenario because it allows you to reduce operational overhead and support the product release by using the following AWS services and features:
  • Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that allows you to run Kubernetes applications on AWS without needing to install, operate, or maintain your own Kubernetes control plane. You can use Amazon EKS to deploy your containerized application services on a Kubernetes cluster that is compatible with your existing tools and processes.
  • AWS Fargate is a serverless compute engine that eliminates the need to provision and manage servers for your containers. You can use AWS Fargate as the launch type for your Amazon EKS pods, which are the smallest deployable units of computing in Kubernetes. You can also enable auto scaling for your pods, which allows you to automatically adjust the number of pods based on demand or custom metrics.
  • An Application Load Balancer (ALB) is a load balancer that distributes traffic across multiple targets in multiple Availability Zones using HTTP or HTTPS protocols. You can use an ALB to balance the load across your Amazon EKS pods and provide high availability and fault tolerance for your application.
  • Amazon RDS for PostgreSQL is a fully managed relational database service that supports the PostgreSQL open source database engine. You can create additional read replicas for your DB instance, which are copies of your primary DB instance that can handle read-only queries and improve performance. You can also use read replicas to scale out beyond the capacity of a single DB instance for read-heavy workloads.
  • Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed service that makes it easy to build and run applications that use Apache Kafka to process streaming data. Apache Kafka is an open source platform for building real-time data pipelines and streaming applications. You can use Amazon MSK to create and manage a Kafka cluster that is highly available, secure, and compatible with your existing Kafka applications. You can also configure your application services to use the Amazon MSK cluster as a source or destination of streaming data.
  • Amazon S3 is an object storage service that offers high durability, availability, and scalability. You can store static content such as images, videos, or documents in Amazon S3 buckets, which are containers for objects. You can also serve static content directly from Amazon S3 using public URLs or presigned URLs.
  • Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds. You can use Amazon CloudFront to create a distribution that caches static content from your Amazon S3 bucket at edge locations closer to your users. This can improve the performance and user experience of your application.
Option A is incorrect because creating an EC2 Auto Scaling group behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances for your containers. Creating additional read replicas for the DB instance would not provide
high availability or fault tolerance in case of a failure of the primary DB instance, unlike deploying the DB instance in Multi-AZ mode. Creating Amazon Kinesis data streams would not be compatible with your existing Apache Kafka applications, unlike using Amazon MSK.
Option B is incorrect because creating an EC2 Auto Scaling group behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances for your containers. Creating Amazon Kinesis data streams would not be compatible with your existing Apache Kafka applications, unlike using Amazon MSK. Storing and serving static content directly from Amazon S3 would not provide optimal performance and user experience, unlike using Amazon CloudFront.
Option C is incorrect because deploying the application on a Kubernetes cluster created on the EC2 instances behind an ALB would not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances and Kubernetes control plane for your containers. Using Amazon API Gateway to interact with the application would add an unnecessary layer of complexity and cost to your architecture, as you would need to create and maintain an API gateway that proxies requests to your ALB.
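The pod auto scaling mentioned for AWS Fargate above follows the documented Kubernetes Horizontal Pod Autoscaler rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that rule; the replica bounds and metric values here are illustrative assumptions, not taken from the question:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Kubernetes HPA rule: scale the replica count proportionally to metric
    pressure, clamped to the configured bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

# Example: 4 pods running at 150% of the CPU target scale out to 6 pods.
print(desired_replicas(4, current_metric=150, target_metric=100))
```

The clamp to min/max replicas mirrors the minReplicas/maxReplicas fields of an HPA manifest.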

NEW QUESTION 15

A company has AWS accounts that are in an organization in AWS Organizations. The company wants to track Amazon EC2 usage as a metric.
The company's architecture team must receive a daily alert if the EC2 usage is more than 10% higher than the average EC2 usage from the last 30 days.
Which solution will meet these requirements?

  • A. Configure AWS Budgets in the organization's management account. Specify a usage type of EC2 running hours. Specify a daily period. Set the budget amount to be 10% more than the reported average usage for the last 30 days from AWS Cost Explorer. Configure an alert to notify the architecture team if the usage threshold is met.
  • B. Configure AWS Cost Anomaly Detection in the organization's management account. Configure a monitor type of AWS services. Apply a filter of Amazon EC2. Configure an alert subscription to notify the architecture team if the usage is 10% more than the average usage for the last 30 days.
  • C. Enable AWS Trusted Advisor in the organization's management account. Configure a cost optimization advisory alert to notify the architecture team if the EC2 usage is 10% more than the reported average usage for the last 30 days.
  • D. Configure Amazon Detective in the organization's management account. Configure an EC2 usage anomaly alert to notify the architecture team if Detective identifies a usage anomaly of more than 10%.

Answer: B

Explanation:
The correct answer is B.
* B. This solution meets the requirements because it uses AWS Cost Anomaly Detection, a feature of AWS Cost Management that applies machine learning to identify and alert on anomalous spend and usage patterns. By configuring a monitor of type AWS services and applying a filter of Amazon EC2, the solution can track EC2 usage as a metric across the organization's accounts. By configuring an alert subscription with a 10% threshold, the solution can notify the architecture team through email or Amazon SNS whenever EC2 usage is more than 10% higher than the average usage for the last 30 days [1][2].
* A. This solution is incorrect because AWS Budgets, the cost-planning feature of AWS Cost Management, requires the budget amount to be a fixed or planned value. It cannot automatically recalculate the alert threshold from a rolling 30-day average reported by AWS Cost Explorer, so the daily alert would drift away from the required baseline [3][4].
* C. This solution is incorrect because AWS Trusted Advisor, a feature of AWS Support, provides recommendations for cost optimization, security, performance, and fault tolerance best practices. It does not support custom alerts based on EC2 usage relative to a 30-day average; its alerts come only from predefined checks and thresholds [5][6].
* D. This solution is incorrect because Amazon Detective is a service for analyzing and visualizing security data to investigate potential security issues. It does not provide EC2 usage anomaly alerts based on a 30-day average; its findings are based on GuardDuty findings and other security-related events detected by machine learning models [7][8].
References:
1: AWS Cost Anomaly Detection - Amazon Web Services
2: Getting started with AWS Cost Anomaly Detection
3: Set Custom Cost and Usage Budgets – AWS Budgets – Amazon Web Services
4: Creating a budget - AWS Cost Management
5: AWS Trusted Advisor
6: AWS Trusted Advisor - AWS Support
7: Security Investigation Visualization - Amazon Detective - AWS
8: What is Amazon Detective? - Amazon Detective
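As a back-of-the-envelope check of the alerting rule in option B, the sketch below compares one day's EC2 usage against the trailing 30-day average; the function name and the unit (instance hours per day) are illustrative assumptions:

```python
def should_alert(trailing_usage_hours, today_usage_hours, threshold_pct=10.0):
    """True when today's usage exceeds the trailing average by more than threshold_pct."""
    average = sum(trailing_usage_hours) / len(trailing_usage_hours)
    return today_usage_hours > average * (1 + threshold_pct / 100)

history = [100.0] * 30                # 30 days averaging 100 instance hours/day
print(should_alert(history, 111.0))   # 111 > 110, so an alert fires
print(should_alert(history, 110.0))   # exactly 10% higher: no alert
```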

NEW QUESTION 16

A company is running a compute workload by using Amazon EC2 Spot Instances that are in an Auto Scaling group. The launch template uses two placement groups and a single instance type.
Recently, a monitoring system reported Auto Scaling instance launch failures that correlated with longer wait times for system users. The company needs to improve the overall reliability of the workload.
Which solution will meet this requirement?

  • A. Replace the launch template with a launch configuration to use an Auto Scaling group that uses attribute-based instance type selection.
  • B. Create a new launch template version that uses attribute-based instance type selection. Configure the Auto Scaling group to use the new launch template version.
  • C. Update the Auto Scaling group to increase the number of placement groups.
  • D. Update the launch template to use a larger instance type.

Answer: B

Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/create-asg-instance-type-requirements.html#use-attribut
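Attribute-based instance type selection replaces the single instance type with a set of requirements, so the Auto Scaling group can draw Spot capacity from every matching pool instead of failing when one pool is exhausted. The sketch below shows the shape of a MixedInstancesPolicy using InstanceRequirements, as it would be passed to the EC2 Auto Scaling CreateAutoScalingGroup API; the template name and the vCPU/memory bounds are illustrative assumptions:

```python
# Shape of a MixedInstancesPolicy with attribute-based instance type selection,
# as accepted by the EC2 Auto Scaling CreateAutoScalingGroup API.
mixed_instances_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "spot-workload",  # hypothetical template name
            "Version": "$Latest",
        },
        "Overrides": [
            {
                # Any instance type with 2-8 vCPUs and at least 4 GiB of memory
                # qualifies, widening the Spot pool and reducing launch failures.
                "InstanceRequirements": {
                    "VCpuCount": {"Min": 2, "Max": 8},
                    "MemoryMiB": {"Min": 4096},
                }
            }
        ],
    },
    "InstancesDistribution": {
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
}
```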

NEW QUESTION 17

A company has more than 10,000 sensors that send data to an on-premises Apache Kafka server by using the Message Queuing Telemetry Transport (MQTT) protocol. The on-premises Kafka server transforms the data and then stores the results as objects in an Amazon S3 bucket.
Recently, the Kafka server crashed. The company lost sensor data while the server was being restored. A solutions architect must create a new design on AWS that is highly available and scalable to prevent a similar occurrence.
Which solution will meet these requirements?

  • A. Launch two Amazon EC2 instances to host the Kafka server in an active/standby configuration across two Availability Zones. Create a domain name in Amazon Route 53. Create a Route 53 failover policy. Route the sensors to send the data to the domain name.
  • B. Migrate the on-premises Kafka server to Amazon Managed Streaming for Apache Kafka (Amazon MSK). Create a Network Load Balancer (NLB) that points to the Amazon MSK broker. Enable NLB health checks. Route the sensors to send the data to the NLB.
  • C. Deploy AWS IoT Core, and connect it to an Amazon Kinesis Data Firehose delivery stream. Use an AWS Lambda function to handle data transformation. Route the sensors to send the data to AWS IoT Core.
  • D. Deploy AWS IoT Core, and launch an Amazon EC2 instance to host the Kafka server. Configure AWS IoT Core to send the data to the EC2 instance. Route the sensors to send the data to AWS IoT Core.

Answer: C

Explanation:
AWS IoT Core natively supports the MQTT protocol and scales to large device fleets, and Kinesis Data Firehose delivers the transformed data to Amazon S3 with no servers to restore after a crash. Amazon MSK, by contrast, has a quota of 1,000 client connections per second, so with 10,000 sensors it likely could not handle all of the connections: https://docs.aws.amazon.com/msk/latest/developerguide/limits.html
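The arithmetic behind the connection concern, assuming the 1,000 new-connections-per-second quota cited above, can be sketched as:

```python
import math

def reconnect_storm_seconds(sensor_count: int, conn_quota_per_second: int = 1000) -> int:
    """Minimum seconds for an entire fleet to reconnect after an outage, given a
    per-second new-connection quota (data is lost while sensors wait to connect)."""
    return math.ceil(sensor_count / conn_quota_per_second)

print(reconnect_storm_seconds(10_000))  # 10,000 sensors need at least 10 seconds
```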

NEW QUESTION 18

A company is expanding. The company plans to separate its resources into hundreds of different AWS accounts in multiple AWS Regions. A solutions architect must recommend a solution that denies access to any operations outside of specifically designated Regions.
Which solution will meet these requirements?

  • A. Create IAM roles for each account. Create IAM policies with conditional allow permissions that include only approved Regions for the accounts.
  • B. Create an organization in AWS Organizations. Create IAM users for each account. Attach a policy to each user to block access to Regions where an account cannot deploy infrastructure.
  • C. Launch an AWS Control Tower landing zone. Create OUs and attach SCPs that deny access to run services outside of the approved Regions.
  • D. Enable AWS Security Hub in each account. Create controls to specify the Regions where an account can deploy infrastructure.

Answer: C
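An SCP attached to an OU, as in option C, denies the disallowed Regions centrally instead of per user or per role. A minimal sketch of such a policy follows, built as a Python dict so it can be validated; the approved Region list and the set of exempted global services are illustrative assumptions:

```python
import json

approved_regions = ["us-east-1", "eu-west-1"]  # hypothetical approved Regions

region_deny_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Global services (whose endpoints resolve in us-east-1) must be
            # exempted, or the policy breaks them in every account.
            "NotAction": ["iam:*", "organizations:*", "route53:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": approved_regions}
            },
        }
    ],
}

print(json.dumps(region_deny_scp, indent=2))
```

Because the deny applies to every principal in the OU, new accounts inherit the restriction automatically as the company grows to hundreds of accounts.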

NEW QUESTION 19

A company processes environmental data. The company has set up sensors to provide a continuous stream of data from different areas in a city. The data is available in JSON format.
The company wants to use an AWS solution to send the data to a database that does not require fixed schemas for storage. The data must be sent in real time.
Which solution will meet these requirements?

  • A. Use Amazon Kinesis Data Firehose to send the data to Amazon Redshift.
  • B. Use Amazon Kinesis Data Streams to send the data to Amazon DynamoDB.
  • C. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to send the data to Amazon Aurora.
  • D. Use Amazon Kinesis Data Firehose to send the data to Amazon Keyspaces (for Apache Cassandra).

Answer: B

Explanation:
Amazon Kinesis Data Streams is a service that enables real-time data ingestion and processing. Amazon DynamoDB is a NoSQL database that does not require fixed schemas for storage. By using Kinesis Data Streams and DynamoDB, the company can send the JSON data to a database that can handle schemaless data in real time. References:
https://docs.aws.amazon.com/streams/latest/dev/introduction.html
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
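The "no fixed schema" point is the crux: a DynamoDB item only needs its key attributes, so differently shaped sensor readings can land in the same table. A minimal sketch, assuming a hypothetical `sensor_id` partition key:

```python
import json

def to_dynamodb_item(raw_json: str) -> dict:
    """Decode one JSON sensor reading into an item dict. Only the partition key
    ('sensor_id', a hypothetical attribute name) is required; every other
    attribute is stored as-is, since DynamoDB enforces no fixed schema."""
    data = json.loads(raw_json)
    extras = {k: v for k, v in data.items() if k != "sensor_id"}
    return {"sensor_id": str(data["sensor_id"]), **extras}

# Two readings with different shapes both become valid items.
print(to_dynamodb_item('{"sensor_id": 7, "pm25": 12.4}'))
print(to_dynamodb_item('{"sensor_id": 8, "noise_db": 61, "area": "north"}'))
```

In a real consumer, a Kinesis Data Streams record's payload would be decoded the same way before writing the item to the table.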

NEW QUESTION 20

A company's solutions architect is reviewing a web application that runs on AWS. The application references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Configure the application to write each object to both S3 buckets. Set up an Amazon Route 53 public hosted zone with a record set by using a weighted routing policy for each S3 bucket. Configure the application to reference the objects by using the Route 53 DNS name.
  • B. Create an AWS Lambda function to copy objects from the S3 bucket in us-east-1 to the S3 bucket in the second Region. Invoke the Lambda function each time an object is written to the S3 bucket in us-east-1. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  • C. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. Set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins.
  • D. Configure replication on the S3 bucket in us-east-1 to replicate objects to the S3 bucket in the second Region. If failover is required, update the application code to load S3 objects from the S3 bucket in the second Region.

Answer: C

Explanation:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html
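Option C's replication is a one-time bucket configuration, which is what keeps the operational overhead low. The sketch below shows the shape of an S3 replication configuration as passed to the PutBucketReplication API; the IAM role ARN and bucket names are illustrative assumptions:

```python
# Shape of an S3 replication configuration (PutBucketReplication API).
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # hypothetical role
    "Rules": [
        {
            "ID": "replicate-static-assets",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},  # empty filter: replicate every object in the bucket
            "Destination": {
                # hypothetical destination bucket in the second Region
                "Bucket": "arn:aws:s3:::static-assets-second-region"
            },
            "DeleteMarkerReplication": {"Status": "Disabled"},
        }
    ],
}
```

After replication is enabled, the CloudFront origin group handles failover automatically; no application code changes are needed, unlike option D.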

NEW QUESTION 21
......

Recommend!! Get the Full AWS-Certified-Solutions-Architect-Professional dumps in VCE and PDF From Thedumpscentre.com, Welcome to Download: https://www.thedumpscentre.com/AWS-Certified-Solutions-Architect-Professional-dumps/ (New 483 Q&As Version)