
AWS-Certified-Big-Data-Specialty Exam Questions - Online Test



Want to know more about Examcollection AWS-Certified-Big-Data-Specialty exam practice test features? Want to learn more about the Amazon AWS Certified Big Data - Specialty certification experience? Study verified Amazon AWS-Certified-Big-Data-Specialty answers to up-to-date AWS-Certified-Big-Data-Specialty questions at Examcollection. Pass the Amazon AWS-Certified-Big-Data-Specialty (AWS Certified Big Data - Specialty) test on your first attempt with an absolute guarantee.

Amazon AWS-Certified-Big-Data-Specialty Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
You are configuring your company’s application to use Auto Scaling and need to move user state
information. Which of the following AWS services provides a shared data store with durability and low latency?

  • A. Amazon Simple Storage Service
  • B. Amazon DynamoDB
  • C. Amazon EC2 instance storage
  • D. Amazon ElastiCache for Memcached

Answer: A

NEW QUESTION 2
When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application
rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active.
What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources with your configuration management systems? Choose 2 answers

  • A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system
  • B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system
  • C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation
  • D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system
  • E. Use Amazon Simple Workflow Service (SWF) to maintain an Amazon DynamoDB database that contains a whitelist of instances that have been previously launched, and allow the Amazon SWF worker to remove information from the configuration management system

Answer: AD
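
Option A can be implemented as a small scheduled script that compares what the configuration management system believes is running against what the Auto Scaling group actually reports. The following is only a hedged sketch: the group name, the inventory of CMS-tracked instances, and the remove_from_cms() helper are assumptions, not part of the question.

```python
import boto3

autoscaling = boto3.client("autoscaling")

def remove_from_cms(instance_id):
    # Placeholder: call your configuration management system's API here.
    print(f"De-registering {instance_id} from the CMS")

def cleanup(group_name="web-asg", cms_tracked_instances=()):
    # Ask Auto Scaling which instances it currently considers in service.
    response = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )
    in_service = {
        instance["InstanceId"]
        for group in response["AutoScalingGroups"]
        for instance in group["Instances"]
        if instance["LifecycleState"] == "InService"
    }
    # Anything the CMS still tracks but the group no longer reports as
    # in service has been terminated and can be cleaned up.
    for instance_id in set(cms_tracked_instances) - in_service:
        remove_from_cms(instance_id)

cleanup(cms_tracked_instances=["i-0123456789abcdef0", "i-0fedcba9876543210"])
```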

NEW QUESTION 3
Which of the following requires a custom CloudWatch metric to monitor?

  • A. Memory utilization of an EC2 instance
  • B. CPU utilization of an EC2 instance
  • C. Disk usage activity of an EC2 instance
  • D. Data transfer of an EC2 instance

Answer: A
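
For context on answer A: CloudWatch reports CPU, disk activity, and network metrics for EC2 out of the box, but memory utilization must be published from inside the instance as a custom metric. A minimal boto3 sketch follows; the namespace, dimension, and sample value are assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_memory_utilization(instance_id, percent_used):
    # Push a single data point for a custom memory-utilization metric.
    cloudwatch.put_metric_data(
        Namespace="Custom/System",  # custom namespace (assumed)
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Value": percent_used,
            "Unit": "Percent",
        }],
    )

publish_memory_utilization("i-0123456789abcdef0", 72.5)
```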

NEW QUESTION 4
A media advertising company handles a large number of real-time messages sourced from over 200
websites. Processing latency must be kept low. Based on calculations, a 60-shard Amazon Kinesis stream is more than sufficient to handle the maximum data throughput, even with traffic spikes. The company also uses an Amazon Kinesis Client Library (KCL) application running on Amazon Elastic Compute Cloud (EC2) managed by an Auto Scaling group. Amazon CloudWatch indicates an average of 25% CPU and a modest level of network traffic across all running servers.
The company reports a 150% to 200% increase in latency of processing messages from Amazon Kinesis during peak times. There are NO reports of delay from the sites publishing to Amazon Kinesis. What is the appropriate solution to address the latency?

  • A. Increase the number of shards in the Amazon Kinesis stream to 80 for greater concurrency
  • B. Increase the size of the Amazon EC2 instances to increase network throughput
  • C. Increase the minimum number of instances in the Auto Scaling group
  • D. Increase Amazon DynamoDB throughput on the checkpointing table

Answer: A
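
Answer A amounts to resharding the stream. As an illustration only, a stream could be scaled to 80 shards with a single UpdateShardCount call; the stream name below is assumed and the operation takes a few minutes to complete.

```python
import boto3

kinesis = boto3.client("kinesis")

# Uniformly rescale the stream to 80 shards for greater read/write concurrency.
kinesis.update_shard_count(
    StreamName="clickstream",        # assumed stream name
    TargetShardCount=80,
    ScalingType="UNIFORM_SCALING",
)
```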

NEW QUESTION 5
A company is using Amazon Machine Learning as part of a medical software application. The application will predict the most likely blood type for a patient based on a variety of other clinical tests that are available when blood type knowledge is unavailable.
What is the appropriate model choice and target attribute combination for the problem?

  • A. Multi-class classification model with a categorical target attribute
  • B. Regression model with a numeric target attribute
  • C. Binary Classification with a categorical target attribute
  • D. K-Nearest Neighbors model with a multi-class target attribute

Answer: C

NEW QUESTION 6
A systems engineer for a company proposes digitization and backup of large archives for customers.
The systems engineer needs to provide users with secure storage that ensures data can never be tampered with once it has been uploaded. How should this be accomplished?

  • A. Create an Amazon Glacier Vault.
  • B. Specify a “Deny” Vault Lock policy on this vault to block “glacier:DeleteArchive”.
  • C. Create an Amazon S3 bucket.
  • D. Specify a “Deny” bucket policy on this bucket to block “s3:DeleteObject”.
  • E. Create an Amazon Glacier Vault.
  • F. Specify a “Deny” vault access policy on this vault to block “glacier:DeleteArchive”.
  • G. Create a secondary AWS account containing an Amazon S3 bucket.
  • H. Grant “s3:PutObject” to the primary account.

Answer: A
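
Answer A relies on Amazon Glacier Vault Lock: once a lock policy is completed it becomes immutable, so a Deny on glacier:DeleteArchive permanently prevents archives from being tampered with or deleted. The sketch below is illustrative only; the vault name, account ID, and Region in the ARN are assumptions.

```python
import json
import boto3

glacier = boto3.client("glacier")

lock_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyArchiveDeletion",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/backup-vault",
    }],
}

# Initiate the Vault Lock; the returned lock ID must be used to complete the
# lock within 24 hours, after which the policy can no longer be changed.
response = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="backup-vault",
    policy={"Policy": json.dumps(lock_policy)},
)
glacier.complete_vault_lock(
    accountId="-",
    vaultName="backup-vault",
    lockId=response["lockId"],
)
```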

NEW QUESTION 7
What is one key difference between an Amazon EBS-backed and an instance-store backed instance?

  • A. Amazon EBS-backed instances can be stopped and restarted
  • B. Instance-store backed instances can be stopped and restarted
  • C. Auto Scaling requires using Amazon EBS-backed instances
  • D. Virtual Private Cloud requires EBS-backed instances

Answer: A

NEW QUESTION 8
You have a load balancer configured for VPC, and all backend Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer’s DNS name. Which options are probable causes of this behavior? Choose 2 answers

  • A. The load balancer was not configured to use a public subnet with an Internet gateway configured
  • B. The Amazon EC2 instances do not have a dynamically allocated private IP address
  • C. The security groups or network ACLs are not properly configured for web traffic
  • D. The load balancer is not configured in a private subnet with a NAT instance
  • E. The VPC does not have a VGW configured

Answer: AC

NEW QUESTION 9
A company operates an international business served from a single AWS region. The company wants to expand into a new country. The regulator for that country requires the Data Architect to maintain a log of financial transactions in the country within 24 hours of the production transaction. The production application is latency-insensitive. Another AWS region is available in the new country.
What is the most cost-effective way to meet this requirement?

  • A. Use CloudFormation to replicate the production application to the new region
  • B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement
  • C. Continue to serve customers from the existing region while using Amazon Kinesis to stream transaction data to the regulator
  • D. Use Amazon S3 cross-region replication to copy and persist production transaction logs to a bucket in the new country’s region

Answer: D
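
Answer D can be expressed as an S3 replication configuration on the bucket that stores production transaction logs. This is a hedged sketch only: the bucket names and IAM role ARN are assumptions, and versioning must already be enabled on both buckets.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="prod-transaction-logs",  # source bucket (assumed)
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-transaction-logs",
            "Prefix": "",             # replicate every object in the bucket
            "Status": "Enabled",
            # Destination bucket lives in the new country's region (assumed name).
            "Destination": {"Bucket": "arn:aws:s3:::regulator-transaction-logs"},
        }],
    },
)
```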

NEW QUESTION 10
You currently run your infrastructure on Amazon EC2 instances behind an Auto Scaling group. All logs
for your application are currently written to ephemeral storage. Recently your company experienced a major bug in code that made it through testing and was ultimately deployed to your fleet. This bug triggered your Auto Scaling group to scale up and back down before you could successfully retrieve the logs off your server to better assist you in troubleshooting the bug.
Which technique should you use to make sure you are able to review your logs after your instances have shut down?

  • A. Configure the ephemeral policies on your Auto Scaling group to back up on terminate
  • B. Configure your Auto Scaling policies to create a snapshot of all ephemeral storage on terminate
  • C. Install the CloudWatch Logs Agent on your AMI, and configure the CloudWatch Logs Agent to stream your logs
  • D. Install the CloudWatch monitoring agent on your AMI, and set up a new SNS alert for CloudWatch metrics that triggers the CloudWatch monitoring agent to backup all logs on the ephemeral drive
  • E. Install the CloudWatch Logs Agent on your AMI.
  • F. Update your Scaling policy to enable automated CloudWatch Log copy

Answer: C
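
Answer C installs the CloudWatch Logs Agent, which is normally configured through its own configuration file rather than code. Purely to illustrate what the agent does under the hood, the sketch below pushes a log line to a log group and stream with boto3; the group and stream names are assumptions.

```python
import time
import boto3

logs = boto3.client("logs")
group, stream = "/app/access-logs", "i-0123456789abcdef0"

# Create the log group and stream if they do not already exist.
for create, kwargs in [
    (logs.create_log_group, {"logGroupName": group}),
    (logs.create_log_stream, {"logGroupName": group, "logStreamName": stream}),
]:
    try:
        create(**kwargs)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass

# Ship one log line; the agent does this continuously for the files it tails,
# so the logs survive instance termination.
logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[{
        "timestamp": int(time.time() * 1000),
        "message": "example application log line",
    }],
)
```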

NEW QUESTION 11
An Administrator needs to design the event log storage architecture for events from mobile devices.
The event data will be processed by an Amazon EMR cluster daily for aggregated reporting and analytics before being archived.
How should the administrator recommend storing the log data?

  • A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders
  • B. Create an Amazon DynamoDB table partitioned on the device and sorted on date, write log data to the table.
  • C. Execute the EMR job on the Amazon DynamoDB table
  • D. Create an Amazon S3 bucket and write data into folders by day.
  • E. Execute the EMR job on the daily folder
  • F. Create an Amazon DynamoDB table partitioned on EventID, write log data to the table.
  • G. Execute the EMR job on the table

Answer: C

NEW QUESTION 12
Your company operates a website for promoters to sell tickets for entertainment events. You are
using a load balancer in front of an Auto Scaling group of web servers. Promotion of popular events can cause surges of website visitors. During scale-out at these times, newly launched instances are unable to complete configuration quickly enough, leading to user disappointment.
What option should you choose to improve scaling yet minimize costs? Choose 2 answers

  • A. Create an AMI with the application pre-configured.
  • B. Create a new Auto Scaling launch configuration using this new AMI, and configure the Auto Scaling group to launch with this AMI
  • C. Use Auto Scaling pre-warming to launch instances before they are required.
  • D. Configure pre-warming to use the CPU trend CloudWatch metric for the group
  • E. Publish a custom CloudWatch metric from your application on the number of tickets sold, and create an Auto Scaling policy based on this
  • F. Use the history of past scaling events for similar event sales to predict future scaling requirements.
  • G. Use the Auto Scaling scheduled scaling feature to vary the size of the fleet
  • H. Configure an Amazon S3 bucket for website hosting.
  • I. Upload into the bucket an HTML holding page with its ‘x-amz-website-redirect-location’ metadata property set to the load balancer endpoint.
  • J. Configure Elastic Load Balancing to redirect to the holding page when the load on web servers is above a certain level
  • J. Configure Elastic Load Balancing to redirect to the holding page when the load on web servers is above a certain level

Answer: DE

NEW QUESTION 13
A company uses Amazon Redshift for its enterprise data warehouse. A new on-premises PostgreSQL
OLTP DB must be integrated into the data warehouse. Each table in the PostgreSQL DB has an indexed last_modified timestamp column. The data warehouse has a staging layer to load source data into the data warehouse environment for further processing.
The data lag between the source PostgreSQL DB and the Amazon Redshift staging layer should NOT exceed four hours.
What is the most efficient technique to meet these requirements?

  • A. Create a DBLINK on the source DB to connect to Amazon Redshift.
  • B. Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and execute the event on the Amazon Redshift staging table.
  • C. Use a PostgreSQL trigger on the source table to capture the new insert/update/delete event and write it to an Amazon Kinesis stream.
  • D. Use a KCL application to execute the event on the Amazon Redshift staging table.
  • E. Extract the incremental changes periodically using a SQL query.
  • F. Upload the changes to multiple Amazon Simple Storage Service (S3) objects and run the COPY command to load the Amazon Redshift staging table.
  • G. Extract the incremental changes periodically using a SQL query.
  • H. Upload the changes to a single Amazon Simple Storage Service (S3) object and run the COPY command to load the Amazon Redshift staging layer.

Answer: C
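
Answer C captures changes with a PostgreSQL trigger and ships them through an Amazon Kinesis stream for a KCL application to apply to the staging tables. The trigger itself would be written in PL/pgSQL; the Python below is only a hypothetical forwarder showing the Kinesis side, with the stream name and event shape assumed.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def forward_change(table, operation, row):
    # Wrap one captured insert/update/delete as a change event.
    event = {"table": table, "op": operation, "row": row}
    kinesis.put_record(
        StreamName="oltp-changes",               # assumed stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=table,                      # keep a table's events ordered
    )

forward_change("orders", "INSERT", {"order_id": 42, "amount": 19.99})
```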

NEW QUESTION 14
A company is preparing to give AWS Management Console access to developers. Company policy
mandates identity federation and role-based access control. Roles are currently assigned using groups in the corporate Active Directory. What combination of the following will give developers access to the AWS console? Choose 2 answers

  • A. AWS Directory Service AD connector
  • B. AWS Directory Service Simple AD
  • C. AWS Identity and Access Management groups
  • D. AWS Identity and Access Management roles
  • E. AWS Identity and Access Management users

Answer: AD

NEW QUESTION 15
A US-based company is expanding their web presence into Europe. The company wants to extend their AWS infrastructure from Northern Virginia (us-east-1) into the Dublin (eu-west-1) region. Which of the following options would enable an equivalent experience for users on both continents?

  • A. Use a public-facing load balancer per region to load-balance web traffic, and enable HTTP health checks
  • B. Use a public-facing load balancer per region to load-balance web traffic, and enable sticky sessions
  • C. Use Amazon Route 53, and apply a geolocation routing policy to distribute traffic across both regions
  • D. Use Amazon Route 53, and apply a weighted routing policy to distribute traffic across both regions

Answer: C
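
Answer C maps to a pair of geolocation records in Amazon Route 53, one for European visitors and one default record for everyone else. The sketch below is illustrative; the hosted zone ID, record name, and regional endpoints are assumptions.

```python
import boto3

route53 = boto3.client("route53")

def geo_record(set_id, geo, endpoint):
    # Build one geolocation-routed CNAME record pointing at a regional endpoint.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "GeoLocation": geo,
            "TTL": 60,
            "ResourceRecords": [{"Value": endpoint}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # assumed hosted zone
    ChangeBatch={"Changes": [
        # European visitors go to the eu-west-1 endpoint.
        geo_record("europe", {"ContinentCode": "EU"}, "elb-eu-west-1.example.com"),
        # The default record (CountryCode "*") catches everyone else.
        geo_record("default", {"CountryCode": "*"}, "elb-us-east-1.example.com"),
    ]},
)
```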

NEW QUESTION 16
A user has set up an RDS DB with Oracle. The user wants to get notifications when someone modifies
the security group of that DB. How can the user configure that?

  • A. It is not possible to get the notifications on a change in the security group
  • B. Configure SNS to monitor security group changes
  • C. Configure event notification on the DB security group
  • D. Configure the CloudWatch alarm on the DB for a change in the security group

Answer: C
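
Answer C corresponds to an RDS event subscription whose source type is the DB security group, delivering configuration-change events to an SNS topic. A hedged sketch follows; the subscription name, topic ARN, and security group name are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Notify an SNS topic whenever the DB security group's configuration changes.
rds.create_event_subscription(
    SubscriptionName="db-sg-change-alerts",
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:rds-alerts",  # assumed topic
    SourceType="db-security-group",
    EventCategories=["configuration change"],
    SourceIds=["my-db-security-group"],                           # assumed group
    Enabled=True,
)
```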

NEW QUESTION 17
A user has launched an EC2 instance from an instance store-backed AMI. The infrastructure team
wants to create an AMI from the running instance. Which of the below-mentioned steps will not be performed while creating the AMI?

  • A. Define the AMI launch permissions
  • B. Upload the bundled volume
  • C. Register the AMI
  • D. Bundle the volume

Answer: A

NEW QUESTION 18
A customer is collecting clickstream data using Amazon Kinesis and is grouping the events by IP address into 5-minute chunks stored in Amazon S3.
Many analysts in the company use Hive on Amazon EMR to analyze this data. Their queries always reference a single IP address. Data must be optimized for querying based on IP address using Hive running on Amazon EMR. What is the most efficient method to query the data with Hive?

  • A. Store an index of the files by IP address in the Amazon DynamoDB metadata store for EMRFS
  • B. Store the Amazon S3 objects with the following naming scheme: bucketname/source=ip_address/year=yy/month=mm/day=dd/hour=hh/filename
  • C. Store the data in an HBase table with the IP address as the row key
  • D. Store the events for an IP address as a single file in Amazon S3 and add metadata with key:Hive_Partitioned_IPAddress

Answer: B
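
Answer B works because Hive can treat key prefixes such as source=ip_address/year=yy/... as partitions, so a query for one IP address only reads that prefix instead of scanning every object. The sketch below shows one way objects could be written with that naming scheme; the bucket name and payload are assumptions.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def write_chunk(ip_address, payload, bucket="clickstream-archive"):
    # Build a Hive-style partitioned key: source=<ip>/year=YYYY/month=MM/day=DD/hour=HH/...
    now = datetime.now(timezone.utc)
    key = (
        f"events/source={ip_address}/year={now:%Y}/month={now:%m}/"
        f"day={now:%d}/hour={now:%H}/events-{now:%M}.json"
    )
    s3.put_object(Bucket=bucket, Key=key, Body=payload)

# A Hive external table partitioned on (source, year, month, day, hour) can then
# prune to a single IP address's prefix at query time.
write_chunk("203.0.113.7", b'{"clicks": []}')
```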

NEW QUESTION 19
You have an application running on an Amazon Elastic Compute Cloud instance that uploads 5 GB
video objects to Amazon Simple Storage Service (S3). Video uploads are taking longer than expected, resulting in poor application performance. Which method will help improve performance of your application?

  • A. Enable enhanced networking
  • B. Use Amazon S3 multipart upload
  • C. Leveraging Amazon CloudFront, use the HTTP POST method to reduce latency.
  • D. Use Amazon Elastic Block Store Provisioned IOPs and use an Amazon EBS-optimized instance

Answer: B
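
Answer B, multipart upload, splits each 5 GB object into parts that transfer in parallel and can be retried individually. With boto3, the transfer manager does this automatically once a size threshold is crossed; the file, bucket, and key names below are assumptions.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # switch to multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # upload in 64 MB parts
    max_concurrency=10,                    # parallel part uploads
)

# Upload one video; parts are sent concurrently and retried independently.
s3.upload_file("video.mp4", "video-uploads", "videos/video.mp4", Config=config)
```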

NEW QUESTION 20
A company is centralizing a large number of unencrypted small files from multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.
Which method meets the requirements?

  • A. Compare the S3 ETags from the source and destination objects
  • B. Call the S3 CompareObjects API for the source and destination objects
  • C. Place a HEAD request against the source and destination objects, comparing the SIGv4 headers
  • D. Compare the size of the source and destination objects

Answer: B

NEW QUESTION 21
......

100% Valid and Newest Version AWS-Certified-Big-Data-Specialty Questions & Answers shared by prep-labs.com, Get Full Dumps HERE: https://www.prep-labs.com/dumps/AWS-Certified-Big-Data-Specialty/ (New 243 Q&As)