
A company runs multiple workloads on virtual machines (VMs) in an on-premises data center. The company is expanding rapidly. The on-premises data center is not able to scale fast enough to meet business needs. The company wants to migrate the workloads to AWS.

The migration is time sensitive. The company wants to use a lift-and-shift strategy for non-critical workloads.

Which combination of steps will meet these requirements? (Select THREE.)

A.

Use the AWS Schema Conversion Tool (AWS SCT) to collect data about the VMs.

B.

Use AWS Application Migration Service. Install the AWS Replication Agent on the VMs.

C.

Complete the initial replication of the VMs. Launch test instances to perform acceptance tests on the VMs.

D.

Stop all operations on the VMs. Launch a cutover instance.

E.

Use AWS App2Container (A2C) to collect data about the VMs.

F.

Use AWS Database Migration Service (AWS DMS) to migrate the VMs.

A company asks a solutions architect to review the architecture for its messaging application. The application uses TCP and UDP traffic. The company is planning to deploy a new VoIP feature, but its 10 test users in other countries are reporting poor call quality.

The VoIP application runs on an Amazon EC2 instance with more than enough resources. The HTTP portion of the company's application, which is behind an Application Load Balancer, has no issues.

What should the solutions architect recommend for the company to do to address the VoIP performance issues?

A.

Use AWS Global Accelerator.

B.

Implement Amazon CloudFront into the architecture.

C.

Use an Amazon Route 53 geoproximity routing policy.

D.

Migrate from Application Load Balancers to Network Load Balancers.

An image-hosting company stores images as objects in Amazon S3 buckets. The company must prevent accidental exposure of the objects to the public. All S3 objects in the company's entire AWS account must remain private.

Which solution will meet these requirements?

A.

Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to remediate any change that makes the objects public.

B.

Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change to S3 bucket policies is detected. Use the AWS CLI to change any S3 bucket policy that Trusted Advisor flags.

C.

Use AWS Resource Access Manager (AWS RAM) to find publicly accessible S3 buckets. Use Amazon SNS to invoke an AWS Lambda function when AWS RAM detects a change in S3 bucket policies. Configure the Lambda function to programmatically remediate each detected change.

D.

Use the S3 Block Public Access feature at the account level. Deploy the AWS Config s3-account-level-public-access-blocks rule and an AWS Systems Manager document to take automatic remediation actions when the rule is in the non-compliant state.
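For context, the account-level block described in option D maps to a single API call. A minimal sketch of the payload that boto3's `s3control.put_public_access_block` expects (the account ID is a placeholder):

```python
# Account-level S3 Block Public Access payload, as it would be passed to
# boto3's s3control.put_public_access_block (sketch; account ID is a placeholder).

def account_public_access_block(account_id: str) -> dict:
    """Return keyword arguments for s3control.put_public_access_block."""
    return {
        "AccountId": account_id,
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject new public bucket policies
            "RestrictPublicBuckets": True,  # restrict access to already-public buckets
        },
    }

params = account_public_access_block("111122223333")
```

Because all four flags are set at the account level, the setting overrides any bucket-level policy or ACL, which is why it pairs well with the AWS Config rule for drift detection.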

A financial services company plans to launch a new application on AWS to handle sensitive financial transactions. The company will deploy the application on Amazon EC2 instances. The company will use Amazon RDS for MySQL as the database. The company's security policies mandate that data must be encrypted at rest and in transit.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.

B.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure IPsec tunnels for encryption in transit.

C.

Implement third-party application-level data encryption before storing data in Amazon RDS for MySQL. Configure AWS Certificate Manager (ACM) SSL/TLS certificates for encryption in transit.

D.

Configure encryption at rest for Amazon RDS for MySQL by using AWS KMS managed keys. Configure a VPN connection to enable private connectivity and to encrypt data in transit.
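Encryption at rest with a KMS key, as the first three options describe, is a create-time setting on the DB instance. A minimal sketch of the keyword arguments for boto3's `rds.create_db_instance` (identifiers, instance class, and key alias are placeholders):

```python
# Sketch of rds.create_db_instance arguments with encryption at rest enabled.
# All identifiers below are placeholders, not values from the question.

def encrypted_rds_params(db_id: str, kms_key_id: str) -> dict:
    """Keyword arguments for rds.create_db_instance with a KMS-encrypted volume."""
    return {
        "DBInstanceIdentifier": db_id,
        "Engine": "mysql",
        "DBInstanceClass": "db.m5.large",   # example instance class
        "AllocatedStorage": 100,            # GiB
        "StorageEncrypted": True,           # encrypt at rest with the key below
        "KmsKeyId": kms_key_id,
    }

params = encrypted_rds_params("transactions-db", "alias/rds-key")
```

Encryption in transit is handled separately: clients connect to the RDS endpoint over TLS, so no change to the storage configuration above is needed for that part.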

A company has a production Amazon RDS for MySQL database. The company needs to create a new application that will read frequently changing data from the database with minimal impact on the database's overall performance. The application will rarely perform the same query more than once.

What should a solutions architect do to meet these requirements?

A.

Set up an Amazon ElastiCache cluster. Query the results in the cluster.

B.

Set up an Application Load Balancer (ALB). Query the results in the ALB.

C.

Set up a read replica for the database. Query the read replica.

D.

Set up querying of database snapshots. Query the database snapshots.

A company has a development account that contains Amazon EC2 instances. The company uses the EC2 instances for testing. A recent audit of the development account showed that some developers occasionally forget to stop instances after the tests are finished, which incurs extra costs. The company wants to optimize costs for the development account. The company wants to use AWS Budgets to implement a budget for the account.

Which solution will meet these requirements?

A.

Define an alert in AWS Budgets for when the budget threshold reaches 100% of forecasted costs. Configure AWS Budgets to send an Amazon SNS notification to an AWS Lambda function. Configure the Lambda function to stop the EC2 instances when the function receives a notification.

B.

Define an alert in AWS Budgets for when the budget threshold reaches 100% of forecasted costs. Implement an action in the alert to automatically stop the EC2 instances.

C.

Define an alert in AWS Budgets for when the budget threshold reaches 100% of the budgeted amount. Create an Amazon EventBridge scheduled rule. Implement an AWS Lambda function to stop the EC2 instances based on the scheduled rule.

D.

Define an alert in AWS Budgets for when the budget threshold reaches 100% of the budgeted amount. Implement an action in the alert to automatically stop the EC2 instances.
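Options C and D both start from a budget defined against the budgeted amount. A minimal sketch of the arguments that boto3's `budgets.create_budget` takes for such a monthly cost budget (account ID, name, and limit are placeholders):

```python
# Sketch of budgets.create_budget arguments for a fixed monthly cost budget.
# The account ID, budget name, and limit are placeholders.

def monthly_cost_budget(account_id: str, limit_usd: str) -> dict:
    """Keyword arguments for budgets.create_budget with a fixed monthly limit."""
    return {
        "AccountId": account_id,
        "Budget": {
            "BudgetName": "dev-account-monthly",
            "BudgetLimit": {"Amount": limit_usd, "Unit": "USD"},  # the budgeted amount
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
    }

budget = monthly_cost_budget("111122223333", "200")
```

A budget action attached to an alert on this budget can then stop the EC2 instances natively, without a custom Lambda function or scheduled rule.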

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?

A.

Use Amazon FSx for Lustre scratch file systems.

B.

Use Amazon FSx for Lustre persistent file systems.

C.

Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.

D.

Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.

The administrator is using an IAM role that has the following IAM policy attached:

What is the cause of the unsuccessful request?

A.

The EC2 instance has a resource-based policy with a Deny statement.

B.

The principal has not been specified in the policy statement.

C.

The "Action" field does not grant the actions that are required to terminate the EC2 instance.

D.

The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
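The attached IAM policy is not reproduced in this dump, but option D implies it contains an `aws:SourceIp` condition restricted to those two CIDR blocks. A minimal sketch of how such a condition matches a caller's source IP, using the standard library's `ipaddress` module:

```python
import ipaddress

# CIDR blocks taken from option D; mirrors how an IpAddress condition on
# aws:SourceIp evaluates the caller's source IP.
ALLOWED_CIDRS = ["192.0.2.0/24", "203.0.113.0/24"]

def source_ip_allowed(ip: str) -> bool:
    """Return True if the caller's IP falls inside any allowed CIDR block."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in ALLOWED_CIDRS)
```

A request from an address outside both ranges (for example, a corporate NAT on a different network) fails the condition, and the terminate call is denied even though the action itself is allowed.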

A company processes large amounts of data by using Amazon EC2 instances in an Auto Scaling group. The data processing jobs run for up to 48 hours each week. The data processing jobs can handle interruptions. However, the company wants to minimize the interruptions. The company wants to use the latest generation of Amazon EC2 instances each year.

Which solution will meet these requirements in the MOST cost-effective way?

A.

Purchase Convertible Reserved Instances on an All Upfront basis for a 3-year term for the instance types currently in use.

B.

Purchase Standard Reserved Instances on an All Upfront basis for a 1-year term for the instance types currently in use.

C.

Purchase Spot Instances with a price-capacity-optimized allocation strategy. Override instance types in the Auto Scaling group.

D.

Purchase Spot Instances with a capacity-optimized allocation strategy. Override instance types in the Auto Scaling group.
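Options C and D differ only in the allocation strategy string inside the Auto Scaling group's mixed instances policy. A minimal sketch of that policy fragment as option C describes it (the launch template ID and instance types are placeholders):

```python
# Sketch of a MixedInstancesPolicy fragment for an Auto Scaling group using
# Spot capacity with the price-capacity-optimized strategy (option C).
# The launch template ID and instance types are placeholders.

def spot_mixed_instances_policy(template_id: str, instance_types) -> dict:
    """Return a MixedInstancesPolicy dict with instance type overrides."""
    return {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": template_id,
                "Version": "$Latest",
            },
            # Overrides let the group pick newer instance types each year
            # without replacing the launch template.
            "Overrides": [{"InstanceType": t} for t in instance_types],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # all Spot
            "SpotAllocationStrategy": "price-capacity-optimized",
        },
    }

policy = spot_mixed_instances_policy(
    "lt-0abc123", ["c6i.4xlarge", "c7i.4xlarge", "m6i.4xlarge"]
)
```

The price-capacity-optimized strategy weighs both Spot price and available capacity, which helps minimize interruptions while keeping costs low.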

A company has hired an external vendor to work in the company's AWS account. The vendor uses an automated tool that the vendor hosts in its own AWS account. The vendor does not have IAM access to the company's AWS account. A solutions architect needs to grant access to the vendor.

Which solution will meet these requirements MOST securely?

A.

Create an IAM role in the company's account to delegate access to the vendor's IAM role. Attach the appropriate IAM policies to the new IAM role to grant the permissions that the vendor requires.

B.

Create an IAM user in the company's account with a password. Attach the appropriate IAM policies to the IAM user.

C.

Create an IAM group in the company's account. Add the IAM user for the vendor's automated tool from the vendor account to the IAM group. Attach policies to the group.

D.

Create a new identity provider (IdP) of provider type AWS account. Supply the vendor's AWS account ID and username. Attach policies to the IdP.
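Cross-account delegation as in option A rests on the role's trust policy. A minimal sketch of such a trust policy, allowing the vendor's account to assume the role; the account ID is a placeholder, and the `sts:ExternalId` condition is an added safeguard commonly recommended for third-party access (it is not stated in the question):

```python
import json

# Sketch of a cross-account trust policy for the role in option A.
# The vendor account ID and external ID are placeholders.

def vendor_trust_policy(vendor_account_id: str, external_id: str) -> dict:
    """Trust policy allowing the vendor account to assume this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{vendor_account_id}:root"},
                "Action": "sts:AssumeRole",
                # ExternalId guards against the confused-deputy problem when a
                # third party assumes the role on the company's behalf.
                "Condition": {"StringEquals": {"sts:ExternalId": external_id}},
            }
        ],
    }

policy = vendor_trust_policy("444455556666", "vendor-tool-7f3a")
```

The vendor's tool then obtains short-lived credentials via `sts:AssumeRole`, so no long-term credentials ever leave the company's account.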

A company hosts an application in a private subnet. The company has already integrated the application with Amazon Cognito. The company uses an Amazon Cognito user pool to authenticate users.

The company needs to modify the application so the application can securely store user documents in an Amazon S3 bucket.

Which combination of steps will securely integrate Amazon S3 with the application? (Select TWO.)

A.

Create an Amazon Cognito identity pool to generate secure Amazon S3 access tokens for users when they successfully log in.

B.

Use the existing Amazon Cognito user pool to generate Amazon S3 access tokens for users when they successfully log in.

C.

Create an Amazon S3 VPC endpoint in the same VPC where the company hosts the application.

D.

Create a NAT gateway in the VPC where the company hosts the application. Assign a policy to the S3 bucket to deny any request that is not initiated from Amazon Cognito.

E.

Attach a policy to the S3 bucket that allows access only from the users' IP addresses.

A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum capacity in the cluster, which causes performance issues.

Which solution will resolve this issue with the LEAST administrative overhead?

A.

Scale out the nodes by tracking the memory usage.

B.

Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.

C.

Use an AWS Lambda function to resize the EKS cluster automatically.

D.

Use an Amazon EC2 Auto Scaling group to distribute the workload.

A media streaming company needs to deploy its video processing application across multiple Availability Zones for high availability. The application consists of containerized microservices that process video files. The microservices must automatically recover from failures.

Which solution meets these requirements with the LEAST operational overhead?

A.

Deploy the containers to Amazon ECS with the EC2 launch type.

B.

Deploy the containers to Amazon EKS with self-managed nodes.

C.

Deploy the containers to Amazon ECS with the Fargate launch type.

D.

Deploy the containers directly to Amazon EC2 instances.

A company is developing a highly available natural language processing (NLP) application. The application handles large volumes of concurrent requests. The application performs NLP tasks such as entity recognition, sentiment analysis, and key phrase extraction on text data.

The company needs to store data that the application processes in a highly available and scalable database.

Which solution will meet these requirements?

A.

Create an Amazon API Gateway REST API endpoint to handle incoming requests. Configure the REST API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Comprehend to perform NLP tasks on the text data. Store the processed data in Amazon DynamoDB.

B.

Create an Amazon API Gateway HTTP API endpoint to handle incoming requests. Configure the HTTP API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Translate to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.

C.

Create an Amazon SQS queue to buffer incoming requests. Deploy the NLP application on Amazon EC2 instances in an Auto Scaling group. Use Amazon Comprehend to perform NLP tasks. Store the processed data in an Amazon RDS database.

D.

Create an Amazon API Gateway WebSocket API endpoint to handle incoming requests. Configure the WebSocket API to invoke an AWS Lambda function for each request. Configure the Lambda function to call Amazon Textract to perform NLP tasks on the text data. Store the processed data in Amazon ElastiCache.

A company runs game applications on AWS. The company needs to collect, visualize, and analyze telemetry data from the company's game servers. The company wants to gain insights into the behavior, performance, and health of game servers in near real time.

Which solution will meet these requirements?

A.

Use Amazon Kinesis Data Streams to collect telemetry data. Use Amazon Managed Service for Apache Flink to process the data in near real time and publish custom metrics to Amazon CloudWatch. Use Amazon CloudWatch to create dashboards and alarms from the custom metrics.

B.

Use Amazon Data Firehose to collect, process, and store telemetry data in near real time. Use AWS Glue to extract, transform, and load (ETL) data from Firehose into required formats for analysis. Use Amazon QuickSight to visualize and analyze the data.

C.

Use Amazon Kinesis Data Streams to collect, process, and store telemetry data. Use Amazon EMR to process the data in near real time into required formats for analysis. Use Amazon Athena to analyze and visualize the data.

D.

Use Amazon DynamoDB Streams to collect and store telemetry data. Configure DynamoDB Streams to invoke AWS Lambda functions to process the data in near real time. Use Amazon Managed Grafana to visualize and analyze the data.

A company plans to run a high performance computing (HPC) workload on Amazon EC2 instances. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication.

Which solution will meet these requirements?

A.

Configure the EC2 instances to be part of a cluster placement group.

B.

Launch the EC2 instances with Dedicated Instance tenancy.

C.

Launch the EC2 instances as Spot Instances.

D.

Configure an On-Demand Capacity Reservation when the EC2 instances are launched.
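Placing instances into a cluster placement group, as option A describes, is a launch-time setting. A minimal sketch of the `ec2.run_instances` arguments involved (the AMI ID, instance type, and group name are placeholders; the group itself would first be created with `create_placement_group` and `Strategy="cluster"`):

```python
# Sketch of ec2.run_instances arguments that place instances into a cluster
# placement group (option A). AMI ID, instance type, and group name are
# placeholders, not values from the question.

def hpc_run_instances_params(placement_group: str, count: int) -> dict:
    """Keyword arguments for ec2.run_instances inside a cluster placement group."""
    return {
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "c5n.18xlarge",      # example network-optimized type
        "MinCount": count,
        "MaxCount": count,
        "Placement": {"GroupName": placement_group},  # pack instances close together
    }

params = hpc_run_instances_params("hpc-cluster-pg", 8)
```

The cluster strategy packs instances onto hardware in the same Availability Zone, which is what delivers the low-latency, high-throughput node-to-node communication the workload needs.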

A medical company wants to perform transformations on a large amount of clinical trial data that comes from several customers. The company must extract the data from a relational database that contains the customer data. Then the company will transform the data by using a series of complex rules. The company will load the data to Amazon S3 when the transformations are complete.

All data must be encrypted where it is processed before the company stores the data in Amazon S3. All data must be encrypted by using customer-specific keys.

Which solution will meet these requirements with the LEAST amount of operational effort?

A.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.

B.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses client-side encryption with a custom client-side root key (CSE-Custom) to encrypt the data.

C.

Create one AWS Glue job for each customer. Attach a security configuration to each job that uses client-side encryption with AWS KMS managed keys (CSE-KMS) to encrypt the data.

D.

Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the data.

A healthcare company needs a storage solution for electronic health records (EHRs). The company must store the EHRs for at least 10 years to comply with regulations. The company rarely accesses the records. The records must be secure, immutable, and retrievable within a few hours when needed.

Which solution will meet these requirements in the MOST cost-effective way?

A.

Store the records in Amazon S3 Standard. Enable server-side encryption with Amazon S3 managed keys (SSE-S3) and S3 Versioning.

B.

Store the records in Amazon S3 Glacier Flexible Retrieval. Configure S3 Object Lock and set a retention period of 10 years.

C.

Store the records in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Configure an S3 Lifecycle policy to remove records after 10 years.

D.

Store the records in Amazon S3 Intelligent-Tiering. Configure automatic archiving to the Archive Access tier.
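The Object Lock retention in option B maps to a bucket-level default retention rule. A minimal sketch of the configuration that boto3's `s3.put_object_lock_configuration` takes; note that the retention mode is an assumption here (the option does not name one), and Object Lock must be enabled when the bucket is created:

```python
# Sketch of the ObjectLockConfiguration for s3.put_object_lock_configuration
# (option B). COMPLIANCE mode is an assumption; the option only specifies a
# 10-year retention period.

def object_lock_config(years: int) -> dict:
    """Default retention rule making objects immutable for the given period."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {
            "DefaultRetention": {
                "Mode": "COMPLIANCE",  # no user, including root, can shorten it
                "Years": years,
            }
        },
    }

config = object_lock_config(10)
```

With this default in place, every new object version is locked for 10 years, satisfying the immutability requirement without per-object settings.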

An application has performance issues due to increased demand. The demand comes from custom, read-only queries against historical records in Amazon RDS. The company wants to improve performance without changing the database structure and with minimal management overhead.

Which approach meets these requirements?

A.

Deploy DynamoDB and move all data.

B.

Deploy Amazon ElastiCache (Redis OSS) and cache application data.

C.

Deploy Memcached on EC2 and cache data.

D.

Deploy DynamoDB Accelerator (DAX) on Amazon RDS.

A company is using AWS Identity and Access Management (IAM) Access Analyzer to refine IAM permissions for employee users. The company uses an organization in AWS Organizations and AWS Control Tower to manage its AWS accounts. The company has designated a specific member account as an audit account.

A solutions architect needs to set up IAM Access Analyzer to aggregate findings from all member accounts in the audit account.

What is the first step the solutions architect should take?

A.

Use AWS CloudTrail to configure one trail for all accounts. Create an Amazon S3 bucket in the audit account. Configure the trail to send logs related to access activity to the new S3 bucket in the audit account.

B.

Configure a delegated administrator account for IAM Access Analyzer in the AWS Control Tower management account. In the delegated administrator account for IAM Access Analyzer, specify the AWS account ID of the audit account.

C.

Create an Amazon S3 bucket in the audit account. Generate a new permissions policy, and add a service role to the policy to give IAM Access Analyzer access to AWS CloudTrail and the S3 bucket in the audit account.

D.

Add a new trust policy that includes permissions to allow IAM Access Analyzer to perform sts:AssumeRole actions. Modify the permissions policy to allow IAM Access Analyzer to generate policies.