
A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.

A DevOps engineer has noticed that there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps engineer believes the failures are due to database changes not having fully propagated before the Lambda function is invoked.

How should the DevOps engineer overcome this?

A.

Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function.

B.

Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond.

C.

Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function.

D.

Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services such as the database are not yet ready.
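For context on the options above: in a Lambda deployment, the BeforeAllowTraffic entry in the AppSpec file names a hook Lambda function that CodeDeploy invokes before shifting any traffic to the new version, and the hook reports its result back with PutLifecycleEventHookExecutionStatus. A minimal sketch of such a hook, assuming a hypothetical readiness check:

import boto3

codedeploy = boto3.client("codedeploy")

def database_changes_propagated():
    # Hypothetical placeholder: poll the database (or a readiness flag)
    # until the pending schema or data changes have fully propagated.
    return True

def handler(event, context):
    # CodeDeploy passes these two identifiers to every lifecycle hook invocation.
    deployment_id = event["DeploymentId"]
    hook_execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded" if database_changes_propagated() else "Failed"

    # Reporting "Succeeded" lets CodeDeploy begin shifting traffic to the new
    # Lambda version; "Failed" stops the deployment before any traffic moves.
    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=hook_execution_id,
        status=status,
    )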

A DevOps engineer manages a Java-based application that runs in an Amazon Elastic Container Service (Amazon ECS) cluster on AWS Fargate. Auto scaling has not been configured for the application. The DevOps engineer has determined that the Java Virtual Machine (JVM) thread count is a good indicator of when to scale the application. The application serves customer traffic on port 8080 and makes JVM metrics available on port 9404. Application use has recently increased. The DevOps engineer needs to configure auto scaling for the application.

Which solution will meet these requirements with the LEAST operational overhead?

A.

Deploy the Amazon CloudWatch agent as a container sidecar. Configure the CloudWatch agent to retrieve JVM metrics from port 9404. Create CloudWatch alarms on the JVM thread count metric to scale the application. Add a step scaling policy in Fargate to scale up and scale down based on the CloudWatch alarms.

B.

Deploy the Amazon CloudWatch agent as a container sidecar. Configure a metric filter for the JVM thread count metric on the CloudWatch log group for the CloudWatch agent. Add a target tracking policy in Fargate. Select the metric from the metric filter as a scale target.

C.

Create an Amazon Managed Service for Prometheus workspace. Deploy AWS Distro for OpenTelemetry as a container sidecar to publish the JVM metrics from port 9404 to the Prometheus workspace. Configure rules for the workspace to use the JVM thread count metric to scale the application. Add a step scaling policy in Fargate. Select the Prometheus rules to scale up and scale down.

D.

Create an Amazon Managed Service for Prometheus workspace. Deploy AWS Distro for OpenTelemetry as a container sidecar to retrieve the JVM metrics from port 9404 and publish them to the Prometheus workspace. Add a target tracking policy in Fargate. Select the Prometheus metric as a scale target.
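As a rough sketch of the target-tracking approach these options describe, the ECS service can be registered with Application Auto Scaling and scaled against a custom metric. The cluster, service, namespace, metric names, and target value below are hypothetical and depend on how the JVM thread count is actually published:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the Fargate service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-java-service",   # hypothetical names
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Target tracking against a custom metric assumed to hold the JVM thread count.
autoscaling.put_scaling_policy(
    PolicyName="jvm-thread-count-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-java-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 200.0,  # hypothetical thread count per task
        "CustomizedMetricSpecification": {
            "Namespace": "MyApp/JVM",            # hypothetical namespace
            "MetricName": "jvm_threads_current",
            "Dimensions": [{"Name": "ServiceName", "Value": "my-java-service"}],
            "Statistic": "Average",
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)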

A company uses Amazon API Gateway and AWS Lambda functions to implement an API. The company uses a pipeline in AWS CodePipeline to build and deploy the API. The pipeline contains a source stage, build stage, and deployment stage.

The company deploys the API without performing smoke tests. Soon after the deployment, the company observes multiple issues with the API. A security audit finds security vulnerabilities in the production code.

The company wants to prevent these issues from happening in the future.

Which combination of steps will meet this requirement? (Select TWO.)

A.

Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage for automatic rollback.

B.

Create a smoke test script that returns an error code if the API code fails the test. Add an action in the deployment stage to run the smoke test script after deployment. Configure the deployment stage to fail if the smoke test script returns an error code.

C.

Add an action in the build stage that uses Amazon Inspector to scan the Lambda function code after the code is built. Configure the build stage to fail if the scan returns any security findings.

D.

Add an action in the build stage to run an Amazon CodeGuru code scan after the code is built. Configure the build stage to fail if the scan returns any security findings.

E.

Add an action in the deployment stage to run an Amazon CodeGuru code scan after deployment. Configure the deployment stage to fail if the scan returns any security findings.
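To illustrate the smoke test idea in options A and B, the test only needs to exercise the deployed API and exit with a nonzero code on failure so that the pipeline action can fail or trigger a rollback. A minimal sketch, with a hypothetical endpoint URL:

import sys
import urllib.request

# Hypothetical endpoint for the deployed API stage.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/health"

def main():
    try:
        with urllib.request.urlopen(API_URL, timeout=10) as response:
            if response.status != 200:
                print(f"Smoke test failed: HTTP {response.status}")
                sys.exit(1)
    except Exception as exc:
        print(f"Smoke test failed: {exc}")
        sys.exit(1)
    print("Smoke test passed")

if __name__ == "__main__":
    main()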

A company uses Amazon S3 to store proprietary information. The development team creates buckets for new projects on a daily basis. The security team wants to ensure that all existing and future buckets have encryption, logging, and versioning enabled. Additionally, no buckets should ever be publicly readable or writable.

What should a DevOps engineer do to meet these requirements?

A.

Enable AWS CloudTrail and configure automatic remediation using AWS Lambda.

B.

Enable AWS Config rules and configure automatic remediation using AWS Systems Manager documents.

C.

Enable AWS Trusted Advisor and configure automatic remediation using Amazon EventBridge.

D.

Enable AWS Systems Manager and configure automatic remediation using Systems Manager documents.
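To illustrate the AWS Config approach in option B, each requirement maps to an AWS managed Config rule, and automatic remediation is attached by pointing the rule at a Systems Manager Automation document. The sketch below shows a single rule (versioning) with automatic remediation; the role ARN and the Automation document name are assumptions to verify in your environment, and similar rules exist for encryption, logging, and public read/write access:

import boto3

config = boto3.client("config")

# Managed rule that flags S3 buckets without versioning enabled.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-versioning-enabled",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)

# Automatic remediation through a Systems Manager Automation document.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "s3-bucket-versioning-enabled",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-ConfigureS3BucketVersioning",  # assumed document name
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
                # RESOURCE_ID passes the noncompliant bucket name to the document.
                "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
            },
        }
    ]
)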

A company provides an application to customers. The application has an Amazon API Gateway REST API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table. The data load process results in long cold-start times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured.

Customers report that the application intermittently takes a long time to respond to requests. The application receives thousands of requests throughout the day. In the middle of the day, the application experiences 10 times more requests than at any other time of the day. Near the end of the day, the application's request volume decreases to 10% of its normal total.

A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.

Which solution will meet these requirements?

A.

Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.

B.

Configure reserved concurrency on the Lambda function with a concurrency value of 0.

C.

Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.

D.

Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.
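For reference on how provisioned concurrency scales with Application Auto Scaling (the pattern option C describes), the function alias is registered as a scalable target and a target tracking policy is attached to the built-in provisioned concurrency utilization metric. The function and alias names below are hypothetical:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Scale provisioned concurrency on the function's "live" alias between 1 and 100.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:order-api:live",                # hypothetical function:alias
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Track utilization so capacity follows the daily traffic pattern and
# scales back down during the low-volume period at the end of the day.
autoscaling.put_scaling_policy(
    PolicyName="order-api-provisioned-concurrency",
    ServiceNamespace="lambda",
    ResourceId="function:order-api:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)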

A company is building a web and mobile application that uses a serverless architecture powered by AWS Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository.

The deployment must have the following:

• Separate environment pipelines for testing and production

• Automatic deployment that occurs for test environments only

Which steps should be taken to meet these requirements?

A.

Configure a new AWS CodePipeline pipeline. Create a CodeCommit repository for each environment. Set up CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.

B.

Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create a CodeCommit repository for each environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.

C.

Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch in the repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.

D.

Create an AWS CodeBuild configuration for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment step to deploy the Lambda functions from the S3 bucket.
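As a sketch of the production pipeline shape described in option C (single repository, per-environment branches, and a manual approval before the CloudFormation deployment), the structure below uses placeholder names and ARNs; the test pipeline would be identical except that it tracks the test branch and omits the approval stage:

import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "backend-production",                     # hypothetical name
        "roleArn": "arn:aws:iam::111122223333:role/PipelineRole",
        "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts"},
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "CodeCommitSource",
                    "actionTypeId": {"category": "Source", "owner": "AWS",
                                     "provider": "CodeCommit", "version": "1"},
                    "configuration": {"RepositoryName": "backend",
                                      "BranchName": "production"},
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                # Manual approval gates production only.
                "name": "Approval",
                "actions": [{
                    "name": "ManualApproval",
                    "actionTypeId": {"category": "Approval", "owner": "AWS",
                                     "provider": "Manual", "version": "1"},
                }],
            },
            {
                "name": "Deploy",
                "actions": [{
                    "name": "DeployLambda",
                    "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                     "provider": "CloudFormation", "version": "1"},
                    "configuration": {
                        "ActionMode": "CREATE_UPDATE",
                        "StackName": "backend-production",
                        "TemplatePath": "SourceOutput::template.yaml",
                        "Capabilities": "CAPABILITY_IAM",
                        "RoleArn": "arn:aws:iam::111122223333:role/CloudFormationRole",
                    },
                    "inputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
        ],
    }
)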

A company is developing a microservices-based application on AWS. The application consists of AWS Lambda functions and Amazon Elastic Container Service (Amazon ECS) services that need to be deployed frequently.

A DevOps engineer needs to implement a consistent deployment solution across all components of the application. The solution must automate the deployments, minimize downtime during updates, and manage configuration data for the application.

Which solution will meet these requirements with the LEAST deployment effort?

A.

Use AWS CloudFormation to define and provision the Lambda functions and ECS services. Implement stack updates with resource replacement for all components. Use AWS Secrets Manager to manage the configuration data.

B.

Use AWS CodeDeploy to manage deployments for the Lambda functions and ECS services. Implement canary deployments for the Lambda functions. Implement blue/green deployments for the ECS services. Use AWS Systems Manager Parameter Store to manage the configuration data.

C.

Use AWS Step Functions to orchestrate deployments for the Lambda functions and ECS services. Use canary deployments for the Lambda functions and ECS services in a different AWS Region. Use AWS Systems Manager Parameter Store to manage the configuration data.

D.

Use AWS Systems Manager to manage deployments for the Lambda functions and ECS services. Implement all-at-once deployments for the Lambda functions. Implement rolling updates for the ECS services. Use AWS Secrets Manager to manage the configuration data.
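To make option B more concrete, CodeDeploy ships predefined canary configurations for Lambda (for example, CodeDeployDefault.LambdaCanary10Percent5Minutes), and configuration data can be read from Parameter Store at runtime instead of being baked into the deployment. A minimal sketch with hypothetical application, function, and parameter names:

import json
import boto3

codedeploy = boto3.client("codedeploy")
ssm = boto3.client("ssm")

# Lambda AppSpec shifting the "live" alias from the current version to the next.
appspec = {
    "version": 0.0,
    "Resources": [{
        "OrderFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "Name": "order-function",   # hypothetical function name
                "Alias": "live",
                "CurrentVersion": "11",
                "TargetVersion": "12",
            },
        }
    }],
}

# Canary deployment: 10% of traffic for 5 minutes, then the remainder.
codedeploy.create_deployment(
    applicationName="order-service",
    deploymentGroupName="order-service-dg",
    deploymentConfigName="CodeDeployDefault.LambdaCanary10Percent5Minutes",
    revision={
        "revisionType": "AppSpecContent",
        "appSpecContent": {"content": json.dumps(appspec)},
    },
)

# Configuration data is retrieved from Parameter Store at runtime.
db_endpoint = ssm.get_parameter(
    Name="/order-service/prod/db-endpoint", WithDecryption=True
)["Parameter"]["Value"]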

A company manages an application that stores logs in Amazon CloudWatch Logs. The company wants to archive the logs to an Amazon S3 bucket. Logs are rarely accessed after 90 days and must be retained for 10 years.

Which combination of steps should a DevOps engineer take to meet these requirements? (Select TWO.)

A.

Configure a CloudWatch Logs subscription filter to use AWS Glue to transfer all logs to an S3 bucket.

B.

Configure a CloudWatch Logs subscription filter to use Amazon Data Firehose to stream all logs to an S3 bucket.

C.

Configure a CloudWatch Logs subscription filter to stream all logs to an S3 bucket.

D.

Configure the S3 bucket lifecycle policy to transition logs to S3 Glacier Instant Retrieval after 90 days and to expire logs after 3,650 days.

E.

Configure the S3 bucket lifecycle policy to transition logs to Reduced Redundancy after 90 days and to expire logs after 3,650 days.
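For the combination these options describe, a subscription filter streams every log event to an Amazon Data Firehose delivery stream that writes to the archive bucket, and a lifecycle rule on the bucket handles the 90-day transition and 10-year retention. A sketch with placeholder log group, ARNs, and bucket names:

import boto3

logs = boto3.client("logs")
s3 = boto3.client("s3")

# Stream all events from the log group to a Firehose delivery stream that
# delivers to the archive bucket (stream and role ARNs are placeholders).
logs.put_subscription_filter(
    logGroupName="/app/orders",
    filterName="archive-to-s3",
    filterPattern="",  # an empty pattern matches every log event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/logs-to-s3",
    roleArn="arn:aws:iam::111122223333:role/CWLtoFirehoseRole",
)

# Transition to S3 Glacier Instant Retrieval after 90 days; expire after 10 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="log-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER_IR"}],
            "Expiration": {"Days": 3650},
        }]
    },
)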

An ecommerce company hosts a web application on Amazon EC2 instances that are in an Auto Scaling group. The company deploys the application across multiple Availability Zones.

Application users are reporting intermittent performance issues with the application.

The company enables basic Amazon CloudWatch monitoring for the EC2 instances. The company identifies and implements a fix for the performance issues. After resolving the issues, the company wants to implement a monitoring solution that will quickly alert the company about future performance issues.

Which solution will meet this requirement?

A.

Enable detailed monitoring for the EC2 instances. Create custom CloudWatch metrics for application-specific performance indicators. Set up CloudWatch alarms based on the custom metrics. Use CloudWatch Logs Insights to analyze application logs for error patterns.

B.

Use AWS X-Ray to implement distributed tracing. Integrate X-Ray with Amazon CloudWatch RUM. Use Amazon EventBridge to trigger automatic scaling actions based on custom events.

C.

Use Amazon CloudFront to deliver the application. Use AWS CloudTrail to monitor API calls. Use AWS Trusted Advisor to generate recommendations to optimize performance. Use Amazon GuardDuty to detect potential performance issues.

D.

Enable VPC Flow Logs. Use Amazon Data Firehose to stream flow logs to Amazon S3. Use Amazon Athena to analyze the logs and to send alerts to the company.
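To illustrate option A, detailed monitoring is enabled per instance, the application publishes its own performance indicators as custom metrics, and alarms watch those metrics. The namespace, metric names, dimensions, and threshold below are hypothetical:

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

# Switch instances from 5-minute basic monitoring to 1-minute detailed monitoring.
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder instance ID

# The application (or an agent) publishes an application-specific indicator.
cloudwatch.put_metric_data(
    Namespace="WebApp/Performance",
    MetricData=[{
        "MetricName": "RequestLatencyMs",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        "Value": 142.0,
        "Unit": "Milliseconds",
    }],
)

# Alarm on the custom metric so the team is alerted quickly to regressions.
cloudwatch.put_metric_alarm(
    AlarmName="web-high-latency",
    Namespace="WebApp/Performance",
    MetricName="RequestLatencyMs",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)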

A company uses AWS Organizations to manage multiple AWS accounts. The accounts are in an OU that has a policy attached to allow all actions. The company is migrating several Git repositories to a specified AWS CodeConnections-supported Git provider. The Git repositories manage AWS CloudFormation stacks for application infrastructure that the company deploys across multiple AWS Regions.

The company wants a DevOps team to integrate CodeConnections into the CloudFormation stacks. The DevOps team must ensure that company staff members can integrate only with the specified Git provider. The deployment process must be highly available across Regions.

Which combination of steps will meet these requirements? (Select THREE.)

A.

Add a new SCP statement to the OU that denies the CodeConnections CreateConnection action where the provider type is not the specified Git provider.

B.

Add a new SCP statement to the OU that allows the CodeConnections CreateConnection action where the provider type is the specified Git provider.

C.

Use CodeConnections to configure a single CodeConnections connection to each Git repository.

D.

Use CodeConnections to create a CodeConnections connection from each Region where the company operates to each Git repository.

E.

Use CodeConnections to create a CodeConnections repository link. Update each CloudFormation stack to sync from the Git repository.

F.

For each Git repository, create a pipeline in AWS CodePipeline that has the Git repository set as the source and a CloudFormation deployment stage.
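For the SCP part of this question, the deny statement can key off the connection's provider type so that only the approved Git provider can be used. The condition key and the "GitHub" provider value below are assumptions to confirm against the CodeConnections documentation, and the OU ID is a placeholder:

import json
import boto3

organizations = boto3.client("organizations")

# Deny creating connections to any provider other than the approved one.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOtherGitProviders",
        "Effect": "Deny",
        "Action": "codeconnections:CreateConnection",
        "Resource": "*",
        "Condition": {
            # Assumed condition key; "GitHub" stands in for the approved provider.
            "StringNotEquals": {"codeconnections:ProviderType": "GitHub"}
        },
    }],
}

response = organizations.create_policy(
    Name="restrict-codeconnections-provider",
    Description="Allow connections only to the approved Git provider",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to the OU that contains the member accounts.
organizations.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",  # placeholder OU ID
)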