AWS Certified DevOps Engineer Professional Practice DOP-C02



AWS DevOps Engineer Professional Certification Practice Exams DOP-C02 2023

A company must encrypt all AMIs that the company shares across accounts. A DevOps engineer has access to a source account where an unencrypted custom AMI has been built. The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI. The DevOps engineer must share the AMI with the target account.

The company has created an AWS Key Management Service (AWS KMS) key in the source account.

Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)

 

Share encrypted AMIs across accounts to launch encrypted EC2 instances

https://aws.amazon.com/blogs/security/how-to-share-encrypted-amis-across-accounts-to-launch-encrypted-ec2-instances/
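The key policy in the source account must let the target account use the KMS key. A minimal sketch of the extra key policy statement (account IDs are hypothetical):

```python
import json

SOURCE_ACCOUNT = "111111111111"   # hypothetical source account ID
TARGET_ACCOUNT = "222222222222"   # hypothetical target account ID

# Key policy statement that lets the target account use the key to decrypt
# the shared AMI's snapshot and create grants for EC2 Auto Scaling.
key_policy_statement = {
    "Sid": "AllowTargetAccountUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{TARGET_ACCOUNT}:root"},
    "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
        "kms:CreateGrant",
    ],
    "Resource": "*",
}

print(json.dumps(key_policy_statement, indent=2))
```

Per the blog above, the engineer would also copy the unencrypted AMI to an encrypted AMI with this key, share the AMI and its snapshot with the target account, and, in the target account, allow the Auto Scaling service-linked role to use the key.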

 

6

A company uses AWS CodePipeline pipelines to automate releases of its application. A typical pipeline consists of three stages: build, test, and deployment. The company has been using a separate AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.

Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)

 

 

7

CodeDeploy application:

CodeDeploy deployment group:
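For the RPM deployment, the revision bundle needs an appspec.yml that CodeDeploy's lifecycle hooks consume. A sketch of its content as a Python dict (file and script names are hypothetical):

```python
# appspec.yml content, sketched as a Python dict (script names hypothetical).
appspec = {
    "version": 0.0,
    "os": "linux",
    "files": [
        # Copy the packaged RPM from the revision bundle onto the instance.
        {"source": "myapp.rpm", "destination": "/tmp/install"}
    ],
    "hooks": {
        # Lifecycle hook whose script would run `yum install` (or `rpm -i`).
        "AfterInstall": [
            {"location": "scripts/install_rpm.sh", "timeout": 300, "runas": "root"}
        ],
        "ApplicationStart": [
            {"location": "scripts/start_service.sh", "timeout": 60, "runas": "root"}
        ],
    },
}

print(appspec["hooks"]["AfterInstall"][0]["location"])
```

The deployment group would then target the EC2 Auto Scaling group so new instances launched from the common AMI also receive the deployment.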

A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs. The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations.

The company has configured AWS Config for the organization. During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.

Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)

 

To use AWS Firewall Manager

 

Your account must be a member of AWS Organizations.
– Your AWS account must be a member of an organization in the AWS Organizations service, and the organization must have all features enabled.

 

Your account must be the AWS Firewall Manager administrator.
– To configure Firewall Manager policies, your account must be set as the AWS Firewall Manager administrator account, in the Settings pane.

 

You must have AWS Config enabled for your accounts and Regions.
– You must enable AWS Config for each of your AWS Organizations member accounts and for each AWS Region that contains resources that you want to protect using AWS Firewall Manager.

 

To manage AWS Network Firewall or Route 53 resolver DNS Firewall, the AWS Organizations management account must enable AWS Resource Access Manager (AWS RAM).
– The AWS Organizations management account must enable AWS RAM for all member accounts in your organization.
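The prevention step is a Firewall Manager WAF policy scoped to ALBs with auto-remediation enabled. A sketch of such a policy document (name and rule groups are illustrative; the full schema is in the FMS PutPolicy API):

```python
import json

# ManagedServiceData is a JSON string inside the policy; the fields here
# are illustrative, not the complete WAFV2 schema.
managed_service_data = {
    "type": "WAFV2",
    "preProcessRuleGroups": [],
    "postProcessRuleGroups": [],
    "defaultAction": {"type": "ALLOW"},
}

fms_policy = {
    "PolicyName": "require-waf-on-albs",   # hypothetical name
    "SecurityServicePolicyData": {
        "Type": "WAFV2",
        "ManagedServiceData": json.dumps(managed_service_data),
    },
    "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "RemediationEnabled": True,   # associate the web ACL automatically
    "ExcludeResourceTags": False,
}

print(fms_policy["ResourceType"])
```

A delegated Firewall Manager administrator account would pass this to the FMS PutPolicy API, applying the web ACL requirement across all accounts in the organization.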

8

An ecommerce company has chosen AWS to host its new platform. The company’s DevOps team has started building an AWS Control Tower landing zone. The DevOps team has set the identity store within AWS IAM Identity Center (AWS Single Sign-On) to external identity provider (IdP) and has configured SAML 2.0.

The DevOps team wants a robust permission model that applies the principle of least privilege. The model must allow the team to build and manage only the team’s own resources.

Which combination of steps will meet these requirements? (Choose three.)

 

Connect to an external identity provider

 

If you’re using a self-managed directory in Active Directory or an AWS Managed Microsoft AD, see Connect to a Microsoft AD directory.
For other external identity providers (IdPs), you can use AWS IAM Identity Center (successor to AWS Single Sign-On) to authenticate identities from the IdPs through the Security Assertion Markup Language (SAML) 2.0 standard.
This enables your users to sign in to the AWS access portal with their corporate credentials. They can then navigate to their assigned accounts, roles, and applications hosted in external IdPs.

 

Provisioning when users come from an external IdP:
When using an external IdP, you must provision all users and groups into IAM Identity Center before you can make any assignments to AWS accounts or applications.
In this case, you can configure either automatic provisioning or manual provisioning of your users and groups.

 

SCIM profile and SAML 2.0 implementation:
– SAML 2.0 implementation
IAM Identity Center supports identity federation with SAML (Security Assertion Markup Language) 2.0. This allows IAM Identity Center to authenticate identities from external identity providers (IdPs).
SAML 2.0 passes information about a user between a SAML authority (called an identity provider or IdP), and a SAML consumer (called a service provider or SP). The IAM Identity Center service uses this information to provide federated single sign-on.
Single sign-on allows users to access AWS accounts and configured applications based on their existing identity provider credential.

– SCIM profile
IAM Identity Center provides support for the System for Cross-domain Identity Management (SCIM) v2.0 standard. SCIM keeps your IAM Identity Center identities in sync with identities from your IdP.
This includes any provisioning, updates, and deprovisioning of users between your IdP and IAM Identity Center.

 

Automatic provisioning:

IAM Identity Center supports automatic provisioning (synchronization) of user and group information from your identity provider (IdP) into IAM Identity Center using the System for Cross-domain Identity Management (SCIM) v2.0 protocol.
When you configure SCIM synchronization, you create a mapping of your identity provider (IdP) user attributes to the named attributes in IAM Identity Center.
This causes the expected attributes to match between IAM Identity Center and your IdP.
You configure this connection in your IdP using your SCIM endpoint for IAM Identity Center and a bearer token that you create in IAM Identity Center.

 

Permission sets:

IAM Identity Center assigns access to a user or group in one or more AWS accounts with permission sets. When you assign a permission set, IAM Identity Center creates corresponding IAM Identity Center-controlled IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users you’ve defined to assume the role, by using the IAM Identity Center User Portal or AWS CLI. As you modify the permission set, IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly.
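For the least-privilege requirement, a permission set could carry an inline policy that restricts each team to its own tagged resources. A hypothetical sketch (the tag key, team name, and service scope are assumptions):

```python
import json

TEAM = "devops"   # hypothetical team tag value

# Inline policy for the team's permission set: allow actions only on
# resources tagged with the team's own name.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageOwnTaggedResources",
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*"],   # illustrative service scope
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/team": TEAM}
            },
        }
    ],
}

print(json.dumps(inline_policy, indent=2))
```

Assigning this permission set to the team's IdP-provisioned group in the team's accounts keeps the model centrally managed while scoping access to the team's own resources.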

 

Working With AWS IAM Identity Center and AWS Control Tower:
https://docs.aws.amazon.com/controltower/latest/userguide/sso.html

 

Permission sets

https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html

 

11

A company uses AWS Organizations and AWS Control Tower to manage all the company’s AWS accounts. The company uses the Enterprise Support plan.

A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts. When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan. The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan.

Which solution will meet these requirements?

 

AFT (Account Factory for Terraform) offers feature options based on best practices. You can opt-in to these features, by means of feature flags, during AFT deployment.

 

AWS CloudTrail data events:
When enabled, the AWS CloudTrail data events option configures these capabilities.
– Creates an Organization Trail in the AWS Control Tower management account, for CloudTrail
– Turns on logging for Amazon S3 and Lambda data events
– Encrypts and exports all the CloudTrail data events to an aws-aft-logs-* S3 bucket in the AWS Control Tower Log Archive account, with AWS KMS encryption
– Turns on the Log file validation setting
To enable this option, set the ‘aft_feature_cloudtrail_data_events’ flag to True in your AFT deployment input configuration.

 

AWS Enterprise Support plan:
When this option is enabled, the AFT pipeline turns on the AWS Enterprise Support plan for accounts provisioned by AFT.
AWS accounts by default come with the AWS Basic Support plan enabled. AFT provides automated enrollment into the enterprise support level, for accounts that AFT provisions. The provisioning process opens a support ticket for the account, requesting it to be added to the AWS Enterprise Support plan.
To enable the Enterprise Support option, set the ‘aft_feature_enterprise_support’ flag to True in your AFT deployment input configuration.

 

Delete the AWS default VPC:
When you enable this option, AFT deletes all AWS default VPCs in the management account, in all AWS Regions, even if you haven’t deployed AWS Control Tower resources in those AWS Regions.
AFT doesn’t delete AWS default VPCs automatically for any AWS Control Tower accounts that AFT provisions or for existing AWS accounts that you enroll in AWS Control Tower through AFT.
New AWS accounts are created with a VPC set up in each AWS Region, by default. Your enterprise may have standard practices for creating VPCs, which require you to delete the AWS default VPC and avoid enabling it, especially for the AFT management account.
To enable this option, set ‘aft_feature_delete_default_vpcs_enabled’ flag to True in your AFT deployment input configuration.

 

https://docs.aws.amazon.com/controltower/latest/userguide/aft-feature-options.html
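These feature flags are inputs to the AFT Terraform module. For the Enterprise Support requirement, the deployment input would look roughly like this (other required module parameters omitted):

```hcl
# AFT deployment input, enabling Enterprise Support enrollment for
# newly provisioned accounts.
module "aft" {
  source = "github.com/aws-ia/terraform-aws-control_tower_account_factory"

  # ...required account and VCS parameters...

  aft_feature_enterprise_support = true
}
```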

 

19

An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application. The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running.
All resources should be removed when the CloudFormation stack is deleted. However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted.

How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?

The request type is sent in the RequestType field in the vendor request object sent by AWS CloudFormation when the template developer creates, updates, or deletes a stack that contains a custom resource.

Each request type has a particular set of fields that are sent with the request, including a pre-signed Amazon S3 URL for the response by the custom resource provider. The provider must send either a SUCCESS or FAILED result to that S3 URL within one hour. After one hour, the request times out. Each result also has a particular set of fields expected by AWS CloudFormation.

Custom resource provider requests with RequestType set to Delete are sent when the template developer deletes a stack that contains a custom resource. To successfully delete a stack with a custom resource, the custom resource provider must respond successfully to a delete request.

 

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-ref-requesttypes.html
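A common fix is a Lambda-backed custom resource that empties the S3 bucket when it receives a Delete request, then reports SUCCESS to the pre-signed ResponseURL so the bucket (and stack) can be deleted. A minimal sketch (the BucketName property and resource ID are hypothetical):

```python
import json
import urllib.request

def build_response(event, status, reason="See CloudWatch Logs"):
    """Body that CloudFormation expects at the pre-signed ResponseURL."""
    return {
        "Status": status,   # "SUCCESS" or "FAILED"
        "Reason": reason,
        "PhysicalResourceId": event.get("PhysicalResourceId", "s3-bucket-cleaner"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    }

def handler(event, context):
    """Custom-resource handler: on Delete, empty the bucket so the stack
    can remove it; always report a result back to CloudFormation."""
    try:
        if event["RequestType"] == "Delete":
            import boto3  # available in the Lambda runtime
            bucket = event["ResourceProperties"]["BucketName"]
            boto3.resource("s3").Bucket(bucket).objects.all().delete()
        body = build_response(event, "SUCCESS")
    except Exception as exc:
        body = build_response(event, "FAILED", reason=str(exc))
    data = json.dumps(body).encode()
    req = urllib.request.Request(event["ResponseURL"], data=data, method="PUT")
    urllib.request.urlopen(req)
```

If the provider fails to respond to a Delete request, stack deletion hangs and then fails, which matches the error the team observed.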

22

A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region. The pipeline deploys an AWS Lambda application to the same Region. The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.

The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template. The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action.

The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1. A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.

Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)

 

S3 Cross-Region Replication (CRR) is used to copy objects across Amazon S3 buckets in different AWS Regions asynchronously.

-Meet compliance requirements
-Minimize latency
-Increase operational efficiency
However, replication can take up to 15 minutes, which is not ideal for a deployment action.

 

Use AWS CodePipeline to Perform Multi-Region Deployments:

 

1. Set up an S3 bucket in each Region.
2. Set up a CodeDeploy action in each Region.
3. Deploy the application to multiple Regions by using each AWS CodeDeploy action in the pipeline.

https://aws.amazon.com/blogs/devops/using-aws-codepipeline-to-perform-multi-region-deployments/
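To run the CloudFormation deploy action in us-east-1 from the eu-west-1 pipeline, the pipeline needs an artifact store in each Region and a region field on the cross-Region action. A sketch of the relevant pipeline-definition fragments (bucket and artifact names are hypothetical):

```python
# Pipeline-level artifact stores, one per Region where actions run
# (bucket names hypothetical). CodePipeline copies input artifacts
# between the stores for cross-Region actions.
artifact_stores = {
    "eu-west-1": {"type": "S3", "location": "codepipeline-artifacts-eu-west-1"},
    "us-east-1": {"type": "S3", "location": "codepipeline-artifacts-us-east-1"},
}

deploy_us_action = {
    "name": "Deploy-us-east-1",
    "region": "us-east-1",   # makes this a cross-Region action
    "actionTypeId": {
        "category": "Deploy",
        "owner": "AWS",
        "provider": "CloudFormation",
        "version": "1",
    },
    # The additional output artifact the CodeBuild project now produces
    # for us-east-1 (name hypothetical).
    "inputArtifacts": [{"name": "BuildOutput_us_east_1"}],
    "configuration": {"ActionMode": "CREATE_UPDATE", "StackName": "lambda-app"},
}
```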

 

23

A company hosts a security auditing application in an AWS account. The auditing application uses an IAM role to access other AWS accounts. All the accounts are in the same organization in AWS Organizations.

A recent security audit revealed that users in the audited AWS accounts could modify or delete the auditing application’s IAM role. The company needs to prevent any modification to the auditing application’s IAM role by any entity other than a trusted administrator IAM role.

Which solution will meet these requirements?

 

SCPs offer central access controls for all IAM entities in your accounts.

 

You can use them to enforce the permissions you want everyone in your business to follow. Using SCPs, you can give your employee more freedom to manage their own permissions because you know they can only operate within the boundaries you define.

If your central security team uses an administrative IAM role to audit and make changes to AWS settings, use an SCP.
With the SCP, you can restrict all IAM entities in the account from modifying AdminRole or its associated permissions.

 

https://aws.amazon.com/ko/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/
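An SCP for this scenario could deny IAM write actions on the auditing role for every principal except the trusted administrator role. A sketch (role ARNs are hypothetical):

```python
import json

AUDIT_ROLE_ARN = "arn:aws:iam::*:role/SecurityAuditRole"   # hypothetical
ADMIN_ROLE_ARN = "arn:aws:iam::*:role/TrustedAdminRole"    # hypothetical

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAuditRole",
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:PutRolePolicy",
                "iam:UpdateRole",
                "iam:UpdateAssumeRolePolicy",
            ],
            "Resource": AUDIT_ROLE_ARN,
            "Condition": {
                # Every principal except the trusted admin role is denied.
                "ArnNotLike": {"aws:PrincipalARN": ADMIN_ROLE_ARN}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached at the organization root or the relevant OUs, this guardrail applies even to account administrators, which a permissions boundary alone would not.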

 

A permissions boundary is an advanced feature that allows you to limit the maximum permissions that a principal can have.

As the IAM administrator, you can define one or more permissions boundaries using managed policies and allow your employee to create a principal with this boundary. The employee can then attach a permissions policy to this principal. However, the effective permissions of the principal are the intersection of the permissions boundary and permissions policy. As a result, the new principal cannot exceed the boundary that you defined.

 

https://aws.amazon.com/ko/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/

 

26

A company has multiple accounts in an organization in AWS Organizations. The company’s SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket.

A DevOps engineer must implement this change without affecting the operation of any AWS accounts. The implementation must ensure that individual member accounts in the organization cannot turn off the notification.
Which solution will meet these requirements?

 

 

A: Incorrect
Amazon GuardDuty focuses on threat detection and monitoring for malicious activity (CloudTrail logs, VPC flow logs, and DNS logs) within your AWS environment.
Its primary purpose is threat detection and response, not configuration monitoring.

 

B: Incorrect
CloudFormation StackSets could deploy monitoring of S3 policy changes if the stack set is created in a delegated administrator account.
However, individual member accounts can stop logging or delete the trails from the CloudTrail console.
It would also take additional work to implement automatic drift remediation for AWS CloudFormation if a user deletes the Amazon EventBridge rule – https://aws.amazon.com/ko/blogs/mt/implement-automatic-drift-remediation-for-aws-cloudformation-using-amazon-cloudwatch-and-aws-lambda/ .

 

C: Correct
A conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.

 

D: Incorrect
Amazon Inspector is a vulnerability management service that automatically discovers and scans running Amazon EC2 instances, container images in Amazon Elastic Container Registry, and AWS Lambda functions for known software vulnerabilities and unintended network exposure.
It is not a configuration monitoring service.
Also, unlike AWS Organizations, delegated administration for Amazon Inspector is Regional.

 

30

A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic.
A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.

Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

 

You don’t need to install the Amazon CloudWatch Logs agent on ECS container instances unless you use external instances.

You can configure the containers in your tasks to send log information to CloudWatch Logs. If you’re using the Fargate launch type for your tasks, you can view the logs from your containers. If you’re using the EC2 launch type, you can view different logs from your containers in one convenient location, and it prevents your container logs from taking up disk space on your container instances.

 

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
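The container definition's log configuration for the awslogs driver might look like this (log group, Region, and stream prefix are hypothetical):

```python
# Container definition fragment sending stdout/stderr to CloudWatch Logs
# via the awslogs driver (values hypothetical).
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "app",
    },
}
```

For the near-real-time S3 requirement, the CloudWatch Logs group could then be subscribed to an Amazon Kinesis Data Firehose delivery stream that writes to the S3 bucket; ALB access logs can be delivered to S3 directly.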

32

A DevOps engineer needs to back up sensitive Amazon S3 objects that are stored within an S3 bucket with a private bucket policy using S3 cross-Region replication functionality. The objects need to be copied to a target bucket in a different AWS Region and account.

Which combination of actions should be performed to enable this replication? (Choose three.)

 

Managing replication rules using the Amazon S3 console

Replication is the automatic, asynchronous copying of objects across buckets in the same or different AWS Regions. It replicates newly created objects and object updates from a source bucket to a specified destination bucket.

 

Creating an IAM role:

By default, all Amazon S3 resources—buckets, objects, and related subresources—are private, and only the resource owner can access the resource. Amazon S3 needs permissions to read and replicate objects from the source bucket. You grant these permissions by creating an IAM role and specifying the role in your replication configuration.

 

Create replication rule in a source bucket:

You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define the source bucket objects to replicate and the destination bucket or buckets where the replicated objects are stored.

 

Granting permissions when the source and destination buckets are owned by different AWS accounts:

When the source and destination buckets aren’t owned by the same accounts, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions.

 

Changing replica ownership:

When different AWS accounts own the source and destination buckets, you can tell Amazon S3 to change the ownership of the replica to the AWS account that owns the destination bucket.

 

Enable receiving replicated objects from a source bucket in a target bucket:

You can quickly generate the policies needed to enable receiving replicated objects from a source bucket through the AWS Management Console.

 

https://docs.aws.amazon.com/AmazonS3/latest/userguide/disable-replication.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-repl-config-perm-overview.html
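Putting the pieces above together, a cross-account replication configuration on the source bucket might look like this (ARNs and the account ID are hypothetical):

```python
# Cross-account, cross-Region replication configuration (sketch).
# AccessControlTranslation changes replica ownership to the destination
# account, as described above.
replication_configuration = {
    "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
    "Rules": [
        {
            "ID": "backup-sensitive-objects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},   # empty filter: replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::target-backup-bucket",
                "Account": "222222222222",
                "AccessControlTranslation": {"Owner": "Destination"},
            },
        }
    ],
}
```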

 

36

A company has multiple member accounts that are part of an organization in AWS Organizations. The security team needs to review every Amazon EC2 security group and their inbound and outbound rules.

The security team wants to programmatically retrieve this information from the member accounts using an AWS Lambda function in the management account of the organization.

Which combination of access changes will meet these requirements? (Choose three.)

 

By default, a management account can’t directly access resources created by other accounts within the organization. Each account in AWS Organizations is a separate entity, and resources created within those accounts are isolated for security reasons.
To access resources created by other accounts in your organization from the management account, you can do the following:

  1. In each member account, create an IAM role that has access to the AmazonEC2ReadOnlyAccess managed policy.
  2. In each member account, create a trust relationship that allows users in the management account to assume the IAM roles of the member accounts.
  3. In the management account, create an IAM role that allows the sts:AssumeRole action against the member account IAM roles’ ARNs.

 

https://repost.aws/questions/QUyAGaxLlpRES1pplrInOo9g/accessing-other-user-resources-with-management-account
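The three steps above can be sketched as policy documents (account ID and role name are hypothetical):

```python
import json

MANAGEMENT_ACCOUNT = "111111111111"   # hypothetical management account ID

# Step 2: trust policy on each member-account role, allowing the
# management account to assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{MANAGEMENT_ACCOUNT}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Step 3: policy on the management-account Lambda role, allowing it to
# assume the member-account roles (role name hypothetical).
assume_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/SecurityGroupAuditRole",
        }
    ],
}

print(json.dumps(trust_policy))
```

The Lambda function would then call sts:AssumeRole into each member account and use the temporary credentials to call ec2:DescribeSecurityGroups, which the AmazonEC2ReadOnlyAccess policy from step 1 permits.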

 

37

A company requires that its internally facing web application be highly available. The architecture is made up of one Amazon EC2 web server instance and one NAT instance that provides outbound internet access for updates and accessing public data.

Which combination of architecture adjustments should the company implement to achieve high availability? (Choose two.)

 

NAT gateways in each Availability Zone are implemented with redundancy. Create a NAT gateway in each Availability Zone to ensure zone-independent architecture.

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html
40

A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower.

The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower.

Which solution will meet these requirements in the MOST automated way?

Customizations for AWS Control Tower

 

Customizations for AWS Control Tower combines AWS Control Tower and other highly-available, trusted AWS services to help customers more quickly set up a secure, multi-account AWS environment using AWS best practices.

You can easily add customizations to your AWS Control Tower landing zone using an AWS CloudFormation template and service control policies (SCPs).

You can deploy the custom template and policies to individual accounts and organizational units (OUs) within your organization. It also integrates with AWS Control Tower lifecycle events to ensure that resource deployments stay in sync with your landing zone. For example, when a new account is created using the AWS Control Tower account factory, Customizations for AWS Control Tower ensures that all resources attached to the account’s OUs will be automatically deployed.

 

https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/

 

48

A rapidly growing company wants to scale for developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets.

The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables.
To keep up with demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments.

Which approach will meet these requirements and quickly provide consistent AWS environments for developers?

 

Nested Stacks

As your infrastructure grows, common patterns can emerge in which you declare the same components in multiple templates. You can separate out these common components and create dedicated templates for them.

 

StackSets

Using an administrator account, StackSets enables you to create, update, or delete stacks across multiple accounts and AWS Regions with a single operation if your organization has cross-Region, multi-account deployment requirements.

 

Have to know:

The Fn::ImportValue intrinsic function cannot be used in the Parameters section.

Use !GetAtt to reference the outputs of nested stacks that you include with TemplateURL.

 

https://aws.amazon.com/blogs/networking-and-content-delivery/moving-towards-devops-ci-cd-approach-to-configure-and-manage-aws-networking-resources/
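Inside the development-environment template, the networking stack's exported values are consumed with Fn::ImportValue, for example (shown as a dict; export names are hypothetical):

```python
# CloudFormation resource fragment (as a dict) importing the networking
# stack's exported subnet IDs into the development-environment template
# (export names hypothetical).
alb_resource = {
    "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "Properties": {
        "Subnets": [
            {"Fn::ImportValue": "network-stack-PublicSubnetA"},
            {"Fn::ImportValue": "network-stack-PublicSubnetB"},
        ],
        "Scheme": "internal",
    },
}
```

Because the subnets are imported rather than parameterized, every environment deployed from the template lands in the networking team's managed VPC, and updating the template updates all environments consistently.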

 

51

A DevOps engineer has automated a web service deployment by using AWS CodePipeline with the following steps:
1. An AWS CodeBuild project compiles the deployment artifact and runs unit tests.
2. An AWS CodeDeploy deployment group deploys the web service to Amazon EC2 instances in the staging environment.
3. A CodeDeploy deployment group deploys the web service to EC2 instances in the production environment.

The quality assurance (QA) team requests permission to inspect the build artifact before the deployment to the production environment occurs. The QA team wants to run an internal penetration testing tool to conduct manual tests. The tool will be invoked by a REST API call.

Which combination of actions should the DevOps engineer take to fulfill this request? (Choose two.)

 

 

 

CodePipeline invoke actions:

 

56

A company runs an application on Amazon EC2 instances. The company uses a series of AWS CloudFormation stacks to define the application resources. A developer performs updates by building and testing the application on a laptop and then uploading the build output and CloudFormation stack templates to Amazon S3. The developer’s peers review the changes before the developer performs the CloudFormation stack update and installs a new version of the application onto the EC2 instances.

The deployment process is prone to errors and is time-consuming when the developer updates each EC2 instance with the new application. The company wants to automate as much of the application deployment process as possible while retaining a final manual approval step before the modification of the application or resources.
The company already has moved the source code for the application and the CloudFormation templates to AWS CodeCommit. The company also has created an AWS CodeBuild project to build and test the application.

Which combination of steps will meet the company’s requirements? (Choose two.)

 

CodeDeploy

1. Create an application.
2. Create a deployment group.
– In Environment configuration, choose Amazon EC2 instances.
– In Agent configuration with AWS Systems Manager, choose how to install the AWS CodeDeploy agent (AWS Systems Manager prerequisites are required) on the EC2 instances.
– In Deployment settings, choose CodeDeployDefault.AllAtOnce, for example.

 

CloudFormation

1. Create change sets for the application stacks.
2. View the proposed changes before executing them.

 

CodePipeline

1. Invoke the CodeBuild job.
2. Pause for a manual approval step.
3. View the proposed changes in CloudFormation before approving deployment.
4. Approve the deployment.
5. Run the CloudFormation change set action.
6. Deploy the application to EC2.

 

58

A DevOps engineer needs to apply a core set of security controls to an existing set of AWS accounts. The accounts are in an organization in AWS Organizations. Individual teams will administer individual accounts by using the AdministratorAccess AWS managed policy. For all accounts, AWS CloudTrail and AWS Config must be turned on in all available AWS Regions.

Individual account administrators must not be able to edit or delete any of the baseline resources. However, individual account administrators must be able to edit or delete their own CloudTrail trails and AWS Config rules.

Which solution will meet these requirements in the MOST operationally efficient way?

 

AWS Config

By using AWS Config you can audit the configuration of your AWS resources and ensure that they comply with configuration best practices. You can use AWS CloudFormation StackSets to enable AWS Config in multiple accounts and Regions.

AWS Config feature allows customers to register a delegated admin account that will be used to deploy and manage these Config resources across AWS Organizations.

https://aws.amazon.com/blogs/mt/aws-config-best-practices/

 

AWS CloudTrail

If you have created an organization in AWS Organizations, you can create a trail that logs all events for all AWS accounts in that organization. This is sometimes called an organization trail.
The management account for the organization can assign a delegated administrator to create new organization trails or manage existing organization trails.

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html

 

Service control policies (SCPs)

Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization’s access control guidelines.

SCPs can prevent users from disabling AWS Config or changing its rules.

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_config.html#example_config_1

 

66

A company has its AWS accounts in an organization in AWS Organizations. AWS Config is manually configured in each AWS account. The company needs to implement a solution to centrally configure AWS Config for all accounts in the organization. The solution also must record resource changes to a central account.

Which combination of actions should a DevOps engineer perform to meet these requirements? (Choose two.)

 

AWS Config aggregator

 

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. With AWS Config, you can review changes in configurations and relationships between AWS resources, explore resource configuration histories, and use rules to determine compliance. An aggregator is an AWS Config resource type that collects AWS Config configuration and compliance data from multiple AWS accounts and Regions into a single account and Region to get a centralized view of your resource inventory and compliance.

A delegated administrator account is an account in an AWS organization that is granted additional administrative permissions for a specified AWS service. This means that in addition to the management account, you can also use a delegated admin account to aggregate data from all the member accounts in AWS Organizations without any additional authorization. With this capability, different teams in an organization (auditing, security, or compliance) can use separate accounts and aggregate organization-wide data in their respective administration accounts for centralized governance. This capability also eliminates the need for those teams to gain access to the management account to fetch the aggregated data.

 

https://aws.amazon.com/blogs/mt/org-aggregator-delegated-admin/
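As a sketch, an organization-wide aggregator could be declared in CloudFormation in the delegated administrator account (the role ARN and aggregator name are placeholders):

```yaml
Resources:
  OrgAggregator:
    Type: AWS::Config::ConfigurationAggregator
    Properties:
      ConfigurationAggregatorName: org-aggregator
      OrganizationAggregationSource:
        # Role that allows Config to call Organizations APIs (placeholder ARN)
        RoleArn: arn:aws:iam::111122223333:role/ConfigAggregatorRole
        AllAwsRegions: true
```

The aggregator collects configuration and compliance data from all member accounts and Regions into this single account for the centralized view.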

 

67

A company has 20 service teams. Each service team is responsible for its own microservice. Each service team uses a separate AWS account for its microservice and a VPC with the 192.168.0.0/22 CIDR block. The company manages the AWS accounts with AWS Organizations.

Each service team hosts its microservice on multiple Amazon EC2 instances behind an Application Load Balancer. The microservices communicate with each other across the public internet. The company’s security team has issued a new guideline that all communication between microservices must use HTTPS over private network connections and cannot traverse the public internet.

A DevOps engineer must implement a solution that fulfills these obligations and minimizes the number of changes for each service team.

Which solution will meet these requirements?

 

A virtual private cloud (VPC)

VPC is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch AWS resources, such as Amazon EC2 instances, into your VPC.

 

AWS PrivateLink

AWS PrivateLink is a highly available, scalable technology that you can use to privately connect your VPC to services as if they were in your VPC. You do not need to use an internet gateway, NAT device, public IP address, AWS Direct Connect connection, or AWS Site-to-Site VPN connection to allow communication with the service from your private subnets. Therefore, you control the specific API endpoints, sites, and services that are reachable from your VPC.
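Because every team's VPC uses the same 192.168.0.0/22 CIDR block, VPC peering cannot route between them, which is why PrivateLink fits this scenario: the provider exposes a service behind a Network Load Balancer, and consumers reach it through an interface endpoint. A minimal CloudFormation sketch (resource names and the endpoint service name are placeholders):

```yaml
# Provider account: endpoint service fronted by a Network Load Balancer.
EndpointService:
  Type: AWS::EC2::VPCEndpointService
  Properties:
    NetworkLoadBalancerArns:
      - !Ref ServiceNLB          # NLB in front of the microservice (assumed)
    AcceptanceRequired: false

# Consumer account: interface endpoint into the consumer VPC.
ConsumerEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcEndpointType: Interface
    # Placeholder service name; the real value comes from the provider account.
    ServiceName: com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0
    VpcId: !Ref ConsumerVpc
    SubnetIds:
      - !Ref ConsumerSubnet
```

Service teams keep their existing ALBs behind the NLB target, which minimizes changes per team.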

 

VPC peering

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-Region VPC peering connection).

When you establish peering relationships between VPCs across different AWS Regions, resources in the VPCs (for example, EC2 instances and Lambda functions) in different AWS Regions can communicate with each other using private IP addresses, without using a gateway, VPN connection, or network appliance. The traffic remains in the private IP space. All inter-Region traffic is encrypted with no single point of failure, or bandwidth bottleneck. Traffic always stays on the global AWS backbone, and never traverses the public internet, which reduces threats, such as common exploits, and DDoS attacks. Inter-Region VPC peering provides a simple and cost-effective way to share resources between regions or replicate data for geographic redundancy.

 

86

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline.

A DevOps engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps engineer believes the failures are due to database changes not having fully propagated before the Lambda function is invoked.

How should the DevOps engineer overcome this?

 

List of lifecycle event hooks for an AWS Lambda deployment

An AWS Lambda hook is one Lambda function specified with a string on a new line after the name of the lifecycle event. Each hook is executed once per deployment. Here are descriptions of the hooks available for use in your AppSpec file.

  • BeforeAllowTraffic – Use to run tasks before traffic is shifted to the deployed Lambda function version.
  • AfterAllowTraffic – Use to run tasks after all traffic is shifted to the deployed Lambda function version.

 

See the AppSpec ‘hooks’ section for an Amazon ECS deployment and an EC2/On-Premises deployment:

https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
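For the ordering API scenario above, a BeforeAllowTraffic hook can run a validation Lambda function that confirms the database changes have fully propagated before CodeDeploy shifts traffic to the new version. A sketch of the AppSpec file (function names and versions are placeholders):

```yaml
version: 0.0
Resources:
  - orderingApiFunction:            # placeholder logical name
      Type: AWS::Lambda::Function
      Properties:
        Name: ordering-api          # assumed function name
        Alias: live
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  # Hypothetical validation function; CodeDeploy invokes it and waits
  # for a success status before allowing traffic to the new version.
  - BeforeAllowTraffic: "CheckDatabaseReady"
```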

 

91

A company uses a single AWS account to test applications on Amazon EC2 instances. The company has turned on AWS Config in the AWS account and has activated the restricted-ssh AWS Config managed rule.

The company needs an automated monitoring solution that will provide a customized notification in real time if any security group in the account is not compliant with the restricted-ssh rule. The customized notification must contain the name and ID of the noncompliant security group.

A DevOps engineer creates an Amazon Simple Notification Service (Amazon SNS) topic in the account and subscribes the appropriate personnel to the topic.

What should the DevOps engineer do next to meet these requirements?

 

EventBridge

1. Create an EventBridge rule with an event pattern that filters as narrowly as possible (for example, only compliance-change events for the restricted-ssh rule).
2. You can customize the text of an event before EventBridge passes it to the rule's target. When you configure an input transformer on the rule (in the console or through the API), the transformed event is sent to the target instead of the original event.
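For this scenario, the rule's event pattern and the target's input transformer might look like the following (the message text is illustrative; the `$.detail.*` paths match the Config Rules Compliance Change event format):

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "configRuleName": ["restricted-ssh"],
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```

The input transformer on the SNS target then builds the customized notification:

```json
{
  "InputPathsMap": {
    "resourceId": "$.detail.resourceId",
    "rule": "$.detail.configRuleName"
  },
  "InputTemplate": "\"Security group <resourceId> is noncompliant with rule <rule>.\""
}
```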

 

B. correct as well

Previously, SNS subscription filter policies acted only on message attributes, not on the message body (payload).

However, as of 22 November 2022, Amazon SNS supports payload-based message filtering, which augments the existing attribute-based option. This lets you offload additional filtering logic to SNS and further reduce your application integration costs.

https://aws.amazon.com/blogs/compute/introducing-payload-based-message-filtering-for-amazon-sns/

If a subscription doesn’t have a filter policy, the subscriber receives every message published to its topic. When you publish a message to a topic with a filter policy in place, Amazon SNS compares the message attributes or the message body to the properties in the filter policy for each of the topic’s subscriptions. If any of the message attributes or message body properties match, Amazon SNS sends the message to the subscriber. Otherwise, Amazon SNS doesn’t send the message to that subscriber.
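A sketch of a payload-based filter policy for this scenario (applied to the subscription with the filter policy scope set to MessageBody) that matches only noncompliant restricted-ssh evaluations:

```json
{
  "detail": {
    "configRuleName": ["restricted-ssh"],
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```

This assumes the event is delivered to the topic with its original JSON body (for example, from EventBridge with input passthrough).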

 

92

A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An application uses a MySQL database and Amazon EC2 web servers.

The development team needs a strategy for failover and disaster recovery. Which combination of deployment strategies will meet these requirements? (Choose two.)

 

Aurora DB clusters

An Amazon Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances. An Aurora cluster volume is a virtual database storage volume that spans multiple Availability Zones, with each Availability Zone having a copy of the DB cluster data. Two types of DB instances make up an Aurora DB cluster:

– Primary DB instance

– Aurora Replica

 

Aurora Global Database

Amazon Aurora Global Database is designed for globally distributed applications, allowing a single Amazon Aurora database to span multiple AWS Regions. It replicates your data with no impact on database performance, enables fast local reads with low latency in each Region, and provides disaster recovery from Region-wide outages.

 

Types of Aurora endpoints

Cluster endpoint – A cluster endpoint (or writer endpoint) for an Aurora DB cluster connects to the current primary DB instance for that DB cluster. This endpoint is the only one that can perform write operations such as DDL statements. Because of this, the cluster endpoint is the one that you connect to when you first set up a cluster or when your cluster only contains a single DB instance.

Reader endpoint – A reader endpoint for an Aurora DB cluster provides load-balancing support for read-only connections to the DB cluster. Use the reader endpoint for read operations, such as queries. By processing those statements on the read-only Aurora Replicas, this endpoint reduces the overhead on the primary instance. It also helps the cluster to scale the capacity to handle simultaneous SELECT queries, proportional to the number of Aurora Replicas in the cluster. Each Aurora DB cluster has one reader endpoint.

Custom endpoint – A custom endpoint for an Aurora cluster represents a set of DB instances that you choose. When you connect to the endpoint, Aurora performs load balancing and chooses one of the instances in the group to handle the connection. You define which instances this endpoint refers to, and you decide what purpose the endpoint serves.

Instance endpoint – An instance endpoint connects to a specific DB instance within an Aurora cluster. Each DB instance in a DB cluster has its own unique instance endpoint. So there is one instance endpoint for the current primary DB instance of the DB cluster, and there is one instance endpoint for each of the Aurora Replicas in the DB cluster.

 

 

RDS Multi-AZ

In an Amazon RDS Multi-AZ deployment, Amazon RDS automatically creates a primary database (DB) instance and synchronously replicates the data to an instance in a different AZ. When it detects a failure, Amazon RDS automatically fails over to a standby instance without manual intervention.

 

RDS read replica

A read replica is a read-only copy of a DB instance. You can reduce the load on your primary DB instance by routing queries from your applications to the read replica. In this way, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.

 

RDS cross-region read-write replica

By using cross-Region read replicas in Amazon RDS, you can create a MariaDB, MySQL, Oracle, PostgreSQL, or SQL Server read replica in a different Region from the source DB instance.

 

93

A company is running an application on Amazon EC2 instances in an Auto Scaling group. Recently, an issue occurred that prevented EC2 instances from launching successfully, and it took several hours for the support team to discover the issue. The support team wants to be notified by email whenever an EC2 instance does not start successfully.

Which action will accomplish this?

 

You can be notified when Amazon EC2 Auto Scaling is launching or terminating the EC2 instances in your Auto Scaling group. You manage notifications using Amazon Simple Notification Service (Amazon SNS).

For example, if you configure your Auto Scaling group to use the autoscaling:EC2_INSTANCE_TERMINATE notification type, and your Auto Scaling group terminates an instance, it sends an email notification. This email contains the details of the terminated instance, such as the instance ID and the reason that the instance was terminated.
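For the failed-launch scenario, the relevant notification type is autoscaling:EC2_INSTANCE_LAUNCH_ERROR. As a sketch, in CloudFormation (topic and group names are placeholders):

```yaml
WebAsg:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    # ... existing group settings ...
    NotificationConfigurations:
      - TopicARN: !Ref AlertTopic   # SNS topic with the support team's email subscribed
        NotificationTypes:
          - "autoscaling:EC2_INSTANCE_LAUNCH_ERROR"
```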

 

 

98

A company recently created a new AWS Control Tower landing zone in a new organization in AWS Organizations. The landing zone must be able to demonstrate compliance with the Center for Internet Security (CIS) Benchmarks for AWS Foundations.

The company’s security team wants to use AWS Security Hub to view compliance across all accounts. Only the security team can be allowed to view aggregated Security Hub findings. In addition, specific users must be able to view findings from their own accounts within the organization. All accounts must be enrolled in Security Hub after the accounts are created.

Which combination of steps will meet these requirements in the MOST automated way? (Choose three.)

 

AWS Control Tower offers a straightforward way to set up and govern an AWS multi-account environment. AWS Control Tower orchestrates the capabilities of several other AWS services, including AWS Organizations, AWS Service Catalog, and AWS IAM Identity Center (successor to AWS Single Sign-On), to build a landing zone in less than an hour. Resources are set up and managed on your behalf.

 

AWS Control Tower – Landing Zone

Automatically provisioned, secure, and compliant multi-account environment based on AWS best practices

Landing Zone consists of:

  • AWS Organizations – create and manage the multi-account structure
  • Account Factory – easily configure new accounts to adhere to config and policies
  • Organizational Units (OUs) – group and categorise accounts based on purpose
  • AWS Config – to monitor and assess your resources’ compliance with Guardrails
  • Service Control Policies (SCPs) – enforce fine-grained permissions and restrictions
  • IAM Identity Center – centrally manage user access to accounts and services
  • Guardrails – rules and policies to enforce security, compliance and best practices

 

For Security Hub:

  • Designate the Security Hub delegated administrator. https://docs.aws.amazon.com/securityhub/latest/userguide/designate-orgs-admin-account.html
  • When you turn on automatic enablement, Security Hub treats new accounts as member accounts when they are added to the organization.
    https://docs.aws.amazon.com/securityhub/latest/userguide/accounts-orgs-auto-enable.html
  • Use new account assignment APIs for AWS SSO to automate multi-account access. https://aws.amazon.com/blogs/security/use-new-account-assignment-apis-for-aws-sso-to-automate-multi-account-access/

101

A company is divided into teams. Each team has an AWS account, and all the accounts are in an organization in AWS Organizations. Each team must retain full administrative rights to its AWS account. Each team also must be allowed to access only AWS services that the company approves for use. AWS services must gain approval through a request and approval process.

How should a DevOps engineer configure the accounts to meet these requirements?

 

How to Create an Approval Flow for an AWS Service Catalog Product Launch Using AWS Lambda

 

https://aws.amazon.com/blogs/apn/how-to-create-an-approval-flow-for-an-aws-service-catalog-product-launch-using-aws-lambda/

 

Best Practices for AWS Organizations Service Control Policies in a Multi-Account Environment

 

https://aws.amazon.com/blogs/industries/best-practices-for-aws-organizations-service-control-policies-in-a-multi-account-environment/

 

104

A DevOps engineer has implemented a CI/CD pipeline to deploy an AWS CloudFormation template that provisions a web application. The web application consists of an Application Load Balancer (ALB), a target group, a launch template that uses an Amazon Linux 2 AMI, an Auto Scaling group of Amazon EC2 instances, a security group, and an Amazon RDS for MySQL database. The launch template includes user data that specifies a script to install and start the application.

The initial deployment of the application was successful. The DevOps engineer made changes to update the version of the application with the user data. The CI/CD pipeline has deployed a new version of the template. However, the health checks on the ALB are now failing. The health checks have marked all targets as unhealthy.

During investigation, the DevOps engineer notices that the CloudFormation stack has a status of UPDATE_COMPLETE. However, when the DevOps engineer connects to one of the EC2 instances and checks /var/log/messages, the DevOps engineer notices that the Apache web server failed to start successfully because of a configuration error.

How can the DevOps engineer ensure that the CloudFormation deployment will fail if the user data fails to successfully finish running?
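The usual approach is a CreationPolicy plus an UpdatePolicy on the Auto Scaling group, combined with a cfn-signal call at the end of the user data script: CloudFormation then waits for success signals from the new instances, and a failed script causes the update to fail and roll back. A sketch (resource names, counts, and timeout are placeholders):

```yaml
WebAsg:
  Type: AWS::AutoScaling::AutoScalingGroup
  CreationPolicy:
    ResourceSignal:
      Count: 1          # number of success signals to wait for (assumption)
      Timeout: PT15M
  UpdatePolicy:
    AutoScalingReplacingUpdate:
      WillReplace: true # replacement instances must signal success
  Properties:
    # ... existing group settings ...

# At the end of the launch template's user data script, signal the
# script's exit status back to CloudFormation (Amazon Linux 2 path):
#   /opt/aws/bin/cfn-signal -e $? \
#     --stack ${AWS::StackName} --resource WebAsg --region ${AWS::Region}
```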

 

112

A company is using an organization in AWS Organizations to manage multiple AWS accounts. The company’s development team wants to use AWS Lambda functions to meet resiliency requirements and is rewriting all applications to work with Lambda functions that are deployed in a VPC. The development team is using Amazon Elastic File System (Amazon EFS) as shared storage in Account A in the organization.

The company wants to continue to use Amazon EFS with Lambda. Company policy requires all serverless projects to be deployed in Account B.

A DevOps engineer needs to reconfigure an existing EFS file system to allow Lambda functions to access the data through an existing EFS access point.

Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)

 

How can I use system policies to control access to my EFS file system?

https://repost.aws/knowledge-center/access-efs-across-accounts

 

How can I grant directory access to specific EC2 instances using IAM and EFS access points?

https://repost.aws/knowledge-center/efs-access-points-directory-access

 

1. The VPCs of your NFS client and your EFS file system are connected using either a VPC peering connection or AWS Transit Gateway. This allows Amazon Elastic Compute Cloud (Amazon EC2) instances from the same or different accounts to access EFS file systems in a different VPC.

2. Your IAM role (the Lambda execution role or any other role) has read access to both the Amazon EFS file system and the NFS client resources.

3. In the EFS file system policy, add a statement that allows the ClientMount and ClientWrite actions for the Lambda function's access point.
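Step 3 might look like the following file system policy statement (account IDs, role name, Region, and access point ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_B_ID:role/lambda-execution-role"
      },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite"
      ],
      "Condition": {
        "StringEquals": {
          "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:ACCOUNT_A_ID:access-point/fsap-0123456789abcdef0"
        }
      }
    }
  ]
}
```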

 

122

A global company manages multiple AWS accounts by using AWS Control Tower. The company hosts internal applications and public applications.

Each application team in the company has its own AWS account for application hosting. The accounts are consolidated in an organization in AWS Organizations. One of the AWS Control Tower member accounts serves as a centralized DevOps account with CI/CD pipelines that application teams use to deploy applications to their respective target AWS accounts. An IAM role for deployment exists in the centralized DevOps account.

An application team is attempting to deploy its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in an application AWS account. An IAM role for deployment exists in the application AWS account. The deployment is through an AWS CodeBuild project that is set up in the centralized DevOps account. The CodeBuild project uses an IAM service role for CodeBuild. The deployment is failing with an Unauthorized error during attempts to connect to the cross-account EKS cluster from CodeBuild.

Which solution will resolve this error?

 

To deploy an application into a different account from a centralized DevOps account:

 

  1. CodeBuild in the centralized DevOps account triggers the deployment into the application account.
  2. The CodeBuild service role assumes a deployment IAM role in the application account.
  3. The application deployment IAM role must have a trust relationship with the DevOps deployment IAM role that allows the sts:AssumeRole action in its trust policy.
  4. The application deployment IAM role must have the required access to the EKS cluster (for example, by being mapped in the cluster's aws-auth ConfigMap).
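Step 3 above can be sketched as the trust policy on the application account's deployment role (account ID and role name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::DEVOPS_ACCOUNT_ID:role/devops-deployment-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```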

 

125

A company updated the AWS CloudFormation template for a critical business application. The stack update process failed due to an error in the updated template, and AWS CloudFormation automatically began the stack rollback process. Later, a DevOps engineer discovered that the application was still unavailable and that the stack was in the UPDATE_ROLLBACK_FAILED state.

Which combination of actions should the DevOps engineer perform so that the stack rollback can complete successfully? (Choose two.)

 

If your stack is stuck in the UPDATE_ROLLBACK_FAILED state after a failed update, then the only actions that you can perform on the stack are the ContinueUpdateRollback or DeleteStack operations. This is because CloudFormation requires further input from you to acknowledge that the stack is out of sync with the template that the stack is attempting to roll back to.

To retry the rollback and resolve the error, you can use ContinueUpdateRollback with either the CloudFormation console or AWS Command Line Interface (AWS CLI).

 

To resolve the error, you might need to raise limits, change permissions, or modify other settings.

 

https://repost.aws/knowledge-center/cloudformation-update-rollback-failed

127

A company manually provisions IAM access for its employees. The company wants to replace the manual process with an automated process. The company has an existing Active Directory system configured with an external SAML 2.0 identity provider (IdP).

The company wants employees to use their existing corporate credentials to access AWS. The groups from the existing Active Directory system must be available for permission management in AWS Identity and Access Management (IAM). A DevOps engineer has completed the initial configuration of AWS IAM Identity Center (AWS Single Sign-On) in the company’s AWS account.

What should the DevOps engineer do next to meet the requirements?

 

Both SCIM and SAML are important considerations for configuring IAM Identity Center.

 

SAML 2.0 implementation

IAM Identity Center supports identity federation with SAML (Security Assertion Markup Language) 2.0. This allows IAM Identity Center to authenticate identities from external identity providers (IdPs). SAML 2.0 is an open standard used for securely exchanging SAML assertions. SAML 2.0 passes information about a user between a SAML authority (called an identity provider or IdP), and a SAML consumer (called a service provider or SP). The IAM Identity Center service uses this information to provide federated single sign-on. Single sign-on allows users to access AWS accounts and configured applications based on their existing identity provider credentials.

IAM Identity Center adds SAML IdP capabilities to your IAM Identity Center store, AWS Managed Microsoft AD, or to an external identity provider. Users can then single sign-on into services that support SAML, including the AWS Management Console and third-party applications such as Microsoft 365, Concur, and Salesforce.

The SAML protocol however does not provide a way to query the IdP to learn about users and groups. Therefore, you must make IAM Identity Center aware of those users and groups by provisioning them into IAM Identity Center.

 

SCIM profile

IAM Identity Center provides support for the System for Cross-domain Identity Management (SCIM) v2.0 standard. SCIM keeps your IAM Identity Center identities in sync with identities from your IdP. This includes automatic provisioning, updates, and deprovisioning of users between your IdP and IAM Identity Center.

For more information about how to implement SCIM, see Automatic provisioning. For additional details about IAM Identity Center’s SCIM implementation, see the IAM Identity Center SCIM Implementation Developer Guide.

 

https://docs.aws.amazon.com/singlesignon/latest/userguide/scim-profile-saml.html

132

A company manages a multi-tenant environment in its VPC and has configured Amazon GuardDuty for the corresponding AWS account. The company sends all GuardDuty findings to AWS Security Hub.

Traffic from suspicious sources is generating a large number of findings. A DevOps engineer needs to implement a solution to automatically deny traffic across the entire VPC when GuardDuty discovers a new suspicious source.

Which solution will meet these requirements?

 

This is about how to automatically block suspicious traffic with AWS Network Firewall and Amazon GuardDuty.

 

Figure 2: Detailed workflow diagram: Automatically block suspicious traffic with Network Firewall and GuardDuty

https://aws.amazon.com/ko/blogs/security/automatically-block-suspicious-traffic-with-aws-network-firewall-and-amazon-guardduty/

 
