Amazon SAP-C01 Practice Test - Questions Answers, Page 46

Your system recently experienced downtime. During the troubleshooting process, you found that a new administrator had mistakenly terminated several production EC2 instances. Which of the following strategies will help prevent a similar situation in the future?

The administrator still must be able to launch, start, stop, and terminate development resources, and launch and start production instances.

A. Create an IAM user that is not allowed to terminate instances, by leveraging production EC2 termination protection.
B. Leverage resource-based tagging, along with an IAM user, to prevent specific users from terminating production EC2 resources.
C. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances.
D. Create an IAM user and apply an IAM role that prevents users from terminating production EC2 instances.
Suggested answer: B

Explanation:

Working with volumes

When an API action requires a caller to specify multiple resources, you must create a policy statement that allows users to access all required resources. If you need to use a Condition element with one or more of these resources, you must create multiple statements as shown in this example.

The following policy allows users to attach volumes with the tag "volume_user=iam-user-name" to instances with the tag "department=dev", and to detach those volumes from those instances. If you attach this policy to an IAM group, the aws:username policy variable gives each IAM user in the group permission to attach or detach volumes from the instances with a tag named volume_user that has his or her IAM user name as a value.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/department": "dev"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/volume_user": "${aws:username}"
        }
      }
    }
  ]
}
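The same tag-based pattern applies directly to this question's requirement. The policy below is a minimal sketch, not taken from the source: it assumes production instances carry a hypothetical tag such as environment=production, and it places an explicit Deny on the stop and terminate actions for those instances while allowing the administrator to manage everything else.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowInstanceLifecycleActions",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyStopTerminateOnProductionTag",
      "Effect": "Deny",
      "Action": [
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/environment": "production"
        }
      }
    }
  ]
}

Because an explicit Deny always overrides an Allow, the administrator can still launch, start, stop, and terminate development resources and launch and start production instances, but cannot stop or terminate anything tagged as production.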

Launching instances (RunInstances)

The RunInstances API action launches one or more instances. RunInstances requires an AMI and creates an instance, and users can specify a key pair and security group in the request. Launching into EC2-VPC requires a subnet and creates a network interface. Launching from an Amazon EBS-backed AMI creates a volume. Therefore, the user must have permission to use these Amazon EC2 resources. The caller can also configure the instance using optional parameters to RunInstances, such as the instance type and a subnet. You can create a policy statement that requires users to specify an optional parameter, or restricts users to particular values for a parameter. The examples in this section demonstrate some of the many possible ways that you can control the configuration of an instance that a user can launch.

Note that by default, users don't have permission to describe, start, stop, or terminate the resulting instances. One way to grant the users permission to manage the resulting instances is to create a specific tag for each instance, and then create a statement that enables them to manage instances with that tag. For more information, see Working with instances.

a. AMI

The following policy allows users to launch instances using only the AMIs that have the specified tag, "department=dev", associated with them. The users can't launch instances using other AMIs because the Condition element of the first statement requires that users specify an AMI that has this tag. The users also can't launch into a subnet, as the policy does not grant permissions for the subnet and network interface resources. They can, however, launch into EC2-Classic. The second statement uses a wildcard to enable users to create instance resources, and requires users to specify the key pair project_keypair and the security group sg-1a2b3c4d. Users are still able to launch instances without a key pair.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/department": "dev"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/project_keypair",
        "arn:aws:ec2:region:account:security-group/sg-1a2b3c4d"
      ]
    }
  ]
}

Alternatively, the following policy allows users to launch instances using only the specified AMIs, ami-9e1670f7 and ami-45cf5c3c. The users can't launch an instance using other AMIs (unless another statement grants the users permission to do so), and the users can't launch an instance into a subnet.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-9e1670f7",
        "arn:aws:ec2:region::image/ami-45cf5c3c",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, the following policy allows users to launch instances from all AMIs owned by Amazon. The Condition element of the first statement tests whether ec2:Owner is amazon. The users can't launch an instance using other AMIs (unless another statement grants the users permission to do so). The users are able to launch an instance into a subnet.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:Owner": "amazon"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

b. Instance type

The following policy allows users to launch instances using only the t2.micro or t2.small instance type, which you might do to control costs. The users can't launch larger instances because the Condition element of the first statement tests whether ec2:InstanceType is either t2.micro or t2.small.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, you can create a policy that denies users permission to launch any instances except t2.micro and t2.small instance types.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:instance/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "ec2:InstanceType": ["t2.micro", "t2.small"]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

c. Subnet

The following policy allows users to launch instances using only the specified subnet, subnet-12345678. The group can't launch instances into any other subnet (unless another statement grants the users permission to do so). Users are still able to launch instances into EC2-Classic.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:subnet/subnet-12345678",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

Alternatively, you could create a policy that denies users permission to launch an instance into any other subnet. The statement does this by denying permission to create a network interface, except where subnet subnet-12345678 is specified.

This denial overrides any other policies that are created to allow launching instances into other subnets. Users are still able to launch instances into EC2-Classic.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region:account:network-interface/*"
      ],
      "Condition": {
        "ArnNotEquals": {
          "ec2:Subnet": "arn:aws:ec2:region:account:subnet/subnet-12345678"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:region::image/ami-*",
        "arn:aws:ec2:region:account:network-interface/*",
        "arn:aws:ec2:region:account:instance/*",
        "arn:aws:ec2:region:account:subnet/*",
        "arn:aws:ec2:region:account:volume/*",
        "arn:aws:ec2:region:account:key-pair/*",
        "arn:aws:ec2:region:account:security-group/*"
      ]
    }
  ]
}

A company’s service for video game recommendations has just gone viral. The company has new users from all over the world. The website for the service is hosted on a set of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). The website consists of static content with different resources being loaded depending on the device type. Users recently reported that the load time for the website has increased. Administrators are reporting high loads on the EC2 instances that host the service. Which set of actions should a solutions architect take to improve response times?

A. Create separate Auto Scaling groups based on device types. Switch to a Network Load Balancer (NLB). Use the User-Agent HTTP header in the NLB to route to a different set of EC2 instances.
B. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use Lambda@Edge to load different resources based on the User-Agent HTTP header.
C. Create a separate ALB for each device type. Create one Auto Scaling group behind each ALB. Use Amazon Route 53 to route to different ALBs depending on the User-Agent HTTP header.
D. Move content to Amazon S3. Create an Amazon CloudFront distribution to serve content out of the S3 bucket. Use the User-Agent HTTP header to load different content.
Suggested answer: A

A company had a tight deadline to migrate its on-premises environment to AWS. It moved over its Microsoft SQL Servers and Microsoft Windows Servers using the virtual machine import/export service, and rebuilt other applications natively in the cloud.

The team created databases both on Amazon EC2 and on Amazon RDS. Each team in the company was responsible for migrating its applications, and the teams created individual accounts for isolation of resources. The company did not have much time to consider costs, but now it would like suggestions on reducing its AWS spend.

Which steps should a Solutions Architect take to reduce costs?

A. Enable AWS Business Support and review AWS Trusted Advisor’s cost checks. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Save AWS Simple Monthly Calculator reports in Amazon S3 for trend analysis. Create a master account under Organizations and have teams join for consolidated billing.
B. Enable Cost Explorer and AWS Business Support. Reserve Amazon EC2 and Amazon RDS DB instances. Use Amazon CloudWatch and AWS Trusted Advisor for monitoring and to receive cost-savings suggestions. Create a master account under Organizations and have teams join for consolidated billing.
C. Create an AWS Lambda function that changes the instance size based on Amazon CloudWatch alarms. Reserve instances based on AWS Simple Monthly Calculator suggestions. Have an AWS Well-Architected Framework review and apply recommendations. Create a master account under Organizations and have teams join for consolidated billing.
D. Create a budget and monitor for costs exceeding the budget. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Create an AWS Lambda function that changes instance sizes based on Amazon CloudWatch alarms. Have each team upload their bill to an Amazon S3 bucket for analysis of team spending. Use Spot Instances on nightly batch processing jobs.
Suggested answer: A

Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months, resulting in significant financial losses. Your CIO is strongly considering moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200 GB in size and you have a 20 Mbps Internet connection. How would you do this while minimizing costs?

A. Create an EBS-backed private AMI that includes a fresh install of your application. Develop a CloudFormation template that includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
B. Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
C. Create an EBS-backed private AMI that includes a fresh install of your application. Set up a script in your data center to back up the local database every hour and to encrypt and copy the resulting file to an S3 bucket using multipart upload.
D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.
Suggested answer: A

Explanation:

Overview of Creating Amazon EBS-Backed AMIs

First, launch an instance from an AMI that's similar to the AMI that you'd like to create. You can connect to your instance and customize it. When the instance is configured correctly, ensure data integrity by stopping the instance before you create an AMI, then create the image. When you create an Amazon EBS-backed AMI, we automatically register it for you.

Amazon EC2 powers down the instance before creating the AMI to ensure that everything on the instance is stopped and in a consistent state during the creation process. If you're confident that your instance is in a consistent state appropriate for AMI creation, you can tell Amazon EC2 not to power down and reboot the instance. Some file systems, such as XFS, can freeze and unfreeze activity, making it safe to create the image without rebooting the instance.

During the AMI-creation process, Amazon EC2 creates snapshots of your instance's root volume and any other EBS volumes attached to your instance. If any volumes attached to the instance are encrypted, the new AMI only launches successfully on instances that support Amazon EBS encryption. For more information, see Amazon EBS Encryption.

Depending on the size of the volumes, it can take several minutes for the AMI-creation process to complete (sometimes up to 24 hours). You may find it more efficient to create snapshots of your volumes prior to creating your AMI. This way, only small, incremental snapshots need to be created when the AMI is created, and the process completes more quickly (the total time for snapshot creation remains the same). For more information, see Creating an Amazon EBS Snapshot.

After the process completes, you have a new AMI and snapshot created from the root volume of the instance. When you launch an instance using the new AMI, we create a new EBS volume for its root volume using the snapshot. Both the AMI and the snapshot incur charges to your account until you delete them. For more information, see Deregistering Your AMI.

If you add instance-store volumes or EBS volumes to your instance in addition to the root device volume, the block device mapping for the new AMI contains information for these volumes, and the block device mappings for instances that you launch from the new AMI automatically contain information for these volumes. The instance-store volumes specified in the block device mapping for the new instance are new and don't contain any data from the instance store volumes of the instance you used to create the AMI. The data on EBS volumes persists. For more information, see Block Device Mapping.

A software as a service (SaaS) company offers a cloud solution for document management to private law firms and the public sector. A local government client recently mandated that highly confidential documents cannot be stored outside the country. The company CIO asks a Solutions Architect to ensure the application can adapt to this new requirement. The CIO also wants to have a proper backup plan for these documents, as backups are not currently performed. What solution meets these requirements?

A. Tag documents that are not highly confidential as regular in Amazon S3. Create individual S3 buckets for each user. Upload objects to each user's bucket. Set S3 bucket replication from these buckets to a central S3 bucket in a different AWS account and AWS Region. Configure an AWS Lambda function triggered by scheduled events in Amazon CloudWatch to delete objects that are tagged as secret in the S3 backup bucket.
B. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Create a cross-region S3 bucket in a separate AWS account. Set proper IAM roles to allow cross-region permissions to the S3 buckets. Configure an AWS Lambda function triggered by Amazon CloudWatch scheduled events to copy objects that are tagged as secret to the S3 backup bucket and objects tagged as normal to the cross-region S3 bucket.
C. Tag documents as either regular or secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Use S3 selective cross-region replication based on object tags to move regular documents to an S3 bucket in a different AWS Region. Configure an AWS Lambda function that triggers when new S3 objects are created in the main bucket to replicate only documents tagged as secret into the S3 bucket in the same AWS Region.
D. Tag highly confidential documents as secret in Amazon S3. Create an individual S3 backup bucket in the same AWS account and AWS Region. Use S3 selective cross-region replication based on object tags to move regular documents to a different AWS Region. Create an Amazon CloudWatch Events rule for new S3 objects tagged as secret to trigger an AWS Lambda function to replicate them into a separate bucket in the same AWS Region.
Suggested answer: D
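Answer D relies on S3 replication rules filtered by object tag. As a minimal, hypothetical sketch (the bucket name, role ARN, and classification tag are illustrative, not from the source), a replication configuration that copies only documents tagged regular to another Region might look like:

{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateRegularDocumentsCrossRegion",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {
        "Tag": {
          "Key": "classification",
          "Value": "regular"
        }
      },
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::example-backup-bucket-other-region"
      }
    }
  ]
}

Documents tagged secret never match the rule's filter, so they stay in-Region; the CloudWatch Events rule and Lambda function then handle the same-Region backup copy.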

Which of the following is NOT a true statement about Auto Scaling?

A. Auto Scaling can launch instances in different AZs.
B. Auto Scaling can work with CloudWatch.
C. Auto Scaling can launch an instance at a specific time.
D. Auto Scaling can launch instances in different regions.
Suggested answer: D

Explanation:

Auto Scaling provides an option to scale up and scale down based on certain conditions or triggers from CloudWatch. A user can configure Auto Scaling to launch instances across AZs, but it cannot span regions.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/as-dg.pdf
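As a concrete illustration of option C (launching instances at a specific time), Auto Scaling supports scheduled actions. The following is a hypothetical sketch of the JSON input that could be passed to the PutScheduledUpdateGroupAction API (for example, via aws autoscaling put-scheduled-update-group-action --cli-input-json); the group name, schedule, and sizes are placeholders.

{
  "AutoScalingGroupName": "example-asg",
  "ScheduledActionName": "scale-out-weekday-mornings",
  "Recurrence": "0 9 * * 1-5",
  "MinSize": 2,
  "MaxSize": 10,
  "DesiredCapacity": 4
}

The Recurrence field is a Unix cron expression evaluated in UTC: at 09:00 on weekdays the group scales to the desired capacity, which is how Auto Scaling "launches an instance at a specific time" within a single Region.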

A user is configuring MySQL RDS with PIOPS. What should be the minimum size of DB storage provided by the user?

A. 1 TB
B. 50 GB
C. 5 GB
D. 100 GB
Suggested answer: D

Explanation:

If the user is trying to enable PIOPS with MySQL RDS, the minimum size of storage should be 100 GB.

Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.html
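For reference, a Provisioned IOPS MySQL instance could be requested with JSON input like the sketch below, a hypothetical example for the CreateDBInstance API (usable via aws rds create-db-instance --cli-input-json); the identifier, instance class, and credentials are placeholders. The point the question tests is the AllocatedStorage floor of 100 GB when Iops is specified; RDS also enforces a minimum IOPS-to-storage ratio, so the Iops value must be sized against the storage.

{
  "DBInstanceIdentifier": "example-mysql-piops",
  "Engine": "mysql",
  "DBInstanceClass": "db.m5.large",
  "AllocatedStorage": 100,
  "StorageType": "io1",
  "Iops": 1000,
  "MasterUsername": "admin",
  "MasterUserPassword": "replace-with-a-strong-password"
}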

A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.

Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage. Which solution meets these requirements MOST cost-effectively?

A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.
B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster’s CPU metrics in Amazon CloudWatch reach 80%.
C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster’s CPU metrics in Amazon CloudWatch reach 80%.
D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.
Suggested answer: A

A Solutions Architect is redesigning an image-viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual desktop infrastructure (VDI) that runs a desktop image-viewing application and a desktop messaging application. Both applications use a shared database to manage user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user’s machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.

What is the MOST cost-effective architecture that offers both security and ease of management?

A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing. Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.
C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.
D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.
Suggested answer: C

A company had a security event whereby an Amazon S3 bucket with sensitive information was made public. Company policy is to never have public S3 objects, and the Compliance team must be informed immediately when any public objects are identified.

How can the presence of a public S3 object be detected, set to trigger alarm notifications, and automatically remediated in the future? (Choose two.)

A. Turn on object-level logging for Amazon S3. Turn on Amazon S3 event notifications to notify by using an Amazon SNS topic when a PutObject API call is made with a public-read permission.
B. Configure an Amazon CloudWatch Events rule that invokes an AWS Lambda function to secure the S3 bucket.
C. Use the S3 bucket permissions for AWS Trusted Advisor and configure a CloudWatch event to notify by using Amazon SNS.
D. Turn on object-level logging for Amazon S3. Configure a CloudWatch event to notify by using an SNS topic when a PutObject API call with public-read permission is detected in the AWS CloudTrail logs.
E. Schedule a recursive Lambda function to regularly change all object permissions inside the S3 bucket.
Suggested answer: B, D
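Both suggested answers rely on a CloudWatch Events (now Amazon EventBridge) rule that matches S3 API calls recorded by CloudTrail and routes them to an SNS topic or a remediating Lambda function. Below is a minimal sketch of such an event pattern; the set of event names is illustrative, not from the source. Note that bucket-level calls such as PutBucketAcl are CloudTrail management events, while object-level calls such as PutObject appear only if object-level (data event) logging is turned on, which is why answer D pairs the rule with object-level logging.

{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutBucketAcl", "PutBucketPolicy", "PutObjectAcl"]
  }
}

The rule's Lambda target can then inspect the request parameters for public-read grants and reset the ACL, satisfying both the notification and the automatic remediation requirements.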