Amazon PAS-C01 Practice Test - Questions Answers, Page 3

An SAP solutions architect is leading the SAP basis team for a company. The company's SAP landscape includes SAP HANA database instances for the following systems: sandbox, development, quality assurance test (QAT), system performance test (SPT), and production. The sandbox, development, and QAT systems are running on Amazon EC2 On-Demand Instances. The SPT and production systems are running on EC2 Reserved Instances. All the EC2 instances are using Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes.

The entire development team is in the same time zone and works from 8 AM to 6 PM. The sandbox system is for research and testing that are not critical. The SPT and production systems are business critical. The company runs load-testing jobs and stress-testing jobs on the QAT systems overnight to reduce testing duration. The company wants to optimize infrastructure cost for the existing AWS resources. How can the SAP solutions architect meet these requirements with the LEAST amount of administrative effort?

A.
Use a Spot Fleet instead of the Reserved Instances and On-Demand Instances
B.
Use Amazon EventBridge (Amazon CloudWatch Events) and Amazon CloudWatch alarms to stop the development and sandbox EC2 instances from 7 PM every night to 7 AM the next day
C.
Make the SAP basis team available 24 hours a day, 7 days a week to use the AWS CLI to stop and start the development and sandbox EC2 instances manually
D.
Change the EBS volume type to Throughput Optimized HDD (st1) for the /hana/data and /hana/log file systems for the production and non-production SAP HANA databases
Suggested answer: B

Explanation:

B is correct because using Amazon EventBridge (Amazon CloudWatch Events) and Amazon CloudWatch alarms to stop the development and sandbox EC2 instances from 7 PM every night to 7 AM the next day reduces the infrastructure cost for the non-critical systems that are not used during that time. This solution also requires the least amount of administrative effort, as it is automated and does not require manual intervention. Reference: https://docs.aws.amazon.com/whitepapers/latest/sap-on-aws-technical-deployment-guide/cost-optimization.html https://docs.aws.amazon.com/whitepapers/latest/sap-on-aws-technical-deployment-guide/amazon-ec2.html
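As a hedged illustration of answer B, the sketch below (Python with boto3) creates two EventBridge schedule rules that invoke a small Lambda function to stop the development and sandbox instances at 7 PM and start them at 7 AM. The function ARN, account ID, instance IDs, and the use of UTC in the cron expressions are assumptions for the example, not values from the question.

import boto3

events = boto3.client("events")
ec2 = boto3.client("ec2")

LAMBDA_ARN = "arn:aws:lambda:eu-west-1:111122223333:function:sap-ec2-scheduler"  # hypothetical
NONPROD_INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # placeholder dev/sandbox IDs

# Rule that fires at 19:00 every day to stop the non-critical instances.
events.put_rule(Name="stop-sap-nonprod", ScheduleExpression="cron(0 19 * * ? *)", State="ENABLED")
events.put_targets(
    Rule="stop-sap-nonprod",
    Targets=[{"Id": "stop", "Arn": LAMBDA_ARN, "Input": '{"action": "stop"}'}],
)

# Rule that fires at 07:00 every day to start them again before the workday.
events.put_rule(Name="start-sap-nonprod", ScheduleExpression="cron(0 7 * * ? *)", State="ENABLED")
events.put_targets(
    Rule="start-sap-nonprod",
    Targets=[{"Id": "start", "Arn": LAMBDA_ARN, "Input": '{"action": "start"}'}],
)

# Sketch of the Lambda handler the rules would invoke.
def handler(event, context):
    if event.get("action") == "stop":
        ec2.stop_instances(InstanceIds=NONPROD_INSTANCE_IDS)
    elif event.get("action") == "start":
        ec2.start_instances(InstanceIds=NONPROD_INSTANCE_IDS)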


A company is hosting an SAP HANA database on AWS. The company is automating operational tasks, including backup and system refreshes. The company wants to use SAP HANA Studio to perform a data backup of an SAP HANA tenant database to a Backint interface. The SAP HANA database is running in multi-tenant database container (MDC) mode. The company receives the following error message during an attempt to perform the backup.

What should an SAP solutions architect do to resolve this issue?

A.
Set the execute permission for the AWS Backint agent binary aws-backint-agent and for the launcher script aws-backint-agent-launcher.sh in the installation directory
B.
Verify the installation steps. Create symbolic links (symlinks)
C.
Ensure that the catalog_backup_using_backint SAP HANA parameter is set to true. Ensure that the data_backup_parameter_file and log_backup_parameter_file parameters have the correct path location in the global.ini file
D.
Add the SAP HANA system to SAP HANA Studio. Select multiple container mode, and then try to initiate the backup again
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/ja_jp/sap/latest/sap-hana/aws-backint-agent-troubleshooting.html


A company is planning to migrate its on-premises SAP ERP Central Component (SAP ECC) system on SAP HANA to AWS. Each month the system experiences two peaks in usage. The first peak is on the 21st day of the month when the company runs payroll. The second peak is on the last day of the month when the company processes and exports credit data. Both peak workloads are of high importance and cannot be rescheduled. The current SAP ECC system has six application servers, all of a similar size. During normal operation outside of peak usage, four application servers would suffice. Which purchasing option will meet the company's requirements MOST cost-effectively on AWS?

A.
Four Reserved Instances and two Spot Instances
B.
Six On-Demand Instances
C.
Six Reserved Instances
D.
Four Reserved Instances and two On-Demand Instances
Suggested answer: D

Explanation:

D is correct because using four Reserved Instances and two On-Demand Instances provides the most cost-effective purchasing option for the company. Reserved Instances offer lower prices than On-Demand Instances for the four application servers that are needed for normal operation. On-Demand Instances offer flexibility and scalability for the two additional application servers that are needed only during peak usage. Spot Instances are not suitable for high-importance workloads that cannot be rescheduled, as they can be interrupted at any time. Reference: https://docs.aws.amazon.com/whitepapers/latest/sap-on-aws-technical-deployment-guide/cost-optimization.html https://docs.aws.amazon.com/whitepapers/latest/sap-on-aws-technical-deployment-guide/amazon-ec2.html


A company hosts its SAP NetWeaver workload on SAP HANA in the AWS Cloud. The SAP NetWeaver application is protected by a cluster solution that uses the Red Hat Enterprise Linux High Availability Add-On. The cluster solution uses an overlay IP address to ensure that the high availability cluster is still accessible during failover scenarios. An SAP solutions architect needs to facilitate the network connection to this overlay IP address from multiple locations. These locations include more than 25 VPCs, other AWS Regions, and the on-premises environment. The company already has set up an AWS Direct Connect connection between the on-premises environment and AWS.

What should the SAP solutions architect do to meet these requirements in the MOST scalable manner?

A.
Use VPC peering between the VPCs to route traffic between them
B.
Use AWS Transit Gateway to connect the VPCs and on-premises networks together
C.
Use a Network Load Balancer to route connections to various targets within VPCs
D.
Deploy a Direct Connect gateway to connect the Direct Connect connection over a private VIF to one or more VPCs in any accounts
Suggested answer: B

Explanation:

AWS Transit Gateway allows the SAP solutions architect to connect multiple VPCs and on-premises networks together in a scalable manner. It acts as a hub that controls how traffic is routed between the connected networks. By attaching the VPCs and the on-premises environment to the Transit Gateway, the SAP solutions architect can establish a single connection to the overlay IP address in the high availability cluster, ensuring that the cluster is accessible from all locations.
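As a rough boto3 sketch of this hub-and-spoke design, the snippet below creates a transit gateway, attaches one VPC, and adds a static route for an assumed overlay IP of 192.168.10.10. All resource IDs and the overlay IP are placeholders; in practice the VPC attachment and route would be repeated for each of the 25+ VPCs, and a VPN or Direct Connect gateway attachment would carry the on-premises traffic.

import boto3

ec2 = boto3.client("ec2")

# The transit gateway acts as the central routing hub for all VPCs and on-premises networks.
tgw = ec2.create_transit_gateway(
    Description="SAP overlay IP hub",
    Options={"DefaultRouteTableAssociation": "enable", "DefaultRouteTablePropagation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one VPC (repeat for every VPC that needs to reach the cluster).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc1234def567890",           # placeholder
    SubnetIds=["subnet-0abc1234def567890"],  # placeholder
)

# In each attached VPC's route table, send the overlay IP to the transit gateway.
ec2.create_route(
    RouteTableId="rtb-0abc1234def567890",    # placeholder
    DestinationCidrBlock="192.168.10.10/32", # assumed overlay IP
    TransitGatewayId=tgw_id,
)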


A company is implementing SAP HANA on AWS. According to the company's security policy, SAP backups must be encrypted. Only authorized team members can have the ability to decrypt the SAP backups. What is the MOST operationally efficient solution that meets these requirements?

A.
Configure AWS Backint Agent for SAP HANA to create SAP backups in an Amazon S3 bucket. After a backup is created, encrypt the backup by using client-side encryption. Share the encryption key with authorized team members only
B.
Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Create a key policy to grant decryption permission to authorized team members only
C.
Configure AWS Storage Gateway to transfer SAP backups from a file system to an Amazon S3 bucket. Use an S3 bucket policy to grant decryption permission to authorized team members only
D.
Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Grant object ACL decryption permission to authorized team members only
Suggested answer: B

Explanation:

This is the most operationally efficient solution that meets the company's security policy requirements. AWS KMS is a service that enables you to create and manage encryption keys that are used to encrypt and decrypt data. By configuring AWS Backint Agent for SAP HANA to use AWS KMS for SAP backups, the company can ensure that the backups are encrypted at rest and that only authorized team members have the ability to decrypt them. The key policy allows the company to define which team members are authorized to use the key to decrypt the backups. This approach is operationally efficient because it does not require the company to manually encrypt and decrypt backups, and it lets the company manage access to the encryption key through the key policy, without the need for sharing encryption keys.
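A minimal boto3 sketch of answer B follows. It creates a customer managed KMS key whose key policy grants kms:Decrypt only to an assumed SAPBackupAdmins role; the account ID and role name are placeholders, and AWS Backint Agent would then be configured to use this key for its S3 backups.

import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Keep key administration with the account so the key remains manageable.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Only the authorized team role may decrypt the SAP backups.
            "Sid": "AllowDecryptForAuthorizedTeam",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/SAPBackupAdmins"},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(
    Description="SAP HANA backup encryption key",
    Policy=json.dumps(key_policy),
)
print(key["KeyMetadata"]["Arn"])  # reference this key ARN in the Backint Agent configuration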


A data analysis company has two SAP landscapes that consist of sandbox, development, QA, pre-production, and production servers. One landscape is on Windows, and the other landscape is on Red Hat Enterprise Linux. The servers reside in a room in a building that other tenants share.

An SAP solutions architect proposes to migrate the SAP applications to AWS. The SAP solutions architect wants to move the production backups to AWS and wants to make the backups highly available to restore in case of unavailability of an on-premises server.

Which solution will meet these requirements MOST cost-effectively?

A.
Take a backup of the production servers. Implement an AWS Storage Gateway Volume Gateway. Create file shares by using the Storage Gateway Volume Gateway. Copy the backup files to the file shares through NFS and SMB.
B.
Take a backup of the production servers. Send those backups to tape drives. Implement an AWS Storage Gateway Tape Gateway. Send the backups to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) through the S3 console. Move the backups immediately to S3 Glacier Deep Archive
C.
Implement a third-party tool to take images of the SAP application servers and database server. Take regular snapshots at 1-hour intervals, and send the snapshots to Amazon S3 Glacier directly through the S3 Glacier console. Store the same images in different S3 buckets in different AWS Regions
D.
Take a backup of the production servers. Implement an Amazon S3 File Gateway. Create file shares by using the S3 File Gateway. Copy the backup files to the file shares through NFS and SMB. Map backup files directly to Amazon S3. Configure an S3 Lifecycle policy to send the backup files to S3 Glacier based on the company's data retention policy
Suggested answer: D

Explanation:

Take a backup of the production servers, implement an Amazon S3 File Gateway, create file shares by using the S3 File Gateway, copy the backup files to the file shares through NFS and SMB, map the backup files directly to Amazon S3, and configure an S3 Lifecycle policy to send the backup files to S3 Glacier based on the company's data retention policy. This option is cost-effective because it avoids the need for third-party tools, tape drives, and Tape or Volume Gateways, and it reduces the amount of time and resources needed for the migration process. Additionally, the S3 Lifecycle policy allows you to automate the storage and archiving process and ensure that the data is stored in the most cost-effective way.
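To illustrate the lifecycle part of answer D, the boto3 call below transitions objects under a backups/ prefix to S3 Glacier Flexible Retrieval after 30 days and expires them after a year. The bucket name, prefix, and retention periods are assumptions standing in for the company's actual data retention policy.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="sap-prod-backups",  # placeholder bucket behind the S3 File Gateway
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-sap-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                # Move backups to Glacier after 30 days, then delete them after a year.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)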


A company's SAP basis team is responsible for database backups in Amazon S3. The company frequently needs to restore the last 3 months of backups into the pre-production SAP system to perform tests and analyze performance. Previously an employee accidentally deleted backup files from the S3 bucket. The SAP basis team wants to prevent accidental deletion of backup files in the future. Which solution will meet these requirements?

A.
Create a new resource-based policy that prevents deletion of the S3 bucket
B.
Enable versioning and multi-factor authentication (MFA) on the S3 bucket
C.
Create signed cookies for the backup files in the S3 bucket. Provide the signed cookies to authorized users only
D.
Apply an S3 Lifecycle policy to move the backup files immediately to S3 Glacier
Suggested answer: B

Explanation:
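S3 Versioning retains prior versions of an object, so a delete request only adds a delete marker instead of destroying the backup, and MFA Delete additionally requires a valid MFA code to permanently delete an object version or change the bucket's versioning state. A minimal boto3 sketch is shown below; the bucket name and the MFA serial number/token are placeholders, and MFA Delete can only be enabled with the bucket owner's root credentials.

import boto3

s3 = boto3.client("s3")

# Enable versioning together with MFA Delete on the backup bucket.
s3.put_bucket_versioning(
    Bucket="sap-prod-backups",  # placeholder
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",  # serial number + current code (placeholder)
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)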


An SAP solutions architect is designing an SAP HANA scale-out architecture for SAP Business Warehouse (SAP BW) on SAP HANA on AWS. The SAP solutions architect identifies the design as a three-node scale-out deployment of x1e.32xlarge Amazon EC2 instances. The SAP solutions architect must ensure that the SAP HANA scale-out nodes can achieve the low-latency and high-throughput network performance that are necessary for node-to-node communication. Which combination of steps should the SAP solutions architect take to meet these requirements?

(Select TWO.)

A.
Create a cluster placement group. Launch the instances into the cluster placement group
B.
Create a spread placement group. Launch the instances into the spread placement group
C.
Create a partition placement group. Launch the instances into the partition placement group
D.
Based on the operating system version, verify that enhanced networking is enabled on all the nodes
E.
Switch to a different instance family that provides network throughput that is greater than 25 Gbps
Suggested answer: A, D

Explanation:

A cluster placement group is an Amazon EC2 feature that enables low-latency and high-throughput network performance for the instances that are launched into the group. This is achieved by placing the instances physically close together within a single Availability Zone, which suits the node-to-node communication of an SAP HANA scale-out cluster. In addition, enhanced networking (the Elastic Network Adapter) must be enabled at the operating system level on every node so that each instance can use its full network bandwidth with lower latency and less jitter.
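A hedged boto3 sketch of answer A follows; the AMI, subnet, and key pair are placeholders. The three x1e.32xlarge nodes are launched into the same cluster placement group so that their node-to-node traffic stays within one low-latency network segment.

import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together within one Availability Zone.
ec2.create_placement_group(GroupName="hana-scale-out", Strategy="cluster")

# Launch the three SAP HANA scale-out nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder SLES/RHEL for SAP AMI
    InstanceType="x1e.32xlarge",
    MinCount=3,
    MaxCount=3,
    KeyName="sap-key",                    # placeholder
    SubnetId="subnet-0abc1234def567890",  # placeholder
    Placement={"GroupName": "hana-scale-out"},
)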


A company needs to migrate its critical SAP workloads from an on-premises data center to AWS. The company has a few source production databases that are 10 TB or more in size. The company wants to minimize the downtime for this migration. As part of the proof of concept, the company used a low-speed, high-latency connection between its data center and AWS. During the actual migration, the company wants to maintain a consistent connection that delivers high bandwidth and low latency. The company also wants to add a layer of connectivity resiliency. The backup connectivity does not need to be as fast as the primary connectivity. An SAP solutions architect needs to determine the optimal network configuration for data transfer.

The solution must transfer the data with minimum latency

Which configuration will meet these requirements?

A.
Set up one AWS Direct Connect connection for connectivity between the on-premises data center and AWS. Add an AWS Site-to-Site VPN connection as a backup to the Direct Connect connection
B.
Set up an AWS Direct Connect gateway with multiple Direct Connect connections that use a link aggregation group (LAG) between the on-premises data center and AWS
C.
Set up Amazon Elastic File System (Amazon EFS) file system storage between the on-premises data center and AWS. Configure a cron job to copy the data into this EFS mount. Access the data in the EFS file system from the target environment
D.
Set up two redundant AWS Site-to-Site VPN connections for connectivity between the on-premises data center and AWS
Suggested answer: A
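No explanation is given for this answer, but the reasoning is that AWS Direct Connect provides the consistent, high-bandwidth, low-latency path needed to move the 10 TB+ databases, while a Site-to-Site VPN over the internet is an inexpensive backup path that does not have to match the primary's performance. The boto3 sketch below creates the VPN side of that design; the customer gateway's public IP, the ASN, and the virtual private gateway ID are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Describe the on-premises VPN endpoint (public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(BgpAsn=65000, PublicIp="203.0.113.12", Type="ipsec.1")

# Attach the Site-to-Site VPN to the same virtual private gateway that terminates the
# Direct Connect private VIF, so the VPN can take over if the primary link fails.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0abc1234def567890",  # placeholder virtual private gateway
    Options={"StaticRoutesOnly": False},   # use BGP so failover is automatic
)
print(vpn["VpnConnection"]["VpnConnectionId"])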

A company wants to migrate its SAP ERP landscape to AWS. The company will use a highly available, distributed deployment for the new architecture. Clients will access SAP systems from a local data center through an AWS Site-to-Site VPN connection that is already in place. An SAP solutions architect needs to design the network access to the SAP production environment. Which configuration approaches will meet these requirements? (Select TWO.)

A.
For the ASCS instance, configure an overlay IP address that is within the production VPC CIDR range. Create an AWS Transit Gateway. Attach the VPN to the transit gateway. Use the transit gateway to route the communications between the local data center and the production VPC. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance
B.
For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create an AWS Transit Gateway. Attach the VPN to the transit gateway. Use the transit gateway to route the communications between the local data center and the production VPC. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance
C.
For the ASCS instance, configure an overlay IP address that is within the production VPC CIDR range. Create a target group that points to the overlay IP address. Create a Network Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance
D.
For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create a target group that points to the overlay IP address. Create a Network Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance
E.
For the ASCS instance, configure an overlay IP address that is outside the production VPC CIDR range. Create a target group that points to the overlay IP address. Create an Application Load Balancer, and register the target group. Create a static route on the production VPC to route traffic that is directed to the overlay IP address to the ASCS instance.
Suggested answer: B, D
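No explanation is provided here, but the common thread in answers B and D is that the overlay IP address must come from outside the production VPC CIDR range so that it can float between cluster nodes purely through route table entries; a transit gateway (B) or a Network Load Balancer with an IP-type target group (D) then makes that address reachable from the on-premises network over the VPN. The boto3 sketch below shows the NLB variant with an assumed overlay IP of 192.168.10.10 and an assumed ASCS message server port of 3600; all resource IDs are placeholders.

import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")

OVERLAY_IP = "192.168.10.10"  # assumed overlay IP, outside the production VPC CIDR

# IP-type target group pointing at the overlay IP. AvailabilityZone="all" is required
# because the registered address is outside the VPC CIDR.
tg = elbv2.create_target_group(
    Name="sap-ascs-overlay",
    Protocol="TCP",
    Port=3600,
    VpcId="vpc-0abc1234def567890",  # placeholder production VPC
    TargetType="ip",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": OVERLAY_IP, "Port": 3600, "AvailabilityZone": "all"}],
)

# Internal Network Load Balancer and TCP listener that forwards to the overlay IP.
nlb = elbv2.create_load_balancer(
    Name="sap-ascs-nlb",
    Type="network",
    Scheme="internal",
    Subnets=["subnet-0abc1234def567890"],  # placeholder
)
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=3600,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Static route so that traffic to the overlay IP reaches the active ASCS instance.
ec2.create_route(
    RouteTableId="rtb-0abc1234def567890",  # placeholder
    DestinationCidrBlock=OVERLAY_IP + "/32",
    InstanceId="i-0123456789abcdef0",      # placeholder ASCS instance
)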