AWS Certified SAP on AWS - Specialty PAS-C01
Question 1
A global enterprise is running SAP ERP Central Component (SAP ECC) workloads on Oracle in an on-premises environment. The enterprise plans to migrate to SAP S/4HANA on AWS.
The enterprise recently acquired two other companies. One of the acquired companies is running SAP ECC on Oracle as its ERP system. The other acquired company is running an ERP system that is not from SAP. The enterprise wants to consolidate the three ERP systems into one ERP system on SAP S/4HANA on AWS. Not all the data from the acquired companies needs to be migrated to the final ERP system. The enterprise needs to complete this migration with a solution that minimizes cost and maximizes operational efficiency.
Which solution will meet these requirements?
- A: Perform a lift-and-shift migration of all the systems to AWS. Migrate the ERP system that is not from SAP to SAP ECC. Convert all three systems to SAP S/4HANA by using SAP Software Update Manager (SUM) Database Migration Option (DMO). Consolidate all three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems.
- B: Perform a lift-and-shift migration of all the systems to AWS. Migrate the enterprise's initial system to SAP HANA, and then perform a conversion to SAP S/4HANA. Consolidate the two systems from the acquired companies with this SAP S/4HANA system by using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT).
- C: Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect the enterprise’s initial system to SAP S/4HANA and to change the platform to AWS. Consolidate the two systems from the acquired companies with this SAP S/4HANA system by using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT).
- D: Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect all the systems to SAP S/4HANA and to change the platform to AWS. Consolidate all three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems.
Question 2
A company is running an SAP on Oracle system on IBM Power architecture in an on-premises data center. The company wants to migrate the SAP system to AWS. The Oracle database is 15 TB in size. The company has set up a 100 Gbps AWS Direct Connect connection to AWS from the on-premises data center.
Which solution should the company use to migrate the SAP system MOST quickly?
- A: Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. During the migration window, export a copy of the SAP system and database by using the heterogeneous system copy process and R3load. Copy the output of the SAP system files to AWS through the Direct Connect connection. Import the SAP system to the new SAP installation on AWS. Switch over to the SAP system on AWS.
- B: Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. Back up the Oracle database by using native Oracle tools. Copy the backup of the Oracle database to AWS through the Direct Connect connection. Import the Oracle database to the SAP system on AWS. Configure Oracle Data Guard to begin replicating on-premises database log changes from the SAP system to the new AWS system. During the migration window, use Oracle to replicate any remaining changes to the Oracle database hosted on AWS. Switch over to the SAP system on AWS.
- C: Before the migration window, build a new installation of the SAP system on AWS by using SAP Software Provisioning Manager. Create a staging Oracle database on premises to perform Cross Platform Transportable Tablespace (XTTS) conversion on the Oracle database. Take a backup of the converted staging database. Copy the converted backup to AWS through the Direct Connect connection. Import the Oracle database backup to the SAP system on AWS. Take regularly scheduled incremental backups and XTTS conversions of the staging database. Transfer these backups and conversions to the AWS target database. During the migration window, perform a final incremental Oracle backup. Convert the final Oracle backup by using XTTS. Replay the logs in the target Oracle database hosted on AWS. Switch over to the SAP system on AWS.
- D: Before the migration window, launch an appropriately sized Amazon EC2 instance on AWS to receive the migrated SAP database. Create an AWS Server Migration Service (AWS SMS) job to take regular snapshots of the on-premises Oracle hosts. Use AWS SMS to copy the snapshot as an AMI to AWS through the Direct Connect connection. Create a new SAP on Oracle system by using the migrated AMI. During the migration window, take a final incremental SMS snapshot and copy the snapshot to AWS. Restart the SAP system by using the new up-to-date AMI. Switch over to the SAP system on AWS.
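For a sense of scale in this question, a quick back-of-the-envelope calculation (Python, illustrative only) shows why the 100 Gbps Direct Connect link makes the raw transfer a small part of the migration window; the utilization fractions are assumptions, since real transfers rarely sustain line rate:

```python
# Rough transfer-time estimate for the 15 TB database over a 100 Gbps link.
# The utilization fractions are assumptions; achievable throughput depends on
# the tooling, TCP tuning, and how much of the link the migration can claim.
DB_BITS = 15 * 10**12 * 8            # 15 TB expressed in bits
LINK_BPS = 100 * 10**9               # 100 Gbps

for utilization in (1.0, 0.5, 0.1):
    hours = DB_BITS / (LINK_BPS * utilization) / 3600
    print(f"at {utilization:.0%} of line rate: ~{hours:.1f} hours")
```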
Question 3
A company plans to move its SAP systems from on premises to AWS to reduce infrastructure costs. The company is willing to make a 3-year commitment. However, the company wants to have maximum flexibility for the selection of Amazon EC2 instances across AWS Regions, instance families, and instance sizes.
Which purchasing option will meet these requirements at the LOWEST cost?
- A: Spot Instances
- B: 3-year Compute Savings Plan
- C: 3-year EC2 Instance Savings Plan
- D: 3-year Reserved Instances
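For context on option B: Compute Savings Plans are the commitment tier that preserves flexibility across Regions, instance families, and sizes. A hedged boto3 sketch of browsing the 3-year offerings follows; it assumes the `savingsplans` client's `describe_savings_plans_offerings` call, with the duration given in seconds (3 years = 94,608,000):

```python
import boto3

# Sketch: list 3-year Compute Savings Plan offerings to compare terms before
# committing. Parameter names follow the Savings Plans API's camelCase style.
sp = boto3.client("savingsplans")
resp = sp.describe_savings_plans_offerings(
    planTypes=["Compute"],         # applies across Regions, families, and sizes
    durations=[94608000],          # 3-year term, in seconds
    paymentOptions=["No Upfront"],
)
print(resp)
```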
Question 4
A company runs its SAP Business Suite on SAP HANA systems on AWS. The company's production SAP ERP Central Component (SAP ECC) system uses an x1e.32xlarge (memory optimized) Amazon EC2 instance and is 3.5 TB in size.
Because of expected future growth, the company needs to resize the production system to use a u-* EC2 High Memory instance. The company must resize the system as quickly as possible and must minimize downtime during the resize activities.
Which solution will meet these requirements?
- A: Resize the instance by using the AWS Management Console or the AWS CLI.
- B: Create an AMI of the source system. Launch a new EC2 High Memory instance that is based on that AMI.
- C: Launch a new EC2 High Memory instance. Install and configure SAP HANA on the new instance by using AWS Launch Wizard for SAP. Use SAP HANA system replication to migrate the data to the new instance.
- D: Launch a new EC2 High Memory instance. Install and configure SAP HANA on the new instance by using AWS Launch Wizard for SAP. Use SAP HANA backup and restore to back up the source system directly to Amazon S3 and to migrate the data to the new instance.
Question 5
A company deploys its SAP ERP system on AWS in a highly available configuration across two Availability Zones. The cluster is configured with an overlay IP address and a Network Load Balancer (NLB) to provide all users with access to the SAP application layer. The company's analytics team has created several Operational Data Provisioning (ODP) extractor services for the SAP ERP system.
A highly available ETL system will call the ODP extractor services. The ETL system is hosted on Amazon EC2 instances that are deployed in an analytics VPC in a different AWS account. An SAP solutions architect needs to prevent the ODP extractor services from being used as an attack vector to overload the SAP ERP system.
Which solution will provide the MOST protection for the ODP extractor services?
- A: Configure VPC peering between the SAP VPC and the analytics VPC. Use network ACL rules in the SAP VPC to allow traffic to the NLB from only authorized sources: the analytics VPC CIDR block and the SAP end users' network CIDR block.
- B: Create a transit gateway in the SAP account. Share the transit gateway with the analytics account. Attach the SAP VPC and the analytics VPC to the transit gateway. Use network ACL rules in the SAP VPC to allow traffic to the NLB from only authorized sources: the analytics VPC CIDR block and the SAP end users' network CIDR block.
- C: Configure VPC peering between the SAP VPC and the analytics VPC. Update the NLB security group rules to accept traffic only from authorized sources: the ETL instances' CIDR block and the SAP end users' network CIDR block.
- D: Create a VPC endpoint service configuration on the SAP VPC. Specify the NLB in the endpoint configuration. In the analytics account, create an IAM role that has permission to create a connection to the endpoint service. Attach the role to the ETL instances. While logged in to the ETL instances, programmatically create an interface endpoint to the NLB. Accept the request to activate the interface connection.
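A hedged boto3 sketch of the PrivateLink pattern described in option D follows: publish the NLB as a VPC endpoint service in the SAP account, then connect to it from the analytics account through an interface endpoint. All ARNs, IDs, and the `analytics` profile name are placeholders:

```python
import boto3

# In the SAP account: expose the NLB through a VPC endpoint service.
ec2 = boto3.client("ec2")
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/net/sap-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=True,  # each connection request must be approved
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# In the analytics account (separate credentials): create an interface
# endpoint that connects to that service.
analytics_ec2 = boto3.Session(profile_name="analytics").client("ec2")
analytics_ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",            # analytics VPC (placeholder)
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder
    ServiceName=service_name,
)
```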
Question 6
A company wants to migrate a native SAP HANA database to AWS. The database ingests large amounts of data every month, and the size of the database is growing rapidly.
The company needs to store data for 10 years to meet a regulatory requirement. The company uses data from the last 2 years frequently in several reports. This recent data is critical and must be accessed quickly. The data that is 3-6 years old is used a few times a year and can be accessed in a longer time frame. The data that is more than 6 years old is rarely used and also can be accessed in a longer time frame.
Which combination of steps will meet these requirements? (Choose three.)
- A: Keep the frequently accessed data from the last 2 years in a hot tier on an SAP HANA certified Amazon EC2 instance.
- B: Move the frequently accessed data from the last 2 years to SAP Information Lifecycle Management (ILM) with SAP IQ.
- C: Move the less frequently accessed data that is 3-6 years old to a warm tier on Amazon Elastic File System (Amazon EFS) by using SAP HANA dynamic tiering.
- D: Move the less frequently accessed data that is 3-6 years old to a warm tier on Amazon Elastic File System (Amazon EFS) by using data aging.
- E: Move the rarely accessed data that is more than 6 years old to a cold tier on Amazon S3 by using SAP Data Hub.
- F: Move the rarely accessed data that is more than 6 years old to a cold tier on SAP BW Near Line Storage (NLS) with Apache Hadoop.
Question 7
An SAP engineer is designing a storage configuration for an SAP S/4HANA production system on AWS. The system will run on an Amazon EC2 instance with a memory size of 2 TB. The SAP HANA sizing report recommends storage of 2,400 GB for data and 512 GB for logs. The system requires 9,000 IOPS for data storage and throughput of 300 MBps for log storage.
Which Amazon Elastic Block Store (Amazon EBS) volume configuration will meet these requirements MOST cost-effectively?
- A: For /hana/data, use two 900 GB Provisioned IOPS SSD (io1) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 512 GB General Purpose SSD (gp3) EBS volume that is configured with the required throughput.
- B: For /hana/data, use one 2,400 GB General Purpose SSD (gp3) EBS volume that is configured with the required IOPS. For /hana/log, use one 512 GB gp3 EBS volume that is configured with the required throughput.
- C: For /hana/data, use two 1,200 GB Provisioned IOPS SSD (io2) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 525 GB io2 EBS volume that is configured with the required throughput.
- D: For /hana/data, use two 1,200 GB General Purpose SSD (gp3) EBS volumes that are configured with RAID 0 striping and the required IOPS. For /hana/log, use one 512 GB gp3 EBS volume that is configured with the required throughput.
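By way of illustration, gp3 is the volume type that lets you provision size, IOPS, and throughput independently, which is what makes a layout like option D's possible. A minimal boto3 sketch, with the Availability Zone and the split of IOPS across the striped data volumes as assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Two 1,200 GB gp3 data volumes to be striped with RAID 0 on the instance;
# each provisions half of the required 9,000 IOPS.
for _ in range(2):
    ec2.create_volume(
        AvailabilityZone="us-east-1a",  # placeholder
        Size=1200,
        VolumeType="gp3",
        Iops=4500,
        Throughput=500,
    )

# One 512 GB gp3 log volume with the required 300 MBps throughput.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=512,
    VolumeType="gp3",
    Iops=3000,        # gp3 baseline IOPS
    Throughput=300,
)
```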
Question 8
A company is running SAP on premises and is using hard disk drive (HDD) cost-optimized storage to store SAP HANA archive files. The company directly mounts these disks as local file systems. The company also backs up the archives on a regular basis.
The company needs to migrate this setup to AWS.
Which solution will meet these requirements MOST cost-effectively?
- A: Use General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives.
- B: Use Provisioned IOPS SSD (io1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Back up the archives to Cold HDD (sc1) EBS volumes.
- C: Use Provisioned IOPS SSD (io1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives.
- D: Use Cold HDD (sc1) Amazon Elastic Block Store (Amazon EBS) volumes as the archive destination. Use Amazon S3 for backups. Use S3 Glacier for long-term retention of the archives.
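Several options pair Amazon S3 backups with S3 Glacier for long-term retention; the usual mechanism is a lifecycle rule. A minimal boto3 sketch, with the bucket name, prefix, and 90-day threshold as placeholders:

```python
import boto3

# Sketch: transition archive backups to S3 Glacier Flexible Retrieval after
# 90 days. Bucket, prefix, and threshold are placeholders.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="sap-hana-archive-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "archives/"},
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```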
Question 9
An SAP consultant is planning a migration of an on-premises SAP landscape to AWS. The landscape includes databases from Oracle, IBM Db2, and Microsoft SQL Server. The system copy procedure accesses the copied data on the destination system to complete the copy.
Which password must the SAP consultant obtain from the source system before the SAP consultant initiates the export or backup?
- A: The password of the <sid>adm operating system user
- B: The password of the SAP* user in client 000
- C: The password of the administrator user of the database
- D: The password of the DDIC user in client 000
Question 10
A company recently migrated its SAP workload to AWS. The company's SAP engineer implements SAProuter on an Amazon EC2 instance that runs SUSE Linux Enterprise Server. The EC2 instance is in a public subnet and is an On-Demand Instance. The SAP engineer performs all the necessary configurations for SAProuter, security groups, and route tables.
The SAProuter system needs to be online and available only when SAP Support is needed. The SAP engineer performs an initial test to validate SAP Support connectivity with SAProuter. The test is successful, and the SAP engineer stops the EC2 instance.
When an event occurs that causes the company to need SAP Support, the company starts the EC2 instance that hosts SAProuter. After the EC2 instance is running, the SAP Support team cannot establish connectivity with SAProuter.
What should the SAP engineer do to permanently resolve this issue?
- A: Re-install SAProuter on an EC2 instance in a private subnet. Update the SAProuter configuration with the instance's private IP address. Deploy a managed NAT gateway for AWS. Route SAP connectivity through the NAT gateway.
- B: Allocate an Elastic IP address to the EC2 instance that hosts SAProuter. Update the SAProuter configuration with the Elastic IP address.
- C: Modify the security group that is associated with the EC2 instance that hosts SAProuter to allow access to all ports from the 0.0.0.0/0 CIDR block.
- D: Update the SAProuter configuration with the private IP address of the EC2 instance that hosts SAProuter.
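The failure mode here is that a stopped-and-started EC2 instance receives a new public IP unless one is pinned to it. A minimal boto3 sketch of option B's fix, with the instance ID as a placeholder:

```python
import boto3

# Sketch: allocate an Elastic IP and associate it with the SAProuter instance
# so the public address survives stop/start cycles.
ec2 = boto3.client("ec2")
alloc = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",   # SAProuter instance (placeholder)
    AllocationId=alloc["AllocationId"],
)
print("SAProuter public IP:", alloc["PublicIp"])
```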
Question 11
A company is evaluating options to migrate its on-premises SAP ERP Central Component (SAP ECC) EHP 8 system to AWS. The company does not want to make any changes to the SAP versions or database versions. The system runs on SUSE Linux Enterprise Server and SAP HANA 2.0 SPS 05. The existing on-premises system has a 1 TB database.
The company has 1 Gbps of internet bandwidth available for the migration. The company must complete the migration with the least possible downtime and disruption to business.
Which solution will meet these requirements?
- A: Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version of SAP HANA that the source system uses. Take a full backup of the source SAP HANA database to disk. Copy the backup by using an AWS Storage Gateway Tape Gateway. Restore the backup on the target SAP HANA instance that is running on Amazon EC2.
- B: Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version of SAP HANA that the source database uses. Establish replication at the source, and register the SAP HANA instance that is running on Amazon EC2 as secondary. After the systems are synchronized, initiate a takeover so that the SAP HANA instance that is running on Amazon EC2 becomes primary. Shut down the on-premises system. Start SAP on the EC2 instances.
- C: Install SAP ECC EHP 8 on Amazon EC2 instances. Use the same SAP SID and kernel version that the source system uses. Install SAP HANA on EC2 instances. Use the same version that the source system uses. Take a full offline backup of the source SAP HANA database. Copy the backup to Amazon S3 by using the AWS CLI. Restore the backup on a target SAP HANA instance that runs on Amazon EC2. Start SAP on the EC2 instances.
- D: Take an offline SAP Software Provisioning Manager export of the on-premises system. Use an AWS Storage Gateway File Gateway to transfer the export. Import the export on Amazon EC2 instances to create the target SAP system.
Question 12
A company is running its SAP system on AWS with a secondary SAP HANA database in a sidecar setup. The company requires high IOPS for write performance on its Amazon Elastic Block Store (Amazon EBS) volumes for the secondary SAP HANA database.
The EBS volume that the company uses for its SAP HANA data volume cannot provide the required IOPS. Instance bandwidth for the Amazon EC2 instance that is hosting the SAP HANA database is sufficient. An SAP solutions architect needs to propose a solution to resolve the IOPS performance issue.
Which solution will achieve the required IOPS?
- A: Replace the EBS storage with EC2 instance store storage.
- B: Create a RAID 0 configuration with several EBS volumes.
- C: Use Amazon EC2 Auto Scaling to launch Spot Instances.
- D: Create a placement group with several EBS volumes.
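For reference, RAID 0 across several EBS volumes aggregates their provisioned IOPS (at the cost of any single volume failure taking down the array). A hedged boto3 sketch of creating the stripe members, with the AZ, volume count, and per-volume figures as assumptions; the striping itself happens in the OS, for example with mdadm:

```python
import boto3

# Sketch: create four gp3 volumes whose provisioned IOPS sum to the target,
# then stripe them with RAID 0 on the instance. All figures are placeholders.
ec2 = boto3.client("ec2")
volumes = [
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=600,
        VolumeType="gp3",
        Iops=8000,
        Throughput=500,
    )["VolumeId"]
    for _ in range(4)
]
print("Stripe these with RAID 0 on the instance:", volumes)
# On the instance (illustrative):
#   mdadm --create /dev/md0 --level=0 --raid-devices=4 \
#       /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
```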
Question 13
An SAP solutions architect is designing an SAP HANA scale-out architecture for SAP Business Warehouse (SAP BW) on SAP HANA on AWS. The SAP solutions architect identifies the design as a three-node scale-out deployment of x1e.32xlarge Amazon EC2 instances.
The SAP solutions architect must ensure that the SAP HANA scale-out nodes can achieve the low-latency and high-throughput network performance that are necessary for node-to-node communication.
Which combination of steps should the SAP solutions architect take to meet these requirements? (Choose two.)
- A: Create a cluster placement group. Launch the instances into the cluster placement group.
- B: Create a spread placement group. Launch the instances into the spread placement group.
- C: Create a partition placement group. Launch the instances into the partition placement group.
- D: Based on the operating system version, verify that enhanced networking is enabled on all the nodes.
- E: Switch to a different instance family that provides network throughput that is greater than 25 Gbps.
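A minimal boto3 sketch of the cluster placement group approach in option A, with the AMI ID as a placeholder:

```python
import boto3

# Sketch: create a cluster placement group and launch the three scale-out
# nodes into it for low-latency, high-throughput node-to-node networking.
ec2 = boto3.client("ec2")
ec2.create_placement_group(GroupName="hana-scaleout", Strategy="cluster")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder
    InstanceType="x1e.32xlarge",
    MinCount=3,
    MaxCount=3,
    Placement={"GroupName": "hana-scaleout"},
)
```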
Question 14
A company is preparing a greenfield deployment of SAP S/4HANA on AWS. The company wants to ensure that this new SAP S/4HANA landscape is fully supported by SAP. The company's SAP solutions architect needs to set up a new SAProuter connection directly to SAP from the company's landscape within the VPC.
Which combination of steps must the SAP solutions architect take to accomplish this goal? (Choose three.)
- A: Launch the instance that the SAProuter software will be installed on into a private subnet of the VPC. Assign the instance an Elastic IP address.
- B: Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC. Assign the VPC an Elastic IP address.
- C: Launch the instance that the SAProuter software will be installed on into a public subnet of the VPC. Assign the instance an overlay IP address.
- D: Create a specific security group for the SAProuter instance. Configure rules to allow the required inbound and outbound access to the SAP support network. Include a rule that allows inbound traffic to TCP port 3299.
- E: Create a specific security group for the SAProuter instance. Configure rules to allow the required inbound and outbound access to the SAP support network. Include a rule that denies inbound traffic to TCP port 3299.
- F: Use a Secure Network Communication (SNC) internet connection.
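SAProuter listens on TCP port 3299, which is what the inbound rule in option D must allow. A minimal boto3 sketch; the VPC ID is a placeholder, and the CIDR shown is a documentation range standing in for SAP's published support network addresses:

```python
import boto3

# Sketch: dedicated SAProuter security group with inbound TCP 3299 allowed
# from the SAP support network.
ec2 = boto3.client("ec2")
sg = ec2.create_security_group(
    GroupName="saprouter-sg",
    Description="SAProuter support connectivity",
    VpcId="vpc-0123456789abcdef0",   # placeholder
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3299,
            "ToPort": 3299,
            "IpRanges": [
                {"CidrIp": "203.0.113.0/24",
                 "Description": "SAP support network (placeholder)"}
            ],
        }
    ],
)
```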
Question 15
A company is running its on-premises SAP ERP Central Component (SAP ECC) production system on an Oracle database. The company needs to migrate the system to AWS and change the database to SAP HANA on AWS.
The system must be highly available. The company also needs a failover system to be available in a different AWS Region to support disaster recovery (DR). The DR solution must meet an RTO of 4 hours and an RPO of 30 minutes. The sizing estimate for the SAP HANA database on AWS is 4 TB.
Which combination of steps should the company take to meet these requirements? (Choose two.)
- A: Deploy the production system and the DR system in two Availability Zones in the same Region.
- B: Deploy the production system across two Availability Zones in one Region. Deploy the DR system in a third Availability Zone in the same Region.
- C: Deploy the production system across two Availability Zones in the primary Region. Deploy the DR system in a single Availability Zone in another Region.
- D: Create an Amazon Elastic File System (Amazon EFS) file system in the primary Region for the SAP global file system. Deploy a second EFS file system in the DR Region. Configure EFS replication between the file systems.
- E: Set up Amazon Elastic Block Store (Amazon EBS) to store the shared file system data. Configure AWS Backup for DR.
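Option D's cross-Region copy of the SAP global file system can be driven by EFS replication. A minimal boto3 sketch, assuming the `efs` client's `create_replication_configuration` call; the file system ID and Regions are placeholders:

```python
import boto3

# Sketch: replicate the primary Region's EFS file system (SAP global file
# system) to the DR Region.
efs = boto3.client("efs", region_name="us-east-1")
efs.create_replication_configuration(
    SourceFileSystemId="fs-0123456789abcdef0",   # placeholder
    Destinations=[{"Region": "us-west-2"}],
)
```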
Question 16
A company is running SAP HANA as the database for all its SAP systems on AWS. The company has a production SAP landscape and a non-production SAP landscape in the same VPC. The company has deployed AWS Backint Agent for SAP HANA (AWS Backint agent) to store backups in an S3 bucket. The S3 bucket is encrypted and is configured with an S3 Lifecycle management policy that moves backup data that is older than 3 days to the S3 Glacier Flexible Retrieval storage class.
An SAP engineer needs to perform a system copy by restoring the previous week's full backup of the production SAP HANA instance to the non-production SAP HANA instance.
Which combination of steps must the SAP engineer take before the SAP engineer initiates the restoration procedure? (Choose two.)
- A: Update the AWS Backint agent configuration file of the non-production SAP HANA instance with the details of the AWS Backint agent configuration of the production instance.
- B: Move the database backup files from the S3 Glacier Flexible Retrieval storage class to the S3 Standard storage class.
- C: Reset the default encryption behavior of the S3 bucket to use S3 managed encryption keys.
- D: Update the AWS Backint agent to the most recent version.
- E: Update the SAP HANA database to the most recent supported version.
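Objects in the S3 Glacier Flexible Retrieval storage class are not directly readable, so last week's full backup must be restored before AWS Backint agent can fetch it. A minimal boto3 sketch for a single object; the bucket and key are placeholders, a real restore would iterate over every file of the backup, and note that `restore_object` produces a temporary readable copy rather than permanently changing the storage class:

```python
import boto3

# Sketch: initiate a restore of one archived backup object so it becomes
# readable again for the restoration procedure.
s3 = boto3.client("s3")
s3.restore_object(
    Bucket="sap-hana-backups",                   # placeholder
    Key="PRD/full/backup_databackup_0_1",        # placeholder
    RestoreRequest={
        "Days": 7,   # keep the restored copy available for a week
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```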
Question 17
A company is planning to migrate its on-premises production SAP HANA system to AWS. The company uses a SUSE Linux Enterprise High Availability Extension two-node cluster to protect the system against failure. The company wants to use the same solution to provide high availability for the landscape on AWS.
Which combination of prerequisites must the company fulfill to meet this requirement? (Choose two.)
- A: Use instance tags to identify the instances in the cluster.
- B: On the cluster, configure an overlay IP address that is outside the VPC CIDR range to access the active instance.
- C: On the cluster, configure an overlay IP address that is within the VPC CIDR range to access the active instance.
- D: On the cluster, configure an Elastic IP address that is outside the VPC CIDR range to access the active instance.
- E: On the cluster, configure an Elastic IP address that is within the VPC CIDR range to access the active instance.
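Behind these options sits the overlay IP mechanism: an address outside the VPC CIDR that the cluster resource agent points at the active node by rewriting a route table entry on failover. A minimal boto3 sketch of that rewrite, with all IDs and the overlay address as placeholders:

```python
import boto3

# Sketch: repoint the overlay IP route at the ENI of the currently active
# HANA node. In practice the cluster resource agent performs this call.
ec2 = boto3.client("ec2")
ec2.replace_route(
    RouteTableId="rtb-0123456789abcdef0",        # placeholder
    DestinationCidrBlock="192.168.10.1/32",      # overlay IP, outside the VPC CIDR
    NetworkInterfaceId="eni-0123456789abcdef0",  # active node's ENI (placeholder)
)
```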
Question 18
A company wants to deploy its SAP S/4HANA workload on AWS. The company will need to deploy additional SAP S/4HANA systems during the next year to meet the demands of planned projects. The company wants to adopt a DevOps model for deployment of additional SAP S/4HANA systems. The company’s project team needs to be able to provision new SAP S/4HANA systems with minimum user inputs.
An SAP solutions architect must design a solution that can automate most of the implementation tasks. The solution must allow project team members to implement additional SAP S/4HANA systems with minimum required authorizations.
Which solution will meet these requirements with the LEAST operational overhead?
- A: Deploy an SAP S/4HANA system by using AWS Launch Wizard for SAP. Create an AWS Service Catalog product. Authorize the project team to use the AWS Service Catalog product for future deployments of additional SAP S/4HANA systems.
- B: Provision an Amazon EC2 instance by using an AWS CloudFormation template. Use SAP Software Provisioning Manager to install an SAP S/4HANA system on the EC2 instance to create a base image. Create an Amazon Elastic Block Store (Amazon EBS) snapshot of the SAP S/4HANA system. Create an AWS Service Catalog product for the EC2 instance launch and the EBS snapshot restore. Authorize the project team to use AWS Service Catalog to launch additional EC2 Instances and restore EBS snapshots to new SAP S/4HANA instances.
- C: Create a base SAP S/4HANA system on an Amazon EC2 instance by using SAP Software Provisioning Manager. Create a custom AMI from the installed SAP S/4HANA base system. Use the custom AMI for future deployments of additional SAP S/4HANA systems.
- D: Provision an Amazon EC2 instance by using an AWS CloudFormation template. Use SAP Software Provisioning Manager to install an SAP S/4HANA system on the EC2 instance to create a base image. Create a custom AMI from the SAP S/4HANA system. Create an AWS Service Catalog product for the EC2 instance launch and the custom AMI restore. Authorize the project team to use AWS Service Catalog to launch additional SAP S/4HANA instances.
Question 19
A company migrated its SAP ERP Central Component (SAP ECC) environment to an m4.large Amazon EC2 instance (Xen based) in 2016. The company changed the instance type to m5.xlarge (KVM based). Since the change, users are receiving a pop-up box that indicates that the SAP license will expire soon.
What could be the cause of this issue?
- A: The change from the Xen-based m4.large instance type to the KVM-based m5.xlarge instance type is not allowed.
- B: The Xen-based m4.large instance was running with a lower kernel patch level (SAP Kernel 7.49 Patch Level 401). When the change to a KVM-based instance occurred, the hardware key changed. The instance requires a new license.
- C: The Xen-based m4.large instance was running with a higher kernel patch level (SAP Kernel 7.49 Patch Level 500). When the change to a KVM-based instance occurred, the hardware key changed. The instance requires a new license.
- D: Whenever an instance type changes, the change requires a new license.
Question 20
A company plans to migrate its SAP NetWeaver environment from its on-premises data center to AWS. An SAP solutions architect needs to deploy the AWS resources for an SAP S/4HANA-based system in a Multi-AZ configuration without manually identifying and provisioning individual AWS resources. The SAP solutions architect's task includes the sizing, configuration, and deployment of the SAP S/4HANA system.
What is the QUICKEST way to provision the SAP S/4HANA landscape on AWS to meet these requirements?
- A: Use the SAP HANA Quick Start reference deployment.
- B: Use AWS Launch Wizard for SAP.
- C: Create AWS CloudFormation templates to automate the deployment.
- D: Manually deploy SAP HANA on AWS.
Question 21
A company is running its on-premises SAP ERP Central Component (SAP ECC) production workload on SUSE Linux Enterprise Server. The SAP ECC workload uses an Oracle database that has 20 TB of data.
The company needs to migrate the SAP ECC workload to AWS with no change in database technology. The company must minimize production system downtime.
Which solution will meet these requirements?
- A: Migrate the SAP ECC workload to AWS by using AWS Application Migration Service.
- B: Install SAP ECC application instances on SUSE Linux Enterprise Server. Use AWS Database Migration Service (AWS DMS) to migrate the Oracle database to Amazon RDS for Oracle.
- C: Migrate the SAP ECC workload to AWS by using SAP Software Provisioning Manager on Oracle Enterprise Linux.
- D: Install SAP ECC with an Oracle database on Oracle Enterprise Linux. Perform the migration by using Oracle Cross-Platform Transportable Tablespace (XTTS).
Question 22
A company is running an SAP Commerce application in a development environment. The company is ready to deploy the application to a production environment on AWS.
The company expects the production application to receive a large increase in transactions during sales and promotions. The application's database must automatically scale the storage, CPU, and memory to minimize costs during periods of low demand and maintain high availability and performance during periods of high demand.
Which solution will meet these requirements?
- A: Use an SAP HANA single-node deployment that runs on burstable performance Amazon EC2 instances.
- B: Use an Amazon Aurora MySQL database that runs on serverless DB instance types.
- C: Use a HyperSQL database that runs on Amazon Elastic Container Service (Amazon ECS) containers with ECS Service Auto Scaling.
- D: Use an Amazon RDS for MySQL DB cluster that consists of high memory DB instance types.
Question 23
A company is planning to implement a new SAP workload on SUSE Linux Enterprise Server on AWS. The company needs to use AWS Key Management Service (AWS KMS) to encrypt every file at rest. The company also requires that its production SAP workloads and non-production SAP workloads are separated into different AWS accounts.
The production account and the non-production account share a common SAP transport directory, /usr/sap/trans. The two accounts are connected by VPC peering.
What should the company do to achieve the data encryption at rest for the new SAP workload?
- A: Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Import the KMS key into the non-production account to allow the production systems to access the SAP transport directory.
- B: Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
- C: Create a symmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
- D: Create an asymmetric KMS customer managed key in the production account. Create Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage for all root and SAP data. Implement encryption that uses the KMS key. Share the EFS file system from the production account with the non-production account. Create an IAM role in the non-production account and a key policy for the KMS key in the production account to allow the non-production systems to access the SAP transport directory.
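The crux of these options is cross-account access to a symmetric KMS key: the key policy lives in the production account and grants use of the key to an IAM role, and where that role lives is what differentiates the answers. A hedged sketch of such a key policy applied with boto3; account IDs, the role name, and the key ID are placeholders, and a production policy needs more statements than shown:

```python
import json

import boto3

# Sketch: key policy on the production account's symmetric KMS key that lets
# a role from the non-production account use the key for the shared EFS
# transport directory.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # administrative access for the production account (placeholder)
            "Sid": "AllowProdAccountAdmin",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # cryptographic use by the non-production role (placeholder)
            "Sid": "AllowNonProdTransportAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:role/sap-transport-access"},
            "Action": ["kms:Decrypt", "kms:Encrypt",
                       "kms:GenerateDataKey*", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}
kms = boto3.client("kms")
kms.put_key_policy(KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
                   PolicyName="default", Policy=json.dumps(policy))
```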
Question 24
A company needs to migrate its critical SAP workloads from an on-premises data center to AWS. The company has a few source production databases that are 10 TB or more in size. The company wants to minimize the downtime for this migration.
As part of the proof of concept, the company used a low-speed, high-latency connection between its data center and AWS. During the actual migration, the company wants to maintain a consistent connection that delivers high bandwidth and low latency. The company also wants to add a layer of connectivity resiliency. The backup connectivity does not need to be as fast as the primary connectivity.
An SAP solutions architect needs to determine the optimal network configuration for data transfer. The solution must transfer the data with minimum latency.
Which configuration will meet these requirements?
- A: Set up one AWS Direct Connect connection for connectivity between the on-premises data center and AWS. Add an AWS Site-to-Site VPN connection as a backup to the Direct Connect connection.
- B: Set up an AWS Direct Connect gateway with multiple Direct Connect connections that use a link aggregation group (LAG) between the on-premises data center and AWS.
- C: Set up Amazon Elastic File System (Amazon EFS) file system storage between the on-premises data center and AWS. Configure a cron job to copy the data into this EFS mount. Access the data in the EFS file system from the target environment.
- D: Set up two redundant AWS Site-to-Site VPN connections for connectivity between the on-premises data center and AWS.
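Option B's link aggregation group bundles several Direct Connect connections into one logical, higher-bandwidth link with built-in redundancy. A hedged boto3 sketch, assuming the `directconnect` client's `create_lag` call; the location code and bandwidth are placeholders:

```python
import boto3

# Sketch: create a LAG of two Direct Connect connections for high aggregate
# bandwidth and resiliency within the bundle.
dx = boto3.client("directconnect")
dx.create_lag(
    numberOfConnections=2,
    location="EqDC2",                 # Direct Connect location code (placeholder)
    connectionsBandwidth="10Gbps",
    lagName="sap-migration-lag",
)
```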
Question 25
A company wants to migrate its SAP S/4HANA infrastructure to AWS. The infrastructure includes production, pre-production, test, and development environments. The pre-production environment is an identical copy of the production environment.
The production system must comply with a new policy that requires the landscape to be able to fail over to a secondary AWS Region. The required RPO is 5 minutes. The required RTO is 4 hours. The estimated SAP HANA database size is 6 TB.
Which solution will meet these requirements MOST cost-effectively?
- A: Deploy the pre-production environment in a primary Region. Deploy the other environments in a secondary Region. Configure the disaster recovery SAP HANA system on the pre-production hardware. Implement replication by setting the preload_column_tables parameter to false. Before failover, stop the pre-production environment, set the preload_column_tables parameter to true, and allocate the memory for production takeover.
- B: Deploy all environments in a primary Region. Configure a 500 GB disaster recovery (DR) site in a secondary Region. Configure DR SAP HANA system replication on the pre-production hardware by setting the preload_column_tables parameter to false. In the event of a disaster, resize the DR environment to 6 TB, set the preload_column_tables parameter to true, and perform a takeover.
- C: Deploy all environments in a primary Region. Configure a 6 TB disaster recovery (DR) site in a secondary Region. In the event of a disaster, perform a takeover on the DR site.
- D: Deploy all environments in a primary Region. Configure a 6 TB disaster recovery (DR) site in the same Region. In the event of a disaster, perform a takeover on the DR site.