Want to know about the Actualtests SAP-C01 exam practice test features? Want to learn more about the Amazon-Web-Services AWS Certified Solutions Architect - Professional certification experience? Study 100% correct Amazon-Web-Services SAP-C01 answers to updated SAP-C01 questions at Actualtests. Get success with an absolute guarantee to pass the Amazon-Web-Services SAP-C01 (AWS Certified Solutions Architect - Professional) test on your first attempt.
Online SAP-C01 free questions and answers, new version:
NEW QUESTION 1
A company plans to move regulated and security-sensitive businesses to AWS. The Security team is developing a framework to validate the adoption of AWS best practices and industry-recognized compliance standards. The AWS Management Console is the preferred method for teams to provision resources.
Which strategies should a Solutions Architect use to meet the business requirements and continuously assess, audit, and monitor the configurations of AWS resources? (Choose two.)
- A. Use AWS Config rules to periodically audit changes to AWS resources and monitor the compliance of the configuration. Develop AWS Config custom rules using AWS Lambda to establish a test-driven development approach, and further automate the evaluation of configuration changes against the required controls.
- B. Use the Amazon CloudWatch Logs agent to collect all the AWS SDK logs. Search the log data using a pre-defined set of filter patterns that match mutating API calls. Send notifications using Amazon CloudWatch alarms when unintended changes are performed. Archive log data by using a batch export to Amazon S3 and then Amazon Glacier for long-term retention and auditability.
- C. Use AWS CloudTrail events to assess management activities of all AWS accounts. Ensure that CloudTrail is enabled in all accounts and available AWS services. Enable trails, encrypt CloudTrail event log files with an AWS KMS key, and monitor recorded activities with CloudWatch Logs.
- D. Use the Amazon CloudWatch Events near-real-time capabilities to monitor system event patterns, and trigger AWS Lambda functions to automatically revert non-authorized changes in AWS resources. Also, target Amazon SNS topics to enable notifications and improve the response time of incident responses.
- E. Use CloudTrail integration with Amazon SNS to automatically notify of unauthorized API activities. Ensure that CloudTrail is enabled in all accounts and available AWS services. Evaluate the usage of Lambda functions to automatically revert non-authorized changes in AWS resources.
Answer: AC
Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html https://docs.aws.amazon.com/en_pv/awscloudtrail/latest/userguide/best-practices-security.html
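For illustration, here is a minimal sketch of the custom-rule approach in option A: an AWS Lambda handler for a configuration-change-triggered AWS Config custom rule. The `Environment` tag key is an assumption made for the example, not part of the question.

```python
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """AWS Config custom rule: mark resources compliant only when they
    carry an 'Environment' tag (the tag key is assumed for illustration)."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    tags = item.get("tags", {}) or {}
    compliance = "COMPLIANT" if "Environment" in tags else "NON_COMPLIANT"

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```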
NEW QUESTION 2
A company runs a public-facing application that uses a Java-based web service via a RESTful API. It is hosted on Apache Tomcat on a single server in a data center that runs consistently at 30% CPU utilization. Use of the API is expected to increase by 10 times with a new product launch. The business wants to migrate the application to AWS with no disruption and needs it to scale to meet demand.
The company has already decided to use Amazon Route 53 and CNAME records to redirect traffic. How can these requirements be met with the LEAST amount of effort?
- A. Use AWS Elastic Beanstalk to deploy the Java web service and enable Auto Scaling. Then switch the application to use the new web service.
- B. Lift and shift the Apache server to the cloud using AWS SMS. Then switch the application to direct web service traffic to the new instance.
- C. Create a Docker image and migrate the image to Amazon ECS. Then change the application code to direct web service queries to the ECS container.
- D. Modify the application to call the web service via Amazon API Gateway. Then create a new AWS Lambda Java function to run the Java web service code. After testing, change API Gateway to use the Lambda function.
Answer: A
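A sketch of how option A might be provisioned with boto3, assuming a hypothetical application name and a solution stack string that must be validated against your account. The CNAME returned is what the Route 53 CNAME record would point at.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Hypothetical names; the solution stack string must match one returned by
# eb.list_available_solution_stacks() in your account and region.
response = eb.create_environment(
    ApplicationName="orders-api",
    EnvironmentName="orders-api-prod",
    SolutionStackName="64bit Amazon Linux 2 v4.2.0 running Tomcat 8.5 Corretto 11",
    OptionSettings=[
        # Auto Scaling group bounds so the service can scale with demand.
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "20"},
        # Load-balanced environment rather than a single instance.
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
    ],
)
print(response["CNAME"])  # target for the Route 53 CNAME cut-over
```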
NEW QUESTION 3
A company operating a website on AWS requires high levels of scalability, availability and performance. The company is running a Ruby on Rails application on Amazon EC2. It has a data tier on MySQL 5.6 on Amazon EC2 using 16 TB of Amazon EBS storage. Amazon CloudFront is used to cache application content. The Operations team is reporting continuous and unexpected growth of EBS volumes assigned to the MySQL database. The Solutions Architect has been asked to design a highly scalable, highly available, and high-performing solution.
Which solution is the MOST cost-effective at scale?
- A. Implement Multi-AZ and Auto Scaling for all EC2 instances in the current configuration. Ensure that all EC2 instances are purchased as Reserved Instances. Implement new elastic Amazon EBS volumes for the data tier.
- B. Design and implement a Docker-based containerized solution for the application using Amazon ECS. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Implement storage checks for Aurora MySQL storage utilization and an AWS Lambda function to grow the Aurora MySQL storage, as necessary. Ensure that Multi-AZ architectures are implemented.
- C. Ensure that EC2 instances are right-sized and behind an Elastic Load Balancing load balancer. Implement Auto Scaling with EC2 instances. Ensure that Reserved Instances are purchased for fixed capacity and that Auto Scaling instances run on demand. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Ensure that Multi-AZ architectures are implemented.
- D. Ensure that EC2 instances are right-sized and behind an Elastic Load Balancer. Implement Auto Scaling with EC2 instances. Ensure that Reserved Instances are purchased for fixed capacity and that Auto Scaling instances run on demand. Migrate to an Amazon Aurora MySQL Multi-AZ cluster. Implement storage checks for Aurora MySQL storage utilization and an AWS Lambda function to grow Aurora MySQL storage, as necessary. Ensure Multi-AZ architectures are implemented.
Answer: C
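A sketch of the Aurora MySQL Multi-AZ target that the correct option migrates to: one cluster with a writer plus a reader in a second Availability Zone. Identifiers, AZs, and the instance class are assumptions; credentials should come from a secret store rather than literals.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers for illustration.
rds.create_db_cluster(
    DBClusterIdentifier="web-db",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",   # use AWS Secrets Manager in practice
    StorageEncrypted=True,
)

# A writer plus a reader in another AZ gives Multi-AZ failover; Aurora
# storage grows automatically, replacing the manual EBS-growth toil.
for ident, az in [("web-db-1", "us-east-1a"), ("web-db-2", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=ident,
        DBClusterIdentifier="web-db",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
        AvailabilityZone=az,
    )
```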
NEW QUESTION 4
A company is refactoring an existing web service that provides read and write access to structured data. The service must respond to short but significant spikes in the system load. The service must be fault tolerant across multiple AWS Regions.
Which actions should be taken to meet these requirements?
- A. Store the data in Amazon DocumentDB. Create a single global Amazon CloudFront distribution with a custom origin built on edge-optimized Amazon API Gateway and AWS Lambda. Assign the company's domain as an alternate domain for the distribution, and configure Amazon Route 53 with an alias to the CloudFront distribution.
- B. Store the data in replicated Amazon S3 buckets in two Regions. Create an Amazon CloudFront distribution in each Region, with custom origins built on Amazon API Gateway and AWS Lambda launched in each Region. Assign the company's domain as an alternate domain for both distributions, and configure Amazon Route 53 with a failover routing policy between them.
- C. Store the data in an Amazon DynamoDB global table in two Regions using on-demand capacity mode. In both Regions, run the web service as Amazon ECS Fargate tasks in an Auto Scaling ECS service behind an Application Load Balancer (ALB). In Amazon Route 53, configure an alias record in the company's domain and a Route 53 latency-based routing policy with health checks to distribute traffic between the two ALBs.
Answer: A
NEW QUESTION 5
A company runs its containerized batch jobs on Amazon ECS. The jobs are scheduled by submitting a container image, a task definition, and the relevant data to an Amazon S3 bucket. Container images may be unique per job. Running the jobs as quickly as possible is of utmost importance, so submitting job artifacts to the S3 bucket triggers the job to run immediately. Sometimes there may be no jobs running at all. However, jobs of any size can be submitted with no prior warning to the IT Operations team. Job definitions include CPU and memory resource requirements.
What solution will allow the batch jobs to complete as quickly as possible after being scheduled?
- A. Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
- B. Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an Auto Scaling group to scale up the platform based on demand.
- C. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
- D. Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
Answer: C
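A sketch of the scaling piece of the correct option: registering a Fargate service with Application Auto Scaling and a target-tracking policy on CPU. The cluster and service names are assumptions.

```python
import boto3

aas = boto3.client("application-autoscaling")

resource_id = "service/batch-cluster/batch-service"  # assumed cluster/service

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,     # sometimes no jobs are running at all
    MaxCapacity=100,
)

aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```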
NEW QUESTION 6
The Security team needs to provide a team of interns with an AWS environment so they can build the serverless video transcoding application. The project will use Amazon S3, AWS Lambda, Amazon API Gateway, Amazon Cognito, Amazon DynamoDB, and Amazon Elastic Transcoder.
The interns should be able to create and configure the necessary resources, but they may not have access to create or modify AWS IAM roles. The Solutions Architect creates a policy and attaches it to the interns’ group.
How should the Security team configure the environment to ensure that the interns are self-sufficient?
- A. Create a policy that allows creation of project-related resources only. Create roles with the required service permissions, which are assumable by the services.
- B. Create a policy that allows creation of all project-related resources, including roles that allow access only to specified resources.
- C. Create roles with the required service permissions, which are assumable by the services. Have the interns create and use a bastion host to create the project resources in the project subnet only.
- D. Create a policy that allows creation of project-related resources only. Require the interns to raise a request for roles to be created with the Security team. The interns will provide the requirements for the permissions to be set in the role.
Answer: A
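A sketch of the idea behind the correct answer: a policy that grants the project services while explicitly denying IAM role creation and modification, and allowing only `iam:PassRole` for pre-created roles. The service list, role path, and account ID are assumptions.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Project services the interns may use.
            "Effect": "Allow",
            "Action": ["s3:*", "lambda:*", "apigateway:*",
                       "cognito-idp:*", "dynamodb:*", "elastictranscoder:*"],
            "Resource": "*",
        },
        {   # They may pass pre-created service roles, but not create or modify roles.
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/video-project/*",  # assumed path
        },
        {
            "Effect": "Deny",
            "Action": ["iam:CreateRole", "iam:PutRolePolicy",
                       "iam:AttachRolePolicy", "iam:DeleteRole"],
            "Resource": "*",
        },
    ],
}

# Attach the resulting policy to the interns' group with attach_group_policy.
iam.create_policy(PolicyName="interns-project-policy",
                  PolicyDocument=json.dumps(policy))
```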
NEW QUESTION 7
A Solutions Architect is designing the storage layer for a data warehousing application. The data files are large, but they have statically placed metadata at the beginning of each file that describes the size and placement of the file’s index. The data files are read in by a fleet of Amazon EC2 instances that store the index size, index location, and other category information about the data file in a database. That database is used by Amazon EMR to group files together for deeper analysis.
What would be the MOST cost-effective, high availability storage solution for this workflow?
- A. Store the data files in Amazon S3 and use Range GET for each file’s metadata, then index the relevant data.
- B. Store the data files in Amazon EFS mounted by the EC2 fleet and EMR nodes.
- C. Store the data files on Amazon EBS volumes and allow the EC2 fleet and EMR to mount and unmount the volumes where they are needed.
- D. Store the content of the data files in Amazon DynamoDB tables with the metadata, index, and data as their own keys.
Answer: A
Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectGET.html
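A sketch of the Range GET technique behind the answer: reading only the statically placed header of each data file from S3 instead of downloading the whole object. The 4 KB header size, bucket, and key are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Fetch only the first 4 KB, where the metadata block is assumed to live,
# instead of transferring the entire large data file.
resp = s3.get_object(
    Bucket="warehouse-data",        # assumed bucket
    Key="files/part-0001.dat",      # assumed key
    Range="bytes=0-4095",
)
header = resp["Body"].read()

# Parse the index size/location out of the header (format is application-
# specific) and record it in the database that Amazon EMR uses for grouping.
```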
NEW QUESTION 8
A company is planning to migrate an application from on-premises to AWS. The application currently uses an Oracle database and the company can tolerate a brief downtime of 1 hour when performing the switch to the new infrastructure. As part of the migration, the database engine will be changed to MySQL. A Solutions Architect needs to determine which AWS services can be used to perform the migration while minimizing the amount of work and time required.
Which of the following will meet the requirements?
- A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyze the current schema and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
- B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
- C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.
- D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.
Answer: B
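A sketch of the DMS portion of the correct answer: a task that performs a full load and then stays in change-data-capture mode so the databases remain in sync until cut-over. All ARNs are placeholders for endpoints and a replication instance created beforehand.

```python
import json
import boto3

dms = boto3.client("dms")

# Placeholder ARNs (Oracle source endpoint, MySQL target endpoint).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-mysql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    # Full load first, then ongoing replication (CDC) until the cut-over window.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection", "rule-id": "1", "rule-name": "all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```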
NEW QUESTION 9
A large global company wants to migrate a stateless mission-critical application to AWS. The application is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software) on a z/OS operating system.
How should the Solutions Architect migrate the application to AWS?
- A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
- B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.
- C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform z/OS-based DB2 to Amazon RDS DB2.
- D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to Amazon MQ.
Answer: B
Explanation:
https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now- https://aws.amazon.com/quickstart/architecture/ibm-mq/
NEW QUESTION 10
A Solutions Architect must create a cost-effective backup solution for a company’s 500MB source code repository of proprietary and sensitive applications. The repository runs on Linux and backs up daily to tape. Tape backups are stored for 1 year.
The current solution is not meeting the company’s needs because it is a manual process that is prone to error, expensive to maintain, and does not meet the need for a Recovery Point Objective (RPO) of 1 hour or a Recovery Time Objective (RTO) of 2 hours. The new disaster recovery requirement is for backups to be stored offsite and to be able to restore a single file if needed.
Which solution meets the customer’s needs for RTO, RPO, and disaster recovery with the LEAST effort and expense?
- A. Replace local tapes with an AWS Storage Gateway virtual tape library to integrate with the current backup software. Run backups nightly and store the virtual tapes on Amazon S3 Standard storage in US-EAST-1. Use cross-region replication to create a second copy in US-WEST-2. Use Amazon S3 lifecycle policies to perform automatic migration to Amazon Glacier and deletion of expired backups after 1 year.
- B. Configure the local source code repository to synchronize files to an AWS Storage Gateway file gateway to store backup copies in an Amazon S3 Standard bucket. Enable versioning on the Amazon S3 bucket. Create Amazon S3 lifecycle policies to automatically migrate old versions of objects to Amazon S3 Standard-Infrequent Access, then Amazon Glacier, then delete backups after 1 year.
- C. Replace the local source code repository storage with a Storage Gateway stored volume. Change the default snapshot frequency to 1 hour. Use Amazon S3 lifecycle policies to archive snapshots to Amazon Glacier and remove old snapshots after 1 year. Use cross-region replication to create a copy of the snapshots in US-WEST-2.
- D. Replace the local source code repository storage with a Storage Gateway cached volume. Create a snapshot schedule to take hourly snapshots. Use an Amazon CloudWatch Events schedule expression rule to run an hourly AWS Lambda task to copy snapshots from US-EAST-1 to US-WEST-2.
Answer: B
Explanation:
https://d1.awsstatic.com/whitepapers/aws-storage-gateway-file-gateway-for-hybrid-architectures.pdf
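A sketch of the S3 side of the correct answer: versioning plus a lifecycle configuration that moves old object versions to Standard-IA, then Glacier, and deletes them after a year. The bucket name and transition day counts are assumptions.

```python
import boto3

s3 = boto3.client("s3")
bucket = "repo-backups"  # assumed bucket behind the file gateway

s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire-old-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            # Older versions step down through cheaper storage classes...
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                {"NoncurrentDays": 90, "StorageClass": "GLACIER"},
            ],
            # ...and are deleted once the 1-year retention requirement is met.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }]
    },
)
```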
NEW QUESTION 11
A company is moving a business-critical application onto AWS. It is a traditional three-tier web application using an Oracle database. Data must be encrypted in transit and at rest. The database hosts 12 TB of data. Network connectivity to the source Oracle database over the internet is allowed, and the company wants to reduce operational costs by using AWS managed services where possible. All resources within the web and application tiers have been migrated. The database has a few tables and a simple schema using primary keys only; however, it contains many Binary Large Object (BLOB) fields. It was not possible to use the database’s native replication tools because of licensing restrictions.
Which database migration solution will result in the LEAST amount of impact to the application’s availability?
- A. Provision an Amazon RDS for Oracle instance. Host the RDS database within a virtual private cloud (VPC) subnet with internet access, and set up the RDS database as an encrypted Read Replica of the source database. Use SSL to encrypt the connection between the two databases. Monitor the replication performance by watching the RDS ReplicaLag metric. During the application maintenance window, shut down the on-premises database and switch over the application connection to the RDS instance when there is no more replication lag. Promote the Read Replica into a standalone database instance.
- B. Provision an Amazon EC2 instance and install the same Oracle database software. Create a backup of the source database using the supported tools. During the application maintenance window, restore the backup into the Oracle database running in the EC2 instance. Set up an Amazon RDS for Oracle instance, and create an import job between the databases hosted in AWS. Shut down the source database and switch over the database connections to the RDS instance when the job is complete.
- C. Use AWS DMS to load and replicate the dataset between the on-premises Oracle database and the replication instance hosted on AWS. Provision an Amazon RDS for Oracle instance with Transparent Data Encryption (TDE) enabled and configure it as a target for the replication instance. Create a customer-managed AWS KMS master key to set it as the encryption key for the replication instance. Use AWS DMS tasks to load the data into the target RDS instance. During the application maintenance window and after the load tasks reach the ongoing replication phase, switch the database connections to the new database.
- D. Create a compressed full database backup on the on-premises Oracle database during an application maintenance window. While the backup is being performed, provision a 10 Gbps AWS Direct Connect connection to increase the transfer speed of the database backup files to Amazon S3, and shorten the maintenance window period. Use SSL/TLS to copy the files over the Direct Connect connection. When the backup files are successfully copied, start the maintenance window, and use any of the Amazon RDS supported tools to import the data into a newly provisioned Amazon RDS for Oracle instance with encryption enabled. Wait until the data is fully loaded and switch over the database connections to the new database. Delete the Direct Connect connection to cut unnecessary charges.
Answer: C
Explanation:
https://aws.amazon.com/blogs/apn/oracle-database-encryption-options-on-amazon-rds/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.AdvSecurity.htm l (DMS in transit encryption) https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html
NEW QUESTION 12
An organization has two Amazon EC2 instances:
The first is running an ordering application and an inventory application.
The second is running a queuing system.
During certain times of the year, several thousand orders are placed per second. Some orders were lost when the queuing system was down. Also, the organization’s inventory application has the incorrect quantity of products because some orders were processed twice.
What should be done to ensure that the applications can handle the increasing number of orders?
- A. Put the ordering and inventory applications into their own AWS Lambda functions. Have the ordering application write the messages into an Amazon SQS FIFO queue.
- B. Put the ordering and inventory applications into their own Amazon ECS containers and create an Auto Scaling group for each application. Then, deploy the message queuing server in multiple Availability Zones.
- C. Put the ordering and inventory applications into their own Amazon EC2 instances, and create an Auto Scaling group for each application. Use Amazon SQS standard queues for the incoming orders, and implement idempotency in the inventory application.
- D. Put the ordering and inventory applications into their own Amazon EC2 instances. Write the incoming orders to an Amazon Kinesis data stream. Configure AWS Lambda to poll the stream and update the inventory application.
Answer: C
Explanation:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
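A sketch of the idempotency idea in the correct option: the inventory consumer records each order ID in DynamoDB with a conditional write, so a standard queue's at-least-once delivery cannot decrement stock twice. The table name, queue URL, and message format are assumptions.

```python
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("processed-orders")  # assumed table
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders"  # assumed

def update_inventory(order_id: str) -> None:
    """Application-specific stock adjustment (stubbed for illustration)."""
    print(f"decrementing stock for order {order_id}")

while True:
    messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20).get("Messages", [])
    for msg in messages:
        order_id = msg["Body"]  # assume the body carries the order ID
        try:
            # The conditional write fails if this order was already processed,
            # which makes duplicate deliveries harmless.
            table.put_item(
                Item={"order_id": order_id},
                ConditionExpression="attribute_not_exists(order_id)",
            )
            update_inventory(order_id)
        except ClientError as err:
            # Duplicate delivery: skip silently; anything else: surface it.
            if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```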
NEW QUESTION 13
A company has an existing on-premises three-tier web application. The Linux web servers serve content from a centralized file share on a NAS server because the content is refreshed several times a day from various sources. The existing infrastructure is not optimized and the company would like to move to AWS in order to gain the ability to scale resources up and down in response to load. On-premises and AWS resources are connected using AWS Direct Connect.
How can the company migrate the web infrastructure to AWS without delaying the content refresh process?
- A. Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server.
- B. Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content.
- C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content.
- D. Create web server Amazon EC2 instances on AWS in an Auto Scaling group. Configure a nightly process where the web server instances are updated from the NAS server.
Answer: C
Explanation:
A file gateway is limited by the performance of its gateway instance, whether on EC2 or on-premises, and its cache fills up quickly if not properly configured. For a large number of EC2 instances, EFS scales better. The bottom line is that a file gateway suits legacy applications, and you have to add the cost of large gateway instances before comparing it to the same quantity of EFS storage. https://www.reddit.com/r/aws/comments/82pyop/storage_gateway_vs_efs/
https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html
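A sketch of the EFS setup implied by the correct option: one file system with a mount target per subnet, reachable both from the web tier and, over Direct Connect, from on premises. Subnet and security group IDs are placeholders.

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-web-content",   # idempotency token
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ subnet lets every EC2 web server (and, via
# Direct Connect, the on-premises publishers) use the same NFS share.
for subnet in ["subnet-0aaa", "subnet-0bbb"]:          # placeholder IDs
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet,
        SecurityGroups=["sg-0ccc"],                    # must allow NFS (TCP 2049)
    )
```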
NEW QUESTION 14
A Solutions Architect must migrate an existing on-premises web application with 70 TB of static files supporting a public open-data initiative. The architect wants to upgrade to the latest version of the host operating system as part of the migration effort.
Which is the FASTEST and MOST cost-effective way to perform the migration?
- A. Run a physical-to-virtual conversion on the application server. Transfer the server image over the internet, and transfer the static data to Amazon S3.
- B. Run a physical-to-virtual conversion on the application server. Transfer the server image over AWS Direct Connect, and transfer the static data to Amazon S3.
- C. Re-platform the server to Amazon EC2, and use AWS Snowball to transfer the static data to Amazon S3.
- D. Re-platform the server by using the AWS Server Migration Service to move the code and data to a new Amazon EC2 instance.
Answer: C
NEW QUESTION 15
A company prefers to limit running Amazon EC2 instances to those that were launched from AMIs pre-approved by the Information Security department. The Development team has an agile continuous integration and deployment process that cannot be stalled by the solution.
Which methods enforce the required controls with the LEAST impact on the development process? (Choose two.)
- A. Use IAM policies to restrict the ability of users or other automated entities to launch EC2 instances based on a specific set of pre-approved AMIs, such as those tagged in a specific way by Information Security.
- B. Use regular scans within Amazon Inspector with a custom assessment template to determine if the EC2 instance that the Amazon Inspector Agent is running on is based upon a pre-approved AMI. If it is not, shut down the instance and inform Information Security by email that this occurred.
- C. Only allow launching of EC2 instances using a centralized DevOps team, which is given work packages via notifications from an internal ticketing system. Users make requests for resources using this ticketing tool, which has manual information security approval steps to ensure that EC2 instances are only launched from approved AMIs.
- D. Use AWS Config rules to spot any launches of EC2 instances based on non-approved AMIs, trigger an AWS Lambda function to automatically terminate the instance, and publish a message to an Amazon SNS topic to inform Information Security that this occurred.
- E. Use a scheduled AWS Lambda function to scan through the list of running instances within the virtual private cloud (VPC) and determine if any of these are based on unapproved AMIs. Publish a message to an SNS topic to inform Information Security that this occurred, and then shut down the instance.
Answer: AD
Explanation:
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules_getting-started.html
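A sketch of the IAM side of the answer: allowing `ec2:RunInstances` only against AMIs that carry a hypothetical Information Security approval tag. The tag key and value are assumptions; `ec2:ResourceTag` conditions on the image resource are evaluated at launch time.

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Launching is allowed only from AMIs tagged by Information Security.
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*::image/ami-*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Approved": "true"}  # assumed tag
            },
        },
        {   # The other resources RunInstances touches (instance, ENI, volume, ...).
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "NotResource": "arn:aws:ec2:*::image/ami-*",
        },
    ],
}

iam.create_policy(PolicyName="launch-approved-amis-only",
                  PolicyDocument=json.dumps(policy))
```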
NEW QUESTION 16
A Solutions Architect is building a containerized .NET Core application that will run in AWS Fargate. The backend of the application requires Microsoft SQL Server with high availability. All tiers of the application must be highly available. The credentials used for the connection string to SQL Server should not be stored on disk within the .NET Core front-end containers.
Which strategies should the Solutions Architect use to meet these requirements?
- A. Set up SQL Server to run in Fargate with Service Auto Scaling. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server running in Fargate. Specify the ARN of the secret in AWS Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
- B. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS database. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service in Fargate using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
- C. Create an Auto Scaling group to run SQL Server on Amazon EC2. Create a secret in AWS Secrets Manager for the credentials to SQL Server running on EC2. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to SQL Server on EC2. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be injected into the containers as environment variables on startup for reading into the application to construct the connection string. Set up the .NET Core service using Service Auto Scaling behind an Application Load Balancer in multiple Availability Zones.
- D. Create a Multi-AZ deployment of SQL Server on Amazon RDS. Create a secret in AWS Secrets Manager for the credentials to the RDS database. Create non-persistent empty storage for the .NET Core containers in the Fargate task definition to store the sensitive information. Create an Amazon ECS task execution role that allows the Fargate task definition to get the secret value for the credentials to the RDS database in Secrets Manager. Specify the ARN of the secret in Secrets Manager in the secrets section of the Fargate task definition so the sensitive data can be written to the non-persistent empty storage on startup for reading into the application to construct the connection string.
Answer: C
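A sketch of the secrets wiring these options describe: a Fargate task definition whose container receives the SQL Server credentials from AWS Secrets Manager as an environment variable at startup, so nothing is written to disk. Names, ARNs, and sizes are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="netcore-frontend",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    # The execution role must allow secretsmanager:GetSecretValue on the secret.
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "web",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/netcore:latest",
        "portMappings": [{"containerPort": 80}],
        # Injected as an environment variable at container startup; never on disk.
        "secrets": [{
            "name": "DB_CREDENTIALS",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:sql-creds",
        }],
    }],
)
```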
NEW QUESTION 17
A company is having issues with a newly deployed serverless infrastructure that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.
In a steady state, the application performs as expected. However, during peak load, tens of thousands of simultaneous invocations are needed and user requests fail multiple times before succeeding. The company has checked the logs for each component, focusing specifically on Amazon CloudWatch Logs for Lambda. There are no errors logged by the services or applications.
What might cause this problem?
- A. Lambda has very little memory assigned, which causes the function to fail at peak load.
- B. Lambda is in a subnet that uses a NAT gateway to reach out to the internet, and the function instance does not have sufficient Amazon EC2 resources in the VPC to scale with the load.
- C. The throttle limit set on API Gateway is very low, so during peak load the additional requests are not making their way through to Lambda.
- D. DynamoDB is set up in an auto scaling mode. During peak load, DynamoDB adjusts capacity and throughput successfully.
Answer: A
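For reference, option C's throttle limits live in an API Gateway usage plan; a sketch of configuring them follows. If these numbers sit far below peak demand, excess requests receive 429 responses before ever reaching Lambda, which is why no Lambda errors appear. The API ID, stage, and limits are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# Placeholder REST API and stage; limits sized for the expected peak so
# requests are not throttled away before they reach Lambda.
apigw.create_usage_plan(
    name="peak-load-plan",
    throttle={"rateLimit": 10000.0, "burstLimit": 5000},
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)
```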
NEW QUESTION 18
A Solutions Architect is redesigning an image-viewing and messaging platform to be delivered as SaaS. Currently, there is a farm of virtual desktop infrastructure (VDI) that runs a desktop image-viewing application and a desktop messaging application. Both applications use a shared database to manage user accounts and sharing. Users log in from a web portal that launches the applications and streams the view of the application on the user’s machine. The Development Operations team wants to move away from using VDI and wants to rewrite the application.
What is the MOST cost-effective architecture that offers both security and ease of management?
- A. Run a website from an Amazon S3 bucket with a separate S3 bucket for images and messaging data. Call AWS Lambda functions from embedded JavaScript to manage the dynamic content, and use Amazon Cognito for user and sharing management.
- B. Run a website from Amazon EC2 Linux servers, storing the images in Amazon S3, and use Amazon Cognito for user accounts and sharing. Create AWS CloudFormation templates to launch the application by using EC2 user data to install and configure the application.
- C. Run a website as an AWS Elastic Beanstalk application, storing the images in Amazon S3, and using an Amazon RDS database for user accounts and sharing. Create AWS CloudFormation templates to launch the application and perform blue/green deployments.
- D. Run a website from an Amazon S3 bucket that authorizes Amazon AppStream to stream applications for a combined image viewer and messenger that stores images in Amazon S3. Have the website use an Amazon RDS database for user accounts and sharing.
Answer: D
Explanation:
https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-images.html
NEW QUESTION 19
A company is running a large application on-premises. Its technology stack consists of Microsoft .NET for the web server platform and Apache Cassandra for the database. The company wants to migrate the application to AWS to improve service reliability. The IT team also wants to reduce the time it spends on capacity management and maintenance of this infrastructure. The Development team is willing and available to make code changes to support the migration.
Which design is the LEAST complex to manage after the migration?
- A. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon Aurora with multiple read replicas, and run both in a Multi-AZ mode.
- B. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the Cassandra database to Amazon EC2 instances that are running in a Multi-AZ configuration.
- C. Migrate the web servers to an AWS Elastic Beanstalk environment that is running the .NET platform in a Multi-AZ Auto Scaling configuration. Migrate the existing Cassandra database to Amazon DynamoDB.
- D. Migrate the web servers to Amazon EC2 instances in an Auto Scaling group that is running .NET. Migrate the existing Cassandra database to Amazon DynamoDB.
Answer: B
NEW QUESTION 20
A company uses an Amazon EMR cluster to process data once a day. The raw data comes from Amazon S3, and the resulting processed data is also stored in Amazon S3. The processing must complete within 4 hours; currently, it only takes 3 hours. However, the processing is taking 5 to 10 minutes longer each week due to an increasing volume of raw data.
The team is also concerned about rising costs as the compute capacity increases. The EMR cluster is currently running on three m3.xlarge instances (one master and two core nodes).
Which of the following solutions will reduce costs related to the increasing compute needs?
- A. Add additional task nodes, but have the team purchase an all-upfront convertible Reserved Instance for each additional node to offset the costs.
- B. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a scheduled Reserved Instance for the master node.
- C. Add additional task nodes, but use instance fleets with the master node in Spot mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase enough scheduled Reserved Instances to offset the cost of running any On-Demand Instances.
- D. Add additional task nodes, but use instance fleets with the master node in On-Demand mode and a mix of On-Demand and Spot Instances for the core and task nodes. Purchase a standard all-upfront Reserved Instance for the master node.
Answer: B
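A sketch of the instance-fleet mix in the correct answer: an On-Demand master fleet and a core fleet that blends On-Demand with cheaper Spot capacity. The release label, instance types, and capacities are assumptions.

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="daily-processing",
    ReleaseLabel="emr-6.10.0",          # assumed release
    ServiceRole="EMR_DefaultRole",
    JobFlowRole="EMR_EC2_DefaultRole",
    Instances={
        "InstanceFleets": [
            {   # Master stays On-Demand so the cluster cannot lose its coordinator.
                "InstanceFleetType": "MASTER",
                "TargetOnDemandCapacity": 1,
                "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"}],
            },
            {   # Core mixes On-Demand with Spot to absorb growth at lower cost.
                "InstanceFleetType": "CORE",
                "TargetOnDemandCapacity": 1,
                "TargetSpotCapacity": 4,
                "InstanceTypeConfigs": [{"InstanceType": "m5.xlarge"},
                                        {"InstanceType": "m5a.xlarge"}],
            },
        ],
        "KeepJobFlowAliveWhenNoSteps": False,
    },
)
```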
NEW QUESTION 21
An on-premises application will be migrated to the cloud. The application consists of a single Elasticsearch virtual machine with data source feeds from local systems that will not be migrated, and a Java web application on Apache Tomcat running on three virtual machines. The Elasticsearch server currently uses 1 TB of storage out of 16 TB of available storage, and the web application is updated every 4 months. Multiple users access the web application from the internet. There is a 10 Gbit AWS Direct Connect connection established, and the application can be migrated over a scheduled 48-hour change window.
Which strategy will have the LEAST impact on the Operations staff after the migration?
- A. Create an Elasticsearch server on Amazon EC2 right-sized with 2 TB of Amazon EBS and a public AWS Elastic Beanstalk environment for the web application. Pause the data sources, export the Elasticsearch index from on premises, and import it into the EC2 Elasticsearch server. Move the data source feeds to the new Elasticsearch server and move users to the web application.
- B. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Use AWS DMS to replicate Elasticsearch data. When replication has finished, move the data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
- C. Use AWS SMS to replicate the virtual machines into AWS. When the migration is complete, pause the data source feeds and start the migrated Elasticsearch and web application instances. Place the web application instances behind a public Elastic Load Balancer. Move the data source feeds to the new Elasticsearch server and move users to the new web Application Load Balancer.
- D. Create an Amazon ES cluster for Elasticsearch and a public AWS Elastic Beanstalk environment for the web application. Pause the data source feeds, export the Elasticsearch index from on premises, and import it into the Amazon ES cluster. Move the data source feeds to the new Amazon ES cluster endpoint and move users to the new web application.
Answer: D
NEW QUESTION 22
A company has multiple AWS accounts hosting IT applications. An Amazon CloudWatch Logs agent is installed on all Amazon EC2 instances. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage.
Security Administrators need to perform near-real-time gathering and correlating of events across multiple AWS accounts.
Which solution satisfies these requirements?
- A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account.
- B. Configure CloudWatch Logs streams in each application AWS account to forward events to CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon Kinesis Data Firehose stream to Amazon CloudWatch Events, and use the stream to persist log data in Amazon S3.
- D. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account.
- E. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the logging AWS account, use an AWS Lambda function to read messages from the stream and push messages to Data Firehose, and persist the data in Amazon S3.
Answer: C
Explanation:
The solution uses Amazon Kinesis Data Streams and a log destination to set up an endpoint in the logging account to receive streamed logs and uses Amazon Kinesis Data Firehose to deliver log data to the Amazon Simple Storage Solution (S3) bucket. Application accounts will subscribe to stream all (or part) of their Amazon CloudWatch logs to a defined destination in the logging account via subscription filters. https://aws.amazon.com/blogs/architecture/central-logging-in-multi-account-environments/
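A sketch of the two halves of this pattern, following the linked blog: the logging account exposes a CloudWatch Logs destination backed by a Kinesis data stream, and each application account subscribes its log groups to it. All ARNs, account IDs, and names are placeholders.

```python
import json
import boto3

logs = boto3.client("logs")

# --- In the logging account: a destination backed by a Kinesis data stream.
dest = logs.put_destination(
    destinationName="central-security-logs",
    targetArn="arn:aws:kinesis:us-east-1:999999999999:stream/security-events",
    roleArn="arn:aws:iam::999999999999:role/CWLtoKinesisRole",
)

# Allow the application accounts to subscribe to this destination.
logs.put_destination_policy(
    destinationName="central-security-logs",
    accessPolicy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["111122223333"]},   # application account(s)
            "Action": "logs:PutSubscriptionFilter",
            "Resource": dest["destination"]["arn"],
        }],
    }),
)

# --- In each application account: stream security events to the destination.
logs.put_subscription_filter(
    logGroupName="/var/log/secure",            # assumed log group
    filterName="to-central-logging",
    filterPattern="",                          # forward everything
    destinationArn="arn:aws:logs:us-east-1:999999999999:destination:central-security-logs",
)
```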
NEW QUESTION 23
A Solutions Architect needs to design a highly available application that will allow authenticated users to stay connected to the application even when there are underlying failures.
Which solution will meet these requirements?
- A. Deploy the application on Amazon EC2 instances. Use Amazon Route 53 to forward requests to the EC2 instances. Use Amazon DynamoDB to save the authenticated connection details.
- B. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer to handle requests. Use Amazon DynamoDB to save the authenticated connection details.
- C. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front end. Use EC2 instances to save the authenticated connection details.
- D. Deploy the application on Amazon EC2 instances in an Auto Scaling group. Use an internet-facing Application Load Balancer on the front end. Use EC2 instances hosting a MySQL database to save the authenticated connection details.
Answer: B
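A sketch of the session-externalization idea behind the correct answer: authenticated connection details written to DynamoDB with a TTL, so any instance behind the ALB can resume the session after a failure. The table schema and TTL attribute name are assumptions.

```python
import time
from typing import Optional

import boto3

sessions = boto3.resource("dynamodb").Table("sessions")  # assumed table

def save_session(session_id: str, user_id: str) -> None:
    # Any web server behind the ALB can read this row, so a single
    # instance failure does not log the user out.
    sessions.put_item(Item={
        "session_id": session_id,
        "user_id": user_id,
        "expires_at": int(time.time()) + 3600,  # DynamoDB TTL attribute (assumed)
    })

def load_session(session_id: str) -> Optional[dict]:
    resp = sessions.get_item(Key={"session_id": session_id})
    return resp.get("Item")
```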
NEW QUESTION 24
An online retailer needs to regularly process large product catalogs, which are handled in batches. These are sent out to be processed by people using the Amazon Mechanical Turk service, but the retailer has asked its Solutions Architect to design a workflow orchestration system that allows it to handle multiple concurrent Mechanical Turk operations, deal with the result assessment process, and reprocess failures.
Which of the following options gives the retailer the ability to interrogate the state of every workflow with the LEAST amount of implementation effort?
- A. Trigger Amazon CloudWatch alarms based upon message visibility in multiple Amazon SQS queues (one queue per workflow stage) and send messages via Amazon SNS to trigger AWS Lambda functions to process the next step. Use Amazon ES and Kibana to visualize Lambda processing logs to see the workflow states.
- B. Hold workflow information in an Amazon RDS instance with AWS Lambda functions polling RDS for status changes. Worker Lambda functions then process the next workflow step. Amazon QuickSight will visualize workflow states directly out of Amazon RDS.
- C. Build the workflow in AWS Step Functions, using it to orchestrate multiple concurrent workflows. The status of each workflow can be visualized in the AWS Management Console, and historical data can be written to Amazon S3 and visualized using Amazon QuickSight.
- D. Use Amazon SWF to create a workflow that handles a single batch of catalog records with multiple worker tasks to extract the data, transform it, and send it through Mechanical Turk. Use Amazon ES and Kibana to visualize AWS Lambda processing logs to see the workflow states.
Answer: C
Explanation:
AWS Step Functions is a fully managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Instead of writing a Decider program, you define state machines in JSON. AWS customers should consider using Step Functions for new applications. If Step Functions does not fit your needs, then you should consider Amazon Simple Workflow (SWF). Amazon SWF provides you complete control over your orchestration logic, but increases the complexity of developing applications. You may write decider programs in the programming language of your choice, or you may use the Flow framework to use programming constructs that structure asynchronous interactions for you. AWS will continue to provide the Amazon SWF service, Flow framework, and support all Amazon SWF customers. https://aws.amazon.com/swf/faqs/
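A sketch of the Step Functions approach the explanation recommends: a state machine with a retry policy on the processing step, so failed batches are reprocessed and every execution's state is visible in the console. The ASL definition, Lambda functions, and ARNs are placeholders.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Minimal ASL: process a batch, retry on failure, then assess the results.
definition = {
    "StartAt": "ProcessBatch",
    "States": {
        "ProcessBatch": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:submit-to-mturk",
            "Retry": [{"ErrorEquals": ["States.TaskFailed"],
                       "IntervalSeconds": 30, "MaxAttempts": 3, "BackoffRate": 2.0}],
            "Next": "AssessResults",
        },
        "AssessResults": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:assess-results",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="catalog-batch-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsRole",
)
```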
NEW QUESTION 25
What combination of steps could a Solutions Architect take to protect a web workload running on Amazon EC2 from DDoS and application layer attacks? (Select two.)
- A. Put the EC2 instances behind a Network Load Balancer and configure AWS WAF on it.
- B. Migrate the DNS to Amazon Route 53 and use AWS Shield
- C. Put the EC2 instances in an Auto Scaling group and configure AWS WAF on it.
- D. Create and use an Amazon CloudFront distribution and configure AWS WAF on it.
- E. Create and use an internet gateway in the VPC and use AWS Shield.
Answer: BD
Explanation:
References: https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/
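A sketch of the AWS WAF half of the recommended combination: a web ACL scoped for CloudFront with an AWS managed rule group that covers common application-layer attacks. The ACL's ARN is then set on the CloudFront distribution; names are placeholders.

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
waf = boto3.client("wafv2", region_name="us-east-1")

acl = waf.create_web_acl(
    Name="edge-protection",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "common-rules",
        "Priority": 0,
        # AWS managed rule group covering common application-layer attacks.
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "common-rules"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "edge-protection"},
)
# acl["Summary"]["ARN"] goes into the CloudFront distribution's WebACLId.
```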
100% Valid and Newest Version SAP-C01 Questions & Answers shared by Certifytools, Get Full Dumps HERE: https://www.certifytools.com/SAP-C01-exam.html (New 179 Q&As)