SAA-C03 Exam - AWS Certified Solutions Architect - Associate (SAA-C03)

certleader.com

Because all that matters here is passing the Amazon-Web-Services SAA-C03 exam. Because all that you need is a high score on the SAA-C03 AWS Certified Solutions Architect - Associate (SAA-C03) exam. The only thing you need to do is download the Ucertify SAA-C03 exam study guides now. We will not let you down, and we back that with our money-back guarantee.

Online Amazon-Web-Services SAA-C03 free dumps demo Below:

NEW QUESTION 1
A company needs to move data from an Amazon EC2 instance to an Amazon S3 bucket. The company must ensure that no API calls and no data are routed through public internet routes. Only the EC2 instance can have access to upload data to the S3 bucket.
Which solution will meet these requirements?

  • A. Create an interface VPC endpoint for Amazon S3 in the subnet where the EC2 instance is located. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • B. Create a gateway VPC endpoint for Amazon S3 in the Availability Zone where the EC2 instance is located. Attach appropriate security groups to the endpoint. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • C. Run the nslookup tool from inside the EC2 instance to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.
  • D. Use the AWS-provided, publicly available ip-ranges.json file to obtain the private IP address of the S3 bucket's service API endpoint. Create a route in the VPC route table to provide the EC2 instance with access to the S3 bucket. Attach a resource policy to the S3 bucket to only allow the EC2 instance's IAM role for access.

Answer: B
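For illustration only, here is a minimal boto3 (Python) sketch of the bucket resource policy piece that both endpoint options rely on: it limits access to the instance's IAM role and to requests that arrive through the VPC endpoint. The bucket name, role ARN, and endpoint ID are placeholders, not values from the question.

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical names: replace with the real bucket, role ARN, and endpoint ID.
BUCKET = "example-upload-bucket"
INSTANCE_ROLE_ARN = "arn:aws:iam::111122223333:role/ec2-upload-role"
VPC_ENDPOINT_ID = "vpce-0abc123def456"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Block any request that does not arrive through the VPC endpoint,
            # so no data is routed over public internet paths.
            "Sid": "DenyOutsideVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": VPC_ENDPOINT_ID}},
        },
        {
            # Block principals other than the instance's role (in practice you
            # would also exempt break-glass or administrative principals).
            "Sid": "DenyOtherPrincipals",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"StringNotEquals": {"aws:PrincipalArn": INSTANCE_ROLE_ARN}},
        },
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```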

NEW QUESTION 2
A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • B. Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
  • D. Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.

Answer: B
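As a rough sketch of the notification half of answer B, the following boto3 snippet configures an S3 event notification that publishes object-created events to an SNS topic. The bucket and topic names are hypothetical, and the AppFlow flow itself is not shown.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and topic; the topic's access policy must already allow
# s3.amazonaws.com to publish to it.
BUCKET = "example-analysis-bucket"
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:upload-complete"

s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "TopicArn": TOPIC_ARN,
                # Notify subscribers whenever an object finishes uploading.
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```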

NEW QUESTION 3
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company's AWS account in ap-southeast-3.
What should a solutions architect do to meet these requirements?

  • A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company's AWS account.
  • B. Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the snapshot with the acquiring company's AWS account.
  • C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company's AWS account to the KMS key alias. Share the snapshot with the acquiring company's AWS account.
  • D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3 bucket policy to allow access from the acquiring company's AWS account.

Answer: B
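A minimal boto3 sketch of answer B, assuming an Aurora cluster snapshot and a placeholder account ID. The key-policy statement shown must be merged into the customer managed key's existing policy (for example with kms.put_key_policy) before the snapshot is shared.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers: the snapshot name and the acquiring company's account ID.
SNAPSHOT_ID = "confidential-db-final-snapshot"
ACQUIRER_ACCOUNT_ID = "444455556666"

# Key-policy statement to merge into the customer managed key's policy so the
# acquiring account can use the key when it copies or restores the shared snapshot.
kms_statement = {
    "Sid": "AllowAcquirerUseOfTheKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{ACQUIRER_ACCOUNT_ID}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}

# Share the encrypted Aurora cluster snapshot with the acquiring account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier=SNAPSHOT_ID,
    AttributeName="restore",
    ValuesToAdd=[ACQUIRER_ACCOUNT_ID],
)
```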

NEW QUESTION 4
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications.
Which action should the solutions architect take?

  • A. Configure a CloudFront signed URL
  • B. Configure a CloudFront signed cookie.
  • C. Configure a CloudFront field-level encryption profile
  • D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy

Answer: C
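The sketch below shows, with hypothetical names and a placeholder RSA public key, how a field-level encryption profile might be created with boto3. The profile would then be attached to a field-level encryption configuration and to the distribution's cache behavior (not shown).

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical: an RSA public key (PEM) whose private key only the authorized
# backend application holds, and a sensitive form field named "card-number".
with open("public_key.pem") as f:
    encoded_key = f.read()

key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "fle-demo-key-1",
        "Name": "sensitive-field-key",
        "EncodedKey": encoded_key,
    }
)

cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-field-profile",
        "CallerReference": "fle-demo-profile-1",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": key["PublicKey"]["Id"],
                    "ProviderId": "example-provider",
                    # Only these POST fields are encrypted at the edge.
                    "FieldPatterns": {"Quantity": 1, "Items": ["card-number"]},
                }
            ],
        },
    }
)
```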

NEW QUESTION 5
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?

  • A. Create a gateway VPC endpoint to the S3 bucket.
  • B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
  • C. Create an instance profile on Amazon EC2 to allow S3 access.
  • D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer: A
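A minimal boto3 sketch of answer A, using placeholder VPC and route table IDs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical VPC and route table IDs for the subnet the instance uses.
VPC_ID = "vpc-0abc1234"
ROUTE_TABLE_ID = "rtb-0def5678"

# A gateway endpoint adds an S3 route to the chosen route tables, so the
# instance reaches S3 privately without an internet gateway or NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=[ROUTE_TABLE_ID],
)
```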

NEW QUESTION 6
A company uses NFS to store large video files in on-premises network attached storage. Each video file ranges in size from 1MB to 500 GB. The total storage is 70 TB and is no longer growing. The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?

  • A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
  • B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
  • C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.
  • D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Answer: B
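As an illustration of answer B, a Snowball Edge import job might be created with boto3 roughly as follows. The address ID, role ARN, and bucket ARN are placeholders, and the actual data copy is performed on premises with the Snowball Edge client.

```python
import boto3

snowball = boto3.client("snowball")

# Hypothetical values: the shipping address ID (from create_address), the IAM
# role Snowball assumes to import into S3, and the target bucket ARN.
job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE_S",  # Snowball Edge Storage Optimized
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::example-video-archive"}]
    },
    AddressId="ADID-example",
    RoleARN="arn:aws:iam::111122223333:role/snowball-import-role",
    ShippingOption="STANDARD",
    Description="Import 70 TB of NFS video files",
)
print(job["JobId"])
```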

NEW QUESTION 7
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?

  • A. Server-side encryption with customer-provided keys (SSE-C)
  • B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
  • C. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with manual rotation
  • D. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation

Answer: D

Explanation:
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
When you enable automatic key rotation for a customer managed key, AWS KMS generates new cryptographic material for the KMS key every year. AWS KMS also saves the KMS key's older cryptographic material in perpetuity so it can be used to decrypt data that the KMS key encrypted.
Key rotation in AWS KMS is a cryptographic best practice that is designed to be transparent and easy to use.
AWS KMS supports optional automatic key rotation only for customer managed CMKs. Automatic key rotation is disabled by default on customer managed CMKs. When you enable (or re-enable) key rotation, AWS KMS automatically rotates the CMK 365 days after the enable date and every 365 days thereafter.
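A minimal boto3 sketch of answer D, assuming a placeholder bucket name: create a customer managed key, enable yearly automatic rotation, and set SSE-KMS with that key as the bucket default.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

BUCKET = "example-confidential-bucket"  # hypothetical

# Customer managed key; every use of the key is logged to CloudTrail.
key = kms.create_key(Description="Key for confidential S3 data")
key_id = key["KeyMetadata"]["KeyId"]

# Automatic rotation: AWS KMS rotates the key material every year.
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with this key the default encryption for the bucket.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)
```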

NEW QUESTION 8
A company has enabled AWS CloudTrail logs to deliver log files to an Amazon S3 bucket for each of its developer accounts. The company has created a central AWS account for streamlining management and audit reviews. An internal auditor needs to access the CloudTrail logs, yet access needs to be restricted for all developer account users. The solution must be secure and optimized.
How should a solutions architect meet these requirements?

  • A. Configure an AWS Lambda function in each developer account to copy the log files to the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
  • B. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.
  • C. Configure CloudTrail from each developer account to deliver the log files to an S3 bucket in the central account. Create an IAM role in the central account for the auditor. Attach an IAM policy providing read-only permissions to the bucket.
  • D. Configure an AWS Lambda function in the central account to copy the log files from the S3 bucket in each developer account. Create an IAM user in the central account for the auditor. Attach an IAM policy providing full permissions to the bucket.

Answer: C

Explanation:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs.html
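To make answer C concrete, the following boto3 sketch creates a read-only auditor role in the central account. The bucket name and trusted principal are placeholders, and the cross-account CloudTrail delivery (a bucket policy that allows cloudtrail.amazonaws.com to write from each developer account) is assumed to already be in place.

```python
import json
import boto3

iam = boto3.client("iam")

LOG_BUCKET = "central-cloudtrail-logs"                       # hypothetical
AUDIT_TRUSTED_PRINCIPAL = "arn:aws:iam::111122223333:root"   # hypothetical

# Role the auditor assumes in the central account.
iam.create_role(
    RoleName="cloudtrail-auditor",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": AUDIT_TRUSTED_PRINCIPAL},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Read-only access, scoped to the log bucket only.
iam.put_role_policy(
    RoleName="cloudtrail-auditor",
    PolicyName="cloudtrail-logs-read-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{LOG_BUCKET}",
                f"arn:aws:s3:::{LOG_BUCKET}/*",
            ],
        }],
    }),
)
```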

NEW QUESTION 9
A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. The company wants to modernize the application with a solution that maximizes resiliency and scalability.
How should a solutions architect design the architecture to meet these requirements?

  • A. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling to use scheduled scaling.
  • B. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a destination for the jobs. Implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure EC2 Auto Scaling based on the size of the queue.
  • C. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure AWS CloudTrail as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the primary server.
  • D. Implement the primary server and the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group. Configure Amazon EventBridge (Amazon CloudWatch Events) as a destination for the jobs. Configure EC2 Auto Scaling based on the load on the compute nodes.

Answer: B
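One way to scale on queue size (answer B) is a step scaling policy driven by a CloudWatch alarm on the queue's ApproximateNumberOfMessagesVisible metric. The sketch below uses placeholder names, and the threshold is chosen arbitrarily.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "compute-nodes-asg"   # hypothetical
QUEUE_NAME = "jobs-queue"        # hypothetical

# Add instances when the queue backlog grows.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-queue-depth",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0.0, "ScalingAdjustment": 2}],
)

# Alarm on the number of visible messages in the jobs queue.
cloudwatch.put_metric_alarm(
    AlarmName="jobs-queue-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": QUEUE_NAME}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```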

NEW QUESTION 10
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?

  • A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
  • B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
  • C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.
  • D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.

Answer: C
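A minimal boto3 sketch of answer C, with an assumed function timeout and batch window: raise the queue's visibility timeout above their total so an in-flight message is not redelivered while the function is still processing it. The queue URL and timing values are placeholders.

```python
import boto3

sqs = boto3.client("sqs")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/image-events"  # hypothetical

# Suppose the Lambda function timeout is 120 s and the batch window is 30 s.
function_timeout = 120
batch_window = 30

# Set the visibility timeout above timeout + batch window so a message being
# processed does not become visible again and trigger a second invocation.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"VisibilityTimeout": str(function_timeout + batch_window + 30)},
)
```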

NEW QUESTION 11
A company collects data from thousands of remote devices by using a RESTful web services application that runs on an Amazon EC2 instance. The EC2 instance receives the raw data, transforms the raw data, and stores all the data in an Amazon S3 bucket. The number of remote devices will increase into the millions soon. The company needs a highly scalable solution that minimizes operational overhead.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Use AWS Glue to process the raw data in Amazon S3.
  • B. Use Amazon Route 53 to route traffic to different EC2 instances.
  • C. Add more EC2 instances to accommodate the increasing amount of incoming data.
  • D. Send the raw data to Amazon Simple Queue Service (Amazon SQS). Use EC2 instances to process the data.
  • E. Use Amazon API Gateway to send the raw data to an Amazon Kinesis data stream. Configure Amazon Kinesis Data Firehose to use the data stream as a source to deliver the data to Amazon S3.

Answer: AE
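To illustrate the ingestion half of the answer (option E), the sketch below creates an on-demand Kinesis data stream and a Firehose delivery stream that reads it and delivers to S3. The ARNs are placeholders, and the API Gateway front end and the Glue job (option A) are not shown.

```python
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

# Hypothetical names/ARNs; the Firehose role needs Kinesis read and S3 write access.
STREAM_NAME = "device-raw-data"
STREAM_ARN = "arn:aws:kinesis:us-east-1:111122223333:stream/device-raw-data"
FIREHOSE_ROLE_ARN = "arn:aws:iam::111122223333:role/firehose-delivery-role"
BUCKET_ARN = "arn:aws:s3:::device-data-lake"

# An on-demand stream scales with the number of devices.
kinesis.create_stream(
    StreamName=STREAM_NAME,
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Firehose reads the stream and delivers batches to S3 with no servers to manage.
firehose.create_delivery_stream(
    DeliveryStreamName="device-raw-data-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": STREAM_ARN,
        "RoleARN": FIREHOSE_ROLE_ARN,
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": FIREHOSE_ROLE_ARN,
        "BucketARN": BUCKET_ARN,
    },
)
```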

NEW QUESTION 12
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2 instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not receiving all the data that the application sends to Kinesis Data Streams.
What should a solutions architect do to resolve this issue?

  • A. Update the Kinesis Data Streams default settings by modifying the data retention period.
  • B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
  • C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
  • D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.

Answer: A
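A minimal boto3 sketch of answer A, using a placeholder stream name: the default retention period is 24 hours, so raise it above the two-day consumption interval.

```python
import boto3

kinesis = boto3.client("kinesis")

# Default retention is 24 hours; the consumer runs every other day, so extend
# retention to at least 48 hours so records are still available when it reads.
kinesis.increase_stream_retention_period(
    StreamName="app-data-stream",   # hypothetical stream name
    RetentionPeriodHours=48,
)
```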

NEW QUESTION 13
A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)

  • A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
  • B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
  • C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
  • D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
  • E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.

Answer: AE

Explanation:
Before you begin: Decide which two Availability Zones you will use for your EC2 instances. Configure your virtual private cloud (VPC) with at least one public subnet in each of these Availability Zones. These public subnets are used to configure the load balancer. You can launch your EC2 instances in other subnets of these Availability Zones instead.
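As a rough sketch of answers A and E, the following boto3 snippet lays out two public and two private subnets across two Availability Zones, one NAT gateway per AZ, and an internet-facing ALB in the public subnets. All IDs and CIDR ranges are placeholders, and route table wiring is omitted.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

VPC_ID = "vpc-0abc1234"  # hypothetical

# Two public and two private subnets across two Availability Zones.
pub_a = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a")
pub_b = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1b")
priv_a = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a")
priv_b = ec2.create_subnet(VpcId=VPC_ID, CidrBlock="10.0.3.0/24", AvailabilityZone="us-east-1b")

# One NAT gateway per AZ (in the public subnets) so the private EC2 instances
# can reach the third-party payment service.
for public in (pub_a, pub_b):
    eip = ec2.allocate_address(Domain="vpc")
    ec2.create_nat_gateway(
        SubnetId=public["Subnet"]["SubnetId"],
        AllocationId=eip["AllocationId"],
    )

# Internet-facing ALB in the public subnets; EC2 targets stay in private subnets.
elbv2.create_load_balancer(
    Name="web-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=[pub_a["Subnet"]["SubnetId"], pub_b["Subnet"]["SubnetId"]],
)
```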

NEW QUESTION 14
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company's data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
  • B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
  • C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
  • D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.

Answer: C
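A minimal boto3 sketch of the Glue portion of answer C, assuming the weekly transformation has been rewritten as a PySpark script stored at a placeholder S3 location.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical: the transformation logic rewritten as a PySpark script that
# reads the imported data from S3 after the Snowball transfer completes.
glue.create_job(
    Name="weekly-report-transform",
    Role="arn:aws:iam::111122223333:role/glue-transform-role",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://example-etl-scripts/weekly_transform.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
)
```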

NEW QUESTION 15
A company runs an application on Amazon EC2 instances in an Auto Scaling group with an Amazon Aurora PostgreSQL database, all in a single Availability Zone. The company wants the application to be highly available with minimum downtime and minimum loss of data.
Which solution will meet these requirements with the LEAST operational effort?

  • A. Place the EC2 instances in different AWS Regions. Use Amazon Route 53 health checks to redirect traffic. Use Aurora PostgreSQL Cross-Region Replication.
  • B. Configure the Auto Scaling group to use multiple Availability Zones. Configure the database as Multi-AZ. Configure an Amazon RDS Proxy instance for the database.
  • C. Configure the Auto Scaling group to use one Availability Zone. Generate hourly snapshots of the database. Recover the database from the snapshots in the event of a failure.
  • D. Configure the Auto Scaling group to use multiple AWS Regions. Write the data from the application to Amazon S3. Use S3 Event Notifications to launch an AWS Lambda function to write the data to the database.

Answer: B
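A minimal boto3 sketch of answer B, with placeholder names: spread the Auto Scaling group across subnets in two Availability Zones and add an Aurora reader in a second AZ. The RDS Proxy piece is omitted here.

```python
import boto3

autoscaling = boto3.client("autoscaling")
rds = boto3.client("rds")

# Hypothetical names: spread the Auto Scaling group across subnets in two AZs.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # one subnet per AZ
)

# Add an Aurora reader in a second AZ so the cluster can fail over with
# minimal downtime and data loss.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-reader-az2",
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
    AvailabilityZone="us-east-1b",
)
```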

NEW QUESTION 16
A company wants to migrate its on-premises application to AWS. The application produces output files that vary in size from tens of gigabytes to hundreds of terabytes. The application data must be stored in a standard file system structure. The company wants a solution that scales automatically, is highly available, and requires minimum operational overhead.
Which solution will meet these requirements?

  • A. Migrate the application to run as containers on Amazon Elastic Container Service (Amazon ECS). Use Amazon S3 for storage.
  • B. Migrate the application to run as containers on Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Block Store (Amazon EBS) for storage.
  • C. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic File System (Amazon EFS) for storage.
  • D. Migrate the application to Amazon EC2 instances in a Multi-AZ Auto Scaling group. Use Amazon Elastic Block Store (Amazon EBS) for storage.

Answer: C
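A minimal boto3 sketch of the storage half of answer C, with placeholder subnet and security group IDs: create an EFS file system and a mount target in each Availability Zone the Auto Scaling group uses.

```python
import boto3

efs = boto3.client("efs")

# Hypothetical subnet and security-group IDs for the two AZs the Auto Scaling group uses.
fs = efs.create_file_system(
    CreationToken="app-output-files",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per AZ so instances in either AZ can mount the file system.
for subnet_id in ("subnet-0aaa1111", "subnet-0bbb2222"):
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0ccc3333"],
    )
```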

NEW QUESTION 17
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?

  • A. Use AWS Secrets Manager. Turn on automatic rotation.
  • B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  • C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  • D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Answer: A
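A minimal boto3 sketch of answer A, with placeholder values: store the database credentials in Secrets Manager and turn on automatic rotation. The rotation Lambda ARN shown is hypothetical; for Aurora PostgreSQL, Secrets Manager can also provision a managed rotation function for you.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Hypothetical secret contents; the application reads this secret at runtime
# instead of a local credentials file.
secret = secrets.create_secret(
    Name="app/aurora-credentials",
    SecretString=json.dumps({
        "username": "app_user",
        "password": "initial-password-to-be-rotated",
        "host": "app-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        "port": 5432,
    }),
)

# Rotate automatically every 30 days using a rotation Lambda function.
secrets.rotate_secret(
    SecretId=secret["ARN"],
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-aurora-secret",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```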

NEW QUESTION 18
......

Thanks for reading the newest SAA-C03 exam dumps! We recommend that you try the PREMIUM Dumps-hub.com SAA-C03 dumps in VCE and PDF here: https://www.dumps-hub.com/SAA-C03-dumps.html (0 Q&As Dumps)