AWS-Solution-Architect-Associate Exam - AWS Certified Solutions Architect - Associate

certleader.com

Our pass rate is as high as 98.9%, and the similarity between our AWS Solutions Architect Associate dumps and the real exam is about 90%, based on our seven years of teaching experience. Do you want to pass the Amazon AWS-Solution-Architect-Associate exam on your first try? Study the latest AWS Solutions Architect Associate questions and try the Amazon AWS-Solution-Architect-Associate brain dumps first.

Online Amazon AWS-Solution-Architect-Associate free dumps demo below:

NEW QUESTION 1
You are setting up some EBS volumes for a customer who has requested a setup which includes a RAID (redundant array of inexpensive disks). AWS has some recommendations for RAID setups. Which RAID setup is not recommended for Amazon EBS?

  • A. RAID 5 only
  • B. RAID 5 and RAID 6
  • C. RAID 1 only
  • D. RAID 1 and RAID 6

Answer: B

Explanation: With Amazon EBS, you can use any of the standard RAID configurations that you can use with a traditional bare metal server, as long as that particular RAID configuration is supported by the operating system for your instance. This is because all RAID is accomplished at the software level. For greater I/O performance than you can achieve with a single volume, RAID 0 can stripe multiple volumes together; for on-instance redundancy, RAID 1 can mirror two volumes together.
RAID 5 and RAID 6 are not recommended for Amazon EBS because the parity write operations of these RAID modes consume some of the IOPS available to your volumes.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/raid-config.html
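
As a rough illustration of the explanation above, here is a hedged sketch in Python with boto3 (the region, Availability Zone, instance ID, device names, sizes, and IOPS figures are placeholder assumptions). It provisions two identical Provisioned IOPS volumes and attaches them to an instance; the RAID 0 or RAID 1 array itself is then assembled inside the operating system, for example with mdadm.

```python
import boto3

# Assumed region/AZ, instance ID, and device names -- replace with your own.
ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

volume_ids = []
for device in ("/dev/sdf", "/dev/sdg"):
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # must match the instance's AZ
        Size=100,                        # GiB
        VolumeType="io1",
        Iops=2000,
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
    ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id, Device=device)
    volume_ids.append(vol["VolumeId"])

# RAID is done entirely in software on the instance, e.g.:
#   mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdf /dev/sdg   (striping)
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdf /dev/sdg   (mirroring)
# RAID 5/6 would also work mechanically, but AWS advises against them because
# parity writes consume part of the volumes' provisioned IOPS.
print("Created and attached:", volume_ids)
```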

NEW QUESTION 2
You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps of throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with
4,000 IOPS (4,000 16 KB reads or writes), for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS of random read and write performance. Some time later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?

  • A. Larger storage volumes support higher Provisioned IOPS rates; increase the provisioned volume storage of each of the 6 EBS volumes to 1 TB.
  • B. The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.
  • C. Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64 KB blocks to increase throughput.
  • D. RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.
  • E. The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500 GB 4,000 Provisioned IOPS volume.

Answer: B

Explanation: The dedicated EBS-Optimized throughput between the instance and EBS is the bottleneck: once that link is saturated, striping additional Provisioned IOPS volumes into the RAID 0 set cannot raise the total IOPS measured at the instance. The remedy is to move to an EBS-Optimized instance type with higher dedicated throughput to EBS. The root volume (E) does not cap the IOPS of the other volumes, and RAID 0 (D) scales beyond four devices.
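
A quick back-of-the-envelope calculation (plain Python; the bandwidth and I/O-size figures are taken from the question and treated as illustrative) shows why the per-instance EBS-Optimized link, rather than the number of volumes, sets the IOPS ceiling:

```python
# IOPS that a dedicated EC2<->EBS link can carry at a given I/O size.
link_mbps = 500            # EBS-Optimized bandwidth from the question (megabits/s)
io_size_bytes = 16 * 1024  # 16 KB I/O, the unit Provisioned IOPS is measured in

link_bytes_per_sec = link_mbps * 1_000_000 / 8
link_iops_ceiling = link_bytes_per_sec / io_size_bytes

provisioned_iops = 6 * 4000  # six volumes at 4,000 IOPS each

print(f"Link ceiling at 16 KB I/O: ~{link_iops_ceiling:,.0f} IOPS")
print(f"IOPS provisioned on the volumes: {provisioned_iops:,}")
# Once the IOPS provisioned on the volumes exceed what the instance's dedicated
# EBS link can carry, adding more volumes cannot raise measured IOPS; only an
# instance type with more EBS-Optimized throughput can.
```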

NEW QUESTION 3
A user wants to increase the durability and availability of the EBS volume. Which of the following actions should he perform?

  • A. Take regular snapshots.
  • B. Create an AMI.
  • C. Create EBS with higher capacity.
  • D. Access EBS regularly.

Answer: A

Explanation: In Amazon Web Services, Amazon EBS volumes that operate with 20 GB or less of modified data since their most recent snapshot can expect an annual failure rate (AFR) between 0.1% and 0.5%. For this reason, to maximize both durability and availability of their Amazon EBS data, the user should frequently create snapshots of the Amazon EBS volumes.
Reference: http://media.amazonwebservices.com/AWS_Storage_Options.pdf
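
A minimal sketch of the recommended practice (Python with boto3; the region and volume ID are placeholder assumptions) that takes a point-in-time snapshot of an EBS volume:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Frequent snapshots are what actually raise the durability/availability of EBS
# data: each snapshot is stored in S3 and can be used to recreate the volume.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # placeholder volume ID
    Description="Scheduled durability snapshot",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
print("Snapshot ready:", snapshot["SnapshotId"])
```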

NEW QUESTION 4
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user.
Which two approaches can satisfy these objectives? (Choose 2 answers)

  • A. Develop an identity broker that authenticates against the IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.
  • B. The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.
  • C. Develop an identity broker that authenticates against LDAP and then calls the IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
  • D. The application authenticates against LDAP; the application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials; the application can use the IAM temporary credentials to access the appropriate S3 bucket.
  • E. The application authenticates against the IAM Security Token Service using the LDAP credentials; the application uses those temporary AWS security credentials to access the appropriate S3 bucket.

Answer: BC
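
To make the two correct patterns concrete, here is a hedged sketch (Python with boto3; the role ARN, bucket name, and user prefix are placeholder assumptions) of the step both answers share: after LDAP authentication succeeds, the broker or application calls STS for temporary credentials scoped to the user's S3 keyspace.

```python
import boto3

def credentials_for_user(username: str) -> dict:
    """Called only after the user has authenticated against the on-premises LDAP."""
    sts = boto3.client("sts")
    # The role (or federated-user policy) is what restricts access to the
    # user's own S3 prefix; the ARN below is a placeholder.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/app-s3-per-user",
        RoleSessionName=username,
        DurationSeconds=3600,
    )
    return resp["Credentials"]

creds = credentials_for_user("jdoe")
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# The temporary credentials only allow the caller's own keyspace (assumed prefix).
print(s3.list_objects_v2(Bucket="corp-app-data", Prefix="users/jdoe/").get("KeyCount"))
```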

NEW QUESTION 5
When you put objects in Amazon S3, what is the indication that an object was successfully stored?

  • A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
  • B. Amazon S3 is engineered for 99.999999999% durability; therefore there is no need to confirm that data was inserted.
  • C. A success code is inserted into the S3 object metadata.
  • D. Each S3 account has a special bucket named _s3_logs; success codes are written to this bucket with a timestamp and checksum.

Answer: A
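
A small sketch of that success check (Python with boto3; the bucket and key are placeholder assumptions): a non-multipart PUT returns an HTTP 200 status, and its ETag is the MD5 of the uploaded bytes, so the two together confirm the object was stored intact.

```python
import boto3
import hashlib

s3 = boto3.client("s3")
body = b"example object payload"

resp = s3.put_object(Bucket="example-bucket", Key="demo/object.txt", Body=body)

status = resp["ResponseMetadata"]["HTTPStatusCode"]   # expect 200
etag = resp["ETag"].strip('"')                        # MD5 hex for single-part, non-KMS PUTs
local_md5 = hashlib.md5(body).hexdigest()

assert status == 200 and etag == local_md5, "upload not confirmed"
print("Stored successfully:", status, etag)
```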

NEW QUESTION 6
Your manager has just given you access to multiple VPN connections that someone else has recently set up between all your company's offices. She needs you to make sure that the communication between the VPNs is secure. Which of the following services would be best for providing a low-cost hub-and-spoke model for primary or backup connectivity between these remote offices?

  • A. Amazon CloudFront
  • B. AWS Direct Connect
  • C. AWS CloudHSM
  • D. AWS VPN CloudHub

Answer: D

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who would like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html

NEW QUESTION 7
You can modify the backup retention period; valid values are 0 (for no backup retention) up to a maximum of how many days?

  • A. 45
  • B. 35
  • C. 15
  • D. 5

Answer: B
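
For reference, setting the retention period on an RDS instance looks roughly like this (Python with boto3; the region and DB instance identifier are placeholder assumptions). Values from 0 (backups disabled) up to the 35-day maximum are accepted:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

# 35 is the maximum; 0 disables automated backups entirely.
rds.modify_db_instance(
    DBInstanceIdentifier="example-db",   # placeholder identifier
    BackupRetentionPeriod=35,
    ApplyImmediately=True,
)
```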

NEW QUESTION 8
A photo-sharing service stores pictures in Amazon Simple Storage Service (S3) and allows application sign-in using an OpenID Connect-compatible identity provider. Which AWS Security Token Service approach to temporary access should you use for the Amazon S3 operations?

  • A. SAML-based Identity Federation
  • B. Cross-Account Access
  • C. AWS Identity and Access Management roles
  • D. Web Identity Federation

Answer: D
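
A hedged sketch of Web Identity Federation (Python with boto3; the role ARN, bucket, and the OIDC token variable are placeholder assumptions): the token issued by the OpenID Connect provider is exchanged directly for temporary credentials, with no long-lived AWS keys embedded in the app.

```python
import boto3

def s3_client_for_oidc_token(oidc_id_token: str):
    # AssumeRoleWithWebIdentity is an unsigned call: no AWS credentials are
    # needed, only the token from the OpenID Connect-compatible provider.
    sts = boto3.client("sts")
    resp = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::123456789012:role/photo-app-users",  # placeholder
        RoleSessionName="photo-app-session",
        WebIdentityToken=oidc_id_token,
    )
    creds = resp["Credentials"]
    return boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Usage (assumed variables):
# s3 = s3_client_for_oidc_token(id_token_from_login)
# s3.put_object(Bucket="photo-share-pictures", Key="user123/pic.jpg", Body=data)
```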

NEW QUESTION 9
Having set up a website to automatically be redirected to a backup website if it fails, you realize that there are different types of failovers that are possible. You need all your resources to be available the majority of the time. Using Amazon Route 53, which configuration would best suit this requirement?

  • A. Active-active failover.
  • B. None; Route 53 can't fail over.
  • C. Active-passive failover.
  • D. Active-active-passive and other mixed configurations.

Answer: A

Explanation: You can set up a variety of failover configurations using Amazon Route 53 alias, weighted, latency, geolocation routing, and failover resource record sets.
Active-active failover: Use this failover configuration when you want all of your resources to be available the majority of the time. When a resource becomes unavailable, Amazon Route 53 can detect that it's unhealthy and stop including it when responding to queries.
Active-passive failover: Use this failover configuration when you want a primary group of resources to be available the majority of the time and you want a secondary group of resources to be on standby in case all of the primary resources become unavailable. When responding to queries, Amazon Route 53 includes only the healthy primary resources. If all of the primary resources are unhealthy, Amazon Route 53 begins to include only the healthy secondary resources in response to DNS queries.
Active-active-passive and other mixed configurations: You can combine alias and non-alias resource record sets to produce a variety of Amazon Route 53 behaviors.
Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
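
As an illustration of failover resource record sets (Python with boto3; the hosted zone ID, domain, IP addresses, and health check IDs are placeholder assumptions), the sketch below creates a primary/secondary pair; an active-active setup would instead use weighted or latency records that each carry a health check.

```python
import boto3

route53 = boto3.client("route53")

def failover_record(identifier: str, role: str, ip: str, health_check_id: str) -> dict:
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Failover": role,                 # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
            "HealthCheckId": health_check_id, # placeholder health check IDs below
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",      # placeholder zone ID
    ChangeBatch={
        "Changes": [
            failover_record("primary-site", "PRIMARY", "203.0.113.10", "hc-primary-id"),
            failover_record("backup-site", "SECONDARY", "203.0.113.20", "hc-backup-id"),
        ]
    },
)
```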

NEW QUESTION 10
An organization has three separate AWS accounts, one each for development, testing, and production. The organization wants the testing team to have access to certain AWS resources in the production account. How can the organization achieve this?

  • A. It is not possible to access resources of one account with another account.
  • B. Create the IAM roles with cross account access.
  • C. Create the IAM user in a test account, and allow it access to the production environment with the IAM policy.
  • D. Create the IAM users with cross account access.

Answer: B

Explanation: An organization has multiple AWS accounts to isolate a development environment from a testing or production environment. At times the users from one account need to access resources in the other account, such as promoting an update from the development environment to the production environment. In this case the IAM role with cross account access will provide a solution. Cross account access lets one account share access to their resources with users in the other AWS accounts.
Reference: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf
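
A minimal sketch of that cross-account role, created in the production account (Python with boto3; the account ID, role name, and policy choice are placeholder assumptions): the trust policy names the testing account as the principal allowed to assume the role.

```python
import boto3
import json

iam = boto3.client("iam")  # run with credentials for the *production* account

TESTING_ACCOUNT_ID = "111122223333"  # placeholder account ID

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TESTING_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="testing-team-access",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# Grant only the production resources the testers actually need; a managed
# read-only policy is attached here purely as an example.
iam.attach_role_policy(
    RoleName="testing-team-access",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
# Testers then call sts.assume_role(...) from the testing account to use it.
```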

NEW QUESTION 11
One of the criteria for a new deployment is that the customer wants to use AWS Storage Gateway. However, you are not sure whether you should use gateway-cached volumes or gateway-stored volumes, or even what the differences are. Which statement below best describes those differences?

  • A. Gateway-cached lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-stored enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.
  • B. Gateway-cached is free, whilst gateway-stored is not.
  • C. Gateway-cached is up to 10 times faster than gateway-stored.
  • D. Gateway-stored lets you store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached enables you to configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3.

Answer: A

Explanation: Volume gateways provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers. The gateway supports the following volume configurations:
Gateway-cached volumes — You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Gateway-cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.
Gateway-stored volumes — If you need low-latency access to your entire data set, you can configure your on-premises gateway to store all your data locally and then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive off-site backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.
Reference: http://docs.aws.amazon.com/storagegateway/latest/userguide/volume-gateway.html
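
If it helps to see the two volume types side by side in code, here is a hedged sketch using the Storage Gateway API via boto3 (the gateway ARN, disk ID, network interface IP, target names, and sizes are all placeholder assumptions):

```python
import boto3
import uuid

sgw = boto3.client("storagegateway", region_name="us-east-1")  # assumed region
gateway_arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"

# Gateway-cached: the volume's primary copy lives in S3; the local disks only cache.
sgw.create_cached_iscsi_volume(
    GatewayARN=gateway_arn,
    VolumeSizeInBytes=500 * 1024**3,          # 500 GiB
    TargetName="cached-vol-1",
    NetworkInterfaceId="10.0.0.10",           # gateway VM's local IP (placeholder)
    ClientToken=str(uuid.uuid4()),
)

# Gateway-stored: the full data set stays on a local disk; S3 holds async snapshots.
sgw.create_stored_iscsi_volume(
    GatewayARN=gateway_arn,
    DiskId="pci-0000:03:00.0-scsi-0:0:1:0",   # placeholder local disk ID
    PreserveExistingData=False,
    TargetName="stored-vol-1",
    NetworkInterfaceId="10.0.0.10",
)
```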

NEW QUESTION 12
A user has launched one EC2 instance in the US West region. The user wants to access the RDS instance launched in the US East region from that EC2 instance. How can the user configure the access for that EC2 instance?

  • A. Configure the IP range of the US West region instance as the ingress security rule of RDS
  • B. It is not possible to access RDS of the US East region from the US West region
  • C. Open the security group of the US West region in the RDS security group’s ingress rule
  • D. Create an IAM role which has access to RDS and launch an instance in the US West region with it

Answer: A

Explanation: The user cannot authorize an Amazon EC2 security group if it is in a different AWS Region than the RDS DB instance. The user can authorize an IP range or specify an Amazon EC2 security group in the same region that refers to an IP address in another region.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithSecurityGroups.html
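
In a VPC-based setup, "authorizing an IP range" amounts to adding the US West instance's public IP (or CIDR block) to the ingress rules of the security group attached to the RDS instance. A hedged sketch (Python with boto3; the security group ID, database port, and CIDR are placeholder assumptions):

```python
import boto3

# Run against the US East region, where the RDS instance lives (assumed).
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # SG attached to the RDS instance
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,                     # assuming a MySQL-style endpoint
        "ToPort": 3306,
        "IpRanges": [{
            "CidrIp": "203.0.113.25/32",      # public IP of the US West EC2 instance
            "Description": "US West app instance",
        }],
    }],
)
```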

NEW QUESTION 13
What is a placement group in Amazon EC2?

  • A. It is a group of EC2 instances within a single Availability Zone.
  • B. It is the edge location of your web content.
  • C. It is the AWS region where you run the EC2 instance of your web content.
  • D. It is a group used to span multiple Availability Zones.

Answer: A

Explanation: A placement group is a logical grouping of instances within a single Availability Zone.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
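
A short sketch of creating a placement group and launching instances into it (Python with boto3; the region, AMI ID, instance type, and group name are placeholder assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# A cluster placement group keeps instances close together in one AZ
# for low-latency, high-throughput networking.
ec2.create_placement_group(GroupName="demo-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # placeholder AMI
    InstanceType="c5.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "demo-cluster-pg"},
)
```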

NEW QUESTION 14
What does Amazon Route 53 provide?

  • A. A global Content Delivery Network.
  • B. None of these.
  • C. A scalable Domain Name System.
  • D. An SSH endpoint for Amazon EC2.

Answer: C

NEW QUESTION 15
By default, what are ENIs that are automatically created and attached to instances using the EC2 console set to do when the attached instance terminates?

  • A. Remain as is
  • B. Terminate
  • C. Hibernate
  • D. Pause

Answer: B
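
The behaviour is controlled by the attachment's DeleteOnTermination flag, which the console sets to true for the automatically created ENI. A hedged sketch of flipping it so the interface survives termination (Python with boto3; the region and ENI ID are placeholder assumptions):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

eni = ec2.describe_network_interfaces(
    NetworkInterfaceIds=["eni-0123456789abcdef0"]    # placeholder ENI ID
)["NetworkInterfaces"][0]

# Set DeleteOnTermination=False to keep the ENI when the instance terminates.
ec2.modify_network_interface_attribute(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    Attachment={
        "AttachmentId": eni["Attachment"]["AttachmentId"],
        "DeleteOnTermination": False,
    },
)
```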

NEW QUESTION 16
A user is aware that a huge download is occurring on his instance. He has already set the Auto Scaling policy to increase the instance count when the network I/O increases beyond a certain limit. How can the user ensure that this temporary event does not result in scaling?

  • A. The network I/O is not affected during the data download
  • B. The policy cannot be set on the network I/O
  • C. There is no way the user can stop scaling as it is already configured
  • D. Suspend scaling

Answer: D

Explanation: The user may want to stop the automated scaling processes on the Auto Scaling groups either to perform manual operations or during emergency situations. To perform this, the user can suspend one or more scaling processes at any time. Once it is completed, the user can resume all the suspended processes.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
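
A minimal sketch of suspending and later resuming scaling on a group (Python with boto3; the region, group name, and the choice of processes to suspend are placeholder assumptions):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region
GROUP = "web-tier-asg"  # placeholder Auto Scaling group name

# Stop alarm-driven scale-out/scale-in while the large download runs.
autoscaling.suspend_processes(
    AutoScalingGroupName=GROUP,
    ScalingProcesses=["AlarmNotification"],
)

# ... temporary event finishes ...

autoscaling.resume_processes(
    AutoScalingGroupName=GROUP,
    ScalingProcesses=["AlarmNotification"],
)
```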

NEW QUESTION 17
How many types of block devices does Amazon EC2 support?

  • A. 2
  • B. 3
  • C. 4
  • D. 1

Answer: A

NEW QUESTION 18
You're running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup. Your backup application is only able to write to POSIX-compatible block-based storage. You have 140 TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?

  • A. Use Storage Gateway and configure it to use Gateway Cached volumes.
  • B. Configure your backup software to use S3 as the target for your data backups.
  • C. Configure your backup software to use Glacier as the target for your data backups.
  • D. Use Storage Gateway and configure it to use Gateway Stored volumes.

Answer: A

Explanation: Gateway-Cached Volume Architecture
Gateway-cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Gateway-cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.
Gateway-cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB. Each gateway configured for gateway-cached volumes can support up to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB).
In the gateway-cached volume solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3.
An overview diagram of the AWS Storage Gateway cached-volume deployment is provided in the AWS documentation.
After you've installed the AWS Storage Gateway software appliance (the virtual machine, or VM) on a host in your data center and activated it, you can use the AWS Management Console to provision storage volumes backed by Amazon S3. You can also provision storage volumes programmatically using the AWS Storage Gateway API or the AWS SDK libraries. You then mount these storage volumes to your on-premises application servers as iSCSI devices.
You also allocate disks on-premises for the VM. These on-premises disks serve the following purposes:
Disks for use by the gateway as cache storage - As your applications write data to the storage volumes in AWS, the gateway initially stores the data on the on-premises disks, referred to as cache storage, before uploading the data to Amazon S3. The cache storage acts as the on-premises durable store for data that is waiting to upload to Amazon S3 from the upload buffer.
The cache storage also lets the gateway store your application's recently accessed data on-premises for low-latency access. If your application requests data, the gateway first checks the cache storage for the data before checking Amazon S3.
You can use the following guidelines to determine the amount of disk space to allocate for cache storage. Generally, you should allocate at least 20 percent of your existing file store size as cache storage. Cache storage should also be larger than the upload buffer. This latter guideline helps ensure cache storage is large enough to persistently hold all data in the upload buffer that has not yet been uploaded to Amazon S3.
Disks for use by the gateway as the upload buffer - To prepare for upload to Amazon S3, your gateway also stores incoming data in a staging area, referred to as an upload buffer. Your gateway uploads this buffer data over an encrypted Secure Sockets Layer (SSL) connection to AWS, where it is stored encrypted in Amazon S3.
You can take incremental backups, called snapshots, of your storage volumes in Amazon S3. These point-in-time snapshots are also stored in Amazon S3 as Amazon EBS snapshots. When you take a new snapshot, only the data that has changed since your last snapshot is stored. You can initiate snapshots on a scheduled or one-time basis. When you delete a snapshot, only the data not needed for any other snapshots is removed.
You can restore an Amazon EBS snapshot to a gateway storage volume if you need to recover a backup of your data. Alternatively, for snapshots up to 16 TiB in size, you can use the snapshot as a starting point for a new Amazon EBS volume. You can then attach this new Amazon EBS volume to an Amazon EC2 instance.
All gateway-cached volume data and snapshot data is stored in Amazon S3 encrypted at rest using server-side encryption (SSE). However, you cannot access this data with the Amazon S3 API or other tools such as the Amazon S3 console.
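
The sizing guidance above is easy to sanity-check with a couple of lines of arithmetic (plain Python; the 140 TB figure comes from the question, while the upload-buffer size is an assumed example value):

```python
# Rule of thumb from the explanation: cache storage should be at least 20% of
# the existing file store, and should also exceed the upload buffer.
existing_file_store_tb = 140          # from the question
upload_buffer_tb = 4                  # assumed example value

cache_tb = max(0.20 * existing_file_store_tb, upload_buffer_tb)
print(f"Allocate at least ~{cache_tb:.0f} TB of local disk for cache storage")
```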

NEW QUESTION 19
You have multiple VPN connections and want to provide secure communication between sites using the AWS VPN CloudHub. Which statement is the most accurate in describing what you must do to set this up correctly?

  • A. Create a virtual private gateway with multiple customer gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs).
  • B. Create a virtual private gateway with multiple customer gateways, each with a unique set of keys.
  • C. Create a virtual public gateway with multiple customer gateways, each with a unique private subnet.
  • D. Create a virtual private gateway with multiple customer gateways, each with unique subnet IDs.

Answer: A

Explanation: If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing Internet connections who'd like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
To use the AWS VPN CloudHub, you must create a virtual private gateway with multiple customer gateways, each with unique Border Gateway Protocol (BGP) Autonomous System Numbers (ASNs). Customer gateways advertise the appropriate routes (BGP prefixes) over their VPN connections. These routing advertisements are received and re-advertised to each BGP peer, enabling each site to send data to and receive data from the other sites. The routes for each spoke must have unique ASNs and the sites must not have overlapping IP ranges. Each site can also send and receive data from the VPC as if they were using a standard VPN connection.
Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPN_CloudHub.html
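
A hedged sketch of the CloudHub building blocks (Python with boto3; the region, public IPs, and ASNs are placeholder assumptions): one virtual private gateway as the hub, one customer gateway per branch office, each with its own BGP ASN, and a VPN connection tying each branch to the hub.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# One hub: the virtual private gateway.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]

# One spoke per branch office, each with a unique BGP ASN (placeholder values).
branches = [
    {"PublicIp": "198.51.100.10", "BgpAsn": 65001},
    {"PublicIp": "198.51.100.20", "BgpAsn": 65002},
]

for branch in branches:
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp=branch["PublicIp"],
        BgpAsn=branch["BgpAsn"],
    )["CustomerGateway"]["CustomerGatewayId"]

    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw,
        VpnGatewayId=vgw,
    )
```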

100% Valid and Newest Version AWS-Solution-Architect-Associate Questions & Answers shared by 2passeasy, Get Full Dumps HERE: https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/ (New 672 Q&As)