DBS-C01 Exam - AWS Certified Database - Specialty


Ucertify DBS-C01 questions are updated and all DBS-C01 answers are verified by experts. Once you have fully prepared with our DBS-C01 exam prep kit, you will be ready for the real DBS-C01 exam.

Free demo questions for the Amazon Web Services DBS-C01 exam appear below.

NEW QUESTION 1
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup.
The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company.
Which solution will meet these requirements with minimal effort?

  • A. Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
  • B. Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
  • C. Create an RDS event subscription. Have the tracking systems subscribe to specific RDS event system notifications.
  • D. Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

Answer: C
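
For reference, a minimal boto3 sketch of the event-subscription approach in answer C; the subscription name, SNS topic ARN, and instance identifiers are hypothetical:

    import boto3

    rds = boto3.client("rds")

    # Subscribe an SNS topic to the RDS events the company cares about; the
    # tracking systems then subscribe to that topic. Shutdown events arrive
    # under the "availability" category.
    rds.create_event_subscription(
        SubscriptionName="db-lifecycle-tracking",
        SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-events",
        SourceType="db-instance",
        EventCategories=["creation", "deletion", "backup", "availability"],
        SourceIds=["database-1", "database-2"],
        Enabled=True,
    )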

NEW QUESTION 2
A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple-choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.
Which AWS services should the Database Specialist consider? (Choose two.)

  • A. Amazon DynamoDB
  • B. Amazon Redshift
  • C. Amazon Neptune
  • D. Amazon Elasticsearch Service
  • E. Amazon ElastiCache

Answer: AE
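
As a sketch of why DynamoDB suits this write pattern, the snippet below buffers a burst of survey entries with boto3's batch writer; the table name and item attributes are hypothetical:

    import boto3

    # batch_writer buffers and retries PutItem calls behind the scenes,
    # which suits short, intense write bursts during the broadcast window.
    table = boto3.resource("dynamodb").Table("survey-entries")

    with table.batch_writer() as batch:
        batch.put_item(Item={
            "user_id": "u-1001",                   # hypothetical partition key
            "entry_ts": "2024-01-15T20:00:00Z",
            "answers": {"q1": "B", "q2": "D"},
        })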

NEW QUESTION 3
A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

  • A. Set the TCP keepalive parameters low
  • B. Call the AWS CLI failover-db-cluster command
  • C. Enable Enhanced Monitoring on the DB cluster
  • D. Start a database activity stream on the DB cluster

Answer: A
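
Low TCP keepalive values are the documented Aurora PostgreSQL fast-failover practice: the client notices a dead primary within seconds and reconnects to the new writer. A psycopg2 sketch with a hypothetical cluster endpoint and credentials:

    import psycopg2

    conn = psycopg2.connect(
        host="mycluster.cluster-XXXX.us-east-1.rds.amazonaws.com",
        dbname="app", user="app_user", password="...",
        keepalives=1,
        keepalives_idle=2,      # seconds of idle before the first probe
        keepalives_interval=2,  # seconds between probes
        keepalives_count=2,     # failed probes before the socket is dropped
    )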

NEW QUESTION 4
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.
Which solution addresses these requirements?

  • A. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
  • B. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
  • C. Set the rds.force_ssl=0 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
  • D. Set the rds.force_ssl=1 parameter in the DB parameter group. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.

Answer: D
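
A minimal client-side sketch of the verify-full connection from answer D; the endpoint, credentials, and bundle path are hypothetical:

    import psycopg2

    # sslmode=verify-full both encrypts the session and validates the server
    # certificate and hostname against the downloaded RDS CA bundle, while
    # rds.force_ssl=1 rejects any non-SSL connection server-side.
    conn = psycopg2.connect(
        host="aurora-pg.cluster-XXXX.us-east-1.rds.amazonaws.com",
        dbname="finance", user="app_user", password="...",
        sslmode="verify-full",
        sslrootcert="/opt/certs/global-bundle.pem",
    )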

NEW QUESTION 5
A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions.
This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location.
Which set of actions will meet these requirements?

  • A. Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
  • B. Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
  • C. Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
  • D. Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

Answer: B
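
A snapshot copy goes stale immediately, so the cross-Region read replica in answer B is what keeps Singapore in sync. A boto3 sketch, with hypothetical identifiers; the call runs in the destination Region and references the source by ARN:

    import boto3

    rds = boto3.client("rds", region_name="ap-southeast-1")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="sales-replica-singapore",
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-west-2:123456789012:db:sales-primary"
        ),
        DBInstanceClass="db.r5.large",
    )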

NEW QUESTION 6
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times.
What could be causing these slow response times?

  • A. New volumes created from snapshots load lazily in the background
  • B. Long-running statements on the master
  • C. Insufficient resources on the master
  • D. Overload of a single replication thread by excessive writes on the master

Answer: A

NEW QUESTION 7
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality. This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows.
The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal.
How can the Database Specialist accomplish this?

  • A. Quickly rewind the DB cluster to a point in time before the release using Backtrack.
  • B. Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
  • C. Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
  • D. Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.

Answer: D
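
For reference, a rough boto3 sketch of the clone-and-rewind flow described in answer D; the cluster names and timestamp are hypothetical, and the exact backtrack behavior of clones should be confirmed against current AWS documentation:

    import boto3
    from datetime import datetime, timezone

    rds = boto3.client("rds")

    # 1. Create a copy-on-write clone of the production cluster.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="prod-cluster-clone",
        SourceDBClusterIdentifier="prod-cluster",
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )

    # 2. Rewind the clone to just before the faulty release; the deleted rows
    #    are then copied back to the original cluster with ordinary SQL.
    rds.backtrack_db_cluster(
        DBClusterIdentifier="prod-cluster-clone",
        BacktrackTo=datetime(2024, 1, 15, 9, 0, tzinfo=timezone.utc),
    )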

NEW QUESTION 8
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?

  • A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
  • B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
  • C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
  • D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Answer: C
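
A sketch of the Spectrum half of answer C: registering an external schema so historical data stays on S3. The endpoint, database, and role ARN are hypothetical; Concurrency Scaling itself is switched on per WLM queue in the cluster's parameter group rather than in SQL:

    import psycopg2

    conn = psycopg2.connect(
        host="dw.XXXX.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="admin", password="...",
    )
    conn.autocommit = True

    # Map a Glue Data Catalog database so 15 years of history is queryable
    # from S3 without occupying local cluster storage.
    with conn.cursor() as cur:
        cur.execute("""
            CREATE EXTERNAL SCHEMA spectrum_history
            FROM DATA CATALOG DATABASE 'sales_history'
            IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
            CREATE EXTERNAL DATABASE IF NOT EXISTS;
        """)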

NEW QUESTION 9
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

  • A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
  • B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
  • C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
  • D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

Answer: B
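
Because a single Sorted Set lives on one shard, sharding does not spread its writes; scaling up the node type does. A boto3 sketch with a hypothetical replication group ID:

    import boto3

    ec = boto3.client("elasticache")

    # With cluster mode disabled, one primary handles all writes, so the
    # group is moved to a larger node type ahead of the event.
    ec.modify_replication_group(
        ReplicationGroupId="leaderboard",
        CacheNodeType="cache.r5.2xlarge",
        ApplyImmediately=True,
    )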

NEW QUESTION 10
A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?

  • A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
  • B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
  • C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
  • D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Answer: C
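
A minimal boto3 sketch of the TTL approach in answer C; the table and attribute names are hypothetical:

    import time

    import boto3

    ddb = boto3.client("dynamodb")

    # DynamoDB deletes expired items itself at no extra cost.
    ddb.update_time_to_live(
        TableName="transactions",
        TimeToLiveSpecification={
            "Enabled": True,
            "AttributeName": "expires_at",
        },
    )

    # Writers then stamp each item with an epoch-seconds expiry two days out:
    expires_at = int(time.time()) + 2 * 86400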

NEW QUESTION 11
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.
How should the Database Specialist apply the parameter group change for the DB instance?

  • A. Select the option to apply the change immediately
  • B. Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
  • C. Apply the change manually by rebooting the DB instance during the approved maintenance window
  • D. Reboot the secondary Multi-AZ DB instance

Answer: C
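
Static parameters only take effect after a reboot, so the change sits in "pending-reboot" until the instance is rebooted inside the approved window. A boto3 sketch with hypothetical names and value:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_parameter_group(
        DBParameterGroupName="prod-sqlserver-params",
        Parameters=[{
            "ParameterName": "user connections",
            "ParameterValue": "300",
            "ApplyMethod": "pending-reboot",   # required for static parameters
        }],
    )

    # Later, during the approved maintenance window:
    rds.reboot_db_instance(DBInstanceIdentifier="prod-sqlserver")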

NEW QUESTION 12
An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.
The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?

  • A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
  • B. Provision a clone of the existing DB cluster for the new Application team.
  • C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
  • D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Answer: D
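
A sketch of the Aurora replica auto scaling policy from answer D, using Application Auto Scaling; the cluster name, capacity limits, and CPU target are hypothetical:

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the cluster's reader count as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:reporting-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=5,
    )

    # Track average reader CPU; replicas are added or removed automatically
    # as the dashboard's read load fluctuates, leaving the writer untouched.
    aas.put_scaling_policy(
        PolicyName="reader-cpu-target",
        ServiceNamespace="rds",
        ResourceId="cluster:reporting-aurora",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
            },
        },
    )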

NEW QUESTION 13
A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

  • A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
  • B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
  • C. Edit and enable Aurora DB cluster cache management in parameter groups.
  • D. Set TCP keepalive parameters to a high value.
  • E. Set JDBC connection string timeout variables to a low value.
  • F. Set Java DNS caching timeouts to a high value.

Answer: ACE

NEW QUESTION 14
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?

  • A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
  • B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
  • C. Enable a read replica and direct read traffic to it when Amazon RDS is down
  • D. Enable an Amazon RDS for MySQL Multi-AZ configuration

Answer: D
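
In a Multi-AZ deployment, OS patches are applied to the standby first, followed by an automatic failover, which avoids a patching outage. A one-call boto3 sketch with a hypothetical instance name:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="prod-mysql",
        MultiAZ=True,
        ApplyImmediately=False,   # convert during the next maintenance window
    )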

NEW QUESTION 15
An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:
  • Update scores in real time whenever a player is playing the game.
  • Retrieve a player’s score details for a specific game session.
A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.
Which choice of keys is recommended for the DynamoDB table?

  • A. Create a global secondary index with game_id as the partition key
  • B. Create a global secondary index with user_id as the partition key
  • C. Create a composite primary key with game_id as the partition key and user_id as the sort key
  • D. Create a composite primary key with user_id as the partition key and game_id as the sort key

Answer: D
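
A sketch of the composite key from answer D; the table name and attribute types are hypothetical. user_id as the partition key groups all of a player's sessions, and game_id as the sort key pinpoints one session:

    import boto3

    ddb = boto3.client("dynamodb")

    ddb.create_table(
        TableName="player-scores",
        AttributeDefinitions=[
            {"AttributeName": "user_id", "AttributeType": "S"},
            {"AttributeName": "game_id", "AttributeType": "S"},
        ],
        KeySchema=[
            {"AttributeName": "user_id", "KeyType": "HASH"},
            {"AttributeName": "game_id", "KeyType": "RANGE"},
        ],
        BillingMode="PAY_PER_REQUEST",
    )

    # Both use cases become single-item operations, e.g.:
    # get_item(Key={"user_id": ..., "game_id": ...})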

NEW QUESTION 16
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)

  • A. Set DeletionProtection to True
  • B. Set MultiAZ to True
  • C. Set TerminationProtection to True
  • D. Set DeleteAutomatedBackups to False
  • E. Set DeletionPolicy to Delete
  • F. Set DeletionPolicy to Retain

Answer: ADF
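
The three settings, shown as a minimal template fragment built as a Python dict; the resource name and sizing properties are hypothetical:

    import json

    template = {
        "Resources": {
            "AppDatabase": {
                "Type": "AWS::RDS::DBInstance",
                "DeletionPolicy": "Retain",   # keep the instance if the stack is deleted
                "Properties": {
                    "Engine": "mysql",
                    "DBInstanceClass": "db.t3.medium",
                    "AllocatedStorage": "20",
                    "DeletionProtection": True,       # block DeleteDBInstance calls
                    "DeleteAutomatedBackups": False,  # retain backups after deletion
                },
            },
        },
    }

    print(json.dumps(template, indent=2))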

NEW QUESTION 17
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload.
The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise.
How can a Database Specialist address these requirements with minimal user involvement?

  • A. Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
  • B. Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
  • C. Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
  • D. Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

Answer: D

NEW QUESTION 18
A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.
How can the Database Specialist meet these requirements?

  • A. Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
  • B. Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.
  • C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
  • D. Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

Answer: C
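
Table-level privileges live in the database engine itself, so answer C is plain SQL. A psycopg2 sketch; the endpoint, table, and role names are hypothetical:

    import psycopg2

    conn = psycopg2.connect(
        host="aurora-pg.cluster-XXXX.us-east-1.rds.amazonaws.com",
        dbname="app", user="dba", password="...",
    )

    # Lock down the sensitive table, then grant back only what is needed.
    with conn.cursor() as cur:
        cur.execute("REVOKE ALL ON customer_pii FROM PUBLIC;")
        cur.execute("GRANT SELECT ON customer_pii TO reporting_role;")
    conn.commit()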

NEW QUESTION 19
A Database Specialist is designing a new database infrastructure for a ride-hailing application. The application data includes a ride-tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?

  • A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
  • B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
  • C. Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache
  • D. Use Amazon DynamoDB as the database and use Amazon API Gateway

Answer: B
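
Only DAX delivers microsecond read latency in front of DynamoDB. A sketch using the amazon-dax-client package; the cluster endpoint and table schema are hypothetical, and apart from client construction the DynamoDB code is unchanged:

    from amazondax import AmazonDaxClient

    dax = AmazonDaxClient.resource(
        endpoint_url="dax://rides-dax.XXXX.dax-clusters.us-east-1.amazonaws.com",
    )
    table = dax.Table("ride-tracking")

    # Cache-backed reads return in microseconds on a cache hit.
    item = table.get_item(Key={"ride_id": "r-42"})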

NEW QUESTION 20
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

  • A. The restored DB instance does not have Enhanced Monitoring enabled
  • B. The production DB instance is using a custom parameter group
  • C. The restored DB instance is using the default security group
  • D. The production DB instance is using a custom option group

Answer: C
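
A restored instance comes up with the default VPC security group, which typically allows no inbound traffic. A boto3 sketch of reattaching the application's group; both identifiers are hypothetical:

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="restored-db",
        VpcSecurityGroupIds=["sg-0123456789abcdef0"],
        ApplyImmediately=True,
    )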

NEW QUESTION 21
A company wants to automate the creation of secure test databases with random credentials to be stored safely for later use. The credentials should have sufficient information about each test database to initiate a connection and perform automated credential rotations. The credentials should not be logged or stored anywhere in an unencrypted form.
Which steps should a Database Specialist take to meet these requirements using an AWS CloudFormation template?

  • A. Create the database with the MasterUserName and MasterUserPassword properties set to the default values. Then, create the secret with the user name and password set to the same default values. Add a Secret Target Attachment resource with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database. Finally, update the secret’s password value with a randomly generated string set by the GenerateSecretString property.
  • B. Add a Mapping property from the database Amazon Resource Name (ARN) to the secret ARN. Then, create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add the database with the MasterUserName and MasterUserPassword properties set to the user name of the secret.
  • C. Add a resource of type AWS::SecretsManager::Secret and specify the GenerateSecretString property. Then, define the database user name in the SecureStringTemplate template. Create a resource for the database and reference the secret string for the MasterUserName and MasterUserPassword properties. Then, add a resource of type AWS::SecretsManager::SecretTargetAttachment with the SecretId and TargetId properties set to the Amazon Resource Names (ARNs) of the secret and the database.
  • D. Create the secret with a chosen user name and a randomly generated password set by the GenerateSecretString property. Add a SecretTargetAttachment resource with the SecretId property set to the Amazon Resource Name (ARN) of the secret and the TargetId property set to a parameter value matching the desired database ARN. Then, create a database with the MasterUserName and MasterUserPassword properties set to the previously created values in the secret.

Answer: C
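
A template fragment mirroring answer C, built as a Python dict: a generated secret, a database that resolves it through dynamic references, and a target attachment for rotation. Resource names and engine settings are hypothetical; note the actual CloudFormation property is SecretStringTemplate:

    import json

    template = {
        "Resources": {
            "DBSecret": {
                "Type": "AWS::SecretsManager::Secret",
                "Properties": {
                    "GenerateSecretString": {
                        "SecretStringTemplate": '{"username": "dbadmin"}',
                        "GenerateStringKey": "password",
                        "PasswordLength": 32,
                        "ExcludeCharacters": '"@/\\',
                    },
                },
            },
            "TestDatabase": {
                "Type": "AWS::RDS::DBInstance",
                "Properties": {
                    "Engine": "postgres",
                    "DBInstanceClass": "db.t3.micro",
                    "AllocatedStorage": "20",
                    # Dynamic references keep credentials out of logs and templates.
                    "MasterUsername": {"Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:username}}"},
                    "MasterUserPassword": {"Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"},
                },
            },
            "SecretAttachment": {
                "Type": "AWS::SecretsManager::SecretTargetAttachment",
                "Properties": {
                    "SecretId": {"Ref": "DBSecret"},
                    "TargetId": {"Ref": "TestDatabase"},
                    "TargetType": "AWS::RDS::DBInstance",
                },
            },
        },
    }

    print(json.dumps(template, indent=2))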

NEW QUESTION 22
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As part of its annual disaster recovery testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity.
What should the company do to achieve this in the shortest amount of time?

  • A. Use a blue-green deployment with a complete application-level failover test
  • B. Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
  • C. Use RDS fault injection queries to simulate the primary node failure
  • D. Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone

Answer: B
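
Fault injection queries are an Aurora feature, not available on RDS for Oracle; a reboot with failover simulates the AZ failure. A one-call boto3 sketch with a hypothetical instance name:

    import boto3

    rds = boto3.client("rds")

    # ForceFailover reboots through a Multi-AZ failover, letting the team
    # observe application behavior without any code changes.
    rds.reboot_db_instance(
        DBInstanceIdentifier="prod-oracle",
        ForceFailover=True,
    )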

NEW QUESTION 23
......

P.S. Certleader now offers pass-guaranteed DBS-C01 dumps! All DBS-C01 exam questions have been updated with verified answers: https://www.certleader.com/DBS-C01-dumps.html (85 New Questions)