Q1. - (Topic 1)
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You have a virtual machine (VM) in Microsoft Azure, which has a 2 terabyte (TB) database. Microsoft SQL Server backups are performed by using Backup to URL.
You need to provision the storage account for the backups while minimizing costs. Which storage option should you use?
A. Premium P10 disk storage
B. Premium P20 disk storage
C. Premium P30 disk storage
D. Standard locally redundant disk storage
E. Standard geo-redundant disk storage
F. Standard zone redundant blob storage
G. Standard locally redundant blob storage
H. Standard geo-redundant blob storage
Answer: G
Explanation:
A URL specifies a Uniform Resource Identifier (URI) to a unique backup file. The URL provides the location and name of the SQL Server backup file and must point to an actual blob, not just a container. If the blob does not exist, it is created. If an existing blob is specified, BACKUP fails unless the WITH FORMAT option is specified to overwrite the existing backup file in the blob.
Backup to URL writes directly to Azure Blob storage, so a blob storage account is required rather than disk storage. Locally redundant storage (LRS) makes multiple synchronous copies of your data within a single datacenter and is the lowest-cost redundancy option.
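The following is a minimal T-SQL sketch of Backup to URL; the storage account, container, database name, and SAS token are placeholders, not values from the question.
-- Create a credential for the container (URL and SAS token below are assumed placeholders).
CREATE CREDENTIAL [https://examplestorage.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';
-- Back up directly to a blob in that container; WITH FORMAT overwrites an existing blob of the same name.
BACKUP DATABASE SalesDb
TO URL = 'https://examplestorage.blob.core.windows.net/backups/SalesDb.bak'
WITH FORMAT, COMPRESSION;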
Q2. DRAG DROP - (Topic 1)
You are building a new Always On Availability Group in Microsoft Azure. The corporate domain controllers (DCs) are attached to a virtual network named ProductionNetwork. The DCs are part of an availability set named ProductionServers1.
You create the first node of the availability group and add it to an availability set named ProductionServers2. The availability group node is a virtual machine (VM) that runs Microsoft SQL Server. You attach the node to ProductionNetwork.
The servers in the availability group must be directly accessible only by other company VMs in Azure.
You need to configure the second SQL Server VM for the availability group.
How should you configure the VM? To answer, drag the appropriate configuration settings to the correct target locations. Each configuration setting may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Box 1: ProductionNetwork
The virtual network is named ProductionNetwork.
Box 2: None / Not Assigned
As the servers in the availability group must be directly accessible only by other company VMs in Azure, there should be no Public IP address.
Box 3: ProductionServers2
You create the first node of the availability group and add it to an availability set named ProductionServers2. The availability group node is a virtual machine (VM) that runs Microsoft SQL Server.
Q3. - (Topic 2)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
You manage a Microsoft SQL Server environment with several databases.
You need to ensure that queries use statistical data and do not initialize values for local variables.
Solution: You enable the LEGACY_CARDINALITY_ESTIMATION option for the databases. Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
LEGACY_CARDINALITY_ESTIMATION = { ON | OFF | PRIMARY }
Enables you to set the query optimizer cardinality estimation model to the SQL Server 2012 and earlier version, independent of the compatibility level of the database. This is equivalent to trace flag 9481. Changing the cardinality estimation model does not affect how values in local variables are handled, so the solution does not meet the goal.
References:https://msdn.microsoft.com/en-us/library/mt629158.aspx
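For reference, a minimal T-SQL sketch of how this option is enabled per database (the database name is assumed):
-- Run in the context of the target database; equivalent to trace flag 9481 for that database.
USE SalesDb;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;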
Q4. - (Topic 2)
You plan to deploy 20 Microsoft Azure SQL Database instances to an elastic pool in Azure to support a batch processing application.
Two of the databases in the pool reach their peak workload threshold at the same time every day. This leads to inconsistent performance for batch completion.
You need to ensure that all batches perform consistently. What should you do?
A. Create an In-Memory table.
B. Increase the storage limit in the pool.
C. Implement a readable secondary database.
D. Increase the total number of elastic Database Transaction Units (eDTUs) in the pool.
Answer: D
Explanation:
In SQL Database, the relative measure of a database's ability to handle resource demands is expressed in Database Transaction Units (DTUs) for single databases and elastic DTUs (eDTUs) for databases in an elastic pool.
A pool is given a set number of eDTUs, for a set price. Within the pool, individual databases are given the flexibility to auto-scale within set parameters. Under heavy load, a database can consume more eDTUs to meet demand.
Additional eDTUs can be added to an existing pool with no database downtime.
References:https://docs.microsoft.com/en-us/azure/sql-database/sql-database-elastic-pool
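As a hedged illustration, the overlapping peaks can be confirmed from T-SQL before resizing the pool; sys.dm_db_resource_stats returns roughly the last hour of usage in 15-second intervals.
-- Run inside each database in the pool to see when its resource use peaks.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;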
Q5. HOTSPOT - (Topic 1)
You plan to migrate a Microsoft SQL Server workload from an on-premises server to a Microsoft Azure virtual machine (VM). The current server contains 4 cores with an average CPU workload of 6 percent and a peak workload of 10 percent when using 2.4 GHz processors.
You gather the following metrics:
You need to design a SQL Server VM to support the migration while minimizing costs. For each setting, which value should you use? To answer, select the appropriate storage
option from each list in the answer area.
NOTE: Each correct selection is worth one point.
Answer:
Explanation:
Data drive: Premium Storage
Transaction log drive: Standard Storage
TempDB drive: Premium Storage
Note: A standard disk is expected to handle 500 IOPS or 60 MB/s. A P10 Premium disk is expected to handle 500 IOPS. A P20 Premium disk is expected to handle 2300 IOPS. A P30 Premium disk is expected to handle 5000 IOPS.
VM size: A3
Max data disk throughput is 8 x 500 IOPS.
References:https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes
Topic 2, Manage databases and instances
13. - (Topic 2)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
You manage a Microsoft SQL Server environment with several databases.
You need to ensure that queries use statistical data and do not initialize values for local variables.
Solution: You enable the PARAMETER_SNIFFING option for the databases. Does the solution meet the goal?
A. Yes
B. No
Q6. - (Topic 1)
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You have deployed several GS-series virtual machines (VMs) in Microsoft Azure. You plan to deploy Microsoft SQL Server in a development environment. Each VM has a dedicated disk for backups.
You need to back up a database to the local disk on a VM. The backup must be replicated to another region.
Which storage option should you use?
A. Premium P10 disk storage
B. Premium P20 disk storage
C. Premium P30 disk storage
D. Standard locally redundant disk storage
E. Standard geo-redundant disk storage
F. Standard zone redundant blob storage
G. Standard locally redundant blob storage
H. Standard geo-redundant blob storage
Answer: E
Explanation:
A backup written to the VM's dedicated backup disk is replicated to another region when that disk resides on standard geo-redundant storage.
Note: SQL Database automatically creates database backups and uses Azure read-access geo-redundant storage (RA-GRS) to provide geo-redundancy. These backups are created automatically and at no additional charge. You don't need to do anything to make them happen. Database backups are an essential part of any business continuity and disaster recovery strategy because they protect your data from accidental corruption or deletion.
References:https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automated-backups
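A minimal T-SQL sketch of backing up to the VM's dedicated backup disk (the drive letter, folder, and database name are assumptions):
-- The disk backing F:\ would need to reside on standard geo-redundant storage
-- for the backup file to be replicated to another region.
BACKUP DATABASE DevDb
TO DISK = 'F:\Backups\DevDb.bak'
WITH COMPRESSION;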
Q7. - (Topic 1)
Note: This question is part of a series of questions that present the same scenario. Each
question in the series contains a unique solution. Determine whether the solution meets stated goals.
Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
You need to create the templates.
Solution: You use Visual Studio to create a JSON template that defines the deployment and configuration settings for the SQL Server environment.
Does the solution meet the goal?
A. Yes
B. No
Answer: A
Explanation:
An Azure Resource Manager template consists of JSON, not XAML, together with expressions that you can use to construct values for your deployment.
A good JSON editor can simplify the task of creating templates.
Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
References:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
Q8. - (Topic 1)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets stated goals.
Your company plans to use Microsoft Azure Resource Manager templates for all future deployments of SQL Server on Azure virtual machines.
You need to create the templates.
Solution: You use Visual Studio to create a XAML template that defines the deployment and configuration settings for the SQL Server environment.
Does the solution meet the goal?
A. Yes
B. No
Answer: B
Explanation:
An Azure Resource Manager template consists of JSON, not XAML, together with expressions that you can use to construct values for your deployment.
A good JSON editor can simplify the task of creating templates.
Note: In its simplest structure, an Azure Resource Manager template contains the following elements:
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": { },
  "variables": { },
  "resources": [ ],
  "outputs": { }
}
References:https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates
Q10. - (Topic 1)
Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You have deployed a GS-series virtual machine (VM) in Microsoft Azure. You plan to deploy Microsoft SQL Server.
You need to deploy a 30 megabyte (MB) database that requires a guaranteed 100 IOPS while minimizing costs.
Which storage option should you use?
A. Premium P10 disk storage
B. Premium P20 disk storage
C. Premium P30 disk storage
D. Standard locally redundant disk storage
E. Standard geo-redundant disk storage
F. Standard zone redundant blob storage
G. Standard locally redundant blob storage
H. Standard geo-redundant blob storage
Answer: A
Explanation:
Premium Storage Disk Limits
When you provision a disk against a Premium Storage account, how many input/output operations per second (IOPS) and how much throughput (bandwidth) it can get depend on the size of the disk. Currently, there are three types of Premium Storage disks: P10, P20, and P30. Each one has specific limits for IOPS and throughput:
P10: 128 GB, 500 IOPS, 100 MB/s
P20: 512 GB, 2300 IOPS, 150 MB/s
P30: 1024 GB, 5000 IOPS, 200 MB/s
References:https://docs.microsoft.com/en-us/azure/storage/storage-premium-storage