Q1. - (Topic 9)
You are redesigning a SQL Server Analysis Services (SSAS) database that contains a cube named Sales. Before the initial deployment of the cube, the partition design was optimized for processing time. The cube currently includes five partitions named FactSales1 through FactSales5. Each partition contains between 1 million and 2 million rows.
The FactSales5 partition contains the current year's information. The other partitions contain information from prior years; one year per partition. Currently, no aggregations are defined on the partitions.
You remove fact rows that are more than five years old from the fact table in the data source and configure query logging on the SSAS server.
Several queries and reports are running very slowly.
You need to optimize the partition structure and design aggregations to improve query performance and minimize administrative overhead.
What should you do? (More than one answer choice may achieve the goal. Select the BEST answer.)
A. Use the Usage-Based Optimization Wizard to create aggregations for the current partitions.
B. Use the Aggregation Design Wizard to create aggregations for the current partitions.
C. Combine all the partitions into a single partition. Use the Usage-Based Optimization Wizard to create aggregations.
D. Combine all the partitions into a single partition. Use the Aggregation Design Wizard to create aggregations.
Answer: A
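The Usage-Based Optimization Wizard builds its aggregation candidates from the query log configured on the SSAS server, so the log should already contain representative slow queries before the wizard is run. A minimal sketch of inspecting that log, assuming the default table name OlapQueryLog and the standard log columns (the actual table is determined by the QueryLogConnectionString and QueryLogTableName server properties):

-- Inspect the SSAS query log before running the Usage-Based
-- Optimization Wizard. The table name assumes the default value of
-- the QueryLogTableName server property.
SELECT TOP (20)
       MSOLAP_Database,    -- SSAS database that was queried
       MSOLAP_ObjectPath,  -- cube / measure group / partition path
       MSOLAP_User,        -- user who issued the query
       Dataset,            -- attribute vector describing the requested subcube
       StartTime,
       Duration            -- query duration; the slowest subcubes are good candidates
FROM dbo.OlapQueryLog
ORDER BY Duration DESC;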
Q2. - (Topic 2)
You need to configure the partition storage settings to support the reporting requirements. Which partition storage setting should you use?
A. Low-latency MOLAP
B. In-Memory
C. High-latency MOLAP
D. Regular
E. DirectQuery
F. LazyAggregations
Answer: A
Q3. - (Topic 9)
You are designing an extract, transform, load (ETL) process that loads the prior day's sales data from a SQL Server database into a large fact table in a data warehouse each day.
The ETL process for the fact table must meet the following requirements:
. Load new data in the shortest possible time.
. Remove data that is more than 36 months old.
. Ensure that data loads correctly.
. Minimize record locking.
. Minimize impact on the transaction log.
You need to design an ETL process that meets the requirements.
What should you do? (More than one answer choice may achieve the goal. Select the BEST answer.)
A. Partition the destination fact table by date. Insert new data directly into the fact table and delete old data directly from the fact table.
B. Partition the destination fact table by date. Use partition switching and staging tables both to remove old data and to load new data.
C. Partition the destination fact table by customer. Use partition switching both to remove old data and to load new data into each partition.
D. Partition the destination fact table by date. Use partition switching and a staging table to remove old data. Insert new data directly into the fact table.
Answer: B
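For context on answer B, a minimal T-SQL sketch of the partition-switching pattern follows. All object names are hypothetical; it assumes the fact table is partitioned by a date key and that the staging tables live on the same filegroup with an identical schema and indexes. SWITCH is a metadata-only operation, so it minimizes record locking and transaction log impact compared with direct DELETE and INSERT statements.

-- Hypothetical objects: dbo.FactSales (partitioned by date key),
-- dbo.FactSales_Old and dbo.FactSales_New (matching staging tables).

-- Remove data older than 36 months: switch the oldest partition out,
-- then truncate the staging table (minimally logged).
ALTER TABLE dbo.FactSales SWITCH PARTITION 1 TO dbo.FactSales_Old;
TRUNCATE TABLE dbo.FactSales_Old;

-- Load the prior day's data: bulk load dbo.FactSales_New first, then
-- switch it into the matching empty partition of the fact table.
ALTER TABLE dbo.FactSales_New SWITCH TO dbo.FactSales PARTITION 36;  -- partition number is illustrative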
Q4. - (Topic 9)
You are designing a reporting solution that uses SQL Server Reporting Services (SSRS) in SharePoint integrated mode.
The reporting solution must meet the following requirements:
. Allow report writers to reuse content between different reports.
. Allow report writers to modify reusable content in SharePoint.
. Retain version history for report content.
You need to choose a reporting method that meets the requirements.
What should you use? (More than one answer choice may achieve the goal. Select the BEST answer.)
A. drillthrough reports
B. linked reports
C. subreports
D. report parts
Answer: D
Q5. - (Topic 3)
You need to configure package execution logging to meet the requirements.
What should you do?
A. Configure logging in each ETL package to log the OnError, OnInformation, and Diagnostic events.
B. Set the SSIS catalog's Server-wide Default Logging Level property to Performance.
C. Set the SSIS catalog's Server-wide Default Logging Level property to Basic.
D. Set the SSIS catalog's Server-wide Default Logging Level property to Verbose.
E. Configure logging in each ETL package to log the OnError, OnPreExecute, and OnPostExecute events.
Answer: B
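Because the logging level is a property of the SSIS catalog rather than of individual packages, no per-package configuration is needed. A minimal sketch, assuming the SSISDB catalog.configure_catalog procedure and the SERVER_LOGGING_LEVEL catalog property (where 1 = Basic, 2 = Performance, 3 = Verbose):

USE SSISDB;

-- Set the server-wide default logging level to Performance.
EXEC catalog.configure_catalog
     @property_name  = N'SERVER_LOGGING_LEVEL',
     @property_value = 2;

-- Confirm the setting.
SELECT property_name, property_value
FROM catalog.catalog_properties
WHERE property_name = N'SERVER_LOGGING_LEVEL';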
Q6. - (Topic 5)
You need to implement the aggregation designs for the cube.
What should you do?
A. Use the CREATE CACHE statement.
B. Use the Aggregation Design Wizard.
C. Create relational indexes on the source tables.
D. Use the Usage-Based Optimization Wizard.
Answer: B
Q7. DRAG DROP - (Topic 8)
You need to recommend a solution to implement the data security requirements for CUBE1.
Which three actions should you recommend performing in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Answer:
Q8. - (Topic 5)
You need to identify changes in the financial database.
What should you do?
A. Add SQL Server log shipping to each table.
B. Add SQL Server mirroring to each table.
C. Perform a full extract of each table.
D. Enable change data capture on each table.
E. Create an AlwaysOn Availability Group that includes all the tables.
Answer: D
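As background for answer D, change data capture is enabled first at the database level and then for each table to be tracked. A minimal sketch with hypothetical database, schema, and table names:

USE FinancialDB;  -- hypothetical financial database

-- Enable change data capture for the database.
EXEC sys.sp_cdc_enable_db;

-- Enable change data capture on one table; repeat per tracked table.
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'GeneralLedger',  -- hypothetical table name
     @role_name     = NULL;              -- no gating role

-- The ETL process can then read changes through the generated
-- cdc.fn_cdc_get_all_changes_<capture_instance> function.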
Q9. DRAG DROP - (Topic 10)
You are developing a SQL Server Analysis Services (SSAS) multidimensional project that is configured to source data from a SQL Azure database. The cube is processed each night at midnight.
The largest partition in the cube takes 12 hours to process, and users are unable to access the cube until noon. The partition must be available for querying as soon as possible after processing commences.
You need to ensure that the partition is available for querying as soon as possible, without using source data to satisfy the query.
Which three actions should you perform in sequence? (To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.)
Answer:
Q10. - (Topic 4)
You need to develop a BISM that meets the business requirements for ad-hoc and daily operational analysis. You must minimize development effort.
Which development approach and mode should you use?
A. Develop a tabular project and configure the model with the DirectQuery mode setting on and the project query mode set to DirectQuery.
B. Develop a tabular project and configure the model with the DirectQuery mode setting on and the project query mode set to In-Memory with DirectQuery.
C. Develop a multidimensional project and configure the model with the DirectQuery mode setting off.
D. Develop a multidimensional project and configure the cube to use hybrid OLAP (HOLAP) storage mode.
Answer: C
Explanation:
After the upgrade, users must be able to perform the following tasks:
. Ad-hoc analysis of data in the SSAS databases by using the Microsoft Excel PivotTable client (which uses MDX).
. Daily operational analysis by executing a custom application that uses ADOMD.NET and existing Multidimensional Expressions (MDX) queries.
In addition, the solution must deploy a data model to allow the ad-hoc analysis of data; the data model must be cached and source data from an OData feed.
We cannot use DirectQuery mode, so C is the only answer that provides the required caching. When a model is in DirectQuery mode, it can only be queried by using DAX; you cannot use MDX to create queries. This means you cannot use the Excel PivotTable client, because Excel uses MDX.