You are given a cube you have never seen before and need to make it process faster – where do you start? Do we add indexes to the data warehouse, add faster disks, delete some aggregations, add more memory, or start messing with the “INI” files? I would say none of the above!
While there are lots of resources on the internet with tips for improving processing time, the best place to start is with a “baseline”. That is, we need to understand how long the Analysis Services database takes to process and, more importantly, where the time is being spent and why. Once we understand this we can focus in on a specific area and re-baseline to see improvements in that area.
The best way to baseline is to capture a profiler trace – we actually only need a single event: “Progress Report End”. I’ll hopefully get to post the procedure for using XMLA to automate a server-side trace, but for the moment let’s assume you have a trace file and want to visually analyse it.
This blog shows some visualisations and data generated from an Excel tool I wrote to help analyse the trace data. Feel free to use and abuse the Excel workbook, which is attached at the end of the blog.
Understanding Analysis Services Trace events and sub events
The chart below shows the possible “event subclass” messages that profiler will generate. These help tell the story of where time is being spent (and where we should look to optimise).

You can see how this corresponds to an actual trace file, which can look daunting at first.

Enter the Excel Tool for Parsing SSAS Trace Files
I have attached an Excel PowerPivot workbook I created to help analyse profiler traces for processing. The example below is based on a 100-million-row blown-up Adventure Works cube I created.
The table below shows that it took 575 seconds, or 0.16 of an hour, to do the ProcessFull.

The table below shows us that the product dimension took some 35 seconds. There is probably a lot of room for tuning there, but initially 35 seconds does not seem like a lot of time compared to the overall 575 seconds of processing time (we’ll find out later that the product dimension actually kills processing performance of the measure groups because of bitmap indexes).

The table below shows that the Internet Sales and Reseller Sales measure groups are where the time is being spent. If we want the cube to process faster, this is where we need to look first!

The two charts below are where it starts to get interesting. What we need to do is determine which events the processing time is being spent in, and then look to find out why.
a) Most of the time is spent in the “Read Data” event, but the duration for “ExecuteSQL” is really small. This means that the SQL Server DBEngine is returning the data really fast, but Analysis Services is struggling to convert it into the required format.
b) 46% of the time is spent building bitmap indexes for attributes, most of which are never going to be used to slice data (e.g. phone number or email address). For some of my customers bitmap indexes can creep up to over 80% of processing time – a sure sign that we need to optimise the dimension attributes.


The “Detailed Trace” sheet shows statistics for each event for the objects in the cube, so we can drill down to see the main offenders. From the below we can see that one partition spent 28 seconds on ReadData and a whopping 172 seconds on the bitmap indexes.
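If you prefer T-SQL to the workbook for this kind of drill-down, a quick query against the imported trace table gives the same picture. This is a minimal sketch, assuming the trace has been loaded into a table called dbo.Trace with the column names produced by fn_trace_gettable (the EventSubclass codes map to names such as ReadData, ExecuteSQL, BuildIndex and Aggregate; the workbook does that mapping for you):

```sql
-- Total time per event subclass for "Progress Report End" events.
-- Assumes the trace is loaded into dbo.Trace (e.g. via fn_trace_gettable).
-- EventClass 6 should be Progress Report End in an SSAS trace definition;
-- verify against your own trace. Duration is reported in milliseconds.
SELECT  t.EventSubclass,
        COUNT(*)        AS Events,
        SUM(t.Duration) AS TotalDurationMs,
        MAX(t.Duration) AS MaxDurationMs
FROM    dbo.Trace AS t
WHERE   t.EventClass = 6
GROUP BY t.EventSubclass
ORDER BY TotalDurationMs DESC;
```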

Comparing ReadData and ExecuteSQL
Where ReadData is significantly greater than ExecuteSQL you will always see a corresponding wait statistic of ASYNC_NETWORK_IO on the DBEngine. Essentially there is not much point in making the DBEngine query faster, as Analysis Services is too slow to consume the data.
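You can confirm this on the DBEngine side with a quick look at the wait statistics. A minimal sketch (note that sys.dm_os_wait_stats is cumulative since the last restart or manual clear, so snapshot it before and after processing and compare):

```sql
-- Check ASYNC_NETWORK_IO waits on the DBEngine feeding the cube.
-- Counters are cumulative since the last restart (or DBCC SQLPERF clear),
-- so take a snapshot before and after processing and diff the values.
SELECT  wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type = 'ASYNC_NETWORK_IO';
```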
Why would Analysis Services be too slow to consume the data from SQL Server? Well, the most common culprits are (IMO):
- Incorrect data types resulting in implicit conversion, which is very slow in Analysis Services. This is discussed by Henk and Dirk (smart guys!) in this blog: http://henkvandervalk.com/how-to-process-a-ssas-molap-cube-as-fast-as-possible-part-2
- A huge aggregation difference between the DBEngine fact data and the MOLAP cube data. You can try fixing this with a GROUP BY query in the partition source (see the sketch after this list), faster CPUs, more memory, or by considering a fact table in the data warehouse at a higher grain.
- Very high-grain keys on dimensions, or string keys, which are slower to map data to.
- In very rare cases Analysis Services cannot write data fast enough, so it has to slow down reads.
- Maybe the data warehouse is serving data from the buffer pool and is simply very quick.
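As an illustration of the GROUP BY idea above, the partition query can pre-aggregate the fact rows down to the grain the measure group actually needs, so Analysis Services has fewer rows to map. This is a hypothetical sketch; the table and column names are illustrative only, not from the real Adventure Works schema:

```sql
-- Hypothetical pre-aggregated partition query: collapse the fact rows
-- to the dimension-key grain before Analysis Services reads them.
SELECT  OrderDateKey,
        ProductKey,
        CustomerKey,
        SUM(SalesAmount)   AS SalesAmount,
        SUM(OrderQuantity) AS OrderQuantity
FROM    dbo.FactSales                                  -- illustrative fact table
WHERE   OrderDateKey BETWEEN 20110101 AND 20110131     -- the partition slice
GROUP BY OrderDateKey, ProductKey, CustomerKey;
```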
What if ExecuteSQL is really high
If ExecuteSQL is really high then we have a problem on the DBEngine side of the house (a quick way to investigate follows the list below):
a) Are we using proper physical tables for facts, or nasty views with joins? (the most common issue by far)
b) Do we need indexes?
c) Is the data warehouse table too wide?
d) Is the data warehouse table optimised for sequential reads? E.g. are we getting that magic 512 KB read-ahead IO, or something much smaller and with less throughput?
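One quick way to check is to copy the statement from the ExecuteSQL trace event’s TextData column and run it on its own in Management Studio with statistics switched on. A minimal sketch, with a hypothetical fact table standing in for the real partition query:

```sql
-- Run the partition's source query on its own to see where the DBEngine time goes.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT *                        -- paste the real partition query here
FROM   dbo.FactInternetSales;   -- hypothetical table, standing in for the real query

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- In the Messages tab, compare logical vs read-ahead reads and CPU vs elapsed time.
```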
What if BuildIndex is High
This is one of the most common issues we face and can be tough to solve. The basic problem is that MOLAP will create a bitmap index for every single attribute in a dimension by default. We need to optimise these by the following (a query for finding the worst offenders follows the list):
a) Don’t store too many attributes in the cube – use the data warehouse for data dumps!
b) Turn off bitmap indexes by setting the AttributeHierarchyOptimizedState property to NotOptimized
c) Turn off attribute hierarchies (AttributeHierarchyEnabled = False) for attributes that are not used for slicing
d) Ensure attributes have narrow keys (not strings or compound keys)
e) Don’t use bitmap indexes on attributes which are almost the same grain as the key.
f) Use attribute relationships as much as possible
g) Avoid large dimensions like the plague. Sure, we may have a lot of customers, but having a dimension at the transaction grain is usually a big no-no. This sort of model is not really suited to MOLAP.
h) If you do have huge dimensions and lots of attributes, be aware that a bitmap index is created in EVERY partition, so partitioning your fact table by day and then doing a ProcessFull is really going to hurt, both in size and in processing time.
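To see which objects are actually burning the BuildIndex time, you can rank them from the trace table. A minimal sketch, again assuming the trace sits in dbo.Trace; the EventSubclass code used for BuildIndex is an assumption here, so check it against your trace definition (the attached workbook maps the codes to names for you):

```sql
-- Rank objects by time spent in BuildIndex (bitmap index building) events.
DECLARE @BuildIndexSubclass int = 19;   -- assumption: verify the BuildIndex code in your trace

SELECT TOP (20)
        t.ObjectPath,
        t.ObjectName,
        SUM(t.Duration) AS TotalDurationMs,
        COUNT(*)        AS Events
FROM    dbo.Trace AS t
WHERE   t.EventClass    = 6                   -- Progress Report End (verify against your trace)
  AND   t.EventSubclass = @BuildIndexSubclass
GROUP BY t.ObjectPath, t.ObjectName
ORDER BY TotalDurationMs DESC;
```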
What if Aggregate is High
This means that we may have too many (or too big) aggregations on a measure group. Best practice is to avoid having more than, say, 20 aggregations on a measure group.
Locating Expensive Attributes
If you use the “cube size” workbook you can see which attributes are taking up the most space on disk, and these are the ones you should check to see if they can be optimised:
http://blogs.prodata.ie/post/Disk-Space-Analyser-for-Analysis-Services-(MOLAP).aspx
If you are in a hurry then you can just look at the files in a partition folder on disk. The “Product Alternative Key” below from Adventure Works is a classic example. Across all the partitions this chews up some 177 objects and 327 MB by itself. The grain is almost the same as the product key, so we could:
a) Turn off the bitmap index
b) Use an integer key
c) Turn it into a reporting property rather than a hierarchy.

So what difference did tuning the bitmap indexes/attributes make?
I turned off the bitmap indexes for attributes that were not needed on the customer and product dimensions, and here is the improvement in cube size and ProcessIndex time for my blown-up Adventure Works.
We achieved a 42% reduction in ProcessIndex time on the basic Adventure Works. For some of my customers this is a 5-10x improvement in cube processing time!

Useful Links for tuning MOLAP Processing
Here are some of my favourite links on tuning cube processing:
SQL Server 2008 White Paper: Analysis Services Performance Guide
http://www.microsoft.com/en-us/download/details.aspx?id=17303
The basics of faster fact processing
http://blogs.prodata.ie/post/The-basics-of-faster-fact-processing-in-Analysis-Services.aspx
Henk’s tech blog
Analysis Services Processing Best Practices
Download Link for the Excel Workbook for Analysing Profiler Trace Files
Below is a link to download the PowerPivot model for analysing processing trace files.
Instructions
a) Capture a trace file while the database is processing
b) Import the trace file into a SQL Server table (see the sketch after this list)
c) Set the variables on the first page of the Excel workbook: Server, Database and Cube
d) Update the worksheet “Objects” by clicking the button to run the VBA macro. This is some VBA code that relates partitions to measure groups and attributes to dimensions, as the trace file doesn’t contain this linkage and we need it to be able to “drill down” on performance detail.
e) Update the PowerPivot data table “Trace”. By default it uses a local database called “SSAS_Processing” and a table called “Trace”, but feel free to change the connection in PowerPivot.
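For step b), fn_trace_gettable does the heavy lifting. A minimal sketch, assuming the trace was saved as a .trc file at C:\Traces\processing.trc (a hypothetical path) and using the default database and table names the workbook expects:

```sql
-- Import the profiler trace file into the table the workbook reads by default.
-- The file path is a placeholder; point it at your own .trc file.
USE SSAS_Processing;
GO

SELECT  *
INTO    dbo.Trace
FROM    sys.fn_trace_gettable(N'C:\Traces\processing.trc', DEFAULT);
GO
```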
AW_Processing_1.0.xlsm
Want to learn more on cube processing and tuning?
Come to one of my sessions on cube processing!