Hope for running SSAS in the cloud with “Big Compute”!

Often we have advised customers wanting performance out of their Analysis Services engines, be that MOLAP, Tabular or PowerPivot, to avoid the public cloud offerings from Microsoft and, in some cases, even to avoid virtualisation on premise.

What? Why shouldn’t we virtualise Analysis Services?

The issue is not so much the virtualisation products like Azure, Hyper-V or VMware. All of these products can run Analysis Services virtualised with little overhead compared to a physical server. The days of consultants and naysayers claiming that “virtualisation” is the bottleneck are largely gone, or confined to extreme-scale cases.

The real problem stems from the choice of hardware for the host server. This is a gross oversimplification, but basically we can buy two different types of servers:

1. Very fast PCs/servers with a small number of cores and excellent single-threaded and memory performance. Think of your one-socket, almost 4 GHz beast you play games on, with SSDs and an 1,800+ MHz FSB. Maybe something like Aaron Bertrand’s new toy: http://www.sqlperformance.com/2014/01/system-configuration/justifying-the-new-mac-pro

2. Servers with high capacity, lots of cores and much slower CPU and memory performance. Think of your four-NUMA-node, 1.9 GHz, 80-core virtualisation host with an 800 MHz FSB RAM speed and, ahem, “Power Saving” features to slow the CPU down some more!

Guess which type of server people typically buy for virtualisation projects, and guess which type suits the CPU- and memory-intensive workloads of Analysis Services?
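If you want to see the gap for yourself, here is a minimal sketch in Python: a single-threaded, CPU-bound loop you can run on both a desktop-class machine and a typical virtualisation host. It is only an illustration (not an official SSAS benchmark), and the absolute timings are meaningless, but the ratio between the two machines is a rough proxy for how sensitive Analysis Services is to single-threaded speed.

    import time

    def single_threaded_benchmark(iterations: int = 20_000_000) -> float:
        """Time a CPU-bound loop on a single thread.

        A crude stand-in for the single-threaded work the SSAS formula
        engine does; compare elapsed times across machines, not in isolation.
        """
        start = time.perf_counter()
        total = 0
        for i in range(iterations):
            total += i * i  # pure integer arithmetic, no I/O
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"Elapsed: {single_threaded_benchmark():.2f}s (lower is better)")

Run it on your 4 GHz gaming rig and then on a 1.9 GHz virtualisation host: the ratio tends to track the difference in clock speed and memory performance, which is exactly the gap Analysis Services feels.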

I discuss hardware selection for Analysis Services in more detail in this blog post:

http://blogs.prodata.ie/post/Selecting-Hardware-for-Analysis-Services-(10GB-1TB-size).aspx

Some of my customers have fixed this problem by deploying “high compute” pods within their VMware infrastructure, specifically suited to the single-threaded sensitivity of Analysis Services, but sadly the Microsoft Azure IaaS offerings have historically been very much “one size fits all” from a compute performance perspective (RAM and core counts do vary).

Just to be clear, there is nothing stopping you from virtualising SSAS workloads right now, and I’m sure some people have, and are quite happy with the “adequate” performance they get. However, performance is often an implicit requirement, and customers may not be happy when the “production” solution runs slower than, say, my 1,000 euro notebook or my old desktop.

So What is Changing? Enter “Big Compute”

After initially tackling the broader workloads of web applications and OLTP, Microsoft is now starting to look deeply at analytical workloads in the cloud, both with its platform offerings and by starting to provide VM servers aimed at high-compute workloads.

http://msdn.microsoft.com/library/windowsazure/dn594431.aspx

What does “Big Compute” actually mean? Well, something like this:

  • Intel Xeon E5-2670 2.6 GHz CPU
  • 16 cores
  • 112 GB DDR3-1600 MHz RAM
  • 40 Gbps back-end connectivity (wow!)
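Before loading an SSAS workload onto any VM, cloud or otherwise, it is worth sanity-checking what you were actually given. Here is a small sketch in Python, assuming the third-party psutil package is installed (pip install psutil):

    import os
    import psutil  # third-party: pip install psutil

    logical_cores = os.cpu_count()
    freq = psutil.cpu_freq()  # may be None on some platforms
    mem_gb = psutil.virtual_memory().total / (1024 ** 3)

    print(f"Logical cores: {logical_cores}")
    if freq:
        print(f"CPU frequency: {freq.current:.0f} MHz (max {freq.max:.0f} MHz)")
    print(f"RAM: {mem_gb:.0f} GB")

If the reported frequency sits well below the advertised clock, the host’s “Power Saving” features mentioned earlier may be throttling you.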

Some limitations

  • This is currently only available in the West Europe region (Amsterdam).
  • This is only available for PaaS; it is not available as an IaaS image for you to install SQL Server on.

Obviously I am hoping that these limitations are eventually lifted so we can put Tabular models and SSAS in the cloud without the embarrassing massive drop in performance compared to our laptops.

Call to action – ask Microsoft to:

  • Offer “small compute” images. I want a 4/8-core VM with a 1,600 MHz FSB, a 2.8+ GHz CPU and 64-128 GB of RAM.
  • Offer the “Big Compute” images for IaaS and for customers with analytics workloads on SSAS. Big Compute is not just for HPC nerds, guys!!
