Live from IBM Insight2015: The feasibility of DIY analytics

The cognitive era is here, and to disrupt instead of being disrupted you need to leverage the insights hidden in your data. To do this you can use a cloud-based solution like Watson or an on-premises one like IBM Analytics. But how can you get cost-efficient insights through analytics if none of these options are available to you (for whatever reason)? Would it be feasible to build a solution yourself?

Apache Spark Machine Learning

Apache Spark is a free, open-source in-memory engine for large-scale data processing. It is used for database, streaming, and graph workloads, as well as machine learning through its MLlib library. One of MLlib's features is collaborative filtering, which is commonly used in recommendation systems and is important in cognitive applications. A good example is Netflix's recommendation engine: if you liked this, then we recommend...
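To make this concrete, here is a minimal sketch of collaborative filtering with MLlib's ALS, using the RDD-based API that Spark shipped at the time. The toy ratings and names are my own illustration, not code from the session:

```python
# Minimal collaborative-filtering sketch with Spark MLlib's ALS.
# The (user, item, rating) triples below are made-up toy data.
from pyspark import SparkContext
from pyspark.mllib.recommendation import ALS, Rating

sc = SparkContext(appName="recommender-sketch")

ratings = sc.parallelize([
    Rating(0, 10, 4.0), Rating(0, 11, 2.0),
    Rating(1, 10, 5.0), Rating(1, 12, 3.0),
    Rating(2, 11, 4.0), Rating(2, 12, 1.0),
])

# Factor the sparse user-item matrix into rank-10 user and item factors.
model = ALS.train(ratings, rank=10, iterations=10, lambda_=0.1)

# "If you liked this, then we recommend...": top 3 items for user 0.
print(model.recommendProducts(0, 3))

sc.stop()
```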

The challenge with recommendation systems is that you need to do large-scale matrix factorization, typically via alternating least squares (ALS). To get recommendations within reasonable timeframes, you need a lot of CPU power, which of course translates to higher costs. So maybe DIY is not feasible?
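For context (the notation here is mine, not from the session), ALS factors the sparse user-item rating matrix into low-rank user factors x_u and item factors y_i by minimizing

```latex
\min_{X,Y} \sum_{(u,i)\in\Omega} \left(r_{ui} - x_u^\top y_i\right)^2
  + \lambda \left(\sum_u \lVert x_u \rVert^2 + \sum_i \lVert y_i \rVert^2\right)
```

alternating closed-form least-squares solves for X and Y. Each sweep solves a k-by-k linear system per user and per item, so one iteration costs roughly O(|Ω| k² + (m + n) k³) for m users, n items, and rank k. With millions of users and items, that is where the CPU hours go.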

GPU computing

A graphics processing unit (GPU), occasionally also called a visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are optimized for data-parallel, high-throughput workloads and can be programmed using CUDA, NVIDIA's parallel programming platform (see the kernel sketch after the numbers below). To give you an idea of GPU performance:

Xeon E5-2687 (CPU)
8 cores, 16 threads @ 1600 MHz
0.35 SP TFLOPS
0.17 DP TFLOPS
52.2 GB/s memory bandwidth

Tesla K40 (GPU)
2880 cores, 30720 threads @ 745 MHz
4.29 SP TFLOPS
1.43 DP TFLOPS
288 GB/s memory bandwidth

As you can see, the Tesla K40 GPU delivers roughly 12x the single-precision TFLOPS and about 5.5x the memory bandwidth of the Xeon.
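To show what data-parallel programming looks like, here is a minimal CUDA-style kernel sketched in Python with Numba's CUDA support; the SAXPY example and all names are my own illustration, not code from the session:

```python
# A minimal data-parallel GPU kernel (SAXPY: out = a*x + y) via Numba CUDA.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)            # this thread's global index
    if i < out.size:            # guard against out-of-range threads
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)  # one thread per element
```

Each of the GPU's thousands of threads handles one element, which is exactly the shape of the inner loops in matrix factorization.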

At Insight2015, benchmark results for CUDA Matrix Factorization (cuMF) were shared in a session, and it turns out that GPU computing might be the holy grail for machine learning: you get 10x the performance for 1% of the cost! It seems that with Apache Spark ML and CUDA you can deploy and operate a cost-efficient machine learning platform, provided you have the skill set (machine learning, CUDA optimizations). Are GPUs the future of machine learning calculations? Let me know in the comments!

Jeroen van Dun is Product Manager in the Rocket LegaSuite Lab at Rocket Software.