Live from IBM Edge 2016: beyond Moore’s Law


IBM Edge kicked off this morning, and as befits an event focused on infrastructure, there's been a good bit of conversation about both hardware and software. I've been to a few technology conferences in my day, but I have to admit… never have I heard Moore's Law referenced so many times in one day.

More than one speaker today pointed out that in the very near future, it's going to take longer than 18 months to double the number of transistors on a chip, and that tech companies are going to have to get more creative when it comes to increasing performance. Earlier this year, Intel pushed back the introduction of chips built on its new 10-nanometer process to late 2017, roughly a year later than expected, and indicated that longer development cycles are likely to be the new norm.

That presents a problem for companies trying to cope with the deluge of data coming from connected devices and unstructured sources. If we can't rely on a steady increase in raw computing power, how do we give users the ability to keep up? The answer may be to look elsewhere and get creative.

One answer–especially when it comes to cloud computing–is to use accelerators at various points in the data path. For example, it's becoming easier (and cheaper) to add GPU accelerators to servers and HPC clusters, and several cloud providers are starting to make them available as a service.
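To make the idea concrete, here's a minimal sketch of what offloading a single operation to a GPU looks like from Python. It assumes an NVIDIA card and the CuPy library, neither of which any speaker specified; this is purely illustrative:

```python
# Minimal GPU-offload sketch: compare a matrix multiply on the CPU (NumPy)
# against the same operation on a GPU (CuPy). Assumes an NVIDIA GPU and
# that CuPy is installed; sizes are arbitrary.
import time

import numpy as np
import cupy as cp

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

# Baseline: matrix multiply on the CPU.
t0 = time.perf_counter()
np.dot(a_cpu, b_cpu)
cpu_seconds = time.perf_counter() - t0

# Same operation on the GPU: copy inputs to device memory, multiply there.
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
t0 = time.perf_counter()
cp.dot(a_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()  # wait for the GPU to actually finish
gpu_seconds = time.perf_counter() - t0

print(f"CPU: {cpu_seconds:.3f}s  GPU: {gpu_seconds:.3f}s")
```

The gain comes from doing the heavy arithmetic on thousands of GPU cores instead of waiting on a handful of CPU cores, which is exactly the kind of improvement that doesn't depend on transistor density continuing to climb.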

Another answer is to lean more heavily on large flash storage arrays. When dealing with large data sets on disk arrays, the read/write speed of the physical disks is often the biggest limiting factor. Moving to flash can provide a real boost in situations where lower latency shortens time to results. Large arrays are getting cheaper, and the technology has improved considerably since the earliest models.
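If you want to see what that latency difference looks like on your own hardware, a rough benchmark along these lines will do. The file path is a placeholder, and keep in mind that the operating system's page cache can mask device latency on repeated runs:

```python
# Rough random-read latency benchmark. Run it against a file on spinning
# disk and again against a file on flash to see the gap described above.
# PATH is a hypothetical test file; make it large (1 GB+) so reads miss
# the OS page cache more often.
import os
import random
import time

PATH = "/data/sample.bin"   # placeholder test file
BLOCK = 4096                # 4 KB reads, a typical database page size
READS = 1000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)
try:
    t0 = time.perf_counter()
    for _ in range(READS):
        # Seek to a random block-aligned offset and read one block.
        offset = random.randrange(0, size - BLOCK, BLOCK)
        os.pread(fd, BLOCK, offset)
    elapsed = time.perf_counter() - t0
finally:
    os.close(fd)

print(f"avg random-read latency: {elapsed / READS * 1000:.2f} ms")
```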

Yet another answer–especially when it comes to analytics–is to get smarter about where and when the computing actually takes place. This is the approach IBM is taking to mainframe analytics, giving users the option to leverage a combination of Apache Spark and data virtualization to avoid the traditional extract-transform-load (ETL) operations that waste so much time. Suddenly, analytics and insight become available in real time, not days later.
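As a hedged sketch of the "query the data where it lives" idea, here is what reading records directly over a JDBC connection with Spark looks like, skipping the staging copy a traditional ETL pipeline would create. The connection URL, credentials, and table names are hypothetical placeholders, not IBM's actual configuration:

```python
# Sketch: analyze source data in place with Spark over JDBC, with no
# separate extract-transform-load step. All connection details and table
# names below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-place-analytics").getOrCreate()

# Read the live table through a JDBC/virtualized connection; no staging copy.
transactions = (
    spark.read.format("jdbc")
    .option("url", "jdbc:db2://mainframe.example.com:446/PROD")  # placeholder
    .option("dbtable", "LEDGER.TRANSACTIONS")                    # placeholder
    .option("user", "analyst")
    .option("password", "secret")
    .load()
)

# Run the analysis directly against the source data.
daily_totals = (
    transactions.groupBy("TXN_DATE")
    .sum("AMOUNT")
    .orderBy("TXN_DATE")
)
daily_totals.show()
```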

While these are far from the only ways companies can continue to meet the demand for faster time to results, it's a good sign that the topic is being actively discussed, and that companies such as IBM are thinking about creative ways to skirt Moore's Law and go beyond raw CPU power.


Matthew West

Director of Content Marketing at Rocket Software
I like plate reverb, Rat pedals, Thai curry, New Weird fiction, my kids, Vespas, Jazzmasters, my wife & Raiders of the Lost Ark. Not necessarily in that order.
