IBM Raises Bar on Machine Learning, Exponentially

IBM demonstrates 10x faster large-scale machine learning

Together with EPFL scientists, our IBM Research team has developed a scheme for quickly training machine learning models on big data sets. It can process a 30 GB training dataset in less than one minute on a single graphics processing unit (GPU), a 10x speedup over existing methods for limited-memory training. The results, which exploit the full potential of the GPU, are being presented at the 2017 NIPS Conference in Long Beach, California.

Training a machine learning model on a terabyte-scale dataset is a common but difficult problem. If you're lucky, you may have a server with enough memory to fit all of the data, but training will still take a very long time: a matter of a few hours, a few days, or even weeks.

Specialized hardware devices such as GPUs have been gaining traction in many fields for accelerating compute-intensive workloads, but it is difficult to extend their benefits to very data-intensive workloads: a GPU's on-board memory is far smaller than a large training dataset, so the data must be shuttled back and forth between host memory and the device.
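To make the constraint concrete, here is a minimal sketch of the baseline pattern the article is contrasting against: out-of-core training where a dataset held in host RAM is streamed to the GPU one chunk at a time. This is not IBM and EPFL's scheme, just a generic illustration using PyTorch with toy, made-up sizes; the repeated host-to-device copies in the inner loop are the data-movement cost that limited-memory training methods try to reduce.

```python
# Generic out-of-core GPU training sketch (not the IBM/EPFL method).
# The dataset lives in host RAM, standing in for data too large to fit
# in GPU memory, and is copied to the device in chunks. Toy sizes only.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

n_samples, n_features, chunk_size = 100_000, 100, 10_000

# Synthetic labeled data kept in host memory.
X_host = torch.randn(n_samples, n_features)
true_w = torch.randn(n_features)
y_host = (X_host @ true_w + 0.1 * torch.randn(n_samples)).sign()

# Simple linear classifier whose parameters live on the device.
w = torch.zeros(n_features, device=device, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

for epoch in range(3):
    for start in range(0, n_samples, chunk_size):
        # Copy only the current chunk from host to device; this transfer
        # is the bottleneck when the full dataset cannot fit on the GPU.
        X = X_host[start:start + chunk_size].to(device)
        y = y_host[start:start + chunk_size].to(device)

        opt.zero_grad()
        loss = torch.nn.functional.soft_margin_loss(X @ w, y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

In this naive pattern every chunk is transferred the same number of times regardless of how much it contributes to training, which is why simply attaching a GPU does not by itself solve the data-intensive case the researchers report speeding up.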

Read More at Next Big Future

