
Executive Viewpoint 2017 Prediction: Kinetica – The Cognitive Era of Computing Makes Its Debut

Cognitive computing, which seeks to simulate human thought and reasoning, could be considered the ultimate goal of information technology. So it’s a rather bold claim to predict that the Cognitive Era of computing will begin in 2017.

Some contend that IBM’s Watson supercomputer already possesses cognitive computing capabilities, and I would agree. Watson’s enormous cost, however, puts such processing power beyond the reach of all but a few organizations worldwide. I also contend that this cost and the other barriers will begin to disappear with the changes coming in 2017.

The foundation for affordable cognitive computing already exists, built on steady advances in CPU, memory, storage and networking technologies. A major breakthrough came with the advent of the Graphics Processing Unit (GPU) and its use in data analytics. GPUs can process data up to 100 times faster than configurations containing CPUs alone. The improvement comes from massively parallel processing: some GPUs contain upwards of 4,000 cores, over two orders of magnitude more than the 16-32 cores found in today’s most powerful CPUs. The GPU’s small, efficient cores are also better suited to performing similar, repeated instructions in parallel, making the GPU ideal for accelerating the compute-intensive workloads that will characterize the Cognitive Era.
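To make that pattern concrete, here is a minimal Python sketch, assuming the open-source CuPy library and an NVIDIA GPU (neither is part of this article): a single element-wise operation is issued once and executed across thousands of GPU cores simultaneously.

    # A minimal sketch of the data-parallel pattern described above, using
    # CuPy (an assumption; any CUDA-capable array library would do). One
    # instruction stream is applied to every element in parallel on the GPU.
    import numpy as np   # CPU arrays
    import cupy as cp    # GPU arrays (requires an NVIDIA GPU and CUDA)

    n = 10_000_000
    x_cpu = np.random.rand(n).astype(np.float32)

    x_gpu = cp.asarray(x_cpu)           # copy host -> device
    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0  # runs across all available GPU cores
    y_cpu = cp.asnumpy(y_gpu)           # copy device -> host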

This article predicts that three additional changes in 2017 will build on this foundation to put cognitive computing capabilities within reach of most organizations.

Change #1: GPUs will become pervasive in the Cloud

Amazon has already begun deploying GPUs, and the company claims its EC2 P2 Instances establish “the largest GPU-Powered virtual machine in the cloud.” Microsoft has announced similar plans and claims its Azure N-Series Virtual Machines, currently in preview, will utilize “the fastest GPUs in the public cloud.” And Google will soon begin equipping its Cloud Platform with GPUs for its Google Compute Engine and Google Cloud Machine Learning services.
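As a hedged illustration, here is how one of the instance types named above might be provisioned with Amazon’s boto3 SDK; the AMI ID and key pair name are placeholders, not real values.

    # Sketch: launching an EC2 P2 GPU instance with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-XXXXXXXX",    # hypothetical: substitute a CUDA-ready AMI
        InstanceType="p2.xlarge",  # EC2 P2 instance with one NVIDIA K80 GPU
        KeyName="my-key-pair",     # hypothetical key pair name
        MinCount=1,
        MaxCount=1,
    )
    print(response["Instances"][0]["InstanceId"])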

These cloud service providers are all deploying GPUs for the same reason: to gain a competitive advantage. Others can be expected to follow their lead. The extent of the performance advantage is revealed by benchmark tests conducted by a large retailer comparing the performance of a SAP HANA in-memory database with that of a GPU-accelerated configuration from Kinetica. The pervasive availability of GPU acceleration in the cloud will be welcome news for organizations that are finding it difficult to implement GPUs in their own data centers.

The sub-second performance of the GPU-accelerated Kinetica configuration for all three functions tested constitutes a dramatic improvement over the unaccelerated SAP HANA in-memory database configuration.
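The retailer’s benchmark itself is not reproduced here, but a generic Python timing harness like the following sketch shows how such query latencies are typically gathered: run the same query against each system and record wall-clock time.

    # Sketch of a simple query-latency harness; the client connections are
    # hypothetical stand-ins for each system's driver.
    import time

    def time_query(run_query, label, repeats=5):
        """Time a callable that executes one analytic query end to end."""
        latencies = []
        for _ in range(repeats):
            start = time.perf_counter()
            run_query()
            latencies.append(time.perf_counter() - start)
        print(f"{label}: best of {repeats} = {min(latencies):.3f}s")

    # Usage (hypothetical callables wrapping each system's client):
    # time_query(lambda: hana_conn.execute(SQL), "SAP HANA")
    # time_query(lambda: kinetica_conn.execute(SQL), "Kinetica")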

Change #2: GPU-acceleration will gain carrier-class capabilities

Enhancements in availability and security will build on the proven performance and scalability of GPUs to make their use carrier-class. Availability enhancements include data replication with automatic failover, while security enhancements will add support for user authentication and role- and group-based authorization. Together these enhancements will make GPU acceleration suitable for applications that are mission-critical and/or must comply with strict security regulations.
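As a rough illustration of role- and group-based authorization (the role names and permission model below are illustrative only, not Kinetica’s actual API), an access check reduces to mapping a user’s roles to permitted actions:

    # Minimal sketch of a role-based authorization check.
    ROLE_PERMISSIONS = {
        "analyst": {"read"},
        "etl_service": {"read", "write"},
        "admin": {"read", "write", "grant"},
    }

    def authorize(user_roles, action):
        """Allow an action if any of the user's roles (or the roles its
        groups map to) grants that action."""
        return any(action in ROLE_PERMISSIONS.get(role, set())
                   for role in user_roles)

    assert authorize({"analyst"}, "read")
    assert not authorize({"analyst"}, "write")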

These capabilities will be strengthened further by their implementation in carrier-class public cloud infrastructures, virtually eliminating the risk of adoption for organizations in both the public and private sectors.

Change #3: Libraries will be optimized to take full advantage of GPUs

Most existing libraries achieve better performance when executed on a GPU, but few have been optimized to take full advantage of the GPU’s massively parallel processing. A select few machine learning libraries have been GPU-optimized by Google, Microsoft and NVIDIA, as well as by the University of California at Berkeley. With pervasive availability of GPUs in the public cloud infrastructure, many more libraries from many more sources can be expected to become GPU-optimized beginning in 2017.
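As one example of what GPU optimization looks like to a developer, the following sketch uses Google’s TensorFlow, in its 2017-era 1.x API, to pin a matrix multiplication to the GPU; other GPU-optimized libraries expose similar device controls.

    # Sketch: explicit GPU device placement in TensorFlow 1.x.
    import tensorflow as tf

    with tf.device("/gpu:0"):               # place ops on the first GPU
        a = tf.random_normal([4096, 4096])
        b = tf.random_normal([4096, 4096])
        c = tf.matmul(a, b)                 # GPU-accelerated matrix multiply

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        sess.run(c)   # the placement log confirms the matmul ran on the GPU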

The Cognitive Era of Computing

These three changes will help usher in the Cognitive Era of computing by making it affordable for many, if not most, organizations to converge artificial intelligence, business intelligence, machine learning, expert systems, natural language processing, pattern recognition and other data analytics in various ways to create systems capable of self-learning in real time.

If these changes break through the cost and other barriers that remain to achieving performance on the scale of a Watson supercomputer, 2017 may well be remembered as the year the Cognitive Era of computing began. For those interested, more information about GPU acceleration and its role in cognitive computing is available at:

Kinetica
