Modern enterprises increasingly lean on high-performance computing (HPC) and artificial intelligence (AI) to advance their goals. The powerful combination of these technologies, alongside high-performance data analytics, results in better business intelligence, insights that improve productivity, and innovative approaches to new products and services.
Today, many financial institutions rely on AI’s prowess for automatic fraud detection. In the automotive industry, modeling and simulation on HPC systems help design safer, more fuel-efficient cars. AI can also couple with Internet of Things (IoT) devices to monitor manufacturing equipment for warning signs of failure and notify the appropriate staff to perform required maintenance. Healthcare providers use AI to examine patients’ CT scans for anomalies requiring medical intervention; in many cases, AI can evaluate the scans faster than radiologists can. In a fast-moving global economy, AI offers remarkable benefits for enterprises seeking to remain leaders in their respective industries.
Implementing AI on HPC
For organizations seeking to enable machine learning, or neural-network-based deep learning, four steps frame the process: data preparation, model creation and training, quantization, and inference.
- Data preparation: Modern corporations have massive volumes of data available to them. Some of that data is structured, tagged, and organized. Other data, such as documents and images, is unstructured and proves more challenging to interpret for meaning. Combining, organizing, and culling data in advance gives AI a jump-start on evaluating it for essential insights.
- Model creation and training: AI requires a model, or framework, on which to base its data evaluations. Today, tools like TensorFlow* (which is optimized for use on Intel® Xeon® platforms) and Apache Spark* ease this process with libraries that accelerate model creation and implementation. Training a model can take time and substantial compute resources, making HPC a critical component of the process.
- Quantization: In its simplest terms, quantization is intelligent data compression: it reduces the numerical precision of a model's data, for example from 32-bit floating-point values to 8-bit integers. When vast data sets are involved in AI training, quantization shrinks the volume of data to store and compute on, making the process less compute-intensive. Using this technique, enterprises can accelerate calculations and get their AI models up and running more quickly.
- Inference: Once an AI model undergoes the training necessary for accurate results, it can start sifting through new and existing data to mine for correlations, meaning, and insights. And as a model is retrained on new data over time, its accuracy can continue to improve.
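The four steps above can be sketched end to end in a few dozen lines of plain Python. This is a minimal illustration, not an Intel or TensorFlow recipe: the single-neuron model, the toy data set, and the symmetric int8 quantization scheme are all hypothetical stand-ins for real tooling.

```python
import random

# --- Data preparation: assemble a small, clean, labeled data set ---
# Toy task: points above the line y = x are class 1, below are class 0.
# Culling points too close to the line mimics removing ambiguous records.
random.seed(0)
data = []
while len(data) < 100:
    x1, x2 = random.uniform(0, 1), random.uniform(0, 1)
    if abs(x2 - x1) > 0.1:                       # keep a clear margin
        data.append(((x1, x2), 1 if x2 > x1 else 0))

# --- Model creation and training: a single neuron, perceptron updates ---
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(50):                              # training epochs
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                       # perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# --- Quantization: store weights as 8-bit integers plus one scale ---
scale = max(abs(v) for v in w + [b]) / 127
qw = [round(v / scale) for v in w]               # int8-range weights
qb = round(b / scale)

# --- Inference: score new points with the quantized model ---
def predict(x1, x2):
    return 1 if qw[0] * x1 + qw[1] * x2 + qb > 0 else 0

print(predict(0.2, 0.9))  # clearly above y = x -> class 1
print(predict(0.9, 0.1))  # clearly below y = x -> class 0
```

Real deployments swap each stage for production tooling (data pipelines, a deep learning framework, a hardware-aware quantizer, an inference runtime), but the shape of the workflow is the same.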
Steps for implementing AI on HPC
For researchers and data scientists planning an AI implementation on their current HPC system, a four-step approach can help jump-start the effort:
- Data preparation: Advanced science can involve massive amounts of data. Honing that information to eliminate extraneous data and centralizing it for fast access will accelerate the AI training process.
- Model creation and training: While some institutions may have the in-house skills to create custom AI models, others will benefit from existing tools and libraries like TensorFlow* and Apache Spark*, which can reduce the time required for model building by a substantial margin. With a fledgling model in place, training is the next step. As the diagram shows, training a model to recognize images is an involved process. For deep learning, the various "layers" of a neural network need time to form, build, "weight" successes, and re-test. Over time, the model improves its accuracy. Because a process like this is resource-intensive, your HPC system offers an ideal platform for the task.
- Quantization: AI training can involve truly massive data sets. Quantization encompasses a variety of techniques for intelligent data compression. Reducing data volume through quantization can speed the calculations needed for model training, thereby shortening the training process.
- Inference: At this stage, all your efforts creating your model pay off. Because the newly built model can now infer detailed correlations from disparate data sources, it can offer in-depth insights and revelations for breakthrough science.
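Quantization's effect on data volume can be shown with a small numeric sketch. The weight values below are made up for the example, and the symmetric int8 scheme is one common approach, not the specific technique any particular toolkit uses.

```python
import struct

# Hypothetical 32-bit floating-point model weights (made-up values).
weights = [0.82, -1.54, 0.03, 2.96, -0.47, 1.10, -2.33, 0.68]

# Symmetric int8 quantization: map the largest magnitude to 127.
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]    # one signed byte each
restored = [q * scale for q in quantized]          # dequantize to compare

fp32_bytes = len(weights) * struct.calcsize("f")   # 4 bytes per weight
int8_bytes = len(quantized)                        # 1 byte per weight
worst_error = max(abs(w - r) for w, r in zip(weights, restored))

print(f"storage: {fp32_bytes} bytes -> {int8_bytes} bytes (4x smaller)")
print(f"worst round-trip error: {worst_error:.4f}")  # bounded by scale / 2
```

The 4x storage reduction carries over to memory bandwidth and, on hardware with 8-bit integer instructions, to compute throughput, at the cost of a small, bounded rounding error per value.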
While many enterprises have already adopted HPC for its multitude of benefits, uptake of AI has lagged behind. That trend is changing quickly, though. According to IDC, 70 percent of CIOs will aggressively apply data and AI to IT operations, tools, and processes by 2021.1 In the interim, companies face perceived barriers like staff expertise for AI implementation, the cost associated with deployment, and the need for specialized IT infrastructure. However, if 2nd Generation Intel® Xeon® Scalable processors, which can deliver up to 3.7x faster performance on HPC workloads,2 underlie your HPC system, you may be closer to AI deployment than you realize. Intel offers an AI-ready platform including tools like Intel® Deep Learning Boost (Intel® DL Boost) – which helps deliver up to 30x faster time to insight vs. previous Gen Intel® Xeon® processors3 – that are purpose-built to ease each step down the road toward AI implementation.
To learn more about AI implementation in your current HPC environment, read our white papers, The Case for Running AI, Analytics on HPC Clusters and Jump-Start Your AI Journey with Your Existing HPC Infrastructure; visit https://www.intel.co.uk/ai; or explore the Intel® Select Solution for HPC & AI Converged.