The Path to Higher Performance Computing

Better performance means getting access to data faster. We examine how low-latency storage is changing the face of HPC.

The way that we view speed in a data center/HPC environment is changing. For many years, the focus was on delivering faster and faster processors to cope with the ever-increasing complexity of data analysis. That approach has changed in recent years, however. Processor speed is obviously still important, but so is the speed at which data can be accessed and piped into the CPU, preventing costly idle time.

This is particularly important in the high-performance computing (HPC) space. With HPC, clusters of servers are used for parallel processing, effectively turning a data center into a supercomputer. With such huge power on tap, the processors themselves are no longer the bottleneck in overall computing performance.

“With the increasing amounts of data and compute, we’ve seen data centers scale,” says Chris Darvill, Senior Director of Sales Engineering for Cloudera EMEA. “But it’s obviously not possible to keep adding data centers all over the world. You also need to be able to better drive efficiency, both from a storage, from a memory, from a networking and from a power perspective.”

In fact, improved memory and storage are key to the future of HPC. Indeed, as Prowess Consulting points out, in environments with large datasets, like genome mapping or weather simulations, it is storage technology that “is likely to become the bottleneck, and data-transfer rates [are] especially important to the speed of an overall system”. [1]

With this in mind, a next-generation HPC environment demands storage that can deliver the input/output rates to keep the processing clusters occupied with little to no idle time. This means using low-latency storage, which can significantly reduce the time delay between when data is requested and when it is served.
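The "time delay between when data is requested and when it is served" can be put in concrete terms with a micro-benchmark. The sketch below (a simplified illustration, not a production tool such as fio) times random 4 KiB reads from a file; note that without direct I/O these reads may be served from the OS page cache, so measured figures will typically be lower than true device latency.

```python
import os
import random
import statistics
import tempfile
import time

# Illustrative sketch: estimate storage read latency by timing
# random 4 KiB reads. Results include OS page-cache effects, so
# treat them as a lower bound on real device latency.
BLOCK = 4096        # read size in bytes
NUM_BLOCKS = 2048   # file size = BLOCK * NUM_BLOCKS
NUM_READS = 500     # samples to collect

# Create a scratch file to read from.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * NUM_BLOCKS))
    path = f.name

fd = os.open(path, os.O_RDONLY)
latencies = []
for _ in range(NUM_READS):
    offset = random.randrange(NUM_BLOCKS) * BLOCK
    start = time.perf_counter()
    os.pread(fd, BLOCK, offset)            # one request-to-service cycle
    latencies.append(time.perf_counter() - start)
os.close(fd)
os.unlink(path)

avg_us = statistics.mean(latencies) * 1e6
p99_us = sorted(latencies)[int(NUM_READS * 0.99)] * 1e6
print(f"avg latency: {avg_us:.1f} us, p99: {p99_us:.1f} us")
```

Lowering the average and tail (p99) numbers this kind of probe reports is precisely what low-latency storage is about: the less time each read takes, the less time a processing cluster spends idle.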

Discover how Intel® Technologies can unlock the potential of your business data ›

The move to flash storage has reduced latency dramatically, but even all-flash arrays can't deliver the speed HPC requires. Instead, data needs to be tiered, with new types of faster storage putting data closer to the processor. Top-loading 90-bay or 60-bay systems from Supermicro, for example, support everything from SATA drives to NVMe SSDs and new persistent memory (PMEM) technologies.
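The tiering idea can be sketched in a few lines. The toy class below (an illustration of the general technique, not any vendor's implementation) keeps recently accessed data in a small, capacity-limited fast tier, promoting data from the slow tier on access and evicting the least-recently-used entry when the fast tier fills:

```python
from collections import OrderedDict

# Illustrative two-tier store: a small fast tier (think PMEM or NVMe)
# in front of a large slow tier (think SATA disk). Reads promote data
# into the fast tier; the least-recently-used entry is evicted first.
class TieredStore:
    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # fast tier, LRU order
        self.slow = {}              # slow, capacious tier
        self.fast_capacity = fast_capacity

    def write(self, key, value):
        self.slow[key] = value      # all data lands in the slow tier

    def read(self, key):
        if key in self.fast:                    # fast-tier hit
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]                  # slow-tier read
        self.fast[key] = value                  # promote hot data
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)       # evict coldest entry
        return value

store = TieredStore(fast_capacity=2)
for k in ("a", "b", "c"):
    store.write(k, k.upper())
store.read("a"); store.read("b"); store.read("c")
print(list(store.fast))  # prints ['b', 'c']
```

Real tiering engines use far more sophisticated heat tracking and move whole extents rather than keys, but the principle is the same: keep the hottest data on the fastest medium, closest to the processor.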

Intel® Optane™ DC persistent memory revolutionises the data center memory-storage hierarchy of the past. [2]

One example of this is the Intel® Optane™ DC P4800X data center SSD. This model uses 3D XPoint storage, which is non-volatile in the same way as traditional NAND flash storage but has the advantage that each bit can be addressed individually. The result is that the number of input/output operations per second (IOPS) is hugely increased. Intel quotes this drive as being able to handle up to 550,000 IOPS. [3]

Often, high-end applications use in-memory computing, switching to low-latency DRAM for data storage. While this delivers a massive performance improvement, it has several limitations, including the relatively small amount of DRAM in a system (not to mention its cost) and the fact that the data is volatile, which can cause slow restarts.

Intel® Optane™ DC P4800X SSDs can expand data center memory and significantly improve data processing speeds.

With the Optane DC P4800X and Intel® Xeon® processors, the drive can be used with Intel® Memory Drive Technology, which lets the SSD appear as DRAM to the system, expanding the amount of memory available and increasing performance speeds.

Yet, it’s the next generation of Intel® Xeon® Scalable processors that will have the biggest impact on HPC performance, especially when combined with Intel® Optane™ DC persistent memory.

This new memory technology puts non-volatile storage right next to the processor in higher densities than DRAM. Yet, Optane DC persistent memory can be accessed and used in the same way as traditional RAM, opening up new capabilities for real-time analysis, HPC and artificial intelligence.
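"Accessed in the same way as traditional RAM" means applications read and write the medium with ordinary loads and stores rather than block I/O system calls. The sketch below illustrates that programming model using an ordinary memory-mapped file as a stand-in for a DAX-mapped persistent-memory region (real PMEM would be mapped from a DAX-enabled filesystem, and durability would be guaranteed with cache-flush instructions via a library such as libpmem, which we don't assume here):

```python
import mmap
import os
import struct
import tempfile

# Stand-in for a persistent-memory region: a memory-mapped file that
# is read and written byte-addressably, with no per-access read()/
# write() system calls, and whose contents survive a "restart".
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)                 # back a 4 KiB region

# "Store": write a value directly into the mapped region.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        struct.pack_into("<Q", region, 0, 42)
        region.flush()                      # persist dirty pages

# Simulated restart: a fresh mapping sees the same data ("load"),
# unlike volatile DRAM, which would come up empty.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        (value,) = struct.unpack_from("<Q", region, 0)
print(value)  # prints 42
os.remove(path)
```

This is why persistent memory can avoid the slow restarts that plague volatile in-memory systems: state written before a shutdown is simply there when the application maps the region again.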

“With the advent of PMEM technology from providers like Intel,” explains Darren Watkins, Managing Director at Virtus Data Centers, “we’re starting to see low latency access to storage, which plays right into the hands of big data and artificial intelligence. The use case for artificial intelligence is going to be about latency and low latency access to ensure process and capability. That process and capability has to sit in a data center because it’s of a high-performance nature.”

Low latency storage is the key to improving big data analytics, machine learning and AI performance.

In terms of performance, the early signs are encouraging. Running Intel® Optane™ DC persistent memory, Intel set a new performance record for Microsoft Hyper-V and Storage Spaces Direct: 13.7 million IOPS. That’s twice the record set just a year before. [4]

Intel® Optane™ DC persistent memory will also feature in the new Frontera supercomputer being built at the Texas Advanced Computing Center (TACC). Powered by more than 16,000 Intel® Xeon® Scalable processors, the HPC system will go live in 2019 and deliver a peak performance of between 35 and 40 petaflops. This will make it one of the most powerful HPC systems in the world, tackling projects such as “analyses of particle collisions from the Large Hadron Collider, global climate modeling, improved hurricane forecasting, and multi-messenger astronomy.” [5]

HPC is no longer defined by CPU speed alone; it’s also defined by how fast data can be served, reducing processor idle time. Fast storage is the answer, putting data where it’s needed, when it’s needed, reducing time to insight and giving businesses actionable information in real time.

Unleash the power of your business data with leading-edge Intel® Technologies ›