In computing terms, traditional storage – whether it’s an SSD or HDD – is slow compared to DRAM. Latency is a particular problem with external storage: the delay between a request for data and the moment its transfer begins.
When accessing a typical SSD, for example, random-access latency is around 0.1ms; DRAM latency is measured in tens of nanoseconds. When streaming a large amount of data or reading a whole file, latency matters little, and the transaction time depends mainly on transfer speed.
However, when an application relies on performing thousands of transactions quickly, such as many real-time analytics packages, latency becomes key. The higher the latency, the longer the processor sits idle waiting for data to be retrieved. It’s why many businesses are turning to next-generation Persistent Memory (PMEM) technologies.
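The effect described above is easy to see with some back-of-the-envelope arithmetic. The sketch below uses the article's illustrative round numbers plus an assumed sequential transfer speed of 500MB/s (not a benchmark of any real device): thousands of small random reads are dominated by per-operation latency, while a single large streaming read is bound almost entirely by throughput.

```python
# Back-of-the-envelope model: total time = per-operation latency + transfer
# time. Figures are illustrative round numbers, not measured benchmarks.

SSD_LATENCY_S = 0.0001      # ~0.1 ms random-access latency, typical SSD
DRAM_LATENCY_S = 0.0000001  # ~100 ns for DRAM (orders of magnitude lower)
THROUGHPUT_BPS = 500e6      # assumed ~500 MB/s sequential transfer speed

def total_time(n_ops, bytes_per_op, latency_s, throughput_bps=THROUGHPUT_BPS):
    """Sum of per-operation latency and pure data-transfer time."""
    transfer = (n_ops * bytes_per_op) / throughput_bps
    return n_ops * latency_s + transfer

# 10,000 random 4 KB reads: latency dominates on the SSD (over a second
# of the total is pure waiting)...
ssd = total_time(10_000, 4096, SSD_LATENCY_S)
dram = total_time(10_000, 4096, DRAM_LATENCY_S)

# ...but one 40 MB streaming read is throughput-bound on either medium.
stream = total_time(1, 40 * 1024 * 1024, SSD_LATENCY_S)
```

Under these assumptions the same 40MB of data takes over a second to fetch as small random SSD reads, but well under a tenth of a second either as a single stream or as random reads from DRAM – which is why transaction-heavy workloads feel storage latency so acutely.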
“By turning our systems into what we’re calling memory-driven flash technologies,” says Nick Dyer, Principal Storage Systems Engineer for HP in the UK and Ireland, “we’ve seen average latencies in real-world tests go from one to two milliseconds down to an average of less than 200 microseconds, on either 100 percent reads or even 50:50 read:write ratios.”
“The real-world use case for this [improved speed],” he adds, “is that it allows us to deliver functionality to customers where we’re able to take workloads that are using batch processing from hours down to minutes.”
It’s no wonder that this sort of ‘in-memory’ computing has exploded in recent years. According to Gartner, 75 percent of cloud-native application development will use in-memory/PMEM computing by 2019, and by 2021, at least 25 percent of large and global organisations will adopt platforms using in-memory technologies.
Persistent memory in the data center allows applications to run without incurring the latency penalty of going out to storage.
Senseye, for example, provides cloud-based software solutions that enable predictive maintenance within manufacturing and other industrial sectors. By exploiting Internet of Things (IoT) connectivity and greater processing power, Senseye’s analytics can forecast the future state of industrial machines by understanding when the current state of those machines has changed from their normal behaviour.
“It’s key that we have high quality performing hardware with high speed storage,” says Rob Russell, Chief Technical Officer at Senseye. “Technologies such as PMEM and Intel® Optane™ are crucial… We use many third-party cloud hosted solutions as part of our system and their underlying technologies that enable real-time access to data that we’ve captured across our customer base, ensuring that we have a highly performing system that we can scale out.”
Moving data processing to in-memory tech such as Intel® Optane™ can provide a massive speed boost to critical applications, providing the speed needed for real-time data access.
Of course, “using memory to accelerate performance of I/O-bound applications is not a new idea,” writes Mike Matchett, a senior analyst and consultant at Taneja Group. “It has always been true that processing data in memory is faster (10 to 1,000 times or more) than waiting on relatively long I/O times to read and write data from slower media – flash included.”
While in-memory processing can dramatically increase speed, it brings its own problems. The first is volatility: data held in memory is not permanent, and if power is lost, that data is lost with it. Consequently, data must periodically be written back to slower, non-volatile flash storage.
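The write-back pattern this implies can be sketched in a few lines. The class below is a minimal illustration, not any particular product's design: state lives in a fast in-memory dict and is flushed to durable storage (a JSON file standing in for flash) only every few updates. Anything written since the last flush would vanish on power failure, which is exactly the volatility problem described above.

```python
# Minimal write-back cache sketch: memory-speed writes, occasional
# durable flushes. Names and the flush policy are illustrative only.
import json
import os
import tempfile

class WriteBackCache:
    def __init__(self, path, flush_every=100):
        self.path = path
        self.flush_every = flush_every
        self.dirty = 0                      # updates since last flush
        self.data = {}
        if os.path.exists(path):            # recover last durable snapshot
            with open(path) as f:
                self.data = json.load(f)

    def put(self, key, value):
        self.data[key] = value              # fast: memory-speed write
        self.dirty += 1
        if self.dirty >= self.flush_every:  # slow: flush to "flash"
            self.flush()

    def flush(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f)
        self.dirty = 0

path = os.path.join(tempfile.mkdtemp(), "snapshot.json")
cache = WriteBackCache(path, flush_every=2)
cache.put("sensor", 41)   # only in memory - lost if power fails now
cache.put("sensor", 42)   # second write triggers a durable flush
recovered = WriteBackCache(path).data      # simulated post-crash restart
```

Persistent memory removes the need for this dance: because the memory itself survives power loss, the flush step, and the risk window between flushes, disappear.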
The second issue is one of capacity, and modern data processing is rapidly pushing against the limits of what DRAM is capable of.
“The traditional DRAM layer is costly,” explains Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group. “It offers limited capacity for large datasets, and while the software developer community has been very creative in trying to engineer around memory, it’s not sufficient for the types of workload footprints that we see our customers demanding in the future.”
Persistent memory, such as Intel® Optane™ DC Persistent Memory, provides a future-proofed solution. Installed alongside traditional DRAM, PMEM offers many of the same advantages, including low-latency access, but in far greater capacities: Intel® Optane™ DC, for example, will be available in 128GB, 256GB and 512GB modules.
Persistent memory also widens the options for how data can be accessed and used. As Jai Menon explained in his article Memory 2.0 – The Persistent Memory Era, there are three waves of persistent memory adoption. In Wave 1, applications simply want more memory, so they use the increased densities persistent memory offers without relying on its non-volatility. In Wave 2, applications use persistent memory in place of DRAM and take advantage of that non-volatility. In Wave 3, applications use PMEM in place of flash storage, keeping all of their data in memory.
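What makes Waves 2 and 3 possible is a different programming model: rather than issuing read() and write() calls through a storage stack, an application maps a persistent region into its address space and updates bytes in place. The sketch below illustrates that model with Python's standard mmap module over an ordinary file; on a real PMEM module the file would sit on a DAX-enabled filesystem, so stores would reach persistent media directly and the flush would be a cache-line flush rather than a page write-back.

```python
# Load/store access to a persistent region, illustrated with mmap over
# an ordinary file. On real PMEM the file would be on a DAX filesystem.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)           # size the region up front

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as region:
        region[0:5] = b"hello"        # plain byte store, no write() syscall
        region.flush()                # on PMEM: a cache-line flush instead

with open(path, "rb") as f:           # reopen to confirm the data survived
    persisted = f.read(5)
```

The update looks like an ordinary in-memory assignment, yet the data survives the process ending – the essence of treating storage as memory.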
Having persistence makes a huge difference to reliability, but it can also dramatically cut recovery times, particularly when a database restart is required. Intel demonstrated this by restarting a server running Aerospike’s in-memory database: with Optane DC Persistent Memory, the restart took just 16.9 seconds, compared to 25 minutes on a server using a combination of DRAM and flash.
Persistent memory is a game-changer, providing the capacity, price point, and reliability that data-intensive applications require, while delivering the persistence and scale that DRAM alone can’t.