In recent years, data-driven businesses have looked to move their datasets out of storage and into memory. The idea is to exploit the speed and responsiveness of DRAM to slash latency and multiply app performance. In-memory data platforms like SAP HANA, for example, raise performance by an order of magnitude - even making new operations feasible. But large datasets require huge amounts of memory, and that pushes up the cost to cloud providers and consumers.
The volatility of DRAM creates a second issue that's less easily solved: suffer a failure in an in-memory system, and any data that hasn't been synchronised to storage is lost forever. That data could be irreplaceable, and even when it's not, there could be hours of downtime before the database is restarted.
But the arrival of Intel® Optane™ DC Persistent Memory (PMEM) represents the start of a new era in data center and cloud performance. As the name suggests, the much-anticipated new technology combines the latency and bandwidth advantages of DRAM with the persistence of storage media, such as flash. In a failure, the dataset remains in memory, from where it can be recovered by the rebooting database - slashing the restart times for critical business apps down to minutes or seconds.
“When we think about cloud computing,” says Nick Dyer, Principal Storage Systems Engineer for HPE in the UK and Ireland, “we typically think about Amazon Web Services or Azure, with [huge] datasets that we run behind the scenes. But those are typically built on hyperscale technologies. Optane allows us to actually be able to harness more real-time access to data, so we can even deliver lower latency into those hyperscale platforms.”
Intel® Optane™ DC Persistent Memory has a second advantage over conventional DRAM. Available in module sizes up to 512GB, and with a significantly lower cost per terabyte, Optane technology lowers the cost of high-memory computing in the cloud and data center.
In computing environments like Google Cloud Platform, the benefits aren't just hypothetical. Customers using in-memory platforms need to balance growing resource demands against the limitations on instance sizes - and the high cost of scaling up.
As Nan Boden, senior director of global technology partnerships for Google Cloud, explains: “Our customers' use of in-memory workloads with SAP HANA for innovative data management use cases is driving the demand for even larger memory capacity.”
Over 25,000 customers run SAP applications on HPE infrastructure, and persistent memory technologies like Optane DC have a key role to play. “Optane is a very exciting technology for HPE Nimble Storage,” explains Nick Dyer, “because actually we can harness the raw compute power that Intel CPUs can bring, drop Intel® Optane™ storage class memory in, and that allows us to deliver what we call memory-driven flash technology, without having to go and replace the entire storage media behind the scenes.”
Looking to provide more cost-effective ways to support analytics workloads, Google recently became the first public cloud provider to offer Virtual Machines (VMs) with Intel® Optane™ DC Persistent Memory. According to Boden, the new VMs will “offer higher overall memory capacity with lower cost compared to instances with only DRAM.”
Aimed specifically at enterprise customers handling large datasets in memory, the persistent memory VMs reduce the cost per transaction for high-capacity in-memory computing. More than that: in the initial deployment of VMs with up to 7TB of memory, early adopters reported as much as a twelve-fold reduction in SAP HANA startup times.
Other major providers are trialling persistent memory in the cloud. It's still relatively early days, but the in-memory database market is forecast to grow 19% annually between now and 2023. As the push for high-memory VMs intensifies, persistent memory will become a key enabler in the next phase of data center and cloud computing: the wholesale migration of workloads from storage into memory.
It's an exciting time for enterprises running data-intensive, transactional and analytical workloads, all of which will soon benefit from a surge in available memory for a more manageable cost. As instance sizes increase, it will become possible to do more in-memory, driving high-end performance forward.
At the same time, falling memory costs will quickly lower the barriers to entry for in-memory cloud computing, bringing it within reach of smaller enterprises and agile teams looking to deploy targeted high-performance cloud solutions. Before long, we'll all benefit from the resulting step-change in cloud performance.
Some businesses are already aware of the advantages. Epsilon Telecommunications is a connectivity service provider with a mission to simplify and accelerate how customers connect to their own applications, services and ecosystems.
“Cloud computing relies on network storage,” says Mark Daley, Director of Digital Strategy at Epsilon. “So providing the ability for cloud compute to interact with the storage, to access the storage, to scale that storage up is critical... The next generation of persistent memory and other technologies really enable us to speed up everything that we’re doing in terms of next generation data, application and networking technologies.”