Multi-Cloud Is the Future of Cloud Computing
The days of monolithic data centers, where all applications were developed, tested, and deployed entirely on-premises, have long since passed. So has the notion that hosting an app is a mutually exclusive choice among on-premises, a public cloud, or a private off-prem cloud. And while "hybrid cloud" was the buzzword du jour in cloud computing just a few years ago, a new term, multi-cloud, has more recently become popular (see the sidebar for definitions).
According to Datamation*, 58 percent of enterprises in the private sector are already using two or more cloud providers.1 And while government adoption of multi-cloud environments has lagged behind the private sector, a FedTech* article reports that in 2018, 40 new U.S. Federal agencies began to participate in the Federal Risk and Authorization Management Program (FedRAMP), which authorizes and monitors federal cloud services. The article went on to say that "hybrid cloud and multi-cloud solutions are likely to grow in prominence" and that about 65 cloud products could receive FedRAMP authorization in 2019, including a mix of hybrid tools.2
A hybrid cloud is a computing environment that combines public cloud and private cloud environments. Data and applications can be shared between the two environments, enabling organizations to seamlessly scale on-premises infrastructure to off-premises infrastructure (and back).
Multi-cloud is the next evolutionary step, using a combination of the best-of-breed solutions and services from different cloud providers, including private cloud, to create the most suitable solution for a business or government agency.
It Can Run Here, It Can Run There, It Can Run Anywhere
Vague allusions to Dr. Seuss aside, hybrid and multi-cloud environments raise an important question: How do application developers and operations teams ensure that a particular application or service will run reliably, whether it’s hosted on-prem, at cloud provider A, or at cloud provider B? Non-cloud-native applications are often heavily dependent on access to specific hardware or other resources, such as data or a compiler. Moving such an application from its happy on-prem home to a different infrastructure with a different stack can easily cause performance issues at the very least, and may cause the app to fail completely. That would be bad for business, whether you are running an eCommerce site, providing banking services, registering new voters, or monitoring a satellite launch.
And while enterprises and government agencies struggle with portability and reliability, they also face another hurdle—the need for agility. Today’s digitally focused business environment means consumer needs and expectations change rapidly.
Containers offer a potential solution to these challenges. Containers are lightweight and can spin up more quickly than virtual machines (VMs). And, because a containerized app is packaged with all of its dependencies (executables, binary code, libraries, and configuration files—anything that can be installed on a server), it’s more likely to be able to run on multiple infrastructures without breaking.
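This packaging is typically expressed in an image definition such as a Dockerfile. The sketch below is illustrative only; the base image, file names, and app are hypothetical, but it shows how an app and every dependency travel together in one image:

```dockerfile
# Hypothetical example: package a small Python service with all of its dependencies
FROM python:3.11-slim

WORKDIR /app

# Copy the dependency manifest first so the install layer can be cached
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY app.py .

# The resulting image carries the interpreter, libraries, and app code,
# so it runs the same on-prem or at any cloud provider with a container runtime
CMD ["python", "app.py"]
```

Because everything the app needs is baked into the image, the same artifact can be built once and deployed to any infrastructure that runs containers.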
The portability and repeatability of containers are what make multi-cloud practical. Gartner* predicts that by 2022, more than 75 percent of global organizations will be running containerized applications in production, compared to fewer than 30 percent today.3 Containers also help with agility. A survey by Forrester* found that 66 percent of organizations that had adopted containers experienced accelerated developer efficiency, while 75 percent of companies achieved a moderate-to-significant increase in application deployment speed.4
Modern Apps Need a Modern Infrastructure
While containers are a great way to transition to a multi-cloud environment, they are only as good as the resources that are available to them. For example, if an app (or microservice) is dealing with large volumes of data, is there enough storage and network bandwidth? Does the container need ample memory to do its job? And is it secure?
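Those resource questions are usually answered declaratively by the orchestrator. In Kubernetes, for example, a pod spec can request the memory and CPU a container needs and cap what it may consume; the names and values below are purely illustrative:

```yaml
# Illustrative Kubernetes pod spec declaring a container's resource needs
apiVersion: v1
kind: Pod
metadata:
  name: data-service            # hypothetical app name
spec:
  containers:
  - name: data-service
    image: example.com/data-service:1.0   # hypothetical image
    resources:
      requests:
        memory: "4Gi"           # guaranteed memory for the working data set
        cpu: "2"                # guaranteed CPU cores
      limits:
        memory: "8Gi"           # hard ceiling; exceeding it gets the container killed
        cpu: "4"
```

The scheduler uses the requests to place the pod on a node with enough capacity, so declaring resources honestly is what lets the same workload run predictably across clouds.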
Deploying containers on an Intel® architecture-based platform can address all these issues and more. For example, 2nd Generation Intel® Xeon® Scalable processors can boost compute resources, with up to 56 cores per socket, higher memory frequency and capacity than the previous generation, a new CPU memory controller that can take advantage of Intel® Optane™ DC persistent memory, plus many more built-in architecture enhancements. The new 2nd Generation Intel® Xeon® Scalable processors deliver up to 3.5x better VM density performance versus aging servers, at up to 59 percent savings.5
Containerized applications can benefit from the low latency and cost efficiency of Intel® Optane™ DC SSDs, as well as from high-performance Intel® Ethernet products. For excellent in-memory processing performance, adding Intel® Optane™ DC persistent memory modules can expand the system memory pool into the terabytes at a much lower cost than DRAM. This technology offers latency similar to DRAM, yet provides native persistence that can maintain a working data set through power cycles.
Container security can be enhanced using Kata* Containers in conjunction with security technologies from Intel such as Intel® Trusted Execution Technology (Intel® TXT) and Intel® Virtualization Technology (Intel® VT).
Deploying containerized applications—and running those containers on Intel® technology—is an excellent way to modernize your infrastructure and prepare for multi-cloud. Read the paper, Containers: Paving the Way to Portability and Hybrid Cloud, to learn more about how Intel can help you get started with containers, or if you’re already using them, boost their performance.