The Challenge of Keeping Up with Data

Features and Performance

  • Intel® Optane™ persistent memory provides an affordable alternative to expensive DRAM that can deliver huge capacity and accommodate demanding workloads and emerging tools like in-memory databases (IMDBs)

  • Intel® Optane™ persistent memory is propelling major infrastructure consolidation. The increase in memory size from Intel® Optane™ media provides the opportunity to consolidate workloads onto fewer nodes, ultimately reducing the number of deployments required and maximizing processors that were previously underutilized due to memory constraints

  • As an entirely new memory tier with new properties of performance and persistence, Intel® Optane™ persistent memory is a springboard for innovation: architects and developers are taking advantage of new usages around restart and replication, achieving breakthrough performance by removing system bottlenecks that used to severely constrain workloads, and making use of what could be considered the world's smallest, yet fastest, storage device sitting on a memory bus

  • Today's in-memory databases are constrained by traditional computing architectures, where memory is small, expensive, and volatile. Intel® Optane™ persistent memory represents a new memory tier that is large, persistent, and affordable, significantly boosting capacity and performance for rapid data processing

  • Virtual machine (VM) density and virtualization performance have historically been constrained by system memory that is small, expensive, and volatile. Intel® Optane™ persistent memory gives virtualization services more capacity, offering a large, affordable, and persistent memory tier for delivering increased VM density at lower cost

Every Day, the Amount of Data Created Across the World Is Exploding to New Levels
Every day, the amount of data created across the world is exploding to new levels. Businesses thrive on this data to make critical decisions, gain new insights, and differentiate services. The demand for memory capacity is growing at an insatiable rate, and there is a need to keep larger amounts of data closer to the CPU. The technology that dominates traditional main memory, DRAM, is fast to access but small, expensive, and volatile. Storage is large, cheap, and persistent, but slow to access. There is a huge latency and bandwidth penalty as you jump from RAM-based memory to disk-based storage, and the ever-increasing amount of data and the need to access more of it quickly have further magnified the gap. Intel's breakthrough product, Intel® Optane™ persistent memory, is disrupting the traditional memory-storage hierarchy by creating a new tier that fills the memory-storage gap, providing greater overall performance, efficiency, and affordability.

Intel® Technology Completes the Memory-Storage Hierarchy

Introducing Intel® Optane™ Persistent Memory
The new Intel® Optane™ persistent memory introduces a category that sits between memory and storage and delivers the best of both worlds through the convergence of memory and storage product traits. Intel® Optane™ persistent memory is available in capacities of 128 GiB, 256 GiB, and 512 GiB per module, a much larger alternative to DRAM DIMMs, which currently cap at 128 GiB. With this new, flexible tier of products, designers and developers have access to large-capacity, affordable memory that is flexible (volatile or non-volatile) and can also serve as a high-performance storage tier. They also have the option of application-managed memory, which is vital for optimizing system performance. The low, consistent latency of this unique Intel® technology, combined with its high bandwidth, Quality of Service (QoS), and endurance, means more capacity and more virtual machines (VMs) for cloud and virtualized users, much higher capacity for in-memory databases without prohibitive price tags, super-fast storage, and larger memory pools. Ultimately, Intel can deliver higher system performance with larger memory, up to 3X the performance of NVMe*-based SSDs and with much higher endurance than NAND SSDs for write-intensive workloads.

The persistent memory modules are DDR4 socket compatible and can co-exist with conventional DDR4 DRAM DIMMs on the same platform. The module fits into standard DDR4 DIMM slots on 2nd Generation Intel® Xeon® Scalable processor platforms. Designed with and optimized for the 2nd Generation Intel® Xeon® Scalable processors, a user can install up to one Intel® Optane™ persistent memory module per channel and up to six per socket, providing up to 3 TiB of Intel® Optane™ persistent memory per socket, which means an 8-socket system could access up to 24 TiB of system memory. The module is compatible with 2nd Generation Intel® Xeon® Gold and Platinum processor SKUs.

Intel® Optane™ persistent memory is big, affordable, and persistent, which makes it possible to extract more value from larger data sets than ever before. This new technology can solve issues for customers and launch a revolution in the data center. It is finally an affordable alternative to expensive DRAM that can deliver huge capacity and accommodate demanding workloads and emerging applications like in-memory databases (IMDBs). When deployed, Intel® Optane™ persistent memory can help improve TCO not just through memory savings, but more broadly through reduced software licensing costs, node reduction, power efficiencies, and other operational efficiencies.

Not only can Intel® Optane™ persistent memory bring cost savings, it can spur major infrastructure consolidation, making your servers do more. The increase in memory size from Intel® Optane™ media provides the opportunity to take workloads that have been spread across several nodes, leaving CPUs underutilized, and concentrate and consolidate them on fewer nodes, ultimately reducing the deployments necessary and maximizing CPU utilization. Each node does more, and each CPU can do more.

Intel® Optane™ persistent memory brings an entirely new memory tier with new properties of performance and persistence, and architects and developers are using it as a springboard for innovation: taking advantage of new usages around restart and replication, achieving breakthrough performance by removing system bottlenecks that used to severely constrain workloads, and making use of what could be considered the world's smallest, yet fastest, storage device sitting on a memory bus.

Operational Modes
The Intel® Optane™ persistent memory has two operating modes: Memory Mode and App Direct Mode. With distinct operating modes, customers have the flexibility to take advantage of Intel® Optane™ persistent memory benefits across multiple workloads.

Memory Mode – Memory Mode is great for large memory capacity and does not require application changes, which makes Intel® Optane™ persistent memory easy to adopt. In Memory Mode, the Intel® Optane™ persistent memory extends the amount of volatile memory visible to the operating system: the CPU memory controller uses DRAM as a cache and the Intel® Optane™ persistent memory as addressable main memory. Virtualization can benefit from Memory Mode because the larger memory capacity supports more VMs and more memory per VM at a lower cost compared to DRAM. Workloads that are I/O bound can also benefit, as the larger memory capacity supports larger databases, again at a lower cost compared to DRAM. With increased capacity there is greater VM, container, and application density, which increases the utilization of the 2nd Generation Intel® Xeon® Scalable processors. Data in Intel® Optane™ persistent memory used in Memory Mode is volatile: it is protected with a single encryption key that is discarded on power down, making the data inaccessible.
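
For illustration, provisioning the modules for Memory Mode is typically done with a platform management tool such as ipmctl. The sketch below shows one possible flow; the exact command syntax is an assumption that may vary by tool version and platform, and the goal is applied by the platform firmware at the next reboot.

    ipmctl show -dimm                      # list the installed persistent memory modules
    ipmctl create -goal MemoryMode=100     # request 100% of module capacity as volatile Memory Mode
    ipmctl show -memoryresources           # after reboot, verify the new memory configuration

After the reboot, the operating system simply reports the larger system memory size; no application changes are needed.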

App Direct Mode – In App Direct Mode, software and applications can talk directly to the Intel® Optane™ persistent memory, which reduces complexity in the stack. The operating system sees Intel® Optane™ persistent memory and DRAM as two separate pools of memory. App Direct Mode can also be used through legacy storage APIs, which lets the modules act like a very fast SSD that can even boot an OS. The memory is persistent like storage, byte-addressable like memory, cache coherent (which extends the usage of persistent memory beyond the local node), and delivers consistently low latency while supporting larger datasets. The power of persistent memory adds business resilience to systems with faster restart times, because data is retained even during power cycles. Memory-bound workloads benefit from Intel® Optane™ persistent memory with its large capacity, higher endurance, and greater bandwidth compared to NAND SSDs.
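
As a sketch of how the two App Direct access paths might be set up on Linux (the device names, mount point, and exact options are assumptions that depend on the platform and on the ipmctl/ndctl versions in use):

    ipmctl create -goal PersistentMemoryType=AppDirect   # provision modules for App Direct (applied after reboot)
    ndctl create-namespace --mode=sector                 # legacy storage path: expose a namespace as a block device
    ndctl create-namespace --mode=fsdax                  # direct access path: namespace for a DAX-capable file system
    mkfs.ext4 /dev/pmem0                                 # device name is illustrative
    mount -o dax /dev/pmem0 /mnt/pmem                    # a DAX mount gives applications direct load/store access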

Dual Mode – A subset of App Direct Mode: the modules can be provisioned so that some of the Intel® Optane™ persistent memory capacity operates in Memory Mode and the remainder in App Direct Mode. In Dual Mode, applications can take advantage of high-performance storage without the latency of moving data to and from the I/O bus.
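
A mixed configuration can be requested in a single provisioning goal; the line below is a hedged sketch, and the 50/50 split and exact syntax are assumptions.

    ipmctl create -goal MemoryMode=50 PersistentMemoryType=AppDirect   # half the capacity as volatile memory, the rest as App Direct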

Security
Intel® Optane™ persistent memory has AES-256 hardware encryption, so you can rest easy knowing your data is more secure. In Memory Mode, the Intel® Optane™ persistent memory encryption key is removed at power down and regenerated at each boot, which means the previous data is no longer accessible. In App Direct Mode, data is encrypted using a key stored on the module. Intel® Optane™ persistent memory is locked at power loss, and a passphrase is needed to unlock and access the data. The encryption key is stored in a security metadata region on the module and is only accessible by the Intel® Optane™ persistent memory controller. If the module is repurposed or discarded, a secure cryptographic erase and DIMM overwrite are used to keep data from being accessed.

Extract More Value from Larger Data Sets Than Previously Possible
With the advent of larger persistent memory capacities, larger datasets can live closer to the CPU for faster processing, which means greater insights. Higher capacities of Intel® Optane™ persistent memory create a more affordable solution, accelerating the industry-wide trend toward IMDBs. Delivered on the 2nd Generation Intel® Xeon® Scalable processors, large memory-bound workloads will see a significant performance increase for rapid data processing.

With Intel® Optane™ persistent memory, customers have large-capacity memory to choose from, which means they can support bigger datasets for analysis. An additional benefit is that the CPU can read and write cache lines directly to Intel® Optane™ persistent memory, so the overhead of constructing data into 4 KB blocks to be written to disk is eliminated. Intel® Optane™ persistent memory offers many of these benefits through App Direct Mode with higher reliability, as it can handle more petabytes written than NAND SSDs. Customers can also benefit from the module's native persistence, which provides quicker recovery and less downtime compared to DRAM, all at a much lower cost.

Many mission-critical databases and enterprise applications store large amounts of data in working memory. If a server goes down, whether planned or not, it can take hours to reload the memory array, increasing the downtime; application downtime can be measured in thousands of dollars per minute. Since Intel® Optane™ persistent memory retains data during power cycles, these types of applications can be returned to service orders of magnitude faster. This means enterprises, cloud, and communication service providers can consistently meet their SLAs and avoid expensive system redundancy costs. As an example, SAP HANA* can realize a 13x faster restart time at a 39% cost savings.1 2 3 4 5

Scale Delivery of More Services to More Customers at Compelling Performance
Virtual machines require ever larger amounts of data. Having larger memory capacity near the CPU means that customers can support more virtual machines, and more memory per virtual machine, all at a cost lower than typical DRAM. Before Intel® Optane™ persistent memory, the memory system was constrained and the CPU was underutilized, which severely limited performance. Now Intel® Optane™ persistent memory enables more virtual machines (VMs), or larger VMs, at a lower hardware cost per VM. With Microsoft Windows Server* 2019/Hyper-V, customers can realize 33% more system memory and 36% more VMs per node, all at a 30% lower hardware cost.1 2 6 7

Drive Application Innovation and Explore New Data-intensive Use Cases with This Best-in-Class Product
With Intel® Optane™ persistent memory introducing a new tier of memory, with compelling new characteristics and the ability to have direct load/store access to it, developers are able to drive new innovation and capabilities.

Rapid adoption is easier, and customers are able to take full advantage of Intel® Optane™ persistent memory capabilities, thanks to a growing global ecosystem of ISVs, OSVs, virtualization providers, database and enterprise application vendors, data analytics vendors, open source solutions providers, cloud service providers, and hardware OEMs, along with standards bodies such as the Storage Networking Industry Association (SNIA), ACPI, UEFI, and DMTF.

More Capacity, Go Faster, Save More for SAP HANA*1 2 3 4 5
Pricing Guidance as of March 1, 2019. Intel does not guarantee any costs or cost reduction.
You should consult other information and performance tests to assist you in your purchase decision.

Programming Model
The software interface for using Intel® Optane™ persistent memory was designed in collaboration with dozens of companies to create a unified programming model for persistent memory. The Storage Networking Industry Association (SNIA) formed a technical workgroup that has published a specification of the model. This software interface is independent of any specific persistent memory technology and can be used with Intel® Optane™ persistent memory or any other persistent memory technology.

The model exposes three main capabilities:

  • The management path allows system administrators to configure persistent memory products and check their health.
  • The storage path supports the traditional storage APIs where existing applications and file systems need no change; they simply see the persistent memory as very fast storage.
  • The memory-mapped path exposes persistent memory through a persistent memory-aware file system so that applications have direct load/store access to the persistent memory. This direct access does not use the page cache like traditional file systems and has been named DAX by the operating system vendors.
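
To make the memory-mapped path concrete, here is a minimal C sketch that maps a file on a DAX-mounted persistent memory file system and updates it with ordinary store instructions. The path /mnt/pmem/example is hypothetical, and msync() is used here as a simple way to request persistence; the libraries described below offer more efficient user-space flushing.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* file on a DAX-mounted persistent memory file system (path is illustrative) */
        int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0666);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, 4096) != 0) { perror("ftruncate"); return 1; }

        /* with DAX, loads and stores go directly to persistent memory,
           bypassing the page cache */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello, persistent memory");   /* ordinary CPU stores */
        msync(p, 4096, MS_SYNC);                 /* request that the update be made durable */

        munmap(p, 4096);
        close(fd);
        return 0;
    }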

When an independent software vendor (ISV) decides to fully leverage what persistent memory can do, converting the application to memory-map persistent memory and place data structures in it can be a significant change. Keeping track of persistent memory allocations and making changes to data structures as transactions (to keep them consistent in the face of power failure) is complex programming that hasn't been required for volatile memory and is done differently for block-based storage.

The Persistent Memory Development Kit (PMDK – http://pmem.io) provides libraries meant to make persistent memory programming easier. Software developers only pull in the features they need, keeping their programs lean and fast on persistent memory.

These libraries are fully validated and performance-tuned by Intel. They are open source and product neutral, working well on a variety of persistent memory products. The PMDK contains a collection of open source libraries which build on the SNIA programming model. The PMDK is fully documented and includes code samples, tutorials, and blogs. Language support for the libraries exists in C and C++, with support for Java, Python*, and other languages in progress.
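
As a brief illustration of what these libraries provide, the sketch below uses PMDK's libpmemobj to update a persistent counter inside a transaction, so the update is either fully applied or rolled back after a power failure. The pool path and layout name are assumptions made for the example.

    #include <libpmemobj.h>
    #include <stdint.h>
    #include <stdio.h>

    struct my_root {
        uint64_t counter;   /* persistent counter stored in the pool's root object */
    };

    int main(void) {
        /* create a pool file on a DAX-mounted file system (path and layout are illustrative) */
        PMEMobjpool *pop = pmemobj_create("/mnt/pmem/counter.pool", "example_layout",
                                          PMEMOBJ_MIN_POOL, 0666);
        if (pop == NULL) { perror("pmemobj_create"); return 1; }

        PMEMoid root = pmemobj_root(pop, sizeof(struct my_root));
        struct my_root *rp = pmemobj_direct(root);

        /* the transaction makes the update atomic with respect to power failure */
        TX_BEGIN(pop) {
            pmemobj_tx_add_range(root, 0, sizeof(struct my_root));
            rp->counter += 1;
        } TX_END

        printf("counter = %lu\n", (unsigned long)rp->counter);
        pmemobj_close(pop);
        return 0;
    }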

Turn Your Data from a Burden to an Asset
Intel® Optane™ persistent memory represents a groundbreaking technology innovation. Delivered with the 2nd Generation Intel® Xeon® Scalable processors, this technology will transform critical data workloads, from cloud and databases to in-memory analytics and content delivery networks.

Windows Server* 2019/Hyper-V - Multi-Tenant Virtualization
Workload – Windows Server* 2019/Hyper-V with OLTP Cloud Benchmark1 2 6 7
Pricing guidance as of March 1, 2019. Intel does not guarantee any costs or cost reduction.
You should consult other information and performance tests to assist you in your purchase decision.