Saturday, 20 April 2019

Adding Lustre Storage to the HPC Equation

For organizations that need extreme scalability in high-performance computing systems, Lustre is often the file system of choice — for a lot of good reasons.

When it comes to high-performance computing applications, there is basically no such thing as too much data storage. Everywhere you look, the data sets behind HPC applications are ballooning in size.

A few examples:

◈ AccuWeather, the world’s largest source of weather forecasts and warnings, responds to more than 30 billion data requests daily.
◈ The wave of medical data washing over the global healthcare industry is expected to swell to 2,314 exabytes by 2020.
◈ If you were to print out a map of a human genome, the stack of paper would be 300 feet high, which is about as tall as the Statue of Liberty.

The years ahead will only bring more of the same. IDC forecasts that by 2025, the global datasphere will grow to 163 zettabytes a year (a zettabyte is a trillion gigabytes). That’s 10 times the 16 ZB of data generated in 2016.

For organizations running data-intensive HPC and AI applications, the implications are pretty clear: Application performance will increasingly depend on extremely scalable, high-performance storage architectures that can keep pace with an ever-growing deluge of data. And this is where Lustre storage really shines.

The Lustre edge


Lustre is a parallel file system built for the challenges of high-performance computing. For organizations that require extreme storage scalability without performance degradation, the Lustre file system can be a great solution. It allows storage to be scaled up and down to suit the needs of the application while maintaining the performance required for HPC and other data-intensive workloads.
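
To give a concrete, if simplified, picture of how that scalability shows up for users, the sketch below drives the standard lfs utility from Python to control how files in a directory are striped across Lustre object storage targets (OSTs). The directory path, stripe count and stripe size are placeholder values for illustration, not tuning recommendations.

    # Minimal sketch: configure and inspect Lustre file striping with the
    # standard `lfs` utility. The path and stripe settings are placeholders.
    import subprocess

    LUSTRE_DIR = "/lustre/project/scratch"   # placeholder Lustre directory

    # Stripe new files in this directory across 4 OSTs, 1 MiB per stripe.
    subprocess.run(["lfs", "setstripe", "-c", "4", "-S", "1M", LUSTRE_DIR], check=True)

    # Show the layout that new files in the directory will inherit.
    layout = subprocess.run(["lfs", "getstripe", LUSTRE_DIR],
                            check=True, capture_output=True, text=True)
    print(layout.stdout)

    # Report per-OST capacity and usage for the file system.
    usage = subprocess.run(["lfs", "df", "-h", LUSTRE_DIR],
                           check=True, capture_output=True, text=True)
    print(usage.stdout)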

While Lustre has been widely deployed for HPC-driven research workloads in academic settings, it has been making steady inroads into enterprise environments. Lustre has been deployed in thousands of data centers in industries ranging from healthcare and energy to manufacturing and financial services, and it is consistently recognized as the file system of choice for the world’s fastest computers.

Ready Solutions for HPC Lustre Storage


Dell EMC offers a wide range of solutions and supported products for organizations that want to leverage the Lustre file system. These offerings include Dell EMC Ready Solutions for HPC Lustre Storage, designed for organizations that want to deploy a fully supported, easy-to-use, high-throughput, scale-out and cost-effective parallel file system.

Using an intelligent, extensive and intuitive management interface, the Integrated Manager for Lustre, Dell EMC Ready Solutions simplify deploying, managing and monitoring hardware and file system components. They’re designed to be easy to scale in terms of both capacity and performance, which provides a convenient path for future growth.
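
Integrated Manager for Lustre also exposes the data behind that interface through a REST API. The snippet below is a purely hypothetical sketch of polling such an API for file system capacity from Python; the host name, credentials, endpoint path and field names are assumptions and should be checked against the IML documentation for the deployed release.

    # Hypothetical sketch of polling an Integrated Manager for Lustre
    # REST endpoint for file system capacity. The base URL, credentials,
    # endpoint path and field names are assumptions -- verify them against
    # the IML documentation for your release before relying on this.
    import requests

    IML_BASE = "https://iml.example.org/api"   # placeholder manager host
    AUTH = ("admin", "changeme")               # placeholder credentials

    resp = requests.get(f"{IML_BASE}/filesystem/", auth=AUTH, verify=False)
    resp.raise_for_status()

    for fs in resp.json().get("objects", []):
        print(fs.get("name"), fs.get("bytes_free"), fs.get("bytes_total"))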

The updated Ready Solutions for HPC Lustre Storage include Dell EMC’s refreshed PowerEdge servers, Dell EMC Networking and high-density Dell EMC PowerVault ME storage to deliver improved capacity, density and performance compared to the previous generation of storage. These Ready Solutions also come in more Lustre sizing options, with configurations available in scalable building blocks offering an estimated 4, 8, 10 or 12 TB of usable storage. And for a complete package, the solution can be delivered with full hardware and software support from Dell EMC and Whamcloud.

A customer story


Swinburne University of Technology in Australia is among the organizations benefiting from Dell EMC HPC Storage with the Lustre file system. This combination of technologies is on the job today in the university’s OzSTAR supercomputer.

OzSTAR is built on Dell EMC PowerEdge servers, a high-speed, low-latency Dell EMC H-Series networking fabric, and Dell EMC Ready Solutions for HPC storage with the Lustre ZFS file system. With all this goodness under the hood, the OzSTAR system delivers a peak performance of 1.2 petaflops.

OzSTAR is primarily used by the Swinburne-based Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav) to search for gravitational waves and study the extreme physics of black holes and warped space-time. In a single second, OzSTAR can perform 10,000 calculations for every one of the 100 billion stars in our galaxy, according to OzGrav’s director, Professor Matthew Bailes.
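
That figure squares with the quoted peak: a quick back-of-the-envelope check, sketched below in Python, shows that 10,000 calculations per star across 100 billion stars works out to 10^15 operations per second, or about 1 petaflop, just under the system’s 1.2-petaflop peak.

    # Back-of-the-envelope check of the OzGrav figure against OzSTAR's peak.
    calcs_per_star_per_second = 10_000
    stars_in_galaxy = 100e9                  # 100 billion stars

    ops_per_second = calcs_per_star_per_second * stars_in_galaxy
    print(f"{ops_per_second:.1e} operations per second")      # 1.0e+15
    print(f"about {ops_per_second / 1e15:.1f} petaflops, versus a 1.2-petaflop peak")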

Looking ahead, the university expects the OzSTAR supercomputer to be one of the keys to enabling Swinburne’s Data Science Research Institute to tackle new data science challenges, including those involving machine learning, deep learning, database interrogation and data visualization.

The bottom line


The big data explosion, coupled with advances in compute technology, has made it possible to pursue new discoveries and to build AI algorithms for a growing range of automation use cases. As data sets continue to grow exponentially, a scalable storage solution is vital. When it’s incorporated into the right architecture, the Lustre file system provides an ideal answer to that need.
