A Reliable, Cost-Effective Approach to HPC Storage, Even with Multiple Concurrent Users

HPC clients need to optimize their environments for specific workloads and ensure fast time to results. That requires a reliable, high-speed, and high-density approach to data. But as HPC workloads scale, cost-effectively storing, managing, and processing massive datasets becomes a challenge, especially when your data is growing quickly. It becomes even more of a challenge when multiple concurrent users access that ever-expanding data.

The HPC Storage Cluster solves this problem by leveraging software-defined storage from BeeGFS. As a result, you get highly available and dense capacity, robust data protection features, and support for multiple concurrent users, all without sacrificing the high storage speeds necessary for HPC performance. Then, when you’re ready, you can scale as needed.

HPC Storage Cluster

Ideal Use Cases

  • HPC Storage
  • High-Density Data Storage
  • Multi-user Environments
  • Brownfield (existing environment) Deployments

Storage for Any HPC Workload

  • Distributed, parallel BeeGFS storage
  • Open-source file system
  • High reliability and security (via ZFS)
  • High density (multi-PB)
  • High speed, maximizing bandwidth regardless of scale

Relevant Industries

Education & Research
Aerospace & Defense
Oil & Gas
Life Sciences
Engineering & Manufacturing

Inside the HPC Storage Cluster

Compute

Intel Xeon Scalable Processors

Storage

10.8 PB Raw Across Object and File Storage Systems

Networking

NVIDIA Networking 1GbE Management Network

Why BeeGFS?

BeeGFS is a lightweight yet powerful software-defined distributed file system that scales out without sacrificing performance. BeeGFS is an easy-to-deploy alternative to other parallel file systems such as IBM Spectrum Scale or Lustre.

The BeeGFS architecture lets users handle a wide range of I/O profiles without performance restrictions and provides the scalability and flexibility needed for the most demanding HPC applications.
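
To illustrate that flexibility, striping can be tuned per directory to match a given I/O profile. The sketch below is a minimal example, assuming the standard beegfs-ctl tool is installed, that /mnt/beegfs is a mounted BeeGFS client, and that the directory name is a placeholder; exact flags can vary between BeeGFS versions.

    import subprocess

    # Placeholder directory on a mounted BeeGFS client; adjust to your environment.
    BEEGFS_DIR = "/mnt/beegfs/scratch/large-sequential"

    # Stripe new files in this directory across 4 storage targets with 1 MiB chunks,
    # a pattern that generally suits large sequential I/O. Small-file workloads
    # might instead use fewer targets and a smaller chunk size.
    subprocess.run(
        ["beegfs-ctl", "--setpattern", "--numtargets=4", "--chunksize=1m", BEEGFS_DIR],
        check=True,
    )

    # Confirm the stripe pattern that new files in this directory will inherit.
    subprocess.run(["beegfs-ctl", "--getentryinfo", BEEGFS_DIR], check=True)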

BeeGFS also has native RDMA support. Nodes can serve multiple network connections (InfiniBand, Omni-Path, RoCE, and TCP/IP) at the same time and automatically switch to a redundant connection path to mitigate hardware failures.
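
As a hedged sketch of how that connection preference is commonly configured, the snippet below writes an interface list for the BeeGFS client. The connInterfacesFile setting in /etc/beegfs/beegfs-client.conf is a standard BeeGFS client option, but the interface names and file path here are assumptions for illustration.

    from pathlib import Path

    # Assumed interface names: ib0 (InfiniBand/RDMA) is preferred, with eth0
    # (TCP/IP) as the fallback path if the RDMA link becomes unavailable.
    interfaces = ["ib0", "eth0"]

    # The BeeGFS client tries these interfaces in the order listed; point the
    # connInterfacesFile option in /etc/beegfs/beegfs-client.conf at this file.
    Path("/etc/beegfs/connInterfacesFile").write_text("\n".join(interfaces) + "\n")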

No specific enterprise Linux distribution or other special environment is required to run BeeGFS. It uses existing partitions, formatted with any standard Linux file system. That means an HPC Storage Cluster can plug into almost any environment and support any HPC use case.
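
For example, a storage target can be registered on an existing partition formatted with a standard Linux file system such as XFS or ZFS. This is a minimal sketch assuming the beegfs-setup-storage helper shipped with the BeeGFS storage service; the path, IDs, and management hostname are placeholders.

    import subprocess

    # Placeholder values for illustration; substitute your own.
    TARGET_PATH = "/data/beegfs/storage01"  # directory on an existing XFS/ZFS/ext4 partition
    STORAGE_ID = "2"                        # numeric ID of this storage service
    TARGET_ID = "201"                       # numeric ID of this storage target
    MGMTD_HOST = "beegfs-mgmt01"            # host running the BeeGFS management service

    # Register the directory as a BeeGFS storage target; the underlying file
    # system does not need to be reformatted.
    subprocess.run(
        ["/opt/beegfs/sbin/beegfs-setup-storage",
         "-p", TARGET_PATH, "-s", STORAGE_ID, "-i", TARGET_ID, "-m", MGMTD_HOST],
        check=True,
    )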