Workloads that extract value from massive data sets using accelerated computing (HPC or AI/ML) are highly desirable, but they can suffer from bottlenecks and poor performance. Even if you deploy all-flash storage, DAS and NAS architectures can introduce additional challenges. The Big Data Cluster removes these bottlenecks with a shared pool of NVMe over Fabrics (NVMe-oF) storage that enables jobs to run up to 10x faster, and S3-compliant storage helps you control costs.
AMD EPYC Processors
NVIDIA Networking Ethernet and InfiniBand
425 TB of Capacity per Node
Software-defined storage using Weka.io file system (WekaFS), optimized for large datasets
High-speed, low-latency NVIDIA adapters
Weka.io’s key offering, WekaFS, is an alternative to GPFS (IBM Spectrum Scale) and Lustre. Because legacy storage designs forced customers to deploy different architectures to satisfy the needs of different workloads, WekaFS was built from the ground up to address the diverse requirements of modern workloads in a single system. It lets clients pool all their data and manage it through a global namespace. By dramatically simplifying storage administration, the Big Data Cluster lets users access and manage data at scale and deliver better outcomes.
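To make the global-namespace idea concrete, here is a minimal Python sketch of what accessing one shared data pool through two interfaces can look like: a standard POSIX mount and an S3-compatible endpoint. The mount point, endpoint URL, bucket, and object key are illustrative assumptions, not values from the Big Data Cluster documentation.

import os

import boto3  # third-party AWS SDK; works against S3-compatible endpoints

# POSIX access: a WekaFS-style file system is typically mounted like any
# other file system, so existing tools and jobs read it directly.
POSIX_PATH = "/mnt/weka/datasets/training/sample.parquet"  # assumed mount point

if os.path.exists(POSIX_PATH):
    size_bytes = os.path.getsize(POSIX_PATH)
    print(f"POSIX view: {POSIX_PATH} is {size_bytes} bytes")

# S3-compatible access: the same data exposed through an object endpoint,
# so S3-aware tooling can reuse the shared pool without copying data.
s3 = boto3.client(
    "s3",
    endpoint_url="https://weka-s3.example.internal:9000",  # assumed endpoint
)
response = s3.get_object(Bucket="datasets", Key="training/sample.parquet")
print(f"S3 view: object is {response['ContentLength']} bytes")

In practice, the value of a single namespace is exactly this: jobs pick whichever access path suits them, while administrators manage one pool of data rather than separate silos per workload.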