Are you managing compute resources in a multi-tenant environment? Are some systems sitting idle while others are unable to keep up with user demand? Have operational costs in your cloud environment become unsustainable? If so, consider the benefits of our CDI Cluster.
This Linux-based reference architecture provides a strong foundation for disaggregated environments, using industry-leading technologies and an optimized infrastructure design to ensure performance and flexibility. By leveraging PCIe-based fabrics and an innovative software layer, the CDI Cluster delivers cloud-like flexibility with bare-metal performance, drastically improving ROI.
Also, by disaggregating resources from their physical configuration, the CDI Cluster improves the longevity of the underlying hardware investment. Because resources can be reprovisioned entirely in software, you can redeploy them in configurations optimized for a variety of workloads.
Composable disaggregated infrastructure (CDI) leverages low-latency, PCIe-based interconnects to enable dynamically provisioned systems. This lets system administrators pool resources like CPUs, accelerators (GPUs, FPGAs, ASICs), memory, or storage without physical reconfiguration. Similarly, resources can be expanded, reduced, or refreshed on the fly.
By disassociating components from their physical location in a server within a cluster, unique hardware platforms can be spun up through software alone. This means users can optimally leverage available resources instead of relying on whatever node architecture is available.
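The pool-and-compose workflow described above can be sketched in a few lines of Python. This is a hypothetical illustration only, not the API of any actual CDI management software; the `Resource`, `FabricPool`, and `compose_node` names are invented for this example:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    """A fabric-attached device in the pool (hypothetical model)."""
    kind: str       # e.g. "cpu", "gpu", "nvme"
    ident: str
    in_use: bool = False

class FabricPool:
    """Toy model of a composable resource pool."""
    def __init__(self, resources):
        self.resources = list(resources)

    def compose_node(self, request):
        """Bind free devices matching `request` (kind -> count) into a logical node."""
        node = []
        for kind, count in request.items():
            free = [r for r in self.resources if r.kind == kind and not r.in_use]
            if len(free) < count:
                raise RuntimeError(f"not enough free {kind} devices")
            for r in free[:count]:
                r.in_use = True   # device now belongs to the composed node
                node.append(r)
        return node

    def release(self, node):
        """Return a node's devices to the pool for recomposition."""
        for r in node:
            r.in_use = False

# Compose a 2-GPU node from a 4-GPU pool, then release it for reuse.
pool = FabricPool([Resource("gpu", f"gpu{i}") for i in range(4)])
node = pool.compose_node({"gpu": 2})
pool.release(node)
```

The key point the sketch captures is that composition and release are pure software operations on a shared inventory; no device ever moves between physical enclosures.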
The CDI Cluster lets IT managers and system administrators change the way they think about resource procurement. Because the hardware is disaggregated, a CDI cluster can grow in whatever way your team needs, without unnecessary extra costs. For instance, an existing CPU-only CDI Cluster can be augmented with GPU acceleration using expansion chassis for PCIe-based GPUs. These chassis, like JBODs, run without additional CPUs or storage devices.
Since CDI technology does not virtualize systems, there is no performance loss on the cluster when compared to a similarly equipped traditional HPC or AI cluster. Meanwhile, the software-managed, dynamically composable nodes maintain the ‘on-demand’ nature of the cloud many users have grown accustomed to.
3rd Gen AMD EPYC™, the world’s highest-performing x86 server CPU family
NVIDIA® A10 GPUs
2x PCIe 4.0 x16 and 1x NVMe/SATA M.2
1x 200G HDR NVIDIA InfiniBand switch
Up to 2 hot-pluggable 10G NVIDIA Spectrum SN4000 Open Ethernet Switches
NVIDIA 10GBASE-T Management Switch
8 DIMMs; up to 2TB 3DS ECC DDR4-3200MHz LRDIMM
GigaIO FabreX CLI/third-party or Liqid Command Center composable disaggregated resource management software
GigaIO FabreX TOR Switch or Liqid Grid 48-Port Gen 4
Liqid Composable Infrastructure enables users to build a living data center architecture that adapts to meet their business needs and scales as required. Leveraging cutting-edge NVMe-over-fabrics networking technology along with Liqid Command Center resource orchestration software, Liqid users can disassociate resources from their physical server configuration, creating pooled resources that can be composed into new configurations on the fly. This flexibility is paired with powerful improvements in performance, optimization, and efficiency.
GigaIO delivers an enterprise-class, open-standards composable infrastructure solution. GigaIO FabreX breaks the constraints of old static architectures, opening new configuration possibilities with composable disaggregated infrastructure to maximize utilization of all the elements within your racks.