Clients need to get results as fast as possible. That’s why Source Code created a series of reference architectures for specific types of workloads.
Each one is the result of hours of engineering, testing, and optimization. We save you time and effort by customizing the design to meet your specific workload and organizational needs, rather than redoing the basic elements with each new engagement.
Each reflects our past work designing clusters that meet the unique demands high-performance workloads place on hardware, and each is a great starting point for your customized cluster.
Learn more about our cluster reference architectures:
Looking for a fast, powerful system designed from the ground up to process large AI datasets? Consider an AI Cluster.
This custom-engineered, Linux-based cluster includes best-of-breed technology (including the NVIDIA® HGX™ H100 and AMD EPYC™), configured for fast deployment and powerful results. Best of all? Our AI Cluster can give you GPU-accelerated computing that scales to any size and still provides stronger ROI compared to the equally powerful but more costly NVIDIA® DGX™.
HPC clients need to optimize their HPC environments for specific workloads and ensure fast time-to-results. This requires a reliable, high-speed, and high-density approach to data. But, as HPC workloads scale, cost-effectively storing, managing, and processing massive datasets becomes a challenge, especially if your data is growing quickly. It can be even more of a challenge if you have multiple concurrent users accessing that ever-expanding data.
Are you managing compute resources in a multi-tenant environment? Do you have some systems going unused while another type of system is always unable to meet user demand? Have you found your cloud environment to have unsustainably high operational costs? If so, consider the benefits of our CDI Cluster, which leverages composable disaggregated infrastructure to allow for dynamic reconfiguration and provisioning of cluster resources.
Workloads that extract value from massive datasets with accelerated computing (HPC or AI/ML), while highly desirable, can suffer from computing bottlenecks and poor performance. And even if you deploy all flash, using DAS and NAS can mean additional challenges. The Big Data Cluster removes bottlenecks via a shared pool of NVMe over Fabrics (NVMe-oF) storage that enables jobs to run up to 10x faster. And S3-compliant storage allows you to control costs.
Cybersecurity and enterprise risk management are critical challenges for modern IT environments. Centralized, location-based data storage, as in HTTP (where data is found and accessed based on which device it lives on), has inherent security risks, performance issues, and other flaws.
HPC workloads like computational fluid dynamics (CFD) are proving unsuitable for public cloud computing. The pay-per-usage model used in the cloud leads to high operational costs that only grow as your cloud commitment deepens. For CFD and similar workloads, it can be far more effective to build a balanced, efficient on-premises cluster to run your jobs.
Many cluster deployments start with a proof-of-concept (POC) cluster. Other organizations need a smaller deployment (~50TB) they intend to scale to multiple petabytes, even though the design considerations of small clusters and large ones often conflict. Or maybe you need a small, scalable cluster for something else entirely. Thanks to Ceph software-defined storage, you can add the Multi-Format Storage Cluster to datacenters that support high-growth block storage, object stores, and even file-storage data lakes. Its modular design and enterprise-grade, virtualized storage make scaling easy and cost-effective while still giving you the performance you need.
Fast time-to-results is critical in HPC. But designing and procuring a balanced, open-source HPC system that delivers high performance and strong ROI can be challenging and slow. The turn-key HPC Cluster eliminates design and procurement issues for faster time-to-results. Its combination of well-balanced, tried-and-tested hardware and software components also gives you high performance along with easy scalability and management, for great ROI.