Company’s platform joins an exclusive set of enterprise-grade software solutions certified for use on NVIDIA DGX systems
Tel Aviv, March 25, 2021 – Run:AI, a company solving compute resource management for AI workloads at scale, announced today that it has joined the NVIDIA DGX-Ready Software partner program. The DGX-Ready Software program features enterprise-grade software solutions that are fully tested and certified for use on clusters of NVIDIA DGX™ systems, simplifying the deployment, management, and scaling of AI infrastructure.
Run:AI’s Kubernetes-based software platform for orchestration of containerized AI workloads enables GPU clusters to be utilized dynamically for different deep learning workloads – from building AI models, to training, to inference. GPU clusters can be dedicated to building and training only, to inference only, or to mixed workloads that combine building, training, and inference simultaneously. With Run:AI, jobs at any stage automatically get access to the compute power they need. The Run:AI scheduler queues jobs and executes them according to priority: important jobs can preempt others based on fairness policies, and jobs can exceed their predefined quotas when idle resources are available.
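To illustrate the idea of quota-aware, preemptive scheduling described above, the following is a minimal sketch in Python. It is not Run:AI’s implementation or API; the class names, fields, and scheduling policy shown here are hypothetical and simplified, assuming a single homogeneous pool of GPUs and per-team quotas.

```python
# Illustrative sketch only: quota-aware scheduling with preemption of
# over-quota jobs. Names (Job, Cluster, schedule) are hypothetical.
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Job:
    name: str
    team: str
    gpus: int                 # GPUs requested
    priority: int             # higher value = more important
    over_quota: bool = False  # set when running beyond the team's quota

@dataclass
class Cluster:
    total_gpus: int
    quotas: Dict[str, int]                      # team -> guaranteed GPUs
    running: List[Job] = field(default_factory=list)

    def used(self, team: str = None) -> int:
        return sum(j.gpus for j in self.running
                   if team is None or j.team == team)

    def idle(self) -> int:
        return self.total_gpus - self.used()

    def schedule(self, pending: List[Job]) -> None:
        # Serve the queue in priority order.
        for job in sorted(pending, key=lambda j: -j.priority):
            within_quota = (self.used(job.team) + job.gpus
                            <= self.quotas[job.team])
            if job.gpus <= self.idle():
                # Enough idle GPUs: run now, possibly beyond quota.
                job.over_quota = not within_quota
                self.running.append(job)
            elif within_quota:
                # The job is entitled to its quota: reclaim GPUs by
                # preempting over-quota jobs, lowest priority first.
                victims = sorted((j for j in self.running if j.over_quota),
                                 key=lambda j: j.priority)
                while victims and job.gpus > self.idle():
                    self.running.remove(victims.pop(0))
                if job.gpus <= self.idle():
                    self.running.append(job)
            # Otherwise the job stays queued until resources free up.
```

In this simplified model, a team’s jobs may borrow idle GPUs beyond their quota, but those over-quota jobs are the first to be preempted when another team’s within-quota job needs the capacity back.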
“All Run:AI customers are using NVIDIA GPUs, including DGX systems – the world’s most powerful systems for AI computing,” said Omri Geller, CEO and co-founder of Run:AI. “As customers scale their infrastructure, orchestration of AI – for things like multi-node distributed training – becomes more complex. Becoming an NVIDIA DGX-Ready Software partner enables Run:AI to offer users of NVIDIA DGX systems a certified AI orchestration platform that greatly simplifies management of GPUs, built specifically for ML and DL.”
“As AI adoption increases, so does the need to coordinate access to AI infrastructure and maximize resource utilization,” said John Barco, senior director of product management at NVIDIA. “NVIDIA DGX-Ready Software partners like Run:AI make AI infrastructure more flexible and easier to manage, streamlining AI development workflows.”
More information on how Run:AI works with NVIDIA DGX systems can be found here.
Run:AI is excited to participate in the upcoming NVIDIA GTC (GPU Technology Conference), taking place April 12-16, with the following sessions:
S32851: Cluster Orchestration for AI Productivity by Omri Geller, CEO and co-founder, Run:AI
S33287: From 20% to 80% GPU Utilization – best practices for maximizing GPU utilization, presented by Scan Computers CEO Elan Raja with Run:AI CTO and co-founder Dr. Ronen Dar as a guest speaker. Scan Computers International Ltd, an NVIDIA Elite Solution Provider, is a UK-based market leader in developing deep learning and AI services and solutions for business verticals including healthcare, financial services, retail, education, AEC, and manufacturing.
Registration for GTC is free and can be completed here: https://www.nvidia.com/gtc/.
About Run:AI
Run:AI is a cloud-native compute management software platform tailored to the unique needs of AI clusters, for managing, orchestrating, and dramatically accelerating AI workloads. By centralizing and virtualizing GPU compute resources, Run:AI software decouples data science workloads from the underlying hardware, enabling more efficient GPU utilization, dramatically faster model training, and improved inference throughput and latency.