
Run:ai Certified to Run NVIDIA AI Enterprise Software Suite

by Run:ai Team – September 20, 2022

TEL AVIV, Israel, Sept. 20, 2022 /PRNewswire/ -- Run:ai, the leader in compute orchestration for AI workloads, today announced that its Atlas Platform is certified to run NVIDIA AI Enterprise, an end-to-end, cloud-native suite of AI and data analytics software that is optimized to enable any organization to use AI.

"The certification of Run:AI Atlas for NVIDIA AI Enterprise will help data scientists run their AI workloads most efficiently," said Omri Geller, CEO and co-founder of Run:ai. "Our mission is to speed up AI and get more models into production, and NVIDIA has been working closely with us to help achieve that goal."

With many companies now operating advanced machine learning technology and running bigger models on more hardware, demand for AI computing chips continues to grow. GPUs are indispensable for running AI applications, and companies are turning to software to reap the most benefit from their AI infrastructure and get models to market faster.

The Run:ai Atlas Platform uses a smart Kubernetes scheduler and software-based fractional GPU technology to provide AI practitioners with seamless access to multiple GPUs, multiple GPU nodes, or fractions of a single GPU. This enables teams to match the amount of computing power to the needs of each AI workload, so they can get more done on the same hardware. With these capabilities, Run:ai's Atlas Platform lets enterprises maximize the efficiency of their infrastructure, avoiding scenarios where GPUs sit idle or use only a small fraction of their capacity.
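For illustration only (not part of the original announcement), the minimal sketch below shows how a practitioner might request a fraction of a GPU on a Kubernetes cluster that uses a fractional-GPU-aware scheduler, via the official Kubernetes Python client. The `gpu-fraction` annotation key, the `runai-scheduler` scheduler name, the `team-a` namespace, and the container image are assumptions made for the example, not details stated in this release.

```python
# Illustrative sketch: submit a pod that asks for half a GPU on a Kubernetes
# cluster managed by a fractional-GPU-aware scheduler.
# NOTE: the "gpu-fraction" annotation, the "runai-scheduler" name, and the
# "team-a" namespace are assumptions for this example.
from kubernetes import client, config

def submit_fractional_gpu_pod():
    config.load_kube_config()  # use the current kubeconfig context

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="notebook-half-gpu",
            annotations={"gpu-fraction": "0.5"},  # request 50% of one GPU (assumed key)
        ),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # delegate placement to the assumed scheduler
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="notebook",
                    image="nvcr.io/nvidia/pytorch:22.08-py3",
                    command=["python", "-c", "import torch; print(torch.cuda.is_available())"],
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)

if __name__ == "__main__":
    submit_fractional_gpu_pod()
```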

"Enterprises across industries are turning to AI to power the breakthroughs that will help improve customer service, boost sales and optimize operations," said Justin Boitano, vice president of enterprise and edge computing at NVIDIA. "Run:ai's certification for NVIDIA AI Enterprise provides customers with an integrated, cloud-native platform for deploying AI workflows with MLOps management capabilities."

Run:ai creates fractional GPUs as virtual GPUs carved out of a physical GPU's available framebuffer memory and compute capacity. Containers can each be assigned one of these fractional GPUs, allowing different workloads to run in parallel on the same physical GPU. Run:ai runs on VMware vSphere and on bare-metal servers, and supports a range of Kubernetes distributions.
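To make the sharing model concrete, here is a purely conceptual sketch, not Run:ai's implementation, of the packing idea behind fractional GPUs: each workload declares how much GPU memory it needs, and an allocator places several small workloads onto the same physical device as long as their combined requests fit within its framebuffer memory.

```python
# Conceptual sketch (not Run:ai's algorithm): first-fit packing of fractional
# GPU memory requests onto physical GPUs, so that several workloads can share
# one device when each needs only part of its memory.
from dataclasses import dataclass, field

@dataclass
class GPU:
    gpu_id: int
    total_mem_gib: float
    allocations: dict = field(default_factory=dict)  # workload name -> GiB

    @property
    def free_mem_gib(self) -> float:
        return self.total_mem_gib - sum(self.allocations.values())

def place(workloads: dict, gpus: list) -> dict:
    """Assign each workload's memory request to the first GPU with enough free memory."""
    placement = {}
    for name, mem_gib in workloads.items():
        for gpu in gpus:
            if gpu.free_mem_gib >= mem_gib:
                gpu.allocations[name] = mem_gib
                placement[name] = gpu.gpu_id
                break
        else:
            placement[name] = None  # no GPU has enough free memory
    return placement

if __name__ == "__main__":
    gpus = [GPU(0, 40.0), GPU(1, 40.0)]
    # Three jobs that each need only a slice of a 40 GiB GPU (hypothetical sizes).
    jobs = {"bert-serving": 10.0, "resnet-serving": 8.0, "llm-finetune": 30.0}
    print(place(jobs, gpus))
    # -> {'bert-serving': 0, 'resnet-serving': 0, 'llm-finetune': 1}
```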

This certification is the latest in a series of Run:ai collaborations with NVIDIA. In March, Run:ai completed a proof of concept that enabled multi-cloud GPU flexibility for companies using NVIDIA GPUs in the cloud. The company then fully integrated NVIDIA Triton Inference Server. And in June, Run:ai worked with Weights & Biases and NVIDIA to provide customers with access to NVIDIA-accelerated computing resources orchestrated by Run:ai's Atlas Platform.

About Run:ai

Run:ai's Atlas Platform brings cloud-like simplicity to AI resource management, providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system, which includes a workload-aware scheduler and an abstraction layer, helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline development, management, and scaling of AI applications across any infrastructure, including on-premises, edge and cloud. Learn more at www.run.ai.
