Partnership provides ML practitioners with a complete solution for the full ML lifecycle, from MLOps and experiment tracking to resource management at scale
San Francisco, June 27, 2022 - Today, Weights & Biases (W&B), the leading developer-first MLOps platform, and Run:ai, the leader in compute orchestration for AI workloads, announced a partnership. The partnership will focus on developing integrations between the two platforms to provide ML developers and MLOps platform owners with a seamless experience for MLOps and GPU orchestration for machine learning and deep learning workloads.
“We are excited to partner with Run:ai to provide data scientists and ML practitioners with the best tools for their ML development workflow,” said Seann Gardiner, VP of Business Development at Weights & Biases. “The combination of the developer tools provided by Weights & Biases, the dynamic allocation and orchestration of compute resources from Run:ai, and the optimized hardware and software provided by NVIDIA gives ML teams everything they need to accelerate the development and deployment of AI in the enterprise.”
The partnership will streamline AI projects for both AI researchers and the MLOps and IT teams managing AI infrastructure. ML practitioners will be able to leverage the MLOps capabilities provided by Weights & Biases - such as experiment and artifact tracking, hyperparameter optimization, and collaborative reports - and gain access to NVIDIA accelerated computing resources orchestrated by Run:ai’s Atlas Platform, all in one experience. ML developers will be able to monitor NVIDIA GPU utilization within the W&B dashboard, then improve utilization with Run:ai’s scheduling and orchestration capabilities. MLOps platform owners will be able to optimize GPU resource scheduling and consumption for ML practitioners, and provide a single ML system of record that keeps an accurate history of all ML experiments, model versions, and dataset versions.
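As a rough illustration of the W&B side of this workflow, the minimal Python sketch below logs training metrics with the wandb client; the project name, config values, and training loop are assumptions for the example, and GPU scheduling and orchestration (for instance via Run:ai) would be handled outside the script by the cluster.

```python
# Minimal sketch of W&B experiment tracking inside a GPU training job.
# The project name, config values, and loop below are illustrative only;
# GPU orchestration (e.g. Run:ai scheduling) is configured outside this script.
import random
import wandb

run = wandb.init(project="example-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config["epochs"]):
    # Stand-in for a real training step on an NVIDIA GPU. While a run is
    # active, W&B also records system metrics (including GPU utilization)
    # in the background, which surface in the W&B dashboard.
    loss = 1.0 / (epoch + 1) + random.random() * 0.01
    wandb.log({"epoch": epoch, "loss": loss})

run.finish()
```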
“Run:ai and Weights & Biases together give data scientists, and the IT teams that support them, a complete solution for the full ML lifecycle, from building models, to training and inference in production,” said Omri Geller, CEO and co-founder of Run:ai. “Companies building their AI infrastructure and tooling from scratch can use NVIDIA GPUs, Run:ai, and W&B and have everything they need to manage AI initiatives at scale.”
The Weights & Biases and Run:ai partnership will particularly benefit ML researchers and enterprises leveraging NVIDIA GPUs. Both Run:ai and W&B have been validated as NVIDIA AI Accelerated and NVIDIA DGX-Ready software partners. The partnership will build on integrations between W&B and Run:ai with NVIDIA AI Enterprise software to help enterprises maximize their investment in NVIDIA-accelerated systems for ML and AI.
“Putting AI into production requires enterprises to manage a broad range of processes, including data governance, experiment tracking, workload management and compute orchestration,” said Scott McClellan, Senior Director of Data Science and MLOps at NVIDIA. “Pairing Weights & Biases and Run:ai MLOps software with NVIDIA accelerated systems and NVIDIA AI Enterprise software enables enterprises to effectively deploy intelligent applications that solve real-world business challenges.”
About Weights & Biases
Weights & Biases is the leading developer-first MLOps platform to build better models faster. Used by top researchers, including teams at OpenAI, Lyft, Pfizer, Toyota, GitHub, and MILA, W&B is part of the new standard of best practices for machine learning. Learn more at https://wandb.ai/site
About Run:ai
Run:ai’s Atlas Platform brings cloud-like simplicity to AI resource management - providing researchers with on-demand access to pooled resources for any AI workload. An innovative cloud-native operating system - which includes a workload-aware scheduler and an abstraction layer - helps IT simplify AI implementation, increase team productivity, and gain full utilization of expensive GPUs. Using Run:ai, companies streamline the development, management, and scaling of AI applications across any infrastructure, including on-premises, edge, and cloud. Learn more at www.run.ai