Interview

Run:ai & MLOps Community on Resource Management & the Future of AI Accelerators

by
Run:ai Team
–
May 10, 2022

Ronen Dar, CTO of Run:ai, and Gijsbert Janssen van Doorn, Director of Technical Product Marketing, joined the MLOps Community for two live video conversations in May 2022 on resource management and getting the most out of GPUs. Both sessions are now available to watch on demand.

In the MLOps Community's 99th regular Meetup, we discuss "The Role of Resource Management in MLOps." MLOps practitioners are focused on deploying and running models in production, so resource management can sometimes be left as a problem for another day. But as AI/ML initiatives grow, ignoring resource management challenges can lead to researchers fighting for resources, time-consuming manual workload rescheduling, and spiraling costs associated with ML inference. In this talk, we explore the role resource management plays in MLOps, what to strive for, and how to get buy-in from IT. Watch the full recording →

In the MLOps Community's Coffee Sessions #99, we discuss "Getting the Most Out of Your AI Infrastructure." Learn why Run:ai chose to build a cloud-native AI orchestration platform for containerized workloads, which challenges make Run:ai a good fit for your organization, and what the future holds for AI accelerators and GPUs. Watch the full recording →