MLflow vs. Kubeflow

Architecture And Key Differences

What Is MLflow?

MLflow is a platform for managing the entire machine learning (ML) lifecycle. It is an open source project created by Databricks, the company founded by the original developers of Apache Spark.

MLflow can track experiments, the parameters used, and the results. It also lets you package ML code in a reproducible, reusable format, known as an MLflow Project, that you can share with colleagues or move to production environments.

You can manage MLflow models created with different ML libraries and deploy them to multiple model serving and inference platforms. MLflow is library-agnostic: all of its features are accessible through the CLI and REST API, so you can use it with any ML library and programming language.

MLflow includes a central model repository for model lifecycle management, including model versioning, annotations, and stage transitions.

What Is Kubeflow?

Kubeflow is a cloud native framework for simplifying the adoption of ML in containerized environments on Kubernetes. It began as an internal Google project and later became a public open source project.

You can use this free, open source project to run ML workflows on Kubernetes clusters simply and collaboratively.

Kubeflow seamlessly integrates and scales with Kubernetes. Additionally, Kubeflow can run in any environment Kubernetes runs, including on clouds like Google Cloud, AWS, Azure, and on-premises.

MLflow and Kubeflow Components

MLflow offers the following four components for managing ML workflows:

  • MLflow Tracking—provides a UI and API for logging parameters, metrics, artifacts, and code versions. You can use it when running ML code and later on for visualizing the results. MLflow Tracking works in any environment, including notebooks and standalone scripts. It logs results to a server or to local files and lets you compare multiple runs (see the sketch after this list).
  • MLflow Projects—offer a standard format for packaging your reusable code. Essentially, a project is a directory of code or a Git repository, using a descriptor file or convention to specify how to run the code and its dependencies.
  • MLflow Models—a convention that standardizes how you package ML models, plus tools for deploying them. MLflow saves each model as a directory containing a descriptor file and arbitrary files.
  • MLflow Registry—this centralized model store includes a UI and a set of APIs that enable you to manage your MLflow model’s entire lifecycle collaboratively. It provides model versioning, model lineage, stage transitions, and annotations.
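
As a minimal sketch of the Tracking API, assuming `pip install mlflow` (runs land in a local `./mlruns` directory unless a tracking server is configured), logging a run looks like this; the artifact file name is hypothetical:

```python
import mlflow

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)   # hyperparameter used for this run
    mlflow.log_metric("rmse", 0.87)           # evaluation result
    mlflow.log_artifact("model_summary.txt")  # any local file (hypothetical name)
```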

Learn more in our detailed guide to MLflow.

Here are key components of Kubeflow:

  • Notebooks—let you create and manage interactive Jupyter notebooks, and build notebook containers or pods directly in Kubernetes clusters.
  • TensorFlow model training—Kubeflow provides a custom TensorFlow job operator (TFJob) for configuring and training models on Kubernetes. It also provides bespoke job operators for other frameworks, such as PyTorch and XGBoost, though the maturity of these operators varies greatly.
  • Pipelines—let you build and manage multistep ML workflows that run in Docker containers (see the sketch after this list).
  • Deployment—Kubeflow provides various methods to deploy ML models on Kubernetes via external add-ons such as KServe.
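
As a minimal sketch of how Pipelines are defined, assuming the kfp v2 Python SDK (`pip install kfp`); the component names and logic here are illustrative, not part of Kubeflow itself:

```python
from kfp import compiler, dsl

# Each component runs in its own container when the pipeline executes.
@dsl.component
def preprocess(raw_path: str) -> str:
    return raw_path + ".cleaned"

@dsl.component
def train(data_path: str) -> str:
    return "model trained on " + data_path

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(raw_path: str = "s3://bucket/data.csv"):
    cleaned = preprocess(raw_path=raw_path)   # step 1
    train(data_path=cleaned.output)           # step 2, chained to step 1

# Compile to a YAML spec that can be uploaded via the Kubeflow Pipelines UI.
compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```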

Learn more in our detailed guide to Kubeflow Pipelines.

Kubeflow vs. MLflow: Key Differences

Here are a few key differences between Kubeflow and MLflow.

Approach

Since Kubeflow is a container-based system, all processing is done within the Kubernetes infrastructure. Kubeflow is considered more complex because it handles container orchestration as well as machine learning workflows; at the same time, this approach improves the reproducibility of experiments.

MLflow is a Python program, so you can perform training with any Python-compatible framework. It can be set up on a single server and easily adapted to existing ML models.

Collaborative Environment

Kubeflow enables experiment tracking through its metadata feature. However, advanced technical knowledge is required to set up complete tracking for machine learning experiments.

MLflow has experiment tracking built in. It lets developers work in a local environment while a logging process saves experiment data to a remote archive, which makes it well suited to exploratory data analysis (EDA).
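
As a hedged sketch, pointing local runs at a shared tracking server so teammates can compare results takes two calls; the server URL and experiment name below are placeholders:

```python
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.com:5000")  # shared server (placeholder)
mlflow.set_experiment("team-eda")                          # groups runs for comparison

with mlflow.start_run():
    mlflow.log_metric("val_accuracy", 0.91)
```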

Pipelines and Scale

Kubeflow was originally built to orchestrate parallel and sequential tasks. It provides strong capabilities for use cases where you need to run end-to-end ML pipelines or extensive hyperparameter optimization on cloud computing infrastructure.

MLflow can also be used to set up end-to-end ML pipelines, but because it does not manage the infrastructure and container layer, it requires careful infrastructure planning and offers more limited scalability.
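
As a rough sketch, a multistep workflow can be chained with the Projects API; this assumes an MLproject file in the working directory defining hypothetical `etl` and `train` entry points:

```python
import mlflow

# Run the ETL step, then feed its run ID to the training step.
etl = mlflow.run(uri=".", entry_point="etl",
                 parameters={"input_path": "data.csv"})
mlflow.run(uri=".", entry_point="train",
           parameters={"etl_run_id": etl.run_id})
```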

Model Deployment

Kubeflow provides Kubeflow Pipelines, an independent component focused on model deployment and continuous integration and delivery (CI/CD). You can use Kubeflow Pipelines independently of Kubeflow's other features. It prepares a model for deployment using components and services offered by a Kubernetes cluster, which may require significant development effort and time.

MLflow makes model deployment easier with the concept of a model registry: a central place to share machine learning models and a collaborative space for evolving them until they are deployed and delivering value in the field. The MLflow Model Registry provides a set of APIs and a UI for coordinated management of the entire lifecycle of an MLflow model, including model versioning, model lineage, annotations, and stage transitions.
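
A minimal sketch of these registry operations through the Python client; the model name and version are placeholders, and the model is assumed to have been registered already (for example via `registered_model_name` when logging it):

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()
# Annotate a model version, then move it through a lifecycle stage.
client.update_model_version(name="churn-model", version=1,
                            description="Trained on March data")
client.transition_model_version_stage(name="churn-model", version=1,
                                      stage="Staging")
```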

MLflow can easily promote models to API endpoints in various cloud environments, such as Amazon SageMaker. If you don't want to use a cloud provider's managed endpoint, you can also create your own REST API endpoint.
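
MLflow's CLI can serve a registered model directly (`mlflow models serve -m models:/<name>/<version>`). If you want full control instead, a self-hosted endpoint can be sketched with Flask as below; the model name, version, and JSON input format are assumptions:

```python
import mlflow.pyfunc
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
# Load version 1 of a registered model (placeholder name) from the registry.
model = mlflow.pyfunc.load_model("models:/churn-model/1")

@app.route("/predict", methods=["POST"])
def predict():
    records = request.get_json()          # expects a JSON list of records
    frame = pd.DataFrame(records)
    predictions = model.predict(frame)
    return jsonify(pd.Series(predictions).tolist())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```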

MLflow vs. Kubeflow: How to Choose?

MLflow and Kubeflow each offer unique features and advantages.

When to use MLflow

MLflow provides an MLOps platform powered by an active open-source community. It lets you abstract your ML model so you can easily deploy it into various environments. It is ideal when you need a variety of deployment options and functionality.

You can use MLflow to track, compare, and visualize your experiment metadata and results. The platform lets you package and deploy ML models and create a multistep workflow.

When to use Kubeflow

Kubeflow provides an open-source platform for managing, deploying, and scaling ML models on Kubernetes. It lets you code, track, and run experiment metadata locally or in the cloud. It is ideal when you need to deploy and manage ML models on Kubernetes. 

You can use Kubeflow to define and manage resource quotas across different teams, and to build reproducible pipelines encompassing the entire ML lifecycle, including data gathering, model building, and deployment.

The platform provides a UI that lets you visualize your pipeline and experiment metadata and compare experiment results. It also includes a built-in notebook server service and allows integration with Amazon SageMaker, a fully managed service that eliminates the need to manually manage cloud infrastructure.

Managing Machine Learning Infrastructure with Run:ai

Run:ai simplifies machine learning infrastructure pipelines, helping data scientists accelerate their productivity and improve the quality of their models.

With our MLflow and Kubeflow integration, jobs can be scheduled with the Run:ai scheduler. See our documentation for a detailed walkthrough of how to run jobs using the Run:ai scheduler.

When using MLflow or Kubeflow with Run:ai, you enjoy all the benefits of our platform: 

  • Advanced visibility—create an efficient pipeline of resource sharing by pooling GPU compute resources.
  • No more bottlenecks—you can set up guaranteed quotas of GPU resources, to avoid bottlenecks and optimize billing.
  • A higher level of control—Run:ai enables you to dynamically change resource allocation, ensuring each job gets the resources it needs at any given time.

Learn more about the Run:ai GPU virtualization platform.