NeMo™ Megatron, a creation of the NVIDIA Applied Deep Learning Research team, is a GPU-accelerated framework tailored for training and deploying transformer-based Large Language Models (LLMs) such as GPT, T5, and BERT. The goal of NeMo is to help researchers from industry and academia reuse prior work (code and pretrained models) and make it easier to create new conversational AI models.
While the NeMo™ Megatron Launcher provides training scripts for the Slurm workload scheduler, there has been a noticeable gap in guidance and scripts explicitly crafted for Kubernetes (K8s) environments. K8s is an open-source container orchestration platform that has emerged as the de facto standard for cloud-native infrastructure management; it has become the platform of choice for AI companies like OpenAI and Spotify, and for new AI cloud providers like CoreWeave. Its dynamic nature and cloud-native architecture make it a standout choice for orchestrating distributed machine learning workloads.
In this guide, we explain step by step how to train NVIDIA's NeMo models on Kubernetes clusters. Our primary objective is to simplify the process and help AI practitioners get started with their LLM experiments on Kubernetes faster. To assist with this, we've made our launching scripts available in our repository, ensuring that anyone in the AI community can easily begin their LLM experiments on Kubernetes.
Prerequisites
Cluster
We have used a cluster with the following specification:
- A centralized NFS Server
- 4 x NVIDIA DGX A100 Nodes, with a total of 32 x NVIDIA A100 Tensor Core GPUs with 80 GB of GPU memory each
- 8 x 200 Gb HDR NVIDIA InfiniBand connectivity per node
This “how to” guide should work on similar GPU clusters, even without InfiniBand connectivity or a centralized NFS server. See below for details.
NVIDIA GPU Operator + NVIDIA Network Operator
To run the training and make use of the GPUs and the networking stack of the NVIDIA hardware, we need to install the supporting K8s software: the NVIDIA GPU Operator and the NVIDIA Network Operator.
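Both operators are typically installed with Helm from NVIDIA's chart repository. The commands below are a minimal sketch; namespace names are examples, and the Network Operator in particular usually needs chart values tailored to your InfiniBand/RDMA setup, so consult the official operator documentation for your environment:

```bash
# Add NVIDIA's Helm repository (hosts both operator charts).
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && helm repo update

# GPU Operator: driver, container toolkit, device plugin, DCGM, etc.
helm install --wait gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator --create-namespace

# Network Operator: RDMA / InfiniBand support for multi-node training.
helm install --wait network-operator nvidia/network-operator \
  --namespace nvidia-network-operator --create-namespace
```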
Training Operator
The Training Operator is a set of controllers and CRDs, maintained by the Kubeflow project, for running distributed training jobs (such as PyTorchJobs) on K8s.
The commands for installing it can be found here.
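For reference, the standard standalone installation from the Kubeflow manifests looks like the following (the version tag is an example; use the release recommended in the Kubeflow documentation):

```bash
# Install the Kubeflow Training Operator and its CRDs (PyTorchJob, etc.).
kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone?ref=v1.7.0"

# Verify that the CRD and the controller pod are up.
kubectl get crd pytorchjobs.kubeflow.org
kubectl get pods -n kubeflow
```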
Docker credentials
Before we start the training, we need to make sure K8s has the credentials to pull images from the NVIDIA GPU Cloud (NGC) catalog, which hosts the container image we use for training.
To get NGC credentials, register for the NeMo Framework Beta through this link and then go to this link.
After logging in, you should see the following:
ea-bignlp is the organization we just joined; it gives us the ability to pull NeMo images.
After that, you can click on your account settings and then “Setup”:
Then “Generate API Key”:
Once you have the API key, it will serve as our Docker password, and the Docker username will be “$oauthtoken”.
Run the following command, using the username and password we just received from NGC:
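Logging in to NGC's container registry (nvcr.io) typically looks like this:

```bash
# The username is the literal string $oauthtoken; the password is your NGC API key.
docker login nvcr.io
# Username: $oauthtoken
# Password: <your NGC API key>
```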
Next, let's create a Kubernetes secret based on these credentials:
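A minimal sketch of creating the secret from the Docker config written by docker login; the secret name ngc-registry is just an example, so use whatever name your pod specs reference in imagePullSecrets:

```bash
kubectl create secret generic ngc-registry \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson
```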
Note: Point to your relevant config.json file if it is not in the default location.
Data Preprocessing
Now that we have a cluster ready for training, we need to prepare the data with which we will train the model. The dataset we are going to use is called “The Pile”.
"The Pile," created by OpenAI, is a massive, diverse text dataset with over 800 gigabytes of content from various internet sources, including books and websites. This inclusive resource spans multiple languages and subjects, making it invaluable for training large-scale language models like GPT-3.5. Researchers and developers use it for a wide range of natural language processing tasks. The whole dataset is divided into 30 shards of data.
The preprocessing includes 3 parts:
- Download
- Extraction
- Pre-process
We are going to download, extract, and pre-process the data directly on the NFS server so that the data is available to all the Kubernetes nodes. If you're not working with a centralized file system, you can alternatively copy the data to local disk storage attached to each node.
The first shard of the dataset can be downloaded from this link.
Note: You need to register with Kaggle in order to download the file.
After downloading the dataset we need to extract it:
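For example, assuming the downloaded archive is named pile-shard-00.zip and the shared data directory is /mnt/nfs/pile (both names are placeholders for your actual paths):

```bash
# Extract the shard into the shared data directory on the NFS server.
unzip pile-shard-00.zip -d /mnt/nfs/pile
```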
We should now see a file named 00.jsonl, extracted from the zip file.
To preprocess the data, we run a Docker container that mounts the extracted data and opens a terminal inside the container:
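A sketch of such an invocation, assuming the shard lives under /mnt/nfs/pile and using a placeholder for the NeMo Framework training image you pulled from NGC:

```bash
# Replace the image reference with the NeMo Framework training image from the
# ea-bignlp org on NGC, and /mnt/nfs/pile with your actual data directory.
docker run --rm -it \
  -v /mnt/nfs/pile:/data \
  nvcr.io/ea-bignlp/<nemo-framework-training-image>:<tag> \
  bash
```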
Through that terminal we now prepare the environment and launch the command that initiates the preprocessing step, which may take a few minutes:
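The preprocessing uses the Megatron-style preprocessing script shipped inside the NeMo container. The exact script path and flag names can vary between releases, so treat the following as a sketch and check the script's --help output; the tokenizer vocab/merges files and the output prefix are assumptions for illustration:

```bash
# Inside the container (data mounted at /data).
python /opt/NeMo/scripts/nlp_language_modeling/preprocess_data_for_megatron.py \
    --input /data/00.jsonl \
    --output-prefix /data/my-gpt3_00 \
    --tokenizer-library megatron \
    --tokenizer-type GPT2BPETokenizer \
    --vocab-file /data/gpt2-vocab.json \
    --merge-file /data/gpt2-merges.txt \
    --dataset-impl mmap \
    --append-eod \
    --workers 16
```

This step writes a pair of .bin / .idx files under the chosen output prefix, which is what the training job consumes.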
Note: Make sure the data is mounted / copied to the exact same directory location on every node, as the same pod template is used for the pods running on all nodes.
Training
Now that we have the cluster ready and the data preprocessed, we can move on to the training step.
We are going to run the training job as a PyTorchJob using the K8s Training Operator, and launch it with the Megatron K8s launcher, which you can find here.
First, let’s clone the repository:
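Something like the following, with <launcher-repo-url> standing in for the repository linked above:

```bash
git clone <launcher-repo-url>
cd <cloned-repo-directory>
```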
Next, we prepare the K8s YAML files by running the command below. Adjust the number of workers to match the number of GPUs available in your cluster; in our case, we had 32 GPUs, which corresponds to 32 workers.
Apply the files to K8s to launch the training run:
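With kubectl, that boils down to applying the generated manifests and watching the PyTorchJob come up (the file and pod names below are placeholders):

```bash
# Launch the PyTorchJob from the manifests generated in the previous step.
kubectl apply -f <generated-manifests-directory>/

# Watch the worker pods start and follow the training logs.
kubectl get pytorchjobs
kubectl get pods -w
kubectl logs -f <worker-pod-name>
```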
And now wait for the model to be trained! :-)
Conclusion
In this guide, we walked through the process of training NeMo LLMs on a Kubernetes GPU cluster. We covered everything from setting up the necessary infrastructure, including GPU and network support, to data preparation and the actual training run. Our goal has been to simplify the complex journey of training large language models and make it more accessible to AI practitioners. By openly sharing our tools and launcher scripts in our repository, we aim to accelerate the adoption of Kubernetes by AI practitioners for developing Generative AI applications.