Activate NVIDIA Multi-Instance GPU (MIG)

10 December, 2025

Joshua Hartmann
Systems Engineer

Joshua successfully completed his apprenticeship as an IT specialist for system integration (Fachinformatiker für Systemintegration) at NETWAYS Web Services in the summer of 2023. Today, he is an important part of the team that dedicates itself to customer support and the continuous development of the SaaS apps. Besides his musical talent at the piano, Joshua has a passion for winter sports and also enjoys gaming. Most of all, though, he loves spending time with his better half, who is his greatest joy.



In response to the growing demand for flexible yet capable computing resources, NVIDIA has provided a powerful tool in the form of Multi-Instance GPU (MIG) technology, which allows a single physical GPU to be partitioned into multiple independent instances.
Each instance has its own compute, memory and bandwidth reserves, so different workloads – from AI inference and database acceleration to classic HPC tasks – can run simultaneously without interfering with each other. To take advantage of this, you first need to activate NVIDIA MIG.


In this article, we will show you step-by-step how to enable, configure and manage MIG on supported NVIDIA GPUs. We’ll cover the necessary driver and software requirements, explain how to use nvidia-smi and the NVIDIA GPU Operator, and give practical tips on finding the optimal resource layout for your specific use cases.
Whether you’re working in a cloud environment, an on-premises data center or on a standalone server, this guide will help you unlock the full potential of your GPU hardware right away.

Activate and configure NVIDIA MIG on Ubuntu 24.04 Server

In our example, an NVIDIA RTX PRO 6000 Blackwell Server Edition is available, which we also offer in the NETWAYS Cloud. However, the procedure is almost identical for all MIG-capable NVIDIA GPUs. The aim is to divide the GPU into several independent instances that behave like separate mini-GPUs – with their own memory and compute resources.

Install NVIDIA drivers

Before MIG can be used, an up-to-date NVIDIA driver must be installed. On Ubuntu 24.04, this is straightforward with the package nvidia-driver-580-open; the ‘open’ variant is based on NVIDIA’s open kernel modules and is well suited for server environments and modern CUDA versions.

sudo apt install nvidia-driver-580-open --no-install-recommends

Once the command has completed successfully, we can display the available GPU with nvidia-smi (a server reboot may be required first).

Check GPU status

~$ sudo nvidia-smi -i 0
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.95.05              Driver Version: 580.95.05      CUDA Version: 13.0     |
+-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA RTX PRO 6000 Blac...    Off |   00000000:06:00.0 Off |                    0 |
| N/A   33C    P0             92W /  600W |       0MiB /  97887MiB |      1%      Default |
|                                         |                        |             Disabled |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+

This is the ideal basis for ensuring that the hardware has been initialized without errors.

Activate NVIDIA Multi-Instance GPU (MIG) mode

Before instances can be created, MIG mode must be activated for each GPU. This process is reversible and does not affect operation as long as no instances are active. After activation, the GPU is ready for partitioning.

~$ sudo nvidia-smi -i 0 -mig 1
Enabled MIG Mode for GPU 00000000:06:00.0
All done.

Show available MIG profiles

Each MIG-capable NVIDIA GPU offers a set of MIG profiles representing different resource layouts, ranging from small 1g profiles up to larger 4g or 7g configurations.

~$ nvidia-smi mig -lgip
+-------------------------------------------------------------------------------+
| GPU instance profiles:                                                        |
| GPU   Name               ID    Instances   Memory     P2P    SM    DEC   ENC  |
|                                Free/Total   GiB              CE    JPEG  OFA  |
|===============================================================================|
|   0  MIG 1g.24gb         14     4/4        23.62      No     46     1     1   |
|                                                               1     1     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 1g.24gb+me      21     1/1        23.62      No     46     1     1   |
|                                                               1     1     1   |
+-------------------------------------------------------------------------------+
|   0  MIG 1g.24gb+gfx     47     4/4        23.62      No     46     1     1   |
|                                                               1     1     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 1g.24gb+me.all  65     1/1        23.62      No     46     4     4   |
|                                                               1     4     1   |
+-------------------------------------------------------------------------------+
|   0  MIG 1g.24gb-me      67     4/4        23.62      No     46     0     0   |
|                                                               1     0     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 2g.48gb          5     2/2        47.38      No     94     2     2   |
|                                                               2     2     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 2g.48gb+gfx     35     2/2        47.38      No     94     2     2   |
|                                                               2     2     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 2g.48gb+me.all  64     1/1        47.38      No     94     4     4   |
|                                                               2     4     1   |
+-------------------------------------------------------------------------------+
|   0  MIG 2g.48gb-me      66     2/2        47.38      No     94     0     0   |
|                                                               2     0     0   |
+-------------------------------------------------------------------------------+
|   0  MIG 4g.96gb          0     1/1        95.00      No     188    4     4   |
|                                                               4     4     1   |
+-------------------------------------------------------------------------------+
|   0  MIG 4g.96gb+gfx     32     1/1        95.00      No     188    4     4   |
|                                                               4     4     1   |
+-------------------------------------------------------------------------------+

Split GPUs into several partitions

The parameter -cgi (create GPU instance) defines which instance types are created. In the example, two 2g.48gb instances are created – ideal for running two isolated AI inference services in parallel.

The -C flag ensures that suitable compute instances are created in addition to the GPU instances. These are necessary for applications to be able to use the MIG instance at all:

sudo nvidia-smi mig -cgi 2g.48gb,2g.48gb -C

or with the previously determined ID:

sudo nvidia-smi mig -cgi 5,5 -C

This step is the core of the MIG configuration: from this moment on, the physical GPU behaves like several logical GPUs. The result can be verified with nvidia-smi mig -lgi (GPU instances) and nvidia-smi mig -lci (compute instances).

Identify MIG instances

The instances created, including their UUIDs, can be displayed via nvidia-smi -L. These unique identifiers are used later on, for example, for container runtime configurations, assignments in Kubernetes or monitoring tools.
Each instance is completely isolated and can be assigned exclusively to an application or a container.

~$ nvidia-smi -L
GPU 0: NVIDIA RTX PRO 6000 Blackwell Server Edition (UUID: GPU-ad2e909a-16c4-5867-7e0f-c68d696dc0fa)
  MIG 2g.48gb     Device  0: (UUID: MIG-af414487-fcaa-5f42-b210-6f614c9cf780)
  MIG 2g.48gb     Device  1: (UUID: MIG-fcf3ca68-f772-5d84-9d6b-a0e1bcabf88b)
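For scripting – for example, feeding the identifiers into container runtime flags – the UUIDs can be extracted from the nvidia-smi -L output. A minimal sketch, with the sample output above inlined so it runs without a GPU:

```shell
# Sample `nvidia-smi -L` output inlined for illustration; on a real host use:
#   nvidia-smi -L | grep -o 'MIG-[0-9a-f-]*'
sample='GPU 0: NVIDIA RTX PRO 6000 Blackwell Server Edition (UUID: GPU-ad2e909a-16c4-5867-7e0f-c68d696dc0fa)
  MIG 2g.48gb     Device  0: (UUID: MIG-af414487-fcaa-5f42-b210-6f614c9cf780)
  MIG 2g.48gb     Device  1: (UUID: MIG-fcf3ca68-f772-5d84-9d6b-a0e1bcabf88b)'

# Print one MIG UUID per line (the GPU UUID is skipped, as it starts with GPU-).
echo "$sample" | grep -o 'MIG-[0-9a-f-]*'
```

A UUID obtained this way can be passed to the NVIDIA Container Toolkit (e.g. `docker run --gpus '"device=MIG-…"'`) or exported as `CUDA_VISIBLE_DEVICES` to pin a process to a single instance.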


Remove GPU partitions

If layouts need to be changed or the GPU is required for other workloads, the partitions can simply be deleted again – compute instances first (-dci), then GPU instances (-dgi). New profiles or combinations can then be created.

This makes MIG extremely flexible: workloads can be adapted to new requirements within seconds.

sudo nvidia-smi mig -dci
sudo nvidia-smi mig -dgi

Use MIG in Kubernetes

In Kubernetes, the workflow is somewhat more complex, but particularly powerful. The NVIDIA GPU operator is required to activate NVIDIA MIG in your cluster.

Configure NVIDIA GPU operator

The NVIDIA GPU Operator automates driver installation, device detection, plug-in management and MIG configuration in the cluster.
After adding the Helm repository, installation is just a single command away.

helm repo add nvidia https://nvidia.github.io/gpu-operator
helm repo update

Then create a values.yaml that controls the following things:

  • whether Node Feature Discovery is activated
  • which MIG strategy is used
  • whether a MIG manager should actively configure instances
  • optional tolerations so that pods are planned correctly

This file is, so to speak, the “control center” for how MIG should function in the cluster.

---
nfd:
  enabled: true             # Activates Node Feature Discovery for automatic labeling of GPU-enabled nodes.

driver:
  enabled: false            # Deactivates automatic driver installation in case a driver is already present
                            # on the node.

mig:
  strategy: mixed           # MIG strategy 'mixed' allows side-by-side configuration of MIG and non-MIG
                            # workloads in the cluster.

migManager:
  enabled: true             # Activates the MIG manager which is in charge of partitioning and configuring
                            # MIG devices.
                        
  config:                   # Reference to a user-defined MIG ConfigMap.
    name: nvidia-mig-config # Name of the MIG ConfigMap.

# Optional: Tolerations for DaemonSets to (dis-)allow Pods on specific Nodes.
daemonsets:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      effect: "NoSchedule"
      value: "SomeValue"

node-feature-discovery:
  worker:
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        effect: "NoSchedule"
        value: "SomeValue" 

Create a ConfigMap to specify a user-defined layout, e.g. 1× 2g.48gb + 2× 1g.24gb.
This allows each node to be partitioned exactly according to this pattern, which is extremely helpful if several nodes are to be configured identically or if certain workloads require fixed resources.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nvidia-mig-config 
  namespace: gpu-operator
data:
  config.yaml: |
    version: v1
    mig-configs:
      # RTX PRO 6000 Blackwell SE 96GB
      2g.48gb-2x1g.24gb:
        - devices: all
          mig-enabled: true
          mig-devices:
            "2g.48gb": 1
            "1g.24gb": 2

Import the ConfigMap into the Kubernetes cluster:

kubectl apply -f nvidia-mig-config.yaml

Install GPU operator

After applying the ConfigMap, the GPU Operator needs to be installed. It automatically detects GPU-capable nodes and partitions them according to the stored configuration. The entire process is automated.

helm install gpu-operator nvidia/gpu-operator --create-namespace \
  -n gpu-operator \
  -f values.yaml

Label GPU Nodes in the cluster

To tell Kubernetes which node should receive the defined MIG layout, it must be labeled.
This label decides which instances are created – a simple but very precise control.

The value of the label must correspond to the name of a defined MIG configuration from the previously created ConfigMap:

kubectl label nodes $GPU_NODE nvidia.com/mig.config=2g.48gb-2x1g.24gb --overwrite

The operator will then activate NVIDIA MIG on the labeled cluster nodes.
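With the mixed strategy, each MIG profile is exposed as its own extended resource named nvidia.com/mig-&lt;profile&gt;, so a pod can request a specific slice via its resource limits. A minimal sketch – the pod name and container image are just examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo                                  # example name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.4.0-base-ubuntu22.04  # example image
      command: ["nvidia-smi", "-L"]
      resources:
        limits:
          nvidia.com/mig-2g.48gb: 1               # one 2g.48gb MIG slice
```

The scheduler then places the pod on a node that offers a free 2g.48gb instance, and inside the container only that slice is visible.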

Conclusion – Activate NVIDIA MIG and get the most out of your GPU

With NVIDIA’s MIG technology, the potential of a single server GPU can be significantly increased: by splitting it into independent mini-GPUs, different workloads – from AI inference and database acceleration to classic HPC tasks – can run simultaneously without interfering with each other. The actual effort involved is limited to installing a current driver, activating MIG mode via nvidia-smi and creating the desired instances.

Once set up, the layout can be managed automatically both on individual servers and in Kubernetes clusters, resulting in better resource utilization and lower costs. So if you want to utilize the full performance of your NVIDIA GPU, you should activate NVIDIA MIG today and integrate the flexible partitioning concept into your daily operations.

Don’t have a compatible GPU at hand? No problem. In the NETWAYS Cloud and in NETWAYS Managed Kubernetes® we offer suitable NVIDIA GPUs – including help with setup and installation from our MyEngineer®.
