NVIDIA Exploring Adversarial Machine Learning (Course Review)

For the past year, I’ve been playing around with prompt injection, system prompt leaking (“extraction” as the cool ML peeps call it), and data poisoning. However, this has always been ad hoc, through personal research and capture-the-flag (CTF) competitions. I’ve come to realise that there is a lot more to adversarial machine learning (ML) than prompt injection, and I wanted to explore other ways these ML systems can be exploited and understand the underlying math/ML principles a little better.
I came across this course (Exploring Adversarial Machine Learning) when a friend posted about it last year but only got around to it now. He’s also written a review about the course itself which you can read here.
All in all, this was a great course! I had heard of many of the concepts during past research and while scanning ML papers, but I’d never dug into them or looked at how to actually perform these attacks. The course also showcases a lot of great tools to help perform attacks, optimise hyper-parameters, and dive into data using Exploratory Data Analysis (EDA).
The EDA sections were actually super interesting: you might know that not all information is relevant to the model when it makes predictions, but seeing which parts are relevant is another story. For example, you can identify which parts of an image a prediction model relies on when it attempts to identify what it’s looking at; if you can then transpose those parts onto other images, you may be able to trick the model into thinking it’s seeing X instead of Y:
This is a simple example, but it got me thinking: what if you were able to do the same with something like near-invisible watermarks (e.g. SynthID)? Could the watermark be extracted and then transposed onto another image to trick the model into thinking this was a watermarked image when it wasn’t (i.e. forge a false positive)? Food for thought…
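To make the relevance idea concrete, here’s a toy occlusion-based saliency sketch (my own illustration, not the course’s code): slide a grey patch over the image and record how much the model’s confidence drops at each position. `predict_proba` is a hypothetical callable that returns per-class probabilities for a single image with pixel values in [0, 1].

```python
import numpy as np

def occlusion_saliency(predict_proba, image, label, patch=4):
    # Baseline confidence for the class we care about.
    base = predict_proba(image)[label]
    h, w = image.shape[:2]
    heat = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.5  # grey out one patch
            # A big confidence drop means the occluded region mattered.
            heat[i:i + patch, j:j + patch] = base - predict_proba(occluded)[label]
    return heat
```

The bright regions of the resulting heat map are exactly the parts you’d want to transplant onto another image.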
The course costs €90 (~A$150), which gets you access to:
- Full coursework with a Jupyter lab environment, notebooks, training data, answer notebooks, boilerplate code, etc.
- 30 GPU hours (in sessions of up to 8 hours before the machine auto-shuts down) on an NVIDIA A10G (AWS g5.2xlarge EC2 instance). You can see the specs in the appendix.
- A certification after passing an exam that tests each of the core concepts.
Should you do the suggested prerequisite (Getting Started with Deep Learning)? If you can read/write Python code and know a little about PyTorch, pandas, and NumPy, it’s probably not needed: there are plenty of well-commented examples throughout the course, and it already includes all the resources and external links/papers/articles you’ll need. If you’re stuck on an exercise or have questions about the course, you can always ask on the NVIDIA Forums.
The Lab Environment
Once you start the course, you are given access to a Jupyter lab environment hosted on an EC2 instance:
The environment is very responsive and contains everything you’ll need for the course: the coursework, tutorials, exercises, answers, and the exam itself.
Important
Every time you shut down the environment, everything is deleted; the next time you start it, you get a clean deployment. As such, it’s super important to save your work if you don’t want to lose progress. Thankfully, most exercises have answer notebooks, so you can always fall back on those if needed.
Before you start, you might want to make a full export of the lab environment. You can use the following command to create an archive that you can then download for offline use:
tar chvfz notebook.tar.gz *

Note: there are some larger models and a bunch of training data, which makes generating the archive quite slow (approx. 1.4 GB compressed). Hence, it’s not worth doing every time you start/stop the environment unless you’re planning on smashing the course in under 8 hours.
Go through the course at your own pace and try to understand as much as possible. There are a few external resources shared throughout the course which are good reads too.
The Content
Model Evasion
Generating a sample to intentionally get misclassified while preserving some other property (such as human imperceptibility).
This section goes through open-box and closed-box evasion techniques and algorithms, such as SimBA (Simple Black-box Attack) and HopSkipJump. This is also the first section, so it might take some getting used to if you haven’t played much with Jupyter notebooks before, although they’re easy to use and understand. The course also highlights some useful Jupyter functionality throughout, which is especially needed in later sections where you start creating and optimising models.
This section and the next are very information dense, but you’ll get to reuse a lot of it as you move on to other sections, so don’t be afraid to take it slow here.
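To give a flavour of the closed-box side, here’s a minimal sketch of the SimBA idea (my own simplification, not the course’s implementation): repeatedly nudge random pixels and keep only the nudges that lower the model’s confidence in the true label. `predict_proba` is again a hypothetical query-only interface to the target.

```python
import numpy as np

def simba(predict_proba, x, label, eps=0.2, max_queries=1000):
    # Closed box: we only observe output probabilities, never gradients.
    x_adv = x.copy()
    p = predict_proba(x_adv)[label]
    # Try random pixel (standard-basis) directions, one at a time.
    for i in np.random.permutation(x_adv.size)[:max_queries]:
        for step in (eps, -eps):
            candidate = x_adv.copy()
            candidate.flat[i] = np.clip(candidate.flat[i] + step, 0.0, 1.0)
            q = predict_proba(candidate)[label]
            if q < p:  # keep the step only if true-label confidence drops
                x_adv, p = candidate, q
                break
        if p < 0.5:  # probably misclassified by now; stop early
            break
    return x_adv
```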
Model Extraction
Using queries to build a functionally equivalent model.
This is a really interesting concept: you learn to re-create a model by extracting a lot of data from the model you’re targeting. You may be wondering what the point of this is. Well, if you can gather enough data from the target model (whose internals you might not have access to), you can train a local model, attack it locally (which is usually a lot easier and faster) and then, if your local model is representative enough, use that same attack against the target. Mind-blowing idea, but super cool! You also get to replicate CVE-2019-20634 inside the lab, a really cool vulnerability that existed in Proofpoint Email Protection’s spam detection.
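A minimal sketch of the idea, using scikit-learn toys rather than the course’s setup: treat one model as a closed-box oracle, harvest its labels, train a surrogate, and measure how closely they agree.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy "target" model we pretend is a remote, closed-box service.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
target = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(X, y)

# Extraction: query the target on inputs we control and harvest its labels...
X_query = np.random.randn(5000, 20)
y_stolen = target.predict(X_query)

# ...then train a local surrogate on the stolen (input, label) pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(X_query, y_stolen)

# Fidelity: how often the surrogate agrees with the target on fresh inputs.
X_test = np.random.randn(1000, 20)
print("agreement:", (surrogate.predict(X_test) == target.predict(X_test)).mean())
```

Once the surrogate agrees with the target often enough, attacks crafted against it locally have a good chance of transferring.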
Model Assessments
Applying building-block techniques to perform holistic and useful security assessments of ML models.
This section focuses on introducing tools and libraries to perform various attacks against offline and online models. It showcases the Adversarial Robustness Toolbox (ART), a Python library for machine learning security. The course demonstrates a number of algorithms, including some that you’ve implemented yourself; however, the library is far more optimised, so don’t cry when you see that the adversarial images you created in previous sections are much worse than the ones ART creates, where changing 3 pixels on an image of a dog makes it get identified as a spaceship… I’m exaggerating, but still, it’s incredible how much you can optimise your attacks.
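As a rough sketch of what using ART looks like (parameters kept deliberately small, and not matching the course’s labs), here’s HopSkipJump run against a simple scikit-learn victim:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a simple victim on the 8x8 digits dataset, pixels scaled to [0, 1].
X, y = load_digits(return_X_y=True)
X = (X / 16.0).astype(np.float32)
victim = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model for ART, then run the closed-box HopSkipJump attack.
classifier = SklearnClassifier(model=victim, clip_values=(0.0, 1.0))
attack = HopSkipJump(classifier=classifier, max_iter=10, max_eval=1000)
x_adv = attack.generate(x=X[:5])

print("original labels:   ", victim.predict(X[:5]))
print("adversarial labels:", victim.predict(x_adv))
```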
You’ll also be introduced to Alibi, a library which provides a set of algorithms known as explainers: algorithms that give insights into a trained model’s predictions. It builds on the EDA introduced in previous sections and showcases more ways to learn why a model predicts x/y/z.
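For a taste of what an explainer looks like in practice, here’s a hedged sketch using Alibi’s AnchorTabular on a toy dataset (not the course’s example): an “anchor” is a minimal set of feature conditions that locks in a prediction.

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Fit the explainer on the training data, then ask why one flower
# was classified the way it was.
explainer = AnchorTabular(clf.predict, data.feature_names)
explainer.fit(data.data)
explanation = explainer.explain(data.data[0])

print(explanation.anchor)     # human-readable rules, e.g. 'petal width (cm) <= 0.80'
print(explanation.precision)  # how reliably those rules imply the prediction
```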
Lastly, you’ll learn about hyper-parameter optimisation and how to use Optuna to optimise your attacks. The course introduces the basics and shows how Optuna hides most of the complexity, but don’t panic if you feel like this section raises more questions than it answers. Hyper-parameter optimisation is not easy, and while the library makes it seem like a breeze, the core concepts get more difficult the deeper you dig.
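The shape of an Optuna study is simple, though: define an objective over suggested hyper-parameters and let the library search. Here’s a minimal sketch with a toy stand-in for the attack (`run_attack` is hypothetical, not a course function):

```python
import optuna

# Hypothetical stand-in for a real attack run: returns whether the attack
# succeeded and how large the perturbation was for these hyper-parameters.
def run_attack(eps, max_iters):
    success = eps * max_iters > 20  # pretend bigger budgets succeed more often
    return success, eps

def objective(trial):
    eps = trial.suggest_float("eps", 0.01, 0.3)
    max_iters = trial.suggest_int("max_iters", 50, 1000)
    success, norm = run_attack(eps, max_iters)
    return norm if success else 1e9  # heavy penalty for failed attacks

# Search for the least visible perturbation that still fools the model.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```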
Model Inversion
Inverting a model in order to get a representation of the training data.
The inversion section showcases how to recover data by querying a model. As mentioned in the course, this works best when the examples for each class (assuming we’re attacking a classification model) are very similar to each other; basically, the attack is easiest when the model is overfit. The MIFace attack showcased in the course was originally developed against facial recognition systems.
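The core trick behind MIFace-style inversion is just gradient ascent on the input. Here’s a generic PyTorch sketch (not the course’s or ART’s implementation), assuming `model` is a classifier over 28×28 greyscale images:

```python
import torch

def invert_class(model, target_class, shape=(1, 1, 28, 28), steps=500, lr=0.1):
    # Start from a blank image and gradient-ascend towards whatever input
    # maximises the model's confidence in `target_class`.
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]  # maximise the target logit
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)  # keep pixels in a valid range
    return x.detach()
```

On a badly overfit classifier, the result can start to resemble an average training example for that class, which is exactly why overfitting makes this attack easier.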
Don’t go thinking that after this course you’ll be able to recover the training data for Claude or ChatGPT… But you will be introduced to blind inference attacks against a diffusion model (image generation) to recover training data from it! This reminded me of the ChatGPT Ghibli fiasco, where OpenAI was accused of exploiting artists’ work (i.e. ingesting copyrighted material).
Model Poisoning
Training-time attacks designed to influence decisions made by the final model.
This is another great section that is simple to think about and understand, and something you may have done in the past as part of CTFs (I believe there was a challenge in picoCTF some time ago). However, this section shows why these sorts of things happen and how you can abuse them; a toy sketch follows the examples below.
With LLMs ingesting everything and anything nowadays, we’ve already seen some cases of this happening:
- elder-plinius has become quite popular on Twitter for jail-breaking models and extracting system prompts. I don’t have the reference, but some models have obviously ingested his tweets, code, etc., and there have been cases where simply mentioning parts of his name in a conversation would trigger his jailbreaks.
- There was also another instance of indirect poisoning where a GPT model was using people’s feedback for post-training. In an anecdotal tweet, Will Depue, a researcher at OpenAI, explains how a GPT model stopped responding in Croatian entirely because Croatian users were more prone to down-vote messages!
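The toy label-flipping sketch promised above, using scikit-learn rather than the course’s lab: poison a slice of the training labels and watch the resulting model degrade.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = X[:1500], X[1500:], y[:1500], y[1500:]

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison 20% of the training set by flipping labels.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=300, replace=False)
y_bad = y_tr.copy()
y_bad[idx] = 1 - y_bad[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```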
Large Language Model (LLM) Attacks
LLM Model attacks: Prompt Injection, Data Poisoning, Training Data Extraction
This being the last section, if you follow AI news on Twitter and have looked into attacking LLMs before, it should be a breeze and quite easy to run through.
It covers techniques that you’ve seen in previous sections, but applied to LLMs. You also get to implement part of the Extracting Training Data from Large Language Models paper and attempt to extract information that might have been ingested during training.
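The paper’s core recipe is surprisingly compact. Here’s a condensed sketch with GPT-2 via Hugging Face transformers (the lab’s exact setup may differ): sample unconditioned generations, then rank them by perplexity, since text the model is abnormally confident about may be memorised training data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    # Lower perplexity = the model finds this text more "familiar".
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return torch.exp(model(ids, labels=ids).loss).item()

# Sample unconditioned generations, then surface the most "familiar" ones.
start = torch.tensor([[tok.bos_token_id]])
samples = [
    tok.decode(
        model.generate(start, do_sample=True, top_k=40, max_length=64,
                       pad_token_id=tok.eos_token_id)[0],
        skip_special_tokens=True,
    )
    for _ in range(20)
]
for text in sorted(samples, key=perplexity)[:3]:
    print(round(perplexity(text), 1), repr(text[:80]))
```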
The assessment for this section focuses only on the first part, but all three parts are interesting and worth learning about.
The Exam
To earn the certificate, you’ll need to complete the assessments. There are 6 assessed exercises, one for each major topic. The exam is very straightforward: slightly harder than the coursework, but not by much, and easily achievable in the allocated time. You also have all the material at your disposal, so feel free to copy code from previous exercises and whatnot; there’s no need to memorise everything.
You can also do the exam after each section instead of waiting to do everything at the end. Just remember to save your answers, because all sections are graded at once.
Finally, once you achieve a score above 90% you’ll pass the course and receive the coveted certification highlighting your hard work:
You’ll then receive the digital certification within 24 hours via email. Or, if you’re impatient like me, you can retrieve it from here:
You can take a clean screenshot of your cert using your browser’s node screenshot functionality: open the HTML inspector and right-click on the node you wish to capture to get a perfect screenshot, without worrying about cropping or getting the delimitation just right:
This leaves you with a beautiful certificate to proudly display on LinkedIn:
Protip
Once you’re done with the exam, mine crypto with the remaining GPU hours to make the cert cheaper. Just kidding; but more seriously, you can still use the environment after passing the exam, and GPU hours are GPU hours. Use them to practice, run an RL pipeline, or anything else: you paid for EC2 hours in the end, so make use of them.
Final Remarks
Most concepts are easy to grasp if you’ve played around with AI and machine learning in general. The only hard part is remembering all the attacks and everything that is possible. Remembering how to use the attacks/algorithms with automation tools can also be a little overwhelming, but as always, you just need time to practice. As they say in the course, there’s a huge number of algorithms, so it’s normal not to know or remember everything; just know that they exist, remember the attacks, and know where to look.
Unrelated, but I got caught off guard by the following code comments in the exam; I thought they somehow knew my handle (#FYXME):
Lastly, one of the authors of this course, Will Pearce, also has his own adversarial machine learning training platform, which you can look into if you want to practice further: https://dreadnode.io/
References
- https://dreadnode.io/
- https://docs.pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#gradients
- https://nvd.nist.gov/vuln/detail/CVE-2019-20634
- https://arxiv.org/abs/2303.14126
- https://datascience.stackexchange.com/questions/32651/what-is-the-use-of-torch-no-grad-in-pytorch
- https://docs.seldon.ai/alibi-explain/
- https://optuna.org/
- https://lnwatson.co.uk/posts/llm-training-poisoning/
- https://www.nvidia.com/en-us/data-center/products/a10-gpu/
Appendix:
Course Outline
1. Introduction
Overview of topics covered in the hands-on notebooks
- Course Overview Notebook
- Getting Started With JupyterLabs Notebook
2. Model Evasion
Generating a sample to intentionally get misclassified while preserving some other property (such as human imperceptibility).
- Evasion Open-Box Notebook
- Evasion Closed-Box Notebook
3. Model Extraction
Using queries to build a functionally equivalent model.
- Extraction Basics Notebook
- Extraction "the Hard Way" Notebook
4. Model Assessments
Applying building-block techniques to perform holistic and useful security assessments of ML models.
- Assessments ART, TextAttack, Alibi Notebook
- Assessments Optuna Notebook
5. Model Inversion
Inverting a model in order to get a representation of the training data.
- Inversion Notebook
6. Model Poisoning
Training-time attacks designed to influence decisions made by the final model.
- Poisoning Notebook
7. LLM Attacks
LLM Model attacks: Prompt Injection, Data Poisoning, Training Data Extraction
- LLM Prompting Notebook
- LLM Poisoning Notebook
- LLM Extraction Notebook
8. Course Certificate Coding Assessment
Environment Specifications
root@8cde1fbe28c8:/dli/task# curl http://169.254.169.254/latest/meta-data/instance-type
g5.2xlarge
root@8cde1fbe28c8:/dli/task# uname -a
Linux 8cde1fbe28c8 5.11.0-1028-aws #31~20.04.1-Ubuntu SMP Fri Jan 14 14:37:50 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
root@939ff24637c3:/dli/task# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 291G 44G 248G 15% /
tmpfs 64M 0 64M 0% /dev
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/root 291G 44G 248G 15% /usr/bin/nvidia-smi
shm 64M 0 64M 0% /dev/shm
tmpfs 16G 12K 16G 1% /proc/driver/nvidia
tmpfs 3.2G 904K 3.2G 1% /run/nvidia-persistenced/socket
devtmpfs 16G 0 16G 0% /dev/nvidia0
tmpfs 16G 0 16G 0% /proc/acpi
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
root@939ff24637c3:/dli/task# free -h
total used free shared buff/cache available
Mem: 31Gi 756Mi 18Gi 0.0Ki 12Gi 29Gi
Swap: 0B 0B 0B
root@939ff24637c3:/dli/task# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7R32
stepping : 0
microcode : 0x830107f
cpu MHz : 2800.000
cache size : 512 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
bugs : sysret_ss_attrs null_seg spectre_v1 spectre_v2 spec_store_bypass
bogomips : 5600.00
TLB size : 3072 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management:
[... processors 1-7 omitted: identical to processor 0 apart from core id, apicid, and current clock speed ...]
root@939ff24637c3:/dli/task# nvidia-smi
Thu Jan 22 00:44:12 2026
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 12.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10G On | 00000000:00:1E.0 Off | 0 |
| 0% 21C P8 17W / 300W | 0MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+