How to clean garbage from CUDA in PyTorch?
I trained my neural nets and realized that even after torch.cuda.empty_cache() and gc.collect(), my CUDA device memory is still filled. In Colab notebooks we can see the current variables in memory, but even when I delete every variable and collect the garbage, the GPU memory stays busy. I heard it's because the Python garbage collector can't work on the CUDA device. Please explain what I should do.
Solution 1:[1]
For me, I had to delete the model before emptying the cache:
import gc
import torch

del model                  # drop the last reference to the model's GPU tensors
gc.collect()               # let Python actually free the objects
torch.cuda.empty_cache()   # release the cached blocks back to the driver
Then you can check that the memory was freed using nvidia-smi.
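If you would rather verify this from inside Python instead of shelling out to nvidia-smi, PyTorch exposes counters for its GPU memory use. A minimal sketch (the device index 0 is an assumption; pass your own device):

import torch

# bytes occupied by live tensors on device 0 (assumed device index)
allocated = torch.cuda.memory_allocated(0)
# bytes held by PyTorch's caching allocator, which is what nvidia-smi reports
reserved = torch.cuda.memory_reserved(0)
print(f"allocated: {allocated / 2**20:.1f} MiB, reserved: {reserved / 2**20:.1f} MiB")

After del plus gc.collect(), memory_allocated() should drop; memory_reserved() only drops after torch.cuda.empty_cache().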
Solution 2:[2]
You can do this:
import gc
import torch

gc.collect()               # free unreachable Python objects first
torch.cuda.empty_cache()   # then release PyTorch's cached GPU memory
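Putting both solutions together, here is a minimal end-to-end sketch (the layer sizes and variable names are placeholders, not from the original answers) showing that the memory only drops once every reference to the GPU tensors is gone:

import gc
import torch

model = torch.nn.Linear(4096, 4096).cuda()   # placeholder model to occupy GPU memory
out = model(torch.randn(64, 4096, device="cuda"))

print(torch.cuda.memory_allocated())         # non-zero: tensors are alive

del out, model             # drop every reference to the GPU tensors
gc.collect()               # collect anything still held by reference cycles
torch.cuda.empty_cache()   # hand the cached blocks back to the driver

print(torch.cuda.memory_allocated())         # now (close to) zero

Note that in a notebook, hidden references such as the _ output variable or a stored traceback can keep tensors alive, which is a common reason the memory looks stuck even after you delete your own variables.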
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | DvdG |
| Solution 2 | razimbres |