How to make an Intel GPU available for processing through PyTorch?
I'm using a laptop with Intel Corporation HD Graphics 520. Does anyone know how to set it up for deep learning, specifically PyTorch? I have seen that with an Nvidia graphics card you can install CUDA, but what do you do when you have an Intel GPU?
Solution 1:[1]
PyTorch doesn't support anything other than NVIDIA CUDA and, lately, AMD ROCm.
Intel's support for PyTorch mentioned in the other answers is exclusive to the Xeon line of processors, and it's not that scalable with regard to GPUs either.
Intel's oneAPI (formerly known as oneDNN), however, has support for a wide range of hardware, including Intel's integrated graphics, but as of 10/29/2020 (PyTorch 1.7) full support is not yet implemented in PyTorch.
You still have a couple of options for inference, though.
DirectML is one of them: you convert your model to ONNX and then use the DirectML execution provider to run it on the GPU (which in this case uses DirectX 12 and works only on Windows for now). A sketch of that flow is below.
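A minimal hedged sketch of the export-then-run flow: the resnet18 model, file name, and input shape are illustrative placeholders; the firm pieces are `torch.onnx.export` and ONNX Runtime's `DmlExecutionProvider`, which ships in the `onnxruntime-directml` package on Windows.

```python
import torch
import torchvision
import onnxruntime as ort  # pip install onnxruntime-directml (Windows only)

# Export a PyTorch model to ONNX; resnet18 and the 224x224 input are just examples.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Run the exported model on the GPU through DirectML (DirectX 12).
session = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 1000) for resnet18
```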
Your other options are OpenVINO and TVM, both of which support multiple platforms, including Linux, Windows, and macOS.
All of them consume ONNX models, so you first need to convert your model to ONNX format and then hand it to the runtime; an OpenVINO sketch follows.
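For OpenVINO, the flow looks roughly like this. This is a hedged sketch assuming the 2022.x-style `openvino.runtime` Python API, which can read ONNX files directly; `"GPU"` selects the integrated Intel GPU plugin (which needs the Intel OpenCL drivers installed), and the input shape must match the exported model.

```python
import numpy as np
from openvino.runtime import Core  # pip install openvino

core = Core()
model = core.read_model("model.onnx")        # OpenVINO parses ONNX directly
compiled = core.compile_model(model, "GPU")  # "GPU" = integrated Intel GPU; "CPU" as fallback

# Run one inference on a random input (shape is illustrative).
result = compiled([np.random.rand(1, 3, 224, 224).astype(np.float32)])
print(result[compiled.output(0)].shape)
```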
Solution 2:[2]
Intel provides optimized libraries for deep and machine learning if you are using one of their later processors. A starting point would be this post about getting started with Intel's optimization of PyTorch. They provide more information about this in their AI workshops.
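For illustration, here is a hedged sketch of the typical entry point of these CPU-side optimizations: `ipex.optimize()` from the `intel_extension_for_pytorch` package. The package name is an assumption on my part (the post above is the authoritative source), and the toy model is a placeholder.

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # assumed package: pip install intel_extension_for_pytorch

# Toy network standing in for whatever model you want to optimize.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Apply Intel's operator/graph optimizations for recent Intel CPUs.
model = ipex.optimize(model)

with torch.no_grad():
    out = model(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10])
```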
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Hossein |
| Solution 2 | Alexander Mayer |