Category "cuda"

PyTorch with CUDA local installation fails

I am trying to install PyTorch with CUDA. I followed the instructions (installation using conda) mentioned in https://pytorch.org/get-started/locally/ conda in

Nvidia NVML Driver/library version mismatch

When I run nvidia-smi, I get the following message: "Failed to initialize NVML: Driver/library version mismatch". An hour ago I received the sa

Numba support for CUDA cooperative block synchronization? (Python Numba CUDA grid sync)

Numba CUDA has syncthreads() to sync all threads within a block. How can I sync all blocks in a grid without exiting the current kernel? In C CUDA there's a coo
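
For reference, the C++ CUDA facility the excerpt alludes to is cooperative groups' grid-wide synchronization (recent Numba releases expose something similar under numba.cuda.cg; check the Numba docs for your version). A minimal sketch in CUDA C++, assuming a GPU and driver that support cooperative launch:

    #include <cooperative_groups.h>
    namespace cg = cooperative_groups;

    // Requires a cooperative launch (cudaLaunchCooperativeKernel) on a device that
    // reports cudaDevAttrCooperativeLaunch; typically built with -rdc=true.
    __global__ void two_phase(float *data, int n) {
        cg::grid_group grid = cg::this_grid();
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;                 // phase 1
        grid.sync();                                // wait for every block in the grid
        if (i < n) data[i] += data[(i + 1) % n];    // phase 2 can read phase-1 results
    }

    // Launch sketch (illustrative names):
    // void *args[] = { &d_data, &n };
    // cudaLaunchCooperativeKernel((void *)two_phase, gridDim, blockDim, args);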

ncu-ui won't run: Could not load the Qt platform plugin "xcb" in "" even though it was found

I'm trying to run the ncu-ui profiler GUI on a CentOS 7 Linux system (using ncu-ui 2022.1), both as root and as a regular user. I'm getting the error: qt.qpa.pl

sprintf-like function for CUDA device-side code?

I could not find anything on the internet. Given that it is possible to use printf in a __device__ function, I am wondering if there is a sprintf-like func
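
As far as I know, the CUDA runtime only documents a device-side printf; a device-side sprintf is not provided, so formatting into a device buffer has to be hand-rolled. A minimal reminder of what is available:

    #include <cstdio>

    __global__ void report(const int *values) {
        // Device-side printf is supported (compute capability 2.0 and later);
        // a device-side sprintf is not part of the CUDA runtime.
        printf("thread %d saw value %d\n", threadIdx.x, values[threadIdx.x]);
    }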

A cuda wrapper to execute openCL

I'm involved in a project where I have to do GPU programming; one of my constraints is to do it on an NVIDIA device (thus in CUDA). But I don't have access to a dev

What is the canonical way to check for errors using the CUDA runtime API?

Looking through the answers and comments on CUDA questions, and in the CUDA tag wiki, I see it is often suggested that the return status of every API call shoul
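
The pattern usually suggested in answers to this question is a small macro that wraps every runtime API call and is also used right after kernel launches; roughly:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Wrap every CUDA runtime call; report file and line on failure.
    #define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
    inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort = true) {
        if (code != cudaSuccess) {
            fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
            if (abort) exit(code);
        }
    }

    // Usage sketch:
    // gpuErrchk(cudaMalloc(&d_x, n * sizeof(float)));
    // kernel<<<grid, block>>>(d_x);
    // gpuErrchk(cudaPeekAtLastError());    // catches launch-configuration errors
    // gpuErrchk(cudaDeviceSynchronize());  // catches asynchronous execution errors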

CUDA 10.2 in Qt 5.14 on Ubuntu 18.04

I am planning to start CUDA programming in the Qt framework. I would like to start with a simple example. System information: OS: Ubuntu 18.04 LTS, Qt version: 5

In a CUDA kernel, how do I store an array in "local thread memory"?

I'm trying to develop a small program with CUDA, but since it was SLOW I made some tests and googled a bit. I found out that while single variables are by defau
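
As a rough illustration of the distinction the question is getting at: an array declared inside a kernel is per-thread storage, and whether it stays in registers or spills to off-chip local memory depends mainly on its size and whether it is indexed with compile-time-constant indices. A sketch:

    __global__ void per_thread_scratch(const float *in, float *out, int n) {
        // A fixed-size array declared in the kernel body is per-thread ("local") storage.
        // Small arrays with constant indexing can live in registers; dynamic indexing
        // or large sizes push it into local memory (off-chip, cached).
        float scratch[8];

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        #pragma unroll
        for (int k = 0; k < 8; ++k)      // fully unrolled -> constant indices
            scratch[k] = in[i] * (k + 1);

        float sum = 0.0f;
        #pragma unroll
        for (int k = 0; k < 8; ++k)
            sum += scratch[k];
        out[i] = sum;
    }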

A top-like utility for monitoring CUDA activity on a GPU

I'm trying to monitor a process that uses CUDA and MPI. Is there any way I could do this, something like the command "top" but one that also monitors the GPU?

CUDA - Implementing Device Hash Map?

Does anyone have any experience implementing a hash map on a CUDA Device? Specifically, I'm wondering how one might go about allocating memory on the Device an
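
One approach that comes up often is a fixed-capacity open-addressing table in global memory, with insertion done through atomicCAS on the key slot. A simplified sketch (the hash function and the EMPTY sentinel are illustrative choices; the table would be cudaMalloc'd on the host and pre-filled with EMPTY, e.g. via cudaMemset to 0xFF):

    // Fixed-capacity open-addressing hash table with linear probing.
    struct HashEntry { unsigned int key; unsigned int value; };
    #define EMPTY 0xFFFFFFFFu            // reserved sentinel key

    __device__ unsigned int hash(unsigned int k, unsigned int capacity) {
        k ^= k >> 16; k *= 0x85ebca6bu; k ^= k >> 13;   // simple mixing, illustrative only
        return k % capacity;
    }

    __device__ bool insert(HashEntry *table, unsigned int capacity,
                           unsigned int key, unsigned int value) {
        unsigned int slot = hash(key, capacity);
        for (unsigned int probe = 0; probe < capacity; ++probe) {
            // Atomically claim an empty slot (or find the key already present).
            unsigned int prev = atomicCAS(&table[slot].key, EMPTY, key);
            if (prev == EMPTY || prev == key) {
                table[slot].value = value;
                return true;
            }
            slot = (slot + 1) % capacity;    // linear probing
        }
        return false;                        // table is full
    }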

When should NVRTC compilation produce a CUBIN?

If I understand the workflow description in the NVRTC documentation correctly, here's how it works: Create an NVRTC program from the source text. Compile the NV
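
If I recall the behaviour correctly (worth verifying against the NVRTC docs for your toolkit), NVRTC can hand back a CUBIN only when the program is compiled for a real architecture such as sm_80; with a virtual compute_XX target only PTX is available via nvrtcGetPTX. The nvrtcGetCUBIN/nvrtcGetCUBINSize entry points exist from roughly CUDA 11.1 onward. A sketch, assuming a toolkit recent enough to accept a real sm_XX target:

    #include <nvrtc.h>
    #include <vector>
    #include <cstdio>

    int main() {
        const char *src = "__global__ void k(float *x) { x[threadIdx.x] += 1.0f; }\n";
        nvrtcProgram prog;
        nvrtcCreateProgram(&prog, src, "k.cu", 0, nullptr, nullptr);

        const char *opts[] = { "--gpu-architecture=sm_80" };   // real arch -> CUBIN possible
        nvrtcResult res = nvrtcCompileProgram(prog, 1, opts);
        if (res != NVRTC_SUCCESS) { /* fetch details with nvrtcGetProgramLog */ return 1; }

        size_t cubinSize = 0;
        nvrtcGetCUBINSize(prog, &cubinSize);
        std::vector<char> cubin(cubinSize);
        nvrtcGetCUBIN(prog, cubin.data());
        printf("CUBIN size: %zu bytes\n", cubinSize);

        nvrtcDestroyProgram(&prog);
        return 0;
    }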

Writing from Device to Host and notifying the host

Using CUDA 5 with VS 2012 and capability 3.5 (Titan and K20). At particular stages of my kernel execution, I want to send a generated data chunk to the host me
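
One pattern that is often suggested for this (sketched below with illustrative names, and assuming mapped pinned memory is available) is to have the kernel write the chunk into zero-copy host memory, issue __threadfence_system() so the data is visible before the flag, and then raise a flag that the host polls while the kernel is still running:

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void producer(volatile float *chunk, volatile int *ready, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) chunk[i] = i * 2.0f;     // generate the data chunk
        __syncthreads();                    // all threads of the (single) block are done writing
        if (i == 0) {
            __threadfence_system();         // make the writes visible to the host first
            *ready = 1;                     // then notify the host
        }
    }

    int main() {
        const int n = 256;
        float *h_chunk; int *h_ready;
        // Older setups may need cudaSetDeviceFlags(cudaDeviceMapHost) before this.
        cudaHostAlloc((void **)&h_chunk, n * sizeof(float), cudaHostAllocMapped);
        cudaHostAlloc((void **)&h_ready, sizeof(int), cudaHostAllocMapped);
        *h_ready = 0;

        float *d_chunk; int *d_ready;
        cudaHostGetDevicePointer((void **)&d_chunk, h_chunk, 0);
        cudaHostGetDevicePointer((void **)&d_ready, h_ready, 0);

        producer<<<1, n>>>(d_chunk, d_ready, n);        // launch is asynchronous
        while (*((volatile int *)h_ready) == 0) { }     // host busy-waits for the notification
        printf("chunk[10] = %f\n", h_chunk[10]);

        cudaDeviceSynchronize();
        cudaFreeHost(h_chunk); cudaFreeHost(h_ready);
        return 0;
    }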
