Category "gpu"

Is there any way to get around the OpenCL memory limit on Android?

I want to allocate 4.5GB to my OpenCL program on an Android phone with 8GB of memory, but I found the memory size from CL_DEVICE_GLOBAL_MEM_SIZE is much lower than th
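A note on the likely cause, offered as an assumption rather than a confirmed answer: CL_DEVICE_MAX_MEM_ALLOC_SIZE usually caps any single buffer well below CL_DEVICE_GLOBAL_MEM_SIZE, so a 4.5GB allocation often has to be split across several buffers. A minimal sketch of the two relevant queries, written with pyopencl for brevity (on Android the asker would make the same queries through the C API):

    # Minimal sketch (assumes pyopencl is installed): query the two limits that
    # typically cap OpenCL allocations. CL_DEVICE_MAX_MEM_ALLOC_SIZE is often far
    # smaller than CL_DEVICE_GLOBAL_MEM_SIZE and bounds any single buffer.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            print(dev.name)
            print("  global mem :", dev.global_mem_size // 2**20, "MiB")
            print("  max alloc  :", dev.max_mem_alloc_size // 2**20, "MiB")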

TensorFlow error: This file requires compiler and library support for the ISO C++ 2011 standard

The result is below. I run the project stylegan2, but it fails, so I need help. The link is https://github.com/NVlabs/stylegan2 File "/home/ubuntu/worksp

TensorFlow "Adding visible gpu devices: 0"

Adding a GPU device to TensorFlow takes a long time (about 5 minutes). 2020-10-13 20:40:44.526254: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successful
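One commonly reported cause of multi-minute delays at this step, stated here as an assumption: the CUDA driver is JIT-compiling PTX because the TensorFlow binary was not built for the installed GPU's architecture. Enlarging the driver's JIT cache lets subsequent runs reuse the compiled kernels; a sketch:

    # Hedged sketch: if the delay is CUDA's PTX JIT compiling kernels for a GPU
    # architecture missing from the TF binary, a larger JIT cache lets later
    # runs reuse the compiled kernels. Set this before importing tensorflow.
    import os
    os.environ["CUDA_CACHE_MAXSIZE"] = str(4 * 1024**3)  # 4 GiB JIT cache

    import tensorflow as tf
    print(tf.config.list_physical_devices("GPU"))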

docker stack deploy with GPU, but can't find nvidia devices

Description: When I use docker-compose up to start the program, the code works well! But when I use
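A widely used workaround, given as an assumption since the question is truncated: docker stack deploy ignores the Compose runtime: key, so the NVIDIA runtime can instead be made the default for all containers in /etc/docker/daemon.json (followed by a restart of the Docker daemon):

    {
        "default-runtime": "nvidia",
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        }
    }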

Using a PyTorch Dataset for model inference on GPU

I am running T5-base-grammar-correction for grammar correction on the text column of my dataframe. from happytransformer import HappyTextToText from happytransform
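A minimal batched-inference sketch, assuming a transformers-style seq2seq model rather than the asker's happytransformer wrapper; the checkpoint name and the df dataframe are placeholders for the asker's setup:

    # Sketch: wrap the text column in a Dataset, batch with a DataLoader, and
    # run generation on the GPU under no_grad. "t5-base" and df are placeholders.
    import torch
    from torch.utils.data import Dataset, DataLoader
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    class TextDataset(Dataset):
        def __init__(self, texts):
            self.texts = list(texts)
        def __len__(self):
            return len(self.texts)
        def __getitem__(self, idx):
            return self.texts[idx]

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained("t5-base")  # placeholder checkpoint
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-base").to(device).eval()

    loader = DataLoader(TextDataset(df["text"]), batch_size=32)  # df: the asker's dataframe
    results = []
    with torch.no_grad():
        for batch in loader:  # batch is a list of strings
            enc = tokenizer(list(batch), return_tensors="pt", padding=True,
                            truncation=True).to(device)
            out = model.generate(**enc, max_length=128)
            results.extend(tokenizer.batch_decode(out, skip_special_tokens=True))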

hipLaunchKernel failed

I am using ROCm's HIP for programming. When I use hipLaunchKernelGGL, everything works fine. But when I use hipLaunchKernel(), I always get bizarre errors. Erro

Is the render target view the only way to output data from a pixel shader in DirectX?

Purpose: I want to render an image on the screen and save it to disk. Description: I have a render target view. I have an input shader resource view with its

OpenCL clBuildProgram() access violation exception

I'm getting a weird error executing an OpenCL kernel. When I try to build the kernel using clBuildProgram(): err = clBuildProgram(progra
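Access violations inside clBuildProgram() are often triggered by a bad source pointer or length argument, or by the driver crashing on malformed source; dumping the per-device build log is the usual first diagnostic. A compact stand-in for the asker's C code, using pyopencl:

    # Sketch (pyopencl stand-in for the asker's C code): attempt the build and
    # dump the per-device build log, which usually pinpoints the offending line.
    import pyopencl as cl

    ctx = cl.create_some_context()
    src = "__kernel void noop(__global float* x) { }"  # placeholder kernel source
    prg = cl.Program(ctx, src)
    try:
        prg.build()
    except cl.RuntimeError:
        for dev in ctx.devices:
            print(prg.get_build_info(dev, cl.program_build_info.LOG))
        raise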

Video lagging during object detection in C++

There is a pre-trained object detection model, i.e. YOLOv3/v4-tiny. When the algorithm is implemented in Python, everything looks good; there is no lag while p
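When the Python version of the same network runs smoothly and the C++ one lags, a frequent culprit is OpenCV's DNN module silently falling back to CPU in the C++ build. Pinning the CUDA backend explicitly is worth checking; shown in Python here, but the same two calls exist in the C++ API. This assumes an OpenCV build with CUDA support, and the file names are placeholders:

    # Sketch: pin OpenCV's DNN module to the CUDA backend (requires an OpenCV
    # build compiled with CUDA support). The two setter calls mirror the C++ API.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")  # placeholder paths
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)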

How can I run TensorFlow without a GPU?

My system has a GPU. When I run TensorFlow on it, TF automatically detects the GPU and starts running the thread on the GPU. How can I change this? I.e., how can I r
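Two standard approaches; either should keep TensorFlow on the CPU:

    # Option 1: hide all GPUs from CUDA; must run before tensorflow is imported.
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

    # Option 2: keep the GPU visible to CUDA but tell TF not to use it.
    import tensorflow as tf
    tf.config.set_visible_devices([], "GPU")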

LightGBM classifier with GPU

model = lgbm.LGBMClassifier(n_estimators=1250, num_leaves=128, learning_rate=0.009, verbose=1) Using the LGBM classifier, is there a way to use t
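LightGBM selects its training device through the device parameter, so the asker's constructor only needs one extra argument; this is conditional on a GPU-enabled LightGBM build:

    # Sketch: request GPU training; requires LightGBM compiled with GPU support.
    import lightgbm as lgbm

    model = lgbm.LGBMClassifier(
        n_estimators=1250, num_leaves=128, learning_rate=0.009, verbose=1,
        device="gpu",  # falls back with an error if the build lacks GPU support
    )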

Nvidia NVML Driver/library version mismatch [closed]

When I run nvidia-smi, I get the following message: "Failed to initialize NVML: Driver/library version mismatch". An hour ago I received the sa
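This message usually means the loaded kernel module and the user-space NVIDIA libraries come from different driver versions, often after an unattended driver upgrade. A hedged diagnostic and remedy (a plain reboot achieves the same thing):

    # Compare the kernel module's driver version with what nvidia-smi reports.
    cat /proc/driver/nvidia/version
    # Reloading the module picks up the upgraded driver without a reboot.
    sudo rmmod nvidia_uvm nvidia_drm nvidia_modeset nvidia
    sudo modprobe nvidia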

tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error

I am trying to use the GPU with TensorFlow. My TensorFlow version is 2.4.1 and I am using CUDA version 11.2. Here is the output of nvidia-smi: +--------------------
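CUDA_ERROR_UNKNOWN from cuInit has many possible causes (kernel module state after suspend, device permissions, version mismatch), so the following is only a first diagnostic: check which CUDA version the installed TensorFlow wheel was built against (TF 2.4 targets CUDA 11.0, not the asker's 11.2) and whether TF can see the GPU at all:

    # Diagnostic sketch: print the CUDA version this TensorFlow wheel was built
    # against and the GPUs TF can currently see.
    import tensorflow as tf

    print(tf.sysconfig.get_build_info().get("cuda_version"))
    print(tf.config.list_physical_devices("GPU"))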

How important are many of the new GPU algorithms that are being published? [closed]

I have done some GPU programming in the past and realized that unlike in typical sequential algorithms, where for example you may have an O(nl

Is Google Colab notebook sharing my Drive data with the notebook author?

I am following an online tutorial and the tutor has provided a Google Colab notebook as a supplement. But whenever I run any of the cells from the notebook, I a

Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory;

When I try to run a Python script which uses TensorFlow, it shows the following error ... 2020-10-04 16:01:44.994797: I tensorflow/stream_executor/platform/defaul
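The .so.10 suffix means this TensorFlow build links against CUDA 10.x libraries. The sketch below assumes a matching toolkit is installed but simply not on the loader path; the /usr/local/cuda-10.1 location is an assumption, and if no 10.x toolkit exists the CUDA install has to be matched to the TF version instead:

    # Check whether a CUDA 10.x cuBLAS is registered with the dynamic loader.
    ldconfig -p | grep libcublas
    # If it lives under a non-default prefix (path below is an assumption),
    # put it on the loader path before launching the script.
    export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH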

Display GPU Usage While Code is Running in Colab

I have a program running on Google Colab and I need to monitor its GPU usage while it runs. I am aware that usually you would use nvidia-smi in a comman
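A sketch that polls nvidia-smi from a background thread so the long-running cell keeps executing; the query flags are standard nvidia-smi options:

    # Sketch: poll GPU utilization and memory every few seconds from a daemon
    # thread while the main cell keeps running.
    import subprocess, threading

    stop = threading.Event()

    def watch_gpu(interval=5.0):
        while not stop.is_set():
            out = subprocess.run(
                ["nvidia-smi",
                 "--query-gpu=utilization.gpu,memory.used,memory.total",
                 "--format=csv,noheader"],
                capture_output=True, text=True).stdout.strip()
            print(out)
            stop.wait(interval)

    threading.Thread(target=watch_gpu, daemon=True).start()
    # ... run the long computation here ...
    stop.set()  # stop polling when done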