
Tag: cuda

Like nvidia-smi, can nvidia-occupancy (CUDA occupancy) also collect values in real time?

I already collect time-series data every 10 seconds using nvidia-smi, and I would like to collect occupancy data ("nvidia-occupancy") in the same way. Is there any way to save occupancy time-series data from a Linux terminal? Currently, the only values I can easily obtain are the maximum values. Answer Currently, there isn’t any tool to collect occupancy information the way nvidia-smi collects
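As the (truncated) answer notes, occupancy is exposed through the CUDA runtime rather than a monitoring tool. A minimal sketch of the theoretical-occupancy calculation using cudaOccupancyMaxActiveBlocksPerMultiprocessor, against a hypothetical kernel named myKernel and an assumed block size of 256:

```
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel standing in for the real workload; it is only used
// here as the argument to the occupancy query, never launched.
__global__ void myKernel(float *data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= 2.0f;
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    int blockSize = 256;      // launch configuration being evaluated
    int numBlocksPerSm = 0;   // active blocks per SM at that block size

    // Ask the runtime how many blocks of myKernel fit on one multiprocessor.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocksPerSm, myKernel,
                                                  blockSize, 0 /* dyn. smem */);

    // Theoretical occupancy = active warps / maximum warps per SM.
    double activeWarps = numBlocksPerSm * blockSize / (double)prop.warpSize;
    double maxWarps    = prop.maxThreadsPerMultiProcessor / (double)prop.warpSize;
    printf("Theoretical occupancy: %.2f\n", activeWarps / maxWarps);
    return 0;
}
```

This only yields the theoretical (maximum) occupancy of a launch configuration; sampling achieved occupancy over time requires a profiler such as nvprof or CUPTI, which is consistent with the answer's point that no nvidia-smi-style tool exists for it.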

Running CUDA on ThinkPad w550s Ubuntu system (Quadro K620M) [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question

Kernel update breaks CUDA

I have an NVIDIA GRID K2 GPU allocated to a virtual server running Ubuntu 14.04. To reinstall the proper drivers after an automatic kernel update, I ran sudo apt-get update followed by sudo apt-get install nvidia-current. Now I cannot get CUDA 7.5 to work any longer. If I run the deviceQuery sample, I get the following message: This is the

ImportError: libcudart.so.7.0: cannot open shared object file: No such file or directory

When I execute a Python command, an “ImportError” occurs. Some people solved it by adding “export LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib:/usr/local/cuda-5.5/lib64” to /etc/profile. I tried that, but it had no effect. Running the whereis command, the only “libcudart.so.7.0” result I find is /usr/share/man/man7/libcudart.so.7, and I have no idea what to do next to solve it. Answer This error is being raised because the loader cannot find version 7.0 of

CUDA C++: Using a template function which calls a template kernel

I have a class which has a template function. This function calls a template kernel. I’m doing my development in Nsight on a Linux box. In doing this, I encounter the following pair of conflicting requirements: 1 – When implementing a template function, the definition must appear in the *.h (or *.cu.h) file since the code is not generated until
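One common way to resolve the conflict described above is to keep both the kernel template and the template member function that launches it in a header that is included only from .cu files, so nvcc sees every instantiation. A minimal sketch with hypothetical names (scaleKernel, Scaler):

```
// scaler.cuh -- illustrative header; both templates are defined here so that
// nvcc can generate code for them at the point of instantiation.
#pragma once
#include <cuda_runtime.h>

template <typename T>
__global__ void scaleKernel(T *data, T factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

class Scaler {
public:
    // Template member function that launches the template kernel. Because the
    // <<<>>> launch syntax appears here, any translation unit that instantiates
    // this function must be a .cu file compiled by nvcc.
    template <typename T>
    void scale(T *deviceData, T factor, int n) {
        int block = 256;
        int grid  = (n + block - 1) / block;
        scaleKernel<T><<<grid, block>>>(deviceData, factor, n);
    }
};
```

The alternative is to keep the definitions in a .cu file and explicitly instantiate the template for each type that is actually used.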

Do I need to install Nvidia’s SDK (CUDA) for OpenCL to detect an Nvidia GPU?

I have code written in C (using the OpenCL API) to list all the available devices. My PC has an AMD FirePro as well as an Nvidia Tesla graphics card installed. I first installed AMD-APP-SDK-v3.0-0.113.50-Beta-linux64.tar.bz2, but it didn’t seem to work, so I then installed OpenCL™ Runtime 15.1 for Intel® Core™ and Intel® Xeon® Processors for Red Hat* and SLES* Linux*

__ldg causes slower execution time in certain situations

I already posted this issue yesterday, but it wasn’t well received; I have a solid repro now, so please bear with me. System specs: Tesla K20m with the 331.67 driver, CUDA 6.0, Linux. I have a global-memory-read-heavy application, so I tried to optimize it by using the __ldg intrinsic in every single place where I am reading global
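For context, __ldg is an intrinsic available on compute capability 3.5+ devices (such as the K20m) that routes a load through the read-only data cache. A minimal sketch of the kind of substitution described, with hypothetical kernel and array names (compile with nvcc -arch=sm_35):

```
#include <cuda_runtime.h>

// Illustrative kernel: `in` is read-only for the lifetime of the launch, so it
// is a candidate for __ldg, which issues the load through the read-only
// (texture) cache instead of a plain global load.
__global__ void saxpy_ldg(int n, float a,
                          const float * __restrict__ in,
                          float * __restrict__ out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a * __ldg(&in[i]) + out[i];
    }
}
```

Whether this helps depends on the access pattern and on how much the data is reused; the read-only path has different latency and caching behavior, so applying __ldg to every global read is not always a win.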

How do I get my CUDA specs on a Linux machine?

I’m accessing a remote machine that has a good NVIDIA card for CUDA computing, but I can’t find a way to know which card it uses and what its CUDA specs are (version, etc.). I used the “lspci” command in the terminal, but there is no sign of an NVIDIA card. I’m pretty sure it has an NVIDIA card, and
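A small program against the CUDA runtime API can report the same information as the toolkit’s deviceQuery sample. A minimal sketch, assuming the driver and toolkit are installed (compile with nvcc):

```
#include <cstdio>
#include <cuda_runtime.h>

// deviceQuery-style sketch: prints the runtime and driver versions, then the
// name, compute capability, and memory size of each visible CUDA device.
int main() {
    int runtimeVersion = 0, driverVersion = 0, count = 0;
    cudaRuntimeGetVersion(&runtimeVersion);
    cudaDriverGetVersion(&driverVersion);
    cudaGetDeviceCount(&count);
    printf("CUDA runtime %d, driver %d, %d device(s)\n",
           runtimeVersion, driverVersion, count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("Device %d: %s, compute capability %d.%d, %.1f GiB\n",
               d, prop.name, prop.major, prop.minor,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```

nvidia-smi (if the driver is installed) and the toolkit’s own deviceQuery sample report the same details without writing any code.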
