Has anyone successfully installed CUDA 7.5 on Ubuntu 14.04.3 LTS x86_64?


12

My workstation has two GPUs (a Quadro K5200 and a Quadro K2200) and the latest NVIDIA driver installed (version 352.41). After downloading cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb from the CUDA 7.5 Downloads page, I tried to install it, but got the following result:

root@P700-Bruce:/home/bruce/Downloads# sudo apt-get install cuda
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 cuda : Depends: cuda-7-5 (= 7.5-18) but it is not going to be installed
 unity-control-center : Depends: libcheese-gtk23 (>= 3.4.0) but it is not going to be installed
                        Depends: libcheese7 (>= 3.0.1) but it is not going to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

I have already tried these solutions:

  1. sudo apt-get remove nvidia-cuda-*    # remove the old nvidia-cuda packages
  2. Installing the unmet dependencies:

    root@P700-Bruce:/home/bruce/Downloads# apt-get install cuda-7-5
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:
    
    The following packages have unmet dependencies:
     cuda-7-5 : Depends: cuda-toolkit-7-5 (= 7.5-18) but it is not going to be installed
                Depends: cuda-runtime-7-5 (= 7.5-18) but it is not going to be installed
     unity-control-center : Depends: libcheese-gtk23 (>= 3.4.0) but it is not going to be installed
                            Depends: libcheese7 (>= 3.0.1) but it is not going to be installed
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    
    root@P700-Bruce:/home/bruce/Downloads# apt-get install cuda-toolkit-7-5
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:
    
    The following packages have unmet dependencies:
     cuda-toolkit-7-5 : Depends: cuda-core-7-5 (= 7.5-18) but it is not going to be installed
                        Depends: cuda-command-line-tools-7-5 (= 7.5-18) but it is not going to be installed
                        Depends: cuda-samples-7-5 (= 7.5-18) but it is not going to be installed
                        Depends: cuda-documentation-7-5 (= 7.5-18) but it is not going to be installed
                        Depends: cuda-visual-tools-7-5 (= 7.5-18) but it is not going to be installed
     unity-control-center : Depends: libcheese-gtk23 (>= 3.4.0) but it is not going to be installed
                            Depends: libcheese7 (>= 3.0.1) but it is not going to be installed
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    
  3. Installing and using aptitude (a rough sketch of that attempt is shown below)
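
For reference, the aptitude attempt in item 3 looked roughly like this (a minimal sketch; aptitude interactively proposes alternative dependency resolutions that plain apt-get does not, and the exact proposals depend on the package state):

    sudo apt-get install aptitude
    sudo aptitude install cuda
    # aptitude then offers one or more ways to resolve the broken
    # cuda-7-5 / libcheese dependencies; accept or reject each proposal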

My Ubuntu 14.04 OS is a fresh install; I have applied all software updates and installed the latest NVIDIA driver.

Can you help? Thanks in advance!

Answers:


8

Installing CUDA is a bit tricky. I followed the steps below and it worked for me. You can also refer to this link.

Confirm the environment:

  1. lspci | grep -i nvidia (confirm that the NVIDIA board is listed)

  2. uname -m (make sure it is x86_64)

  3. gcc --version (make sure gcc is installed)

Install CUDA:

  1. Download cuda_7.5.18_linux.run from https://developer.nvidia.com/cuda-downloads

  2. Run the following commands:

    sudo apt-get install build-essential
    echo blacklist nouveau option nouveau modeset=0 |sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf 
    sudo update-initramfs -u
    
  3. Restart the computer.

  4. At the login screen, press Ctrl+Alt+F1 and log in as your user.

  5. Go to the directory containing the CUDA installer and run:

    chmod a+x cuda_7.5.18_linux.run
    sudo service lightdm stop
    sudo bash cuda_7.5.18_linux.run --no-opengl-libs
    
  6. During the installation:

    • Accept the EULA conditions
    • Say YES to installing the NVIDIA driver
    • Say YES to installing the CUDA Toolkit + driver
    • Say YES to installing the CUDA samples
    • Say NO to rebuilding any Xserver configurations with NVIDIA
  7. Check whether the /dev/nvidia* files exist. If they do not, run:

    sudo modprobe nvidia
    
  8. Set the environment path variables (see the note after this list on making them persistent):

    export PATH=/usr/local/cuda-7.5/bin:$PATH
    export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH
    
  9. Verify the driver version:

    cat /proc/driver/nvidia/version
    
  10. Check the CUDA compiler version:

    nvcc -V
    
  11. Start lightdm again:

    sudo service lightdm start
    
  12. Press Ctrl+Alt+F7 and log in to the system through the GUI.

  13. Build the CUDA samples: go to the NVIDIA_CUDA-7.5_Samples folder in a terminal and run the following commands:

    make
    cd bin/x86_64/linux/release/
    ./deviceQuery
    ./bandwidthTest
    

    Both tests should eventually print "PASS" in the terminal.

  14. Restart the system.
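
A note on step 8: the two export lines only apply to the current shell session. One common way to make them persistent (an assumption about your setup, not part of the original steps) is to append them to ~/.bashrc:

    # make the CUDA paths survive new shells and reboots
    echo 'export PATH=/usr/local/cuda-7.5/bin:$PATH' >> ~/.bashrc
    echo 'export LD_LIBRARY_PATH=/usr/local/cuda-7.5/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
    source ~/.bashrc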


Thank you so much! This finally worked on my Asus UX32VD (an Optimus laptop with a GeForce 620M). I tried everything. Yesterday I got nvidia-352 working with Bumblebee, but after installing the CUDA toolkit I could not run any samples (as if I had no CUDA card, and yes, I was using optirun). Other drivers left me in a login loop or with a black unity-greeter screen! I am very grateful :)

The only thing I needed to change here was option to options in the nouveau blacklist line.
TheM00s3
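
Following that correction, the blacklist file written in step 2 of the answer would end up looking like this (a minimal sketch of /etc/modprobe.d/blacklist-nouveau.conf; rerun sudo update-initramfs -u after editing it):

    # /etc/modprobe.d/blacklist-nouveau.conf
    blacklist nouveau
    options nouveau modeset=0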

I have an HP desktop with an NVIDIA GeForce GTX 680. Your instructions mostly worked, except that the graphics driver bundled with the runfile (cuda_7.5.18_linux.run) caused lightdm to stop working after a reboot (after GRUB you get a black screen with an endlessly blinking cursor). My fix was to first uninstall that driver with sudo apt-get purge nvidia-* and then install the latest runfile downloaded from the official NVIDIA site. That works fine. Another fix is solution (A) from askubuntu.com/a/676772/194156

2

There are two ways to install a suitable CUDA driver (for Optimus setups and boards with a built-in graphics chipset): the first method described here is the simplest, while the second is more cumbersome but also works:

A)

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
sudo apt-get install nvidia-355 nvidia-prime
sudo reboot
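
After the reboot, one way to confirm which driver method A left active (a hedged check; prime-select is provided by the nvidia-prime package installed above) is:

    prime-select query    # should print "nvidia" when the NVIDIA GPU is selected
    nvidia-smi            # lists the GPU and the driver version actually in use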

B)

Method B is described here, but it is already quite old (explained by user dschinn1001). Method B is more modest and possibly risky, but it does no harm:

How to install the Nvidia GT 520 driver and CUDA 5.0 in Ubuntu 13.04?

NVIDIA's beta driver package for Linux can be downloaded here:

http://www.nvidia.de/object/cuda_1_1_beta.html

Method A is simpler, but it is unclear how it interacts with xscreensaver. Method B is older, but the driver packages were also updated recently, and once method B is done it should work better together with xscreensaver, provided xscreensaver is installed. (I tested method B on 13.10 and it worked very well, even with xscreensaver. I think the rest of this thread depends on the hardware.)

In addition, for Bumblebee with Optimus graphics chipsets, these Bumblebee tweaks are also necessary:

How to set up nVidia Optimus / Bumblebee in 14.04


1

This sounds like LP bug 1428972.

User fennytansy added a workaround in comment #10:

sudo apt-get install libglew-dev libcheese7 libcheese-gtk23 libclutter-gst-2.0-0 libcogl15 libclutter-gtk-1.0-0 libclutter-1.0-0


After running the command the screen went black. I can only access tty1. Do you know of another solution?
Karesh Arunakirinathan

1

I installed CUDA successfully using the runfile method. It is a bit more troublesome to set up because the primary graphics driver must also be installed via the runfile method (see here).

Try installing just the driver; this can be done with the runfile method. It prompts you for each part of the installation and lets you disable the GL libraries and toolkits. unity-control-center also kept giving me problems, since the CUDA samples need libGLU.so rather than libGL.so. That is easy to fix when building your own learning samples.
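
If the samples fail to build because of the libGLU.so / libGL.so issue mentioned above, one possible fix (an assumption about the missing packages, not something stated in this answer) is to install the Mesa GL/GLU development packages and link the graphics-based samples explicitly:

    sudo apt-get install libglu1-mesa-dev freeglut3-dev
    # hypothetical example of building one OpenGL-based sample by hand
    nvcc mySample.cu -o mySample -lGL -lGLU -lglut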


1

Try uninstalling the nvidia driver and installing CUDA directly without it. On a fresh Ubuntu 14.04, I followed the instructions from the NVIDIA website. Besides verifying compatible versions of everything (gcc, kernel), the instructions were:

sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb 
sudo apt-get update
sudo apt-get install cuda 

Fortunately, the correct nvidia driver was installed as a by-product of the steps above.
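
To confirm that both the driver and the toolkit from the repository install are usable, a quick check (assuming the toolkit landed in the default /usr/local/cuda-7.5 location) is:

    nvidia-smi                                # should list the GPUs and the driver version
    /usr/local/cuda-7.5/bin/nvcc --version    # should report release 7.5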


1

I spent a whole day trying to use "ppa:graphics-drivers/ppa" to update the NVIDIA driver to version 352. Everything failed. After one install, gpu-manager.log reported that the driver was installed while Xorg.0.log reported the opposite.

The nouveau driver had already been removed and blacklisted:

    sudo apt-get --purge remove xserver-xorg-video-nouveau
    cat /etc/modprobe.d/nouveau-nomodeset-jsrobin.conf
    blacklist nouveau
    options nouveau modeset=0
    alias nouveau off
    alias lbm-nouveau off

I finally gave up and used a pure "NVIDIA ... bin" solution:

  1. Blacklisted nouveau as shown above.
  2. Completely uninstalled the nouveau Xserver as described above.
  3. Set the system BIOS to use PCIe (the two nvidia cards) as primary and disabled the motherboard HD4600 interface.
  4. Booted into recovery mode, activated networking, and then switched to console mode.
  5. Ran "NVIDIA-Linux-x86_64-352.41.run -uninstall" just to make sure nothing was left behind.
  6. Deleted all old directories in /etc and /usr/local that looked like remnants of past cuda or nvidia installs.
  7. Ran "NVIDIA-Linux-x86_64-352.41.run".
  8. Ran "NVIDIA-Linux-x86_64-352.41.run --check" to verify that everything was correct.
  9. Then ran "cuda_7.5.18_linux.run" to complete the installation. Things are currently working: both monitors are up and running, and the CUDA sample files are currently building. Be sure to use the "--help" flag on the NVIDIA installer bins. The main reason I decided to go the bin route (besides the fact that one of the other alternatives did not work) is that the "bin" method provides an easy recovery path after a "mesa" OpenGL update.
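
For convenience, the commands quoted in steps 5-9 amount to the following (a hedged restatement; whether you execute the .run files directly after chmod +x or via sh is a matter of taste):

    sudo sh NVIDIA-Linux-x86_64-352.41.run -uninstall   # step 5: remove any leftover driver
    sudo sh NVIDIA-Linux-x86_64-352.41.run              # step 7: install the 352.41 driver
    sudo sh NVIDIA-Linux-x86_64-352.41.run --check      # step 8: verify the installation
    sudo sh cuda_7.5.18_linux.run                       # step 9: install the CUDA 7.5 toolkit
    sh cuda_7.5.18_linux.run --help                     # lists the remaining installer flags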

1

I rebooted Ubuntu today and found yet another unmet dependency, something like libcogl15 : Depends: mesa-driver... (I cannot remember the full package name), so I used apt-get install to install the "mesa" driver. After that, CUDA 7.5 installed successfully.

Note that my kernel version is 3.19.0-28-generic and my gcc version is Ubuntu 4.8.4-2ubuntu1~14.04, neither of which is listed in the official CUDA 7.5 documentation. I will check whether it really works.
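
A quick way to check whether this unlisted kernel/gcc combination really works (a sketch, assuming the samples were installed to the default ~/NVIDIA_CUDA-7.5_Samples location) is to rebuild and run one of the bundled samples:

    uname -r && gcc --version | head -n1              # versions to compare against the docs
    cd ~/NVIDIA_CUDA-7.5_Samples/1_Utilities/deviceQuery
    make && ./deviceQuery                             # should end with Result = PASS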


1
For some reason, the mesa drivers on my machine caused all kinds of Unity problems at boot and led to a complete system failure. Be careful.
asdf

@Bruce Yo - generally it is not only a mesa problem, it also depends on the chipset paired with hybrid nvidia graphics cards, and these vary. You should also consider my solution. :o)
dschinn1001

0

I tried sudo su followed by apt-get install cuda instead of sudo apt-get install cuda. It worked.

 sudo dpkg -i cuda-repo-ubuntu1404_7.5-18_amd64.deb
 sudo apt-get update
 sudo su
 apt-get install cuda

Welcome to Ask Ubuntu; it is nice to see you sharing your knowledge. However, this is not a forum, it is a Q&A site; please take a look at the help tour. Duplicating another answer (the one by user 661266) does not help; once you have enough reputation you will be able to upvote it instead.
user.dz

@Sneetsher thanks for your comment. I had already tried user 661266's answer, but it did not work. When I used "su" instead of "sudo" it worked; I do not know why, but it worked in my case. I think my solution is worth trying for some people.
softgearko


-1

- lightdm login problems (login loop)

- driver install problems ("The driver installation has failed: it appears that an X server is running ...")

To successfully install the NVIDIA CUDA toolkit on Ubuntu 16.04 64-bit, all I had to do was:

  1. Make an Ubuntu live image on a pendrive (an 8 GB stick is enough) - experimenting this way will save you a lot of effort compared with a failed install on your host Linux system!
  2. Log in to a live session from the pendrive ("Try Ubuntu before installing").
  3. Add a sudo user in the live session:

    sudo adduser admin      # password: admin1

    sudo usermod -aG sudo admin

  4. Log out of the live session and log in as admin.

  5. Download the CUDA toolkit from the official NVIDIA website (~1.5 GB).
  6. Make the downloaded installer file executable (do not install it in this step!):

    sudo chmod +x cuda_X.X.run

  7. Switch to the console view:

    Ctrl+Alt+F1 (switches to the terminal view)
    Ctrl+Alt+F7 (switches from the terminal view back to the graphical server)

  8. Log in in the console view (Ctrl+Alt+F1):

    login: admin    password: admin1

  9. Stop the running graphical service:

    sudo service lightdm stop

  10. Check that the graphical server is off: after switching with Ctrl+Alt+F7 the monitor should stay black; then switch back to the console view with Ctrl+Alt+F1.

  11. Install the CUDA Toolkit with the following configuration:

    sudo ./cuda_X.X.run   (press 'q' to skip reading the license)
    Do NOT install the OpenGL libraries.
    Do NOT update the system X configuration.
    Answer yes to the other options and keep the default paths.

  12. Start the graphical server again:

    sudo service lightdm start

  13. Log in as admin (if the live session logged you in automatically as the ubuntu user, log out first):

    login: admin    password: admin1

  14. Check that the nvcc compiler works with the simple parallel vector sum computed on GPU blocks:

    Save vecSum.cu and book.h (listed below) as new files, then compile and run them in a terminal:
    /usr/local/cuda-8.0/bin/nvcc vecSum.cu && clear && ./a.out

  15. Check the console printout - it should look similar to:

    0.000000 + 0.000000 = 0.000000

    -1.100000 + 0.630000 = -0.000000
    
    -2.200000 + 2.520000 = 0.319985
    
    -3.300000 + 5.670000 = 2.119756
    -4.400000 + 10.080000 = 5.679756
    -5.500000 + 15.750000 = 10.250000
    -6.600000 + 22.680000 = 16.017500
    -7.700000 + 30.870001 = 23.170002
    -8.800000 + 40.320000 = 31.519997
    -9.900000 + 51.029999 = 41.129967
    
  16. If everything worked in the pendrive live session, do the same on your host Linux system.

P.S. Please note that this is not an ideal tutorial, but it worked well for me!

======= vecSum.cu =====

#include "book.h"
#define N 50000
///usr/local/cuda-8.0/bin/nvcc vecSum.cu && clear && ./a.out

//"HOST" = CPU
//"Device" = GPU

__global__ void add( float *a, float *b, float *c )
{
    int tid = blockIdx.x;
    if ( tid < N )
        c[ tid ] = a[ tid ] + b[ tid ];
}

int main ( void )
{
    float a[ N ], b[ N ], c[ N ];
    float *dev_a, *dev_b, *dev_c;
    //GPU memory allocation
    HANDLE_ERROR( cudaMalloc( ( void** )&dev_a, N * sizeof( float ) ) );
    HANDLE_ERROR( cudaMalloc( ( void** )&dev_b, N * sizeof( float ) ) );
    HANDLE_ERROR( cudaMalloc( ( void** )&dev_c, N * sizeof( float ) ) );

    //sample input vectors CPU generation
    for ( int i = 0; i < N; i++ )
    {
        a[ i ] = -i * 1.1;
        b[ i ] = i * i * 0.63;
    }

    //copy/load from CPU to GPU data vectors a[], b[] HostToDevice
    HANDLE_ERROR( cudaMemcpy( dev_a, a, N * sizeof( float ), cudaMemcpyHostToDevice ) );
    HANDLE_ERROR( cudaMemcpy( dev_b, b, N * sizeof( float ), cudaMemcpyHostToDevice ) );

    //calculate sum of vectors on GPU
    add<<<N,1>>> ( dev_a, dev_b, dev_c );

    //copy/load result vector from GPU to CPU c[] DeviceToHost
    HANDLE_ERROR( cudaMemcpy( c, dev_c, N * sizeof( float ), cudaMemcpyDeviceToHost ) );

    //printout results
    for ( int i = 0; i < 10; i++ ) printf( "%f + %f = %f\n", a[ i ], b[ i ], c[ i ] );

    //free memory and constructed objects on GPU
    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );

    return 0;
}

========= book.h ======

/*
 * Copyright 1993-2010 NVIDIA Corporation.  All rights reserved.
 *
 * NVIDIA Corporation and its licensors retain all intellectual property and
 * proprietary rights in and to this software and related documentation.
 * Any use, reproduction, disclosure, or distribution of this software
 * and related documentation without an express license agreement from
 * NVIDIA Corporation is strictly prohibited.
 *
 * Please refer to the applicable NVIDIA end user license agreement (EULA)
 * associated with this source code for terms and conditions that govern
 * your use of this NVIDIA software.
 *
 */


#ifndef __BOOK_H__
#define __BOOK_H__
#include <stdio.h>

static void HandleError( cudaError_t err,
                         const char *file,
                         int line ) {
    if (err != cudaSuccess) {
        printf( "%s in %s at line %d\n", cudaGetErrorString( err ),
                file, line );
        exit( EXIT_FAILURE );
    }
}
#define HANDLE_ERROR( err ) (HandleError( err, __FILE__, __LINE__ ))


#define HANDLE_NULL( a ) {if (a == NULL) { \
                            printf( "Host memory failed in %s at line %d\n", \
                                    __FILE__, __LINE__ ); \
                            exit( EXIT_FAILURE );}}

template< typename T >
void swap( T& a, T& b ) {
    T t = a;
    a = b;
    b = t;
}


void* big_random_block( int size ) {
    unsigned char *data = (unsigned char*)malloc( size );
    HANDLE_NULL( data );
    for (int i=0; i<size; i++)
        data[i] = rand();

    return data;
}

int* big_random_block_int( int size ) {
    int *data = (int*)malloc( size * sizeof(int) );
    HANDLE_NULL( data );
    for (int i=0; i<size; i++)
        data[i] = rand();

    return data;
}


// a place for common kernels - starts here

__device__ unsigned char value( float n1, float n2, int hue ) {
    if (hue > 360)      hue -= 360;
    else if (hue < 0)   hue += 360;

    if (hue < 60)
        return (unsigned char)(255 * (n1 + (n2-n1)*hue/60));
    if (hue < 180)
        return (unsigned char)(255 * n2);
    if (hue < 240)
        return (unsigned char)(255 * (n1 + (n2-n1)*(240-hue)/60));
    return (unsigned char)(255 * n1);
}

__global__ void float_to_color( unsigned char *optr,
                              const float *outSrc ) {
    // map from threadIdx/BlockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float l = outSrc[offset];
    float s = 1;
    int h = (180 + (int)(360.0f * outSrc[offset])) % 360;
    float m1, m2;

    if (l <= 0.5f)
        m2 = l * (1 + s);
    else
        m2 = l + s - l * s;
    m1 = 2 * l - m2;

    optr[offset*4 + 0] = value( m1, m2, h+120 );
    optr[offset*4 + 1] = value( m1, m2, h );
    optr[offset*4 + 2] = value( m1, m2, h -120 );
    optr[offset*4 + 3] = 255;
}

__global__ void float_to_color( uchar4 *optr,
                              const float *outSrc ) {
    // map from threadIdx/BlockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;

    float l = outSrc[offset];
    float s = 1;
    int h = (180 + (int)(360.0f * outSrc[offset])) % 360;
    float m1, m2;

    if (l <= 0.5f)
        m2 = l * (1 + s);
    else
        m2 = l + s - l * s;
    m1 = 2 * l - m2;

    optr[offset].x = value( m1, m2, h+120 );
    optr[offset].y = value( m1, m2, h );
    optr[offset].z = value( m1, m2, h -120 );
    optr[offset].w = 255;
}


#if _WIN32
    //Windows threads.
    #include <windows.h>

    typedef HANDLE CUTThread;
    typedef unsigned (WINAPI *CUT_THREADROUTINE)(void *);

    #define CUT_THREADPROC unsigned WINAPI
    #define  CUT_THREADEND return 0

#else
    //POSIX threads.
    #include <pthread.h>

    typedef pthread_t CUTThread;
    typedef void *(*CUT_THREADROUTINE)(void *);

    #define CUT_THREADPROC void
    #define  CUT_THREADEND
#endif

//Create thread.
CUTThread start_thread( CUT_THREADROUTINE, void *data );

//Wait for thread to finish.
void end_thread( CUTThread thread );

//Destroy thread.
void destroy_thread( CUTThread thread );

//Wait for multiple threads.
void wait_for_threads( const CUTThread *threads, int num );

#if _WIN32
    //Create thread
    CUTThread start_thread(CUT_THREADROUTINE func, void *data){
        return CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)func, data, 0, NULL);
    }

    //Wait for thread to finish
    void end_thread(CUTThread thread){
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
    }

    //Destroy thread
    void destroy_thread( CUTThread thread ){
        TerminateThread(thread, 0);
        CloseHandle(thread);
    }

    //Wait for multiple threads
    void wait_for_threads(const CUTThread * threads, int num){
        WaitForMultipleObjects(num, threads, true, INFINITE);

        for(int i = 0; i < num; i++)
            CloseHandle(threads[i]);
    }

#else
    //Create thread
    CUTThread start_thread(CUT_THREADROUTINE func, void * data){
        pthread_t thread;
        pthread_create(&thread, NULL, func, data);
        return thread;
    }

    //Wait for thread to finish
    void end_thread(CUTThread thread){
        pthread_join(thread, NULL);
    }

    //Destroy thread
    void destroy_thread( CUTThread thread ){
        pthread_cancel(thread);
    }

    //Wait for multiple threads
    void wait_for_threads(const CUTThread * threads, int num){
        for(int i = 0; i < num; i++)
            end_thread( threads[i] );
    }

#endif




#endif  // __BOOK_H__
Licensed under cc by-sa 3.0 with attribution required.