
Welcome back! This is the fourth post in the deep learning development environment configuration series, which accompanies my new book, Deep Learning for Computer Vision with Python.


Today, we will configure Ubuntu + NVIDIA GPU + CUDA with everything you need to be successful when training your own deep learning networks on your GPU.

Links to related tutorials can be found here:

  • Configuring Ubuntu for deep learning with Python (for a CPU only environment)
  • Setting up Ubuntu 16.04 + CUDA + GPU for deep learning with Python (this post)
  • Configuring macOS for deep learning with Python (releasing on Friday)

If you have an NVIDIA CUDA-compatible GPU, you can use this tutorial to configure your deep learning development environment to train and execute neural networks on your optimized GPU hardware.

Let’s go ahead and get started!

Setting up Ubuntu 16.04 + CUDA + GPU for deep learning with Python

If you’ve reached this point, you are likely serious about deep learning and want to train your neural networks with a GPU.

Graphics Processing Units are great at deep learning because of their parallel processing architecture. In fact, these days many GPUs are built specifically for deep learning, putting them to use well outside the domain of computer gaming.

NVIDIA is the market leader in deep learning hardware, and quite frankly the primary option I recommend if you are getting into this space. It is worth getting familiar with their lineup of products (hardware and software) so you know what you're paying for, whether you're using an instance in the cloud or building a machine yourself. Be sure to check out this developer page.

It is common to share high-end GPU machines at universities and companies. Alternatively, you may build one, buy one (as I did), or rent one in the cloud (as I still do today).

If you are just doing a couple of experiments then using a cloud service provider such as Amazon, Google, or FloydHub for a time-based usage charge is the way to go.

Longer term, if you are working on deep learning experiments daily, it would be wise to have a machine on hand for cost savings (assuming you're willing to keep the hardware and software updated regularly).

Note: For those utilizing AWS's EC2, I recommend you select the p2.xlarge, p2.8xlarge, or p2.16xlarge machines for compatibility with these instructions (depending on your use case scenario and budget). The older g2.2xlarge and g2.8xlarge instances are not compatible with the versions of CUDA and cuDNN in this tutorial. I also recommend that you have about 32GB of space on your OS drive/partition. 16GB didn't cut it for me on my EC2 instance.

It is important to point out that you don’t need access to an expensive GPU machine to get started with Deep Learning. Most modern laptop CPUs will do just fine with the small experiments presented in the early chapters in my book. As I say, “fundamentals before funds” — meaning, get acclimated with modern deep learning fundamentals and concepts before you bite off more than you can chew with expensive hardware and cloud bills. My book will allow you to do just that.

How hard is it to configure Ubuntu with GPU support for deep learning?

You'll soon find out below that configuring a GPU machine isn't a cakewalk. In fact, there are quite a few steps and plenty of potential for things to go sour. That's why I have built a custom Amazon Machine Image (AMI), pre-configured and pre-installed, for the community to accompany my book.

I detailed how to get it loaded into your AWS account and how to boot it up in this previous post.

Using the AMI is by far the fastest way to get started with deep learning on a GPU. Even if you do have a GPU, it’s worth experimenting in the Amazon EC2 cloud so you can tear down an instance (if you make a mistake) and then immediately boot up a new, fresh one.

How smoothly configuring an environment on your own goes is directly related to your:

  1. Experience with Linux
  2. Attention to detail
  3. Patience

First, you must be very comfortable with the command line.

Many of the steps below have commands that you can simply copy and paste into your terminal; however, it is important that you read the output, note any errors, and try to resolve them before moving on to the next step.

You must pay particular attention to the order of the instructions in this tutorial, and furthermore pay attention to the commands themselves.

I actually do recommend copying and pasting to make sure you don't mess up a command (in one case below, backticks versus quotes could get you stuck).

If you’re up for the challenge, then I’ll be right there with you getting your environment ready. In fact, I encourage you to leave comments so that the PyImageSearch community can offer you assistance. Before you leave a comment be sure to review the post and comments to make sure you didn’t leave a step out.

Without further ado, let’s get our hands dirty and walk through the configuration steps.

Step #0: Turn off X server/X window system

Before we get started I need to point out an important prerequisite. You need to perform one of the following prior to following the instructions below:

  1. SSH into your GPU instance (with X server off/disabled).
  2. Work directly on your GPU machine without your X server running (the X server, also known as X11, is your graphical user interface on the desktop). I suggest you try one of the methods outlined on this thread.

There are a few methods to accomplish this, some easy and others a bit more involved.

The first method is a bit of a hack, but it works:

  1. Turn off your machine.
  2. Unplug your monitor.
  3. Reboot.
  4. SSH into your machine from a separate system.
  5. Perform the install instructions.

This approach works great and is by far the easiest method. With the monitor unplugged, the X server will not start automatically. From there you can SSH into your machine from a separate computer and follow the instructions outlined in this post.

The second method assumes you have already booted the machine you want to configure for deep learning:

  1. Close all running applications.
  2. Press ctrl+alt+F2.
  3. Login with your username and password.
  4. Stop the X server by executing sudo service lightdm stop.
  5. Perform the install instructions.

Please note that you’ll need a separate computer next to you to read the instructions or execute the commands. Alternatively, you could use a text-based web browser.
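Whichever method you choose, it is worth confirming that no X server is actually running before you proceed. A quick check is sketched below; the process name Xorg is typical for Ubuntu 16.04, but verify what your system uses:

```shell
# Report whether an X server (Xorg) is currently running
if pgrep -x Xorg > /dev/null; then
    echo "X server still running"
else
    echo "no X server running"
fi
```

If you see "X server still running", go back and stop lightdm (or reboot with the monitor unplugged) before continuing.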

Step #1: Install Ubuntu system dependencies

Now that we’re ready, let’s get our Ubuntu OS up to date:

$ sudo apt-get update
$ sudo apt-get upgrade

Then, let’s install some necessary development tools, image/video I/O, GUI operations and various other packages:

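These typically include build tools, image and video I/O libraries, GTK for OpenCV's GUI functions, and optimization libraries. A representative set is sketched below; the exact package names are an assumption on my part, so adjust them for your Ubuntu release:

```shell
# Illustrative dependency list (package names assumed; verify for Ubuntu 16.04)
$ sudo apt-get install build-essential cmake git unzip pkg-config
$ sudo apt-get install libjpeg-dev libtiff5-dev libpng12-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libopenblas-dev libatlas-base-dev liblapack-dev gfortran
$ sudo apt-get install python3-dev python3-tk
```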

$ wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
$ wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip

Then, unzip both files:

$ unzip opencv.zip
$ unzip opencv_contrib.zip

Running CMake


In this step we create a build directory and then run CMake:
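A typical configuration for OpenCV 3.3 with the contrib modules looks like the sketch below. All paths and flags here are illustrative, not the post's exact invocation; in particular, OPENCV_EXTRA_MODULES_PATH must match where you unzipped opencv_contrib:

```shell
# Illustrative out-of-source build of OpenCV 3.3 with CUDA support
$ cd ~/opencv-3.3.0
$ mkdir build
$ cd build
$ cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D WITH_CUDA=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
    -D BUILD_EXAMPLES=ON ..
$ make -j4
$ sudo make install
$ sudo ldconfig
```

Inspect CMake's summary output before running make; if CUDA or your Python interpreter is not detected there, the resulting build will not have the support you expect.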

After the compile and install finish, sym-link the OpenCV bindings into the dl4cv virtual environment:

$ cd ~/.virtualenvs/dl4cv/lib/python3.5/site-packages/
$ ln -s /usr/local/lib/python3.5/site-packages/cv2.cpython-35m-x86_64-linux-gnu.so cv2.so

Note: Ensure you copy and paste the ln command correctly, otherwise you'll create an invalid sym-link and Python will not be able to find your OpenCV bindings.

Your .so file may be some variant of what is shown above, so be sure to use the appropriate file.
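To see why a mistyped ln command bites you later, note that ln -s never checks that its target exists; a typo silently produces a dangling link that only fails when Python tries to import it. A small self-contained demonstration (generic temp-file paths, not the actual OpenCV files):

```shell
# ln -s does not validate its target: a typo silently creates a dangling link
tmp=$(mktemp -d)
touch "$tmp/real.so"
ln -s "$tmp/real.so" "$tmp/cv2.so"       # correct target: link resolves
ln -s "$tmp/reall.so" "$tmp/broken.so"   # misspelled target: dangling link
[ -e "$tmp/cv2.so" ] && echo "cv2.so resolves"
[ -e "$tmp/broken.so" ] || echo "broken.so is dangling"
rm -rf "$tmp"
```

Running ls -l on your site-packages directory after the ln step is a quick way to spot a dangling cv2.so before moving on.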

Testing your OpenCV 3.3 install

Now that we’ve got OpenCV 3.3 installed and linked, let’s do a quick sanity test to see if things work:

$ python
>>> import cv2
>>> cv2.__version__
'3.3.0'

Make sure you are in the dl4cv virtual environment before firing up Python. You can accomplish this by running workon dl4cv.

When you print the OpenCV version in your Python shell, it should match the version of OpenCV that you installed (in our case, OpenCV 3.3.0).


When your compilation is 100% complete, you should see output that looks similar to the following:

Figure 7: OpenCV 3.3.0 compilation is complete.

That's it. Assuming you didn't hit an import error, you're ready to go on to Step #6, where we will install Keras.

Step #6: Install Keras

For this step, make sure that you are in the dl4cv environment by issuing the workon dl4cv command.

From there we can install some required computer vision, image processing, and machine learning libraries:

Once those installs finish, open ~/.keras/keras.json (the file is created the first time you import keras) and verify its contents:

{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}

Ensure that image_data_format is set to channels_last and backend is tensorflow.
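If you prefer to check the file programmatically, the sketch below validates the two settings that matter. The JSON string is inlined here as a stand-in for the real file; in practice you would read ~/.keras/keras.json with open() instead:

```python
import json

# Stand-in for the contents of ~/.keras/keras.json
cfg = json.loads("""
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}
""")

# These two keys must hold these values for the rest of the tutorial to work
assert cfg["image_data_format"] == "channels_last", "expected channels_last ordering"
assert cfg["backend"] == "tensorflow", "expected the TensorFlow backend"
print("keras.json is configured correctly")
```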

Congratulations! You are now ready to begin your Deep Learning for Computer Vision with Python journey (Starter Bundle and Practitioner Bundle readers can safely skip Step #7).

Step #7: Install mxnet (ImageNet Bundle only)

This step is only required for readers who purchased a copy of the ImageNet Bundle of Deep Learning for Computer Vision with Python. You may also choose to use these instructions if you want to configure mxnet on your system.

Either way, let's first clone the mxnet repository and check out branch 0.11.0:

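A typical way to do this is sketched below. The repository URL is an assumption on my part (mxnet has moved between GitHub organizations over the years), so confirm the current location before cloning; the --recursive flag matters because mxnet pulls in submodules:

```shell
# Clone mxnet with its submodules and switch to the 0.11.0 branch
$ git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet
$ cd mxnet
$ git checkout 0.11.0
$ git submodule update --init --recursive
```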
