DNNWeaver

From UVA ECE & BME wiki

Back to Xilinx Labs

Group: SPARK

Yunfei Gu, Vaibhav Verma, Xufeng Yuan


Using DNNWeaver to accelerate object detection



Objectives

Build DNN Accelerator Module for PYNQ-Z1 using DNNWeaver

Dnnweaver.png

Build DNNWeaver

  • Python 2.7
  • Xilinx Vivado 2016.2
  • Caffe
 make PROTOTXT:=$Your_DNNWeaver_Path/compiler/sample_prototxts/lenet.prototxt
  • Modify the Tcl script in $Your DNNWeaver Path/fpga/tcl/vivado_2016_2.tcl: comment out line 56 and add line 57
#line:56 #set file_list [split [exec cat $origin_dir/$hw_dir/file.list | egrep -v "\#" | egrep -v "^\s*$" | egrep -v "testbench" | awk {print "hardware/"$0}] "\n"]
+line:57 source ./hardware/file_list.tcl
  • Add new file: Media:file_list.tcl in $Your DNNWeaver Path/fpga/hardware
  • Modify the Verilog source code in $Your DNNWeaver Path/fpga/hardware/source/dnn_accelerator/dnn_accelerator.v at line 57:
+line:57 parameter integer ROM_ADDR_W     = 3
  • Run make all

If DNNWeaver compiles successfully, the synthesis-output/ and vivado/ directories will appear in $Your DNNWeaver Path/fpga/
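The check above can be scripted. The sketch below is not part of the DNNWeaver flow; the checkout path is an assumption you should point at your own copy:

```python
import os

def build_succeeded(fpga_dir):
    """Return True if the DNNWeaver make run produced both output dirs."""
    return all(os.path.isdir(os.path.join(fpga_dir, d))
               for d in ('synthesis-output', 'vivado'))

# Example (hypothetical path -- point it at your own checkout):
# print(build_succeeded(os.path.expanduser('~/dnnweaver/fpga')))
```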

Synthesis and Generate Bitstream

Project setting.png
Dnn block.png
Upgrade ip.png
Dnn implementation.png

The dnn_accelerator block is highlighted in red

Dnn summary.png

The figure above shows the resource-usage summary for implementing dnn_accelerator on the PYNQ-Z1

Train LeNet and get Weights and Inputs using Caffe

Caffe.png
Caffe1.png
  • Create the MNIST training and test datasets
 ./examples/mnist/create_mnist.sh
The lmdb datasets will be generated in ./examples/mnist/mnist_train_lmdb and ./examples/mnist/mnist_test_lmdb
  • Run the training script
 ./examples/mnist/lenet_train.sh
This produces the weight matrices from the 10000th training iteration in ./examples/mnist/lenet_iter_10000.caffemodel
  • Parse the weight matrices from lenet_iter_10000.caffemodel into .mat format, readable by MATLAB, following lenet.prototxt
        #!/usr/bin/env python
        import scipy.io as sio
        import caffe

        def load():
            # Load the trained net (train the caffemodel first; Caffe
            # ships a script for that)
            caffe.set_mode_gpu()
            net = caffe.Net(root + 'lenet.prototxt',
                            root + 'lenet_iter_10000.caffemodel',
                            caffe.TEST)
            # Export the weights (index 0) and biases (index 1) of each
            # layer to a separate .mat file
            for layer in ('conv1', 'conv2', 'ip1', 'ip2'):
                w = net.params[layer][0].data
                b = net.params[layer][1].data
                sio.savemat(layer + '_w', {layer + '_w': w})
                sio.savemat(layer + '_b', {layer + '_b': b})

        if __name__ == "__main__":
            # You will need to change this path
            root = '/home/yunfei/FPGA_project/caffe/examples/mnist/'
            load()
            print('Caffemodel loaded and written to .mat files successfully!')


We will get conv1_w.mat, conv1_b.mat, conv2_w.mat, conv2_b.mat, ip1_w.mat, ip1_b.mat, ip2_w.mat and ip2_b.mat
Example: conv1_w.mat
Matlab.png
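The exported files can also be read back in Python to verify the shapes. The helper below is written for this page, not part of the original flow; the expected shapes are those of the standard Caffe LeNet definition:

```python
import scipy.io as sio

def load_param(mat_file, key):
    """Read one exported parameter array back from a .mat file."""
    return sio.loadmat(mat_file)[key]

# For the standard Caffe LeNet, the expected weight shapes are:
#   conv1_w: (20, 1, 5, 5)   conv2_w: (50, 20, 5, 5)
#   ip1_w:   (500, 800)      ip2_w:   (10, 500)
# e.g. print(load_param('conv1_w.mat', 'conv1_w').shape)
```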
  • Decode the lmdb data


 import caffe
 import lmdb
 import numpy as np
 import cv2
 from caffe.proto import caffe_pb2

 lmdb_env = lmdb.open('/home/yunfei/FPGA_project/caffe/examples/mnist/mnist_test_lmdb/')
 lmdb_txn = lmdb_env.begin()
 lmdb_cursor = lmdb_txn.cursor()
 datum = caffe_pb2.Datum()
 for key, value in lmdb_cursor:
     datum.ParseFromString(value)
     label = datum.label
     data = caffe.io.datum_to_array(datum)
     # CxHxW to HxWxC for cv2
     image = np.transpose(data, (1, 2, 0))
     cv2.imshow('cv2', image)
     cv2.waitKey(1)
     print('{},{}'.format(key, label))
The keys and labels are printed to the terminal:
Lmdb.png

Implement the DNN_accelerator Bitstream on PYNQ-Z1

In the original DNNWeaver flow, the bitstream is loaded onto the Zynq board through PetaLinux, a Linux operating system running on the FPGA's ARM processors. The PYNQ-Z1, however, ships with a Python environment and libraries that let designers exploit both the programmable logic and the microprocessors in Zynq to build more capable embedded systems. The bitstream can therefore be loaded onto the PYNQ-Z1 with the built-in Python packages instead of PetaLinux, following the instructions in PYNQ: PYTHON PRODUCTIVITY FOR ZYNQ.

PYNQ-Z1 Board Setup

Boardsetup.png

Ethernet Setup

You will need an available Ethernet port on your computer and permission to configure your network interface. With a direct connection you can use PYNQ, but unless you bridge the board's Ethernet connection to an Internet connection on your computer, the board will not have Internet access, and you will be unable to update or install new packages.

  • Assign your computer a static IP address
  • Connect the PYNQ-Z1 to your computer's Ethernet port
  • Browse to http://192.168.2.99:9090 [PYNQ board should be connected via Ethernet to access the link]
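Before opening the browser, you can check from Python whether the board's Jupyter port is reachable. The `reachable` helper below is written for this page; the IP and port follow the static-network setup above:

```python
import socket

def reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()

# With the board cabled and configured as above:
# print(reachable('192.168.2.99', 9090))
```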

Connecting to Jupyter Notebooks

To connect to Jupyter Notebooks, open a web browser and navigate to http://192.168.2.99:9090

Load Bitstream to PYNQ-Z1

PYNQ-Z1 has the built-in Overlay Class Python package to load an overlay. An overlay is instantiated by specifying the name of the bitstream file. Instantiating the Overlay also downloads the bitstream by default and parses the Tcl file.
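A minimal sketch of the loading step follows. The file location is an assumption; since the Overlay class parses the companion Tcl, the .bit and .tcl files must share a base name in the same directory, which the helper below (written for this page) checks:

```python
import os

def overlay_files_present(bit_path):
    """Check that the bitstream and its companion Tcl file sit side by
    side, as the Overlay class expects."""
    tcl_path = os.path.splitext(bit_path)[0] + '.tcl'
    return os.path.isfile(bit_path) and os.path.isfile(tcl_path)

# On the PYNQ-Z1 itself (requires the pynq package):
#   from pynq import Overlay
#   ol = Overlay('/home/xilinx/dnn/dnn_accelerator.bit')  # downloads the
#   # bitstream by default and parses the Tcl
```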

Create directory.png
Create python.png
Overlay.png
T.png
T2.png