Fall 2006
CMPT 310: Introduction to Artificial Intelligence Instructor: Greg Mori
Assignment 5: Learning
Due Dec. 4 at 11:59pm
This assignment can be done in pairs.
Important Note: The university policy on academic dishonesty (cheating) will be taken
very seriously in this course. Students will work in pairs on this assignment. You may not
discuss the specific questions in this assignment, or their solutions, with any other group.
You may not provide or use any solution, in whole or in part, to or by another group.
You are encouraged to discuss the general concepts involved in the questions in the context
of completely different problems. If you are in doubt as to what constitutes acceptable
discussion, please ask! Further, please take advantage of office hours offered by the instructor
and the TA if you are having difficulties with this assignment.
Part 0: Handwritten Digits
Download the images from the course website. There are two paths to follow, the MATLAB
path (which I believe to be easier), and the Java path (which you may be much more familiar
with). Choose one of the following:
1. Tarball 1.1 (data_java.tar.gz) contains a huge set of 60000 training images, examples
of handwritten digits. The binary file format is a pair of integers giving the height h and
width w of the image, followed by a list of hw integers. Tarball 1.2 (code_java.tar) contains
code for reading in these images, written in Java. Along with the directory of images is a
file containing 60000 labels (integers in {0, 1, . . . , 9}) in the same format as the image files:
h is 60000, w is 1. Again, Java code for reading these is available.
2. Tarball 2.1 (data_matlab.tar.gz) contains 2 data files, and Tarball 2.2 (code_matlab.tar)
a MATLAB function for reading in these data files. Running [train_images,
train_labels] = readData; will store a 28-by-28-by-60000 array in train_images and a
60000-by-1 array in train_labels. Each entry train_images(:,:,i) corresponds to one
image. An image can be viewed by simply typing figure; imshow(train_images(:,:,i)).
If you do not have large amounts of RAM, instead load train1000.mat (load('train1000.mat')),
which will load only the first 1000 images and labels into variables train_images1000 and
train_labels1000. If you also don't have a newer version of MATLAB that supports compressed
.mat files, use load('train1000v6.mat') after gunzipping said file.
For your viewing pleasure, all 60000 training images have been converted to JPEG format,
and are available as a tarball on the course website. It is not recommended that you use
these as input to your program, as this JPEG compression is lossy.
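As a concrete sketch of the binary format described above (two integers h and w, then hw pixel integers), the following Python snippet shows one way to read a file. The assignment expects MATLAB or Java; this is illustrative only, and the integer width and byte order are assumptions (4-byte big-endian), so check against the provided Java reader if the data looks scrambled.

```python
import struct

def read_binary_image(path):
    """Read one file in the assignment's binary format:
    integers h, w followed by h*w pixel integers.
    ASSUMPTION: 4-byte big-endian ints; verify against the
    provided Java reader before relying on this."""
    with open(path, "rb") as f:
        h, w = struct.unpack(">ii", f.read(8))
        pixels = struct.unpack(">" + "i" * (h * w), f.read(4 * h * w))
    # Reshape the flat pixel list into h rows of w columns.
    return [list(pixels[r * w:(r + 1) * w]) for r in range(h)]
```

The labels file can be read the same way, with h = 60000 and w = 1.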

Part 1: Perceptron Learning
Train a single layer perceptron to recognize the digit 7 in your chosen programming language.
There should be M = 28 ∗ 28 + 1 = 785 input units (xi ), one for each pixel in an image,
plus a bias term (which is set to -1 for all inputs). There is one output unit, which should
be trained to be 1 if the image is of a digit 7, and 0 otherwise.
Your method should initialize the 785 weights Wj to some random values, and use the
Perceptron Learning algorithm to find (locally) optimal weights using gradient descent.
Use a subset of the 60000 images to train your perceptron (perhaps the first 100, or first
1000, or first 10000).
Pseudocode for perceptron learning:
W = random initial values
for iter = 1 to T
    for i = 1 to N (all examples)
        x = input for example i
        y = output for example i
        Wold = W
        Err = y - g(Wold · x)
        for j = 1 to M (all weights)
            Wj = Wj + α · Err · g'(Wold · x) · xj
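The pseudocode above can be fleshed out as follows. This is an illustrative Python sketch, not the expected MATLAB/Java submission; the sigmoid activation, learning rate, and helper names are assumptions for the example.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_perceptron(examples, alpha=0.1, T=50):
    """Gradient-descent perceptron learning on a single sigmoid unit.
    examples: list of (x, y) pairs; x is a list of M inputs whose last
    entry is the bias input (-1), y is 0 or 1.
    Hypothetical helper, not the provided course code."""
    M = len(examples[0][0])
    W = [random.uniform(-0.05, 0.05) for _ in range(M)]
    for _ in range(T):
        for x, y in examples:
            a = sum(w * xi for w, xi in zip(W, x))   # Wold . x
            g = sigmoid(a)
            err = y - g                              # Err = y - g(Wold . x)
            gprime = g * (1.0 - g)                   # sigmoid derivative
            for j in range(M):
                W[j] += alpha * err * gprime * x[j]
    return W

def classify(W, x):
    """An example is classified positive if g(W . x) > 0.5."""
    return sigmoid(sum(w * xi for w, xi in zip(W, x))) > 0.5
```

Note that the weight update uses the activation computed from the old weights for the whole example, matching the Wold in the pseudocode.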
For your testing output, show the learned weights Wj (without the bias term) as an image
(use reshape(W(1:end-1),[28 28]) in MATLAB, or write a PGM image using the provided routine in Java). Comment briefly on what you see.
A test example with input x is classified as a 7 if the output g(W · x) of the final perceptron
is greater than 0.5.
Also show the results of testing your perceptron on a test set chosen as a subset of the provided
data (disjoint from the training set you use). Report the classification results for at least 5
different training set sizes. For each training set size, run at least 5 different experiments
with a different random training set of the given size. Note that a “random” subset can
be obtained using randperm in MATLAB, or by starting from a random index (Java or
MATLAB). Plot % correct on test set versus # training examples on a graph by hand, in
MATLAB, or using your favourite program.
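For the random disjoint train/test split described above, a randperm-style selection can be sketched as follows. This Python sketch is illustrative only; the function name and seed parameter are hypothetical.

```python
import random

def split_train_test(n_total, n_train, n_test, seed=None):
    """Mimic MATLAB's randperm: shuffle the indices 0..n_total-1,
    then take disjoint training and test index sets from the front."""
    rng = random.Random(seed)
    idx = list(range(n_total))
    rng.shuffle(idx)
    return idx[:n_train], idx[n_train:n_train + n_test]
```

Fixing the seed makes each of the 5 experiments per training-set size reproducible.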
Part 2: Neural Networks
Create a neural network that attempts to recognize all of the digits 0 through 9. There should
be 10 output units, one corresponding to each of the 10 digits. The rest of the structure of
the network is up to you.
There should be at least 1 hidden layer of nodes in between the input and output layers. The
input layer does not have to be all 784 input pixels (plus a bias term), it can be something
else of your choosing.
Train this network using back-propagation (see lecture slides).
Again, train using some subset of the 60000 images, and use a disjoint subset to test your
trained network. Again, show recognition rates using at least 2 different sizes of training
set (one run for each is sufficient this time).
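The back-propagation training loop for a network with one hidden layer can be sketched as below. This is an illustrative Python sketch, not the expected MATLAB/Java submission: the layer sizes, learning rate, squared-error loss, and omission of bias units are all placeholder choices, not prescribed by the assignment.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TwoLayerNet:
    """One hidden layer of sigmoid units, trained by back-propagation
    of squared error. Bias units are omitted for brevity."""
    def __init__(self, n_in, n_hidden, n_out, seed=None):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-0.1, 0.1) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.W2 = [[rng.uniform(-0.1, 0.1) for _ in range(n_hidden)]
                   for _ in range(n_out)]

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.W1]
        o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in self.W2]
        return h, o

    def train_example(self, x, target, alpha=0.5):
        h, o = self.forward(x)
        # Output-layer deltas: (t_k - o_k) * g'(o_k)
        d_out = [(t - ok) * ok * (1 - ok) for t, ok in zip(target, o)]
        # Hidden-layer deltas: propagate the output deltas back through W2
        d_hid = [hj * (1 - hj) *
                 sum(d_out[k] * self.W2[k][j] for k in range(len(d_out)))
                 for j, hj in enumerate(h)]
        for k, row in enumerate(self.W2):
            for j in range(len(row)):
                row[j] += alpha * d_out[k] * h[j]
        for j, row in enumerate(self.W1):
            for i in range(len(row)):
                row[i] += alpha * d_hid[j] * x[i]

    def predict(self, x):
        """Classify by the index of the most active output unit."""
        _, o = self.forward(x)
        return max(range(len(o)), key=lambda k: o[k])
```

For the digit task, target vectors would be one-hot over the 10 outputs, and predict returns the recognized digit as the argmax over output units.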
I will use a completely separate set of images to test your networks for the digit recognition
contest. For the contest:
1. If you are using Java, please create a class RecognizeDigits XX YY, with your student
numbers in place of XX and YY, as before. This class should extend the template class
RecognizeDigits provided.
2. If you are using MATLAB, please create a function recognizeDigits XX YY, with your
student numbers in place of XX and YY as before. Use the template provided without
changing the function header.
In short, your function/class should work with the run tournament file provided, with
absolutely no modifications.
Submitting Your Assignment
You must create your assignment in electronic form. For each part of the assignment, submit
the source code as well as the test output mentioned in each question. For part 2 give a brief
summary of the structure of the network used, and any other design choices made. Submit
these in one of the formats listed below.
In total, you should submit:
• All source code for parts 1 and 2
• Error rate plot for training perceptron
• Visualization of learned weights for perceptron for one of these training sets, with brief
comments
• Error rates (don’t have to plot) for training neural network with ≥ 2 sizes of training set
• Brief summary of structure of neural network and inputs
The accepted text formats are:
• ASCII Text
• HTML
• PDF (Portable Document Format)
And the accepted image formats are:
• GIF
• JPEG
• PDF
You must include a completed cover sheet along with all the files from your completed
assignment. This file is available from the course main page: http://www.cs.sfu.ca/
~mori/courses/cmpt310/a5/cover_sheet_a5.txt This file will be used by the TA to give
you feedback on your assignment, so don’t forget to complete it. This cover sheet also
includes the grading scheme.
All files should be labeled appropriately so that the markers can easily follow and find
your work. Create an archive (either *.zip or *.tar.gz) containing all relevant assignment
files, plus the cover sheet. When you are done, go to the Assignment Submission Page at
https://submit.cs.sfu.ca/ and follow the instructions there.