Artificial Neural Networks: Matrix Form (Part 5)

To actually implement a multilayer perceptron learning algorithm, we do not want to hard-code the update rules for each weight. Instead, we can formulate both feedforward propagation and backpropagation as a series of matrix multiplies, which makes the implementation both simpler and faster. This is also what leads to the impressive performance of neural nets - offloading matrix multiplies to a graphics card allows for massive parallelization over large amounts of data.
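As a preview, feedforward propagation for a whole batch really is just one matrix multiply per layer followed by an elementwise activation. Here is a minimal pure-Python sketch - the layer sizes, weights, and inputs are made-up values for illustration, not the network from the tutorial:

```python
import math

def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def sigmoid_all(M):
    """Apply the sigmoid activation elementwise to a matrix."""
    return [[1.0 / (1.0 + math.exp(-x)) for x in row] for row in M]

# Hypothetical toy network: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[0.1, -0.2, 0.3],
      [0.4,  0.5, -0.6]]
W2 = [[0.7], [-0.8], [0.9]]

# Each row of X is one training example, so a single matrix multiply
# per layer propagates every example in the batch at once.
X = [[1.0, 2.0],
     [0.5, -1.5]]

hidden = sigmoid_all(matmul(X, W1))       # shape (2, 3)
output = sigmoid_all(matmul(hidden, W2))  # shape (2, 1)
```

In a real implementation you would hand these multiplies to a linear algebra library (or a GPU), which is exactly where the parallelism comes from.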

This tutorial will cover how to build a neural network that uses matrices. It builds heavily off of the theory in Part 4 of this series, so make sure you understand the math there! 

Read More

Artificial Neural Networks: Mathematics of Backpropagation (Part 4)

Up until now, we haven't utilized any of the expressive non-linear power of neural networks - all of our simple one layer models corresponded to a linear model such as multinomial logistic regression. These one-layer models had a simple derivative. We only had one set of weights that fed directly to our output, and it was easy to compute the derivative with respect to these weights. However, what happens when we want to use a deeper model? What happens when we start stacking layers?

Read More

Linux snippets: sorting a file, ignoring the header

When working with large data files that have a header, sometimes it is more efficient to sort the files for evaluation so that a streaming algorithm can be used. In addition, you may want to simply sort the data that you have by some key for organizational and readability purposes. Regardless, a lot of data preparation involves doing something with data in a delimited file containing a header, while also preserving the position and contents of the header.

Here is a short example that sorts a tab delimited file with a header by the first field in the file:

(head -n 1 data.tsv && tail -n +2 data.tsv | sort -k1,1 -t$'\t') > data_sorted.tsv
What this command does is spawn a subshell that runs everything in the parentheses, then redirects the output to a second file. Within the parentheses, we first get the header (head -n 1). Then we run another command that takes everything except the header (tail -n +2) and pipes it to the sort utility. The arguments to sort include the field to sort by (-k1,1, which restricts the sort key to the first field - a plain -k1 would sort from the first field to the end of the line) and a delimiter (-t$'\t', which specifies tab as the delimiter - alternatively, you can paste a literal tab character by typing Ctrl-V followed by Tab). You could substitute whatever routine you want for sort.

Linux snippets: using xclip to pipe to the system clipboard

A lot of times I write scripts to generate code, specifically in the case where I have to generate a large amount of SQL column names. If I want to then paste this into a file in the appropriate place, I can either copy and paste from the terminal (which is cumbersome, especially on Linux) or pipe it to a file, and then copy and paste it (which is also a bit unwieldy).

Instead, we can save a step by piping directly to the system (X) clipboard using xclip.  To get it on Ubuntu, we can install it from the repositories:

sudo apt-get install xclip

The default behavior of xclip is not to put its input onto the system clipboard (it puts text in the X primary selection, so you'll be able to middle-click to paste in X applications, but not your IDE), so I created an alias in my .bashrc (or .zshrc) file:

alias xclip='xclip -selection c'

Then, you can pipe to the system clipboard with:

cat long_file.txt | xclip
Now you can paste the output of cat long_file.txt with the system paste command into any other application.

Search Bash history by first few characters (like MATLAB)

One convenient feature of the MATLAB interpreter is the ability to type in the first few characters of a previous command and press the "up" arrow to search for all previous commands that begin with those characters. Bash doesn't enable this behavior by default - you can use Ctrl-R to search anywhere in a previous command, but not by the first few characters. If you want to add this functionality to Bash, add the following to your ~/.inputrc file (from this helpful askubuntu.com post):

## Search backwards with the up arrow
"\e[A":history-search-backward
## Search forwards with the down arrow
"\e[B":history-search-forward

Now just type the first few characters of a command in your history, and press the up arrow to search backward or the down arrow to search forward.

Artificial Neural Networks: Linear Multiclass Classification (Part 3)

In the last section, we went over how to use a linear neural network to perform classification. We covered using both the perceptron algorithm and gradient descent with a sigmoid activation function to learn the placement of the decision boundary in our feature space. However, we only covered binary classification. What if we instead want to classify a point as belonging to one of $K$ classes?
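One standard way to handle $K$ classes is the softmax function, which turns $K$ raw linear scores into a probability distribution over the classes. A minimal sketch, using made-up scores for a single point:

```python
import math

def softmax(scores):
    """Convert K raw scores into a probability distribution over K classes."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical linear scores for one point under K = 3 classes.
probs = softmax([2.0, 1.0, 0.1])

# Predict the class with the highest probability.
predicted_class = max(range(len(probs)), key=lambda k: probs[k])
```

The probabilities always sum to one, and the largest score wins - here class 0.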

Read More

Artificial Neural Networks: Linear Classification (Part 2)

So far we've covered using neural networks to perform linear regression. What if we want to perform classification using a single-layer network?  In this post, I will cover two methods: the perceptron algorithm and using a sigmoid activation function to generate a likelihood. I will not cover the delta rule because it is a special case of the more general backpropagation algorithm, which will be covered in detail in Part 4.
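To make the sigmoid-likelihood idea concrete, here is a tiny sketch for a single point - the weights, bias, and input are illustrative assumptions, not values from the post:

```python
import math

def sigmoid(z):
    """Squash a real-valued score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned weights and bias for a single-layer classifier.
w = [0.5, -1.0]
b = 0.2
x = [2.0, 1.0]

# The sigmoid of the linear score is read as the likelihood of the
# positive class: score = 0.5*2.0 - 1.0*1.0 + 0.2 = 0.2.
score = sum(wi * xi for wi, xi in zip(w, x)) + b
p_positive = sigmoid(score)
label = 1 if p_positive >= 0.5 else 0
```

Thresholding the likelihood at 0.5 recovers a hard decision boundary, but keeping the probability around is what lets gradient descent learn the weights smoothly.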

Read More

Android Snippets: Showing a ProgressDialog in an AsyncTask

AsyncTask is a relatively pain-free way to thread a background task in an Android application. However, you may want to discourage the user from performing any interaction with the application while the task is running. Or you may want to simply let them know that something is happening in the background, e.g. their file is actually downloading and pressing the button again won't make it happen any faster!

In any case, here is a short example of how to show a ProgressDialog while your AsyncTask is running:

private class BackgroundTask extends AsyncTask<Void, Void, Void> {
	private ProgressDialog dialog;
	
	public BackgroundTask(MyMainActivity activity) {
		dialog = new ProgressDialog(activity);
	}

	@Override
	protected void onPreExecute() {
		dialog.setMessage("Doing something, please wait.");
		dialog.show();
	}
	
	@Override
	protected void onPostExecute(Void result) {
		if (dialog.isShowing()) {
			dialog.dismiss();
		}
	}
	
	@Override
	protected Void doInBackground(Void... params) {
		try {
			Thread.sleep(5000);
		} catch (InterruptedException e) {
			e.printStackTrace();
		}

		return null;
	}
	
}

You should replace MyMainActivity in the ProgressDialog constructor with the name of the calling activity. Note also that this example doesn't actually do anything - it just sleeps for 5 seconds and then finishes. To start the task, you can use the following:

BackgroundTask task = new BackgroundTask(MyMainActivity.this);
task.execute();

Artificial Neural Networks: Linear Regression (Part 1)

Artificial neural networks (ANNs) were originally devised in the mid-20th century as a computational model of the human brain. Their use waned because of the limited computational power available at the time, and because of some theoretical issues that weren't solved for several decades (which I will detail at the end of this post). However, they have experienced a resurgence with the recent interest and hype surrounding Deep Learning. One of the more famous examples of Deep Learning is the "YouTube Cat" paper by Andrew Ng et al.

It is theorized that because of their biological inspiration, ANN-based learners will be able to emulate how a human learns to recognize concepts or objects without the time-consuming feature engineering step. Whether or not this is true (or even provides an advantage in terms of development time) remains to be seen, but currently it's important that we machine learning researchers and enthusiasts have a familiarity with the basic concepts of neural networks.

This post covers the basics of ANNs, namely single-layer networks. We will cover three applications: linear regression, two-class classification using the perceptron algorithm, and multi-class classification.

Read More

ML Primers

During my first year of studying machine learning, I've read a lot of papers, book chapters, and online tutorials. I like to learn by example. For me, theory paired with an implementation is the best way to learn a topic in machine learning. However, nearly every tutorial I've come across has a lot of one and little of the other. The ones that include both are usually presented at such a high level that they are of little use to someone trying to fully understand the topic at hand, or their code isn't even public!

Over the past few years, I've collated a lot of material from different sources to create a set of "primers" that are introductions to a wide array of machine learning topics. They include a theoretical foundation paired with a short tutorial with accompanying code written in Python. After going through one of these primers, you should have enough basic knowledge to begin reading papers about a particular subject without feeling too lost.

I'm currently in the process of posting these primers. Most of my original code was written in MATLAB, but it's not free and not everyone has access to it through their university or workplace, so I'm translating it to Python. The primers are aimed at an audience familiar with calculus and computer science so as not to "dumb down" any material, but I try to avoid using undefined terms or concepts.

I hope you find them helpful! If you have any issues with the code or material, feel free to leave a comment.