St Andrews University Hackathon (STACSHACK) 2016

In February my friends and I attended St Andrews' hackathon. It went fairly well, and I learned some cool new tech! This post covers the details.

StacsHack 2016 was a 2-day event from Saturday 20th Feb to Sunday 21st. We arrived around 10am on Saturday; hacking started at 12pm and continued until 12pm the next day.

It wasn't until around 4pm on Saturday that I had a firm idea of what my final application was going to be. My initial idea was to create a social media Heads-Up-Display (HUD) that would use augmented reality to overlay people's social media feeds next to their heads. There weren't any augmented reality devices available at the hackathon (I certainly wouldn't mind playing around with a HoloLens, MLH!), so my idea was to use a phone camera strapped to the front of an Oculus Rift (which were available).

By around 1.30pm I had my hands on an Oculus Rift, but unfortunately didn't have hardware that was particularly compatible with it. My laptop is decently specced for a laptop, but the way Oculus has designed the software means it won't work well with laptop GPUs. This doesn't really make sense for my use case, as I wasn't planning on doing much rendering for the Oculus - just passing through a video feed from a phone - but it thoroughly put an end to the idea.

So I downsized my idea from augmented reality to just using a webcam. JP Morgan had issued a Security Challenge: build an application using OpenCV that could act as authentication for any type of application. They wanted to be able to train a program to recognise individuals - like a fingerprint scanner, but for faces.

I figured this challenge was my best chance at winning a prize while staying close to my original idea. I decided to create the application they were looking for, but incorporate part of my original idea - during training the program would prompt for a user's social media details (such as Twitter and GitHub), and it would display a feed next to their head when they were recognised.

I began implementing using Java and OpenCV, but quickly ran into a couple of issues tying the two together. It seemed possible, but difficult. Then I came across JavaCV and my problems were solved! JavaCV is a pretty thin Java wrapper around OpenCV, so it was ideal for my purposes.

Before I go any further, the source code is available on GitHub here. I created a demonstration video to show the capabilities of the software; it's available on YouTube here (since Wagtail won't embed the video without leaving a massive space in this page :(... ).

I began by simply trying to get JavaCV to recognise ANY face from an image file. I had this working by 4pm on Saturday by using some example code online combined with some useful StackOverflow answers. The code for this was rather simple, consisting of around 40 lines of Java; the crucial section is shown below.

// Create a face detector from the cascade file in the resources directory.
CascadeClassifier faceDetector = new CascadeClassifier(getClass().getResource("/resources/lbpcascade_frontalface.xml").getPath());
Mat image = Imgcodecs.imread(getClass().getResource("/resources/face2.jpg").getPath());

// Detect faces in the image.
// MatOfRect is a special container class for Rect.
MatOfRect faceDetections = new MatOfRect();
faceDetector.detectMultiScale(image, faceDetections);

System.out.println(String.format("Detected %s faces", faceDetections.toArray().length));

// Draw a bounding box around each face.
for (Rect rect : faceDetections.toArray()) {
    Imgproc.rectangle(image, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(0, 255, 0));
}

// Save the visualized detection.
String filename = "faceDetection.png";
System.out.println(String.format("Writing %s", filename));
Imgcodecs.imwrite(filename, image);
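
One thing this excerpt glosses over: if you run the plain OpenCV Java bindings (rather than relying on JavaCV to pull in the natives for you), the native library has to be loaded once before any OpenCV call. A minimal sketch of that setup (the class name here is just illustrative):

import org.opencv.core.Core;

public class FaceDetectDemo {
    static {
        // Load OpenCV's native library once, before any calls into the bindings.
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }
}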

The next stage was to use this code to take input from my laptop's webcam, render a rectangle around faces in the webcam's images, and display these on the screen. I also needed to implement training the system on specific users' faces. I had this working by 10pm (according to my commit messages!), which demonstrates the relative ease of using JavaCV/OpenCV. It took one person less than the equivalent of 2 working days to have a basic working system that trains new users in under a minute and works with quite good accuracy.
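For context, here's a rough sketch of what that capture-and-recognise loop and the training step look like. This is not my exact hackathon code - the class and method names come from the standard OpenCV Java bindings (including the contrib face module for LBPHFaceRecognizer), and displaying the frames is left out:

import org.opencv.core.*;
import org.opencv.face.LBPHFaceRecognizer;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.videoio.VideoCapture;

import java.util.List;

public class WebcamRecognitionSketch {

    public static void recognitionLoop(CascadeClassifier faceDetector,
                                       LBPHFaceRecognizer recognizer) {
        VideoCapture camera = new VideoCapture(0);   // default webcam
        Mat frame = new Mat();

        while (camera.read(frame)) {
            Mat grey = new Mat();
            Imgproc.cvtColor(frame, grey, Imgproc.COLOR_BGR2GRAY);

            MatOfRect faces = new MatOfRect();
            faceDetector.detectMultiScale(grey, faces);

            for (Rect rect : faces.toArray()) {
                // Predict which trained user this face belongs to, and how confident we are.
                int[] label = new int[1];
                double[] confidence = new double[1];
                recognizer.predict(new Mat(grey, rect), label, confidence);

                // Draw the bounding box; the real UI also renders the user's name,
                // a confidence indicator, and their social media feed here.
                Imgproc.rectangle(frame, new Point(rect.x, rect.y),
                        new Point(rect.x + rect.width, rect.y + rect.height),
                        new Scalar(0, 255, 0));
            }
            // Display "frame" in a window (e.g. via JavaCV's CanvasFrame or a Swing panel).
        }
        camera.release();
    }

    public static void trainOnUser(LBPHFaceRecognizer recognizer,
                                   List<Mat> greyscaleFaceCrops, int userLabel) {
        // Training takes a list of cropped greyscale face images and one integer label per image.
        Mat labels = new Mat(greyscaleFaceCrops.size(), 1, CvType.CV_32SC1);
        for (int i = 0; i < greyscaleFaceCrops.size(); i++) {
            labels.put(i, 0, userLabel);
        }
        recognizer.train(greyscaleFaceCrops, labels);
    }
}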

By 2am I had implemented a framerate display, refactored the code a couple of times, and added an indicator of how confident my application was in each recognition.

The next day I picked up Ben Jackson as a team member, as his project hadn't worked out. With his help I managed to add a lot of aesthetic improvements to the application, as well as social media integration. We also added a way of specifying, via a text file, which users must be on the screen TOGETHER in order to "unlock" the application.
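
As a rough illustration of that rule (the exact file format and method names in our repo may differ - assume one username per line here), the check boils down to a simple set-containment test:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class UnlockRule {

    // Each line of the text file names one user who must be on screen.
    public static Set<String> loadRequiredUsers(String path) throws IOException {
        return new HashSet<>(Files.readAllLines(Paths.get(path)));
    }

    // The application "unlocks" only when every required user is currently recognised.
    public static boolean shouldUnlock(Set<String> requiredUsers, Set<String> recognisedUsers) {
        return recognisedUsers.containsAll(requiredUsers);
    }
}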

The aesthetic improvements included getting the application to display higher resolution images, having more informative error messages, improving performance, and giving different users different colours. These kinds of small changes may not seem important, but they can make a real difference when it comes to judging at a hackathon. They make the application seem a lot more polished, even if the functionality hasn't changed much.

Once again, the source code and commit history can be viewed at https://github.com/Sheepzez/social-face-recog.

In the end, Ben and I each won a Raspberry Pi 2 Model B starter kit (for winning the JP Morgan Security Challenge). We both have RPi 2s already, but hey, you can never have too much raspberry pie. That's in addition to the plethora of t-shirts all attendees got - seriously, if you need a wardrobe refresh, check out your local hackathons.

If you enjoyed this post, please check out my other blog posts.