Monday, July 25, 2016

Reversing the WWDC Wall

Martin Conte Mac Donell:

I took ~50 (rather sloppy) photos of the wall and wrote a program to do image stitching using a cylindrical projection.
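The core of a cylindrical projection is inverse-mapping each output pixel back onto the flat photo before blending the photos together. A minimal numpy-only sketch of that warp (the author used OpenCV; the focal-length parameter `f` and nearest-neighbour sampling here are assumptions for illustration):

```python
import numpy as np

def cylindrical_warp(img, f):
    """Project a flat image onto a cylinder with focal length f (in pixels).

    For each output pixel, compute its angle/height on the cylinder and
    inverse-map to the flat image with nearest-neighbour sampling.
    Simplified sketch; a real stitcher would also blend overlapping frames.
    """
    h, w = img.shape[:2]
    xc, yc = w / 2.0, h / 2.0
    ys, xs = np.indices((h, w))
    theta = (xs - xc) / f              # angle around the cylinder axis
    hh = (ys - yc) / f                 # normalized height on the cylinder
    x_flat = np.tan(theta) * f + xc    # where this ray hits the flat image
    y_flat = hh * f / np.cos(theta) + yc
    xi = np.round(x_flat).astype(int)
    yi = np.round(y_flat).astype(int)
    valid = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

Once each photo is warped this way, overlapping frames differ only by a horizontal translation, which makes the subsequent stitching step much simpler.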


With the flattened image, and using OpenCV again, I detected all the contours (letters) and created an inverse mask to remove the blue-ish background, keeping only the letters.
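The idea of the inverse mask can be sketched with plain numpy thresholding: mark every dominantly blue pixel as background and keep the rest. This is a simplified stand-in, not the author's method (he detected contours with OpenCV), and `blue_margin` is an assumed tuning parameter:

```python
import numpy as np

def letter_mask(img_rgb, blue_margin=30):
    """Inverse mask for a blue-ish background.

    Returns True where a pixel is NOT dominantly blue, i.e. where the
    letters are. A numpy-only sketch of the masking idea; the original
    built the mask from OpenCV contours instead of a color threshold.
    """
    img = img_rgb.astype(np.int16)  # avoid uint8 overflow in comparisons
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    background = (b > r + blue_margin) & (b > g + blue_margin)
    return ~background

# Applying it: zero out the background, keep the letter pixels.
# letters = img_rgb * letter_mask(img_rgb)[..., None]
```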


This image was already good enough to run Tesseract on. To get the best results, I trained Tesseract on the San Francisco Mono font before running it.
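A hypothetical invocation sketch, assuming the custom training data was installed under the name `sfmono` (the file name and input/output names here are illustrative, not from the original writeup):

```shell
# OCR the masked wall image with the font-specific training data.
# -l sfmono  : assumed name for the San Francisco Mono traineddata file
# -psm 6     : treat the image as a uniform block of text (Tesseract 3.x flag)
tesseract wall_letters.png wall_text -l sfmono -psm 6
```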


Go find your app phrase here; you can also get the JSON representation of the wall from here.


"Hello destructive goating."

I don't even... what?
