Redesign of one of the forms in the Control Panel

February 2nd, 2012

One of the biggest problems I had with the old control panel was that certain functions really weren't well thought out, which made the control panel quite cumbersome to use. Some of the earlier redesigns of the Task Status Report made a large impact; making the report easy to read and quick to convey the critical information was one of the goals.

The primary goal was to make doing things through the Control Panel as easy as starting an ssh session and doing them manually. But we also needed the form to be easy enough for people who didn't know exactly what they needed.

Easy enough for everyone, yet still quick enough for someone who knows precisely what they want.

The following two screenshots are from the old Control Panel. Arguably, the Simple interface was harder to use than the Advanced interface.

Simple Cronjob Entry Form

Advanced Cronjob Entry Form

The replacement Cron Job entry page:

With a few buttons to let very commonly used options be selected, adding a cron job becomes quite easy. I have debated adding a single-line entry that just takes a raw crontab entry, so that people don't need to break it apart and fill in the form.
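(For reference, a raw crontab entry is just the five scheduling fields followed by the command, e.g. 0 3 * * * /usr/local/bin/backup.sh, where the script path here is purely illustrative.)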

Sometimes, forms can be usable.

Using Git to generate an automatic version number

February 2nd, 2012

With one of my most recent projects clocking in at 6331 lines of code, it became difficult to tell, after a number of commits, whether production and development were running the same version.

The first post I came across, Canonical Version Numbers with Git, seemed like a good starting point.

The first thing you need to do is set a tag on your repository:

git tag Prod-1

and we can test to make sure our tag was set:

git describe --tags --long
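Immediately after tagging, that prints the tag, the number of commits since the tag, and an abbreviated sha, e.g. Prod-1-0-g8f63877 (the sha shown is illustrative).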

Now, we create /usr/local/bin/git-revision.sh (as mentioned in one of the comments, with a minor change):

#!/bin/bash
# Total number of commits in the current branch
revisioncount=`git log --oneline | wc -l`
# Most recent tag, commits since that tag, and abbreviated sha
projectversion=`git describe --tags --long`
# Strip everything from the first dash onward, leaving just the tag
cleanversion=${projectversion%%-*}

echo "$projectversion-$revisioncount"
#echo "$cleanversion.$revisioncount"

You can use either projectversion or cleanversion, depending on the format you prefer.

projectversion contains the sha snippet along with the version and looks like: 1.0-1-g8f63877-291 (the examples here assume a tag named 1.0).

cleanversion strips everything from the first dash onward, removing the sha and leaving 1.0.291.

Now that we've got a version number that changes on each commit, we can use it somewhere. The version is updated AFTER the last commit, so you'll want to import it into your project somewhere, or include it in your commit message.

Before you commit, as one of the commenters on the previous post said, execute:

git-revision.sh > version

For our footer.mako file, we have the following:

<footer>
${request.user.company} ${request.user.phone} ${request.user.email}
Version: 1.2.3
</footer>

We now create post-commit in $GIT/hooks:

#!/bin/bash

# Version string generated by the script above
VERSION=`/usr/local/bin/git-revision.sh`

FOOTER='/var/www/path/to/templates/footer.mako'

CONTENTS=`cat $FOOTER`

# Everything up to and including the "Version:" label
UP_TO_VERSION="${CONTENTS%%Version:*} Version:"
# The closing </footer> tag and everything after it
CLOSE_FOOTER="</footer>${CONTENTS##*</footer>}"

echo "$UP_TO_VERSION $VERSION $CLOSE_FOOTER" > $FOOTER
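Remember that git only runs hooks that are executable, so make sure to:

chmod +x $GIT/hooks/post-commit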

You can do whatever you need in the post-commit script; I just chose to use bash to rewrite the version number in the footer of the site. You could instead write the version number to a header file that is included, if that works better with your system.

First beta of my new webapp, SnapReplay.com

January 28th, 2012

After a late night working until 2:45am on the finishing touches, my phone alerted me to the fact it was 5:35am and time for the first Live Photostream to take place.

The stream? Roger Waters – The Wall, from the Burswood Dome in Perth, AUSTRALIA. Special thanks to Paul Andrews for taking the pictures and doing a lot of testing beforehand.

From those who watched the stream in realtime (about 23 people, based on a counter that I hadn't reset from my testing) I received a bit of good feedback and collected a lot of data to analyze.

Rule #1, do NOT keep modifying things during a live event. I spotted a minor tweak, made the change, and broke some functionality. While it didn't affect things, it did bug me to see the problem during the first test. Live with things until after the event, unless it is truly a showstopper. In this case, it was just an html tweak which caused some javascript to wrap and broke the jQuery click handler.

Rule #2, you can never collect enough data. While watching the stream, I realized I had turned off almost all of the debugging hints in node.js during development because they were really noisy. Most of the static assets are served by Varnish, so those requests never hit the backend and I didn't have a good indicator of real traffic. Running varnishncsa in one window while watching node.js with a bit of debugging turned on let me see things as they happened, but not logging pageviews, socket connects/disconnects and other data eliminates the ability to review things after the fact. I had thought about putting some hooks into some of the express events (express being the framework I'm using), along the lines of the sketch below.
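A minimal logging sketch, assuming an existing Express app and socket.io instance (the log format is hypothetical, not what the site actually records):

// Log every request that reaches the backend
app.use(function (req, res, next) {
  console.log(new Date().toISOString(), req.method, req.url);
  next();
});

// Log socket connects and disconnects
io.sockets.on('connection', function (socket) {
  console.log('connect', socket.id);
  socket.on('disconnect', function () {
    console.log('disconnect', socket.id);
  });
});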

Rule #3, always test your production environment well before the event/launch. I had a very compressed development timetable (knowing on Jan 13 that we wanted to do the first event on Jan 28), so some infrastructure decisions I had made were not tested thoroughly beforehand, and I had to run with a less than optimal setup. While Varnish and socket.io do work well together, some browser combinations had issues during brief usability tests. Fifteen days to write a scalable architecture and an Android app is difficult. I had no experience with node.js or socket.io prior to Nov 11, and hadn't touched Java since 2002 or so, so I spent a fair bit of time dealing with issues that came from lack of exposure to both.

Since it isn't recommended to serve static content from node.js, I used Varnish in a 'cdn' setup to offload static assets and media content. This worked very well until I modified some javascript: one of the rules in my VCL strips querystring arguments, making it impossible to just add ?v=2 to my javascript include. Bans for the CDN were only allowed from another host (remember that 'test your complete production environment' before launch?), so a little manual telnetting from another machine allowed me to purge the javascript.
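The manual purge amounted to something like the following, assuming a VCL that accepts PURGE requests from permitted hosts (the hostname and path are hypothetical):

telnet cdn.example.com 80
PURGE /js/app.js HTTP/1.0
Host: cdn.example.com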

All in all, a great first test, several positive comments, and a nice, long list of requests/enhancements. I can see that this might be a fairly popular project.

If you would like to help beta test and have an Android phone, download the app, take a few snapshots or enter some text, and watch them show up on the Beta Test page.

If you have an event or are attending a concert where you would like to use SnapReplay, I can create a custom app specifically for your event. Let me know in the comments.

A few days of Android development – Java

January 24th, 2012

My latest project is written entirely in Java* (Java and JavaScript). While they aren’t the same by any stretch of the imagination, the project has been quite a learning experience.

From concept inception on Jan 11, 2012, with the first bytes of code written that evening, I dreaded the Android app. Matthew Housden wrote a quick framework with some bare functionality used for testing, and I spent the last three days doing a headfirst dive into Java, which I haven't used in 9+ years.

The first problem I encountered was that the tutorials on the Android SDK site don't work in a 2.2 or 2.3 emulator, nor on my phone. The documentation is littered with half-truths, but when you read it carefully, you can tell that the person who wrote it was a programmer and not a technical documentation writer.

The application is quite simple: three screens, two of which POST to a remote server. However, I spent a ton of time debugging an issue where sending extra data with an Intent causes the onActivityResult() callback to receive a null Intent. I can see the security implications behind this, but the documentation doesn't allude to it.

Roughly 700 lines of Java (most of it autogenerated try/catch blocks, declarations, etc.) produced an app that is 22k until packaged for the market, which balloons it to 290k. Dealing with the Camera was by far the heaviest code; because the context isn't available globally, I had to pass a number of parameters around rather than have a single method that could be called.

It really didn't take long to sink back into Java, as it is just OOP and a language is mostly syntax. Since I had been working heavily with async Javascript in node.js, the event-driven code structures required by Android weren't that difficult to transition to.

Overall impressions: I was quite pleased with how quickly I could write an app that does a few things and interacts with my node.js app. I really regret not doing more Android development for a few other projects I had in mind, but there are only so many hours in the day. This weekend I've got a live beta test planned for the entire project: 17 days from inception to a real test.

Then we’ll see if this thing is worth launching.

node.js 7-day Retrospective

January 19th, 2012

A week ago I was walking the dog, thinking about how to handle a validation routine, when I got sidetracked and started thinking about a different problem from a few weeks earlier. I've been working with Ajax Push on a few things to test some parts of a larger concept.

I'm a big advocate of writing mini-projects to test pieces of an eventual solution. Writing two small projects in APE helped me refine the model of another project I had, which is what triggered this one. CodeRetreat was also a very good experience: rewriting the same code six times in the same day, with your code or methodology getting better on each iteration.

Now I have an idea and need a platform. I know I need Ajax Push, and APE wasn't suitable for my other project. I don't like JavaScript, plain and simple, but node.js uses server-side Javascript and this app would have plenty of client-side Javascript as well. The library I intended to use was socket.io, as it supported the feature set I needed.

Within minutes, I had node.js up and running by installing one of their binary distributions for Debian. This turned out to be a mistake, as the packaged version is extremely old, but it took five days before I ran into a package that required me to upgrade.

node.js is fairly straightforward and I had it serving content shortly after starting it. The first problem I ran into was routing: it seemed cumbersome to define everything by hand, and I started running through a number of route packages. I ended up using Express, a lightweight framework that includes routing. Express also wraps the connect package, which I had used to handle uploads. I refactored the code to use Express and was off and running; a minimal route looks like the sketch below.
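A minimal sketch in the Express 2.x style current at the time (the route and port are placeholders):

var express = require('express');
var app = express.createServer();

// Routing is part of the framework: HTTP method + path + handler
app.get('/', function (req, res) {
  res.send('Hello');
});

app.listen(3000);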

Now I'm serving pages and static files, my stylesheets are loading (with the proper content type), and the site is active. I actually had a problem with some jQuery because the content-type wasn't being set to text/html for my index page.

Next up, image resizing. I used the gm wrapper around graphicsmagick, which worked well enough that I didn't look further. The methods used by gm are quite straightforward and I see very little difference in output quality between it and imagemagick. The command line interface is a bit more straightforward, not that you need that unless you're debugging what gm is actually doing. I did run into a few issues with the async handling which required a bit of rework. I still have some corner cases to solve, but I'm shooting for an alpha release.
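A minimal resize sketch with gm (the paths and size are hypothetical):

var gm = require('gm');

// Write a 640px-wide copy, letting gm preserve the aspect ratio
gm('/tmp/upload.jpg')
  .resize(640)
  .write('/tmp/upload-640.jpg', function (err) {
    if (err) { return console.error(err); }
    console.log('resized');
  });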

Redis was straightforward and I needed that for a counter. Again, the async paradigm makes you write code that an OO or functional programmer might find troubling.

What you expect:

io.sockets.on('connection', function (socket) {
  var counter = redis_client.incr('counter');
  socket.emit('stats', { 'counter': counter });
});

What you really mean:

io.sockets.on('connection', function (socket) {
  redis_client.incr('counter', function (err, res) {
    socket.emit('stats', { 'counter':res });
  });
});

Javascript doesn't support classes, but there are ways to emulate the behavior you're after. This is something you learn when working with Sequelize, the ORM I am using for access to MySQL. I really debated whether to use Redis for everything or log to MySQL for the alpha. I know in the future I'll probably migrate to CouchDB or possibly MongoDB so that I can still do SQL-like queries to fetch data; based on the stream-log data I expected to be collecting, I could see running out of RAM for Redis over time. Sequelize allows you to import your models from a model file, which cleans up a bit of code (see the sketch below). Most of the other ORMs I tried were very cumbersome and required a lot of effort to move models to an external file, resulting in a very polluted app.js.
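A minimal sketch of the model-file pattern, following the sequelize.import() convention of that era (the model, fields and credentials are hypothetical):

// models/photo.js
module.exports = function (sequelize, DataTypes) {
  return sequelize.define('Photo', {
    filename: DataTypes.STRING,
    views:    DataTypes.INTEGER
  });
};

// app.js
var Sequelize = require('sequelize');
var sequelize = new Sequelize('database', 'username', 'password');
var Photo = sequelize.import(__dirname + '/models/photo');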

Now I needed a template language and form library. I initially looked at Jade but wanted something closer to the Python templating languages I usually use. Second on the list was ejs, which is fairly powerful. It defines a layout page and imports your other page into a <%- body %> section, but that's about as powerful as it gets. There is currently support for partial includes, allowing header templates, etc., but that is being phased out and should be done through if branches in the layout.ejs file. A minimal layout looks like the sketch below.
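A minimal layout.ejs of the sort Express rendered into at the time (the markup is a placeholder):

<html>
  <body>
    <%- body %>
  </body>
</html>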

As for a form library, I never found anything satisfying. For most frameworks a good form library is a necessity for me, but I can hand-code the validation routines for this.

Authentication. At this point, I tried to install Everyauth. During installation a traceback is displayed with a fairly cryptic message:

npm ERR! Error: Using '>=' with 1.x.x makes no sense. Don't do it.

Tracking this down, we find a very old packaged version of npm which refuses to upgrade because node.js is too old. In Debian Sid, node.js version 0.4.12 is packaged. What? 0.7.0-pre1 is the latest recommended. Upgrading node.js so that we can install a newer version of npm allows us to move forward.

Note: before upgrading, make sure you commit all of your development changes. I didn’t and lost about fifteen minutes of code due to a sleepy rm. :)

So, now we’re running a newer version of node.js, npm upgrades painlessly and we’ve installed Everyauth.

Everyauth is, well, every auth you can think of. Reading their example code, it looked like they were doing more than they actually do, but they wrap a set of routines and hand back a fairly normalized set of values. Within fifteen minutes I had Facebook and Twitter working, but GoogleHybrid gave me some issues. I opted to switch to Google's OAuth2, but that failed in a different place. I'll have to debug that, fork, and do a pull request. Configuration is a chain of per-provider setters, as sketched below.
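A minimal Twitter configuration in the everyauth style of the time (the keys and the user lookup are placeholders):

var everyauth = require('everyauth');

everyauth.twitter
  .consumerKey('YOUR_CONSUMER_KEY')
  .consumerSecret('YOUR_CONSUMER_SECRET')
  .findOrCreateUser(function (session, accessToken, accessSecret, twitterUser) {
    // Look up or create the local user record here
    return { id: twitterUser.id, name: twitterUser.screen_name };
  })
  .redirectPath('/');

// Mount everyauth's routes and middleware into Express
app.use(everyauth.middleware());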

I need to write the backend logic for Everyauth, but with Sequelize that should be fairly quick.

Down to the basics

Web site performance is always on my mind, and Search Engine Optimization is important for this site as well. Javascript-built pages are somewhat difficult for Googlebot to follow, and we don't want to lose visibility because of that. However, we want to take advantage of a CDN, and using Javascript and DOM manipulation lets us output a static page that can be cached, then use jQuery to customize the page for a logged-in user. The one page that will probably see the heaviest utilization is completely Ajax powered, but it is a short-lived page and probably wouldn't be indexed anyhow.

node.js for serving static files

I debated this. node.js does really well for Ajax and long-polling, but several articles recommend using something other than node.js for static media. I didn't find it to be slow, but other solutions did easily outserve it for static content. Since we're putting all of our content behind Varnish, for the alpha node.js will serve the content to Varnish and Varnish will serve it to visitors. It is possible I'll change that architecture later.

socket.io

While I've just scratched the surface of the possibilities, socket.io is very easy to use and extremely powerful. I haven't found a browser that had any problems, and it abstracts everything so I don't have to worry about which transport it is using to talk to the remote browser. You can associate data with the socket so that you can later identify the listener, which is handy for writing chat applications; see the sketch below.
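A minimal sketch using the socket.io 0.x API of the time (the event names are hypothetical):

io.sockets.on('connection', function (socket) {
  // Store a nickname on the socket so this listener can be identified later
  socket.on('join', function (name) {
    socket.set('nickname', name, function () {
      socket.broadcast.emit('joined', name);
    });
  });

  socket.on('disconnect', function () {
    socket.get('nickname', function (err, name) {
      if (name) { socket.broadcast.emit('left', name); }
    });
  });
});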

Stumbles

At the end of seven days, I'm still stumbling over Javascript's async behavior. At times, function precedence is cumbersome to work around when you're missing a needed closure for a method in a library. I've also tested a number of packages that obviously solved someone's problem and looked good when published, but just weren't generic enough.

168 hours

656 lines of code, most major functionality working, some test code written, and the bulk of the alpha site at least fleshed out.

Overall Impression

node.js is very powerful. I think I could have saved time by using Pyramid and using node.js purely for the Ajax Push, but it was a good test and I learned quite a bit. If you have the time to implement a small project using new technology, I highly recommend it.

Software Used

* node.js
* Express
* gm
* redis
* sequelize
* ejs
* everyauth
* npm
* socket.io

Software Mentions

* APE
* jade

Additional Links

* Blazing fast node.js: 10 performance tips from LinkedIn Mobile
