Variable Naming Conventions

January 13th, 2012

While working on a new project, I had to come up with a database schema and variable names for use throughout the project. Some names read naturally on their own, but there is a logic to choosing names that are more descriptive and that group related items together.

Consider start_date and end_date as variable names. When you are reading the code, or later looking at the database schema, you may not make the association that they are related. date_start and date_end are probably better names, because the shared prefix keeps them together.

Likewise, when you’re storing files submitted by users, thumbnail, high_res and low_res aren’t as descriptive as file_thumbnail, file_highres and file_lowres.

Later, when you decide to add dimensions, you can have file_thumbnail, file_thumbnail_width and file_thumbnail_height, and your database schema will be more readable.
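As a quick illustration (the names and values here are made up), the shared prefixes keep related items grouped when you read the code or scan a sorted column list:

# Hypothetical values, purely to illustrate the prefix grouping:
# related names sort together and read as a family.
date_start = '2012-01-01'
date_end = '2012-01-31'

file_thumbnail = 'uploads/1234_thumb.jpg'
file_thumbnail_width = 150
file_thumbnail_height = 100
file_highres = 'uploads/1234_full.jpg'
file_lowres = 'uploads/1234_web.jpg'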

The same applies to table names. trouble_ticket, trouble_ticket_detail and trouble_ticket_attachment are easier to associate with one another than trouble_ticket, attachment_ticket and ticket_detail.
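Sketched as hypothetical SQLAlchemy models – the columns are placeholders rather than a real schema – the prefixed child tables make the relationships obvious at a glance:

# Placeholder schema to illustrate prefixed table names.
from sqlalchemy import Column, Date, ForeignKey, Integer, String, Text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class TroubleTicket(Base):
    __tablename__ = 'trouble_ticket'
    id = Column(Integer, primary_key=True)
    date_start = Column(Date)
    date_end = Column(Date)

class TroubleTicketDetail(Base):
    __tablename__ = 'trouble_ticket_detail'
    id = Column(Integer, primary_key=True)
    trouble_ticket_id = Column(Integer, ForeignKey('trouble_ticket.id'))
    detail = Column(Text)

class TroubleTicketAttachment(Base):
    __tablename__ = 'trouble_ticket_attachment'
    id = Column(Integer, primary_key=True)
    trouble_ticket_id = Column(Integer, ForeignKey('trouble_ticket.id'))
    file_thumbnail = Column(String(255))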

A little planning can make things much easier down the road.

The Architecture of a New Project

January 11th, 2012

Yesterday I started working with Ajax Push, wrote a quick demo for a friend, and then stripped that down and wrote a functional demo project with documentation. I did this to test whether Ajax Push worked well enough for another concept project. As it turns out, APE does work, but it leaves a little to be desired.

While I was working with APE and tweaking the documentation and demo, a problem I had faced a few weeks back popped into my mind. Ajax Push was a perfect fit for that application: it was all server push rather than client-driven communication, and the concept would work wonderfully.

What now?

We’re faced with a few dilemmas. This problem is 99% Ajax/long polling and 1% frontend. An Android app and an iOS app need to be developed to interface with the system, but that is the simple part of the project.

Architecture

At first I considered Python/Pyramid for the frontend, Varnish for caching content and APE for handling the Ajax Push/long polling. I’ll need to write an API so the Android and iOS apps can authenticate and communicate with the system. I suspect my app will become an OAuth2 endpoint for those apps, which I’ll explain in a moment.
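I haven’t designed the API yet, but the views will probably look something like this rough Pyramid sketch; the route name, validate_token() helper and token value are placeholders, not anything final:

from pyramid.view import view_config
from pyramid.httpexceptions import HTTPUnauthorized

def validate_token(token):
    # Placeholder: the real app would check the OAuth2 bearer token
    # against whatever store ends up holding issued tokens.
    return token == 'example-token'

@view_config(route_name='api_status', renderer='json')
def api_status(request):
    # The mobile apps authenticate every call with an OAuth2 bearer token.
    auth = request.headers.get('Authorization', '')
    if not auth.startswith('Bearer ') or not validate_token(auth[len('Bearer '):]):
        raise HTTPUnauthorized()
    return {'status': 'ok'}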

It was at this point that I realized I could use node.js and socket.io to handle the long polling – and since the frontend requirements are so lightweight, I could do most of the web app in node.js as well. Since I’m leaning on node.js quite heavily, I’ll probably use Redis and CouchDB for storage – just in case.

Epiphany

Now, I had an epiphany. While I don’t really intend to open the API to the public initially, there’s a certain logic to making your own project use the same API that you will later make public. If anything, it makes designing your iOS and Android apps easier, since they talk to an API rather than relying on separate methods of communicating with the webapp. That’s one single interface rather than two, and if Windows Mobile later gets an app, we’ve already got the API designed. Since we’re an OAuth2 endpoint, our mobile apps can take advantage of numerous existing libraries – saving quite a bit of time.

Later, if the API is made public, we’re not facing a new engineering challenge and we’ve had some first-hand experience with the API.

Recently there has been a lot of discussion about using ‘the right tool for the job’ and why that is wrong; ‘use the same language for every part of the project’ is the other school of thought. There are things I know Python does well and things I know it doesn’t do well. There are things Erlang can handle, and things it shouldn’t. While I’m not a fan of Javascript, for this project it really does seem like the right tool for the job. The difference between APE and node.js comes down to SpiderMonkey versus V8. In both cases I’m writing Javascript, so why not choose the option that has a much larger installed base – and a demo with a use case very similar to my final app?

Now what?

While I haven’t used node.js before, I’m expecting the next few days to be a rapid cycle of development and testing.

…and I’ll be using git. :)

git init

Ajax Push Engine, Pyramid and a quick demo application

January 11th, 2012

Earlier today I was debating Ajax Push and Pyramid for a project I had in mind. I ended up spending about 45 minutes writing a quick proof of concept, then decided that something a bit more detailed, with some documentation, would be helpful for others.

I used Pyramid and APE and wrote a quick demo app. All of the code for the demo app can be downloaded from http://code.google.com/p/pyramid-ape-demo/.

The html/ directory contains the files, graphics and Javascript needed to run the client side of the app. The ape_server/ directory contains the Javascript that needs to be installed in the Ape server’s scripts/ directory; you’ll want to change the password. Also included in the html/ directory is a Python script, push.py, which uses urllib2.urlopen to communicate with the Ape server directly. Finally, the ape/ directory holds a very minimal Pyramid application, and pyramidape.wsgi is included as a starting point for getting the site set up.

In the demo, the left-hand Coke can is controlled entirely by the Ape Javascript client code; communication between the browser and the Ape server is not processed by anything but Ape. The right-hand Coke can is controlled by a JSON post to Pyramid; Pyramid then uses urllib2.urlopen to send a command to Ape, which pushes the update out to the page.
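The server-side half of that push is only a few lines. Here is a rough sketch modeled on what push.py does – the URL, channel, password and payload below are placeholders, so check push.py in the demo for the exact command format Ape expects:

import json
import urllib
import urllib2

def push_to_ape(message,
                ape_url='http://ape.example.com:6969/?',   # placeholder
                channel='demo', password='change-me'):     # placeholders
    # Build an Ape command and send it with urllib2.urlopen; Ape then
    # pushes the data out to every browser subscribed to the channel.
    cmd = [{'cmd': 'inlinepush',
            'params': {'password': password,
                       'channel': channel,
                       'raw': 'postmsg',
                       'data': {'message': message}}}]
    return urllib2.urlopen(ape_url + urllib.quote(json.dumps(cmd))).read()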

Changes made on the page are reflected in realtime for everyone else currently viewing it. Since we’re using Ajax Push, the page doesn’t need to be reloaded to show those changes. In this example an img src and its alt text are changed along with a button, but you can write your script to modify any HTML on the page – changing colors, elements, etc.

Using Ajax Push and long polling with Pyramid isn’t difficult, and this simple demo and example code should be a good starting point.

Finally, a formal release for my WordPress + Varnish + ESI plugin

January 10th, 2012

A while back I wrote a plugin to take care of a particular client traffic problem. As the traffic came in very quickly and unexpectedly, I had only minutes to come up with a solution. Since I knew Varnish pretty well, my initial reaction was to put the site behind Varnish. But there’s a problem with Varnish and WordPress.

WordPress is a cookie monster. It uses and depends on cookies for almost everything – and, by default, Varnish doesn’t cache requests that carry cookies. The VCL was modified and tweaked, but the site was still having problems.

So a plugin was born. Since I was familiar with ESI, I opted to write a quick plugin that served the sidebar as an Edge Side Include while Varnish handled the rest of the content. On each request, Varnish would assemble the ESI fragments and serve the page – saving the server from a meltdown.

The plugin was never really production ready, though I have used it for a year or so when particular client needs came up. When Varnish 3.0 was released, ESI gained the ability to work with gzipped/deflated content, which significantly increased the plugin’s utility.

If you would like to read a detailed explanation of how the plugin works and why, here’s the original presentation I gave in Florida.

You can find the plugin on WordPress’s plugin hosting at http://wordpress.org/extend/plugins/cd34-varnish-esi/.

DDOS Packet Logger rough cut

January 5th, 2012

I believe this is ready for a little external testing. While I am not extremely happy with the compression used, it does give about a 15% reduction in space with very little CPU impact.

I do intend to write my own streaming compression, which should get me closer to a 55% compression ratio based on some simple testing. I still need to add options to select which Ethernet interface to watch and to rotate the logs daily, but it does do the originally intended job.

http://code.google.com/p/ddos-log/

Thank you for any feedback.

Note: you don’t need to be under a DDoS to test it; it just logs packets destined for ports 25 and 80 to a logfile for later processing.
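As a rough illustration of the idea only – this is not the project’s code – the core of such a logger could be sketched in Python with scapy:

# A rough Python/scapy illustration of what the logger does; the real
# project is the code at the URL above, not this sketch.
from datetime import datetime
from scapy.all import IP, TCP, sniff

LOGFILE = 'ddos.log'   # placeholder path

def log_packet(pkt):
    # Record a timestamp plus source/destination for later analysis.
    with open(LOGFILE, 'a') as fh:
        fh.write('%s %s -> %s:%d\n' % (datetime.utcnow().isoformat(),
                                       pkt[IP].src, pkt[IP].dst,
                                       pkt[TCP].dport))

# Capture only traffic destined for the mail and web ports.
sniff(filter='tcp dst port 25 or tcp dst port 80',
      prn=log_packet, store=0)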
