Posts Tagged ‘socket.io’

50000 Connection node.js/socket.io test

Wednesday, February 22nd, 2012

While working on a project, I started doing some benchmarking, but, benchmarks != real world.

So, I created a quick test, 50k.cd34.com, and, if you can, hit the URL, send it to your friends and let’s see if we can hit 50000 simultaneous connections. There is a counter that updates, and the background color changes from black to white as it gets closer to 50000 viewers.

Code available on code.google.com.

I’ll probably be fixing IPv6 with socket.io soon. I was rather dismayed that it didn’t work.

First beta of my new webapp, SnapReplay.com

Saturday, January 28th, 2012

After a late night working until 2:45am on the finishing touches, my phone alerted me to the fact it was 5:35am and time for the first Live Photostream to take place.

The stream? Roger Waters – The Wall, from the Burswood Dome in Perth, AUSTRALIA. Special thanks to Paul Andrews for taking the pictures and doing a lot of testing beforehand.

Of those who watched the stream in realtime – about 23 people, based on the counter that I didn’t reset from my testing – I did receive a bit of good feedback and collected a lot of data to analyze.

Rule #1, do NOT keep modifying things during a live event. I spotted a minor tweak, made the change, and broke some functionality. While it didn’t affect things, it did bug me to see the problem during the first test. Live with things until after the event – unless it is truly a showstopper. In this case, it was just an HTML tweak which caused some JavaScript to wrap and broke the jQuery click handler.

Rule #2, you can never collect enough data. While watching the stream, I realized I had turned off almost all of the debugging hints in node.js during development as it was really noisy. Since most of the static assets are served by Varnish, those requests never hit the backend, so I didn’t have a good indicator of real traffic. Running varnishncsa in one window while watching node.js with a bit of debugging turned on allowed me to see things, but not logging pageviews, socket connects/disconnects and other data eliminated the ability to review things after the fact. I did think about putting some hooks into some of the Express events (Express being the framework I’m using).
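
Something along these lines is what I have in mind for the next event – just a sketch against the Express 2.x / socket.io 0.x-era APIs I’m using, not the code actually running on the site:

var express = require('express');
var app = express.createServer();              // Express 2.x-era constructor
var io = require('socket.io').listen(app);

// Log every pageview that actually reaches node.js (anything Varnish missed).
app.use(function (req, res, next) {
  console.log(new Date().toISOString(), 'pageview', req.method, req.url);
  next();
});

// Log socket connects and disconnects so the event can be reviewed afterwards.
io.sockets.on('connection', function (socket) {
  console.log(new Date().toISOString(), 'connect', socket.id);
  socket.on('disconnect', function () {
    console.log(new Date().toISOString(), 'disconnect', socket.id);
  });
});

app.listen(3000);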

Rule #3, always test your production environment well before the event/launch. Because the development timetable was so compressed – I knew on Jan 13 that we wanted to do the first event on Jan 28 – some infrastructure decisions I had made were not tested thoroughly beforehand, which meant running with a less than optimal setup. While Varnish and socket.io do work well together, some browser combinations had issues during brief usability tests. Fifteen days to write a scalable architecture and an Android app is difficult. I had no experience with node.js or socket.io prior to Nov 11, and hadn’t touched Java since 2002 or so, so I did spend a bit of time dealing with issues that came from lack of exposure to both.

As node.js isn’t recommended for serving static content, I used Varnish in a ‘CDN’ setup to offload static assets and media content. This worked very well except when I modified some JavaScript: due to some of the rules in my VCL, querystring arguments are stripped, making it impossible to just add ?v=2 to my JavaScript include. Bans for the CDN were only allowed from another host (remember that ‘test your complete production environment’ before launch?), so a little manual telnetting from another machine allowed me to purge the JavaScript.

All in all, a great first test, several positive comments, and a nice, long list of requests/enhancements. I can see that this might be a fairly popular project.

If you would like to help beta test and have an Android phone, download the app, take a few snapshots or enter some text, and watch them show up on the Beta Test page.

If you have an event or are attending a concert where you would like to use SnapReplay, I can create a custom app specifically for your event. Let me know in the comments.

node.js 7-day Retrospective

Thursday, January 19th, 2012

A week ago I was walking the dog, thinking about how to handle a validation routine, when I got sidetracked and started thinking about a different problem I had run into a few weeks earlier. I’ve been working with Ajax Push for a few things to test some parts of a larger concept.

I’m a big advocate of writing mini-projects to test pieces of an eventual solution. Writing two small projects in APE helped me refine the model of another project I had, which is what triggered this one. CodeRetreat was also a very good experience – rewriting the same code six times in the same day. Each time you iterated, your code or methodology got better.

Now I have an idea and need a platform. I know I need Ajax Push, and APE wasn’t suitable for my other project. I don’t like JavaScript, plain and simple, but node.js uses server-side JavaScript and this app would have plenty of client-side JavaScript as well, so at least I’d be writing one language throughout. The library I intended to use was socket.io as it supported the feature set I needed.

Within minutes, I had node.js up and running by installing the packaged Debian binary. This turned out to be a mistake as the packaged version is extremely old, but it took five days before I ran into a package that required me to upgrade.

node.js is fairly straightforward and I had it serving content shortly after starting. The first problem I ran into was routing. It seemed cumbersome to define everything by hand, and I started running through a number of routing packages. I ended up using Express, a lightweight framework that includes routing. Express also wraps the connect package, which I had been using to handle uploads. I refactored the code to use Express and was off and running.
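
For reference, the routing that sold me looks roughly like this – a minimal sketch against the Express 2.x-era API, with made-up routes:

var express = require('express');
var app = express.createServer();       // Express 2.x-era constructor

// Routes are declared directly on the app instead of a hand-rolled dispatcher.
app.get('/', function (req, res) {
  res.send('hello');
});

// Named parameters come for free.
app.get('/photo/:id', function (req, res) {
  res.send('photo ' + req.params.id);
});

app.listen(3000);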

Now I’m serving pages and static files, my stylesheets are loading (with the proper content type) and the site is active. I actually had a problem with some jQuery because the content-type wasn’t being set to text/html for my index page.

Next up, image resizing. I used the gm wrapper around GraphicsMagick, which worked well enough that I didn’t look further. The methods used by gm are quite straightforward and I see very little difference in the output quality versus ImageMagick. The command line interface is a bit more straightforward – not that you need that unless you’re debugging what gm is actually doing. I did run into a few issues with the async handling which required a bit of rework. I still have some corner cases to solve, but I’m shooting for an alpha release.
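
The resize call itself is tiny; the catch is that the write is async, so anything that depends on the resized file has to live in the callback. A rough sketch with made-up paths and sizes:

var gm = require('gm');

// Resize an uploaded photo down to a web-friendly size.
// The resized file only exists once the callback fires, so the DB insert,
// socket.io emit, etc. all have to happen in here.
gm('/tmp/upload.jpg')
  .resize(1024, 768)
  .write('/tmp/upload_web.jpg', function (err) {
    if (err) {
      console.error('resize failed:', err);
      return;
    }
    console.log('resize complete');
  });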

Redis was straightforward and I needed that for a counter. Again, the async paradigm makes you write code that an OO or functional programmer might find troubling.

What you expect:

io.sockets.on('connection', function (socket) {
  var counter = redis_client.incr('counter');
  socket.emit('stats', { 'counter': counter });
});

What you really mean:

io.sockets.on('connection', function (socket) {
  redis_client.incr('counter', function (err, res) {
    socket.emit('stats', { 'counter':res });
  });
});

JavaScript doesn’t support classes, but there are ways to emulate the behavior you’re after. This is something you learn when working with Sequelize – the ORM I am using for access to MySQL. I really debated whether to use Redis for everything or log to MySQL for the alpha. I know in the future I’ll probably migrate to CouchDB or possibly MongoDB so that I can still run SQL-like queries to fetch data. Based on the stream-log data I expected to be collecting, I could see running out of RAM for Redis over time. Sequelize allows you to import your models from a model file, which cleans up a bit of code. Most of the other ORMs I tried were very cumbersome and required a lot of effort to move models to an external file, resulting in a very polluted app.js.
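
The model-file import is what sold me; roughly, against the Sequelize 1.x-era API and with a made-up Photo model:

// models/photo.js - a hypothetical model kept out of app.js
module.exports = function (sequelize, DataTypes) {
  return sequelize.define('Photo', {
    filename: DataTypes.STRING,
    caption:  DataTypes.STRING
  });
};

// app.js - pull the model in via sequelize.import()
var Sequelize = require('sequelize');
var sequelize = new Sequelize('dbname', 'dbuser', 'dbpass');   // placeholder credentials
var Photo = sequelize.import(__dirname + '/models/photo');

sequelize.sync();   // creates the table if it doesn't exist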

Now I needed a template language and form library. I initially looked at Jade but wanted something closer to the Python templating languages I usually use. Second on the list was ejs, which is fairly powerful. It defines a layout page and imports your other page into a <%- body %> section, but that’s about as powerful as it gets. There is currently support for partial includes, allowing header templates, etc., but that is being phased out and should be done through if branches in the layout.ejs file.
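
Wiring ejs into Express is only a couple of lines; something like this, against the Express 2.x-era view options and with illustrative template names:

var express = require('express');
var app = express.createServer();

app.set('view engine', 'ejs');               // templates live in ./views/*.ejs
app.set('view options', { layout: true });   // render into views/layout.ejs via <%- body %>

app.get('/', function (req, res) {
  res.render('index', { title: 'alpha' });   // index.ejs becomes the body of layout.ejs
});

app.listen(3000);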

As for a form library, I never found anything satisfying. For most frameworks, a good form library is a necessity for me, but, I can hand-code validation routines for this.

Authentication. At this point, I tried to install Everyauth. During installation a traceback is displayed with a fairly cryptic message:

npm ERR! Error: Using '>=' with 1.x.x makes no sense. Don't do it.

Tracking this down, we find a very old packaged version of npm which refuses to upgrade because node.js is too old. In Debian Sid, node.js version 0.4.12 is packaged – what? 0.7.0-pre1 is the latest recommended. Upgrading node.js to be able to install a newer version of npm allows us to move forward.

Note: before upgrading, make sure you commit all of your development changes. I didn’t and lost about fifteen minutes of code due to a sleepy rm. :)

So, now we’re running a newer version of node.js, npm upgrades painlessly and we’ve installed Everyauth.

Everyauth is, well, every auth you can think of. Reading their example code, it looked like they were doing more than they actually do, but they wrap a set of routines and hand back a fairly normalized set of values. Within fifteen minutes I had Facebook and Twitter working, but GoogleHybrid gave me some issues. I opted to switch to Google’s OAuth2, but that failed in a different place. I’ll have to debug that, fork and do a pull request.
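
The setup for each provider follows the same shape; roughly like this, with placeholder keys and the user lookup still stubbed out:

var express = require('express');
var everyauth = require('everyauth');

everyauth.twitter
  .consumerKey('TWITTER_KEY')                 // placeholder credentials
  .consumerSecret('TWITTER_SECRET')
  .findOrCreateUser(function (session, accessToken, accessSecret, twitterUser) {
    return twitterUser;                       // backend lookup via Sequelize still to come
  })
  .redirectPath('/');

everyauth.facebook
  .appId('FACEBOOK_APP_ID')
  .appSecret('FACEBOOK_APP_SECRET')
  .findOrCreateUser(function (session, accessToken, accessTokenExtra, fbUser) {
    return fbUser;
  })
  .redirectPath('/');

// Everyauth needs sessions, and its middleware goes in before the routes.
var app = express.createServer(
  express.cookieParser(),
  express.session({ secret: 'placeholder' }),
  everyauth.middleware()
);

app.listen(3000);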

I need to write the backend logic for Everyauth, but, with Sequelize, that should be fairly quick.

Down to the basics

Web site performance is always on my mind. Search Engine Optimization becomes important for this site as well. JavaScript-built pages are somewhat difficult for Googlebot to follow and we don’t want to lose visibility because of that. However, we want to take advantage of a CDN, and using JavaScript and DOM manipulation lets us output a static page that can be cached, then use jQuery to customize it for a logged-in user. The one page that will probably see the heaviest utilization is completely Ajax powered, but it is a short-lived page and probably wouldn’t be indexed anyhow.

node.js for serving static files

I debated this. node.js does really well for Ajax and long-polling, but several articles recommend using something other than node.js for static media. I didn’t find it to be slow, but other solutions did easily outserve it for static content. Since we’re putting all of our content behind Varnish, in the alpha node.js will hand static content to Varnish and Varnish will serve it to visitors. It is possible I’ll change that architecture later.

socket.io

While I’ve just scratched the surface of the possibilities, socket.io is very easy to use and extremely powerful. I haven’t found a browser that had any problems, and it abstracts everything so I don’t have to worry about which transport it is using to talk to the remote browser. You can associate data with the socket so that you can later identify the listener, which is handy for writing chat applications.
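
The data-on-the-socket piece looks something like this – a sketch using the socket.io 0.x get/set API, with a made-up ‘nickname’ field:

var io = require('socket.io').listen(8080);

io.sockets.on('connection', function (socket) {
  // Remember who this listener is so later messages can be attributed.
  socket.on('join', function (name) {
    socket.set('nickname', name, function () {
      socket.broadcast.emit('announce', name + ' joined');
    });
  });

  socket.on('chat', function (msg) {
    socket.get('nickname', function (err, name) {
      io.sockets.emit('chat', { from: name, text: msg });
    });
  });
});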

Stumbles

At the end of seven days, I’m still stumbling over JavaScript’s async behavior. At times, function precedence is cumbersome to work around when you’re missing a needed closure for a method in a library. I’ve also tested a number of packages that obviously solved someone’s problem and looked good when published, but just weren’t generic enough.

248 hours

656 lines of code, most major functionality working, some test code written, and the bulk of the alpha site at least fleshed out.

Overall Impression

node.js is very powerful. I think I could have saved time using Pyramid and used node.js purely for the Ajax Push, but, it was a good test and I learned quite a bit. If you have the time to implement a small project using new technology, I highly recommend it.

Software Used

* node.js
* Express
* gm
* redis
* sequelize
* ejs
* everyauth
* npm
* socket.io

Software Mentions

* APE
* jade

Additional Links

* Blazing fast node.js: 10 performance tips from LinkedIn Mobile

The Architecture of a New Project

Wednesday, January 11th, 2012

Yesterday I started working with Ajax Push, wrote a quick demo for a friend, and then stripped that and wrote a functional demo project with documentation. I did this to test if Ajax Push worked well enough for another concept project. As it turns out, using APE does work, but, it leaves a little to be desired.

While I was working with APE and tweaking the documentation and demo, a problem I had faced a few weeks back popped into my mind. Ajax Push was perfect for this application: it was all server push rather than client communication, and the concept would work wonderfully.

What now?

We’re faced with a few dilemmas. This problem is 99% Ajax/Long Polling and 1% frontend. Android and iOS apps need to be developed to interface with the system, but that is the simple part of the project.

Architecture

At first I considered Python/Pyramid as the frontend, Varnish for caching content and APE for handling the Ajax Push/Long Polling. I’ll need to write an API for the Android and iOS apps to authenticate and communicate with the system. I suspect my app will become an OAuth2 endpoint for the apps, which I’ll explain in a moment.

It was at this point that I realized I could use node.js and socket.io to handle the long polling, but the frontend requirements are so lightweight that I could do most of the web app in node.js as well. Since I’m using node.js quite heavily, I’ll probably use Redis and CouchDB to do my storage – just in case.

Epiphany

Now, I had an epiphany. While I don’t really intend to open the API for the project initially, there’s a certain logic to making your own project use the same API that you will later make public. If anything, it makes designing your iOS and Android apps easier since they use an API rather than relying on separate methods for communicating with the webapp. One single interface rather than two, and later, if Windows Mobile gets an app, we’ve already got the API designed. Since we’re an OAuth2 endpoint, our mobile apps can take advantage of numerous existing libraries – saving quite a bit of time.

Later, if the API is made public, we’re not facing a new engineering challenge and we’ve had some first-hand experience with the API.

Recently there has been a lot of discussion about using ‘the right tool for the job’ and why that is wrong; ‘use the same language for every part of the project’ is the other school of thought. There are things I know Python does well, and things I know it doesn’t do well. There are things Erlang can handle, and things it shouldn’t. While I’m not a fan of JavaScript, for this project it really does seem like the right tool for the job. The difference between APE and node.js was SpiderMonkey versus V8. In both cases I’m writing JavaScript, so why not choose the option that has a much larger installed base – and a demo with a use case very similar to my final app?

Now what?

While I’ve not used node.js, I’m expecting the next few days to be a rapid iteration of development and testing.

…and I’ll be using git. :)

git init
