If you could have it all…

I’m a bit of a web performance nut.  I like technology when it is used to solve real challenges, and I won’t use technology for technology’s sake.  When you look at the scalability problems of today’s Web 2.0 shops, one generalization covers almost all of them.

What is the failing point of today’s sites?  How many stories have you read in the media about some rising star that gets mentioned on Yahoo or Digg or Slashdot?  Generally, their site crashes under the crushing load (I’ve had sites slashdotted; it’s not as big a deal as they would have you believe).  But the problem we face is multifaceted.

Developer learns PHP.  Developer discovers MySQL.  Developer stumbles across a concept.  Developer cobbles together code and buys hosting: sometimes a virtual/shared hosting environment, sometimes a VPS, sometimes a dedicated server.  But the software that performs well for a few friends hitting the site as beta testers is never really pushed.  While the pages look nice, the engine behind them is usually poorly conceived, or worse, designed on the assumption that a single server, or a dual-server web/MySQL combination, is going to keep them alive.

95% of the software designed and distributed under open source licenses isn’t written with an understanding of the unique challenges of a site that needs to handle 20 visitors per hour versus 20,000.  Tuning Apache for high traffic, tuning MySQL’s indexes and configuration, and writing applications designed for high traffic is not easy.  Debugging and repairing those applications after they’ve been deployed is even harder.  Repairing them while maintaining backwards compatibility adds a whole new level of complexity.
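To make the index-tuning part concrete, here is a minimal Python sketch that runs EXPLAIN against a hot query and flags full table scans.  It assumes the pymysql driver and a hypothetical `users` table; neither comes from the original post, so treat the names and credentials as placeholders.

```python
import pymysql

# Placeholder connection details; swap in your own.
conn = pymysql.connect(host="localhost", user="app", password="secret",
                       database="app",
                       cursorclass=pymysql.cursors.DictCursor)

def explain(query, args=()):
    """Print EXPLAIN output for a query and flag full table scans."""
    with conn.cursor() as cur:
        cur.execute("EXPLAIN " + query, args)
        for row in cur.fetchall():
            if row["type"] == "ALL":
                # MySQL will scan every row: this query needs an
                # index before it sees real traffic.
                print("full scan on %s: add an index" % row["table"])
            else:
                print("%s uses key %s" % (row["table"], row["key"]))

explain("SELECT * FROM users WHERE email = %s", ("a@example.com",))
```

Running every hot query through a check like this before launch catches most of the “works fine for 20 visitors” surprises.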

Design with scalability in mind.  I saw a blog the other day where someone was replacing a three-server setup behind a load balancer with a single machine because the complexity of maintaining 100% uptime made their job harder.  Oh really?

What happens when your traffic outgrows that one server?  Whoops, I’m back to the load-balanced solution I just left.

What are the phases you need to plan for?

Is your platform ready for 100,000 users a day?  If not, what do you need to do to make sure it is?  Where are your bottlenecks?  Where does your software break down?  What is your expansion plan?  When do you split your MySQL writers and readers (a sketch of that split follows below)?  Where does your application boundary start and end?  What do you think breaks next?  Where is your next bottleneck?
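Here is a minimal Python sketch of the writer/reader split, assuming pymysql, one primary, and two replicas.  The hostnames and the routing rule are purely illustrative, not a prescription.

```python
import random
import pymysql

# Hypothetical hosts; in practice these come from configuration.
PRIMARY = dict(host="db-primary", user="app", password="secret",
               database="app")
REPLICAS = [
    dict(host="db-replica-1", user="app", password="secret",
         database="app"),
    dict(host="db-replica-2", user="app", password="secret",
         database="app"),
]

WRITE_VERBS = ("INSERT", "UPDATE", "DELETE", "REPLACE")

def execute(query, args=()):
    """Send writes to the primary, spread reads across the replicas."""
    is_write = query.lstrip().split(None, 1)[0].upper() in WRITE_VERBS
    target = PRIMARY if is_write else random.choice(REPLICAS)
    conn = pymysql.connect(**target)
    try:
        with conn.cursor() as cur:
            cur.execute(query, args)
            if is_write:
                conn.commit()
            return cur.fetchall()
    finally:
        conn.close()
```

Note that this naive version ignores replication lag: a read issued immediately after a write may need to be pinned to the primary until the replicas catch up.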

What happens when a Digg or Slashdot crushes a site?  Usually, it’s a site with all sorts of dynamic content built from ill-conceived MySQL queries generated in real time on every page load.  I can remember a CMS framework that ran 54 SQL queries to display the front page.  That is just ridiculous, and I dumped that framework five minutes after seeing it.  Pity, they did have a good concept.
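The usual fix is to take those queries off the hot path entirely: render the page once and serve a cached copy until it expires.  A minimal sketch, with a hypothetical `render_front_page()` standing in for whatever the CMS does:

```python
import time

CACHE_TTL = 60.0  # seconds; tune to how stale the front page may be
_cache = {"body": None, "expires": 0.0}

def render_front_page():
    # Imagine the expensive multi-query render happening here.
    return "<html>...front page...</html>"

def front_page():
    """Serve the cached page, re-rendering only when the TTL lapses."""
    now = time.time()
    if _cache["body"] is None or now >= _cache["expires"]:
        _cache["body"] = render_front_page()
        _cache["expires"] = now + CACHE_TTL
    return _cache["body"]
```

Under a Slashdot-style spike, this turns 54 queries per visitor into 54 queries per minute.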

So, with scalability in mind, how does one engineer a solution?  LAMP isn’t the answer.

You pick a framework that doesn’t follow the usual paradigms of an application.  Why worry about a protocol?  Design the application divorced from the protocol.  You develop an application that faces the web rather than talking directly to the web, because other applications might talk to your application.  When it comes time to scale, you add machines without having to worry about task distribution.  Google does it; you should too.
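Here is a minimal Python sketch of that separation: the core knows nothing about HTTP, and a thin adapter maps web requests onto it.  The names are illustrative and this is not Mantissa’s actual API, just the shape of the idea.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService:
    """Protocol-agnostic core: plain methods, plain data in and out."""
    def greet(self, name):
        return "Hello, %s" % name

service = GreetingService()

class WebAdapter(BaseHTTPRequestHandler):
    """A thin HTTP face.  Another adapter could expose the same core
    over XML-RPC, email, or a message queue without touching it."""
    def do_GET(self):
        name = self.path.strip("/") or "world"
        body = service.greet(name).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), WebAdapter).serve_forever()
```

Because the core never sees a request object, testing it is trivial and a second front end costs only another adapter.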

Mantissa is a framework that encompasses all of this.  If some of these Web 2.0 sites thought about their deployment the way Google does, expansion wouldn’t create much turmoil.  To grow, you just add more machines to the network.
