Archive for the ‘Scalability’ Category

Varnish and Node.js

Thursday, July 18th, 2013

While working with a client installation, they wanted to run Varnish in front of their node.js powered site so that node no longer had to serve the static assets. Socket.io traffic (the WebSocket upgrades and long-polling requests) can't be cached, so it needs to be piped straight through to node. Minimally, these few lines added to their respective subroutines will make things work. Obviously you’ll want to set expires headers on your static assets, strip cookies where possible, etc.

sub vcl_recv {
    if (req.url ~ "socket.io/") {
      return (pipe);
    }
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
    }
    return (pipe);
}
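
A quick way to sanity-check the caching once this is in place is to request a static asset twice with curl; a non-zero Age header on the second response means Varnish served it from cache. The hostname and asset path below are placeholders for your site.

# fetch a static asset twice; a growing Age header indicates a cache hit
curl -sI http://example.com/js/app.js | grep -iE '^(age|cache-control)'
sleep 2
curl -sI http://example.com/js/app.js | grep -iE '^(age|cache-control)'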

Tested with: Varnish 3.0.4, Node.js v0.10.13

btrfs gets very slow, metadata almost full

Thursday, March 7th, 2013

This is one of our storage servers that has had problems in the past. Originally it seemed like XFS was having a problem with the large filesystem, so we gambled and decided to use btrfs. After eight days of running, the machine has become extremely slow for disk I/O, to the point where backups that should take minutes were taking hours.

Switching the disk scheduler from cfq to noop and then to deadline appeared to have only short-term benefits; after each change the machine bogged down again.
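
For reference, the scheduler switch itself is just a sysfs write against the device and takes effect immediately, no reboot needed:

# show the current scheduler for sda (the active one is shown in brackets)
cat /sys/block/sda/queue/scheduler

# switch to deadline on the fly
echo deadline > /sys/block/sda/queue/scheduler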

We’re running an Adaptec 31205 with 11 Western Digital 2.0 terabyte drives in hardware RAID 5, giving roughly 19 terabytes usable on our filesystem. During the first few days of backups we would easily hit 800mb/sec inbound, but after a few machines had been backed up to the server, 100mb/sec was optimistic and 20-40mb/sec was more typical. We originally attributed this to rsyncing thousands of smaller files rather than the large files moved on some of the earlier machines, but once we started overlapping machines to get their second generational backup, the problem was much more evident.
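
One way to watch the throughput fall off during a backup run is iostat from the sysstat package; the 5-second interval here is just an example.

# extended per-device statistics in MB/s every 5 seconds; watch wMB/s and %util for sda
iostat -mx 5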

The Filesystem:

# df -h /colobk1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda8        19T  8.6T  9.6T  48% /colobk1

# btrfs fi show
Label: none  uuid: 3cd405c7-5d7d-42bd-a630-86ec3ca452d7
	Total devices 1 FS bytes used 8.44TB
	devid    1 size 18.14TB used 8.55TB path /dev/sda8

Btrfs Btrfs v0.19

# btrfs filesystem df /colobk1
Data: total=8.34TB, used=8.34TB
System, DUP: total=8.00MB, used=940.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=106.25GB, used=104.91GB
Metadata: total=8.00MB, used=0.00

The machine

# uname -a
Linux st1 3.8.0 #1 SMP Tue Feb 19 16:09:18 EST 2013 x86_64 GNU/Linux

# btrfs --version
Btrfs Btrfs v0.19

As it stands, we appear to be running out of metadata space. Since our used metadata space (104.91GB) is nearly all of the 106.25GB allocated to metadata, updates are taking forever. The initial filesystem was not created with any special inode or leaf parameters, so it is using the defaults.

The btrfs wiki points to this particular tuning option which seems like it might do the trick. Since you can run the balance while the filesystem is in use and check its status, we should be able to see whether it is making a difference.
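
Since the balance runs while the filesystem stays online, a simple loop around btrfs fi df is enough to keep an eye on the metadata numbers while it works:

# print the metadata usage every minute while the balance is running
while true; do
    date
    btrfs fi df /colobk1 | grep -i '^metadata'
    sleep 60
done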

I don’t believe it is going to make a difference as we have only a single device exposed to btrfs, but, here’s the command we’re told to use:

btrfs fi balance start -dusage=5 /colobk1

After a while, the box returned with:

# btrfs fi balance start -dusage=5 /colobk1
Done, had to relocate 0 out of 8712 chunks

# btrfs fi df /colobk1
Data: total=8.34TB, used=8.34TB
System, DUP: total=8.00MB, used=940.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=107.25GB, used=104.95GB
Metadata: total=8.00MB, used=0.00

So the balance added 1GB to the total metadata size. At first glance, it is still taking considerable time to back up a single 9.7gb machine – over 2 hours and 8 minutes, when the first backup took under 50 minutes. I would say the balance didn’t do anything positive, which isn’t surprising since we have a single device. I suspect the leafsize and nodesize might be the difference here, which would require reformatting and backing up 8.6 terabytes of data again. It also took two and a half minutes to unmount the partition after it had bogged down and after running the balance.

mkfs -t btrfs -l 32768 -n 32768 /dev/sda8

# btrfs fi df /colobk1
Data: total=8.00MB, used=0.00
System, DUP: total=8.00MB, used=32.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=192.00KB
Metadata: total=8.00MB, used=0.00

# df -h /colobk1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda8        19T   72M   19T   1% /colobk1

XFS took 52 minutes to back up the machine. XFS properly tuned took 51 minutes. Btrfs with the larger leaf and node sizes took 51 minutes. I suspect I need to run things for a week to get the extents close to filled again and then check it again. In any case, it is a lot faster than it was with the default settings.

* Official btrfs wiki

Using temporary; Using filesort

Sunday, December 2nd, 2012

Ahh the dreaded temporary table and filesort. This is one performance killer that is incredibly bad on a high traffic site and the cause is fairly easy to explain.

MySQL tries to keep a temporary result set in memory. When the query optimizer checks the number of rows that might be returned, it also looks at the table structure. In this case we have text fields, and a lot of them, but it only takes one TEXT column in the result for MySQL to decide the temporary table has to be written to disk.

The fix for this is somewhat simple to explain, but may be a little difficult to implement. In our case, we have 91000 lines of some very poorly written php code that ‘builds’ the command through string concatenation, allowing for unique prefixes and tablenames. Houdini would be proud of the misdirection in this application, but we’ve found the query through the MySQL slow query log, so we can fix it there first and then figure out where to modify the code.
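
If you need to track a query like this down yourself, the slow query log can be turned on at runtime and summarized with mysqldumpslow; the log path and one-second threshold below are examples, so adjust them for your installation.

# log anything slower than 1 second, then summarize the worst offenders by total time
mysql -e "SET GLOBAL slow_query_log=1; SET GLOBAL long_query_time=1;"
mysqldumpslow -s t /var/log/mysql/mysql-slow.log | head -20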

Heart of the problem

select * from tablea,tableb where tablea.a=1 and tablea.b=2 and tablea.c=3 and tablea.id=tableb.id;
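
Running EXPLAIN on that query is what gives this post its title: the tell-tale “Using temporary; Using filesort” shows up in the Extra column. The database and table names below follow the example above.

# look for "Using temporary; Using filesort" in the Extra column
mysql dbname -e "EXPLAIN SELECT * FROM tablea, tableb
  WHERE tablea.a=1 AND tablea.b=2 AND tablea.c=3
    AND tablea.id=tableb.id\G"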

Of course, the initial application had no indexes on the 35000-row table. If you’re interested, I’ve written about indexing before in MySQL Query Optimization, MySQL 5.1’s Query Optimizer and Designing MySQL Indices.

What is the solution to dealing with queries that return text fields?

Creative use of Subqueries is needed.

SELECT * from tablea,tableb where tablea.id in (SELECT id from tablea where a=1 and b=2 and c=3) and tablea.id=tableb.id;

But wait, I need a limit clause in my subselect and MySQL says:

ERROR 1235 (42000): This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'

Now we modify the query slightly to use a join:

select * from tablea join (SELECT id from tablea where a=1 and b=2 and c=3 order by c desc limit 15) subq on subq.id=tablea.id join tableb on tablea.id=tableb.id;

We’ve avoided the creation of the temporary table, avoided the filesort, and shaved ten seconds off a query that runs on every pageload.

Now to convince this person that they don’t need to regenerate the page on every pageload – only when they are adding content. But, that’s an argument for another day.

Ext4, XFS and Btrfs benchmark redux

Tuesday, May 22nd, 2012

Since Linux 3.4 was just released and includes a number of btrfs filesystem changes, I felt it was worth retesting to see whether btrfs performance had improved.

$ /usr/sbin/bonnie++ -s 8g -n 512
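
Each result below came from the same basic sequence: make the filesystem, mount it, run bonnie++ against the mount point, then unmount and reboot before the next run. A sketch of one iteration, using the ext4 case as the example (swap in the mkfs and mount lines from each section; -u root is required if you run bonnie++ as root):

mkfs -t ext4 /dev/sda9
mount -o noatime /dev/sda9 /mnt
/usr/sbin/bonnie++ -d /mnt -u root -s 8g -n 512
umount /mnt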

ext4

mkfs -t ext4 /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   582  98 59268   6 30754   3  3515  99 104817   4 306.1   1
Latency             15867us    1456ms     340ms    8997us   50112us     323ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 35092  55 520637  91  1054   1 35182  54 791080 100  1664   2
Latency              1232ms     541us   14112ms    1189ms      41us   11701ms
1.96,1.96,colo7,1,1337657098,8G,,582,98,59268,6,30754,3,3515,99,104817,4,306.1,1,512,,,,,35092,55,520637,91,1054,1,35182,54,791080,100,1664,2,15867us,1456ms,340ms,8997us,50112us,323ms,1232ms,541us,14112ms,1189ms,41us,11701ms

ext4 with tuning and mount options

mkfs -t ext4 /dev/sda9
tune2fs -o journal_data_writeback /dev/sda9
mount -o rw,noatime,data=writeback,barrier=0,nobh,commit=60 /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   587  97 64875   6 34046   4  3149  96 105157   4 317.2   4
Latency             13877us     562ms    1351ms   18692us   54835us     287ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 38127  59 525459  92  2118   2 37746  58 792967  99  1433   2
Latency               980ms     525us   14018ms    1056ms      46us   12355ms
1.96,1.96,colo7,1,1337661756,8G,,587,97,64875,6,34046,4,3149,96,105157,4,317.2,4,512,,,,,38127,59,525459,92,2118,2,37746,58,792967,99,1433,2,13877us,562ms,1351ms,18692us,54835us,287ms,980ms,525us,14018ms,1056ms,46us,12355ms

btrfs from ext4 partition

umount /mnt
fsck.ext3 -f /dev/sda9
btrfs-convert /dev/sda9
mount -t btrfs -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   462  98 62854   5 30782   4  3065  88 88883   8 313.1   7
Latency             63644us     272ms     206ms   38178us     241ms     409ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 36868  85 598431  98 26007  93 32002  73 756164  99 21975  84
Latency             15858us     427us    1003us     471us     157us    2161us
1.96,1.96,colo7,1,1337660385,8G,,462,98,62854,5,30782,4,3065,88,88883,8,313.1,7,512,,,,,36868,85,598431,98,26007,93,32002,73,756164,99,21975,84,63644us,272ms,206ms,38178us,241ms,409ms,15858us,427us,1003us,471us,157us,2161us

btrfs without conversion

mkfs -t btrfs /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   468  98 60274   4 29605   4  3629 100 89250   8 301.5   7
Latency             55633us     345ms     196ms    3767us     229ms    1119ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 26078  60 603783  99 26027  92 25617  58 754598  99 21935  84
Latency               452us     423us    1029us     426us      16us    2314us
1.96,1.96,colo7,1,1337661202,8G,,468,98,60274,4,29605,4,3629,100,89250,8,301.5,7,512,,,,,26078,60,603783,99,26027,92,25617,58,754598,99,21935,84,55633us,345ms,196ms,3767us,229ms,1119ms,452us,423us,1029us,426us,16us,2314us

xfs defaults

mkfs -t xfs /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G  1391  96 65559   5 31315   3  2984  99 103339   4 255.8   3
Latency              5625us   33224us     221ms   10524us     103ms     198ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512  7834  37 807425  99 14627  50  8612  41 790321 100  1169   4
Latency              1182ms     123us     837ms    2037ms      18us    7031ms
1.96,1.96,colo7,1,1337660479,8G,,1391,96,65559,5,31315,3,2984,99,103339,4,255.8,3,512,,,,,7834,37,807425,99,14627,50,8612,41,790321,100,1169,4,5625us,33224us,221ms,10524us,103ms,198ms,1182ms,123us,837ms,2037ms,18us,7031ms

xfs tuned

mkfs -t xfs -d agcount=32 -l size=64m /dev/sda9
mount -o noatime,logbsize=262144,logbufs=8 /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G  1413  96 64640   5 31226   3  2977  99 104762   4 246.8   3
Latency              5616us     370ms     235ms   10530us   62654us     206ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 14763  70 793694  98 23959  81 15104  72 790204  99  2290   8
Latency               482ms     118us     274ms     683ms      17us    5201ms
1.96,1.96,colo7,1,1337666959,8G,,1413,96,64640,5,31226,3,2977,99,104762,4,246.8,3,512,,,,,14763,70,793694,98,23959,81,15104,72,790204,99,2290,8,5616us,370ms,235ms,10530us,62654us,206ms,482ms,118us,274ms,683ms,17us,5201ms

btrfs with a snapshot

mkfs -t btrfs /dev/sda9
mount -o noatime,subvolid=0 /dev/sda9 /mnt
wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.4.tar.bz2
tar xjf linux-3.4.tar.bz2
btrfs subvolume snapshot /mnt/ /mnt/@_2012_05_22
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   469  98 58400   5 30092   4  2999  85 89761   8 321.1   3
Latency             17017us     267ms     240ms   22907us     300ms     359ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 31715  72 598360  98 25780  92 26411  59 756110  99 22058  84
Latency               102ms     424us     844us     472us      20us    2171us
1.96,1.96,colo7,1,1337664006,8G,,469,98,58400,5,30092,4,2999,85,89761,8,321.1,3,512,,,,,31715,72,598360,98,25780,92,26411,59,756110,99,22058,84,17017us,267ms,240ms,22907us,300ms,359ms,102ms,424us,844us,472us,20us,2171us

Deleted kernel, left it in snapshot, reran test

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   469  98 63934   5 31244   4  3208  94 90227   8 296.3   7
Latency             17009us     282ms     217ms    3746us     224ms    1269ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 28758  66 596074  97 26185  93 25714  59 755464  99 21893  84
Latency             42108us     424us     993us     445us      17us    2245us
1.96,1.96,colo7,1,1337671128,8G,,469,98,63934,5,31244,4,3208,94,90227,8,296.3,7,512,,,,,28758,66,596074,97,26185,93,25714,59,755464,99,21893,84,17009us,282ms,217ms,3746us,224ms,1269ms,42108us,424us,993us,445us,17us,2245us

Updated results using some different parameters. Same hardware, same hard drive.

leaf and btree size of 16384

mkfs -t btrfs -l 16384 -n 16384 /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   472  98 37514   2 14395   2  3135  89 80600   7 294.0   6
Latency             16820us     781ms     383ms   19736us     230ms     379ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 17447  46 621480  99 24345  94 14984  39 754999  99 19873  82
Latency               303us     494us     900us     412us     107us    3127us
1.96,1.96,colo7,1,1338411461,8G,,472,98,37514,2,14395,2,3135,89,80600,7,294.0,6,512,,,,,17447,46,621480,99,24345,94,14984,39,754999,99,19873,82,16820us,781ms,383ms,19736us,230ms,379ms,303us,494us,900us,412us,107us,3127us

leaf and btree size of 32768

mkfs -t btrfs -l 32768 -n 32768 /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   468  97 26136   2 17256   2  3135  89 84450   7 306.5   7
Latency             43238us     923ms     330ms   12632us     367ms     986ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 17958  61 624570  99 19930  95 14506  50 753354  99 15976  80
Latency             15384us     514us     937us     431us     144us    4782us
1.96,1.96,colo7,1,1338409200,8G,,468,97,26136,2,17256,2,3135,89,84450,7,306.5,7,512,,,,,17958,61,624570,99,19930,95,14506,50,753354,99,15976,80,43238us,923ms,330ms,12632us,367ms,986ms,15384us,514us,937us,431us,144us,4782us

leaf and btree size of 65536

mkfs -t btrfs -l 65536 -n 65536 /dev/sda9
mount -o noatime /dev/sda9 /mnt
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
colo7            8G   467  97 25097   2 17349   2  2845  87 86653   8 300.2   7
Latency             56046us     772ms     414ms    4101us     249ms     241ms
Version  1.96       ------Sequential Create------ --------Random Create--------
colo7               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                512 15372  68 626336  98 14723  96 13137  58 753463 100 11652  80
Latency             15825us     395us   77890us     428us      19us   15727us
1.96,1.96,colo7,1,1338410439,8G,,467,97,25097,2,17349,2,2845,87,86653,8,300.2,7,512,,,,,15372,68,626336,98,14723,96,13137,58,753463,100,11652,80,56046us,772ms,414ms,4101us,249ms,241ms,15825us,395us,77890us,428us,19us,15727us

Analysis

Last time I tested ext4, xfs and btrfs, btrfs deletions really lagged behind. Now btrfs looks quite a bit more robust, and there are better repair and recovery tools, which were basically missing before. Deletions no longer lag the way they used to, and while btrfs is a little slower in some cases, it’s only by a few percent, and it makes up for that in some of the random and sequential create and delete tests.

Rough analysis at this point – if you need a versioning filesystem and don’t mind being a bit on the bleeding edge, btrfs has made substantial strides.

Updated Analysis

For the hardware in question, the larger leaf and node sizes don’t appear to help under Bonnie++, but make sure you test with your own workload.

Test Equipment

  • Linux colo7 3.4.0 #1 SMP Mon May 21 00:29:58 EDT 2012 x86_64 GNU/Linux
  • Intel(R) Xeon(R) CPU X3220 @ 2.40GHz
  • WDC WD7500AACS-0 01.0 PQ: 0 ANSI: 5
  • ahci enabled
  • 100gb partition
  • machine rebooted between each test

A discussion of Web Site Performance – from a design perspective

Friday, March 23rd, 2012

One of the things I always run into is clients who want their site to be faster. Oftentimes I’m told that it is the server or MySQL slowing the site down. Today, I had a conversation with a site owner who was talking about how slow their site was.

“The site loads the first post and sits there for five seconds, then the rest of the page comes in.”

Immediately, I think: a blocking <script src="…"> tag is likely the problem.

Load the page, yes, pauses… right where the social media buttons are loaded.

Successive reloads are better, but that one javascript include is fetched every time. It turns out the expire time on that javascript is set to a date in the past, so the browser re-fetches it on every load regardless of whether it has changed. And that script doesn’t load very quickly, adding to the delay.
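
Checking the response headers on the offending script makes this easy to confirm; an Expires date in the past, or a very short max-age, means the browser re-fetches it on every pageload. The Twitter widget script is used below purely as an example URL.

# inspect the caching headers on a third-party script
curl -sI http://platform.twitter.com/widgets.js | grep -iE '^(expires|cache-control|last-modified)'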

Disable the plugin, reload, and the site is fast. The initial reaction is: let’s move those includes to async javascript. Social media buttons don’t need to hold up the pageload; they can be rendered after the site has loaded. It might look a little funny, but most of the social media buttons are below the fold anyhow, and we’re trying to get the site to display quickly.

There is a difference between the page being slow and the page rendering slowly. The latter is what most people notice, and it’s what makes them judge a site as slow. So the first thing we need to do is move things to async. As an example, the social media buttons on this site are loaded by my cd34-social plugin.

But, the meat of the conversion to async is here:

<script type="text/javascript">
<!--
// load each social media script asynchronously so it doesn't block rendering
var a = ["https://apis.google.com/js/plusone.js",
         "http://platform.twitter.com/widgets.js",
         "http://connect.facebook.net/en_US/all.js#xfbml=1"];
for (var script_index in a) {
    var b = document.createElement("script");
    b.type = "text/javascript";
    b.async = true;
    b.src = a[script_index];
    var c = document.getElementsByTagName("script")[0];
    c.parentNode.insertBefore(b, c);  // insert before the first script tag on the page
}
// -->
</script>

What this code does is create a script element for each of https://apis.google.com/js/plusone.js, http://platform.twitter.com/widgets.js and http://connect.facebook.net/en_US/all.js#xfbml=1, mark it async so it doesn’t block rendering, and insert it before the first script tag on the page.

This way the social buttons load on their own schedule and don’t hold up the page rendering.

However, this isn’t the only issue we’ve run into. The plugin they used includes its own social media buttons, one image per button. It should use CSS sprites: a single large image containing a bar or matrix of icons, with CSS background positioning used to display just the portion you want. That way the browser fetches one image rather than the 16 social media buttons plus the 16 hover images, and CSS shifts the image around to show the right icon out of the graphic.

Here is a collection of those sprites as used by Google, Facebook, Twitter and Twitter’s Bootstrap template:

With these sprites, you save the overhead of a separate fetch for each icon and present a much quicker overall experience for the surfer.
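
If you want to build your own sprite sheet, ImageMagick’s montage will stitch a directory of icons into a single image; the filenames, icon size and tile layout below are just an example.

# stitch 32x32 icons into a 4-column sprite sheet with no padding
montage icon-*.png -tile 4x -geometry 32x32+0+0 social-sprite.png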

Not everything can be solved with low-latency webservers; some performance problems are on the browser/rendering side.
