Journalistic Responsibility

December 14th, 2009

A week or two ago, a story broke regarding a security update in Windows. In the race to scoop the story, facts were not checked; the validity of the story rested on a blog post from a security company.

Ed Bott @ Ziff Davis covered it in What the “Black screen of death” story says about tech journalism.

Even TechCrunch fell into this with a spoofed ‘Eric Schmidt joins Twitter’ story. Post first, ask questions later. Rather than correct the inaccurate article, let it run for the ad views.

Since the introduction of the Internet, journalistic accuracy has dropped substantially. Even though spell-check should eliminate most of them, typographic errors occur frequently, and the number of journalists who confuse your and you’re, or their and there, is staggering. Tribune Media, CNN/Turner, ABC, Fox and MSNBC are not immune. Associated Press, Reuters and United Press International remain news leaders with accurate, verified and grammatically correct articles. With the downturn in print journalism, competent writers have been replaced with less expensive writers who are more interested in the number of bylines they can generate than in the quality of their work.

To test a theory, I put together a mock-up of a Facebook beta application. The ruse, posted on a few news sites with corroborating evidence and a ‘hot tip’ sent to two media outlets, resulted in 31 different sites picking up the post, roughly 2,700 retweets and precisely one site validating the facts.

The first site it was posted to, Hacker News, suspected it was fake almost immediately. However, they missed the significance of the names chosen, the times the other comments were posted, and the sequence of the names. Hackers indeed. A spoof post about a hamster falling into the LHC stayed within the top 210 posts for almost four days before enough ‘news’ displaced it.

In the end, it took a post from a security person at Facebook before the thread was killed. Did Facebook violate someone’s privacy to get to the bottom of this? There sure wasn’t much red tape stopping a Facebook engineer from peering into someone’s profile to investigate.

TheNextWeb suspected something was amiss and updated their post throughout the day clearly indicating the updates. Martin Bryant contacted me via email to ask quite directly whether the information was true. This is good journalism.

I suppose most of the sites that ran the story are just pulling RSS feeds from somewhere with no editorial oversight. A trusted syndicated source could distribute a hoax fairly widely and the remnants would be available on the web and search engines for years.

Do sites knowingly run incorrect headlines in search of the ad dollars associated with a hot story, hoax or not? Three sites that picked up the story clearly wanted the hysteria and hype to drive ad views.

In the end, the glut of news available at our fingertips means that the overall quality of news has diminished. Is there a solution? With automation moving at breakneck speed, this is a problem we’re going to have to deal with for quite some time. Even Google’s news site presents stories without any editorial control and would be a difficult, but not impossible, vector to exploit.

Peer-reviewed news isn’t the answer, as so many sites have proven, and editorially controlled sites contain bias no matter how independent they claim to be.

Want to design the killer app of 2010? Fix news distribution.

Facebook Pro – Facebook’s Revenue Stream

December 11th, 2009

I’ve always been an early adopter of technology, social media and new websites with a technological edge. I read quite a few of the tech news websites and love to get in on early beta offerings from companies. One of my recent favorite betas I was invited to was lite.facebook.com. On the surface it seemed to lack a certain finesse, but its biggest feature was that it was extremely quick, lacked the application spam, and let me see 99% of what I was interested in.

I’ve loved Google Voice and was a fairly early adopter. I had tried Grand Central, but it didn’t replace enough of the functionality I had set up with the local phone company. Google Wave and its Sandbox are another product I find very intriguing. I have worked with Wave federation, and I think once someone develops a killer app for Wave, it’ll gain wide acceptance.

But, this isn’t about Google, this is about Facebook.

I was an early adopter of FB Connect. I’ve written a number of unreleased applications to experiment with their API and have been generally impressed by their openness. Some of the information an application is able to access is a privacy nightmare. People complain day in and day out about Google and privacy, yet Google has to infer its market intelligence from your surfing habits, while Facebook gets you to spend hours customizing your profile, handing over precisely the information that makes its advertising system ten times more intrusive than Google’s could ever be. Back to the point.

In August I received an email from Facebook asking if I would participate in another beta project. I was warned that this one would entail a purchase from their store, but in exchange I would receive credit towards advertising. It makes perfect sense to test the payment system ahead of a major release, something many new online stores fail to do. I clicked the link saying I would be part of their beta and waited.

And waited.

Last night, a very cryptic email arrived with a link to follow to read about this exciting new product Facebook had to offer. As I read the page, I was already pulling out my wallet because the service seemed perfect for me. Maintaining both a LinkedIn profile and a Facebook profile has always been an exercise in duplication. Facebook doesn’t ask enough questions to really be useful for business, and I suspected that if they put their heads together, they could develop a new angle.

It appears they listened.

The page was very basic; it talked about the benefits of a ‘Facebook Pro’ account. Pricing hadn’t been established, but they had set a test price of $29.95 for a six-month recurring membership.

Some of the benefits listed included:

* Ability to store Work History
* Ability to write Recommendations on profiles
* Tighter control over Profile Security
* Additional Contact Method fields
* Certification badges
* Digital Business cards

[screenshot: facebook pro beta]

Once you get in, there is a small NDA that prevents screenshots of the interface, but it is obvious that there are hundreds of people in the beta. Even though I have only set up a few business interests, it is already listing profiles in a ‘Business Network’ that are staggeringly accurate, a refreshing change from the People You May Know lottery.

So far, the new options are quite intriguing, and if the quality of the business contacts I’ve made in the beta is indicative of the trend, I think Facebook has a real winner here.

I found it interesting that this beta, which allows tighter control over privacy, was released the day after Facebook rolled out new privacy options that the masses are calling anti-privacy. Perhaps this is why Facebook chose this week to release the beta.

Upgraded GFS2 Cluster Tools from 2.2 to 3.0.4

December 10th, 2009

With a few words of warning, we upgraded one of our clusters from 2.2 to 3.0.4. While this is normally a seamless project, it needed to be coordinated across both storage nodes in the cluster, since the changes to openais between 2.2 and 3.0 were incompatible. Some minor changes to the cluster config file were needed, which results in a cleaner file, and a new dependency on rgmanager was added for the upgrade to 3.0.

This meant some downtime while openais was upgraded. Since we run behind a pair of load balancers, we were able to shut down the filesystem on the first node, disconnect it from cman, and upgrade that side; then shut off the services on the other node, bring the upgraded side back up with its services, and finally upgrade the second node.
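For reference, a rough sketch of the per-node sequence. The init scripts, package names and the /gfs mount point are illustrative of our setup, not a recipe; adjust for your own distribution and layout:

/etc/init.d/rgmanager stop                 # stop clustered services on this node
umount /gfs                                # unmount the GFS2 filesystem
/etc/init.d/cman stop                      # leave the cluster
apt-get install cman rgmanager gfs2-tools  # pull in the 3.0.x openais/cman stack
vi /etc/cluster/cluster.conf               # apply the minor 3.0 config changes
/etc/init.d/cman start                     # rejoin the cluster
mount /gfs
/etc/init.d/rgmanager start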

While this should have worked, cman came up with no problem on the primary node, but the secondary node refused to start dlm_controld:

Dec 10 12:29:20 dlm_controld dlm_controld 3.0.4 started
Dec 10 12:29:30 dlm_controld cannot find device /dev/misc/lock_dlm_plock with minor 58

For some odd reason, lock_dlm_plock was created in /dev rather than /dev/misc after the udev upgrade. Moving it into place allowed cman to start on the second node and allowed the cluster to run in non-degraded mode.
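For anyone hitting the same error, the workaround amounted to roughly the following. The mknod line is only an assumed fallback if the node is missing entirely; misc devices use major 10 and the minor number comes from the dlm_controld log above:

mkdir -p /dev/misc
mv /dev/lock_dlm_plock /dev/misc/lock_dlm_plock
# or, if the device node doesn't exist at all:
# mknod /dev/misc/lock_dlm_plock c 10 58
/etc/init.d/cman start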

Why lock_dlm_plock ended up in the wrong place on one node and in the correct place on the other, I’m not sure. I suspect that, prior to rgmanager being installed, the cman init script didn’t stop when dlm couldn’t be loaded, and since the /dev/misc directory hadn’t been created yet, the device node was created in /dev. Subsequent restarts of the machine have come up without issue, so it appears to be a problem somewhere in one of the startup scripts.

No ESI processing, first char not '<'

December 1st, 2009

After installing Varnish 2.0.5 on a machine, ESI Includes didn’t work. When using varnishlog, the first error that occurred when debugging was:

No ESI processing, first char not '<'

   12 SessionClose – timeout
   12 StatSess     – 124.177.181.149 50662 4 0 0 0 0 0 0 0
   12 SessionOpen  c 68.212.183.136 60087 66.244.147.44:80
   12 ReqStart     c 68.212.183.136 60087 409391565
   12 RxRequest    c GET
   12 RxURL        c /esi.html
   12 RxProtocol   c HTTP/1.1
   12 RxHeader     c Host: cd34.colocdn.com
   12 RxHeader     c User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2b4) Gecko/20091124 Firefox/3.6b4
   12 RxHeader     c Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
   12 RxHeader     c Accept-Language: en-us,en;q=0.5
   12 RxHeader     c Accept-Encoding: gzip,deflate
   12 RxHeader     c Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
   12 RxHeader     c Keep-Alive: 115
   12 RxHeader     c Connection: keep-alive
   12 RxHeader     c X-lori-time-1: 1259718658980
   12 RxHeader     c Cache-Control: max-age=0
   12 VCL_call     c recv
   12 VCL_return   c lookup
   12 VCL_call     c hash
   12 VCL_return   c hash
   12 VCL_call     c miss
   12 VCL_return   c fetch
   12 Backend      c 14 cd34_com cd34_com
   12 ObjProtocol  c HTTP/1.1
   12 ObjStatus    c 200
   12 ObjResponse  c OK
   12 ObjHeader    c Date: Wed, 02 Dec 2009 01:50:59 GMT
   12 ObjHeader    c Server: Apache
   12 ObjHeader    c Vary: Accept-Encoding
   12 ObjHeader    c Content-Encoding: gzip
   12 ObjHeader    c Content-Type: text/html
   12 TTL          c 409391565 RFC 120 1259718659 0 0 0 0
   12 VCL_call     c fetch
   12 TTL          c 409391565 VCL 43200 1259718659
   12 ESI_xmlerror c No ESI processing, first char not '<'
   12 TTL          c 409391565 VCL 0 1259718659
   12 VCL_info     c XID 409391565: obj.prefetch (-30) less than ttl (-1), ignored.
   12 VCL_return   c deliver
   12 Length       c 68
   12 VCL_call     c deliver
   12 VCL_return   c deliver
   12 TxProtocol   c HTTP/1.1
   12 TxStatus     c 200
   12 TxResponse   c OK
   12 TxHeader     c Server: Apache
   12 TxHeader     c Vary: Accept-Encoding
   12 TxHeader     c Content-Encoding: gzip
   12 TxHeader     c Content-Type: text/html
   12 TxHeader     c Content-Length: 68
   12 TxHeader     c Date: Wed, 02 Dec 2009 01:50:59 GMT
   12 TxHeader     c X-Varnish: 409391565
   12 TxHeader     c Age: 0
   12 TxHeader     c Via: 1.1 varnish
   12 TxHeader     c Connection: keep-alive
   12 TxHeader     c X-Cache: MISS
   12 ReqEnd       c 409391565 1259718659.088263512 1259718659.127703667 0.000059366 0.039401770 0.000038385
   12 Debug        c "herding"

ESI received significant performance enhancements in 2.0.4 and 2.0.5, so it seemed something was incompatible. Downgrading to 2.0.3 and using the VCL from another machine still resulted in ESI not working.

In this case, mod_deflate was running on the backend, which was causing the issue. However, in reading the source code, it appears the message can also occur if your ESI include isn’t handing back properly formed XML/HTML content. If your include doesn’t return valid markup and is only returning a small snippet, you might consider passing:

-p esi_syntax=0x1

on the command line that starts Varnish.
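As an alternative to turning mod_deflate off entirely, here is a minimal VCL sketch for Varnish 2.0, using the /esi.html test page from the log above; adapt the URL match to your own pages. It keeps the backend from gzipping the pages Varnish must parse and enables ESI only where it is needed:

sub vcl_recv {
    if (req.url == "/esi.html") {
        # the 2.0 ESI parser can't read gzipped responses, so don't
        # advertise gzip support for pages it has to parse
        remove req.http.Accept-Encoding;
    }
}

sub vcl_fetch {
    if (req.url == "/esi.html") {
        # only run the ESI parser on objects that actually contain includes
        esi;
    }
}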

The changes in Varnish address the issue of ESI being enabled on binary content. Since the first character isn’t a < in almost all binary files (jpg, mpg, gif) and isn’t the start of most .css/.js files, Varnish doesn’t need to spend extra time checking those files for includes. While you can and should selectively enable ESI processing, this is an added safeguard and a performance boost to compensate for VCL that might apply an ESI directive to static/binary content.

Since Varnish 2.0.3 now worked properly with the new machine, we upgraded to Varnish 2.0.5, which introduced a very odd issue:

[Tue Dec 01 20:58:11 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.htmlt
[Tue Dec 01 20:58:13 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html7
[Tue Dec 01 20:58:24 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html\xfa
[Tue Dec 01 20:59:01 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html\xb5
[Tue Dec 01 20:59:06 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html\xe7
[Tue Dec 01 20:59:07 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html\xd4
[Tue Dec 01 20:59:08 2009] [error] [client 66.244.147.40] File does not exist: /gfs/www/cd/cd34.com/index.html\x1c

This generated 404s on the piece of the page that contained the ESI include. Downgrading to 2.0.4 fixed the issue, and the bug already appears to be fixed in trunk. Varnish Ticket #585

Varnish 2.0.4 and disabling mod_deflate addressed the two issues that prevented ESI from working correctly on this new installation.

Social Gaming Design Requirements

November 17th, 2009

Over the past few years, social gaming has become very popular, yet many of the games don’t encompass the elements I believe should be present to make a game a larger success.

* Easy to learn. A game whose simple mechanics take only minutes to learn will attract players. Design the game to have a moderate progression that can be sped up with virtual cash. Alternatively, the game can be a blitz version of a full-length game that can be purchased.

* Visibly show friends’ scores on the play screen. A list of the top 10 friends, ordered so the player can plainly see where they stand relative to their friends, is a must.

* Long-tail game. A game that is very engaging at the start but requires less time to maintain as time goes on. This can be done through game or item upgrades that take longer to obtain as the level increases. A penalty for abandoning their empire, even something as fatal as making the player start over after seven days of inactivity, is enough to keep a somewhat casual user engaged. Don’t overdo it: someone who loses everything is unlikely to return. 90% of your revenue is earned within the first week of the player joining the game. However, if your game has multiple rounds, a player might play for free, learn some of the strategy, and then make a purchase for the second round. A player who doesn’t understand the rules early on and makes mistakes also has a much higher chance of purchasing in-game currency once they’ve figured out the game.

* Friend linking. Prevent progression without spending virtual cash or recruiting more friends. Limiting gameplay to a subset of the entire game, or giving bonuses based on the number of friends recruited, allows new players to be brought in who bring more potential income. Those new players have the same limits and must grow their own pyramid in order to advance. Encouraging social play grows the game much more quickly. If you allow a solo player to advance at an increased cost, you might entice them enough to invite friends.

* Make it easy to invite friends. Some games require you to link your real account with ‘friends’ in order to grow your social circle. While this appears to be beneficial, your real goal is to grow the total number of users. Some people aren’t comfortable adding friends just to grow their social circle and will remove them. If you have a number of games, it might be advantageous to require the friend connection so that you can see which other games your ‘friends’ are playing. The downside is when a player cleans up their friend list and removes the friends they added specifically for the game.

* Notifications. Letting a user’s friends know that they have gained a level, a special item, or a promotion instills a sense of competitiveness, especially when someone obtains a rare loot item or achievement while a friend hasn’t been playing.

While there are other aspects to social gaming, designing your game with these six points in mind will enable you to monetize your application more easily. Unless you’re building the game purely as a hobby, your goal is to make money by persuading users to purchase the in-game virtual currency. It’s a numbers game: a small percentage of your users will actually pay to play, and you need to tailor your game to increase that percentage as much as possible.

When you first deploy your game, you will need to know many things about your visitors. You want to know how many people hit the front page, how many hit the join page, and where they came from. You want to track the number of people who actually sign up, how much activity each one has, and which people convert to a paid subscription. Track your ad buys; track views/impressions, clicks, click-to-signup, and click-to-paid-subscription rates. Understand those numbers and learn how to manipulate them by changing your signup process, modifying the game, or adjusting how your in-game currency is used.

Writing the game is a minor piece of the battle. Building market share without thinking about revenue is a fast way to go into debt and leave yourself in a situation where you quickly grab any VC/angel money and give up too much of the company. Think about how you will monetize the game and start charging early on. If your players are used to a free-to-play environment and you suddenly switch, many of them will abandon the game.
