Archive for the ‘Programming Languages’ Category

Python, Perl and PHP interoperability with pack and unpack

Monday, April 27th, 2009

Perl's pack and unpack are very powerful tools for dealing with binary structures.  PHP's pack and unpack were modeled on Perl's.  Python went in a different direction.

Perl pack/unpack definitions

PING_FORMAT => '(a4n2N2N/a*)@245';
TASK_FORMAT => 'a4NIN/a*a*';
RETR_FORMAT => 'a4N/a*N';
ENCPAYLOAD_FORMAT => 'Na*';

PHP pack/unpack definitions

define('TASK_FORMAT', 'a4NINa*a*');
define("ENCPAYLOAD_FORMAT", 'Na*');

These formats come from a communications package written in Perl that talks to 32-bit and 64-bit machines which may not share the same endianness.  The problem I've run into now is that Python's struct module does not support the Perl-style templates, and I don't know why it doesn't at least offer some compatibility.  pack and unpack give enormous power to communication systems between machines, and PHP's support of the Perl templates allowed for reasonable interoperability between the two platforms.

Python, on the other hand, opted not to support some of the features, which is one issue; the bigger problem is that struct has no way to describe a variable-length field, so you cannot unpack variable-length packets directly.

In Python, we're able to replicate N (an unsigned 32-bit value in network byte order) by using !L:

>>> import struct
>>> print struct.unpack('!L', '\0\0\1\0')
(256,)

However, there is no format code for a variable-length payload following that value.  We can specify a fixed length such as 5s, but that means we have to know the length of the payload being sent ahead of time.

>>> print struct.unpack('!L5s', '\0\0\1\0abcde')
(256, 'abcde')

If we overstate the size of the field, Python is more than happy to tell us that the payload length doesn’t match the length of the data.

>>> print struct.unpack('!L8s', '\0\0\1\0abcde')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.5/struct.py", line 87, in unpack
    return o.unpack(s)
struct.error: unpack requires a string argument of length 12

The Cheese Shop/PyPI seems to show no suitable alternative, which brings up a quandary.  For this particular solution, I'll write a wrapper function to do the heavy lifting on the two unpack strings I need to deal with, and then I'll debate pulling the pack/unpack routines out of the Perl source and wrapping them into an .egg, possibly for distribution.
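
For the simple 'N/a*' case (a network-order length followed by that many bytes of payload), the wrapper is straightforward to sketch in Python 2 style to match the examples above.  This is an illustration of the approach, not the production wrapper, and the function names are mine; the more exotic templates like '(a4n2N2N/a*)@245' need additional handling.

import struct

def pack_n_a(payload):
    # Rough equivalent of Perl's pack('N/a*', $payload):
    # a network-order 32-bit length followed by the payload bytes.
    return struct.pack('!L', len(payload)) + payload

def unpack_n_a(data):
    # Rough equivalent of unpack('N/a*', $data).
    # Returns (payload, rest) so further fields can be unpacked from 'rest'.
    if len(data) < 4:
        raise ValueError('need at least 4 bytes for the length prefix')
    (length,) = struct.unpack('!L', data[:4])
    if len(data) < 4 + length:
        raise ValueError('payload shorter than its declared length')
    return data[4:4 + length], data[4 + length:]

A round trip such as unpack_n_a(pack_n_a('abcde')) returns ('abcde', ''), which is the behavior the Perl side expects for a length-prefixed field.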

From File_DB to BerkeleyDB

Thursday, April 2nd, 2009

File_DB lacked decent file locking and concurrency.  I wasn't really willing to move to MySQL, which would have solved the problem but added a few minor inconveniences along the way; I only needed to store a few thousand bytes for a number of seconds.  While File_DB was wrapped with file locking and assisted by my own lock routine, it lacked truly concurrent access, which I felt was leading to some of the issues we were seeing.

However, a relatively painless conversion from File_DB to BerkeleyDB did not solve the problem completely.  The error I was addressing is now much harder to trigger in normal use, but I am still able to reproduce it with a small test script.

The documentation for the Perl interface to BerkeleyDB is a bit sparse on setting up CDB (Concurrent Data Store), but after digging through it and a few examples on the net, I ended up with code that did indeed work consistently.

Since CDB isn’t documented very well, I ended up with the following script to test file locking and ensure things worked as expected.

#!/usr/bin/perl

use strict;
use warnings;

use Data::Dumper;
use BerkeleyDB;

my %hash;
my $filename = "filt.db";
unlink $filename;

# Create an environment with the Concurrent Data Store (CDB) subsystem.
# No -Home is set, so the __db.* region files land in the current directory.
my $env = new BerkeleyDB::Env
    -Flags => DB_INIT_CDB | DB_INIT_MPOOL | DB_CREATE;

# Tie the hash to a BerkeleyDB Hash database inside that environment.
my $db = tie %hash, 'BerkeleyDB::Hash',
    -Filename => $filename,
    -Flags    => DB_CREATE,
    -Env      => $env
    or die "Cannot open $filename: $! $BerkeleyDB::Error\n";

# Take the CDB write lock; other writers block until cds_unlock().
my $lock = $db->cds_lock();

$hash{"abc"} = "def";
my $val = $hash{"ABC"};
# ...
sleep(10);    # hold the lock so a second copy of this script can be seen blocking

print Dumper(\%hash);
$lock->cds_unlock();
undef $db;
untie %hash;

Path issues caused most of the problems, as did earlier tests not actually clearing out the __db.* region files and filt.db.  One test got CDB working; I then modified a few things and didn't realize I had actually broken CDB creation, because the files from the earlier run were still present.  Once I moved the script to another location, it failed to work.  A few quick modifications and I was back in business.

Perhaps this will save someone a few minutes of time debugging BerkeleyDB and Perl.

-----

It turned out my code didn't work properly in production due to a logic error in the way I handled deletions, my workaround for the fact that BerkeleyDB doesn't let you delete a single record when a key has duplicates.  After diagnosing that and fixing it with a little bit of code, 125 successive tests completed 100% of the time.  I've pushed it to a few machines and will monitor it, but I do believe that BerkeleyDB has fixed the issues I was having with File_DB.
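
I haven't posted the actual fix, but the shape of the workaround is easy to sketch: give each stored record a unique storage key derived from the logical key, so deleting one record never touches its siblings.  A rough Python illustration against any dict-like store (the names here are mine, not the production code):

_seq = 0

def unique_key(logical_key):
    # Append a monotonically increasing suffix so duplicate logical keys
    # become distinct storage keys.
    global _seq
    _seq += 1
    return "%s:%d" % (logical_key, _seq)

def store(db, logical_key, payload):
    k = unique_key(logical_key)
    db[k] = payload
    return k

def find_one(db, logical_key):
    # Return (storage_key, payload) for one record with this logical key.
    prefix = logical_key + ":"
    for k in db.keys():
        if k.startswith(prefix):
            return k, db[k]
    return None

def delete_one(db, logical_key):
    # Delete exactly one record, leaving any other duplicates in place.
    found = find_one(db, logical_key)
    if found is None:
        return False
    del db[found[0]]
    return True

Something equivalent can be done against the tied BerkeleyDB hash while holding cds_lock().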

Embedded indexing versus Client/Server

Saturday, March 28th, 2009

For a particular application, I require temporary persistent storage of some data.  Each record consists of a key and a payload, and the key can be a duplicate, which is what causes the problem.

File_DB in Perl handles duplicates, and I can delete a key/value pair without too much difficulty.  However, File_DB does not handle file locking very well, which created concurrency issues with the threaded daemon.

SQLite3 had no problem with duplicates and can be compiled with the DELETE ... LIMIT clause enabled, which handles duplicate keys easily.  Rather than recompile Debian's packaged SQLite3, I made a slight modification to the code on my side so that I could do further testing.  Due to a few issues with threading and a potential issue with storing and retrieving binary data in Perl, I needed to reevaluate.

BerkeleyDB solves a few problems: it supports concurrency and proper file locking.  A minor limitation is that duplicate keys are not handled well when you want to delete a single record.  It will require a rewrite of some functionality to use BerkeleyDB, but I believe that solution will provide the least potential for failures.

I could have used MySQL, which I am very comfortable with, but the data really only needs to persist for a few minutes in most cases, and the amount stored is 10-20K at most.  With MySQL's client timeout, I couldn't really guarantee everything would work every time without writing in considerable error checking.  While MySQL would handle everything perfectly, it was overkill for the task at hand.

I’m rewriting the File_DB methods to use BerkeleyDB and modifying the saved data slightly to work around the key delete issue.

It should work and should raise the reliability of this process from 99.2% to 99.9%, which will be a considerable improvement.

Multithreaded madness

Monday, March 23rd, 2009

An application I wrote long ago that used File_DB for short-term persistent multithreaded storage had a few issues with concurrency.  I debated rewriting the script to use BerkeleyDB, which includes proper file locking, but decided to use SQLite3 instead, as it speaks SQL and would eliminate a bit of code.

The transition was relatively easy.  Writing self-test functions worked well and a few bugs were dealt with along the way.  Most of the issues were getting used to SQLite3's quirks, but all in all the code worked fine.  Multiple tests across multiple terminal windows showed everything working as expected, including table locking and concurrency, and a logic error from the prior application was removed along the way.

The first startup of the application produced a rather strange result that didn't make a lot of sense.  I chalked it up to something I had done during testing, deleted the SQLite3 database file and restarted the application.

Success.

The application started, set itself up as a daemon and detached from the terminal.  I sent a task to the daemon, and bam.  It seemed to work, but it complained of a length error in the unpack, which meant some data didn't get retrieved correctly from the database.  A second task produced an even stranger error.  Connecting to the database with the sqlite3 command line resulted in:

sqlite> select * from tasks;
SQL error: database disk image is malformed

Ok, something broke badly.

I checked and double-checked my code with Perl running in strict mode and could find nothing that would cause this.  It seems that the SQLite3 library behind Debian's packaged Perl bindings is not compiled with threading enabled.

Oops.

I missed that when I was digging through the library configs and will have to build that package for testing.  I did want to avoid the BerkeleyDB library and move to SQLite3, but in the interest of finishing this project a little sooner, I will rewrite the code and adjust the locking for now, and use SQLite3 in the future.

SQLite3 works very well with SQLAlchemy and TurboGears, but in this case it didn't quite solve the problem I needed solved.

Professional MySQL Developer?

Monday, March 9th, 2009

Once in a while I get to troubleshoot problems on machines, specifically with MySQL.  Sometimes that task is difficult because the vendor-written code has been Zend Encoded: perhaps in the name of performance, possibly to protect the code, or to prevent me from seeing just how bad it is.  Diagnosing MySQL issues forensically requires a few scripts, patience, and time.

This time, I modified /etc/mysql/my.cnf on two machines and added the following:

log        = /var/log/mysql/mysql.log
log-slow-queries= /var/log/mysql/mysql-slow.log
log-queries-not-using-indexes

This gave me three things: a full log of every query executed, a log of queries taking more than 10 seconds (adjustable with long_query_time), and a log of any query that doesn't use an index.  I have written a small program to normalize and sort the full query log so I can focus on the queries that take 90% of the time, as sketched below.  Usually I focus on the unindexed queries, as those generally cause the problems; very few people understand the impact of table locks, and that is usually where most problems occur.
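
The normalizer isn't anything fancy; the idea is just to strip literal values out of each logged statement so queries that differ only in their parameters collapse into one bucket, then count and sort the buckets.  A rough sketch of that approach (this is an approximation of my script, not the script itself, and it assumes roughly one statement per log line):

import re
import sys
from collections import defaultdict

# Collapse literals so queries that differ only in their values
# normalize to the same shape.
NUMBER = re.compile(r"\b\d+\b")
STRING = re.compile(r"'(?:[^'\\]|\\.)*'")
SPACES = re.compile(r"\s+")

def normalize(query):
    q = STRING.sub("'?'", query)
    q = NUMBER.sub("?", q)
    return SPACES.sub(" ", q).strip().lower()

def main(path):
    counts = defaultdict(int)
    for line in open(path):
        line = line.strip()
        # Skip blank lines and the slow-log metadata/comment lines.
        if not line or line.startswith("#") or line.lower().startswith("use "):
            continue
        counts[normalize(line)] += 1
    # Most frequent query shapes first; show the top 20.
    for query, count in sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:20]:
        print count, query

if __name__ == "__main__":
    main(sys.argv[1])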

The first example comes from an application which claims to be able to support 100,000 visitors per day.  Perhaps with enough hardware behind it, it can, but in its current state, on a quad-core Xeon, about 40,000 per day is all it can push.

Where do we start?  So few lines, so many errors.

CREATE TABLE `iplog` (
  `ip` varchar(255) NOT NULL default '',
  `pid` bigint(20) NOT NULL default '0',
  `viewed` varchar(255) NOT NULL default '',
  `locked` tinyint(1) NOT NULL default '0',
  PRIMARY KEY (`ip`,`pid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The above schema replicates functionality from the prior version of the software, but the old version didn't do things this way, so I am unsure why the developer chose to alter his habits and break things in so many ways.

ip varchar(255)

An IPv4 address in dotted-quad form takes at most 15 characters.  char(15) would have been far more efficient, storing the address (dots included) in 15 bytes; even varchar(15) would only take 16 bytes (a length byte plus the data).  But varchar(255) just takes the cake.  The proper way to store an IPv4 address is as an unsigned int, which takes 4 bytes.  Beyond being more space efficient than varchar(15) or char(15), a fixed 4-byte integer key keeps the B-tree index much smaller and better balanced.
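
MySQL can do the conversion itself with INET_ATON() and INET_NTOA(), and it's a one-liner from application code too.  A small Python illustration (the function names are mine), which also ties back to the struct formats from the earlier post:

import socket
import struct

def ip_to_uint(ip):
    # '192.168.0.1' -> 3232235521 (fits in an unsigned INT column)
    return struct.unpack('!L', socket.inet_aton(ip))[0]

def uint_to_ip(n):
    # 3232235521 -> '192.168.0.1'
    return socket.inet_ntoa(struct.pack('!L', n))

With the column declared as ip INT UNSIGNED NOT NULL, the primary key on (ip, pid) shrinks dramatically compared to varchar(255).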

Then, let's add an index to the IP field.  Knowing that indexes space-pad keys to their full length, the MySQL index file on disk is actually larger than the data file itself.

I might object to the use of InnoDB for a table of questionable value.  The overhead of InnoDB for data that is only useful for a few minutes seems like overkill, but we'll give some credit for data integrity.

That really wasn’t the worst offender.  Another application written by another professional programmer included roughly 80 queries to display one page.  Once you dig deeper into the code, you find queries like this:

# Query_time: 0  Lock_time: 0  Rows_sent: 1  Rows_examined: 984
SELECT count(VID) AS total_videos FROM video WHERE type='public' AND ( keyword LIKE '%keyword%' OR keyword LIKE '%keyword%' OR keyword LIKE '%keyword%' OR keyword LIKE '%keyword%');

First, VID is auto_increment and NOT NULL.  count(VID) makes MySQL examine VID in every matching row to check whether it is NULL; since the table definition guarantees it cannot be, SELECT count(*) would have been faster.

Second, they split the input into words and wrapped each in % so they could do wildcard substring matches, rather than using a FULLTEXT index, which would have been much faster.  The table only has 984 rows now, but once the site starts growing they'll start to see more performance issues; a FULLTEXT index would allow for quite a bit more growth.
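
Assuming the video table is MyISAM (or could be made MyISAM) with a FULLTEXT index added on keyword (an assumption on my part; I only see the queries in the log), the whole chain of LIKEs collapses into a single indexed match.  A sketch from the application side using MySQLdb; the table and column names come from the logged query, everything else is illustrative:

import MySQLdb

def count_public_videos(conn, search_terms):
    # Requires: ALTER TABLE video ADD FULLTEXT INDEX ft_keyword (keyword);
    # (MyISAM only in this era of MySQL; InnoDB gained FULLTEXT much later.)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT COUNT(*) FROM video"
        " WHERE type = 'public'"
        " AND MATCH(keyword) AGAINST (%s IN BOOLEAN MODE)",
        (" ".join(search_terms),))
    return cursor.fetchone()[0]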

While the following query isn’t optimizable, the results could be cached.

# Query_time: 0  Lock_time: 0  Rows_sent: 5  Rows_examined: 155
SELECT * from channel ORDER BY rand() LIMIT 5;

As a result of the ORDER BY rand(), MySQL will need to do a full table scan each and every time this query is called.  In this case, twice for each pageload.
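
Even a short-lived cache would help: the five "random" channels only need to be recomputed every so often, not twice per page view.  A minimal time-based sketch (the function names and the 60-second TTL are mine, not from the application):

import time

_channel_cache = {"expires": 0.0, "rows": None}

def random_channels(cursor, ttl=60):
    # Re-run the ORDER BY rand() query at most once per ttl seconds;
    # everything in between is served from memory.
    now = time.time()
    if _channel_cache["rows"] is None or now >= _channel_cache["expires"]:
        cursor.execute("SELECT * FROM channel ORDER BY rand() LIMIT 5")
        _channel_cache["rows"] = cursor.fetchall()
        _channel_cache["expires"] = now + ttl
    return _channel_cache["rows"]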

Another particularly interesting query:

select tradeId from trades2 where tradeDomain = 'noref';

Taken on its own, this just looks like tradeDomain was omitted from the indexes.  However, the table name gives some indication of the mess.  Here's another query that ran without an index:

select tradeId from trades28 where tradeDomain = 'noref';

There are a total of 50 trades tables.  This corresponds to the number of 'sites' in the sites table.  It seems our developer couldn't quite figure out how to index on two fields; a single trades table with a composite index on a site column and tradeDomain would have done the job.

In the last 4 hours, the unindexed queries from this one site have resulted in the following:

-rw-r----- 1 mysql adm 10869491 2009-03-09 10:53 mysql-slow.log

Roughly 54,000 queries in 4 hours had no index.  Compared to yesterday's logs, before I added a number of indexes, it is a world of difference.

However, fixing the issues in the code is still more expensive than throwing hardware at the problem.  The trouble with using hardware as a crutch is that eventually you reach an impasse where the problem must be fixed, or you face other bottlenecks.

After a few changes, mostly added indexes, the load on those two machines has dropped considerably.  Each of the vendors that wrote the systems I analyzed claims to handle well more traffic than either machine is currently handling.  Those claims were written on their web sites roughly 3 years ago, and the machines the software currently runs on are much newer and much more powerful than they were when it was installed.

One of the software vendors is out of business due to too many support issues.  One software vendor continues churning out bad code and getting people to pay $600 for it.  The third developer disappeared and was replaced by a team of two programmers who appear to know what they are doing and are releasing version 5.0 (the current, latest version is 3.1) 'soon'.  I know it will be soon because their web site, last updated in July 2008, says they are in limited beta.

I guess it is time to order some hardware.
