Posts Tagged ‘mysql’

User Interface Design

Wednesday, June 24th, 2009

Programmers are not designers. Technical people should not design User Interfaces.

* 810 source files
* 90658 lines of code
* 10213 lines of html

The project is an internal application that was handed to a series of programmers over the years without enough oversight, and it has become a mass of undocumented code in multiple programming styles. PHP allowed lazy programming, Smarty lacked some of the finesse required, and so the user interface suffered. It is functional, but confusing to anyone who hadn’t worked intimately with the interface or been walked through it.

The truest statement I can make is that it is easier for me to do things through the MySQL command line than through the application. While working that way invites the occasional typo, it has altered our SQL practices here.

update table set value=123 where othervalue=246;

could have an accidental typo of

update table set value=123 where othervalue-=246;

which would have completely unintended consequences. One such typo altered the DNS entries for 48,000 records. Shortly after that typo, it became ingrained in company policy that a query like that was never to be executed on the command line again, regardless of how simple the command.

Even within code, the above command would be entered as:

update table set value=123 where othervalue in (246);

This prevented a number of potential typos. LIMIT clauses on deletions were likewise enforced to make sure a stray statement couldn’t go too haywire.

With Python, indentation is mandatory, which results in multiple programmers’ code looking similar and being easier to troubleshoot. By using SQLAlchemy, which enforces bind variables when talking to the database engine, we’ve eliminated the potential for a typo updating too many records, and cascade deletes are enforced by SQLAlchemy even when running on top of MyISAM. With MVC, our data model is much better defined, and we’re not tied to remembering the relationships and dependencies between tables.

Converting the existing MySQL database to a DeclarativeBase model hasn’t been without issues, but a simple Python program generated a first-pass model that took care of most of them. Hand-tweaking the database model while developing the application has given quite a bit of insight into issues that had been worked around in code rather than fixed in the database.
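As a rough sketch of the bind-variable point (the table and column names below are invented, not the real schema), the earlier command-line update becomes a query against the declarative model, and the literal values travel as bound parameters rather than as text pasted into the statement:

from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Record(Base):
    __tablename__ = 'records'            # hypothetical table
    id = Column(Integer, primary_key=True)
    othervalue = Column(Integer)
    value = Column(Integer)

engine = create_engine('mysql://user:password@localhost/appdb')
Session = sessionmaker(bind=engine)
session = Session()

# Equivalent of: update records set value=123 where othervalue in (246)
# SQLAlchemy builds the WHERE clause itself and binds 123 and 246 as parameters,
# so a stray character in a hand-typed WHERE clause is no longer a risk.
session.query(Record).filter(Record.othervalue.in_([246])).update(
    {'value': 123}, synchronize_session=False)
session.commit()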

Fundamental design issues in the database structure were worked around with code rather than fixed. Data that should have been retained was not, and relationships between tables were defined in code rather than in the database, which led to a painful conversion.

When it was decided to rewrite the application in Python using TurboGears, I wasn’t that familiar with the codebase or the user interface. Initially the plan was to copy the templates and write a backend engine to power them. After a few hours running through the application and attempting the conversion on a number of templates, I realized that while the application was functional, it was extremely difficult to use in its current state. So much for having a programmer design an interface.

Some functionality from the existing system was needed, so I peered into the codebase and was unprepared for the surprise. At this point it became evident that a non-programmer had designed the interface. While Smarty was a decent template language, it was not a form tool, so methods were devised to give a consistent user experience when dealing with error handling. A single PHP file was responsible for display, form submission, validation, and writing to the database for each ‘page’ in the application. The code inside should have been straightforward:

* Set up default CSS classes for each form field for an ‘ok’ result
* Validate any passed values and set the CSS class as ‘error’ for any value that fails validation
* Insert/Update the record if the validation passes
* Display the page

Some validation takes place numerous times throughout the application, and for some reason one of the ‘coders’ decided that copying and pasting from another function that used the same validation code was better than writing a single function to do the validation. Of course, when that validation method needed to be changed, it had to be changed in eight places.

So, what should have been somewhat simple has changed considerably:

* Evaluate each page
* Redesign each page to make the process understandable
* Adjust terminology to make it understandable to the application’s users
* Modify the database model
* Rewrite the form and validation

A process that should have been simple has turned into quite a bit more work than anticipated. Basically, development boils down to looking at each page, figuring out what it should be, pushing the buttons to see what they do, and rewriting it from scratch.

TurboGears has added a considerable amount of efficiency to the process. One page that dealt with editing a page of information was reduced from 117 lines of code to 12. Since TurboGears uses ToscaWidgets and FormEncode, validation and form presentation are removed from the code, leaving a controller that writes validated input to the tables in the database. Since FormEncode already has 95% of the validators needed for this project, we can rest assured that someone else has done the work to make sure each field is properly validated. The remaining validators can be maintained and tested locally, but defined in such a manner that they are reused throughout the application rather than cut and pasted into each model that validates data. In addition, bugs should be much less frequent as a result of the much-reduced codebase.
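A rough sketch of how that reuse works (the field names here are invented for illustration): a FormEncode Schema is defined once, and every controller that accepts the form runs submitted values through it, instead of each page carrying its own copy of the checks.

import formencode
from formencode import validators

class HostForm(formencode.Schema):
    hostname = validators.String(not_empty=True, max=64)
    contact = validators.Email(not_empty=True)
    ttl = validators.Int(min=60, max=86400)

schema = HostForm()

# Valid input comes back converted to Python types.
clean = schema.to_python({'hostname': 'www', 'contact': 'admin@example.com',
                          'ttl': '300'})   # clean['ttl'] is the integer 300

# Invalid input raises formencode.Invalid with per-field messages that
# ToscaWidgets can render next to the offending fields.
try:
    schema.to_python({'hostname': '', 'contact': 'not-an-address', 'ttl': 'x'})
except formencode.Invalid as e:
    print(e.unpack_errors())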

Due to the MVC framework and the libraries selected by the TurboGears developers, I wouldn’t be surprised if the new codebase ends up 10%-15% the size of the existing application with greater functionality. The code should also be more maintainable, as Python enforces some structure, which increases readability.

While I am not a designer, even just using ToscaWidgets and makeform the interface is much more consistent. Picking the right words, adding appropriate help text to the fields, and making sure things work as expected has resulted in a much cleaner, more understandable interface.

While there are some aspects of ToscaWidgets that are a little too structured for some pages, our current strategy is to develop the pages using ToscaWidgets or makeform to make things as clear as possible, making notes to overload the Widget class for our special forms at a later date.

While it hasn’t been a seamless transition, it did provide a good opportunity to rework the site and see a number of the problems that the application has had for a long time.

Embedded indexing versus Client/Server

Saturday, March 28th, 2009

For a particular application, I require temporary persistent storage of some data.  That data consists of a key value and a payload.  That key value can be a dupe, which is what causes the problem.

File_DB in Perl handles duplicates, and I can delete a key/value pair without too much difficulty. However, file locking is not handled very well with File_DB, which created concurrency issues with the threaded daemon.

SQLite3 had no problem with duplicates and could be compiled with the delete from/limit clause to handle duplicate keys easily. Rather than recompile the SQLite3 packaged in Debian, I made a slight modification to the code on my side so that I could do further testing. Due to a few issues with threading and a potential issue with storing binary data and retrieving it in Perl, I needed to reevaluate.
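As an aside, a single row among duplicate keys can also be deleted with the stock, unpatched SQLite3 by targeting the rowid of one matching row; a minimal sketch in Python (not necessarily the approach used here):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE queue (k TEXT, payload BLOB)')
conn.execute("INSERT INTO queue VALUES ('job', 'first payload')")
conn.execute("INSERT INTO queue VALUES ('job', 'second payload')")

# Delete exactly one of the duplicate 'job' rows without DELETE ... LIMIT.
conn.execute("DELETE FROM queue WHERE rowid = "
             "(SELECT rowid FROM queue WHERE k = ? LIMIT 1)", ('job',))

print(conn.execute("SELECT count(*) FROM queue WHERE k = 'job'").fetchone()[0])  # prints 1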

BerkeleyDB solves a few problems: it supports concurrency and proper file locking, but a minor limitation is that duplicate keys are not handled well when you want to delete a key. It will require a rewrite of some functionality to use BerkeleyDB, but I believe that solution will provide the least potential for failures.

I could have used MySQL, which I am very comfortable with, but the data really only needs to be stored for a few minutes in most cases, and the amount stored is 10-20K at most. With MySQL’s client timeout, I couldn’t really guarantee everything would work every time without writing considerable error checking. While MySQL would handle everything perfectly, it was overkill for the task at hand.

I’m rewriting the File_DB methods to use BerkeleyDB and modifying the saved data slightly to work around the key delete issue.
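A minimal sketch of that workaround, using Python’s bsddb bindings for illustration (the real code is Perl, and the names here are made up): the stored key gets a serial number appended, so each key/payload pair is unique and can be deleted individually even when the logical key repeats.

import bsddb   # Python 2.x Berkeley DB bindings

store = bsddb.btopen('/tmp/pending.db', 'c')   # hypothetical file

def put(key, payload):
    # Append a serial number so duplicate logical keys become unique stored keys.
    serial = 0
    while '%s:%d' % (key, serial) in store:
        serial += 1
    store['%s:%d' % (key, serial)] = payload

def delete_one(key):
    # Remove a single payload for a logical key, leaving other duplicates intact.
    for stored_key in store.keys():
        if stored_key.startswith(key + ':'):
            del store[stored_key]
            return True
    return False

put('example.com', 'first payload')
put('example.com', 'second payload')
delete_one('example.com')   # removes only one of the two entries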

It should work and should raise the reliability of this process from 99.2% to 99.9% which will be a considerable improvement.

Professional MySQL Developer?

Monday, March 9th, 2009

Once in a while I get to troubleshoot problems on machines, specifically with MySQL. Sometimes that task is difficult because the vendor-written code has been Zend Encoded: perhaps in the name of performance, possibly to protect his code, or possibly to prevent me from seeing just how bad his code is. Diagnosing MySQL issues forensically requires a few scripts, patience, and time.

This time, I modified /etc/mysql/my.cnf on two machines and added the following:

log        = /var/log/mysql/mysql.log
log-slow-queries= /var/log/mysql/mysql-slow.log
log-queries-not-using-indexes

This gave me three things: a full log of every query executed, a log of queries that take more than 10 seconds (a threshold that can be altered by adjusting long_query_time), and a log of any query that doesn’t use an index. I have written a small program to normalize the queries and sort them so I can focus on the queries that take 90% of the time. Usually I will focus on the unindexed queries, as those generally cause the problems. Very few people understand the impact of table locks, and that is usually where most problems occur.
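The normalization program itself is nothing fancy; a rough sketch of the idea (not the actual script) is to strip the literal values out of each query so identical query shapes can be grouped and ranked by the time they consume:

import re
from collections import defaultdict

def normalize(query):
    q = re.sub(r"'[^']*'", "'?'", query)   # collapse quoted string literals
    q = re.sub(r"\b\d+\b", "?", q)          # collapse numeric literals
    return re.sub(r"\s+", " ", q).strip().lower()

totals = defaultdict(float)
samples = [("SELECT * FROM video WHERE VID = 42", 0.8),
           ("SELECT * FROM video WHERE VID = 97", 1.3)]
for query, seconds in samples:
    totals[normalize(query)] += seconds
# Both queries collapse to "select * from video where vid = ?",
# so their time is summed under a single entry.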

The first example comes from an application which claims to be able to support 100,000 visitors per day. Perhaps it can with enough hardware behind it, but in its current state, on a quad-core Xeon, about 40,000 per day is about all it can push.

Where do we start?  So few lines, so many errors.

CREATE TABLE `iplog` (
`ip` varchar(255) NOT NULL default '',
`pid` bigint(20) NOT NULL default '0',
`viewed` varchar(255) NOT NULL default '',
`locked` tinyint(1) NOT NULL default '0',
PRIMARY KEY  (`ip`,`pid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The above schema replicates functionality in the prior version of the software.  The old version of the system didn’t do things this way, so, I am unsure why the developer chose to alter his habits and break things in so many ways.

ip varchar(255)

An IPv4 address currently takes at most 15 characters. char(15) would have been more efficient, taking 16 bytes to store a 15-character IP address including the dots. Even varchar(15) would have taken 20 bytes to store the 15-character IP address. But varchar(255) just takes the cake. The proper way to store an IPv4 address is as an unsigned int. Even setting aside the fact that varchar(15) or char(15) would have been more space efficient, using an unsigned int allows the B-tree that MyISAM creates to be a lot more balanced.
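The conversion is trivial: on the SQL side MySQL provides INET_ATON() and INET_NTOA(), and in application code it is only a few lines (sketched here in Python):

import socket
import struct

def ip_to_int(ip):
    """Dotted quad to unsigned 32-bit integer: '192.0.2.10' -> 3221225994"""
    return struct.unpack('!I', socket.inet_aton(ip))[0]

def int_to_ip(n):
    """Unsigned 32-bit integer back to a dotted quad."""
    return socket.inet_ntoa(struct.pack('!I', n))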

Then, let’s add an index to the IP field. Knowing that indexes space-pad keys to their full length, the MySQL index file on disk is actually larger than the data file itself.

I might object to the use of InnoDB on a table with questionable value.  The overhead for InnoDB on data that is valuable only for a few minutes seems like overkill, but, we’ll give some credit for data integrity.

That really wasn’t the worst offender.  Another application written by another professional programmer included roughly 80 queries to display one page.  Once you dig deeper into the code, you find queries like this:

# Query_time: 0  Lock_time: 0  Rows_sent: 1  Rows_examined: 984
SELECT count(VID) AS total_videos FROM video WHERE type='public' AND ( keyword LIKE '%keyword%' OR keyword LIKE '%keyword%' OR keyword LIKE '%keyword%' OR keyword LIKE '%keyword%');

First, VID is auto_increment and NOT NULL. count(VID) causes MySQL to scan the result set and check VID on every row to see if it is NULL. Due to the table construction it can never be NULL, so SELECT count(*) would have been faster.

Second, they disassembled the input and wrapped each term in % so that they could do floating matches, rather than using a fulltext index, which would have been much faster. The table only has 984 rows now; once they start growing the site, they’ll start to see more performance issues. A fulltext index would allow for quite a bit more growth.
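A sketch of what the fulltext approach could look like from the application side (the index name, connection details, and driver are assumptions; at the time, a FULLTEXT index also meant converting the table to MyISAM, since InnoDB only gained fulltext support in MySQL 5.6):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='videos')
cur = conn.cursor()

# One-time setup: index the keyword column.
cur.execute("ALTER TABLE video ADD FULLTEXT INDEX ft_keyword (keyword)")

# The chain of LIKE '%keyword%' clauses becomes a single indexed search.
cur.execute(
    "SELECT count(*) AS total_videos FROM video "
    "WHERE type = 'public' AND MATCH (keyword) AGAINST (%s IN BOOLEAN MODE)",
    ('keyword1 keyword2',))
print(cur.fetchone()[0])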

While the following query isn’t optimizable, the results could be cached.

# Query_time: 0  Lock_time: 0  Rows_sent: 5  Rows_examined: 155
SELECT * from channel ORDER BY rand() LIMIT 5;

As a result of the ORDER BY rand(), MySQL will need to do a full table scan each and every time this query is called.  In this case, twice for each pageload.
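Caching those five rows even for a minute would cut the full table scans from twice per pageload to once per minute; a minimal sketch of the idea (names invented):

import time

_cache = {'expires': 0, 'rows': None}

def random_channels(fetch, ttl=60):
    """fetch() runs the ORDER BY rand() query; reuse its result for ttl seconds."""
    now = time.time()
    if _cache['rows'] is None or now > _cache['expires']:
        _cache['rows'] = fetch()
        _cache['expires'] = now + ttl
    return _cache['rows']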

Another particularly interesting query:

select tradeId from trades2 where tradeDomain = 'noref';

Taken on its own, this just looks like tradeDomain was omitted from the indexes. However, the table name gives you some indication of the mess. Here’s another query that ran without an index:

select tradeId from trades28 where tradeDomain = 'noref';

There are a total of 50 trades tables.  This corresponds to the number of ‘sites’ in the sites table.  It seems our developer couldn’t quite figure out how to index on two fields.
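A sketch of what those 50 tables could collapse into (column types and any names beyond the ones visible in the queries are assumptions): one trades table with a composite index, so the site becomes a column instead of part of the table name.

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='user', passwd='secret', db='trades')
cur = conn.cursor()

cur.execute("""
    CREATE TABLE trades (
        tradeId     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        siteId      INT UNSIGNED NOT NULL,
        tradeDomain VARCHAR(128) NOT NULL,
        INDEX idx_site_domain (siteId, tradeDomain)
    )""")

# The per-site lookup that was running unindexed above now uses the composite index.
cur.execute("SELECT tradeId FROM trades WHERE siteId = %s AND tradeDomain = %s",
            (28, 'noref'))
rows = cur.fetchall()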

In the last 4 hours, the unindexed queries from this one site have resulted in the following:

-rw-r----- 1 mysql adm 10869491 2009-03-09 10:53 mysql-slow.log

Roughly 54000 queries in 4 hours that had no index.  Compared to yesterday’s logs when I added a number of indices, it is a world of difference.

However, fixing the issues in the code is still more expensive than throwing hardware at the problem.  The problem you run into with using hardware as a crutch is that eventually you reach an impasse where the problem must be fixed, or, you face other bottlenecks.

Based on a few changes made by adding indexes, the load on those two machines has dropped considerably. Each of the vendors that wrote the systems I analyzed claims to handle far more traffic than either of the machines is currently handling. Those claims were written on their web sites roughly three years ago, and the machines they are currently running on are much newer and much more powerful than the ones in place when the software was installed.

One of the software vendors is out of business due to too many support issues. Another continues churning out bad code and getting people to pay $600 for it. The third developer disappeared and was replaced by a team of two programmers who appear to know what they are doing and are releasing version 5.0 (the current, latest version is 3.1) ‘soon’. I know it will be soon because their web site, last updated in July 2008, says they are in limited beta.

I guess it is time to order some hardware.
