MD5s

Using an MD5 digest (or md5 sum) is a neat way of building a predictable key for data. Obviously there is the issue of MD5 collisions (where two completely different pieces of source data produce the same MD5 digest), but unless you’re building medical or safety equipment, for general text manipulation it’s pretty negligible.

However, MD5s can be represented in several ways. Let’s discount the binary encoding of the 128 bits (16 bytes) of data as that’s rather cumbersome, and if you’re storing this in a database such as MySQL, there isn’t a 16-byte numeric data type; BIGINT is 8 bytes, so you’d have to use two BIGINTs and do lots of horrible stuff.

That brings us to the base encodings. Base 16, or hexadecimal, would require us to use a text data type to store the results – the base 16 encoding contains the numbers 0-9 and the letters A-F (or a-f – the case is irrelevant/insensitive in base 16). It would be 32 “characters” long. We can stuff that in a column with no trouble (char(32)).

We can also use a Base 64 encoding, using upper and lower case letters and a few symbols as well as the numerals 0-9. This comes to 22 characters (you’ll sometimes see == appended to a Base64 string to pad it to 24 characters). Using 22 chars as a key instead of 32 is 31.25% less data. That makes your indexes that much more compact, as well as the column data.
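As a rough sketch (the table and column names here are only illustrative), the two encodings could be stored like this – note that a base 64 column wants a binary/case-sensitive collation, since upper and lower case letters are distinct in base 64:

CREATE TABLE Md5_Hex (
  Digest CHAR(32) NOT NULL,          -- hex is case-insensitive, the default collation is fine
  PRIMARY KEY (Digest)
);

CREATE TABLE Md5_Base64 (
  Digest CHAR(22) BINARY NOT NULL,   -- base 64 is case-sensitive, so use a binary collation
  PRIMARY KEY (Digest)
);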

It may not be a perfect primary key, but it’s possibly reasonable. But then comes the question of converting between base 16 and base 64. Here’s one way:

#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5;
use MIME::Base64;

my $data = "foobarbasbifffoobarbasbiff";

# md5_base64 gives the 22-character Base64 digest (no trailing "==" padding)
my $md5_base64 = Digest::MD5::md5_base64($data);

# decode back to the raw 16 bytes, then unpack as hex to get the 32-character form
printf "%s in base64 as hex: %s\n",
    $md5_base64, unpack('H*', MIME::Base64::decode_base64($md5_base64));

MySQL UTF8 and Perl

It’s been quite annoying; DBI and DBD::mysql seem to default to Latin 1, and it appears that the client-side way of “upgrading” to UTF8 is to issue “SET NAMES utf8” as your first query when you connect to MySQL (in my case, 5.5.x). The alternative is to tell the server to run this query automatically each time a client connects, or alternatively, disable encoding negotiation and use UTF8 for everything. Here’s a few links I found useful:

And a quote from the second:

[mysqld]
default-character-set=utf8
default-collation=utf8_general_ci
character-set-server=utf8
collation-server=utf8_general_ci
init-connect='SET NAMES utf8'

[client]
default-character-set=utf8

As you’ll see in the first link above (Stack Overflow), adding parameters with spaces to an Amazon RDS parameter group is a little tricky from a Windows platform – you have no choice but to use the RDS CLI tools to do it.
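Back on the Perl client side, one way to handle it at connect time (a sketch – the DSN, user and password are placeholders) is to turn on DBD::mysql’s mysql_enable_utf8 option and issue SET NAMES explicitly:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# connection details are placeholders
my $dbh = DBI->connect(
    'DBI:mysql:database=mydb;host=localhost',
    'myuser', 'mypassword',
    { RaiseError => 1, mysql_enable_utf8 => 1 },
);

# belt and braces: make sure the connection charset really is utf8
$dbh->do('SET NAMES utf8');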

MySQL varchar not case sensitive

I managed to overlook an issue with creating a varchar column in an app I have been working on. I have basically got a normalised table, with a foreign key to a table of values. In this case, it’s a set of HTML document titles, keyed off an auto-increment column called Title_ID. What I want to do is look up a title, and get a Title_ID back.

Great; I can do this with a stored function, which I did, and it worked. But it was slow. So I decided that I’d normalise these en masse with one big INSERT statement into the normalised table (protected by a unique index constraint), and then store the resulting Title_ID.
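That bulk step looked something like this (a sketch – the staging table name is made up, but Titles and Title are as described above; the unique index on Title is what makes already-seen titles a no-op):

INSERT IGNORE INTO Titles (Title)
SELECT DISTINCT Title FROM Staging_Pages;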

There be dragons. One title came through as “[Q] help me”; it was duly inserted and given a Title_ID.
However, when a lower case “[q] help me” came through, it matched as a duplicate of the original and therefore was not inserted again. I then pulled the strings into a Perl hash, and of course, couldn’t find a key with “[q] help me”, only “[Q] help me”.
Turns out that the issue was my column definition. varchar(x) is not case sensitive. varchar(x) binary is. The unique index I had on here was doing its job and comparing values based upon the case-insensitive column – not its fault.
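You can see the collation at work directly – with the default case-insensitive collation the first comparison is true, while forcing a BINARY (byte-for-byte) comparison is not:

SELECT '[Q] help me' = '[q] help me';          -- 1: case-insensitive comparison
SELECT BINARY '[Q] help me' = '[q] help me';   -- 0: byte-for-byte comparison

The fix, then, was to make the column binary: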

ALTER TABLE Titles CHANGE COLUMN `Title` `Title` VARCHAR(600) BINARY NULL DEFAULT NULL ;

And now I see my column as “`Title` varchar(600) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL”.

MySQL Indexes and maximum lengths

MySQL has several index types; the default and therefore probably most common is the BTree. There’s a limit in MySQL (at least as of this writing, when the “current” GA is 5.5.9) of 767 bytes across all the columns being indexed. Of course, varchar columns can now be bigger than 255 chars, so this limit is probably more easily reached these days – bear in mind that with utf8, MySQL counts up to 3 bytes per character when sizing index entries, so a single utf8 varchar(255) column is already 765 bytes.

In my case, I had a table with one column for a URL’s path and another for a URL’s query string, both of which can be larger than the old 255 chars. I also have an index that covers these two plus a few other columns – normalised protocol, normalised domain, and TCP port.

In trying to move to longer columns (1K) I had to modify my index to restrict the number of characters taken from these varchar columns, to ensure I remained under the 767 limit – I couldn’t just change one of these columns from 255 to 1024.

I thought I’d try a simple change first – instead of increasing the column, just put the restriction on the current columns as they stand – i.e. limit the index to 255 chars (while the column still IS 255). Turns out that, in at least 5.5.9, since the specified prefix is the same as the current column size, it ignores this.

Let’s make that clearer: you need to specify a SMALLER length for the indexed columns in order for it to stick. Once that’s done, you can then alter the table to increase the column lengths.
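A sketch of the two-step dance (the table, column and index names are made up; the prefix lengths just need to keep the key under the 767-byte limit at up to 3 bytes per utf8 character):

-- Step 1: rebuild the index with explicit, smaller prefix lengths
ALTER TABLE Urls
  DROP INDEX idx_url,
  ADD INDEX idx_url (Protocol, Domain(50), Port, Path(100), Query_String(50));

-- Step 2: now the columns themselves can be widened
ALTER TABLE Urls
  MODIFY COLUMN Path VARCHAR(1024) NULL,
  MODIFY COLUMN Query_String VARCHAR(1024) NULL;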

MySQL CREATE TABLE LIKE doesn’t take triggers with it

MySQL has a nice feature where you can make a new table exactly like an old table (as far as table column structure and indexes go):

create table FOO like BAR;

However, as I just rediscovered, any triggers on the table aren’t taken across with it. D’oh. Which reminds me, mysqldump has a specific -R flag to backup/dump stored routines; worth having that on too (I did).
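A quick way to catch this (sticking with the FOO/BAR example above) is to compare the trigger lists before and after the copy:

SHOW TRIGGERS LIKE 'BAR';   -- the original table's triggers
SHOW TRIGGERS LIKE 'FOO';   -- empty: CREATE TABLE ... LIKE didn't copy them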

Rule for the day: check your triggers on tables before moving/renaming.