>>Build a table that has columns only for the indexed data, and store the rest of the data in a memo field in compressed form. This type of data can be compressed easily using Huffman encoding with a static frequency table. If the table is still too large, break the columns out into individual tables with a common primary key.
>
>Al,
>
>Do you think that this would be sufficient to get 6 Gig of data down to 650 Meg or less?
Assuming few indices, yes.
>If the rest of the data is in compressed format, from your experience, would decompressing it on the fly degrade the performance very much?
No.
>
>Ed
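
To make the static-table idea concrete, here is a minimal Python sketch. The frequency table, the sample memo text, and the resulting sizes are made up for illustration; a real static table would be built once by sampling your own data.

import heapq
from itertools import count

# Hypothetical static frequencies for characters expected in memo text.
# In practice you would derive these once from a sample of real records.
STATIC_FREQS = {' ': 18, 'e': 12, 't': 9, 'a': 8, 'o': 7, 'n': 7,
                'i': 6, 's': 6, 'r': 6, 'h': 5, 'l': 4, 'd': 4,
                'c': 3, 'u': 3, 'm': 2, '.': 1, ',': 1}

def build_codebook(freqs):
    """Build a {symbol: bitstring} Huffman codebook from static frequencies."""
    tiebreak = count()  # unique tiebreaker so the heap never compares groups
    heap = [(f, next(tiebreak), (sym,)) for sym, f in freqs.items()]
    heapq.heapify(heap)
    codes = {sym: '' for sym in freqs}
    while len(heap) > 1:
        # Merge the two lightest groups; symbols in the first group get a
        # leading 0 bit, symbols in the second get a leading 1 bit.
        f1, _, group1 = heapq.heappop(heap)
        f2, _, group2 = heapq.heappop(heap)
        for s in group1:
            codes[s] = '0' + codes[s]
        for s in group2:
            codes[s] = '1' + codes[s]
        heapq.heappush(heap, (f1 + f2, next(tiebreak), group1 + group2))
    return codes

def encode(text, codes):
    return ''.join(codes[ch] for ch in text)

def decode(bits, codes):
    lookup = {v: k for k, v in codes.items()}  # codes are prefix-free
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in lookup:
            out.append(lookup[cur])
            cur = ''
    return ''.join(out)

codes = build_codebook(STATIC_FREQS)
memo = 'the data cost less here.'          # stand-in for a memo field
bits = encode(memo, codes)
assert decode(bits, codes) == memo
print(f'{len(memo) * 8} bits raw -> {len(bits)} bits encoded')

Because the frequency table is static, the codebook is built once and shared by every row, so nothing extra needs to be stored per record.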
Read up on Huffman coding:
http://www.data-compression.com/lossless.html#huffothers
on Lempel-Ziv (LZ78):
http://www.data-compression.com/lempelziv.html
and on Lempel-Ziv Huffman (LZH). Lempel-Ziv Welch (LZW) would work, but the patent would require royalties:
http://burks.brighton.ac.uk/burks/foldoc/59/64.htm
Search for libraries that implement the algorithms, like gzip.
I recommend that you do some research on the web to find the best solution for your data.
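
If you go with an off-the-shelf library instead of hand-rolled Huffman, wiring up on-the-fly decompression takes only a few lines. Below is a rough sketch using Python's zlib module (DEFLATE, the LZ77-plus-Huffman scheme gzip uses); the memo text is invented, and real ratios depend entirely on your data.

import zlib

def pack_memo(text):
    """Compress the non-indexed remainder of a record for the memo column."""
    return zlib.compress(text.encode('utf-8'), 9)  # 9 = best compression

def unpack_memo(blob):
    """Expand a memo blob on the fly when the row is fetched."""
    return zlib.decompress(blob).decode('utf-8')

memo = 'some repetitive memo text, field after field of it. ' * 40
blob = pack_memo(memo)
assert unpack_memo(blob) == memo
print(f'{len(memo)} bytes -> {len(blob)} bytes '
      f'({len(blob) / len(memo):.0%} of original)')

Decompression with DEFLATE is much faster than compression, which is why expanding compressed memos on read usually costs little next to the disk I/O it saves.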