Distributing and using VERY large tables.
07/08/2001 16:31:16
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax (Miscellaneous)
Thread ID: 00539842
Message ID: 00540966
Views: 19
Al,

Thanks. The Huffman encoding algorithm is fairly straightforward. Like most things, though, I'm sure this won't be a one-hour project ;-).
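
A rough Python sketch of the Huffman-with-static-frequency-table idea, for illustration only: the sample text and names are invented, and a real implementation over gigabytes of data would write packed bits rather than a string of '0' and '1' characters.

import heapq
from collections import Counter

def build_codes(freq):
    # Build a Huffman code table {symbol: bit string} from a static frequency map.
    # Heap entries are (frequency, tie-breaker, node); a node is either a symbol
    # (leaf) or a (left, right) pair (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse both ways
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: assign the accumulated code
            codes[node] = prefix or "0"      # single-symbol edge case
    walk(heap[0][2], "")
    return codes

def encode(text, codes):
    return "".join(codes[ch] for ch in text)

def decode(bits, codes):
    # Huffman codes are prefix-free, so greedy matching decodes unambiguously.
    rev = {v: k for k, v in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in rev:
            out.append(rev[buf])
            buf = ""
    return "".join(out)

# The "static" part: the frequency table is built once from a representative
# sample of the data, so compressed rows never need to carry their own tables.
sample_text = "the quick brown fox jumps over the lazy dog"
codes = build_codes(Counter(sample_text))
bits = encode("the lazy dog", codes)
assert decode(bits, codes) == "the lazy dog"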

I just wanted to get your opinion before devoting the time to it. This looks like the way I'm going to have to go, since the only other option (Compaxion) won't sell their product in the United States. God knows why. Should I show them my photo and explain that I'm Russian? :-)

Thanks again.

Ed

>>>Build a table that only has columns for the indexed data, store the rest of the data in a memo field in compressed form. This type of data can be easily compressed using Huffman encoding with a static frequency table. If the table is still too large, break out the columns into individual tables with common primary keys.
>>
>>Al,
>>
>>Do you think that this would be sufficient to get 6 Gig of data down to 650 Meg or less?
>
>assuming few indices, yes
>
>If the rest of the data is in compressed format, from your experience, would uncompressing it on the fly degrade the performance very much?
>
>no
>
>>
>>Ed
>
>Read up on Huffman coding: http://www.data-compression.com/lossless.html#huff
>Others: Lempel-Ziv (LZ78) http://www.data-compression.com/lempelziv.html, Lempel-Ziv-Huffman (LZH)
>
>Lempel-Ziv Welch would work but the patent would require royalties, http://burks.brighton.ac.uk/burks/foldoc/59/64.htm
>
>search for libraries that implement the algorithms, like gzip.
>
>I recommend that you do some research on the web to find the best solution for your data.
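
A rough sketch of the layout Al describes above, using SQLite and zlib (the algorithm behind gzip, which Al also mentions) in place of VFP tables, memo fields, and a hand-rolled Huffman coder. The table and column names are invented for illustration; the point is only that the on-disk table carries just the searchable columns plus one compressed blob, which is inflated on demand when a row is read.

import sqlite3, zlib, json

# Searchable columns stay as real columns; everything else goes into one
# compressed blob (the analogue of a compressed memo field).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (part_no TEXT PRIMARY KEY, descr TEXT, payload BLOB)")

def store(part_no, descr, rest):
    # rest is a dict holding all the non-indexed columns for this row
    blob = zlib.compress(json.dumps(rest).encode("utf-8"))
    con.execute("INSERT INTO parts VALUES (?, ?, ?)", (part_no, descr, blob))

def fetch(part_no):
    descr, blob = con.execute(
        "SELECT descr, payload FROM parts WHERE part_no = ?",
        (part_no,)).fetchone()
    # Decompression happens on the fly; the per-row cost is small next to the disk read.
    return descr, json.loads(zlib.decompress(blob).decode("utf-8"))

store("A100", "widget", {"weight": 1.5, "vendor": "Acme", "notes": "free text..."})
print(fetch("A100"))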