I appreciate it and I'll be working on that today...
>Just another thought on this...
>
>You said you could have 10,000 commands or more. If you double that and assume each command is at the maximum size of 999 characters, that yields a data file just under 20MB (20,000 × 999 bytes ≈ 19.98MB). While respectable, that is not a huge size for a text data file...and this is a maximum-case scenario.
>
>What you might want to try is taking a version of your processing code and commenting out the CASE statement that actually processes the commands. Then choose a sample file and compare the time it takes to run with just reading in the data versus doing the full processing. That should tell you where your overhead is: reading the data, processing the data, or some of both. (Be sure to do each test with a freshly copied data file, so you know you aren't reading from a cache.) Then you know where to target your efficiency efforts.
>
>Anyway, it's just a suggestion.
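The experiment described above can be sketched roughly like this (a Python sketch, since the original language isn't shown; `handle_command` and `commands.dat` are hypothetical stand-ins for the real CASE dispatch and data file, and the sketch generates a throwaway sample file so it runs as-is):

```python
import time

def handle_command(command):
    # Stand-in for the real CASE-statement dispatch (hypothetical);
    # comment out the call below to get the "read-only" timing.
    _ = command.upper()

def time_pass(path, process):
    """Time one pass over the file; when process is False, only read lines."""
    start = time.perf_counter()
    with open(path) as f:
        for line in f:
            if process:
                handle_command(line.rstrip("\n"))
    return time.perf_counter() - start

# Build a throwaway sample file so the sketch is self-contained.
with open("commands.dat", "w") as f:
    for i in range(20000):
        f.write(f"COMMAND {i} " + "x" * 100 + "\n")

read_only = time_pass("commands.dat", process=False)
full_run = time_pass("commands.dat", process=True)
print(f"read-only: {read_only:.4f}s, full run: {full_run:.4f}s")
```

Comparing the two numbers shows roughly how the time splits between I/O and command processing; as noted above, use a fresh copy of the data file for each real test so OS file caching doesn't skew the read timing.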
.·*´¨)
.·`TCH
(..·*
010000110101001101101000011000010111001001110000010011110111001001000010011101010111001101110100
"When the debate is lost, slander becomes the tool of the loser." - Socrates
Vita contingit, Vive cum eo. (Life Happens, Live With it.)
"Life is not measured by the number of breaths we take, but by the moments that take our breath away." -- author unknown
"De omnibus dubitandum"