>I, too, ran Turbo Pascal on a Z80 and CP/M. As I recall it, the big wins had little to do with it being a single-pass compiler (if indeed it was, which I'm not sure of), but more that:
>
>- primarily, it was SMALL. Everything fit on a 240K 8" floppy and ran comfortably in 64K RAM without too much floppy I/O. In those days, small == fast.
>
>- it compiled and linked in a single operation
I meant single-pass in a logical sense, i.e. the code had to contain everything in the particular order the compiler required: you couldn't use anything that wasn't defined earlier; couldn't mention a variable before it was declared and typed; couldn't call a function unless it already appeared in the code, or at least had a prototype declared earlier (version 4.x, IIRC). IOW, it wasn't reading the source twice to resolve foreshadowed unknowns from the first pass.
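To make that ordering rule concrete (my sketch, not anything from the original discussion): in a one-pass compiler every name must be declared before it is used, and Pascal's `forward` directive is the escape hatch for mutual recursion. Something like this, in Turbo-Pascal-style Pascal:

```pascal
program ForwardDemo;

{ Prototype only: tells the one-pass compiler the name and signature
  of IsOdd, so calls to it can be compiled before the body is seen. }
procedure IsOdd(N: Integer); forward;

procedure IsEven(N: Integer);
begin
  if N = 0 then
    WriteLn('even')
  else
    IsOdd(N - 1);   { legal only because of the forward declaration }
end;

procedure IsOdd;    { parameter list was already given above }
begin
  if N = 0 then
    WriteLn('odd')
  else
    IsEven(N - 1);
end;

begin
  IsEven(10);
end.
```

Delete the `forward` line and the call inside IsEven fails to compile; the compiler genuinely never looks ahead.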
But right, that accounted for only part of the speed. The other part was that it was so optimized, and so devoid of legacy crap (I figure it didn't support punched cards :), that it was a really revolutionary piece of software.
>Anyways, it was a horror show to compile anything with the MS compiler. It was on multiple floppies and required a lot of floppy swapping. And that was just for the compile step; then there were multiple swaps for the link step too. He was quite used to doing this and thought it was completely normal, which it was, prior to Turbo Pascal.
>
>I bought Turbo Pascal mail-order, sight-unseen from Borland. My buddy dismissed it as a toy. That is, until the first time he saw me compile one of my programs with it. One of those shocked-in-a-good-way experiences that were not uncommon in those early days, and which are few and far between today.
True. I actually had the same experience, even though I came to the game later, when the machine already had a 10M hard disk (WOW) with a whopping 16 directories, and Microsoft Cobol on it. When its bug finally surfaced, where the indexes would simply fail at 32K records, I wrote a LLFF solution in Turbo Pascal 3: it read M$Cobol's table as a raw fixed-length-records file (there was some header, I think, and one or two header bytes per record, but the rest was plain ASCII) and then sorted it. I divided the file into chunks, used shell sort to sort the records within each chunk, then opened all the sorted chunks at the same time and merged them, picking one record at a time from whichever chunk held the smallest. Even though it wrote each record to disk twice, it was still faster than just recreating the index in M$ Cobol.
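For the curious, the shape of it, as a rough reconstruction rather than the original program (record length, chunk size, and file names are invented here, and records are handled as text lines to keep the sketch short):

```pascal
program ChunkSortMerge;
const
  ChunkSize = 1024;   { records held in RAM at once - assumed value }
  MaxChunks = 32;     { no bounds checking below; toy code }
type
  Rec = string[80];   { one fixed-length record, treated as text }
var
  Buf: array[1..ChunkSize] of Rec;
  Chunks: array[1..MaxChunks] of Text;
  Head: array[1..MaxChunks] of Rec;      { current record per chunk }
  Live: array[1..MaxChunks] of Boolean;  { chunk still has records? }
  InF, OutF: Text;
  NChunks, N, I, Best: Integer;
  Name: string[12];

{ Plain shell sort over the first N records of Buf. }
procedure ShellSort(N: Integer);
var
  Gap, I, J: Integer;
  T: Rec;
begin
  Gap := N div 2;
  while Gap > 0 do
  begin
    for I := Gap + 1 to N do
    begin
      T := Buf[I];
      J := I;
      while (J > Gap) and (Buf[J - Gap] > T) do
      begin
        Buf[J] := Buf[J - Gap];
        J := J - Gap;
      end;
      Buf[J] := T;
    end;
    Gap := Gap div 2;
  end;
end;

begin
  { Pass 1: read a chunk, sort it in RAM, spill it to its own file. }
  Assign(InF, 'TABLE.DAT');
  Reset(InF);
  NChunks := 0;
  while not Eof(InF) do
  begin
    N := 0;
    while (N < ChunkSize) and not Eof(InF) do
    begin
      N := N + 1;
      ReadLn(InF, Buf[N]);
    end;
    ShellSort(N);
    NChunks := NChunks + 1;
    Str(NChunks, Name);
    Assign(Chunks[NChunks], 'CHUNK' + Name + '.TMP');
    Rewrite(Chunks[NChunks]);
    for I := 1 to N do
      WriteLn(Chunks[NChunks], Buf[I]);
    Close(Chunks[NChunks]);
  end;
  Close(InF);
  { Pass 2: open every sorted chunk and merge by always taking
    the smallest of the current head records. }
  for I := 1 to NChunks do
  begin
    Str(I, Name);
    Assign(Chunks[I], 'CHUNK' + Name + '.TMP');
    Reset(Chunks[I]);
    Live[I] := not Eof(Chunks[I]);
    if Live[I] then ReadLn(Chunks[I], Head[I]);
  end;
  Assign(OutF, 'SORTED.DAT');
  Rewrite(OutF);
  repeat
    Best := 0;
    for I := 1 to NChunks do
      if Live[I] and ((Best = 0) or (Head[I] < Head[Best])) then
        Best := I;
    if Best > 0 then
    begin
      WriteLn(OutF, Head[Best]);
      if Eof(Chunks[Best]) then
        Live[Best] := False
      else
        ReadLn(Chunks[Best], Head[Best]);
    end;
  until Best = 0;
  Close(OutF);
  for I := 1 to NChunks do
    Close(Chunks[I]);
end.
```

Each record indeed hits the disk twice (once into its chunk file, once into the merged output), which matches the "wrote each record to disk twice" above.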
>It wasn't that the original code was bad - for that particular application, forcing the code into the single-pass paradigm meant it had to be complex. I couldn't see how to make it simpler and easier to maintain; even if I'd rewritten it all as my own attempt at a single-pass parser it probably would have ended up just as ugly. It's just that for this particular application, a 2-pass parser was the right choice - the smaller code size, increased speed and enhanced maintainability were just side benefits, although arguably the result/proof of the 2-pass architecture being the right choice.
In retrospect it must seem that the single-pass code was really pushing the envelope, doing the best it could under the given set of assumptions, and of course (now we can say "of course", with hindsight wisdom :) it broke when it was pushed further. But it's nice when you can say that you had a heavy piece of code, refactored it, and made it do even more. The wisdom of knowing when to refactor and when not to is not easy to come by.
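To make the one-pass vs. two-pass contrast concrete (a toy illustration of mine, not anything from the code being discussed): pass 1 does nothing but record where each label is defined, so by the time pass 2 resolves references, even references to labels defined later just work - the "foreshadowed unknowns" from above.

```pascal
program TwoPass;
const
  MaxLabels = 100;   { no bounds checking; toy code }
type
  Line = string[40];
var
  Names: array[1..MaxLabels] of Line;
  Addrs: array[1..MaxLabels] of Integer;
  NLabels: Integer;

function Lookup(S: Line): Integer;
var
  I: Integer;
begin
  Lookup := 0;
  for I := 1 to NLabels do
    if Names[I] = S then Lookup := I;
end;

{ Pass 1: scan the whole file once, recording label definitions.
  Input format (invented): a line is either "NAME:" or "JMP NAME". }
procedure PassOne;
var
  F: Text;
  S: Line;
  Addr: Integer;
begin
  Assign(F, 'PROG.ASM');
  Reset(F);
  Addr := 0;
  while not Eof(F) do
  begin
    ReadLn(F, S);
    if (Length(S) > 0) and (S[Length(S)] = ':') then
    begin
      NLabels := NLabels + 1;
      Names[NLabels] := Copy(S, 1, Length(S) - 1);
      Addrs[NLabels] := Addr;
    end
    else
      Addr := Addr + 1;   { only instructions occupy an address }
  end;
  Close(F);
end;

{ Pass 2: every label is already known, so forward references
  resolve exactly like backward ones. }
procedure PassTwo;
var
  F: Text;
  S: Line;
  I: Integer;
begin
  Assign(F, 'PROG.ASM');
  Reset(F);
  while not Eof(F) do
  begin
    ReadLn(F, S);
    if Copy(S, 1, 4) = 'JMP ' then
    begin
      I := Lookup(Copy(S, 5, Length(S) - 4));
      if I > 0 then
        WriteLn('JMP resolves to address ', Addrs[I])
      else
        WriteLn('undefined label in: ', S);
    end;
  end;
  Close(F);
end;

begin
  NLabels := 0;
  PassOne;
  PassTwo;
end.
```

A one-pass design has to either forbid the forward jump outright or patch it up with backpatching fixups; reading the source twice makes the problem disappear, at the cost of the extra I/O.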