VFP not mentioned in MSDN subscription ad
General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00605216
Message ID: 00614570
Views: 47
Jim,
First let me ask again: did you come across, in your reading of the book, the name of the access mechanism employed internally in SQL Server? I didn't, but could easily have missed it. My personal guess is that if it was really THAT different they would be proud of it and give it a name.

No, they didn't give it a name other than to say that it was "set-based." I fail, however, to see why a "name" is any part of this issue. Suffice it to say that the structures are completely different.

VFP's records are only "logically" stored in a contiguous block. Any given table can be fragmented to hell.

And SQL Server's tables are, by design, logically stored, but not in a contiguous block. The problem with your argument here is that in the instance where an ISAM table exceeds the cluster size, it may be contiguously stored. That depends purely on the OS.

IOW, in ISAM, there's the table header, followed by all the data. Whether or not the actual clusters are contiguous doesn't matter. In SQL Server, each 8K page has its own header, which contains, among other things, the page's file and page numbers and pointers to the previous and next pages.

To equate these two storage methods, even from an abstract POV, is simply wrong.
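
To make the ISAM side of that concrete, here's a minimal sketch (customer.dbf is just a placeholder name, not a real table of mine) that reads a DBF's own header and finds a record's position with nothing but arithmetic. Nothing comparable exists for a SQL Server data file, because the rows live on a chain of self-describing pages.

* A minimal sketch, not production code: it assumes a table named customer.dbf
* in the current directory. DBF header layout: bytes 8-9 = header length,
* bytes 10-11 = record length (both little-endian).
lcFile   = FILETOSTR("customer.dbf")
lnHdrLen = ASC(SUBSTR(lcFile,  9, 1)) + ASC(SUBSTR(lcFile, 10, 1)) * 256
lnRecLen = ASC(SUBSTR(lcFile, 11, 1)) + ASC(SUBSTR(lcFile, 12, 1)) * 256

* Locating any record is pure arithmetic: header first, then fixed-length rows.
lnRecNo  = 3
lnOffset = lnHdrLen + (lnRecNo - 1) * lnRecLen
? "Record", lnRecNo, "starts at byte offset", lnOffset

* There is no equivalent one-liner for a SQL Server row: you would have to walk
* the page chain, because each 8K page carries its own header and prev/next links.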

My take is that SQL Server invested the extra development to have 'controlled fragmentation' (my term) so that they could have more strict control over performance overall.

They also optimized for NT. It was their goal to have the best CPT of any backend server. When SQL Server is used correctly, they achieve that goal.

It is probable, for instance, that FP's creators did something similar with .CDX files - internal structures with offsets that THEIR code knows how to interpret.

Sure they did. They basically used the same B-tree structures in the CDX to help retrieve records quickly and efficiently.
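
For what it's worth, that's exactly what a tag buys you in everyday code. A rough sketch, with made-up table and field names:

* Assumes a table customer.dbf with a character field cust_id (invented names).
USE customer EXCLUSIVE
INDEX ON cust_id TAG cust_id       && builds a tag in customer.cdx
SET ORDER TO TAG cust_id
SEEK "AC1042"                      && walks the tag's tree: a handful of page reads,
IF FOUND()                         && no matter how large the table grows
    ? "Found", cust_id, "at record", RECNO()
ENDIF
USE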

>The difference, and this is what I don't think you or Walter get, is that the backend mechanics don't matter. What matters is how the result set is delivered to and from across the line.<

Walter will address his issue(s).

As for me, this is the crux of the matter right here! My statement never mentioned frontend/backend or across the wire.
My statement simply said that SQL Server's STORAGE and RETRIEVAL mechanism (implying its INTERNAL workings, but maybe that was not clear) sure appeared to me to be very ISAM-like, to the point that I would personally call it "ISAM", especially in the absence of any other name by its inventors.


So, IOW, "Since they didn't give it a name, I'm going to call it ISAM." Is that what you're saying? They told you, in the book, the internal structure of the table. As I said earlier, it's obviously very different from an ISAM table. Yet you call it ISAM. They told you, in the book, about the pool of threads that is used to retrieve and update the tables. Yet you want to call a single-threaded application the same as a free-threaded one. Do the words "pre-emptive multi-tasking operating system" have meaning to you? Do the words "thread priority" mean anything?

And the reason I mentioned it was that I constantly read references to "ISAM", especially in SQL Server literature, that are derogatory in the sense that it is 'old' and 'too traditional for modern systems', etc.

I was making the point to Jess that blaming "ISAM" for problems may not be all that accurate, because SQL Server looks suspiciously like "ISAM" too (under the covers).


In reality, relational database management systems pre-date the PC. So how can something that didn't exist somehow be older or too traditional for modern systems? The answer is simply that SQL Server was a new approach. It was optimized for a particular operating system, and so on.

As I pointed out earlier, SQL Server tables are entirely unlike ISAM tables. The problem isn't ISAM. The problem is approaching a backend system as if it were. Let me give you a personal example from my own work.

I was asked to create an in-process server to be called from a VBScript file to do the following:

1. Retrieve style numbers from 20+ tables residing on five different servers across a WAN.

2. Get only the unique style numbers (the total records retrieved ended up being 15,000+, with around 1,500 unique ones).

3. Delete all the records in a table residing in an SQL Server database.

4. Add all of the unique records.

After I had written the server, I decided to see which actions took the longest. The entire process took less than ten seconds; the bulk of that time was spent deleting and updating the SQL Server table.

My initial reaction was that this must be caused by the overhead involved in dealing with the ODBC driver. After some deeper analysis, however, I found that the basic problem wasn't with the ODBC driver, but rather that I was treating the SQL Server table as if it were a FoxPro table. Since the total time involved wasn't significant, though, I decided against trying to fix something that in reality wasn't broken.
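
To illustrate what I mean, here is a hedged sketch, not the actual code (lnConn is an open SQLCONNECT() handle, and StyleList, uniqueStyles and style_no are invented names), of what treating it like a FoxPro table looks like next to the set-based alternative:

* ISAM-style habit: touch the remote table one row at a time, as if it were a DBF.
SELECT uniqueStyles                        && local cursor of the unique style numbers
SCAN
    lcStyle = uniqueStyles.style_no
    SQLEXEC(lnConn, "DELETE FROM StyleList WHERE style_no = ?lcStyle")
ENDSCAN                                    && one round trip per row

* Set-based habit: one statement, and the server does the whole delete itself.
SQLEXEC(lnConn, "DELETE FROM StyleList")

* The inserts still travel once per unique style, but a manual transaction lets
* SQL Server treat the whole batch as a single unit of work.
SQLSETPROP(lnConn, "Transactions", 2)      && manual transactions
SELECT uniqueStyles
SCAN
    lcStyle = uniqueStyles.style_no
    SQLEXEC(lnConn, "INSERT INTO StyleList (style_no) VALUES (?lcStyle)")
ENDSCAN
SQLCOMMIT(lnConn)
SQLSETPROP(lnConn, "Transactions", 1)      && back to automatic

The point isn't the exact syntax; it's which side of the wire is asked to do the set work.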

Kinda like someone mocking your choice of a product (say toothpaste), only to find out that they too use the same product.

My take on the SQL Server book wasn't that they were mocking the product, but rather that they were saying that approaching SQL Server in the same manner as an ISAM table is flat-out wrong.

The problem isn't ISAM itself, but rather the approach taken in dealing with it and SQL Server. You can't treat them the same. The argument that "I don't want to learn... <insert different methodology or language>" just doesn't wash with me.

In this forum, every once in a while, a thread will pop up complaining about performance problems with SQL Server. Assuming that the right hardware has been invested in, the problem isn't with VFP or SQL Server. The problem is that the programmer employed the wrong methodology. IOW, they tried to treat the SQL Server table in the same manner that they would've if it had been a FoxPro table. You may want to. You may want to "dance with the one that brought you", but in order to resolve the problem a change in approach is required. You cannot expect either the VFP or SQL Server teams to solve it for you. This is why the authors of the SQL Server book recommend doing a "deep port".
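
A typical example of that wrong methodology, again as a hedged sketch with invented names (orders, cust_id, lnConn):

* ISAM-style habit: drag the whole remote table across the wire, then filter locally.
SQLEXEC(lnConn, "SELECT * FROM orders", "crsOrders")
SELECT crsOrders
LOCATE FOR cust_id = "AC1042"      && the network already paid for every row

* Set-based habit: let the server filter and send back only the rows you need.
lcCust = "AC1042"
SQLEXEC(lnConn, "SELECT * FROM orders WHERE cust_id = ?lcCust", "crsOrders")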

My purely gut reaction to all of this is that it all comes down to design. SQL Server is needed in those cases where contention is high, security is an overriding concern, and/or the amount of data exceeds 2 GB. If these are not issues, then VFP and its ISAM tables can consistently out-perform it, provided that the design of the application allows the internal engine to do its work as efficiently as possible.

One argument that's often brought up is accessing VFP tables: since they may not all be in one place, it may be difficult for other applications to access them. Within my own systems, however, this is never a problem, regardless of where the tables are physically located. This is simply because the applications have been designed properly, with meta-data tables pointing to the location of each system and its tables. As a result, systems seamlessly interact with data from other systems both quickly and efficiently.
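
Roughly, the idea looks like this (a sketch only; sysmeta, cSystem, cDataPath, the paths, and the styles table are all invented names):

* sysmeta.dbf is a small table every application can find along SET PATH,
* mapping each system to the location of its data:
*    cSystem        cDataPath
*    "AR"           "\\server1\apps\ar\data\"
*    "INVENTORY"    "\\server3\apps\inv\data\"

* Any application can then open another system's table without a hard-coded path:
USE (GetDataPath("INVENTORY") + "styles.dbf") SHARED ALIAS styles

FUNCTION GetDataPath
    LPARAMETERS tcSystem
    LOCAL lcPath
    lcPath = ""
    SELECT cDataPath FROM sysmeta ;
        WHERE UPPER(ALLTRIM(cSystem)) == UPPER(ALLTRIM(tcSystem)) ;
        INTO ARRAY laPath
    IF _TALLY > 0
        lcPath = ALLTRIM(laPath[1])
    ENDIF
    RETURN lcPath
ENDFUNC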

In fact, this has been my biggest concern in making the port to SQL Server. So many systems interact with data from so many others that it is almost a requirement that they all be brought up at one time. To do so, however, would be a nightmare from a support standpoint. As a result, I've spent a good deal of time working on designs that will allow the systems to be brought up one at a time.

>Jim, my problem with this whole issue is very simple. I keep offering up documentation, outside resources, and even registry entries to prove my point. Have you or Walter offered up one such thing to disprove what I've said? The answer is an emphatic, "No!" I'll let the readers of this post decide who knows what here.<

Yep, you are offering lots up. Just not about *my* issue.


I'm still trying to figure out what your issue is. From what you've posted, it seems to be a problem with the acronym "ISAM". You seem to feel that negative connotations are being applied to it. If so, the only negative is that employing the same methodology against an SQL Server table as you would against an ISAM table is the wrong approach! In and of themselves, as I've said before, a set-based and an ISAM-based system can interact with each other, employing the strengths of each to the user's best advantage.
George

Ubi caritas et amor, deus ibi est