Remote Views vs. SQL Pass-through
Message
Date: 11/04/2000 11:44:27

General information
Forum: Visual FoxPro
Category: Client/server, Miscellaneous
Thread ID: 00353636
Message ID: 00358459
Views: 16
>
>1) Much earlier in this thread you indicated that views are often suitable for VFP data, even though not for client/server. Actually, there are some real problems even with local views. One thing I have always found a nuisance is that if you query against a buffered view which has not been written back to the underlying table, your query does not see any of the changes. This is not a bug per se (there are all sorts of good reasons for this behavior), but it is still a PITA if you are trying to present dynamic summary tables during data entry. There are all sorts of workarounds, but I finally gave up on local views, and just do SQL queries to a cursor. You can query against cursors all day long, and it is not that big a problem to roll your own optimistic updating scheme. In short, I'd say views are not even that suitable for use against local data!
>
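A minimal sketch of the cursor approach described above (table, field, and cursor names are illustrative; INTO CURSOR ... READWRITE needs VFP 7 or later, while in VFP 6 the usual workaround is to reopen the DBF() of a NOFILTER cursor AGAIN under a new alias):

    * Pull the working set into a true, writable cursor instead of
    * a buffered local view (names are illustrative).
    SELECT custid, status, amount ;
        FROM customers ;
        WHERE region = "NW" ;
        INTO CURSOR curWork READWRITE

    * Edits made during data entry land in the cursor at once...
    UPDATE curWork SET status = "A" WHERE custid = 1001

    * ...so a dynamic summary query sees them immediately, unlike a
    * query against the base table of an unwritten, buffered view.
    SELECT status, COUNT(*) AS nCount, SUM(amount) AS nTotal ;
        FROM curWork ;
        GROUP BY status ;
        INTO CURSOR curSummary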

What was the context of my quote??? Regardless of the data source, I DO NOT advocate the use of remote views. If, however, you are using VFP data, local views come in handy.


>
>2) There are cases in client/server where we do use 5,000 (or even a great deal more) records in a local cursor. Maybe with your client/server experience you could suggest an approach that would use much smaller cursors. Basically, we are doing data cleaning on other people's data, which we have already moved to a SQL table on our server.
>
>This involves:
>1) Downloading a record, and downloading all possible duplicates. We use a very broad definition of "possible duplicate" and thus may end up with a large "possible duplicate" cursor.
>2) Then we apply a large number of tests against the record and the possible duplicates. This gives us one of three results:
>A) The algorithm determines, without further human intervention, that the record definitely duplicates existing data.
>B) The algorithm determines, without further human intervention, that the record definitely is not a duplicate.
>C) The algorithm has found (within the possible duplicate set) some close matches which require a human decision to determine whether the record is a duplicate or not. In this case, a master-detail form is presented, with the master being the record being checked and the child grid being the close matches.
>
>We have tried doing the "narrowing" step at the server end -- the one that reduces the large set of possible matches to a certain match, no match, or a small set of close possible matches. Because of the large number of case statements, the parsing required, and all the picky little stuff that has to be done, we have found this places a huge load on the server and is just better done at the client end. But maybe we are overlooking something.
>
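As a hedged sketch, the three-way test quoted above might reduce to something like this in VFP (the scoring routine, the thresholds, and the form name are purely illustrative assumptions):

    * lnScore is the best match score found among the possible
    * duplicates; BestMatchScore() is a hypothetical routine and
    * the thresholds are made-up examples.
    lnScore = BestMatchScore()
    DO CASE
        CASE lnScore >= 95
            * A) Definite duplicate: no human intervention needed.
            lcResult = "DUP"
        CASE lnScore <= 40
            * B) Definitely not a duplicate.
            lcResult = "NEW"
        OTHERWISE
            * C) Close matches: present the master-detail review
            * form so a human can decide.
            lcResult = "REVIEW"
            DO FORM frmDupReview   && hypothetical review form
    ENDCASE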

If you are doing data munging/cleansing, my suggestion is to handle all of these activities on the server. De-duping, address standardization, demographic overlays, etc. -- none of that requires interactive involvement on the part of the user. This used to be the business I was in... <bg>
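For the server-side route, a minimal SQL pass-through sketch (the DSN, the credentials, and the stored procedure sp_FindDuplicates are assumptions for illustration, not anything from this thread):

    lnRecID  = 12345   && the record being checked (illustrative)
    lnHandle = SQLCONNECT("MyServerDSN", "user", "pwd")
    IF lnHandle > 0
        * Let the server do the narrowing and return only the small
        * set of close matches that needs a human decision.
        lnResult = SQLEXEC(lnHandle, ;
            "EXEC sp_FindDuplicates ?lnRecID", "curCloseMatches")
        IF lnResult > 0
            * Bind curCloseMatches to the child grid of the
            * master-detail review form.
            SELECT curCloseMatches
        ENDIF
        SQLDISCONNECT(lnHandle)
    ENDIF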