Wwipstuff, XML and binary data
Message
From:
16/07/2000 21:48:51

General information
Forum: Visual FoxPro
Category: Internet Applications, Miscellaneous
Thread ID: 00299404
Message ID: 00393077
Views: 24
>I thought I'd do something like this:
>
>I thought I'd use a view at the mid-tier to get the data from Oracle and send it to the desktop app. When the desktop app sends the dbf back, I create the same Oracle remote view again, and overwrite the view data with the .dbf data. Then I do a tableupdate() from the modified remote view back to Oracle.

I suppose this can work, depending on how you go about it. You might run into trouble with the way VFP distinguishes new records from changed records in a view: VFP assigns negative record numbers to records appended to a view, and AFAIK there is no way to change that. So you may need some fancy footwork to avoid duplicating records in the back-end tables.
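
One way to spot those appended rows before you push anything back to Oracle is to test RECNO() against zero. A minimal sketch, assuming a table-buffered remote view cursor is open (the name v_data is hypothetical):

* Appended rows in a buffered view carry negative record numbers.
SELECT v_data
SCAN
    IF RECNO() < 0
        * This row was appended since the last REQUERY()/TABLEUPDATE(),
        * so it needs an INSERT on the back end, not an UPDATE.
        ? "Appended row, buffer position:", RECNO()
    ENDIF
ENDSCAN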

> I thought this would be safe since the remote view will be pulling data from several tables at once and I wouldn't have to be concerned with the Oracle structure, etc. I was going to keep the DBC generation stuff in the init of the midtier object.

Have you speed-tested the creation of your views? You might be very disappointed in the performance of creating views on the fly. It works well enough when a GUI app initializes, because you only do it once in the lifetime of the app, but in a distributed app you will most likely be creating and destroying your business objects fairly often.
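
SECONDS() makes a quick timing harness if you want hard numbers. A rough sketch, where the DBC name, connection string, and view definition are all placeholders:

LOCAL lnStart
lnStart = SECONDS()
CREATE DATABASE tempmid    && throwaway mid-tier DBC
CREATE CONNECTION cnOracle CONNSTRING "Driver={Microsoft ODBC for Oracle};..."
CREATE SQL VIEW v_customer REMOTE CONNECTION cnOracle AS SELECT * FROM customer
? "DBC + view creation:", SECONDS() - lnStart, "seconds"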

Is there a reason you don't want to let a DBC live on the server? And is there a reason you are set on views as opposed to SQL pass-through (SPT) or stored procedures?
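
For comparison, the same fetch through SQL pass-through skips the DBC and the view entirely. A minimal sketch (the connection string and table name are placeholders):

LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT("Driver={Microsoft ODBC for Oracle};...")
IF lnHandle > 0
    * Pull the result straight into a cursor; no view or DBC required.
    SQLEXEC(lnHandle, "SELECT * FROM customer", "crsCustomer")
    SQLDISCONNECT(lnHandle)
ENDIF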


>Something else. When the midtier object makes a .dbf, I was going to compress it, encrypt it, then filetostr it to get it into a memvar that the desktop app would read. Then the desktop app would be doing the same to move data back. Is that the fastest way to move the data, or do I have too many steps or something?

This is going to depend on the average size of the datasets involved. For small datasets, the compression/decompression might be more overhead than it's worth. For large datasets, it might very well be worth it. You might even work out a system where you measure the size of the data before you send it, and decide to compress it only if it's over a certain size.
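
A sketch of that size check; ZipString() below is a stand-in for whatever compression routine you end up using (wwIPStuff or otherwise), not a VFP built-in, and the threshold is just a starting point:

#DEFINE COMPRESS_THRESHOLD 65536    && bytes; tune this by measurement
LOCAL lcData, llCompressed
lcData = FILETOSTR("payload.dbf")
llCompressed = (LEN(lcData) > COMPRESS_THRESHOLD)
IF llCompressed
    lcData = ZipString(lcData)    && hypothetical compression helper
ENDIF
* Send llCompressed along with lcData so the desktop app knows
* whether to decompress before STRTOFILE().
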
Erik Moore
Clientelligence