Timing Issue When Copying Files to Another Directory
Message
From
23/09/2004 12:19:59
Jay Johengen
Altamahaw-Ossipee, North Carolina, United States
 
 
To
22/09/2004 19:30:42
Dragan Nedeljkovich (Online)
Now officially retired
Zrenjanin, Serbia
General information
Forum:
Visual FoxPro
Category:
Databases, Tables, Views, Indexing and SQL syntax
Miscellaneous
Thread ID:
00945135
Message ID:
00945419
Views:
23
Hey Dragan,

Currently we simply use Windows Explorer to copy the files. I'm not sure about the FileToStr solution, as we have 300 tables with associated memo and index files, and some of the larger files are about 1GB in size; it seems like memory would blow up. Also, I'm not sure I understand how you can keep related DBF, CDX and FPT files in sync unless you have an exclusive lock on the table, or scan through all the records and RLock() them as you go. I wonder what 24/7 businesses who use native tables do to keep their development and production data straight.
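
For the exclusive-lock idea, this is roughly what I picture; the table name and destination path are placeholders, and of course it only succeeds when nobody else has the table open:

    * Hypothetical sketch: copy one table only while holding an exclusive
    * lock, so the DBF, CDX and FPT can't change mid-copy. Table name and
    * destination path are made up.
    lcTable = "customer"
    TRY
        USE (lcTable) EXCLUSIVE IN 0 ALIAS srccopy
        SELECT srccopy
        COPY TO ("D:\devdata\" + lcTable) WITH CDX  && writes DBF/FPT, rebuilds CDX
        USE IN srccopy
    CATCH TO loErr
        * Someone has the table open; skip it and retry later.
        ? "Skipped " + lcTable + ": " + loErr.Message
    ENDTRY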

>>We may run 24/7 in the near future, and there is one issue that I know we need to resolve. Currently, to refresh a Development or Testing environment, we simply copy the data files from Production. We have occasionally had problems with this because, during the copy process, a CDX or FPT may already have been copied, then a different large file gets copied, and in the meantime the DBF changes due to editing. The CDX, FPT and DBF are then out of sync. For now we can simply do it at night or over the weekend when the system is likely not active, but how do we do this when it is?
>
>It may depend on what you use for copying. If it's a scripting FileSystemObject or the VFP COPY FILE command, you may just be out of luck with this; sometimes it won't even let me copy because the file is in use.
>
>However... I've noticed that filetostr() reads anything, regardless of whether the file is open elsewhere, and is extremely fast. So you may try reading all three of the DBF, CDX and FPT into three strings, one after another, and then writing them out. Depending on the size of your files, even that may not be fast enough, and replication may be a better way of doing this: select * from prodfile p1 into cursor tmp where p1.pk not in (select pk from devfile), and then select devfile and append from dbf("tmp"). That's for new records; if you also keep a timestamp of the last edit in each record of each table, you can use it as an additional clause or a union to catch changed records. I'm doing something like that to update a remote SQL server from local DBFs, and it works like it should. Both ideas are sketched below.
>
>Anyone who expected me to say "works like a charm" was probably disappointed, but in view of my known views, I don't believe in charms either.
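
Here is a rough sketch of the filetostr() snapshot idea for one table; the file names and paths are placeholders. Note that the three reads are not atomic, so a write landing between them can still leave the three copies out of step, and a 1GB file means a 1GB string in memory:

    * Read each production file into a string, then write the strings out.
    * Per the quote above, FILETOSTR() reads the file even while it is
    * open elsewhere. Paths and file names are made up.
    lcDbf = FILETOSTR("\\prodserver\data\orders.dbf")
    lcCdx = FILETOSTR("\\prodserver\data\orders.cdx")
    lcFpt = FILETOSTR("\\prodserver\data\orders.fpt")
    STRTOFILE(lcDbf, "D:\devdata\orders.dbf")
    STRTOFILE(lcCdx, "D:\devdata\orders.cdx")
    STRTOFILE(lcFpt, "D:\devdata\orders.fpt")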
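
And the replication idea spelled out; prodfile, devfile and pk are the names from the quote, and the lastedit column is an assumption for the timestamp variant:

    * Pull production records whose key doesn't exist in dev yet...
    * (assumes devfile is already open in a work area)
    SELECT * FROM prodfile p1 INTO CURSOR tmp ;
        WHERE p1.pk NOT IN (SELECT pk FROM devfile)
    * ...and append them via the cursor's underlying temp file.
    SELECT devfile
    APPEND FROM DBF("tmp")
    USE IN tmp
    * For edited records, add a clause on an assumed lastedit timestamp
    * column (or UNION it in), as suggested above.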