Form Data Environment
Forum: Visual FoxPro
Category: Classes - VCX, Miscellaneous
Thread ID: 00192388
Message ID: 00192473
Posted: 28/02/1999 15:45:28, in reply to a message of 28/02/1999 14:47:38
>If you open all the files in main.prg and keep them open, are your applications running on a network with multiple people accessing them? Doesn't all the users keeping all the files open all the time raise the chances of the file structures getting creamed from network errors (or users turning their workstations off without logging off)?

Yes, in house all of our apps run in a network environment, as do most of the applications our clients develop. The overhead of opening and closing tables after each data change is prohibitive, especially on a network. You can't block idiocy; we have timers in our apps that will force an idle station out of the application after a known period of time. We also do not allow data to stay in flux for very long; even though we use optimistic table buffering as a general strategy, we perform a TABLEUPDATE() or TABLEREVERT() as soon as changes to a record are accepted, and then rely on transactions to wrap operations that span changes to more than one record in a table.
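
To make that concrete, here's a minimal sketch of that save/revert pattern, assuming a DBC-bound table opened with optimistic table buffering (CURSORSETPROP("Buffering", 5)); the "orders" alias and the llAccepted flag are illustrative, not pulled from our actual framework:

* Assumes optimistic table buffering is already set on the alias:
* =CURSORSETPROP("Buffering", 5, "orders")
IF llAccepted
    BEGIN TRANSACTION
    IF TABLEUPDATE(.T., .F., "orders")    && commit all buffered rows
        END TRANSACTION
    ELSE
        ROLLBACK                          && undo any partial writes
        =TABLEREVERT(.T., "orders")       && throw away the buffered changes
    ENDIF
ELSE
    =TABLEREVERT(.T., "orders")           && user cancelled: revert
ENDIF

The transaction matters when the save touches more than one table; if any TABLEUPDATE() in the batch fails, the ROLLBACK puts everything back the way it was.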

In Weatherhill's offices, every system is on a UPS, and virtually all the machines are left up and running 24 hours a day; our VFP applications are designed to force a logout at known times to ensure that all tables are closed when backups run. Data entry and accounting applications, while mission-critical from the standpoint of being the core of our business, do not have the requirement that they be allowed to run 24x7. We have code in our own app, and added to our accounting application as well, that forces the program to close down at known times, and it requires active intervention by one of a small group of managers (me, the operations manager, or the company treasurer) to keep the automatic logouts from taking place.

We take the backup tape off-site each day; someone has to go over to the bank anyway with deposits, so they stop in and drop off the previous night's tape, and bring back the oldest tape in rotation from a safety deposit box. We also store the backups made during our monthly closes off-site.
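
By way of illustration, a bare-bones version of that forced shutdown might look like the sketch below. The 19:00 cutoff, the goApp object, and its CleanShutdown() method are all stand-ins for the real thing, and the actual apps also check for a manager override before closing:

DEFINE CLASS ShutdownTimer AS Timer
    Interval = 60000        && fire once a minute
    PROCEDURE Timer
        * Compare the clock against the shutdown time (19:00 here);
        * the real code first checks whether a manager has overridden
        * tonight's logout before calling the shutdown routine.
        IF DATETIME() >= DTOT(DATE()) + (19 * 3600)
            =goApp.CleanShutdown()
        ENDIF
    ENDPROC
ENDDEFINE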

Sometimes you have to turn off the power to a machine; periodic lockups require it. When that happens, we try to get everyone out of whichever application froze on that machine before powering it off, to minimize the risk to the system. It's a pain at times, but nowhere near as big a problem as having to redo half a day's work for three or four people.

We have not lost data on a systemwide basis in quite a while, other than one situation where a server died; we have manual procedures in place that document what was done as far as data entry during the previous day (we do a full backup of all servers every night after the enforced logout has taken place, and we use ARCserve to image our databases out to a different server a couple of times a day). We have yet to run into a situation where we've had to go back more than 4 hours to get our data entry back up to speed, other than when our offices flooded during a hurricane in 1994, and even then, we were able to restore the databases from the previous day's backup.

The trick is being able to go back and identify what's been done since the last backup. A simple process of having an 'out basket' on the desk of every person who does data entry takes care of that; the source document (customer orders, inventory receipts, returns, etc.) is saved until we know that it's in the system and no longer needed.

This means that the data entry task is not an informal one; it's the single most important thing done during day-to-day operation of the business, and that's what justifies the cost of protecting the work. For example, how much time is lost if a power outage occurs and the computers die with the power? Compare that to the ~$100 per station it costs to put an adequate UPS in place. Disk storage is cheap, so mirrored disks on the servers, plus imaging the data to a second server on a regular basis, cost little as well.

This can be a hard sell to a client until they've run into a situation where it cost them a significant amount of time and money to recover from something that happens in the normal course of business. I'm lucky in a way - all of the operations staff at Weatherhill came from environments where they'd seen what inadequate protection of data costs, compared with the cost of providing that protection. I don't have any problems getting the budget for adequate server hardware, tape backup, power protection, or even the little out baskets on every desk, because TPTB have a real appreciation of the value of the data collected during the day-to-day operation of the company.

>Another related question: How do you go about generating a unique key value for your files? I was looking for a self-incrementing integer field type, but couldn't find one. So at the moment, I open a 'key value' file exclusively, keeping anyone else from reading it, get my values, update it, and close it. What's the better way? Thanks.
>

Each system has a table containing one record per table in that system; we don't buffer this table. When adding records, we acquire an RLOCK() on the record containing the last-key-used value for the target table, generate as many keys as we need, and write the new last key used before releasing the lock and flushing. The RLOCK() is adequate to ensure serialization of access; we haven't worried about having to throw away keys if we don't accept the transaction, and we don't recycle keys from deleted records. We 'spend' 8 bytes per record for a key (we started this strategy years before the integer data type was added to VFP, so we use a binary character field for a PKEY). 8 bytes is an ungodly large number; since we're dealing with perhaps 200,000 customer order line items per year, we should be OK for a long time.
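
For illustration, here's a minimal sketch of that locked-counter approach. It assumes an unbuffered system table SYSKEYS, already open, with fields cTable C(30) and nLastUsed N(10); all the names are made up for the example, and it uses a plain numeric counter rather than our 8-byte binary character PKEY:

FUNCTION NewKeys
LPARAMETERS tcTable, tnCount
LOCAL lnFirst, lnOldArea
IF EMPTY(tnCount)
    tnCount = 1
ENDIF
lnOldArea = SELECT()
SELECT SYSKEYS
LOCATE FOR UPPER(ALLTRIM(cTable)) == UPPER(ALLTRIM(tcTable))
* Spin until we own the counter record; SET REPROCESS governs
* how long each RLOCK() attempt waits before giving up.
DO WHILE !RLOCK()
ENDDO
lnFirst = SYSKEYS.nLastUsed + 1
REPLACE nLastUsed WITH SYSKEYS.nLastUsed + tnCount
FLUSH                        && force the write to disk
UNLOCK
SELECT (lnOldArea)
RETURN lnFirst               && caller uses lnFirst .. lnFirst + tnCount - 1
ENDFUNC

Because the lock is held only for the instant it takes to bump the counter, contention stays low even with many stations adding records at once.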

>
>>
>>I use the DE in forms, and have code in my framework, running in the DE's BeforeOpenTables event, that fixes the paths in the DE objects (DataEnvironment.Cursor.Database and DataEnvironment.Cursor.CursorSource) before the actual tables get opened. I pick up the actual data paths from the station configuration entries before anything ever gets opened (depending on which of my applications you look at, there's either a registry entry, a command-line parameter passed in by the shortcut that starts the app, or a set of records in the system's main configuration table that specifies where the user is supposed to look for data files) and store the base path for data in a property of my application object. I use that app object property to fix things up before my forms try to open tables and views.
>>
>>It may sound like a lot of work, but it really isn't - develop the strategy once and build it into your base form class, so that every form derived from it inherits the behavior you need. That's a big part of OOP. The development curve can be reduced even further by using a framework that already handles the behavior the way you want it done, or that can easily be extended to do what you want.
>>
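
As a concrete illustration of the path fix-up the quote describes, here's a minimal sketch of code that could run in DataEnvironment.BeforeOpenTables; goApp.cDataPath stands in for the application-object property holding the base data path, and every name here is illustrative rather than taken from the framework being quoted:

* In the form's DataEnvironment.BeforeOpenTables event
LOCAL laMembers[1], lnCount, lnI, loCursor
lnCount = AMEMBERS(laMembers, THIS, 2)    && objects contained in the DE
FOR lnI = 1 TO lnCount
    loCursor = EVALUATE("THIS." + laMembers[lnI])
    IF UPPER(loCursor.BaseClass) == "CURSOR"
        IF EMPTY(loCursor.Database)
            * Free table: re-point the table itself
            loCursor.CursorSource = ADDBS(goApp.cDataPath) + JUSTFNAME(loCursor.CursorSource)
        ELSE
            * DBC-bound table or view: re-point the database container
            loCursor.Database = ADDBS(goApp.cDataPath) + JUSTFNAME(loCursor.Database)
        ENDIF
    ENDIF
ENDFOR

Keeping the actual path logic in a method on the application object, with each DE's event just calling it, puts the fix-up in one place - which is the develop-it-once point the quoted text is making.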
EMail: EdR@edrauh.com
"See, the sun is going down..."
"No, the horizon is moving up!"
- Firesign Theater


NT and Win2K FAQ | cWashington WSH/ADSI/WMI site
MS WSH site | WSH FAQ Site
Wrox Press | Win32 Scripting Journal
eSolutions Services, LLC

The Surgeon General has determined that prolonged exposure to the Windows Script Host may be addictive to laboratory mice and codemonkeys