VB, C#, and VFP data handling examples
Message
From
26/04/2007 13:27:16
To
25/04/2007 23:28:07
Walter Meester
Hoogkarspel, Netherlands
General information
Forum:
Visual FoxPro
Category:
Visual FoxPro and .NET
Miscellaneous
Thread ID:
01215120
Message ID:
01220204
Views:
37
Walter,

I see exactly where you are coming from. It really is the 10-90 rule, where the last 10% of completing an application can take 90% of the total time. If people's lives are not on the line because of potential validation holes, you can, and it is even prudent to, make some reasonable allowances. So, if you are not programming the space shuttle or a jumbo jet's autopilot system, you can relax a little.

Part of the pessimistic locking strategy, for me, is to lock ALL related records as well, so that nothing falls through the cracks. And this works with 100% accuracy without me having to think of all the possible ways concurrent data entry can mess up the data. In VFP terms it boils down to something like the sketch below.
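
(Rough, untested sketch with made-up table and field names, assuming the record pointers already sit on the rows being edited:)

   * SET MULTILOCKS ON is needed to hold more than one record lock per table
   SET MULTILOCKS ON

   * Try to lock the parent and every related child row before letting the user edit
   lGotAll = RLOCK("customer")
   IF lGotAll
      SELECT orders
      SCAN FOR orders.cust_id = customer.cust_id
         IF !RLOCK("orders")
            lGotAll = .F.      && somebody else holds one of the children
            EXIT
         ENDIF
      ENDSCAN
   ENDIF
   IF !lGotAll
      UNLOCK IN customer      && give back whatever we did manage to grab
      UNLOCK IN orders
   ENDIF

If any lock fails, I release everything and let the user try again.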

Again, not ideal for many reasons, but I go pessimistic whenever I can get away with it.


Pertti

>Hi Pertti,
>
>While we can discuss issues like these until the cows come home, in reality virtually no system is free from data integrity pollution. Whether you are doing SPs, dynamic SQL, pessimistic locking, or optimistic locking, there is always a GAP that can potentially leave your data in an inconsistent state.
>
>We have to identify the risks and take action when the risks are too high. Unfortunately there is no black and white here. Though your example might have been caught more easily by using pessimistic locking and validating the cached data before submitting, that won't catch inter-table integrity checks. For example, given that a male cannot get pregnant: if one user is entering a pregnancy while another user changes the same patient's sex from female to male in another table, that is not something a pessimistic locking scheme will catch either if the other table is cached for validation.
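
(Just to put some code behind that: about the only way to catch it is to re-read the other table at save time, inside the transaction. A rough, untested sketch with invented table and field names:)

   * Inside the save transaction: check the CURRENT sex on disk, not the cached copy
   SELECT sex FROM patient WHERE patient_id = m.nPatientId INTO CURSOR curCheck
   IF curCheck.sex = "M"
      ROLLBACK             && a conflicting edit slipped in; refuse the pregnancy record
      llSaved = .F.
   ENDIF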
>
>The question is how to deal with such issues. To be honest, in our software we don't even check for this extreme situation and have never encountered it, though it is absolutely possible. Though Murphy's law always poses a threat, there is only so much you can and should do to prevent situations like this: not only will you spend more time writing and debugging such validation rules than on actual development, it won't make your software run any faster either. IMO, it is better in such cases to provide a mechanism that validates your data integrity in batch runs. Those batch runs do lengthy validations of large portions of the data to identify most data integrity issues.
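
(A batch run like that can be as simple as a handful of SQL scans; rough sketch, names made up, using the RETIRED/age rule that comes up further down in this thread:)

   * Nightly integrity sweep: collect rows that break a rule into an exceptions table
   SELECT person_id, "RETIRED but under 60" AS rule_broken ;
      FROM person ;
      WHERE retired AND age < 60 ;
      INTO TABLE integrity_errors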
>
>We are doing some data collection and validation for national registries, and it is stunning to see that for a table of roughly 150 fields there are more than a hundred validation rules. In most cases we only implement a portion on our side and depend on the registry to report back any other errors (we often don't even have all the rules). If it is already that difficult for a single table, you can imagine that it is impossible in a complex database system, especially with difficult validations that involve the 'History problem'.
>
>Well designed RI should at least take care of the most serious inter-relational integrity. Additionally, you can have checks, rules and triggers at the database level to define other rules. Third, you can have business rules to guard the next level of integrity in any other layer. And last, have some detection mechanism for the more exceptional cases. Even if you implement it this way, you'll find that you did not think of a particular GAP in your data integrity and have to deal with such cases manually on the occasions they occur.
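
(In VFP/DBC terms those database-level layers could look roughly like this; syntax from memory, names invented:)

   * Field rule at the database level
   ALTER TABLE person ALTER COLUMN age ;
      SET CHECK age >= 0 ERROR "Age cannot be negative"

   * Row-level update trigger for a cross-field rule
   CREATE TRIGGER ON person FOR UPDATE AS NOT (retired AND age < 60)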
>
>
>
>Walter,
>
>
>>>Hi Pertti,
>>>
>>>Though I agree that if you do your data integrity checks on your cached data you might create a mess, I think it is fair to say that these kinds of validations should be done on the actual current data upon save.
>>>
>>>In our framework we can do such validations through SQL statements on the database within the transaction, which avoids the problem you are outlining here. Another strategy is to implement them as validation rules within the database as triggers.
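
(Not Walter's framework, just the general idea in plain VFP: validate against the live data inside the transaction and roll back if the rule no longer holds. Untested sketch, names made up; the table has to live in a DBC for transactions to work:)

   BEGIN TRANSACTION
      UPDATE person SET age = 59 WHERE person_id = m.nId
      * Re-check the rule against what is actually in the table right now
      SELECT COUNT(*) FROM person ;
         WHERE person_id = m.nId AND retired AND age < 60 ;
         INTO ARRAY laBad
      IF laBad[1] > 0
         ROLLBACK
      ELSE
         END TRANSACTION
      ENDIF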
>>>
>>>Again, it is about using the right strategy to prevent a mess.
>>>
>>
>>Yes, I totally agree that validating against cached data is no good in optimistic locking scenarios.
>>
>>In my last example
>>
>>[where the message should have actually read:
>>
>>"While you were editing this record, another user changed this person's RETIRED status as retired. You are trying to save a change in age from 60 to 59. This would make the person RETIRED at 59 YEARS OLD, which is not allowed!"
>>
>>]
>>
>>I was talking about validating against the actual, non-cached data, and while doing that, having to generate a lengthy error message that basically educates the user on database concurrency theory, which is tricky enough for a developer (as evidenced by this thread), and much more so for a casual user.
>>
>>
>>P.
>>
>>
>>>All in all, I find optimistic locking way better for concurrency than having to revert to a pessimistic locking scheme, especially if records contain administrative information that can be changed by batches for other administrative processes.
>>>
>>>There is a middle ground as well, where you can use optimistic locking but with conflict management (the WhereType property, and using .F. for the lForce parameter in the TABLEUPDATE() function), where a user is notified that another user has changed the record and therefore cannot commit their own changes unless they overrule.
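
(For anyone following along, the VFP side of that looks roughly like this; view and field names are made up and this is from memory, so treat it as a sketch rather than gospel:)

   * Optimistic buffering on a view; detect conflicts instead of forcing the update
   CURSORSETPROP("Buffering", 5, "v_person")   && optimistic table buffering
   CURSORSETPROP("WhereType", 3, "v_person")   && WHERE clause = key + modified fields

   IF !TABLEUPDATE(.T., .F., "v_person")       && lForce = .F.: don't clobber other edits
      AERROR(laErr)
      IF laErr[1, 1] = 1585                    && 1585 = update conflict
         * Show the user what changed underneath them and let them decide
         ? "You started from:", OLDVAL("age", "v_person")
         ? "It is now:",        CURVAL("age", "v_person")
         * TABLEUPDATE(.T., .T., "v_person") to overrule, TABLEREVERT() to back out
      ENDIF
   ENDIF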
>>>
>>>Walter,
>>>
>>>
>>>>>The whole change/update tracking got to be such a headache for me at some point that I decided to implement pessimistic locking in all of my apps (thousands of them), and so far nobody has complained. Quite the contrary: as long as the users have enough information about who has the record locked, how long it has been locked, and how to reach them, they are fine with bugging each other to "get off my record".
>>>>>
>>>>>VFP3 introduced the View with optimistic locking as its default state and change tracking by default. It worked well and eliminated all the rlock() code and contention management. IOW there was less code, not more.
>>>>
>>>>True, and was it VFP 5 or 6 that introduced transaction/rollback? That made the task even easier. However, optimistic locking could still wreak havoc with the default "last save wins" behavior. A good example was given above by Kevin (I think), where one person edits the city and the zip code and another one only the zip code. The guy editing the city and the zip code saves first; then comes the guy editing only the zip code and saves. You now have a "semantic disconnect" between the city and the zip code, which would be hard for the software to catch unless it compared city/zip code tables during validation. This is not a very likely real-life scenario, but it points out the pitfalls in optimistic locking and field-to-field dependencies.
>>>>
>>>>Also, some field integrity issues can arise from straight updates that bypass the Business Object. For example, you might have a business rule which states that "a person marked as RETIRED must be OVER 60 YEARS OLD" (again, a very simple and probably not very likely real-life scenario). If one guy changes the RETIRED flag to TRUE and saves while another guy is correcting the age from 60 to 59 and saves, you have a business rule validation problem at the record level. Jeff Pace brought up a good point about this, which I have been thinking about as well: this discussion should probably involve change tracking through the BO.
>>>>
>>>>>
>>>>>The second issue was the rise of the stateless application, such as a web app. Consensus was (and is) that pessimistic locking is infeasible in these circumstances.
>>>>
>>>>Yes, that's the consensus, and for many good reasons. Yet, you don't have to go with the consensus if it doesn't fit your clients' needs...
>>>>
>>>>>
>>>>>If you must allow simultaneous access due to some time-critical and update-conflict-prone data, then you need to implement granular, column-level rules about inter-relations between data items and potential RI problems between related tables if you plan to allow a partial (or selective) update on top of a previous update. Not a good candidate for a generic solution, that.
>>>>>
>>>>>Sounds dire. Can you provide an example? I cannot think of a RI problem caused by selective updates, as long as the standard principles are followed: system-generated primary keys that cannot be edited, triggers if changes need to cascade and transactions if work needs to be done that doesn't fit in a trigger.
>>>>
>>>>True, and I personally never use compound or natural keys. I was just thinking of situations where someone might use natural keys, or situations where business rules cover data from multiple tables, which, strictly speaking, is not an RI problem...
>>>>
>>>>>
>>>>>I know that optimistic locking and the resulting programming mess is sometimes a necessity...
>>>>>
>>>>>What programming mess would that be? If you update edited fields using a where clause consisting of the primary key, your chance of success is as good as can be. If you use VFP Views, that was done for you automatically. I can understand why you might see a mess if you used compound or human-editable keys... but that's a separate issue.
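
(Spelled out, that WHERE clause business looks something like the hand-rolled sketch below, which is roughly what a view's WhereType settings generate behind the scenes; names invented, untested:)

   * Key-only update: last writer wins on the edited columns
   UPDATE person SET zip = m.cNewZip WHERE person_id = m.nId

   * Add the old values to the WHERE clause and you get conflict detection instead
   UPDATE person ;
      SET zip = m.cNewZip ;
      WHERE person_id = m.nId AND zip = m.cOldZip
   IF _TALLY = 0
      * Zero rows hit: someone changed the zip (or removed the row) while we were editing
   ENDIF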
>>>>
>>>>It is a mess when there is a lot of validation interaction between data items within one or more tables. Again, running change tracking through a BO could take care of this, but the error message to the last saver explaining what is wrong with his save attempt could get way too verbose for a regular data entry person to comprehend. Such as, "Record cannot be saved because a person marked RETIRED must be OVER 60 YEARS OLD." The dude would look at his screen and say: "Heck with that, I haven't marked this guy as RETIRED, I can see it right here on the screen in front of me! This software stinks. Call tech support!" So, you would have to expand on the error message so that it says something like: "While you were editing this record, another user changed this person's RETIRED status as retired. You are trying to save a change in age from 60 to 59. This would make the person RETIRED at 60 YEARS OLD, which is not allowed!" Now multiply this logic by the number of interrelated data items on your record, and it does become a mess...
>>>>
>>>>Just sayin'...
Pertti Karjalainen
Product Manager
Northern Lights Software
Fairfax, CA USA
www.northernlightssoftware.com