To: Walter Meester, Hoogkarspel, Netherlands
31/12/2000 02:14:07

General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00457550
Message ID: 00458208
Views: 24
>I'm talking about static intelligent keys here. You asked what static intelligent keys are used for. I said: for cases where the key is not, or is highly unlikely to be, changed. For dynamic intelligent keys you'll have to use cascading updates.
>
>What I'm saying is that there is a difference between intelligent keys entered by the user and intelligent keys generated by the database. For example, let's take an invoice table (containing the header info, not the invoice lines). What do you take as the primary key?
>
>a. A meaningless surrogate number?
>b. A generated invoice number which is printed on the invoice itself?
>
>Surrogate key purists would choose A. I would choose B. Why? According to our financial guidelines, invoice numbers, once generated, should never change. The only difference between A and B is that the PK in case B has a meaning outside the database (and thus is an intelligent key by definition).

I still disagree, based on the argument that I want consistency throughout my database. I can get that every time with surrogate keys. Indeed, I can also get a great degree of consistency in naming my surrogate keys. TableName + ID. Makes life wonderful when I write my middle-tier.

In your example, I may be able to use intelligent keys a lot of the time, but not all. If I can treat all my tables throughout all my databases across all my backends in a consistent manner, I have a better, more maintainable system. Do you not look for consistency?

I now have it in three areas:

1. Every table has a surrogate PK
2. Every PK's name is TableName + ID
3. Every PK is an integer

For me, it makes development so much easier.
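
As a rough sketch of what those three conventions look like in T-SQL (the table and column names here are made up purely for illustration, and the key values would come from wherever the middle tier generates them):

CREATE TABLE Company (
    CompanyID   int NOT NULL,           -- surrogate PK: TableName + ID, integer
    CompanyName varchar(60) NOT NULL,
    CONSTRAINT PK_Company PRIMARY KEY (CompanyID)
)

CREATE TABLE OrderHeader (
    OrderHeaderID int NOT NULL,         -- same convention on every table
    CompanyID     int NOT NULL,         -- FK back to the parent's surrogate key
    OrderDate     datetime NOT NULL,
    CONSTRAINT PK_OrderHeader PRIMARY KEY (OrderHeaderID),
    CONSTRAINT FK_OrderHeader_Company FOREIGN KEY (CompanyID)
        REFERENCES Company (CompanyID)
)

Every table, every backend, the same pattern.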

>Don't confuse intelligent keys with keys entered by the user. Intelligent keys can also be generated.

Yes, I use generated keys. I call them surrogate keys. <g>

>In other cases, where the intelligent key *is* entered by the user, it might change as long as there are no restricting RI children (an article number might change as long as it is not yet used in order lines and does not occur in important statistics).
>
>No, it's about setting up your database right. You'll have to set up your RI rules for cascading deletes anyway. Again, I don't see why it should be a requirement to implement your cascading deletes correctly while forgetting to correctly implement cascading updates. Tell me, what's the difference?

On the system I am working on now, I don't delete. Period. And I don't have to worry about cascading updates because I use surrogate keys.

>>Can I not create an UPDATE trigger that is relatively consistent across my multiple backends to prevent this?
>
>Hmmm, why not implement RI restrict rules? Note that if you do, you're awfully close to implementing cascading updates for the table. The point is that in fact you're implementing RI (referential integrity) rules. If you do this, then do it right!

There is a big difference between implementing a trigger to prevent a surrogate key from being changed across multiple backends and implementing cascading updates through RI. As I have pointed out, and you have failed to address, I believe I don't need to create triggers in Oracle to do this, as I have to with SQL Server 7. Again, I want consistency.
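
The kind of trigger I mean is nothing more than this on SQL Server 7 (a sketch only, reusing the made-up Company table from above; an Oracle version would be written a little differently):

CREATE TRIGGER trg_Company_ProtectKey
ON Company
FOR UPDATE
AS
-- Reject any statement that touches the surrogate key column.
IF UPDATE(CompanyID)
BEGIN
    RAISERROR ('CompanyID is a surrogate key and may not be changed.', 16, 1)
    ROLLBACK TRANSACTION
END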

>If you look at it carefully there are not that many differences. For a surrogate primary key or static intelligent primary key:
>- You'll have to implement cascading deletes (RI rule)
>- You'll have to prevent the surrogate key from being changed accidentally from outside (can be done with RI restrict rules)
>
>With dynamic intelligent primary key:
>- You'll have to implement cascading deletes (RI rule)
>- Implement cascading updates (RI rule)
>
>I don't think the former is significantly better than the latter.

See above

>With one big difference: though the social security number is incorrect, the change is cascaded through all tables and the primary key / foreign key relation remains intact. In your case the database is left inconsistent, because it now contains orphan rows where certain child records don't have a corresponding parent. If you examine this case carefully, you'll notice that this problem will not arise with intelligent keys whose changes are cascaded through all related tables.

Sorry, I missed this the first time (I am new to SQL Server). Your premise is incorrect, at least as far as the cascades go. A developer or power user cannot change the surrogate key, because of the foreign key constraints. If I have a Company table, with a surrogate key, and an Order table, with its own surrogate key and CompanyID as a foreign key, no one (developer or power user) can change the CompanyID in the Company table. SQL Server will not allow it.
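
To make that concrete with the made-up tables from the earlier sketch, assume at least one OrderHeader row already references CompanyID 12:

-- Conflicts with FK_OrderHeader_Company: SQL Server raises a constraint
-- violation and the key value never changes.
UPDATE Company
   SET CompanyID = 99
 WHERE CompanyID = 12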

Now, someone can change the CompanyID field in the Order table, but how is this any different from your system? I could just as easily go to a child table and change the SSN.

Now, in my example, the use of surrogate keys explicitly prevents the changing of the PK. In your example, it is allowed, and the results are even worse. The SSN updates have cascaded, so all the data is consistent, but consistently bad. How are you going to catch this? You can't run a maintenance routine looking for orphaned records, because there aren't any. So how are you handling this in your current projects?

>>Please do not misquote me. I have never argued that I do not delete data due to poor data design. I generally do not delete data because, in my experience, if someone took the time to put it in, they may need it back at some point. Even if a customer is no longer in business, I may need that customer and their order history. Disk space is cheap. Lost data is not.
>
>Can you point out some reasons why the customer would want to undelete a record? Would a backup not be sufficient? What happens to performance when you don't delete data?

A client may want to undelete a record because, when we designed the system, they were used to the previous system, which bogged down once it held over three years of data. So, based on that experience, they only want to keep three years around.

I know better, so when the three-year limit is reached, I mark the records as inactive rather than delete them. Disk space is cheap. Then, when the new manager comes in and wants to look at trends, no one is in trouble, especially me.
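
In practice that is nothing more than a status flag (again a sketch with made-up names, assuming an Inactive bit column on OrderHeader that the queries and views filter on):

-- Instead of DELETE, flag everything older than the retention window:
UPDATE OrderHeader
   SET Inactive = 1
 WHERE OrderDate < DATEADD(year, -3, GETDATE())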

A backup? In SQL Server? When I restore a backup in SQL Server, I restore the entire database at the point in time of the backup, not an individual table.
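
Getting one "deleted" row back that way would mean something like the following (hypothetical database name and backup path), which brings back every table as of the backup, not just the row I wanted:

-- Restores the whole database to its state at backup time.
RESTORE DATABASE Sales
    FROM DISK = 'D:\Backups\Sales.bak'
    WITH REPLACE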

And as far as performance goes, SQL Server scales just fine. If I run into a case where I need to delete data, and I'm sure I will, I will have to worry about cascades. But I'm hoping SQL Server will do it all for me by then. <g>
Chris McCandless
Red Sky Software