From: Walter Meester
Date: 08/01/2001 03:51:07
Location: Hoogkarspel, Netherlands

General information
Forum: Visual FoxPro
Category: Other, Miscellaneous
Thread ID: 00457550
Message ID: 00460459
Hi Chris,

>>>Yes, in this example you can use an integer field, but you are sidestepping my point: what happens when you have character PKs and integer PKs?

>>Basically nothing. The point you've got to watch out for is that you don't have programmed modules that expect integer PKs. Since I don't like to be forced into surrogate keys, I don't have such limitations in my framework.
>
>You still have to code differently in your middle tier. I think this case was brought up by Erik Moore. You prefer this method, I do not.

Ah, different from what? AFAIK there is no standard software, framework, or set of guidelines for doing this. Also, when dealing with single-attribute primary keys, whether they are character or integer should not make a significant difference.

I'd like to point out that there are also 16-byte GUIDs (characters) used as surrogate primary keys. These are especially handy in a distributed database system, since PKs could be generated by different servers (a real potential problem with enumerating integer PKs). So, my question is: how would you handle that one? My take on the point is that your framework should not determine the structure of your database.
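
To make the GUID option concrete, here is a minimal T-SQL sketch (the table and column names are my own, purely illustrative): SQL Server generates the key on whichever server takes the insert, so distributed nodes never have to coordinate a counter.

    -- Hypothetical orders table keyed on a server-generated GUID.
    -- NEWID() yields values that are unique across machines, so
    -- several servers can insert rows without handing out ranges.
    CREATE TABLE Orders (
        OrderID   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
        ClientID  INT NOT NULL,
        OrderDate DATETIME NOT NULL DEFAULT GETDATE()
    )

    -- The key never appears in the INSERT; each node fills it in itself.
    INSERT INTO Orders (ClientID) VALUES (42)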

>>>It doesn't give me a lot of faith in the rest of what the author writes.

>>Bad, bad argument. You as a developer should be able to filter out what's right and what's wrong. If you make your decisions based on feelings, or on one less-good example, you might miss some things that make very good sense. Just read the whole article and analyze it piece by piece.

>No, good argument. If the author cannot even see this, why should I have faith in the rest of his arguments? This decision was not based on feelings, but facts: his solution has a potentially serious flaw, one he either isn't aware of or chose not to point out. Either way, not good. So why am I supposed to believe the rest of it?

Wait a minute. The potential problem is so small (an order entered for the same client at exactly the same time, at a 1/100-of-a-second resolution) that it can be regarded as negligible. Personally I find it odd that he uses this kind of PK (but maybe he wanted to make a certain point here); that's entirely my opinion and certainly not based on the potential problem.

I regard this potential problem as no greater than the chance that you'll run out of surrogate keys (or create a duplicate GUID) and have to hunt for gaps between existing PK values. When the system is correctly designed and maintained, both potential problems are negligible.

>>AFAIK, the cities in our country are pretty static. Our country is pretty crowded and there simply is not enough room for more cities. I can't recall a new city having arisen after Almere or Lelystad (and that's about 25 years or more ago).
>>
>>But this is not an issue either. As long as it is possible to extend this lookup table, there is nothing wrong.

>Walter, the cities example above was your scenario, not mine.

Yes, and? Is this not a real-world example?

>>"I don't like to spend hours to adapt the program to a new situation when not necessary at all."

>This is your scenario, but you have not provided an advantage to using intelligent keys.

IMO, I did: I do not have to change numerous programs to retrieve the city from a table containing an address, nor do I have to change numerous views and table structures. I just add the cities table and adapt my program and/or RI to ensure the user enters a city that occurs in the cities lookup table, as sketched below.
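
A minimal T-SQL sketch of what I mean (table and column names are illustrative, not from any real schema): the existing address table keeps its city column, and a foreign key to the new lookup table is all that's needed to guarantee a known city.

    -- Hypothetical lookup table: the city name itself is the key.
    CREATE TABLE Cities (
        City VARCHAR(40) NOT NULL PRIMARY KEY
    )

    -- The address table keeps its existing City column; only the
    -- constraint is added, so programs that read Address.City
    -- don't have to change at all.
    ALTER TABLE Address
        ADD CONSTRAINT FK_Address_Cities
        FOREIGN KEY (City) REFERENCES Cities (City)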

>>As I said, those strategies are different: the lookup table contains all known cities, so even if you start with an empty address table you're ensured that the city entered does exist. With the DISTINCT approach you're able to insert incorrectly spelled cities and will be asked for confirmation each time a city is entered for the first time. The lookup table can be applied to more than one table (for consistency?) without needing a separate DISTINCT solution for each new one. IOW, the lookup table IMO is a far better solution anyway.

>In your original scenario, the Address table already existed and had data, and the request from the manager came later. Now, you are starting with an empty database. Which is it?

I'd buy a table from our PTT containing all cities (and/or postal codes), and/or I'd gather as many cities as I can from the tables containing an address (there could be more than one).

>You failed to mention this in your original scenario. And of course, you asked for my solution, and I live in the United States.

You don't have such tables provided by one or more companies? That could be a hole in the market. Seriously, these are all real-world situations where the choice of intelligent/natural keys is very attractive.

>>Though they're related, it should be designed in such a way that the external schema determines the internal one, and not the reverse. So each different DBMS should have the same external schema (at least as far as possible) and therefore could have different internal ones.

>Says who? You have made this statement, but provided nothing to support it.

When designing and analyzing a database, you do this at the external-schema level. When you want to implement it, you still have the choice between different DBMSs. After you choose one specific DBMS, you start working on the internal schema (maybe you've chosen a heterogeneous distributed database system, e.g. SQL Server and Oracle). This is what I call consistency, my friend ;-).

If, in a particular case, you want to take advantage of DBMS-specific features, this can be applied on a case-by-case basis.

>Again, you are making my point for me! If I want to switch backends with a minimal amount of effort, I have to take into account that Oracle and SQL Server 7 handle cascade updates differently. So I can use surrogate keys and minimize the differences in my backends. And I can anticipate SQL Server 2000 all I want, but it isn't going to change the fact that I have to use SQL Server 7 now.

Errr, come again? You've stated before that you want to use different external schemas, which makes it a whole lot more difficult to switch back ends. Also, don't you think you're using your exceptional case (you don't delete data, you use SQL Server 7 without RI triggers) as an argument for not synchronizing the external schemas of both back ends? This might work in your particular case, but it is by no means a standard.

>>OTOH, when other options might have a clear advantage (I think we have now shown that there can be advantages to intelligent keys which are important for the system), it is not advisable to skip that argument and use surrogate keys just for the sake of consistency.

>No, you have yet to provide an example of a clear advantage for intelligent keys.

Did you read the messages of Daniel Rouleau? In his example he has a table with 80 foreign keys. If they were surrogate, he'd have to include 80 join conditions to produce the same report he gets with intelligent keys. When using surrogate keys you'll in general need more joins, which can make a significant performance difference (a small illustration follows). Also note the points I summed up in my discussion with Erik. If they are not significant for you, that's fine, but someone else might need them to solve an issue by using intelligent/natural keys.
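
A toy illustration (invented names, not Daniel's actual schema): with a natural key the report reads the value straight out of the row, while a surrogate key costs one join per such column.

    -- Natural key: the city name sits in the address row itself.
    SELECT City FROM Address

    -- Surrogate-key variant of the same lookup: the address row
    -- holds only CityID, so even this trivial report needs a join;
    -- with 80 foreign keys that means 80 joins.
    SELECT c.City
    FROM Address a
    JOIN Cities c ON c.CityID = a.CityID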

>>The main message I wanted to give here is: use your brains, think about what strategy is best in your case, and make a decision based on the advantages and disadvantages you find. Never make a decision based only on "for the sake of consistency, I'll always apply surrogate keys".

>There are many advantages to using surrogate keys, including the problem of cascading updates. Yet, you summarize my arguments as just "consistency", as Jim Nelson has tried to do.

O.K., how would you solve the problem of decentralized order input? Let's say you've got an undefined number of clients attached to other DBMSs which generate surrogate keys of their own, and all the orders are replicated to several other DBMSs (nodes). In this case you'll be faced with the fact that all those DBMSs must generate unique keys. In these circumstances you might be better off with GUIDs, which are in fact character data. Consistency? Don't you have to adapt your framework?
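
SQL Server's own multi-node (merge) replication leans on exactly this kind of key. A sketch, again with invented names: the ROWGUIDCOL property marks the column SQL Server 7 uses to identify a row globally across all nodes.

    -- Each node generates its own key values with NEWID();
    -- ROWGUIDCOL tells the replication engine which column
    -- identifies the row globally, so replicated inserts from
    -- different nodes cannot collide.
    CREATE TABLE RemoteOrders (
        OrderID  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
                 DEFAULT NEWID() PRIMARY KEY,
        ClientID INT NOT NULL
    )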

>I love your arguments, Walter. On one hand, you write that triggers are great when I should use them to handle cascading updates in SQL Server 7. Now, when it does not suit your scenario, they are buggy. Walter, you are being terribly inconsistent.

I'm still not convinced that there are no tools to generate the code for cascading delete/cascading update/nullify and ignore RI options. If there aren't, there should at least be some template that describes how to do it. Otherwise this is a significant weakness of the SQL Server DBMS, which in fact has nothing to do with this discussion.
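
This is roughly the kind of trigger such a tool would have to generate for SQL Server 7 (a sketch only, with invented table names; it handles the single-row case, since correlating the inserted and deleted pseudo-tables when the key itself changes takes considerably more machinery, which is exactly why generated code or declarative RI is preferable):

    CREATE TRIGGER trg_Clients_CascadeUpdate ON Clients FOR UPDATE AS
    -- Cascade a changed Clients.ClientID into Orders.
    -- Assumes single-row updates; multi-row key updates would
    -- need extra logic to match old rows to new ones.
    IF UPDATE(ClientID)
    BEGIN
        DECLARE @old INT, @new INT
        SELECT @old = ClientID FROM deleted
        SELECT @new = ClientID FROM inserted
        UPDATE Orders SET ClientID = @new WHERE ClientID = @old
    END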

Triggers might be your only option to implement RI, but they're not the best one. Declarative RI is the best option, precisely because it requires less coding and is bug-free (or is supposed to be).
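
For comparison, this is what declarative RI looks like. One caveat: SQL Server 7 itself cannot do this declaratively; the ON DELETE/ON UPDATE CASCADE clauses below were introduced in SQL Server 2000, and under version 7 the same rule has to be coded as a trigger like the sketch above.

    -- One clause replaces a hand-written cascading trigger.
    ALTER TABLE Orders
        ADD CONSTRAINT FK_Orders_Clients
        FOREIGN KEY (ClientID) REFERENCES Clients (ClientID)
        ON DELETE CASCADE
        ON UPDATE CASCADE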

>>About consistency and surrogates: I wonder if we all would agree to wear the same clothes and drive the same cars and eat the same food because it would be:
>>1) consistent and standard
>>2) simpler by far
>>3) cheaper all around
>>4) require less thinking
>>
>>Try to imagine such a world, and more importantly, what the implications would be.
>
>You are comparing designing database systems to wearing clothes and driving cars? Now, you have gone off the deep end. < g >

No, I haven't. You don't seem to realize that in this world there are different DBMSs, different design strategies, different OSs, different sorts of developers, different opinions, different programming languages, etc.

By insisting on one strategy, you're narrowing down the possibilities for letting all of those work together.

>We have already been through this argument, and you have agreed that restoring backups can be a major PITA.

Yes, I did.

>>Using Views via ODBC would do in this case.

>I didn't say you couldn't do it. It's just that the longer your scenario goes on, the more it costs your client. I assume you are not working for free.

When a manager wants to change things, he should face the costs. I'll try to find the easiest way to do the job, and do it that way. If the manager thinks someone else could do it better, that would be fine with me. Let the other person try; after he or she screws up, they'll return to me anyway.

>>The point is that marking data as inactive is not a suitable solution in many cases, and I can't help but "feel" it isn't the right way. I'd rather take other measures to keep data recoverable. You could think of storing older data in other tables, or maintaining a backup strategy where recovery is made as easy as possible.

>Why is it not suitable?

You're facing the "history" problem. You want to maintain history for data-warehousing purposes; in fact, in general you don't want to delete any historical data. Fine, that I can accept. In Dutch, a lot has been written by Rene Veldwijk regarding this problem.

However, when an order is entered and then canceled at any time after that, how would you remove this order and prevent it from being recovered when an IT manager orders you to restore history? How do you handle faultily entered data?

It might be better to find some reference to the 'history' problem.

>Why is this an exception?

I've never heard of anyone (except you) who does not delete data from a database. You're welcome to prove this is common practice in the database world.

Walter,