Message
From: Walter Meester
Hoogkarspel, Netherlands
06/01/2001 03:46:06

General information
Forum: Visual FoxPro
Category: Other / Miscellaneous
Thread ID: 00457550
Message ID: 00460180
Chris,

>>The PK is a 32 bit integer field. The only difference between the surrogate and intelligent key here is that in this case the inv_pk has a meaning outside the RDBMS.

>Yes, in this example you can use an integer field, but you are sidestepping my point: what happens when you have character PKs and integer PKs?

Basically nothing. The point you've got to watch out for is that you don't have programmed modules that expect integer PKs. Since I don't like to be forced into surrogate keys, I don't have such limitations in my framework.

>>The chance that it would happen is very, very small, since the datetime datatype has a resolution of one second or 1/100 of a second. If it accidentally does happen (once in a very long while), the primary key uniqueness constraint will make the insert fail. Whether or not you handle this situation, the database remains consistent. OTOH, I've got to agree that this solution seems a bit odd to me. I would have chosen an internal order number, generated by the system in the same way as surrogate keys.

>It doesn't give me a lot of faith in the rest of what the author writes.

Bad, bad argument. You as a developer should be able to filter out what's right and what's wrong. If you base your decisions on a feeling about one less-than-good example, you might miss some things that make very good sense. Just read the whole article and analyze it piece by piece.
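
To illustrate the datetime-as-key point from the quote above, here is a minimal, hypothetical T-SQL sketch (the order table and its columns are invented for illustration, they are not from the article):

CREATE TABLE orders (
    entered  DATETIME      NOT NULL PRIMARY KEY,
    customer CHAR(10)      NOT NULL,
    amount   DECIMAL(10,2) NOT NULL
)

INSERT INTO orders (entered, customer, amount)
    VALUES ('2001-01-06 03:46:06', 'C0001', 100.00)

-- In the rare case a second order arrives within the datetime resolution,
-- this insert violates the primary key and is rejected, so the table
-- stays consistent either way.
INSERT INTO orders (entered, customer, amount)
    VALUES ('2001-01-06 03:46:06', 'C0002', 250.00)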

>>>Or create a view of DISTINCT city values. When the user enters a city that does not exist in the view, ask them if what they typed is correct. In your scenario, how do you handle a new city being entered?

>>There is a difference in the two approaches: the DISTINCT approach would only recognize cities that are already used in the address table. The lookup table would allow you to enter all cities beforehand (I know this is a hell of a lot more work in the USA than in my small country) and does not allow the users to enter a city that does not occur in the lookup table.

>Are you saying that cities in your country have been fixed for a very long time? No new ones, or the potential for new ones? Either way, I would think that having all cities entered beforehand is questionable.

AFAIK, the cities in our country are pretty static. Our country is pretty crowded and there simply is not enough room for more cities. I can't recall a new city having arisen after Almere or Lelystad (and that's about 25 years ago or more).

But this is not an issue either. As long as it is possible to extend this lookup table, there is nothing wrong.

>>IMO, this situation is not an academic one, since our PTT delivers a postal code table (updated regularly) in which the postal code automatically defines the street and city. In this table the postal code is the primary key. This allows your application to always have the right address in your table. If no PK other than the postal code is available, it is harder to deal with surrogate keys here.

>Your argument is very confusing. In your previous post, you introduce a second table with just the cities as your solution. Now, it appears as though selecting DISTINCT cities will prevent an incorrect city from being entered by the user. Exactly what are you trying to say?

As I said, those strategies are different. The lookup table contains all known cities, so even if you start with an empty address table you're ensured that the city entered actually exists. With the DISTINCT approach you're able to insert incorrectly spelled cities and will be asked for confirmation each time a city is entered for the first time. The lookup table can also be applied to more than one table (for consistency ??) without needing a separate DISTINCT solution for each new one. IOW, the lookup table IMO is a far better solution anyway.
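
To make the difference concrete, a rough SQL sketch (table and column names are mine, just for illustration):

-- Lookup table approach: the city must exist in the lookup table
-- before it can be used in an address row.
CREATE TABLE city (
    cityname CHAR(40) NOT NULL PRIMARY KEY
)

CREATE TABLE address (
    address_pk INTEGER  NOT NULL PRIMARY KEY,
    street     CHAR(40) NOT NULL,
    cityname   CHAR(40) NOT NULL REFERENCES city (cityname)
)

-- DISTINCT approach: the view only knows about cities that have
-- already been used, so the first (possibly misspelled) entry of a
-- city can only be caught by asking the user for confirmation.
CREATE VIEW used_cities AS
    SELECT DISTINCT cityname FROM address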

In our country we could use the postal code table maintained by our national PTT. This one is updated regularly (so you're ensured that all new postal codes are included). Our PTT even offers a solution to check whether a certain address exists.
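
Roughly like this (again a hypothetical sketch, the real PTT table layout may differ):

-- Postal code table as delivered by the PTT: the postal code is the
-- natural primary key and determines street and city.
CREATE TABLE postalcode (
    postcode CHAR(7)  NOT NULL PRIMARY KEY,
    street   CHAR(40) NOT NULL,
    cityname CHAR(40) NOT NULL
)

-- A customer address then only needs the postal code and house number;
-- street and city follow from the lookup.
CREATE TABLE customer_address (
    customer_pk INTEGER NOT NULL PRIMARY KEY,
    postcode    CHAR(7) NOT NULL REFERENCES postalcode (postcode),
    houseno     INTEGER NOT NULL
)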

>>The point I'm trying to make is that I don't care about the internal schema: if you've got to do more work to implement cascading deletes and/or updates, so be it. The point is that the external schemas of the same database in two different DBMSs should be the same. How you handle this in the internal schema is not a decisive issue in choosing one strategy over the other.

>I could not disagree more. If I am the one doing both the internal and external work, the two are very related.

Though they're related, it should be designed in such a way that the external schema determines the internal one, and not the other way around. So each DBMS should have the same external schema (at least as far as possible) and therefore could have a different internal one.

>>No, the external schemas (tables, fields, datatypes, RI rules and other constraints, SQL) should be exactly the same no matter which DBMS is used. (Yes, I know that certain DBMSs do not support certain data types and there are differences in the SQL dialects, but at least try to be as uniform as possible.)

>Ah, so you are arguing for consistency?

In this case, I guess I am. It's a given fact that when doing such projects you want transparency (oh no Walter, what have you done, you've introduced another buzzword: transparency), which means that you could switch back ends with the minimum amount of effort and/or cost. In your SQL 7 example, it might also be called anticipating SQL Server 2000.

OTOH, when another option has a clear advantage (I think we have now established that there can be advantages to intelligent keys which are important to the system), it is not advisable to just dismiss that argument and use surrogate keys purely for the sake of consistency.

The main message I want to give here is: use your brains, think about which strategy is best in your case and make a decision based on the advantages and disadvantages you find. Never make a decision based only on "for the sake of consistency, I'll always apply surrogate keys".

>>And what about the "an articleno may not change when it occurs in the sales table" (restrict) rule? When the articleno is not the PK anymore (since you've made it an alternate key), you can't use an RI rule, because the articleno does not occur in the sales table anymore. You'll have to write a trigger for that.

>Not a problem. Either way, I can write basically the same trigger for my multiple back-ends. Consistency.

Consistency ??? You've got me wondering how you would do this for multiple back ends. However, this is not what I'm trying to say. The thing is that surrogate keys might seem to solve a few problems at first sight, but create others in the end. In the previous examples, when using intelligent keys in Oracle or SQL Server (in SQL Server use the REFERENCES keyword; in VFP use the RI builder), I could just define a restrict rule and don't have to worry about writing triggers (which tend to be buggy anyway).
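
A rough sketch of what I mean (hypothetical table names, SQL Server flavour):

-- Intelligent key: sales carries articleno itself, so a plain
-- declarative RI rule already stops an articleno from changing
-- while it occurs in sales.
CREATE TABLE article (
    articleno CHAR(10) NOT NULL PRIMARY KEY,
    descr     CHAR(40) NOT NULL
)

CREATE TABLE sales (
    sales_pk  INTEGER  NOT NULL PRIMARY KEY,
    articleno CHAR(10) NOT NULL REFERENCES article (articleno)
)

-- Surrogate key: sales carries art_pk instead and articleno becomes
-- an alternate key, so the same restrict rule now needs a trigger.
CREATE TABLE article2 (
    art_pk    INTEGER  NOT NULL PRIMARY KEY,
    articleno CHAR(10) NOT NULL UNIQUE,
    descr     CHAR(40) NOT NULL
)

CREATE TABLE sales2 (
    sales_pk  INTEGER NOT NULL PRIMARY KEY,
    art_pk    INTEGER NOT NULL REFERENCES article2 (art_pk)
)

CREATE TRIGGER article2_restrict ON article2 FOR UPDATE AS
IF UPDATE(articleno) AND EXISTS
    (SELECT 1 FROM sales2 s JOIN deleted d ON s.art_pk = d.art_pk)
BEGIN
    RAISERROR ('articleno may not change while it occurs in sales', 16, 1)
    ROLLBACK TRANSACTION
END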

About consistency and surrogates: I wonder if we all would agree to wear the same clothes and drive the same cars and eat the same food because it would be:
1) consistent and standard
2) simpler by far
3) cheaper all around
4) require less thinking

Try to imagine such a world, and more importantly, what the implications would be.

>>What's the problem with using a second machine when handling such exceptional cases?

>Not a problem, so long as my client has an extra machine and a license for SQL Server. How many clients do you have that have an unused machine lying around with the specs to run SQL Server, and an extra license for SQL Server?

Well, I personally have a licence for that one, and an extra machine is never a problem for me. If necessary I'll take my SQL Server from home (it's already installed on a server) and do my stuff. As for the hardware configuration: it is not important, as long as SQL Server will run on it, since its only purpose is to restore a backup and gather the data from it.

>>My preferred strategy would be to gather all the needed data in VFP and export it to the new database.

>Your scenario is becoming more time consuming as this discussion goes on. One, your backup data is in SQL Server, so you have to restore it. I have covered the hurdles with this. Then you have to create a VFP program to access it.

Using Views via ODBC would do in this case.

>Then you have to decide what you bring in from the backup, making sure not to overwrite existing records, and making sure when you bring in an old Order record, you also bring in any parent records. Or, you could have just marked the data as inactive in the first place and avoid all this.

The point is that marking data as inactive is not a suitable solution in many cases, and I can't help but 'feel' this isn't the right way. I'd rather take other measures to keep data recoverable. You could think of storing older data in separate tables, or maintaining a backup strategy where recovery is made as easy as possible.
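
For example (just a hypothetical sketch of the "separate tables" idea; the orders table, orderdate column and cut-off date are invented):

-- Move orders older than a cut-off date to an archive table with the
-- same structure, then remove them from the live table.
INSERT INTO orders_archive
    SELECT * FROM orders WHERE orderdate < '1999-01-01'

DELETE FROM orders
    WHERE orderdate < '1999-01-01'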

>The original part of this discussion related to deleting and restoring data, not the intelligent key vs. surrogate key debate.

OK, apparently you've found a way to easily 'undelete' old data by marking it inactive. At least you've thought about this strategy, and I admire that. (I like to see people thinking about their actions instead of following the herd and the hype (and yes, this also applies to the surrogate vs intelligent key thing).) However, since this case is so exceptional (at least in my opinion), it is not suited for drawing general conclusions.

You're in a better position (because you apparently have thought more about this problem than I have) to judge whether it's the best strategy. However, if I were you, I would find some other people who have similar problems and have found other solutions.

>Let's just agree to disagree.

No, let's agree to agree and regard our difference of opinion as negligible.

Walter,