Creating table for Variable values
From: Dragan Nedeljkovich (Online), now officially retired, Zrenjanin, Serbia
Date: 18/03/2017 06:21:16
Forum: Visual FoxPro
Category: Databases, Tables, Views, Indexing and SQL syntax / Miscellaneous
Thread ID: 01649129
Message ID: 01649155
Views: 46
>>>So, for each different Type the table will have a max number of columns. This way, the UI will take care of Captions and matching the Captions to the Col Name/Type.
>>>
>>>What do you think of this approach?
>>
>>I wouldn't. I'd use a one-to-many approach of some sort, with one record for each value. Since the values can be different types, you'd either need multiple tables, or for each record to have one field for each possible type, and another field that tells which column is in use.
>>
>>Tamar
>
>Multiple tables is what I want to avoid. The second part of your suggestion, in general ("for each record to have one field for each possible type, and another field that tells which column is in use."), is what I thought I described above. Although I am confused as to what you mean by "for each record to have one field for each possible type".

The way you propose here is age old and I've seen it many times over the last 30 years. The situations where there is such a requirement, to store a somewhat limited set of values of variable type and number, are numerous and likewise different in nature. The two extreme cases I had were a water analysis lab and a bakery. In the lab, there were several different sets of analyses with a set of values each - some of them with just four values, some with up to 20, with different units of measure. The bakery had a delivery schedule in a single table, where the rows were the shops and the columns were types of bread and pastry plus delivery time. The numbers were integers, at least (actually N(3), as not a single shop would order more than a few dozen of anything, so two digits actually covered everything; the third was just in case). The table was huge, and the real bug came up when they produced so many kinds of items, some delivered two or three times a day, that they broke the 255-field limit; the next one came when they ran out of disk space (a 4M disk at the time). A normal dbf zips to about 10% of its original size; this one zipped to 2% - there was that much empty space.

We have a proverb: "if you don't pay the bridge toll, you pay the crossing." The complexity can't be swept under the rug. My preference is to keep the tables neat and put the complexity in a biz object.

The way I solved both of the above cases was to have the atom in the record, i.e. one record per single item. For empties, no record. So one row of delivery from the above table would have the identity of the shop, of the item, the delivery time and the amount. In the lab, the identity (fk) of the batch, of the thing measured, of the unit of measure, of the precision, and the value measured. There was also a parent table of batches with date, time, identity of the analysis set, lab technician, location/origin of the sample, etc.
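As a rough sketch (the table and field names here are mine, not from the original apps), the atomic layout for those two cases might look like this in VFP:

* Hypothetical sketch of the atomic layout - names are illustrative only.
* Bakery: one record per shop/item/delivery time; empties simply have no record.
CREATE TABLE delivery (shop_id I, item_id I, deliv_tm T, amount N(3))

* Lab: one record per measured value, everything else is a foreign key.
CREATE TABLE labvalue (batch_id I, meas_id I, unit_id I, decimals N(2), measval N(12,4))

* Parent table of batches.
CREATE TABLE batch (batch_id I, a_date D, a_time C(8), set_id I, tech_id I, origin C(40))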

Now you may or may not want to take this atomic approach. Its advantage is that all the values are queryable and it's actually a neatly normalized schema. The disadvantage is the complexity of SQL when you want a denormalized result.
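To show what that SQL complexity looks like, here's a sketch of a crosstab that puts one row per shop back together from the atomic delivery table above (the item ids and cursor name are made up):

* Denormalize the atomic rows into one row per shop - illustrative only.
SELECT shop_id, ;
    SUM(IIF(item_id = 1, amount, 0)) AS white_bread, ;
    SUM(IIF(item_id = 2, amount, 0)) AS rye_bread ;
  FROM delivery ;
  GROUP BY shop_id ;
  INTO CURSOR crsDeliv

Every new kind of item means another IIF() column here - which is exactly the complexity that gets pushed out of the table and into the query (or the biz object).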

The other approach is what was already recommended - stuff those name/value/unit-of-measure triplets into a memo field as json (xml would be too verbose, IMO). The advantage is the simplicity of the table structure and hence of the SQL; the disadvantage is that you don't have these values in their own fields, so they aren't queryable.
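For illustration only (the field and value names are made up, and it assumes the batch table has a memo field called results), filling such a memo for one lab batch could look like this:

* Store the name/value/unit triplets as json text in a memo field - a sketch, not the original code.
REPLACE results WITH ;
    '[{"name":"pH","value":7.2,"unit":""},' + ;
    ' {"name":"nitrates","value":12.5,"unit":"mg/l"}]' ;
    IN batch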

Both of these approaches allow for expansion - you can add new types of fields as you like: with the atom records it's just another name/unit-of-measure record in the metadata; with json it's just another triplet, with its own metadata (actually the same metadata as in the atom case).
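Either way, that metadata can be a single small lookup table; again a hypothetical sketch with made-up names:

* One record per kind of value that can occur; adding a new kind is just an INSERT, no structure change.
CREATE TABLE measmeta (meas_id I, measname C(30), unit_id I, decimals N(2))
INSERT INTO measmeta (meas_id, measname, unit_id, decimals) VALUES (21, "turbidity", 5, 2)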

Your approach with anonymous fields, where the semantics depend on the value in the type field, actually breaks the 2nd normal form (IIRC) and is considered a dirty technique, one we all had to use in the times of COBOL and the like, when changing a table structure required recompiling everything and writing a conversion utility; the advent of dbf (where the structure is defined in the table header, not in your code) and of database servers made that unnecessary. And disk space considerations are irrelevant here - this approach actually creates lots of empty fields.

Note that I'm not even mentioning multiple tables - because you would need one table per type, and types may differ by perhaps one field. You would need one table for each combination of field types or at least for each one that occurs in practice, and you'd have to create another table when a new type appears. And it will.

back to same old

the first online autobiography, unfinished by design
What, me reckless? I'm full of recks!
Balkans, eh? Count them.