Subject: Audit trail thought-experiment
Forum: Politics
Category: Other, Miscellaneous
Thread ID: 00567478
Message ID: 00567576
Peter,

It's nice to hear from a fellow enthusiast - working all the time doesn't always leave space in the day for thinking about the theory behind the work one does.

Your suggested approach is very interesting. The approach I took was to have an audit-trail record *for each field*, almost as you state, with an identical timestamp for all the fields participating in the same transaction (i.e. the datetime field is not unique), the additions being the control information (including the PK of the record being trailed). This is not truly normalised, since the timestamp is repeated in order to gather together all the values for a transaction. Having thought about this, making the timestamp the primary key (and hence unique) would be valid, assuming the transaction to be serialisable and atomic (which is already possible with current technology): to reconstruct the record would be a case of filling in the field values for a virtual record by scanning back from a time T[now] to a maximum depth of T[insert] for the virtual primary key.
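The scan-back reconstruction above can be sketched in a few lines of Python. The table name, field names, and data here are hypothetical illustrations, not part of any real schema; the point is that the latest value seen (at or before time T) for each field wins.

```python
from datetime import datetime

# Hypothetical per-field audit trail: one row per changed field, carrying
# the control information described above (timestamp, table, field, value,
# and the PK of the record being trailed).
trail = [
    (datetime(2001, 10, 10, 12, 0), "XYZ", "custname", "fred",  99),
    (datetime(2001, 10, 10, 12, 0), "XYZ", "city",     "Leeds", 99),
    (datetime(2001, 10, 11, 9, 30), "XYZ", "city",     "York",  99),
]

def reconstruct(trail, table, pk, at):
    """Rebuild the virtual record as of time `at` by scanning back from the
    newest entry towards the insert, keeping the first (i.e. latest) value
    seen for each field."""
    record = {}
    for ts, tbl, field, value, key in sorted(trail, reverse=True):
        if tbl == table and key == pk and ts <= at and field not in record:
            record[field] = value
    return record

print(reconstruct(trail, "XYZ", 99, datetime(2001, 10, 10, 23, 59)))
# as of 10-OCT evening: {'custname': 'fred', 'city': 'Leeds'}
```

A real implementation would of course push this scan into an index range query rather than sorting the whole trail in memory.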

Also, of course, you are right: the 'OLD' value does NOT need to be stored. Neither, in fact, does the type of transaction (I/U/D) if we use a NULL/DELETED value for all the deleted fields: an 'insert' is just an update to a non-existent record. All that is needed is datetime/table/field/value/virtualPK/user.
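To make the point concrete, here is a minimal sketch (with made-up data) of how the I/U/D marker disappears: a delete is stored as the DELETED sentinel, and an insert needs no marker at all because it is simply the first update for that field.

```python
DELETED = None  # sentinel: a delete is just an update to the DELETED value

# rows: (datetime, table, field, value, virtualPK, user)
trail = [
    ("2001-10-10T12:00", "XYZ", "custname", "fred",   99, "brian"),
    ("2001-10-12T08:00", "XYZ", "custname", DELETED,  99, "brian"),
]

def current_value(trail, table, field, pk):
    """Latest value for one field of one record; DELETED (None) means the
    field has been removed. ISO-format timestamps sort chronologically as
    strings, so max() picks the most recent row."""
    rows = [r for r in trail if r[1] == table and r[2] == field and r[4] == pk]
    return max(rows)[3] if rows else DELETED
```

With only the first row present, `current_value` yields 'fred'; once the second row is appended, the field reads as deleted, with no transaction-type column anywhere.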

I was trying to think of how to eliminate the table.field combination, but I think this needs to be retained: one might also need to identify the originating database, so to generalise (using 'handle' to indicate some combination of table/field-name, etc.):

datetimeStamp/handle/value/virtualPK/User

The HANDLE field could be dropped by having it as a record in the table, prior to the recorded changes, as could the virtualPK (and user).

We end up with:
datetimestamp/Value

So, INSERT INTO XYZ (pk, custname) VALUES (99, 'fred')

would become (when stored):-

10-OCT-2001#12:01:60505 user:brian
10-OCT-2001#12:01:60506 XYZ
10-OCT-2001#12:01:60507 99
10-OCT-2001#12:01:60508 custname
10-OCT-2001#12:01:60509 fred

5 records per field, in effect.
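The five-records-per-field encoding above is easy to mock up. This is only a sketch of the idea, with a hypothetical monotonic tick in place of the real timestamp fraction, mimicking the 60505..60509 stamps in the example:

```python
import itertools

# hypothetical monotonic tick, one per stored value
tick = itertools.count(60505)

def store_change(log, user, table, pk, field, value):
    """Encode one field change as five sequential (stamp, value) rows:
    user, table, pk, field, value -- the one-table, one-field scheme,
    with every value stored as a string."""
    for item in (user, table, str(pk), field, value):
        log.append((next(tick), item))

log = []
store_change(log, "brian", "XYZ", 99, "custname", "fred")
# log == [(60505, 'brian'), (60506, 'XYZ'), (60507, '99'),
#         (60508, 'custname'), (60509, 'fred')]
```

Decoding relies purely on position within each run of five rows, which is why the stamps must be strictly sequential within a transaction.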

Hmmm..."ThoughtExperiment Server version 1.0" need only have one table and one field-type (all values could be stored as strings). I can feel an article coming on.....

One difficulty with this is, as you state, that the transactions must be scanned to build the record. From your description, you propose storing a timestamp as a foreign key into a table for the attributes of a record - is that correct?

So:

Table XYZ
PK, custname (value is 10-OCT-2001#12:01:60509)

Value table
10-OCT-2001#12:01:60509 fred

... and one can build the record that way. If I've got this right, how are you finding previous values?
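If I have understood your two-table layout correctly, it dereferences like this. The dicts stand in for the two tables; the names and data are illustrative only:

```python
# Hypothetical two-table layout: the live table holds a timestamp where the
# value would normally sit, and that timestamp is a foreign key into a
# separate value table.
xyz = {99: {"custname": "2001-10-10T12:01:60509"}}  # pk -> field -> stamp
values = {"2001-10-10T12:01:60509": "fred"}         # stamp -> value

def read_field(pk, field):
    """Follow the timestamp foreign key to the current value."""
    return values[xyz[pk][field]]

print(read_field(99, "custname"))  # fred
```

Reading the current record is then a single dereference per field; it is the *previous* values that this layout does not obviously index, which is the question above.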

Alternatively, are you inserting into the same table (XYZ) with a datetimestamp+PK as the primary key? That's interesting in that it is very efficient even with current technology.
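That alternative is sketched below with SQLite standing in for whatever engine you are actually using (the table and column names are my own invention): history rows live in XYZ itself under a composite (pk, stamp) primary key, and the current value is just the row with the greatest stamp.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# (pk, stamp) as the composite primary key keeps all history in one table
con.execute("""CREATE TABLE XYZ (
    pk INTEGER, stamp TEXT, custname TEXT,
    PRIMARY KEY (pk, stamp))""")
con.executemany("INSERT INTO XYZ VALUES (?, ?, ?)", [
    (99, "2001-10-10T12:01", "fred"),
    (99, "2001-10-11T09:30", "frederick"),
])

# current value = the row with the greatest stamp for that pk;
# swap the "now" bound for any past time to read the record as-of then
row = con.execute("""SELECT custname FROM XYZ
    WHERE pk = 99 ORDER BY stamp DESC LIMIT 1""").fetchone()
print(row[0])  # frederick
```

Because (pk, stamp) is the primary key, both the current-value query and any as-of-time query are single index range scans, which is what makes the scheme efficient even today.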



>
>Robert,
>
>Just normalise your database to the time aspect, as if all entities were made unique to the date/time; when this is worked out, each attribute becomes an entity, and the data in the former attribute becomes the attribute itself. So, everything is shifted one "dimension". What am I saying?
>
>Well, that you are IMO completely right in these theoretical thoughts, and that it's only a very different solution, more optimised than what I said above, which has all the data constantly duplicated from the logical point of view. Yeah, logical, but not technical. Remember, where I say that all attributes become entities, each instance of the attribute's data will be in a new record, which is practically the same as your audit-trail records.
>Now one addition you can make :
>
>When my "approach" is taken as a basis (being sort of more logical), your keys will have the date/time in them, while your way allows for a physical record sequence, the newest at the bottom. Anyhow, what I'd like to say is that my approach allows for this beautiful time-based system, in which a COMPLETE DATABASE can be viewed at a certain moment in time, giving the 100% consistent situation at that time. E.g., eight years ago, when you were married to your ex-wife Marianne {g}, at that same time your neighbour of that time had one child only. This can all be solved at system level, without the developers knowing about it.
>Note that your audit trail can provide the same functionality, though intrinsically you have to collect all the transaction records to get to the actual point of the data. Maybe I'm too fuzzy (not wanting to be long-winded at the same time), but in the end you too can Seek at starting points, going to Recno(0) (etc.) to get the nearest records, once you put the appropriate indexes on the table(s).
>
>In the distant past I created a test system working in this way, and you won't believe your eyes what info will be in the database just because of the time-based phenomenon. Of course records are never deleted, and a logical Delete just adds another record (per attribute), namely "blank", or: not there.
>
>Excuse me for this fast, fuzzy writing, but I'm just an enthusiast for this kind of thing.
>BTW, right now our main app contains an audit trail of everything, indeed allowing for the reconstruction (or duplication, replication) of everything (roll forward). So of course you are right.
>
>Cheers,