hi Nadya,
Yes,
Easiest = a table (generated, see the two scans)
TableName
TriggerType
FunctionName and params
There may be more than one entry per (TableName + TriggerType)
Select into array and Eval() your way through
a small step further = turn the table into code
One function per TableName + TriggerType
Write the code to the db
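The table-driven dispatch could be sketched in VFP along these lines (a rough, untested sketch; the table and field names -- RIFunctions, TableName, TriggerType, FunctionName -- follow the structure proposed below, and _ri_handler is the generic entry point suggested there):

```foxpro
* Generic trigger entry point; e.g. the UPDATE trigger is _ri_handler("UPDATE")
FUNCTION _ri_handler
LPARAMETERS tcTriggerType

LOCAL lcTable, llOk, lnI
LOCAL ARRAY laFuncs[1]
lcTable = ALIAS()        && table whose trigger fired
llOk    = .T.

* Collect every matching entry -- there may be more than one
* per (TableName + TriggerType) combination
SELECT FunctionName ;
	FROM RIFunctions ;
	WHERE UPPER(ALLTRIM(TableName)) == UPPER(m.lcTable) ;
	AND UPPER(ALLTRIM(TriggerType)) == UPPER(m.tcTriggerType) ;
	INTO ARRAY laFuncs

* Eval() your way through the list; stop at the first failure
FOR lnI = 1 TO _TALLY
	llOk = EVALUATE(ALLTRIM(laFuncs[m.lnI]) + '([' + m.tcTriggerType + '])')
	IF NOT m.llOk
		EXIT
	ENDIF
ENDFOR

RETURN m.llOk
ENDFUNC
```

Turning the table into code later just means generating one such function body per (TableName + TriggerType) row and writing it into the SP, so the EVALUATE() step disappears.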
______
>Hi Gregory,
>
>I think I got your ideas now, and I could not sleep (2:22 am) before outlining them here <g>
>
>Have a table called RIFunctions with this structure:
>TableName - C
>FunctionName - C
>FunctionCode - M (this field is optional; here I need a little bit more input from you).
>
>All triggers would be the same, e.g. _ri_handler("UPDATE"), but the code would be something like
>
>start of the code...
>begin transaction
>if seek(lcTableName,"RIFunctions","TableName")
> plError = evaluate(RiFunctions.FunctionName + '([' + m.lcTriggerType+'])')
>** We could use execscript(RIFunctions.FunctionCode) here instead, but I'm not sure it would work in an SP
>endif
>..
>rest of the code
>
>The function will be named something like
>tablename + '_' + trigger
>
>Here is a template of the trigger function on example of Trans table
>
>Trans_Trigger
>lparameter tcTriggerType
>
>local luOldKey
>luOldKey = oldval('cTrans_Pk')
>
>do case
> case m.tcTriggerType = "UPDATE"
>
> if cTrans_Pk <> m.luOldKey
> plError = Restrict_Update('Trans','ChildTable1',rest of params) ;
> and Restrict_Update(...) ;
> and ...
> endif
>
> case m.tcTriggerType = "DELETE"
> plError = Restrict_Delete(....) and ...
>
> case m.tcTriggerType = "INSERT"
> * test FK if they changed
>endcase
>return m.plError
>
>
>So the code would be compact and would run faster than it does now. The only question I have with this implementation idea is whether we want to put the code in the SP or leave it in the memo field of the RIFunctions table and execute it on the fly. I think the first approach would be faster, but we would end up with a long SP (though much shorter than the native RI code).
>
>So, those are my thoughts out loud. It would take at least one day to implement...
>
>>>>>It is actually becoming more and more interesting. I made a bunch of other changes to fix bugs in cascading deletes/updates. Now I hope it works correctly...
>>>>
>>>>Hope for you too ;-)
>>>>
>>>>You'll have to test all scenarios, though, to be sure.
>>>
>>>I tested a couple, but now I'm trying to understand how it should work.
>>>
>>>If we have a cascaded update, what should our logic be for our table as a child? So far I put in a condition that we check for our table as a child only if it is the first level of trigger execution, i.e. if our trigger was called as a result of a cascading update on a parent, I do not run the second SCAN. However, I'm not 100% certain I'm correct.
>>>
>>>Can somebody with a good understanding of how triggers should behave explain to me all the possible scenarios?
>>>
>>>Thanks in advance.
>>
>>The trigger level is imo irrelevant here
>>
>>if there is a cascaded update, the child update fires
>>
>>The child would normally check if there's a parent with that key.
>>
>>Doing this would give an illegal recursion. It's also not necessary to test it if we trust the code of the parent
>>
>>There's a simple test: do not check any relations with the table one level above, in case of a cascade
>>
>>Suppose you keep an array, say CascadeInitiator[]
>>Each time a trigger fires
>>
>>dime __t__CascadeInitiator[_triggerlevel]
>>__t__CascadeInitiator[_triggerlevel] = ''
>>
>>
>>if you are doing a cascaded update
>>
>>__t__CascadeInitiator[_TriggerLevel] = m.thistable
>>replace .....
>>
>>
>>All relations with __t__CascadeInitiator[_TriggerLevel - 1] should not be tested (error 1887)
>>
>>think that's about it
>>
>>To be sure: use standard RI, set up a cascade delete and a cascade update, generate the code, and look at it
Gregory