Hi Fabio,
>
>
>AddScalarMember_Time = VFPImplementationC++Code(NObjects, ObjectClassComplexity, CPUCacheSize)
>roughly: 2x NObjects => 3x Time while everything still fits in the cache
>AddArrayMember_Time = AddScalarMember_Time / 2
>i.e. 2x to 4x faster, and the cache-to-memory transition starts at about 4x NObjects
>DestroyScalarMembers_Time = 3 x AddScalarMember_Time
>DestroyArrayMembers_Time = DestroyScalarMembers_Time / 4
>
Very interesting findings - thanks for sharing. I had tested object creation/destruction and had also realized the connection between class "footprint", number of classes and cache size, but I hadn't thought of also testing the penalty of containership.
Seems my gut-level instinct about putting behaviour/business objects into arrays instead of individual contained object references was sound from this perspective as well <g>.
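For readers who want to reproduce the shape of Fabio's measurement outside VFP, here is a minimal timing sketch in Python (an analogue only - the per-item classes, names and sizes are my own invention, and CPython's boxed integers blunt the cache effect that the original C++-level VFP numbers show). It contrasts one small object per record against parallel arrays, the same trade-off discussed above:

```python
import time

class Item:
    """One small object per record (analogue of scalar contained members)."""
    __slots__ = ("a", "b", "c")
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c

def time_per_item(n, builder):
    """Time builder(n) and return (seconds per item, built data)."""
    t0 = time.perf_counter()
    data = builder(n)
    dt = time.perf_counter() - t0
    return dt / n, data

def build_objects(n):
    # one object per record: n separate allocations, scattered in memory
    return [Item(i, i * 2, i * 3) for i in range(n)]

def build_arrays(n):
    # parallel arrays: three contiguous lists instead of n tiny objects
    return (list(range(n)),
            [i * 2 for i in range(n)],
            [i * 3 for i in range(n)])

if __name__ == "__main__":
    # growing n past the cache size is where the per-item cost jumps
    for n in (10_000, 100_000, 1_000_000):
        per_obj, _ = time_per_item(n, build_objects)
        per_arr, _ = time_per_item(n, build_arrays)
        print(f"n={n:>9}: per-item object={per_obj*1e9:6.1f} ns, "
              f"arrays={per_arr*1e9:6.1f} ns")
```

Watching the nanoseconds-per-item figure rather than total time makes the superlinear "2x NObjects => 3x Time" behaviour easy to spot once the working set outgrows the cache.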
>
>Honestly, to optimize VFP is not simple,
Yes and no: I usually optimize at the application level, where most of the runtime is lost to misapplied VFP - checking Rushmore's pros and cons, possible system bottlenecks, and hardware/OS settings is often enough. Still, the exact code positions have to be isolated...
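Isolating those code positions is a profiling job - in VFP one would typically reach for coverage logging, but the same idea can be sketched in Python with the standard cProfile/pstats modules (the `slow_part`/`fast_part` functions below are hypothetical stand-ins for application code):

```python
import cProfile
import io
import pstats

def slow_part():
    # stand-in for a hot spot; real code would be application logic
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def fast_part():
    # stand-in for cheap code that should NOT dominate the profile
    return sum(range(1000))

def application():
    slow_part()
    fast_part()

def top_hotspots(func, limit=5):
    """Run func under cProfile and return a report of the costliest
    call sites, sorted by cumulative time."""
    pr = cProfile.Profile()
    pr.enable()
    func()
    pr.disable()
    buf = io.StringIO()
    pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(limit)
    return buf.getvalue()

if __name__ == "__main__":
    print(top_hotspots(application))
```

The report pins the runtime to specific functions and line numbers, so the optimization effort goes where it actually pays off.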
Sometimes I add a few new patterns - for object creation/destruction, for instance, I replaced message/parameter objects created on the fly in an n-tier framework solution (more than a hundred constructs/destructs per user interaction, run when a nontrivial number of objects are already instantiated) with persistent object stacks, eliminating most of the overhead.
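The persistent-object-stack idea can be sketched as an object pool - again in Python rather than VFP, and with hypothetical `Message`/`MessagePool` names of my own; the point is only the acquire/release discipline replacing construct/destruct:

```python
class Message:
    """Hypothetical message/parameter object passed between tiers."""
    def __init__(self):
        self.reset()

    def reset(self):
        # clear all state so a recycled instance looks freshly created
        self.sender = None
        self.payload = None

class MessagePool:
    """Persistent stack of reusable objects: acquire() pops a recycled
    instance instead of constructing one, release() pushes it back
    instead of destroying it."""
    def __init__(self, factory):
        self._factory = factory   # creates a fresh object when the stack is empty
        self._stack = []

    def acquire(self):
        return self._stack.pop() if self._stack else self._factory()

    def release(self, obj):
        obj.reset()               # scrub state before reuse
        self._stack.append(obj)

# typical round trip: many acquire/release pairs per user interaction,
# but only the first few ever allocate
pool = MessagePool(Message)
m = pool.acquire()
m.sender, m.payload = "ui", {"action": "save"}
# ... dispatch through the tiers ...
pool.release(m)
```

With a hundred or so construct/destruct pairs per interaction, turning each pair into a stack pop/push is exactly where the overhead Fabio measured disappears.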
What kind of applications do you work on that need this fine-grained control or have to be optimized at that level?
regards
thomas