>Hi Dragan,
>
>I updated the wiki to make clearer what part I had added to David's great posting.
>
>>I assume the calculus involved there, regarding the number of keys per node, is only approximate
>
>Yes, of course. The number of keys per node depends on the size of the compressed keys as well as the number of records in the table. It's just an example to explain the different results and what changed from previous versions.
I just looked at the .cdx of a table I packed last night. On the nodes for tags on name, it seems to keep about 13% free space; on the guid (16 hex chars) tag it is about the same for most of the blocks, except the last few, which are only about 50% full, and that is what I really expected to see after reading your (and David's; too bad English doesn't distinguish your-singular from your-plural) article on the Wiki.
What I didn't know was how the old version of the indexing engine behaved on page splits. I expected a 50-50 split, but it seems it just moved the spill into the new node. I wonder what it did if the same full page had to split again: did it move the new spill into the next page, or did it split it again? IOW (using the numbers from your example), if a page held 20 keys and we added another one, it went 20-1. Now if we inserted yet another key between those 20, did it go 20-1-1 or 20-2?

Either way, it may have done less work on the first split, but it had more work to do later. If it now splits 11-10, and then 12-10 or 11-11, that means fewer splits in the long run, and better speed. I don't really care about the size; disks are cheap nowadays... but hauling more megabytes down the cables in a file-server environment may hurt. You cross the bridge for free, but may pay to cross the river.
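Out of curiosity I sketched the two policies in Python to see how the split counts compare for random insertions. This is only a toy model, not VFP's actual index code: the page capacity of 20 comes from the example above, the "spill" policy moves just the overflowing key to a new page, and "even" splits the page roughly in half.

```python
import random

PAGE_CAPACITY = 20  # keys per page, taken from the example in the article

def insert_all(keys, policy):
    """Insert keys into sorted pages; return (split_count, page_count)."""
    pages = [[]]
    splits = 0
    for key in keys:
        # pick the first page whose largest key >= key, else the last page
        # (a crude stand-in for a real B-tree descent)
        i = next((j for j, p in enumerate(pages) if p and p[-1] >= key),
                 len(pages) - 1)
        page = pages[i]
        page.append(key)
        page.sort()
        if len(page) > PAGE_CAPACITY:
            splits += 1
            if policy == "spill":
                # old behavior (as observed): push only the spilled key
                # into a brand-new page, leaving the old page full
                pages.insert(i + 1, [page.pop()])
            else:
                # new behavior: split roughly 50/50 (11-10 for 21 keys)
                half = len(page) // 2
                pages.insert(i + 1, page[half:])
                del page[half:]
    return splits, len(pages)

random.seed(1)
keys = random.sample(range(10000), 2000)
print("spill:", insert_all(keys, "spill"))
print("even: ", insert_all(keys, "even"))
```

With random key order, the spill policy keeps every old page packed full, so the very next insert into that range forces another split; the even split leaves slack on both pages, so it ends up doing far fewer splits, which is the point about speed above.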