Dragan,
Splitting a node in a B+tree is a very expensive operation. B-trees were developed because the expense of keeping them balanced is lower than it is for a binary tree.
Reducing the number of node splits at the cost of increasing disk space would seem to be a reasonable tradeoff.
>What I didn't know was how the old version of the indexing engine behaved on page splits - I expected a 50-50 split, but it seems it just moved the spill into the new node. I wonder what it did if the same full page had to split again - did it move the new spill into the next page, or did it just split again? IOW (using the numbers from your example), if a page held 20 keys and we added another one, it went 20-1. Now if we introduced another one between these 20, did it go 20-1-1 or 20-2?
>
>Either way, it may have done less work on the first split, but had more work to do later. If it now splits 11-10, and then 12-10 or 11-11, that involves fewer splits in the long run and gives us better speed. I don't really care about the size - disks are cheap nowadays... but then hauling more megs down the cables in a file-server environment may hurt. You cross the bridge for free, but may pay to cross the river.
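The "fewer splits in the long run" point is easy to check with a toy simulation. This is only a sketch, not any real engine's code: it models leaf pages as sorted lists, ignores parent-node updates, uses a made-up capacity of 20 to match the numbers above, and assumes the 20-1-1 variant for repeated overflow (since we don't know which one the old engine actually did).

```python
import bisect
import random

CAPACITY = 20  # keys per leaf page, matching the example above (illustrative)

def insert(leaves, key, policy):
    """Insert key into a sorted list of sorted leaves; return 1 on a split."""
    # Route to the last leaf whose smallest key is <= key (leaf 0 catches all).
    i = 0
    for j, leaf in enumerate(leaves):
        if leaf[0] <= key:
            i = j
    leaf = leaves[i]
    bisect.insort(leaf, key)
    if len(leaf) <= CAPACITY:
        return 0
    if policy == "half":
        # 50-50 split: 21 keys become 11-10.
        mid = (len(leaf) + 1) // 2
        leaves.insert(i + 1, leaf[mid:])
        del leaf[mid:]
    else:
        # Spill split: keep the page full and push only the overflow key into
        # a brand-new page (20-1; repeated overflow goes 20-1-1 in this model).
        leaves.insert(i + 1, [leaf.pop()])
    return 1

def run(policy, n=2000, seed=42):
    """Insert n uniform random keys; return (split count, leaf count)."""
    random.seed(seed)
    leaves = [[0.0]]  # one sentinel leaf; generated keys fall in (0, 1)
    splits = sum(insert(leaves, random.random(), policy) for _ in range(n))
    assert sum(len(lf) for lf in leaves) == n + 1  # no keys lost or duplicated
    return splits, len(leaves)
```

With uniform random keys the spill policy keeps hitting pages that were left completely full, so it splits far more often than the 50-50 policy - which is exactly the "more work later" argument above. The spill approach pays off mainly for strictly ascending inserts, where it leaves every page 100% full.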