The old chestnut of “optimal block size” came up on OTN again a few weeks ago, with someone asking for advice on how to run some tests to decide on the optimal block size for a database. The correct answer to this question is that you don’t: you assume you are going to use the default size for your platform, and then think about whether your application does any very specific jobs that might gain some worthwhile benefit from a non-default size.
Nevertheless, the OP came back some time later with a few results suggesting that some of his tests ran significantly faster with a 4KB block size than with 8KB and 16KB block sizes.
But there’s a problem with the conclusion. If you examine the results carefully, and think about what type of work must happen in the tests, you realise that this particular test was not about the block size – it was about the network and the client program. (I haven’t included a link to the posting where I explained this – it’s just a little later in the same thread – so you have the option of working out why the test is flawed before you read my comments about it.)
Update 18th Aug 2010
The investigation continues – with the OP comparing the results of using a table with a single 2000 byte column against a table with many columns adding up to a similar total size. Again, though, the anomaly in timing he is chasing seems to be about network traffic time, NOT about database block size.
(I’ve only sent one reply to this thread at the moment, but the OP has been good at supplying extra data in the past, so the discussion may evolve to produce further interesting information.)
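To see why the network can swamp any block-size effect in a test like this, here’s a minimal back-of-envelope sketch. The row count, row size and arraysize figures below are illustrative assumptions, not the OP’s actual numbers; the only Oracle-specific fact used is that a client like SQL*Plus fetches rows in batches of `arraysize` (default 15), and each fetch costs at least one network round trip regardless of the database block size.

```python
import math

def fetch_round_trips(num_rows: int, arraysize: int) -> int:
    """Client fetch calls needed to pull num_rows at the given arraysize.
    Each fetch is at least one network round trip; the server assembles
    the rows before sending them, so the block size doesn't appear here."""
    return math.ceil(num_rows / arraysize)

def network_time_seconds(num_rows: int, arraysize: int,
                         round_trip_ms: float) -> float:
    """Lower bound on time spent just on fetch round trips."""
    return fetch_round_trips(num_rows, arraysize) * round_trip_ms / 1000.0

# Hypothetical test: 100,000 rows of ~2000 bytes over a 1 ms LAN link.
trips_default = fetch_round_trips(100_000, 15)    # SQL*Plus default arraysize
trips_tuned = fetch_round_trips(100_000, 500)
print(trips_default, trips_tuned)                  # 6667 vs 200 round trips
print(network_time_seconds(100_000, 15, 1.0))      # ~6.7 seconds of pure latency
```

With these (assumed) figures, the default arraysize spends several seconds on round-trip latency alone – a cost that dwarfs typical single-block versus multi-block read differences, and one that doesn’t change at all when you rebuild the database with a different block size.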