Every now and again I see a note on the OTN database forum, or Metalink, or the newsgroup asking why the time taken to run a query can vary so much between executions … even when the plan didn’t change (and the bind variables used as input didn’t change).
Just off the top of my head, here are a few possibilities:
- Sometimes the required data is perfectly cached, sometimes it isn’t
- Sometimes other processes are using the CPU or disks very aggressively
- The volume of data can change significantly over short periods of time
- A long-running transaction may sometimes force you to do extra work generating read-consistent versions of the data
- A small change in data can cause a huge variation in the number of times a subquery runs
- A plan involving a hash join or sort may execute optimally (entirely in memory) on one occasion and spill to disk as a multi-pass operation on another
- A parallel plan may get all the PX slaves it wants on one occasion and none of them on another
Any further suggestions are welcome – remember, though, we are assuming that (a) the plan really didn’t change, (b) the input bind values really didn’t change, (c) your environment didn’t change and (d) there isn’t a mad scientist (or novice DBA) in the background doing unusual things to a production system.