When writing about Oracle, people often use expressions whose meanings aren’t necessarily well-known to the less experienced user, so I’ve started building a glossary of commonly used Oracle terms. It’s going to take some time to build and it will keep growing as time passes. To make it easy to get to the page, I’ve added a special link near the top of the options column (in the “Special Links” list).
If you have any suggestions for terms feel free to add them in the comments. Please stick to one term per comment so that I can delete the comment after I’ve added the term to the glossary. As a guideline, you should be able to fit the term into a sentence like: “what is an X” rather than “how does X work.”
Before you look here, though, you might want to look at the Glossary in the Oracle manual (currently linked to the 11.2 documentation set).
a.k.a.: “also known as”
Analyze: A deprecated command for collecting statistics about objects, replaced by the package dbms_stats. However it still has some uses outside statistics collection, such as identifying chained (q.v.) or migrated (q.v.) rows (analyze table XXX list chained rows – see scripts utlchain.sql and utlchn1.sql in the $ORACLE_HOME/rdbms/admin directory), or validating the self-consistency of structures (analyze table XXX validate structure cascade). There is an anomaly with statistics collection: if you use the analyze command to collect statistics on a table it will populate the chain_cnt column and the optimizer will use the chain_cnt in the cost calculation – but dbms_stats doesn’t generate a value for the chain_cnt.
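A sketch of the commands mentioned above (the table name t1 is purely illustrative, and the CHAINED_ROWS holding table must be created first by running the script):

```sql
-- Create the CHAINED_ROWS holding table, then list chained/migrated rows:
-- @?/rdbms/admin/utlchain.sql
analyze table t1 list chained rows into chained_rows;

select  head_rowid
from    chained_rows
where   table_name = 'T1';

-- The anomaly: analyze populates chain_cnt, dbms_stats does not.
analyze table t1 compute statistics;
select chain_cnt from user_tables where table_name = 'T1';
```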
Argument: a value passed in to a function or procedure (see also parameter)
Block cleanout: See this generic note on various uses of the term “clean” in Oracle.
Buffer Cache Hit Ratio: (a.k.a. BCHR). A number that attempts to compare the number of block visits made to the buffer cache with the number of blocks that needed to be read into the cache from disk. (See Hit Ratio). Several formulae have been suggested for this – but they usually fail to cater for the many varied mechanisms that Oracle uses for buffer visits and disk reads. Most formulae are based on versions originally promoted in Oracle 6.
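For reference, here is one of the commonly quoted (and flawed) versions of the formula, based on three counters from v$sysstat – treat the result with the scepticism described above:

```sql
select
        round(
                100 * (1 - phy.value / (db.value + con.value)),
                2
        )       buffer_cache_hit_ratio
from
        v$sysstat       db,
        v$sysstat       con,
        v$sysstat       phy
where
        db.name  = 'db block gets'
and     con.name = 'consistent gets'
and     phy.name = 'physical reads'
;
```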
ca.: “approximately” (Latin: circa)
Cache hit ratio: (a.k.a. buffer hit ratio, buffer cache hit ratio, data buffer hit ratio, data cache buffer hit ratio; also a.k.a. CHR, BHR, BCHR, DBHR, DCBHR): See Buffer Cache Hit Ratio.
Chained rows (see also Migrated Rows): There are two classes of chained rows – rows that are simply too long to fit in a single block, and rows with more than 255 columns.
If you have rows that are too big to fit inside a single block, there is little you can do about it. This is one of the few valid reasons for thinking about using a different block size for part of your database. However if row chaining is giving you a performance problem you need to consider (a) whether changing the column order to put popular columns first will make a difference and (b) whether the table should be split vertically into two (or more) tables – with a one-to-one link – to move the longest part of the row away from the rest of the data.
Oracle uses a single byte for the column count, so a row with more than 255 columns is split up into separate row pieces of up to 255 columns each, with a forward pointer (called nrid, for next rowid) from one piece to the next. Some of the performance statistics about “continued” row fetches, therefore, may be misleading as they may suggest you are visiting extra blocks to follow a long row when you are actually revisiting the same block to get the next set of columns. As a general guideline – tables with more than 255 columns can be seen as a faint suggestion that your database has a design flaw, and should always be viewed with a little suspicion.
There are various ways to distinguish between chained rows and migrated rows – a simple option is to use the ‘analyze table XXX list chained rows’ command to get a list of the rowids of rows that are chained or migrated. If you then create a new table from the listed rows, then any rows which are still chained in the new table were originally chained, the rest were originally migrated.
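A sketch of that strategy (the table names are illustrative, and utlchain.sql must be run first to create the holding table):

```sql
analyze table t1 list chained rows into chained_rows;

-- Copy the listed rows into a new table ...
create table t1_copy as
select  t1.*
from    t1, chained_rows c
where   c.table_name = 'T1'
and     t1.rowid     = c.head_rowid;

-- ... and repeat the exercise: rows still listed as chained in the
-- copy were genuinely chained; the rest were merely migrated.
analyze table t1_copy list chained rows into chained_rows;
```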
Circular References: See References, Circular.
Clean block: See this generic note on various uses of the term “clean” in Oracle.
Commit cleanout: See this generic note on various uses of the term “clean” in Oracle.
consistent get: (a.k.a. CR get) See this note
Cost: The “cost” column of an execution plan is the optimizer’s way of expressing the amount of time it will take to complete the query. Unfortunately there are defects and deficiencies in the optimizer’s cost model that mean the calculations may fail to produce a reasonable estimate. Because of this it is possible for two queries to have the same cost but hugely different execution times; similarly you can have a “low-cost” query that runs for ages and a “high-cost” query that completes almost instantly. Note also that the calculation and interpretation of “cost” are harder to deal with when the optimizer is using “first_k_rows (fkr)” optimisation. Technically, though, “cost is time”.
CPU Costing: See System Statistics.
Cursor: Notes to appear by 21-08-2001 in response to a comment from Vladimir on 11th.
db block get: (a.k.a. current get) See this note
e.g.: “for example” (Latin: exempli gratia ). Many people use e.g. when they really mean i.e.
Delayed block cleanout: See this generic note on various uses of the term “clean” in Oracle.
Delayed logging block cleanout: See this generic note on various uses of the term “clean” in Oracle.
Dequeue (verb) As a verb, “to dequeue” is typically used in Oracle to mean “to read (and possibly delete) an item from a table that represents a queue”. This term is used particularly for Oracle’s AQ (advanced queueing) option. (See Enqueue (verb)).
Dirty block: See this generic note on various uses of the term “clean” in Oracle.
Enqueue (noun) In Oracle “an enqueue” is the same thing as “a lock”. (See Lock).
Enqueue (verb) As a verb, “to enqueue” is typically used in Oracle to mean “to add an item to a table that represents a queue”. This term is used particularly for Oracle’s AQ (advanced queueing) option. Queues are usually Index Organized Tables (IOTs) as this makes it relatively straightforward to ensure that things that go on to a queue are taken off in the same order. (See dequeue).
Glossary: Alphabetical list of terms peculiar to a field of knowledge with definitions or explanations.
Granule: The unit of memory handled by Oracle’s automatic memory management code to shift memory between the different caches and pools. For small SGAs (smaller than 256MB) the granule size is 4MB, for larger SGAs the granule size is 16MB. For various Windows platforms the granule size is 8MB. You can check this for your system by querying dynamic performance view v$sga_dynamic_components.
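The check mentioned above is a simple query:

```sql
select  component, granule_size
from    v$sga_dynamic_components;
```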
Hit Ratio (1): A number generated (typically) by dividing one potentially useful statistic by another, thus obscuring the meaning and hiding the scale. If you are aware of the cause of a performance problem a hit ratio will add no value to your understanding; if you are not aware of the cause of a performance problem then a hit ratio may encourage you to start looking in the wrong place for the cause. If you don’t have any performance problems then a hit ratio close to 100% (for suitable values of the word “close”) can be used to keep ignorant people happy.
Hit Ratio (2): A generic method of performance analysis that should have been deprecated at Oracle 7 (when wait events were introduced), and desupported by Oracle 8.
i.e.: “that is”, “in other words” (Latin: id est). Many people use i.e. when they really mean e.g.
Interested Transaction List: See ITL.
ITL: Abbreviation for Interested Transaction List. There is a little list stored near the top of each data block in a table or index identifying transactions that have recently modified the contents of that block. When a transaction wants to change some rows in a block it has to acquire one of the ITL entries in that block and write some identifying information into that entry. (The row number of the entry the transaction has picked in the ITL is then used as a marker (the lock byte q.v.) for any rows that the transaction then changes – there is an old article here that shows extracts from a few block dumps showing the relevant details)
The transaction needs to acquire only one ITL entry in the block no matter how many rows in the block it changes. A single transaction may modify many blocks at once and it has to own an ITL entry in each of those blocks. If all the ITL entries for a block are currently in use then a transaction may add a new entry to the list – if there is space available to do so and if (for earlier versions of Oracle) the size of the ITL has not been limited by the setting for maxtrans. (From 10g, Oracle has started to ignore the setting of maxtrans, and this can cause some space wastage problems in indexes). The initial size of the ITL is set by initrans – with a minimum setting that is dependent on the version of Oracle and the type of the object.
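A sketch of setting initrans at creation or rebuild time (object names and values are purely illustrative – the appropriate value depends on the expected number of concurrent transactions per block):

```sql
-- Reserve five ITL entries per block at creation time:
create table t1 (n1 number) initrans 5;

-- An index can be given a larger initrans on a rebuild:
alter index t1_i1 rebuild initrans 10;
```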
LIO: (a.k.a Logical I/O) – a visit to a block that is in the buffer cache, initiated by a search through a cache buffers chain while holding a latch. (See also PIO). For further reading about logical I/Os, see this note.
Lock (noun): Locks are also known as “enqueues”. A memory structure that represents the interest a session has in some form of database object. If the locks are about objects in the library cache they are referred to as KGL locks (or sometimes breakable parse locks), in other cases Oracle uses a memory structure known as an “enqueue resource” (rows from v$enqueue_resource) to represent an item such as a table, a tablespace, a parallel execution slave, and then allows sessions to attach lock structures (rows from v$enqueue) to them to represent their interest. There are three chains of locks (holders, converters, and waiters) associated with enqueue resources, and sessions have to join the end of the correct chain (i.e. join the queue) when they attempt to create or convert a lock on an object. There are six (or seven) lock modes, and the rules about which chain a session joins are dictated by the precedence of the different lock modes so, for example, if one session has been allowed to place an exclusive lock in the holders queue for an object then every other session will have to join the end of the waiters queue until the first session removes the exclusive lock from the holders queue. (To be continued…). In the case of library cache locks, the locks are represented by rows from x$kgllk, and are attached directly to the library cache object.
Lock (verb): e.g. “to lock a table”; create (or find) an enqueue resource representing an object and attach an enqueue to it that identifies your session as having an interest in an object.
Lock Byte: When a transaction is locking a row (which may be an index entry), it sets a byte that is stored with the row. The value identifies which entry in the ITL (q.v.) has been taken by the transaction. This strategy allows Oracle to lock many rows in a single block from a single ITL entry. There is a counter on the associated ITL entry showing the number of rows locked by the ITL. When a transaction commits, the “commit cleanout” mechanism (q.v.) will update the ITL with the commit SCN on some of the blocks modified by the transaction but will not clear the lock bytes from the rows – these are left for the next transaction that changes the block (delayed logging block cleanout q.v.)
MBRC: multiblock read count. Ideally this term should be used to refer to the System Statistic (q.v.) that represents the average number of blocks achieved (or expected) in multiblock reads (of type “db file scattered read”). Historically this abbreviation has been used as a shorthand for the parameter db_file_multiblock_read_count but to avoid confusion it would be better to refer to the latter as the dbf_mbrc (lower case) in future.
Migrated Rows (see also Chained Rows): If you update a row in a way that makes the total row length increase, Oracle may find that there isn’t enough space in the current block to hold the new version of the row – even though the row may be short enough to fit in a single block. If this happens then Oracle will migrate the row – moving the entire row to a new block, leaving a “forwarding address” (i.e. rowid) in the old block to point to the new location. If the row is updated later in the new block it is possible that the update allows the row to move back to the original block; conversely it is possible that the update will grow the row even more and cause Oracle to migrate the row to another block. If such a “double migration” occurs Oracle doesn’t leave a second forwarding address, it simply updates the forwarding address in the original block to point to the latest location of the row.
There is a statistic chain_cnt on the views relating to table statistics – but this does not distinguish between migrated rows and chained rows (q.v.); and it is only populated by calls to the analyze command (q.v.). The run-time statistic “table fetch continued row” will give you some idea of how often you follow a forwarding address (after an indexed access to a migrated row) but, again, Oracle does not distinguish between chained rows and migrated rows when reporting “continued” fetches and the way this counter has been used for chained rows (in particular) has varied dramatically over the versions.
Row migration can be difficult to avoid in some cases, but usually occurs when the pctfree for a table has not been set to allow for the growth pattern of the rows. Watch out particularly for tables where the rows are inserted as short rows and then have extra data added over time. This can happen in batch jobs which run two or three passes against a table, for example to merge data from different sources into a single table. There is a bug (fixed in the latest releases only) that can turn this error into a performance disaster if you happen to be using ASSM (automatic segment space management) when the blocksize is larger than 8KB.
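A sketch of the typical fix (names and figures are purely illustrative – the appropriate pctfree depends on how much the rows grow after insertion):

```sql
-- Leave more free space in each block for future row growth:
alter table t1 pctfree 30;

-- Rebuild the table to apply the new pctfree to existing data
-- (this also "un-migrates" previously migrated rows) ...
alter table t1 move;

-- ... then rebuild any indexes invalidated by the move:
alter index t1_i1 rebuild;
```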
N.B.: “note well!” (Latin: nota bene)
Open Cursors: See cursors.
Parameter (1): a place holder in the declaration of a function or procedure describing the types of the expected inputs. (See argument)
Pin: Most of the work that Oracle does revolves around finding and using chunks of memory. When a process has located a memory chunk that it expects to use more than once inside a single database call it will attach a pin structure to that chunk of memory to ensure that the chunk cannot be wiped out by another process. The pin structure can also act as a “shortcut” to the chunk the next time Oracle wants to use it. There are two well-known examples of pinning:
- Buffer cache pins: x$kcbbf – the buffer handles; if your process has pinned a buffer and wants to re-visit it then it doesn’t have to access the cache buffers chains latch to search for the buffer – it knows where it is and knows that it will be where it’s supposed to be. See this note on my website for more on pinning buffers.
- Library cache pins: x$kglpn – while your session is executing a child cursor it will have a pin attached to it. (There will also be a library cache lock (x$kgllk) attached – which is a longer-term structure representing an “open cursor”, often exposed in view v$open_cursor). Oracle is phasing out library cache pins in 10.2 in favour of a latch-like counter mechanism known as a mutex. Historically you could set parameter cursor_space_for_time = true to stop a session from detaching library cache pins from open cursors, removing them only when it closed the cursor (i.e. released the library cache lock), and this option could – by reducing library cache and shared pool latch contention – improve performance for well-written applications that executed the same statements extremely frequently at high levels of concurrency. There is a note (somewhere) in the manuals that with the advent of mutexes this parameter no longer needs to be used – but people do run into mutex problems occasionally, most notably complaining about waits on “cursor: pin S wait on X”.
PIO: (a.k.a. Physical I/O). A call by Oracle to the operating system to copy a block into the buffer cache. Common usage is a little variable and lacking in precision; depending on context you may notice that the term is sometimes used to describe a single read request for multiple blocks – but it is usually used to refer to the number of blocks read.
Q.E.D.: “which was to be demonstrated” (Latin: quod erat demonstrandum) – also QED (not to be confused with Quantum Electrodynamics and the book of the same title by Richard Feynman)
q.v.: “which see” (Latin: quod vide)
Rollback (1): (see undo)
Rollback (2): the command that directs Oracle that the changes made by the currently active transaction should be reversed out. When you issue a “rollback;” your session will walk backwards through the chain of undo records (q.v.) it has generated and use each record in turn to reverse out a prior “db block change”; it will also mark each undo record in turn as “user applied”. Every block change it makes as it goes will generate new redo records, so the process of rolling back can be nearly as expensive as the original work done by the transaction and you should avoid writing code that assumes it will temporarily change the data with the intention of ultimately concealing the changes by rolling back. The final step in a “rollback;” is, in fact, a “commit;” with a synch write call to the log writer (lgwr).
References, Circular: See Circular References.
Row Chaining: see Chained Rows
Row Directory: Data blocks hold a variable number of “rows” of data (I put the word “rows” in quotes because you might not think of undo records in undo blocks or entries in index leaf or branch blocks as ordinary rows) and in the top section of each block – after the common block header and ITL sections – there is a list of entries telling Oracle where in the block each row starts. This list is usually referred to as the “row directory”.
There are variations on a theme. Blocks from heap tables and clusters also contain a “table directory”; for a cluster block this tells Oracle which rows in the row directory belong to each table in the cluster; in principle the table directory is doing the same for the simple heap table except in that case the table directory only holds one entry and all the rows in the row directory belong to that one table. Blocks from a compressed index include a “prefix directory”. Blocks from a compressed table include a “token directory”. In all cases the concept is the same: a list of two-byte entries pointing to a piece of useful data. (See also: this blog entry)
Row Migration: see Migrated Rows
SCN: abbreviation for System Change Number or System Commit Number (neither term is absolutely perfect). The SCN acts as a type of clock mechanism for a single instance – but it does not have a regular “tick”, instead it is a simple counter that goes up by one every time a session issues a commit (or rollback). But there are other occasions when the SCN can change: instances involved in distributed transactions or queries will synchronise their SCNs (upwards) at the end of each dialogue; similarly, the instances in a RAC setup synchronise their SCNs extremely frequently. There is also a block-related cause for the SCN to change – each change to a block increments the block’s “change number” which consists of the current SCN combined with a single byte counter: if you make more than 255 changes in a period when no commits occur then the SCN will go up by one and the counter byte on that block will set itself back to one (the value zero is reserved for “logically corrupt” blocks – reported by Oracle in the alert log under error ORA-01578).
A session takes note of the current SCN at various critical moments (when a transaction starts, when a transaction commits, when a query starts) and the current SCN is written into the database in various places (control files, data file headers, a couple of places in data block headers, ITL entries). A session is constantly comparing the current SCN, or one of its remembered SCNs, with the SCNs it sees stored in the database to ensure that it is looking at a safe, correct, and appropriate version of the data.
You can see the current SCN by calling function dbms_flashback.get_system_change_number, or querying current_scn from v$database. Any query against v$database, however, increments the SCN so the query for current_scn will increment it before displaying it.
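For example:

```sql
select dbms_flashback.get_system_change_number from dual;

select current_scn from v$database;
```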
Segment Header Block: (also just Segment Header). Historically the first block of the first extent of a segment – holding critical information about the size of the segment, number and location of extents, and control information about free space in the segment. In newer versions of Oracle there may be a few space management blocks before the segment header block – this variation is a detail of the implementation of ASSM (automatic segment space management) tablespaces. The view dba_extents will give you the file and block id of the first block of the first extent; the view dba_segments will give you the position of the segment header in that extent. Undo segments introduce a special case – the segment header block of an undo segment holds some extra information (see also Transaction Table) that is specific to the operation of Oracle’s undo mechanism.
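A sketch of locating a segment header through those two views (the segment name is illustrative):

```sql
-- First block of the first extent:
select  file_id, block_id
from    dba_extents
where   segment_name = 'T1'
and     extent_id    = 0;

-- Position of the segment header itself:
select  header_file, header_block
from    dba_segments
where   segment_name = 'T1';
```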
System Statistics: (also referred to as “CPU costing”) a set of numbers modelling the performance of the hardware. (Not data-related statistics for objects in the SYSTEM tablespace). The statistics attempt to model the average disk speed, the CPU speed, and the ability of the machine to cope with parallel execution. The numbers are:
- cpuspeed: speed of the CPU in millions of some “Oracle benchmark operation” per second
- cpuspeednw: as above using a “no workload” method – introduced in 10g.
- sreadtim: the average read time for a single block read (“db file sequential read”) in milliseconds. If gathered this is rounded to the nearest millisecond, if you set it using dbms_stats.set_system_stats() you are not limited to whole numbers.
- mreadtim: the average read time for a multiblock read (“db file scattered read”) in milliseconds. If gathered this is rounded to the nearest millisecond, if you set it using dbms_stats.set_system_stats() you are not limited to whole numbers. The assumed size of the read is given by the statistic MBRC.
- MBRC: the average size of a multiblock read (“db file scattered read”) actually achieved during the measurement interval, rounded to the nearest whole number of blocks.
- maxthr: maximum throughput in bytes per unit time achieved by a session
- slavethr: average throughput in bytes per unit time achieved by a parallel execution slave. The CBO will limit the adjustment made in its arithmetic for parallel execution by maxthr/slavethr.
- ioseektim: introduced in 10g – the average disk seek time in milliseconds; defaults to 10 milliseconds
- iotfrspeed: introduced in 10g – the disk data transfer rate in bytes per millisecond – defaults to 4KB (4096 bytes) per millisecond. If you do not have values for the other I/O stats (or if they are inconsistent – e.g. mreadtim < sreadtim) then 10g will synthesize values from these transfer rates and the db_file_multiblock_read_count.
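A sketch of gathering or setting system statistics through dbms_stats (the figures supplied to set_system_stats() are purely illustrative):

```sql
-- Gather while a representative workload is running:
begin
        dbms_stats.gather_system_stats('start');
        -- ... leave the workload running for a while ...
        dbms_stats.gather_system_stats('stop');
end;
/

-- Or set individual values explicitly:
begin
        dbms_stats.set_system_stats('MBRC',     8);
        dbms_stats.set_system_stats('sreadtim', 5.2);
        dbms_stats.set_system_stats('mreadtim', 12.8);
end;
/

-- The stored values can be inspected in sys.aux_stats$:
select sname, pname, pval1 from sys.aux_stats$;
```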
tkprof: Tool for summarising the contents of an extended (10046) trace file. Name possibly derived from Trace Kernel Profiler. Produces a tabular presentation of work done and time lost in wait events. Also reports “Misses in library cache during parse:” and “Misses in library cache during execute:” which tell you how many times the statement had to be re-optimized in response to parse calls and execute calls respectively (these will correspond to the statistic “parse count (hard)”). The column headings in the “work done” report are:
count: the number of times the call was made
cpu: the CPU time in seconds recorded for this line
elapsed: the elapsed time in seconds recorded for this line. You may find cases where the elapsed time is less than the CPU time; this is usually due to “granularity” errors (i.e. rounding errors due to different methods of accounting) and appears most frequently when a large number of rapid operations take place.
disk: number of disk blocks read from disk. (This includes reads from the temporary tablespace, but not writes to the temporary tablespace).
query: number of buffered blocks accessed by “consistent gets” (q.v.)
current: number of buffered blocks accessed by “db block gets” (q.v.)
rows: rows generated.
Transaction Table: Each undo segment header block (q.v.) has an area in it called the transaction table. When a transaction starts, it picks an undo segment to use then picks the oldest free entry (called a transaction table “slot”) from that segment header’s transaction table. This slot is the identifying marker for a transaction, and information about it is published in two ways – first as the transaction ID in v$transaction, and secondly in v$lock where a TX lock is reported in mode 6 (exclusive) by the session running the transaction. Both locations identify the transaction by the triple: (undo segment number, slot number, sequence number). (The number of slots in a transaction table is limited so each one has a sequence number stamped on it, and each time a slot is re-used the sequence number goes up by one). As a transaction starts, the “start SCN” is one of the items written to the transaction table slot; when the transaction commits this is changed to the “commit SCN“.
Transaction Table Slot: a single row in a Transaction Table (q.v.) representing a single transaction. Since there are only a limited number of rows (slots) in a transaction table, part of the row data includes a “usage counter”, known as the “sequence”. A transaction id is then the combination of the undo segment number that the transaction table is in, the row (slot) number in the transaction table, and the sequence number of the row (slot). The parts of the transaction id can be seen in the view v$transaction in the columns: xidusn, xidslot, xidsqn. In the latest versions of Oracle, the entire transaction id is also reported as a raw(8) column called xid.
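You can see both publications of the transaction id while a transaction is active, for example:

```sql
-- The transaction table slot as seen in v$transaction:
select  xidusn, xidslot, xidsqn, status
from    v$transaction;

-- The matching TX lock, held in mode 6 (exclusive);
-- id1 encodes (undo segment number, slot), id2 is the sequence:
select  sid, type, id1, id2, lmode
from    v$lock
where   type = 'TX';
```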
Undo: (formerly known as rollback): information describing how to reverse out changes made to data blocks (including index blocks, bitmap file header blocks etc). A single undo record is a description of how to reverse out a single “db block change”. Undo is stored in undo segments, which are created in undo tablespaces, and each undo segment header holds a “catalogue” (called the transaction table) which shows where recent transactions wrote their first undo record.
In earlier versions of Oracle the DBA would decide how many undo (rollback) segments to create and where to put them; in newer versions the DBA is expected to create a single (active) undo tablespace per instance and allow Oracle to decide dynamically how many undo segments are currently needed. Except for a few special cases, every datablock change results in the pre-creation of an undo record.
There is a persistent belief that entire Oracle blocks are copied into undo records. This is wrong – and probably reflects an older behaviour from Oracle 5.0 which used a “before image” file to keep copies of blocks. A typical undo record basically contains an overhead of about 80 bytes and the smallest amount of information needed to reverse out a change to a block.
From 10.2, the mechanism for handling undo changed, with the introduction of “private redo threads” and “in-memory undo”. To a large extent this only changed the timing of when undo records are written into undo blocks. If you examine the undo blocks after a transaction has committed you will not be able to tell whether it used “in-memory undo” or the traditional mechanism.
See also rollback(2) above, and separate blog article “Why Undo?”
Undo Segment Header: this is just a special case of a segment header block. The most significant difference is that the header block for an undo segment holds a transaction table – which is a fixed list of recent transactions holding state information, SCNs, and pointers into the undo segment to locate the undo records associated with the transaction. The undo segment header (for automatic undo) also holds a “retention map” which records the most recent commit SCN relevant to each extent and therefore allows Oracle to determine when an extent has passed the undo_retention period.