Oracle Scratchpad

September 3, 2010

Last Modified

Filed under: Infrastructure — Jonathan Lewis @ 5:23 pm BST Sep 3,2010

Have you ever wondered if there’s a way to see when the data in a table was last modified? Depending on what you’re really trying to achieve, how accurate you want the answer to be, what resources you’re prepared to use, and how you’ve configured your database, you may be able to get an answer that’s good enough very easily.

If all you want is a rough indication that the table hasn’t changed over the last few days, or weeks, or even months, you may be able to run a simple, but potentially brutal, query against the table to find out. Here’s an example that gets there in three steps – using the table sys.link$ as a target. Steps 1 and 2 are for explanatory purposes only.


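-- Step 1: the upper-bound commit SCN recorded against each row of the target table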
select
	ora_rowscn
from
	link$
;

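-- Step 2: the SCN and timestamp at which each log file in the history started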
select
	first_change#, first_time
from
	v$log_history
order by
	first_change#;

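-- Step 3: the timestamp of the first log file that started after the most recent change to the table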
select
	min(first_time)
from
	v$log_history
where
	first_change# >= (
		select max(ora_rowscn) from link$
	)
;

The pseudo-column ora_rowscn is an upper bound for the SCN at which the last change to a row committed. If you have enabled rowdependencies this will generally be the actual commit SCN for the row; otherwise Oracle has various algorithms for working out the largest SCN available that is not less than the SCN at which the transaction committed. If all you’re interested in is an indication of when the table last changed, even this upper bound SCN may be good enough.
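
(As a side note, row-level dependency tracking has to be declared when you create the table – you can’t switch it on later with an alter table – and it costs an extra six bytes per row to hold the commit SCN. A minimal sketch, with an illustrative table name:)

-- row dependencies can only be set at create time
create table t1 (
	id	number,
	val	varchar2(30)
)
rowdependencies
;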

Step 2 is there to remind you that each row in v$log_history records the SCN and timestamp of the first change in a log file. This gives us a very crude conversion between SCNs and timestamps.
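
(As an aside, for sufficiently recent changes the scn_to_timestamp() function can do this conversion directly, but the SCN to timestamp mapping is only kept for a limited period – see the comments below – so it won’t help for a table that hasn’t changed in a long time. A minimal sketch against the same target:)

select
	scn_to_timestamp(max(ora_rowscn))	-- raises an error (ORA-08181) if the SCN is too old
from
	link$
;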

The final step gives us the answer we want. We find the approximate SCN of the most recent change to the table, and then report the timestamp of the first log file that started at a higher SCN.

Here’s a sample of output from my laptop for the three queries above:


ORA_ROWSCN
----------
   1274633

FIRST_CHANGE# FIRST_TIM
------------- ---------
       969527 27-APR-10
       976983 27-APR-10
       980116 27-APR-10
       988729 27-APR-10
       995910 27-APR-10
      1012055 27-APR-10
      1057428 28-APR-10
      1100463 30-APR-10
      1157543 03-MAY-10
      1188215 03-MAY-10
      1220173 04-MAY-10
      1226864 05-MAY-10
      1261756 06-MAY-10
      1300091 12-MAY-10	-- this is the critical row
      1342594 16-MAY-10
      1364917 16-MAY-10
      1418975 18-MAY-10
      1492902 19-MAY-10
      1582039 23-MAY-10
      1593611 09-JUN-10
      1596840 09-JUN-10

MIN(FIRST
---------
12-MAY-10

Remember – this isn’t trying to be accurate, and it’s not trying to guarantee that a table hasn’t been used recently, it’s just trying to say “it looks as if this table may not have changed recently”. But if you’re trying to clear old garbage out of a production system this gives you one more piece of evidence about whether it might be safe to drop a specific table. (As a simple example of why it’s not conclusive evidence – think about a table of currency codes: it’s likely to be very useful in a banking system, but may have been unchanged for years.)
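
(If you wanted to run the same check across a list of candidate tables, something along the following lines would do it. This is only a sketch, subject to all the caveats above, and the two table names in the in-list are purely illustrative:)

set serveroutput on

declare
	m_scn	number;
	m_time	date;
begin
	for r in (
		select	table_name
		from	user_tables
		where	table_name in ('T1','T2')	-- illustrative candidate list
	) loop

		execute immediate
			'select max(ora_rowscn) from ' || r.table_name
		into m_scn;

		select	min(first_time)
		into	m_time
		from	v$log_history			-- needs a direct select privilege on v_$log_history
		where	first_change# >= m_scn;

		dbms_output.put_line(
			rpad(r.table_name, 32) ||
			nvl(to_char(m_time, 'dd-mon-yyyy hh24:mi:ss'), 'no log file found')
		);

	end loop;
end;
/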

19 Comments »

  1. I thought of using the scn_to_timestamp function. Unfortunately this only works for the last 120 hours of scns:

    “The association between an SCN and a timestamp when the SCN is generated is remembered by the database for a limited period of time. This period is the maximum of the auto-tuned undo retention period, if the database runs in the Automatic Undo Management mode, and the retention times of all flashback archives in the database, but no less than 120 hours. The time for the association to become obsolete elapses only when the database is open. An error is returned if the SCN specified for the argument to SCN_TO_TIMESTAMP is too old.”

    Comment by Sean Molloy — September 3, 2010 @ 6:40 pm BST Sep 3,2010 | Reply

  2. Jonathan,
    is there a difference between the results from v$log_history and those of the scn_to_timestamp function? If my memory serves me well scn_to_timestamp does not translate ancient scn, but I guess that old entries from v$log_history also will be deleted sometime.

    Comment by Martin Preiss — September 3, 2010 @ 6:41 pm BST Sep 3,2010 | Reply

    • seems that Sean Molloy answered my question even before I asked it …

      Comment by Martin Preiss — September 3, 2010 @ 6:43 pm BST Sep 3,2010 | Reply

  3. Hi Jonathan,
    Thanks for the post, really useful. But I don’t understand why you are taking the min(first_time) and not the max(first_time). Could you explain this to me?

    Cheers,
    Wissem

    Comment by orawiss — September 3, 2010 @ 9:37 pm BST Sep 3,2010 | Reply

    • Wissem,
      I need a timestamp from a point in time AFTER the row changed, and I want that to be the earliest possible time I can find, hence min().

      Comment by Jonathan Lewis — September 6, 2010 @ 8:01 am BST Sep 6,2010 | Reply

  4. Tanel Poder has dropped me an email to let me know that he wrote some notes on this topic a few months ago. He was too modest to post the link here as a comment – but he should have done, as they are more extensive than my brief comment and cover topics like the smon_scn_time etc.

    http://blog.tanelpoder.com/2009/02/07/when-was-a-table-last-changed/

    Comment by Jonathan Lewis — September 6, 2010 @ 8:09 am BST Sep 6,2010 | Reply

  5. “Oracle has various algorithms for working out the largest SCN available that is not less than the SCN at which the transaction committed”

    Presumably some of that is related to dirty blocks, written to disk with uncommitted changes. If there’s a large gap (eg days/weeks) between that dirty write and a delayed block cleanout, any guidelines on what that algorithm may arrive at?

    Comment by Gary — September 6, 2010 @ 10:36 pm BST Sep 6,2010 | Reply

    • Gary,

      That’s a point that exposes one of the weaknesses of this method (even if you are using rowdependencies).

      In the worst possible case the block would have been written to disc before the commit took place and thus have no information about the commit time. When the cleanout takes place Oracle typically only needs to know that the commit took place before the transaction (or read) doing the cleanout started, so it will only work out an “upper bound commit” (i.e. one that is just a little smaller than the start SCN of the transaction (or query) that is reading the block) rather than working backwards in time as far as possible. This may be an SCN that is much later in time than the actual commit SCN of the transaction.

      Of course, if the cleaning process is a query it’s possible that it’s a very long running query, in which case Oracle may work backwards through the undo tablespace until it can’t go any further and still not find a commit SCN that precedes the query’s start SCN, in which case the query fails with an ORA-01555. (The relevant statistics are: transaction tables consistent read rollbacks, transaction tables consistent reads – undo records applied, and cleanout – number of ktugct calls (Get Commit Time).)

      Comment by Jonathan Lewis — September 7, 2010 @ 5:47 pm BST Sep 7,2010 | Reply

      • Thanks. I guess that where you need greater accuracy, you’d need to force a cleanout (eg full table scan).

        Comment by Gary — September 7, 2010 @ 10:22 pm BST Sep 7,2010 | Reply

  6. Hi Jonathan,

    Very interesting post.
    I had never heard of the ora_rowscn pseudo column and the same for the rowdependencies.
    How long has the ora_rowscn been around?
    Can you post about the rowdependencies somewhere in the future?

    Regards Hans-Peter

    Comment by Hans-Peter — September 9, 2010 @ 7:01 pm BST Sep 9,2010 | Reply

  7. Hey Jonathan,
    Excellent post. Have a question though…
    Correct me if I’m wrong, but the data stored in v$log_history depends on the maxloghistory parameter. If so, would this also be something to bear in mind?
    I mean, what if, in your example above, select max(ora_rowscn) from link$ were smaller than 969527, yet the table was last modified on 01-JAN-10?

    Wouldn’t the script return 27-APR-10?

    Thanks

    Comment by sulimo — September 14, 2010 @ 9:16 pm BST Sep 14,2010 | Reply

    • Sulimo,

      Correct – the method is only an upper bound approximation, and if you have only a short log history the best it can tell you is that the table hasn’t changed since your history started.

      Comment by Jonathan Lewis — September 15, 2010 @ 8:22 am BST Sep 15,2010 | Reply

  8. […] I felt very good after seeing this post (Last Modified) on Jonathan Lewis’s blog. I had given a similar reply to a user’s posting […]

    Pingback by Find when was last time block was modified « Oracle — September 26, 2010 @ 2:56 pm BST Sep 26,2010 | Reply

  9. Why not simply check db block changes from the segment statistics views?

    Comment by Naresh — September 28, 2010 @ 10:39 pm BST Sep 28,2010 | Reply

    • Naresh,

      At best this will tell you that the segment hasn’t been modified (or viewed) since the database started up – and that might be good enough for many people.

      I’d be a little worried that we don’t know about side effects of operations that might flush some of these statistics unexpectedly, though. I don’t think the documentation is very good on v$segstat and v$segment_statistics – and they do inhabit the SGA – so it’s possible that some of the information could be flushed when there is an extreme demand for memory.

      Comment by Jonathan Lewis — September 29, 2010 @ 2:41 pm BST Sep 29,2010 | Reply

  10. Hi Jonathan,

    Will v$log_history show only the last insert/update/delete?

    My scenario is similar: I have lots of tables in a PRODUCTION server and would like to eliminate the ones that are not being used. Rather than checking for insert/update/delete, I want to know when the table was last used for a select. Is this possible?

    Thanks,
    Sajith.

    Comment by Sajith — May 18, 2011 @ 6:23 pm BST May 18,2011 | Reply

    • Sajith,

      This strategy won’t help you.
      You could consider enabling audit by select for each table – but that only tells you about the future, not about the past.

      Comment by Jonathan Lewis — May 22, 2011 @ 12:42 pm BST May 22,2011 | Reply

  11. The following query may solve your problem:

    select * from all_tab_modifications
    where table_owner = '[user_name]'	-- substitute the schema of interest
    and timestamp > to_date('01-JUN-2012', 'DD-MON-YYYY');

    Comment by Chandresh Mesvaniya — June 18, 2012 @ 9:52 am BST Jun 18,2012 | Reply
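
(A footnote to the suggestion above: all_tab_modifications is populated from in-memory DML monitoring data that is only flushed to the data dictionary periodically, and the counters for a table are cleared when statistics are gathered on it, so treat the content as approximate. A minimal sketch, assuming you want very recent changes to be visible as well:)

-- flush the in-memory monitoring data so recent changes appear in the view
execute dbms_stats.flush_database_monitoring_info

select
	table_name, inserts, updates, deletes, timestamp
from
	all_tab_modifications
where
	table_owner = user
order by
	timestamp
;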

