Incremental backup sizing using the Crystal Ball method.

A quick read of Oracle's support site (support.oracle.com, for those with an account) can sometimes reveal a gem or two. Two of my favourite recent discoveries were these:

  • ORACLE 10G BLOCK CHANGE TRACKING INSIDE OUT (Doc ID 1528510.1)
  • How to estimate RMAN incremental backup size using block change tracking file (Doc ID 1938079.1)

Huh, you might be asking. 10g? Better yet, my friends: 10gR1! The reason I love that first note is the depth of the writing and its coverage of internals, which don't appear to have fundamentally changed much since. I've had a couple of posts lately about BIGFILE backups and the use of the block change tracking file, so it seemed a good time to bring these notes up.
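One practical prerequisite before trying anything below: the scripts read the X$ structures behind the change tracking file, so they need to be run as SYS, and block change tracking must of course be enabled. A quick check, plus the enabling command (the file path here is just an example):

-- Is block change tracking enabled, and where does the file live?
SELECT status, filename, bytes
  FROM v$block_change_tracking;

-- If not, enable it (destination path is illustrative only).
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';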

-- Estimate the size in MB of the next level 1 incremental backup of
-- datafile &&1, by counting the distinct 32k chunks flagged in the block
-- change tracking file since the most recent level 0 or level 1 backup.
SELECT ROUND ((COUNT (DISTINCT bno) * 32) / 1024) AS "MBs changed"
  FROM x$krcbit b
 WHERE b.fno = &&1
   AND b.vercnt >=
          (SELECT MIN (ver)
             FROM (SELECT curr_vercnt ver, curr_highscn high, curr_lowscn low
                     FROM x$krcfde
                    WHERE fno = &&1
                   UNION ALL
                   SELECT vercnt ver, high, low
                     FROM x$krcfbh
                    WHERE fno = &&1)
            WHERE (SELECT MAX (bd.checkpoint_change#)
                     FROM v$backup_datafile bd
                    WHERE bd.file# = &&1
                      AND bd.incremental_level <= 1) BETWEEN low AND high);

This script takes a datafile number and estimates the size of the next level 1 incremental backup of that file, based on the changes recorded since the most recent level 0 or level 1 backup.
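Since it references the SQL*Plus substitution variable &&1, the query is meant to be saved to a script and invoked with the datafile number as the first argument. A minimal example (incr_size.sql is just a hypothetical name for the saved query):

-- Find the datafile number you're interested in (FILE# in v$datafile).
SELECT file#, name FROM v$datafile;

-- Run the saved query, passing the file number as the first argument.
@incr_size.sql 7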

Our next piece of code shows the changes in terms of blocks, bytes and percentage for every datafile, again based on the contents of the block change tracking file. It's useful for spotting which datafiles are experiencing significant change.

-- For every datafile, show the changed blocks recorded in the block change
-- tracking file, plus bytes changed and percentage of the file changed.
select file#,
       blocks_changed,
       block_size,
       blocks_changed * block_size bytes_changed,
       round(blocks_changed / blocks * 100, 2) percent_changed
from   v$datafile
       join (select fno file#,
                    sum(bct) blocks_changed
             from   (select distinct fno, bno, bct
                     from   x$krcbit
                     where  vertime >= (select curr_vertime
                                        from   x$krcfde
                                        where  csno = x$krcbit.csno
                                        and    fno = x$krcbit.fno))
             group by fno)
       using (file#)
order by bytes_changed desc, percent_changed desc;

[Screenshot: sample output of the script above]

As you can see, these two scripts can be quite handy for forecasting incremental backup sizes before you actually run the backup.
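If you would rather see the same picture rolled up by tablespace instead of by datafile, something along these lines should work. This is my own sketch rather than anything from the notes, reusing the change counts from the second script and joining through dba_data_files:

-- Hypothetical roll-up by tablespace: the same BCT-derived change counts,
-- aggregated via dba_data_files (file_id maps to v$datafile.file#).
select ddf.tablespace_name,
       sum(chg.blocks_changed)                 blocks_changed,
       sum(chg.blocks_changed * df.block_size) bytes_changed
from   v$datafile df
       join (select fno file#,
                    sum(bct) blocks_changed
             from   (select distinct fno, bno, bct
                     from   x$krcbit
                     where  vertime >= (select curr_vertime
                                        from   x$krcfde
                                        where  csno = x$krcbit.csno
                                        and    fno = x$krcbit.fno))
             group by fno) chg
         on chg.file# = df.file#
       join dba_data_files ddf
         on ddf.file_id = df.file#
group by ddf.tablespace_name
order by bytes_changed desc;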

One last nugget: did you know that the block change tracking file tracks changes in chunks of 32k? For a tablespace with an 8k block size, one BCT chunk therefore covers four blocks; with 16k blocks it covers two; and with 32k blocks the mapping is one to one. So whilst you may think the file records exactly which blocks have changed, the block count can be overstated by a factor of up to four. The chunk size can be changed by setting the hidden parameter "_bct_chunk_size" to another value, such as 8192, so if you really wanted an exact count of changed blocks, that would be the way to do it. For most systems, though, the 32k chunk is accurate enough.
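To see what that granularity means for your own database, a quick query like the following (again, my own sketch, assuming the default 32k chunk) shows how many blocks a single BCT chunk covers per tablespace:

-- Blocks covered by one BCT chunk per tablespace, assuming the
-- default chunk size of 32768 bytes ("_bct_chunk_size" unset).
SELECT tablespace_name,
       block_size,
       32768 / block_size AS blocks_per_bct_chunk
  FROM dba_tablespaces
 ORDER BY block_size, tablespace_name;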
