RMAN and BIGFILE – hellish situations


I came across an interesting situation today: an Exadata server hosting a database built entirely on BIGFILE tablespaces. Nothing wrong with that, you say, and in general you may be correct. In this case, though, it reflects the unintended consequences of a poor architectural decision. A BIGFILE tablespace is exactly what the name implies: a tablespace with a single datafile, almost always set to AUTOEXTEND with some size limit in mind – or, for the lazy, UNLIMITED.
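For anyone who hasn't worked with one, creating a BIGFILE tablespace looks something like this – the names, diskgroup and sizes here are illustrative, not from the system in question:

```
SQL> CREATE BIGFILE TABLESPACE big_hairy_tablespace
  2  DATAFILE '+DATA' SIZE 100G
  3  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```

One tablespace, one datafile, and it just keeps growing.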

Works like a charm until you realize that backing up this monstrosity is a challenge. Did I mention one datafile is almost 4TB in size, dwarfing all the others? No? Well, now you know. By default RMAN backs up this datafile like any other: a single channel is tasked with the backup, and it takes as long as it takes – in this case, several hours. To add to the fun, that single backup piece lands in one destination, destroying your idea of a nicely balanced ZFS appliance structure.

So, here’s a work-around.

– Exclude BIG_HAIRY_TABLESPACE from the full backup
– Take the full backup
– Back up the tablespace separately with a SECTION SIZE parameter
– Clear the tablespace exclusion

Let’s see how this would work. First, since exclusions are not that common, let’s see what that command looks like:

RMAN> configure exclude for tablespace BIG_HAIRY_TABLESPACE;

using target database control file instead of recovery catalog
Tablespace BIG_HAIRY_TABLESPACE will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored

RMAN> show all;

CONFIGURE EXCLUDE FOR TABLESPACE 'BIG_HAIRY_TABLESPACE';   <-------- here it is
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default

RMAN> configure exclude for tablespace BIG_HAIRY_TABLESPACE clear;

Tablespace BIG_HAIRY_TABLESPACE will be included in future whole database backups
old RMAN configuration parameters are successfully deleted

So what would a backup script look like? Something like this.

configure exclude for tablespace BIG_HAIRY_TABLESPACE;

run
{
# MAXPIECESIZE is a channel option, but it cannot be combined with
# SECTION SIZE (more on that below), so it is left off the channels here
allocate channel c1 device type disk format '/zfs/stream1/%U';
allocate channel c2 device type disk format '/zfs/stream2/%U';
allocate channel c3 device type disk format '/zfs/stream3/%U';
allocate channel c4 device type disk format '/zfs/stream4/%U';
allocate channel c5 device type disk format '/zfs/stream5/%U';
allocate channel c6 device type disk format '/zfs/stream6/%U';
allocate channel c7 device type disk format '/zfs/stream7/%U';
allocate channel c8 device type disk format '/zfs/stream8/%U';
backup as compressed backupset incremental level 0 filesperset 1
  database include current controlfile;
backup as compressed backupset section size 30G
  tablespace BIG_HAIRY_TABLESPACE;
}

configure exclude for tablespace BIG_HAIRY_TABLESPACE clear;

OK, so that will back up the database, then use all eight channels to split that 4TB datafile backup across all our mount points.
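If you want to confirm that the big tablespace really went out as a multisection backup spread across the streams, listing its backups is a quick sanity check (piece names and numbers will of course differ on your system):

```
RMAN> list backup of tablespace BIG_HAIRY_TABLESPACE;
```

Each 30G section should appear as its own backup piece, with handles rotating through the /zfs/streamN directories.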

Still, we are not out of the woods yet. Here's why:

RMAN incremental level 1 backups pose some challenges:
– You can't use SECTION SIZE for incremental level 1 backups
– You can't use SECTION SIZE in conjunction with MAXPIECESIZE

What this means is that incremental level 1 (or level 0 + n) backups will have to scan the entire file, and you will also be back to a single channel performing a monolithic backup. That's bad for many reasons. Clearly, the key consideration for making incremental backups viable is to ensure block change tracking is enabled for the database, preferably before your next level 0.
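Enabling block change tracking is a one-liner from SQL*Plus; the file location below is purely illustrative:

```
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  2  USING FILE '/u01/app/oracle/oradata/bct.chg';

SQL> SELECT status, filename FROM v$block_change_tracking;
```

With tracking in place, a level 1 backup reads only the blocks recorded as changed since the last level 0, instead of scanning the whole 4TB file to find them.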

So what have we learned?
– Bigfile tablespaces are great at providing almost maintenance-free database storage.
– When properly managed for growth, they can be backed up fairly efficiently.

But:
– Once they grow to the point that you are no longer comfortable with the concept of a single-threaded backup, you need to rearchitect and distribute the contents of that tablespace.
– I really hope you are not relying on RMAN to restore this database in the event of a disaster. Just saying.
