Backup and Recovery in Oracle 11g

In this post, we are going to discuss complete backup and recovery strategies in Oracle database 11g.

There are different ways to take a database backup.

The reasons why Oracle provides so many backup options are:

1. A logical backup cannot be considered the main backup option, since it offers no concept of recovery.

2. A cold backup cannot be implemented for 24/7 customers. For such customers, we need to implement a hot backup strategy.

3. If the size of the database is in terabytes, we cannot take a full backup every day. In this scenario, we need to depend on incremental backups, which are supported by RMAN.

Note: We need to design a backup strategy for a customer based on

  • How the customer conducts business
  • The size of the database

Advantages of Logical Backup:

1. Logical backups are most useful for schema-level refreshes.

2. Logical backups are also useful for database upgrades and database migrations.

An upgrade means moving from one version to another version.

A migration means moving from one platform to another platform.

3. Logical backups are platform independent.

4. Logical backups are also useful for table reorg (re-organization), schema recovery and database reorg.

Utilities of logical backup are:

  1. exp
  2. imp

Files generated by logical backup are

  • Dump file
  • Log file

Example:

$exp file=hrms_lbkp_14_11_16.dmp log=hrms_lbkp_14_11_16.log full=y direct=y

The dump file is partly binary and partly text in nature.

It is recommended not to edit the dump file with either Notepad or the vi editor.

 

If there is a requirement to edit, take a backup of the dump file first and then edit the copy.

The log file is a text file that shows the progress of the job. It also contains any error messages and warning messages encountered during the job.

By default, a traditional logical backup is conventional, which means it goes through the SGA processing layer. We can bypass the SGA processing layer by specifying direct=y.

By default, a user can take a backup of his own objects. If a user wants to take a backup of the full database, the user should have the EXP_FULL_DATABASE role, and for import, the IMP_FULL_DATABASE role.

Sometimes while an import operation is running, we get undo-related error messages. To overcome this, specify commit=y.

commit=y means Oracle loads an array of records and issues commits frequently, instead of one commit at the end of the load.

Sometimes while an import is running, we may get DDL-related error messages (for example, because the objects already exist). To overcome this, specify ignore=y.
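The two workarounds above can be combined on one command line. A hypothetical sketch (credentials and log file name are assumptions):

```shell
# Import a full-database dump, committing per array insert and ignoring
# "object already exists" errors; system/manager is a placeholder login.
$ imp system/manager file=hrms_lbkp_14_11_16.dmp log=hrms_imp.log full=y commit=y ignore=y
```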

Logical incremental backup:

In the case of an incremental backup, Oracle backs up only modified objects.

The biggest disadvantage is that even if a single record is modified or inserted, Oracle backs up the entire segment.

The different levels of incremental backup are:

1. inctype=complete

2. inctype=incremental

3. inctype=cumulative

 

inctype=complete

It is the base backup for incremental backups. The basic difference between full=y and inctype=complete is that inctype=complete takes a backup of the entire database and also updates the data dictionary.

 

inctype=incremental

It takes a backup of all objects modified since the most recent incremental, cumulative or complete backup.

 

inctype=cumulative

It takes a backup of all objects modified since the most recent cumulative or complete backup.
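As a sketch, the three levels might be invoked like this (credentials and file names are assumptions):

```shell
# The complete export is the base; incrementals and cumulatives build on it.
$ exp system/manager file=hrms_full.dmp log=hrms_full.log inctype=complete
$ exp system/manager file=hrms_inc1.dmp log=hrms_inc1.log inctype=incremental
$ exp system/manager file=hrms_cum1.dmp log=hrms_cum1.log inctype=cumulative
```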

Datapump:

Datapump is a new technology introduced in 10g version of Oracle.

Utilities of datapump are

1.expdp

2.impdp

Datapump is much faster compared to a traditional logical backup because it uses APIs (Application Programming Interfaces) such as dbms_metadata and dbms_datapump.

The prerequisite for Datapump is that we need to create a directory at the Oracle level as well as at the OS level.

By default, a user can take a backup of his own objects, provided the user has read and write privileges on the directory.
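A minimal sketch of that setup, assuming the directory name dpump_dir, the path /u01/app/oracle/dpump, and the user hr (all hypothetical):

```shell
# Create the directory at the OS level, then at the Oracle level, then grant access.
mkdir -p /u01/app/oracle/dpump
sqlplus -s / as sysdba <<EOF
create directory dpump_dir as '/u01/app/oracle/dpump';
grant read, write on directory dpump_dir to hr;
exit;
EOF
# A schema-level export using that directory:
expdp hr/hr directory=dpump_dir dumpfile=hr.dmp logfile=hr.log schemas=hr
```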

Datapump is a server-side utility, which means it generates the dump file on the server side irrespective of the location from where we initiate the job.

Once we invoke a Datapump job, Oracle starts a master control process, and this process keeps track of the status of the job in a table created with the job name (the master table). Once the job completes successfully, Oracle drops the table automatically.

The biggest advantage of datapump is, we can stop the job whenever there is a performance issue and we can resume the job at a later point in time.
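Stopping and resuming works by attaching to the running job. A sketch, where the job name SYS_EXPORT_FULL_01 is an assumption (the actual name is reported when the job starts):

```shell
# Attach to a running Datapump job from another session.
expdp system/manager attach=SYS_EXPORT_FULL_01
# At the interactive Export> prompt:
#   Export> stop_job=immediate    -- pause the job
# Later, attach again and resume:
#   Export> start_job
```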

By default, Datapump runs serially; we can also run it in parallel by specifying the parallel parameter. We can estimate the space required for the dump file by specifying the estimate parameter.

We can compress the metadata as well as the business data by specifying the compression parameter.

We can exclude objects such as packages, procedures etc. by specifying the exclude parameter.

We can also take a backup of remote database objects through database links by specifying the network_link parameter.
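Hypothetical one-line examples of these parameters (directory, dump file and link names are assumptions; in 11g, compression=all compresses both metadata and data, while metadata-only compression is the default):

```shell
# Parallel full export with one dump file per worker (%U substitution).
expdp system/manager directory=dpump_dir dumpfile=full%U.dmp full=y parallel=4
# Space estimate only; no dump file is written.
expdp system/manager directory=dpump_dir full=y estimate_only=y
# Compress both metadata and business data.
expdp hr/hr directory=dpump_dir dumpfile=hr.dmp compression=all
# Exclude packages and procedures from the export.
expdp hr/hr directory=dpump_dir dumpfile=hr.dmp exclude=package exclude=procedure
# Export a remote schema over a database link.
expdp hr/hr directory=dpump_dir dumpfile=remote.dmp network_link=prod_db_link schemas=hr
```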

A dump file generated by exp can be imported only by imp. In the same way, a dump file generated by expdp can be imported only by impdp.

User-managed physical backup:

A database may be configured to run either in noarchivelog mode or in archivelog mode.

If the database runs in noarchivelog mode, only a simple restore is possible and only a cold backup is possible. If the database runs in archivelog mode, restore + recovery are possible.
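The current mode can be checked, and archivelog mode enabled, from SQL*Plus; a minimal sketch run as SYSDBA (a clean shutdown is required first):

```shell
sqlplus -s / as sysdba <<EOF
archive log list
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
archive log list
exit;
EOF
```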

Recovery is of two types:

1. Complete recovery

2. Incomplete recovery

  • In the case of complete recovery, there is no loss of data.
  • In the case of incomplete recovery, there is a loss of data.

Incomplete recovery is of three types:

1. until cancel

2. until time

3. until scn
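The three clauses might look like this in SQL*Plus (timestamp and SCN are hypothetical; the database must be mounted):

```shell
sqlplus -s / as sysdba <<EOF
recover database until cancel;
-- or:
-- recover database until time '2016-11-21 15:11:00';
-- or, until a specific SCN:
-- recover database until change 123456;
alter database open resetlogs;
exit;
EOF
```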

Cold Backup:

Cold backup is also called an offline backup or a consistent backup. To take a cold backup, we need to shut down the database gracefully by executing any one of the following commands:

1. shutdown normal

2. shutdown transactional

3. shutdown immediate

For 24/7 customers, we cannot implement cold backup strategy.

In the case of a cold backup, we shut down the database gracefully and copy the control files, data files and redo log files to the backup destination.

The process of applying archive logs to a previous backup is called recovery.

Whenever we perform incomplete recovery, we open the database with resetlogs.

Loss of the current online redo log files leads to incomplete recovery.

The only scenario where we open the database with resetlogs but it is still a complete recovery is the loss of the current control file.

Whenever we open the database with resetlogs, Oracle creates brand-new online redo logs starting with log sequence 1 and generates a new resetlogs id (incarnation number) for the database.

Steps to take cold backup:

1. Shut down the database gracefully

2. Copy the control files, redo log files and data files to the backup destination.
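The two steps above can be sketched as a script; the SID and all paths are assumptions:

```shell
# Cold backup sketch: graceful shutdown, copy CRD files, restart.
export ORACLE_SID=hrms
sqlplus -s / as sysdba <<EOF
shutdown immediate;
exit;
EOF
cp /u01/app/oracle/oradata/hrms/*.ctl /opt/coldbkp/   # control files
cp /u01/app/oracle/oradata/hrms/*.log /opt/coldbkp/   # online redo logs
cp /u01/app/oracle/oradata/hrms/*.dbf /opt/coldbkp/   # data files
sqlplus -s / as sysdba <<EOF
startup;
exit;
EOF
```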

Scenario-1: Loss of the full database (control files, redo log files and data files)

1. shut abort

2. Restore the latest backup

3. startup mount

4. alter database recover automatic using backup controlfile until cancel;

5. recover cancel;

6. alter database open resetlogs;

 

Note: After performing a full database restore and recovery, we open the database with resetlogs because of the log sequence mismatch.

Scenario-2: Loss of a non-system datafile (hrms01.dbf)

Method 1 – Not recommended

1. Offline the hrms tablespace

2. Restore hrms01.dbf from the latest backup

3. Recover datafile <PATH> or <NUMBER>

4. Online the hrms tablespace

Method 2 – Recommended

1. Offline hrms01.dbf

2. Restore hrms01.dbf from the latest backup

3. Recover datafile <PATH> or <NUMBER>

4. Online hrms01.dbf

Scenario-3: Loss of the system datafile (system.dbf)

1. shut abort

2. Restore system.dbf from the latest backup

3. startup mount

4. recover database;

5. alter database open;

Scenario-4: Loss of the sysaux datafile (sysaux.dbf)

1. Offline sysaux.dbf

2. Restore sysaux.dbf from the latest backup

3. Recover datafile <PATH> or <NUMBER>

4. Online sysaux.dbf

Scenario-5: Loss of all control files

1. shut abort

2. Restore the control files from the latest backup

3. startup mount

4. alter database recover automatic using backup controlfile until cancel;

   OR

   recover database using backup controlfile until cancel;

5. recover cancel;

6. alter database open resetlogs;

Scenario-6: Loss of a datafile which was not in the backup

1. Offline hrms03.dbf

2. alter database create datafile '/u01/app/oracle/hrms/hrms03.dbf';

3. Recover datafile '/u01/app/oracle/hrms/hrms03.dbf';

4. Online hrms03.dbf

Scenario-7: Loss of online redo log files

1. shut abort

2. Restore the CRD files (control, redo and data files), or only the .dbf files, from the latest backup

3. startup mount

4. alter database recover automatic using backup controlfile until cancel;

   OR

   recover database until cancel;

5. recover cancel;

6. alter database open resetlogs;

Scenario-8: Point-in-time recovery

1. Copy the latest backup and archive logs to the test server

2. startup mount

3. alter database recover automatic using backup controlfile until time '2016-11-21 15:11:00';

4. recover cancel;

5. alter database open resetlogs;

6. $exp file=<> log=<> tables=emp

7. Copy the dump file to the prod environment and import

Note: From 10g, we can largely eliminate point-in-time recovery by implementing flashback technology.

HOT BACKUP

Hot backup is also called an online backup or an inconsistent backup.

This is the type of backup that we implement for 24/7 customers.

In the case of a hot backup, we back up the database while it is up and running.

In the case of a hot backup, there is no concept of a simple restore; we must perform recovery.

The prerequisite for a hot backup is that the database must run in archivelog mode.

In the case of a hot backup, we back up tablespace by tablespace, keeping each tablespace in a special mode called 'begin backup'.

Once the backup of all tablespaces is completed, we back up the control file at the Oracle level by executing the following command:

>alter database backup controlfile to '<PATH>/control01.ctl';

At the end of every hot backup, we switch the log file manually, which causes a checkpoint, and the checkpoint change number is updated in the control file as well as in the datafile headers.

Steps to take hot backup

Method 1:

1. >alter tablespace hrms_ts begin backup;

2. $cp hrms*.dbf /opt/hotbkp

3. >alter tablespace hrms_ts end backup;

4. Repeat steps 1 to 3 for all tablespaces

5. >alter database backup controlfile to '/opt/hotbkp/control01.ctl';

6. >alter system switch logfile;

 

Method 2:

1. >alter database begin backup;

2. $cp *.dbf /opt/hotbkp

3. >alter database end backup;

4. >alter database backup controlfile to '/opt/hotbkp/control01.ctl';

5. >alter system switch logfile;
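Method 2 can be sketched as a single script; the datafile path and backup destination are assumptions:

```shell
# Hot backup sketch (requires archivelog mode): whole database in begin backup,
# copy datafiles, end backup, back up the controlfile, then force a log switch.
sqlplus -s / as sysdba <<EOF
alter database begin backup;
exit;
EOF
cp /u01/app/oracle/oradata/hrms/*.dbf /opt/hotbkp/
sqlplus -s / as sysdba <<EOF
alter database end backup;
alter database backup controlfile to '/opt/hotbkp/control01.ctl';
alter system switch logfile;
exit;
EOF
```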

 

In 10g, Oracle introduced a single command (alter database begin backup) to keep the entire database (all tablespaces) in begin backup mode. It is recommended to use this command during non-business hours.

If we keep a tablespace in begin backup mode for a long duration, excessive redo information is generated, leading to more log switches and more archive logs in the archive destination.

If the archive destination becomes 100% full, the database goes into a hung state.

The reason excessive redo is generated is that, for the first change to a block after begin backup, Oracle writes the entire block image to the redo log instead of just the change vectors.

In the case of a hot backup, there are chances of getting split blocks (fractured blocks) in the backup copies.

Note: In the industry, along with the database backup, we also take a backup of the archive logs and then delete them from the archive destination.
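One common way to do this in a single step is with RMAN, which the post mentions for incremental backups; a minimal sketch:

```shell
# Back up all archive logs and delete them from the archive destination
# once they are safely in the backup.
rman target / <<EOF
backup archivelog all delete input;
exit;
EOF
```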

Whether it is a cold or a hot backup, the way we take the backup differs, but the process of restore and recovery is the same.

 

Words from dbapath

Thank you for giving your valuable time to read the above information. I hope the content served your purpose in reaching this blog.
Suggestions for improving the blog are highly appreciated. Please contact us with any information/suggestions/feedback.
