Migrating a Database to an Exadata Machine

We have been migrating databases from non-Exadata servers to the Exadata Database Machine, using the RMAN 11g "duplicate target database for standby from active database" command to create the standby databases on the Exadata machine. During a narrow maintenance window, we fail over each database to its counterpart on the Exadata host. Below are the steps performed for these successful migrations.

Assumptions
Here we assume that the following tasks have been completed and the servers are ready for the migration.

a. Oracle Grid Infrastructure 11.2.0 or later is installed on the Exadata Machine.
b. Oracle RDBMS 11.2.0.1 or later is installed on the Exadata Machine.
c. An ASM instance is configured and running on all the nodes.
d. The +DATA and +RECO disk groups are created with sufficient space.

High Level Steps
a. Configure the primary database to work with Oracle Data Guard.
b. Configure tnsnames entries for the primary and physical standby databases.
c. Create the physical standby database on the Exadata Machine from the source database.
d. Synchronize the physical standby database with the primary database.
e. Activate the standby database to make it the primary.
f. Upgrade the new primary database on the Exadata Database Machine.
g. Register the new primary database with CRS and configure it as RAC, running on all nodes.

Using Physical Standby for Migration

Advantages
a. Minimum Downtime
b. Easy to configure and manage
c. Database can be migrated with same name

Disadvantages
a. The source and target must run the same OS version.
b. The source and target must run the same RDBMS version.
c. If the source and target run different RDBMS versions, archived logs must be copied and applied manually.

Source and target configuration
Here is a brief description of the configuration.

Source and Target Details
Primary (source) server:
HOSTNAME: non-exadata
HOST TYPE: Non-RAC / Non Exadata
OPERATING SYSTEM: OEL 5.5
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1
DB NAME: db11g
ORACLE SID: db11g
DB_UNIQUE_NAME: db11g
DATAFILE LOCATION: /ora01/oradata/db11g/
FRA: /ora01/flash_recover_area

Standby/Target server:
HOSTNAME: exa01 – exa08
HOST TYPE: 8 NODE RAC / EXADATA
OPERATING SYSTEM: OEL 5.5
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
DATABASE NAME: db11g
ORACLE SID: db11g1
DB_UNIQUE_NAME: db11g
DATAFILE LOCATION: +DATA
FRA: +RECO

A. Configuring Source for Oracle Data Guard Setup
In this section we will verify that the primary database is configured correctly to support the physical standby database.

1. Determine whether FORCE LOGGING is enabled.

SQL> select force_logging from v$database;


If the output of the query is YES, proceed to the next step. If it is NO, enable FORCE LOGGING at the database level:

SQL> alter database force logging;


2. Determine whether the database is running in ARCHIVELOG mode.

Issue the following statement.

SQL> ARCHIVE LOG LIST;


If the database is running in NOARCHIVELOG mode, enable ARCHIVELOG mode as follows:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;


3. Set the primary database initialization parameters:

On the primary database, you define initialization parameters that control redo transport services while the database is in the primary role. These include:

SQL> SHOW PARAMETER DB_NAME

NAME     TYPE    VALUE
-------- ------- ------
db_name  string  DB11G

SQL> SHOW PARAMETER DB_UNIQUE_NAME   -- this does not change; it must remain DB11G

SQL> ALTER SYSTEM SET log_archive_format='LOG_%t_%s_%r.ARC' SCOPE=SPFILE;

SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST valid_for=(online_logfiles,all_roles) db_unique_name=db11g';

SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stdby ASYNC REOPEN=10 valid_for=(online_logfiles,primary_role) db_unique_name=db11g';

SQL> ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;

Apart from LOG_ARCHIVE_FORMAT, which is static and needs SCOPE=SPFILE plus an instance restart, these parameters are dynamic and take effect immediately. Check each parameter to make sure it points to the correct location as specified.

SQL> SHOW PARAMETER LOG_ARCHIVE_DEST_2

SQL> SHOW PARAMETER REMOTE_LOGIN_PASSWORDFILE   -- must be set to EXCLUSIVE (the default)


Note: The LOG_ARCHIVE_DEST_2 setting above applies only when the source and target environments are identical, including OS and RDBMS versions; otherwise redo is shipped manually (see section D).
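The %t, %s and %r tokens in LOG_ARCHIVE_FORMAT expand to the redo thread number, log sequence number and resetlogs ID. The shell sketch below just illustrates that substitution; the thread, sequence and resetlogs values are made-up examples.

```shell
# Expand the LOG_ARCHIVE_FORMAT tokens the way the archiver does:
# %t = thread#, %s = sequence#, %r = resetlogs id (example values below).
fmt='LOG_%t_%s_%r.ARC'
thread=1
sequence=245
resetlogs_id=703162041
name=$(echo "$fmt" | sed "s/%t/$thread/; s/%s/$sequence/; s/%r/$resetlogs_id/")
echo "$name"   # → LOG_1_245_703162041.ARC
```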

4. Configure the primary database to receive redo data by adding standby redo log files to it. It is highly recommended to have one more standby redo log group than there are online redo log groups on the primary database. The files must be the same size as, or larger than, the primary database's online redo logs.

Note: The standby redo log files will be created automatically on the standby when we perform the RMAN duplicate for the standby database.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/ora01/oradata/db11g/sb_redo_t1_g4.log' SIZE 500M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/ora01/oradata/db11g/sb_redo_t1_g5.log' SIZE 500M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/ora01/oradata/db11g/sb_redo_t1_g6.log' SIZE 500M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/ora01/oradata/db11g/sb_redo_t1_g7.log' SIZE 500M;

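The "one more group than online" rule works out like this; the counts below are examples (a single-instance source with three online redo log groups, matching the four standby groups added above):

```shell
# Rule of thumb: standby redo groups = (online groups per thread + 1) * threads.
# Example: a single-instance source with 3 online redo log groups.
online_groups_per_thread=3
threads=1
standby_groups=$(( (online_groups_per_thread + 1) * threads ))
echo "create $standby_groups standby redo log groups"   # → create 4 standby redo log groups
```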
5. Configure redo transport authentication. Create a password file.

$orapwd file=$ORACLE_HOME/dbs/orapwdb11g password=oracle entries=100

B. Configuring tnsnames for primary and physical standby database

1. Create an Oracle Net service name for the physical standby database on the primary database server. Your Data Guard configuration will use Oracle Net service names to reference the different databases.

Open the tnsnames.ora file and add the below physical standby database entry in it.

$cat $ORACLE_HOME/network/admin/tnsnames.ora

This entry will be used to connect to the standby database.

stdby =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = exa01)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = db11g)))
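These entries follow a fixed template, so a small shell helper (hypothetical, shown only to make the structure explicit) can stamp them out from the host, port and service values:

```shell
# Hypothetical helper: print a tnsnames.ora entry from its variable parts.
gen_tns_entry() {
  alias=$1; host=$2; port=$3; service=$4
  cat <<EOF
$alias =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = $host)(PORT = $port)))
    (CONNECT_DATA =
      (SERVICE_NAME = $service)))
EOF
}

# Reproduce the stdby entry above.
gen_tns_entry stdby exa01 1521 db11g
```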


2. Add a Net service name for the primary database in the standby server's tnsnames.ora file (this entry points back at the primary host).

$cat $ORACLE_HOME/network/admin/tnsnames.ora
dbg11r1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = non-exadata)(PORT = 1521)))
    (CONNECT_DATA =
      (SERVICE_NAME = db11g)))

LISTENER_DB11G1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = exa01-scan)(PORT = 1521))

LISTENERS_DB11G =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exa01-scan)(PORT = 1521)))

3. Configure the static listener entry for the standby database in the listener.ora file on standby server.

$cat $ORACLE_HOME/network/admin/listener.ora

LISTENER_EXA01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exa01)(PORT = 1521)))

SID_LIST_LISTENER_EXA01 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = db11g)
      (SID_NAME = db11g1)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)))

4. Stop and start the listener and make sure it is listening to the standby database service.

$lsnrctl stop listener_exa01
$lsnrctl start listener_exa01

C. Creating the physical standby database

1. Copy the password file from the primary database server to the standby server.

$scp $ORACLE_HOME/dbs/orapwdb11g oracle@exa01:/u01/app/oracle/product/11.2.0/db_1/dbs/orapwdb11g1


2. In the $ORACLE_HOME/dbs directory on the standby database server, create an initialization parameter file named initdb11g1 containing a single parameter, DB_NAME.

$cd $ORACLE_HOME/dbs
$cat initdb11g1
DB_NAME=db11g


3. On the standby database server, go to $ORACLE_BASE/admin and create directories for your DB_NAME and the audit dump destination.

$cd $ORACLE_BASE/admin
$mkdir -p db11g/adump
$ls -ltr


4. Create a database instance entry in /etc/oratab on the standby server.

$cat /etc/oratab
db11g1:/u01/app/oracle/product/11.2.0/db_1:N
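The oratab format is SID:ORACLE_HOME:autostart-flag, colon-separated. A quick sketch of pulling the fields apart (the line is hard-coded here rather than read from /etc/oratab):

```shell
# Parse an oratab-style line: fields are SID, ORACLE_HOME and the autostart flag.
line='db11g1:/u01/app/oracle/product/11.2.0/db_1:N'
sid=$(echo "$line" | cut -d: -f1)
home=$(echo "$line" | cut -d: -f2)
flag=$(echo "$line" | cut -d: -f3)
echo "$sid runs from $home (autostart=$flag)"
```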


5. On the standby server, set the ORACLE_SID for the standby database.

$export ORACLE_SID=db11g1
$echo $ORACLE_SID


6. Connect as sysdba and start the database in NOMOUNT state.

SQL> STARTUP NOMOUNT
SQL> EXIT;


7. On the primary database server set the ORACLE_SID for primary database.

$export ORACLE_SID=db11g
$echo $ORACLE_SID


8. On the primary database server, invoke RMAN, connecting to the primary database as target and to the standby database as auxiliary, as sys.

$rman target / auxiliary sys@stdby


9. Now execute the script below from RMAN on the primary database server. When it finishes you will have a new standby database, created over the network without any interim storage.

run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database NOFILENAMECHECK
spfile
parameter_value_convert '/ora01/oradata/db11g','+DATA'
set audit_file_dest '/u01/app/oracle/admin/db11g/adump'
set compatible '11.2.0'
set cluster_database 'FALSE'
set db_cache_size '1G'
set db_file_name_convert '/ora01/oradata/db11g','+DATA'
set db_unique_name 'db11g'
set local_listener 'LISTENER_EXA01'
set log_file_name_convert '/ora01/oradata/db11g','+DATA'
set log_archive_max_processes '5'
set pga_aggregate_target '300M'
set sga_target '4G'
set shared_pool_size '1G'
set standby_file_management 'AUTO';}


The above run command will fail with the below error message:

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 11/17/2009 10:47:23
RMAN-03015: error occurred in stored script Memory Script
RMAN-06172: no AUTOBACKUP found or specified handle is not a valid copy or piece


Go to the +DATA diskgroup on the standby server and get the control file name as below.

$asmcmd
ASMCMD> cd +data/db11g/controlfile
ASMCMD> pwd
+data/db11g/controlfile
ASMCMD> ls -l
Type         Redund  Striped  Time             Sys  Name
CONTROLFILE  HIGH    FINE     NOV 17 10:00:00  Y    current.273.703162041


Rerun the RMAN run block by adding the control_files parameter.

Note: You must rename/move the spfile created on standby server under $ORACLE_HOME/dbs directory before rerunning the RMAN run block.

run {
allocate channel prmy1 type disk;
allocate channel prmy2 type disk;
allocate channel prmy3 type disk;
allocate channel prmy4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database NOFILENAMECHECK
spfile
parameter_value_convert '/ora01/oradata/db11g','+DATA'
set audit_file_dest '/u01/app/oracle/admin/db11g/adump'
set compatible '11.2.0'
set cluster_database 'FALSE'
set db_cache_size '1G'
set db_file_name_convert '/ora01/oradata/db11g','+DATA'
set db_unique_name 'db11g'
set control_files '+data/db11g/controlfile/current.273.703162041'
set local_listener 'LISTENER_EXA01'
set log_file_name_convert '/ora01/oradata/db11g','+DATA'
set log_archive_max_processes '5'
set pga_aggregate_target '300M'
set sga_target '4G'
set shared_pool_size '1G'
set standby_file_management 'AUTO';}
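The db_file_name_convert and log_file_name_convert pairs in the run block are simple prefix substitutions: RMAN swaps the source path prefix for the target one in each file name. This shell sketch mimics that rewrite (the datafile name is an example):

```shell
# Mimic DB_FILE_NAME_CONVERT: swap the source path prefix for the ASM disk group.
src_prefix='/ora01/oradata/db11g'
dst_prefix='+DATA'
file='/ora01/oradata/db11g/system01.dbf'   # example datafile name
converted=$(echo "$file" | sed "s|^$src_prefix|$dst_prefix|")
echo "$converted"   # → +DATA/system01.dbf
```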


Steps 10 through 12 are used only if the source and target are identical, including OS and RDBMS versions.

10. Enable log_archive_dest_state_2 (optional; not required if it is already enabled).

SQL> ALTER SYSTEM SET log_archive_dest_state_2=ENABLE;


11. Perform a log switch on the primary database and redo will start being sent to the standby.

SQL> ALTER SYSTEM SWITCH LOGFILE;


12. On the standby database, set the ORACLE_SID and start the managed recovery process.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

D. Activate Standby Database and Upgrade to 11.2

1. Stop any monitoring of the primary database.

2. On the primary/source node, as oracle, flush all changes to the archived log files.

$sqlplus "/ as sysdba"
SQL> select thread#, sequence# from v$log where status = 'CURRENT' order by thread#;
SQL> alter system archive log current;
SQL> alter system archive log current;
SQL> alter system archive log current;
SQL> select thread#, sequence# from v$log where status = 'CURRENT' order by thread#;

3. Shut down the db11g database on the primary server.

4. (Optional) Start the instance on the primary in restricted mode.

SQL> startup restrict;
SQL> alter system enable restricted session;

5. Determine the current SCN for the physical standby on exa01.

$sqlplus "/ as sysdba"
SQL> select current_scn from v$database;

6. On the primary, determine the range of archived logs that need to be applied to the physical standby.

SQL> select name from v$archived_log where next_change# > {current_scn from step 5};
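What that query does, in effect, is keep only the logs whose NEXT_CHANGE# is past the standby's current SCN. A shell illustration of the same filter, with made-up log names and SCN values:

```shell
# Emulate the v$archived_log filter: print logs with next_change# > current_scn.
current_scn=500000   # the value from step 5 (example number)
printf '%s\n' \
  'LOG_1_100.ARC 499000' \
  'LOG_1_101.ARC 500100' \
  'LOG_1_102.ARC 500900' |
while read -r name next_change; do
  [ "$next_change" -gt "$current_scn" ] && echo "$name"
done
```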

7. On the primary, use RMAN to copy the archived logs found in the previous step to the standby database.

$rman target / auxiliary sys@stdby
RMAN> backup as copy auxiliary format '+RECO' archivelog sequence between <first_seq> and <last_seq> thread 1;

Repeat the previous step for all threads if the source is RAC.

8. Manually recover the archived logs on the standby database; after the last archived log has been applied, cancel recovery. On the exa01 node, recover the database.

$sqlplus "/ as sysdba"
SQL> recover standby database;

9. While in mount mode, activate the standby database on exa01.

SQL> alter database activate standby database;

10. Open the database using the upgrade option on exa01.

SQL> alter database open upgrade;

11. Get the registry values on the standby (exa01).

SQL> select comp_name, version, status from dba_registry;

12. Run catupgrd.sql on the standby (exa01).

SQL> @/u01/app/oracle/product/11.2.0/db_1/rdbms/admin/catupgrd.sql

13. Get the registry values on the standby again (exa01).

SQL> select comp_name, version, status from dba_registry;

E. Register database resources with clusterware

1. Make sure that you have an instance entry in /etc/oratab on all the nodes.

2. Make sure that SPFILE is on the shared storage, for example on ASM diskgroup and accessible from all the nodes.

3. As the oracle software owner, register the database and instances with Clusterware.

$srvctl add database -d db11g -o /u01/app/oracle/product/11.2.0/db_1 -p +DATA/db11g/spfiledb11g.ora -y AUTOMATIC -a "DATA,RECO"
$srvctl add instance -d db11g -i db11g1 -n exa01
$srvctl add instance -d db11g -i db11g2 -n exa02
$srvctl add instance -d db11g -i db11g3 -n exa03
$srvctl add instance -d db11g -i db11g4 -n exa04
$srvctl add instance -d db11g -i db11g5 -n exa05
$srvctl add instance -d db11g -i db11g6 -n exa06
$srvctl add instance -d db11g -i db11g7 -n exa07
$srvctl add instance -d db11g -i db11g8 -n exa08
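The eight add-instance commands differ only in the node digit, so they can be generated with a loop; the sketch below echoes the commands rather than running them, so the output can be reviewed before pasting:

```shell
# Generate (not run) the srvctl add instance commands for nodes exa01..exa08.
for n in 1 2 3 4 5 6 7 8; do
  echo "srvctl add instance -d db11g -i db11g$n -n exa0$n"
done
```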


4. Start the database and check its status.

$srvctl start database -d db11g
$srvctl status database -d db11g

Migrating a Database to an Exadata Machine – ALL DONE!
