Tuesday, March 11, 2014

Database migration using SAN migration

A couple of weeks back I posted a POC for database migration using SAN migration. Yesterday, we successfully migrated 13 databases from one RAC cluster to another.

We followed the steps described in the earlier post, with a few minor tweaks. The tweaks were mainly needed because the earlier setup had some database instance names in uppercase and some in lowercase; to bring consistency, we changed all instance names to lowercase. Similarly, the archive log destinations and RMAN backup directories were not consistent, so for easier administration we changed all of those directory names to lowercase as well.


Migration preparation:

  1. Identify the databases to be migrated.
  2. Collect the database and related database service configuration.
  3. Prepare commands to recreate the services, as well as to add the databases to the OCR on the target cluster.
  4. Copy password files from the existing cluster nodes to the target cluster nodes.
  5. Create text initialization (pfile) files and copy them to the target server.
  6. On the target server, modify the init files:
    1. Archive log destination
    2. Any other directory names (such as the audit dump destination)
    3. remote_listener parameter
    4. Change instance names from uppercase to lowercase
  7. Copy the init files from the existing cluster nodes to the target cluster nodes.
  8. Create entries in /etc/oratab for the databases being migrated.
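To illustrate steps 2 and 3, the srvctl add commands for the target cluster can be generated ahead of time from a list of databases. This is a minimal sketch, not our actual configuration: the database names, Oracle Home path, and disk group are assumptions, and the generated file should be reviewed before running it on the target cluster.

```shell
# Hypothetical sketch: generate "srvctl add database" commands for the target
# cluster, forcing database names to lowercase as described above.
# TARGET_OH, the +DATA disk group, and the database names are assumed.
TARGET_OH=/oracle/orabase/product/11.2.0/dbhome_1

for db in HRDB FINDB
do
   dbl=`echo ${db} | tr '[:upper:]' '[:lower:]'`   # enforce lowercase names
   echo "srvctl add database -d ${dbl} -o ${TARGET_OH} -p +DATA/${dbl}/spfile${dbl}.ora"
done > add_databases.sh

cat add_databases.sh   # review before executing on the target cluster
```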
On the day of migration:
  1. The application team shut down the respective application services.
  2. Shut down the database(s) having files on the disk group being migrated.
  3. Unmount the disk group on all nodes of the existing cluster (asmcmd umount <DiskGroupName>).
    1. Here we shut down the entire cluster, as we were migrating all databases from this cluster.
  4. The SysAdmin detached the SAN LUNs from the existing cluster.
  5. The SysAdmin presented the same LUNs to the target cluster nodes.
  6. The ASM admin/DBA mounted the disk group using the asmcmd utility (mount <DiskGroupName>).
  7. Repeat the following steps for each database:
    1. Start the database using the modified pfile through SQL*Plus.
    2. Create an spfile based on the pfile used to start up the database.
    3. Shut down the database.
    4. Start up using the spfile through SQL*Plus.
    5. Apply catbundle for the PSU that was installed on the target cluster but not on the source cluster.
    6. Validate the modified parameters.
  8. Add the database to the OCR using the commands prepared in Step 3 (with database instance names changed to lowercase).
  9. Start the database using srvctl.
  10. Add the database service(s), if any, identified in Step 2 and Step 3.
  11. Although we had already performed a RAC system test as part of the system build, we performed service fail-over validation here as well.
  12. We performed a controlled reboot of both nodes at the same time.
  13. After everything came back online, we executed the script [validate_svc_restart.sh] to ensure that services were running on their preferred nodes.
  14. We also updated the RMAN configuration to accommodate the change in directory names.
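The per-database steps above (7.1 through 7.4) can also be prepared in advance. The sketch below emits the SQL*Plus commands into a file for review rather than running them live; the database name and pfile path are assumptions for illustration, and the generated file would be run manually as sysdba.

```shell
# Hypothetical sketch of steps 7.1-7.4 for one database. DBN and the pfile
# path are assumptions. The commands are written to a .sql file for review,
# then run manually via: sqlplus / as sysdba @start_hrdb.sql
DBN=hrdb
PFILE=/oracle/orabase/admin/${DBN}/pfile/init${DBN}1.ora   # assumed pfile location

cat <<EOF > start_${DBN}.sql
startup pfile='${PFILE}';
create spfile from pfile='${PFILE}';
shutdown immediate;
startup;
EOF

echo "Wrote start_${DBN}.sql"
```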
Script validate_svc_restart.sh:

#!/bin/ksh
# validate_svc_restart.sh - verify that each database service is running on its
# preferred instance(s); if not, restart the service so it relocates.

ORATAB=${ORATAB:-/etc/oratab}; export ORATAB

# Pick up the ASM SID from oratab and source the environment profile
ASM_SID="`cat $ORATAB |awk -F: '/^\+ASM/{print $1}'`"; export ASM_SID
. /oracle/orabase/admin/common/oraprofile.sh ${ASM_SID} 1>/dev/null 2>/dev/null

# Loop over the service resources registered in Clusterware (resource names
# look like ora.<dbname>.<servicename>.svc); skip preconnect services
for svc in `crsctl stat res -t | grep svc | grep -v preconnect `
do
   echo " * * * * Processing ${svc}"
   export DBN=`echo ${svc}|cut -d. -f2`
   export SRVC=`echo ${svc}|cut -d. -f3,4,5 | sed s/\.svc\$//`
   # Check each preferred instance of the service
   for PRFRD_SVR in `srvctl config service -d ${DBN} -s ${SRVC} | grep Preferred | cut -d: -f2|tr  , " "`
   do
      echo "      * * * * Validating preferred instance: ${PRFRD_SVR}"
      if [ `srvctl status service -d ${DBN} -s ${SRVC} | grep ${PRFRD_SVR} | wc -l` -lt 1 ]
      then
          echo " Service[${SRVC}] is NOT running on Preferred server: ${PRFRD_SVR}"
          # Bounce the service so it comes back up on its preferred instance
          srvctl stop service -d ${DBN} -s ${SRVC}
          srvctl start service -d ${DBN} -s ${SRVC}
          srvctl status service -d ${DBN} -s ${SRVC}
      fi
   done

done