
In the first post in this series on server modernization, I started us off on moving an Oracle Database the Old Way "Quickly" and brought us to this point.

 

Now we will set up all of the scripts for the application and the network links. This is the tedious part: there are many individual scripts to export/import the data in the tables. Organize these into a giant script that will run them all at once. Below is a sample of the beginning of such a script; make one for each separate group of tables you are moving. (You can see how old this is from the ORACLE_HOME.)

 

 

#!/bin/sh
######################################################################
# set -x
#  FILENAME: expimp.sh
#  USAGE: expimp.sh 'name of the table group to export/import'
#
#  for example:
# nohup expimp.sh struct &
# Will export/import the database structure from one system to the other
#  and use a dummy label for the table group
#
#  The script creates a named pipe on each server, runs exp on the
#  legacy server into its pipe, copies that pipe across the remote
#  shell link, and feeds the local pipe into imp on the target server.
######################################################################
RET=0
E_ORAHOME="Legacy server ORACLE_HOME"   # set to the legacy server's ORACLE_HOME
I_ORAHOME="Target server ORACLE_HOME"   # set to the target server's ORACLE_HOME
CWD=`pwd`                               # working directory for the import files
TABLE_NAME=$1
RESH="/usr/bin/resh target-1"           # remote shell to the legacy server
SRC_MKNOD="/etc/mknod"
DST_MKNOD="/etc/mknod"
DATE=`date`
echo $DATE
ORATAB=/etc/oratab
EXP="/home/oracle/product/7.3.4/bin/exp"
IMP="/home/oracle/product/7.3.4/bin/imp"
EXPFILE="/export/home/wpereira/exp${TABLE_NAME}.par"
IMPFILE="$CWD/imp${TABLE_NAME}.par"
EXPLOG="/export/home/wpereira/exp${TABLE_NAME}.log"
IMPLOG="$CWD/imp${TABLE_NAME}.log"
SRC_USER="system/'password'"
DST_USER="system/'password'"
EXP_PIPE="/tmp/eximpipe.$$"             # named pipe on the legacy server
IMP_PIPE="/tmp/eximpipe.$$"             # named pipe on the target server
# Start the export on the legacy server, writing into its named pipe
${RESH}  "
    $SRC_MKNOD ${EXP_PIPE} p
    ORACLE_SID=ORCL; export ORACLE_SID
    export TWO_TASK
    ORACLE_HOME=${E_ORAHOME}; export ORACLE_HOME
    $EXP userid=${SRC_USER} file=${EXP_PIPE} log=${EXPLOG}  parfile=${EXPFILE} & " &
#
# Set up the local environment and the pipe on the target server
ORACLE_SID="ORCL"; export ORACLE_SID
ORACLE_HOME=$I_ORAHOME; export ORACLE_HOME
$DST_MKNOD ${IMP_PIPE} p
# Copy the remote pipe across the link into the local pipe
${RESH} "cat ${EXP_PIPE}" > ${IMP_PIPE} &
# run the import
$IMP userid=${DST_USER} file=${IMP_PIPE} log=${IMPLOG} parfile=${IMPFILE} &
# wait for the background jobs so a 'wait' in the calling script is meaningful
wait
exit 0

 

 

Be sure to set up a high-speed link between the legacy server and the new server. This link allows remote execution of the scripts and carries the export stream between the two systems.
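Before rehearsal, it is worth confirming that the link really allows passwordless remote execution from the server that will run the scripts. A minimal check might look like the following, assuming the remote host name matches the one in the RESH variable of the sample script and that the oracle account is trusted in .rhosts or hosts.equiv on that host:

#!/bin/sh
# Sanity check of the remote shell link used by expimp.sh.
# Host name and paths are placeholders; adjust for your environment.
RESH="/usr/bin/resh target-1"
${RESH} "uname -a"              # should return without a password prompt
${RESH} "ls -l /etc/mknod"      # confirm mknod is where the script expects it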

 

 

Once the database has been created, you need to create all of the application's objects in it. Just as Oracle allocates space for tablespaces on disk, it has to allocate space for objects such as tables while the rows are being loaded. This space allocation takes time, so the better approach is to pre-allocate the space before the actual movement of the data, which takes place while the application is shut down.
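Purely as an illustration (the schema, table, tablespace, and storage figures below are placeholders, not from the original migration), pre-creating a table with its space allocated up front might look like this, so the import only has to load rows:

#!/bin/sh
# Sketch only: pre-create one application table with its space allocated
# up front. Schema, tablespace, and storage sizes are placeholders.
ORACLE_HOME=/home/oracle/product/7.3.4; export ORACLE_HOME
ORACLE_SID=ORCL; export ORACLE_SID
$ORACLE_HOME/bin/sqlplus system/password <<EOF
CREATE TABLE app_owner.orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER(12,2)
)
TABLESPACE app_data
STORAGE (INITIAL 500M NEXT 50M PCTINCREASE 0);
EXIT;
EOF

With the tables pre-created, the import parameter files would typically include IGNORE=Y so that imp loads rows into the existing tables instead of failing on its own CREATE statements.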

 

In this step you are also creating the VIEWS, PROCEDURES, PACKAGES, TRIGGERS, SYNONYMS, etc. of the application in the database. (You’ll have to disable the TRIGGERS before the data loads.)
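One way to script the disabling is to generate the statements from the data dictionary. The sketch below assumes the application schema is called APPOWNER, which is a placeholder:

#!/bin/sh
# Sketch only: generate ALTER TRIGGER ... DISABLE statements for every
# trigger owned by the application schema, to run before the rows load.
ORACLE_HOME=/home/oracle/product/7.3.4; export ORACLE_HOME
ORACLE_SID=ORCL; export ORACLE_SID
$ORACLE_HOME/bin/sqlplus -s system/password <<EOF > disable_triggers.sql
SET PAGESIZE 0 FEEDBACK OFF
SELECT 'ALTER TRIGGER ' || owner || '.' || trigger_name || ' DISABLE;'
  FROM dba_triggers
 WHERE owner = 'APPOWNER';
EXIT;
EOF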

 

Now that you have the export/import scripts set up, invoke them in one super script. You probably won’t be able to do the entire database in parallel. During the rehearsal, you’ll find where to put in the ‘wait’ statements.
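Purely as an illustration, the super script might look like the following; the group names are placeholders, and the 'wait' statements mark the batch boundaries you settle on during rehearsal:

#!/bin/sh
# Sketch of the super script: run a batch of table groups in parallel,
# wait for the batch to finish, then start the next batch.
# Group names are placeholders.
./expimp.sh orders    &
./expimp.sh invoices  &
./expimp.sh customers &
wait                       # first batch must finish before the big tables start
./expimp.sh history   &
./expimp.sh audit_log &
wait
echo "all export/import groups complete"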

 

Once all the ROWS have been imported, you can invoke the set of scripts that create the INDEXes. When these are finished, run the script to ENABLE the CONSTRAINTS, which themselves generate INDEXes as they are enabled. The next major step is to ENABLE the TRIGGERS; this is also done by script.
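For illustration only (object names, tablespaces, and sizes are placeholders), one of those post-load scripts might contain statements such as:

#!/bin/sh
# Sketch of the post-load steps for one table: build an index, enable a
# constraint (a primary or unique key builds its index when enabled),
# then re-enable a trigger. All names and sizes are placeholders.
ORACLE_HOME=/home/oracle/product/7.3.4; export ORACLE_HOME
ORACLE_SID=ORCL; export ORACLE_SID
$ORACLE_HOME/bin/sqlplus system/password <<EOF
CREATE INDEX app_owner.orders_date_ix ON app_owner.orders (order_date)
  TABLESPACE app_index STORAGE (INITIAL 100M NEXT 10M PCTINCREASE 0);
ALTER TABLE app_owner.orders ENABLE CONSTRAINT orders_pk;
ALTER TRIGGER app_owner.orders_bi ENABLE;
EXIT;
EOF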

 

The audit of the database is among the last steps. Match the output of DBA_OBJECTS in both databases. You also need to get the number of ROWS in each table of the legacy database and compare it to the number of ROWS imported into the target database. This is where the team comes in handy; each member can take a different audit task.
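One of the audit queries, run the same way on both databases and then diffed, might be a simple object count by owner, type, and status:

#!/bin/sh
# Sketch of an audit query: object counts by owner, type, and status.
# Run it against the legacy and the target database and diff the output.
ORACLE_HOME=/home/oracle/product/7.3.4; export ORACLE_HOME
ORACLE_SID=ORCL; export ORACLE_SID
$ORACLE_HOME/bin/sqlplus -s system/password <<EOF > object_counts.lst
SET PAGESIZE 0 FEEDBACK OFF
SELECT owner, object_type, status, COUNT(*)
  FROM dba_objects
 GROUP BY owner, object_type, status
 ORDER BY owner, object_type, status;
EXIT;
EOF

Row counts can be compared the same way, with a SELECT COUNT(*) per table generated from DBA_TABLES.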

 

Now open the application for use by the testers. Because you don’t want to add any test data to the production database, this has to be done carefully.

 

If everything passes muster, the target database can be opened for production. For the paranoid, the legacy database can be kept available for parallel production in case the firm wants to move back to the old system.

 

Finally, document the entire effort so that the configuration is clear for others to understand. And go get some rest.

 

This process doesn’t take advantage of Oracle Streams or Oracle GoldenGate, but I believe there is a use for this methodology even today. What do you think? Do you agree or disagree? Do you have migrations that will require this older approach because no other will work? Do you have any suggestions to improve this process? What have you seen when doing this with Data Pump? Do you recommend any changes to this methodology when using 10.2 or 11.2 as the target database?
