Upgrading an Oracle 11g RAC Database to Oracle 12c RAC Using DBUA

This document provides guidelines for upgrading Oracle 11g RAC Clusterware and Database to Oracle 12c RAC.

Author: Saibal Ghosh


1 Tasks Before Upgrade

1.1 Backup the Database:

Before we start the upgrade, it is a best practice to back up the database, the Oracle Cluster Registry (OCR), the Oracle database home, and the Grid home.
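A minimal sketch of such a backup, assuming RMAN is configured, the Grid home is /orasw/app/11.2.0/grid, and the homes can be archived with tar (all paths are illustrative and must be adapted to the environment):

```shell
# Full database backup plus archived logs and controlfile (RMAN)
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE;
EOF

# Take a manual OCR backup as root (written under the Grid home by default)
/orasw/app/11.2.0/grid/bin/ocrconfig -manualbackup

# Archive the existing database and Grid homes (example paths)
tar -czf /backup/db_home_11g.tar.gz /orasw/app/oracle/product/11.2.0/dbhome_1
tar -czf /backup/grid_home_11g.tar.gz /orasw/app/11.2.0/grid
```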


The following packages, or later versions, need to be installed on the system for the upgrade to go through successfully:

Pluggable Authentication Modules for Linux (Linux PAM)

We need to install the latest Linux PAM (Pluggable Authentication Modules for Linux) library for our Linux distribution. PAM provides greater flexibility for system administrators to choose how applications authenticate users. On Linux, external scheduler jobs require PAM.
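Before the upgrade we can verify that the PAM library is present using the package manager; a sketch for RPM-based distributions such as Oracle Linux or RHEL (the package name may differ on other distributions):

```shell
# Check whether the Linux PAM package is installed (RPM-based systems)
rpm -q pam

# If it is missing or outdated, install or update it
yum install -y pam
```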


Oracle JDBC/OCI Drivers
We can use the following optional JDK versions with the Oracle JDBC/OCI drivers; however, they are not required for the installation:

JDK 6 (Java SE Development Kit 1.6.0_21)
JDK 1.5.0-24 (JDK 5.0) with the JNDI extension


Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.
We have two options for time synchronization: an operating system configured network time protocol (NTP), or Oracle Cluster Time Synchronization Service. Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If we use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If we do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server.
If we have NTP daemons on our server but we cannot configure them to synchronize time with a time server, and we want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then we need to deactivate and deinstall the NTP.
To deactivate the NTP service, we must stop the existing ntpd service, disable it from the initialization sequences, and remove the ntp.conf file. To complete these steps on Oracle Linux, we run the following commands as the root user:

# /sbin/service ntpd stop

# chkconfig ntpd off

# rm /etc/ntp.conf


Alternatively, we can rename the file instead of removing it:

# mv /etc/ntp.conf /etc/ntp.conf.org

We also need to remove the following file:

# rm /var/run/ntpd.pid

This file maintains the pid for the NTP daemon. When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
To confirm that ctssd is active after upgrade, we need to enter the following command as the Grid installation owner:

$ crsctl check ctss

If we are using NTP, and we prefer to continue using it instead of Cluster Time Synchronization Service, then we need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward. We then restart the network time protocol daemon after completing this task.


To do this, on Oracle Linux, Red Hat Linux, and Asianux systems, we edit the /etc/sysconfig/ntpd file to add the -x flag, as in the following example:


# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate


# Additional options for ntpdate

Then, restart the NTP service.

# /sbin/service ntpd restart


We use the following command to find out whether an existing version of the cvuqdisk package is installed:

# rpm -qi cvuqdisk

We need to ensure that the above package is installed; if it is not, we need to install it. We also need to ensure that the Oracle software owner user (oracle or grid) has the Oracle Inventory group (oinstall) as its primary group and is a member of the appropriate OSDBA group.
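If cvuqdisk is missing, it ships with the Grid Infrastructure media under the rpm/ directory; a sketch, assuming a staging directory of /orasw/stage/grid (the path and package version are illustrative):

```shell
# Set the owning group for cvuqdisk (usually the Oracle Inventory group)
export CVUQDISK_GRP=oinstall

# Install the package shipped with the Grid Infrastructure media
# (the exact version number depends on the release)
rpm -iv /orasw/stage/grid/rpm/cvuqdisk-1.0.9-1.rpm
```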


1. Log in as an installation owner.
2. We need to check the soft and hard limits for the file descriptor setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Sn


$ ulimit -Hn


3. We need to check the soft and hard limits for the number of processes available to a user.
 We need to ensure that the result is in the recommended range. For example:

$ ulimit -Su


$ ulimit -Hu


4. We need to check the soft limit for the stack setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Ss


$ ulimit -Hs


We need to repeat this procedure for each Oracle software installation owner.
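The checks above can be collected into a small script run as each installation owner; the comments note what each limit controls, and the recommended minimums should be confirmed against the installation guide for the exact release:

```shell
# Print the shell limits relevant to an Oracle installation owner
soft_nofile=$(ulimit -Sn)   # soft limit, open file descriptors
hard_nofile=$(ulimit -Hn)   # hard limit, open file descriptors
soft_nproc=$(ulimit -Su)    # soft limit, user processes
hard_nproc=$(ulimit -Hu)    # hard limit, user processes
soft_stack=$(ulimit -Ss)    # soft limit, stack size (KB)

echo "nofile: soft=$soft_nofile hard=$hard_nofile"
echo "nproc:  soft=$soft_nproc hard=$hard_nproc"
echo "stack:  soft=$soft_stack KB"
```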
As an example, the settings in the  Production Server are as follows:



2. Upgrade the Grid Infrastructure:

Step 1: Unset Oracle-related environment variables, including the following:

GI_HOME

ORA_NLS10
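The unset step can be done in the installation owner's shell; GI_HOME and ORA_NLS10 come from the list above, while the other variables shown are ones commonly cleared before a Grid Infrastructure upgrade and should be adapted to the environment:

```shell
# Clear Oracle-related environment variables before starting the upgrade
unset GI_HOME
unset ORA_NLS10
# Commonly also cleared before a Grid upgrade (adapt to the environment)
unset ORACLE_HOME ORACLE_BASE ORACLE_SID TNS_ADMIN
```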
Step 2: Check that there is enough space on the mount point and in /tmp, and that there is at least 4 GB of free space in the OCR and voting disk diskgroup, because the MGMTDB is created in that diskgroup.
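A quick way to verify the space requirements; the mount point /orasw is an assumption, and asmcmd must be run as the Grid owner with the ASM environment set:

```shell
# Free space on the software mount point and /tmp
df -h /orasw /tmp

# Free space in the ASM diskgroups; the OCR/voting diskgroup
# needs at least 4 GB free because the MGMTDB is created there
asmcmd lsdg
```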
Step 3: Back up the Cluster and Oracle Homes and the OCR
Step 4: Check that the CRS active and software versions are the same:

$ crsctl query crs activeversion

$ crsctl query crs softwareversion

Step 5: Validate node readiness for the upgrade. Run a command similar to the following:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /orasw/app/11.2.0/grid -dest_crshome /orasw/app/12.1.0/grid -dest_version
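A complete invocation looks like the following; the destination version value (12.1.0.2.0 here) is an assumption and must match the Grid release actually being installed:

```shell
# Run from the 12c Grid software staging area as the Grid owner
./runcluvfy.sh stage -pre crsinst -upgrade -rolling \
  -src_crshome /orasw/app/11.2.0/grid \
  -dest_crshome /orasw/app/12.1.0/grid \
  -dest_version 12.1.0.2.0 -fixup -verbose
```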

Step 6: Start the Upgrade. The screenshots are self-explanatory.

Screen 1) We select the Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management option.

Screen 2) We choose the default language: English.

Screen 3) We select the nodes to form part of the RAC Cluster in the upgrade.

Screen 4)  EM Cloud Control is not selected at this point.


Screen 5)  The Operating System groups are selected from the drop down list.

Screen 6) Specifying the Oracle base and software location. We do not use the default software location, and thus we see a warning message.

Screen 7) We prefer to manually run the configuration scripts and do not check the 'Automatically run configuration scripts' checkbox.

Screen 8) In the Prerequisite Checks page we find that the swap space check fails. Therefore we manually  increase the swap space, and move on to the next step.

Screen 9)  This is the Summary page and we see a consolidated page of the information that is being taken into the upgrade process.

Screen 10) The pop-up for running the rootupgrade.sh script appears.

Screen 11) The Product installation is continuing.


Screen 12)  We get the message that the upgrade of the Grid Infrastructure was successful.


3. Installing the database software

We now need to install the database software. For this we need to run the runInstaller.
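A sketch of launching the installer from the staged 12c database media; the staging path /orasw/stage/database is an assumption:

```shell
# Run as the oracle software owner with an X display (or VNC) available
cd /orasw/stage/database
./runInstaller
```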

Screen 1) Beginning the process of installation of the database software. We choose not to provide an email id.


Screen 2) Installing only the database software.

Screen 3)  Oracle Real Application Clusters database installation.

Screen 4)  We select the nodes to form part of the cluster.

Screen 5) The default language is English.

Screen 6) We choose the Enterprise Edition.

Screen 7) The Oracle base and the software location.

Screen 8) The Operating System Groups.

Screen 9) We get the swap size /dev/shm error, which is ignorable in this case.

Screen 10) The Summary page.

Screen 11) The final screen: the installation of the Oracle database software is successful.


Now, the final step would be to run the Database Upgrade Assistant to actually upgrade the database.

4.  Database Upgrade
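DBUA is launched from the new 12c database home; a sketch, assuming the home is /orasw/app/oracle/product/12.1.0/dbhome_1 (illustrative path):

```shell
# Launch the Database Upgrade Assistant from the new 12c home
export ORACLE_HOME=/orasw/app/oracle/product/12.1.0/dbhome_1
$ORACLE_HOME/bin/dbua
```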

Screen 2) We choose the database to upgrade.

Screen 3) The prerequisite checks.

Screen 4) Prerequisite checks continuing. On the next screen we take steps to recompile invalid objects.

Screen 5) Upgrade Options.

Screen 6) Management Options.

Screen 7) We choose to have our own backup policy.

Screen 8) The Summary page.

Screen 9) The Progress page. The pop-up alert is because of uncompiled PL/SQL objects, and since we have already planned to recompile invalid objects, this error is ignorable and we continue.

Screen 10) The Progress page continues.

Screen 11) The final screen shows that the upgrade completed successfully.

The database and Grid Infrastructure were successfully upgraded to Oracle 12c.
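After the upgrade, the new versions can be confirmed with a few typical checks (the exact version string reported depends on the patch level installed):

```shell
# Clusterware active version should now report 12.1
crsctl query crs activeversion

# Database version as seen by an instance
sqlplus -s / as sysdba <<'EOF'
SELECT version FROM v$instance;
EOF
```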



About the Author

Debasis Maity

12+ years of rich experience in database administration and as an infrastructure solution architect. AWS Certified Solutions Architect and Senior Oracle DBA.
