STEP BY STEP PROCEDURE OF GI UPGRADE FROM 11.2.0.3 TO 11.2.0.4 USING CLI OR SILENT MODE
In this section we will walk through upgrading the Grid Infrastructure (GI) from 11.2.0.3 to 11.2.0.4 using the command line in silent mode.
-
Step 1: Before the upgrade, Oracle High Availability Services (HAS) reports 11.2.0.3 for both the release version and the software version.
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.3.0]
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.3.0]
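Optionally, before making any changes, confirm that the stack is healthy and note which resources are currently running. A minimal pre-check, assuming an Oracle Restart (standalone) setup like the one in this example:
$ crsctl check has
$ crsctl stat res -t
$ srvctl status asm
$ srvctl status database -d ORCLDB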
-
Step 2: Unzip the 11.2.0.4 GI software and create a directory for the new GI home.
unzip /home/oracle/ora_patch_11_2_0_4/p13390677_112040_Linux-x86-64_3of7.zip

mkdir -p /u01/app/grid/product/11.2.0.4/grid
chown -R oracle:oinstall /u01/app/grid/product/11.2.0.4/grid
chmod -R 755 /u01/app/grid/product/11.2.0.4/grid
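It is also worth verifying that the mount point has enough free space for the new home and that the ownership and permissions were applied as intended, for example:
$ df -h /u01
$ ls -ld /u01/app/grid/product/11.2.0.4/grid
$ id oracle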
-
Step 3: Before installing the new 11.2.0.4 GI, check that the existing GI home entry in /u01/app/oraInventory/ContentsXML/inventory.xml has CRS="true". In some cases CRS="true" is missing, usually when the GI was installed as software only and later converted to HAS.
In this example, CRS="true" is missing.

If CRS="true" is not set against the existing GI home, the new installer fails to detect the existing Grid Infrastructure installation and you may encounter the following error during the upgrade:
[FATAL] [INS-40406] The installer detects no existing Oracle Grid Infrastructure software on the system.
To resolve this, update the inventory by running the following command, specifying the existing GI home and CRS=true:
$ cd /u01/app/grid/product/11.2.0/grid/oui/bin/
$ ./runInstaller -updateNodeList ORACLE_HOME="/u01/app/grid/product/11.2.0/grid" CRS=true
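After the -updateNodeList command completes, you can confirm that the existing home is now registered with CRS="true" by looking at its entry in the inventory, for example:
$ grep "11.2.0/grid" /u01/app/oraInventory/ContentsXML/inventory.xml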

-
Step 4: We will now start the GI upgrade in silent mode from the unzipped software location.
$ cd /home/oracle/ora_patch_11_2_0_4/grid
Set the environment:
export ORACLE_HOME=/u01/app/grid/product/11.2.0.4/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_SID=+ASM
./runInstaller -silent -ignoreSysPrereqs -ignorePrereq -responseFile /home/oracle/ora_patch_11_2_0_4/grid/response/grid_install.rsp ORACLE_HOME="/u01/app/grid/product/11.2.0.4/grid" oracle.install.option="UPGRADE" oracle.install.asm.OSDBA=dba oracle.install.asm.OSOPER=dba oracle.install.asm.OSASM=oinstall
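The silent installation runs for a while with little console output; its progress can be followed from a second session in the central inventory logs (the exact file name carries a timestamp), for example:
$ tail -f /u01/app/oraInventory/logs/installActions*.log
Once the software installation finishes, the installer prints the rootupgrade.sh script that must be run from the new home as the root user, which is done next.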

# cd /u01/app/grid/product/11.2.0.4/grid/
# ./rootupgrade.sh
Log in as the oracle user:
$ cd /u01/app/grid/cfgtoollogs
$ vi /tmp/conf.rsp
oracle.assistants.asm|S_ASMPASSWORD=GPBDASMPWD#
oracle.assistants.asm|S_ASMMONITORPASSWORD=GPBDASMMON#
$ /u01/app/grid/product/11.2.0.4/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/tmp/conf.rsp
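Since /tmp/conf.rsp holds the ASM passwords in clear text, remove it once configToolAllCommands completes successfully:
$ rm -f /tmp/conf.rsp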
oracle@ORCLDB(+ASM):/home/oracle# cat /u01/app/grid/product/11.2.0.4/grid/install/root_ORCL_2018-03-19_13-45-41.log
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/grid/product/11.2.0.4/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file:/u01/app/grid/product/11.2.0.4/grid/crs/install/crsconfig_params
Creating trace directory
ASM Configuration upgraded successfully.
Creating OCR keys for user 'oracle', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node ORCL successfully pinned.
Replacing Clusterware entries in upstart
Replacing Clusterware entries in upstart
ORCL 2018/03/19 13:50:50 /u01/app/grid/product/11.2.0.4/grid/cdata/ORCL/backup_20180319_135050.olr
ORCL 2016/12/22 06:34:53 /u01/app/grid/product/11.2.0/grid/cdata/ORCL/backup_20161222_063453.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
We will now check the inventory.xml file again to confirm that CRS="true" is now recorded against the new 11.2.0.4 GI home, as shown below.
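A quick way to list the registered homes and see which one carries CRS="true" is to grep the HOME entries, for example:
$ grep "HOME NAME" /u01/app/oraInventory/ContentsXML/inventory.xml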
Now we will start the database:
srvctl start database -d ORCLDB
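Then verify that the database has come up cleanly:
$ srvctl status database -d ORCLDB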
Check the version:
$ crsctl query has softwareversion
Oracle High Availability Services version on the local node is [11.2.0.4.0]
$ crsctl query has releaseversion
Oracle High Availability Services release version on the local node is [11.2.0.4.0]
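As a final sanity check, confirm that HAS and ASM are online from the upgraded home:
$ crsctl check has
$ srvctl status asm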
