Sunday 5 February 2017

Two-node RAC cluster upgrade from 11.2.0.4 to 12.1.0.2
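
Before launching the 12.1.0.2 installer, the upgrade prerequisites are normally validated with cluvfy from the new software stage. A minimal sketch, assuming the source and destination Grid homes that appear in the logs below:

$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u01/app/12.1.0/grid -dest_version 12.1.0.2.0 -fixup -verbose

(The CLSRSC-363 line in the output below shows that prerequisite warnings were ignored during this particular run.) When the installer prompts for the root scripts, rootupgrade.sh is run first on node cbrrac1:
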
[root@cbrrac1 app]# /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2017/02/05 20:42:04 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/05 20:43:15 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/02/05 20:43:20 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/02/05 20:43:34 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/02/05 20:43:34 CLSRSC-363: User ignored prerequisites during installation

2017/02/05 20:43:48 CLSRSC-515: Starting OCR manual backup.

2017/02/05 20:43:52 CLSRSC-516: OCR manual backup successful.

2017/02/05 20:43:57 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2017/02/05 20:43:57 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0/grid -oldCRSVersion 11.2.0.4.0 -nodeNumber 1 -firstNode true -startRolling true'


ASM configuration upgraded in local node successfully.

2017/02/05 20:44:11 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2017/02/05 20:44:11 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/02/05 20:45:36 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/02/05 20:50:17 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/02/05 20:54:39 CLSRSC-472: Attempting to export the OCR

2017/02/05 20:54:39 CLSRSC-482: Running command: 'ocrconfig -upgrade oracle oinstall'

2017/02/05 20:54:56 CLSRSC-473: Successfully exported the OCR

2017/02/05 20:55:04 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2017/02/05 20:55:04 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2017/02/05 20:55:04 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2017/02/05 20:55:04 CLSRSC-543:
 3. The downgrade command must be run on the node cbrrac1 with the '-lastnode' option to restore global configuration data.

2017/02/05 20:55:30 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/02/05 20:55:49 CLSRSC-474: Initiating upgrade of resource types

2017/02/05 20:57:52 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p first'

2017/02/05 20:57:52 CLSRSC-475: Upgrade of resource types successfully initiated.

2017/02/05 20:57:59 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
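
Before running rootupgrade.sh on the remaining node, it is worth confirming that the upgraded stack is healthy on cbrrac1; crsctl check crs should report Oracle High Availability Services, Cluster Ready Services, Cluster Synchronization Services and the Event Manager as online:

# /u01/app/12.1.0/grid/bin/crsctl check crs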
=================================================================

rootupgrade.sh is then run on the second and final node, cbrrac2:

[root@cbrrac2 app]# /u01/app/12.1.0/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: n

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
2017/02/05 20:59:11 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/05 21:00:27 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/02/05 21:00:29 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/02/05 21:00:37 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/02/05 21:00:37 CLSRSC-363: User ignored prerequisites during installation


ASM configuration upgraded in local node successfully.

2017/02/05 21:00:51 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/02/05 21:01:32 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/02/05 21:02:07 CLSRSC-329: Replacing Clusterware entries in file '/etc/inittab'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/02/05 21:04:59 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2017/02/05 21:05:26 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2017/02/05 21:05:27 CLSRSC-482: Running command: '/u01/app/12.1.0/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
The OCR was successfully upgraded.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2017/02/05 21:07:44 CLSRSC-479: Successfully set Oracle Clusterware active version

2017/02/05 21:07:54 CLSRSC-476: Finishing upgrade of resource types

2017/02/05 21:08:14 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.4.0 -d 12.1.0.2.0 -p last'

2017/02/05 21:08:14 CLSRSC-477: Successfully completed upgrade of resource types

2017/02/05 21:08:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

With rootupgrade.sh complete on both nodes, verify the active Clusterware version from each node:

[oracle@cbrrac1 bin]$ hostname
cbrrac1.rtm.com
[oracle@cbrrac1 bin]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]


[oracle@cbrrac2 ~]$ hostname
cbrrac2.rtm.com
[oracle@cbrrac2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]


[oracle@cbrrac1 ~]$ srvctl -V
srvctl version: 11.2.0.4.0
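
srvctl still reports 11.2.0.4.0 here because the shell environment (ORACLE_HOME and PATH) still points at the old 11.2 home. A minimal sketch of re-pointing the environment at the new Grid home, using the paths from the logs above:

$ export ORACLE_HOME=/u01/app/12.1.0/grid
$ export PATH=$ORACLE_HOME/bin:$PATH
$ srvctl -V

After this, srvctl should report version 12.1.0.2.0.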

=============================================================

[oracle@cbrrac1 ~]$ ps -ef|grep smon
root      1400     1  2 20:53 ?        00:01:30 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle    1707     1  0 20:54 ?        00:00:00 asm_smon_+ASM1
oracle   12024  4212  0 22:01 pts/1    00:00:00 grep smon
oracle   19631     1  0 21:21 ?        00:00:00 mdb_smon_-MGMTDB
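
The mdb_smon_-MGMTDB process above belongs to the Grid Infrastructure Management Repository (GIMR) database, which is new in 12c and runs on one node of the cluster; its status can be checked with:

$ srvctl status mgmtdb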


Saturday 4 February 2017

OCR and Voting disk commands

[oracle@rtmrac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1540
         Available space (kbytes) :     408028
         ID                       :  227236111
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Note that the logical corruption check was bypassed for the non-privileged user; running ocrcheck as root performs it as well:

[root@rtmrac1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :       1572
         Available space (kbytes) :     407996
         ID                       :  227236111
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded
=================================================================================
[oracle@rtmrac1 ~]$ ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[oracle@rtmrac1 ~]$ ocrconfig -manualbackup
PROT-20: Insufficient permission to proceed. Require privileged user
[oracle@rtmrac1 ~]$ su -
Password:
Last login: Tue Jan 24 21:58:18 IST 2017 from 192.168.176.1 on pts/0
[root@rtmrac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@rtmrac1 ~]# ocrconfig -manualbackup
rtmrac1     2017/02/04 09:48:34     /u01/app/12.1.0/grid/cdata/rtmcluster/backup_20170204_094834.ocr     0
[root@rtmrac1 ~]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available

rtmrac1     2017/02/04 09:48:34     /u01/app/12.1.0/grid/cdata/rtmcluster/backup_20170204_094834.ocr     0

Alternatively, the OCR can be exported to a file (note below that the export is not listed by -showbackup):
[root@rtmrac1 ~]# ocrconfig -export /tmp/bkpocr.ocr
[root@rtmrac1 ~]# ocrconfig -showbackup
PROT-24: Auto backups for the Oracle Cluster Registry are not available

rtmrac1     2017/02/04 09:48:34     /u01/app/12.1.0/grid/cdata/rtmcluster/backup_20170204_094834.ocr     0
[root@rtmrac1 ~]# ls -lrth /tmp/bkpocr.ocr
-rw------- 1 root root 97K Feb  4 09:58 /tmp/bkpocr.ocr
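
To recover from these backups: a physical backup (manual or automatic) is restored with ocrconfig -restore, and an export file is brought back with ocrconfig -import. Both must be run as root with the Clusterware stack down on all nodes; a minimal sketch using the files created above:

# crsctl stop crs -f          (run on every node)
# ocrconfig -restore /u01/app/12.1.0/grid/cdata/rtmcluster/backup_20170204_094834.ocr
# crsctl start crs            (run on every node)

(or, to import the logical export)

# ocrconfig -import /tmp/bkpocr.ocr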
======================================================
Voting disk commands
=======================================================

[oracle@rtmrac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   3ef6b86734094ffdbf556ec910d990b6 (/dev/oracleasm/disks/ASMDISK1) [DATA]

Located 1 voting disk(s).
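
Because the voting files here live in ASM, they are not added or deleted one at a time; the whole set is moved in a single step with crsctl replace votedisk. A minimal sketch, assuming a hypothetical destination disk group +CRS (the number of voting files created follows the disk group redundancy: one for external, three for normal, five for high):

# crsctl replace votedisk +CRS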

CRSCTL commands
===========================

To stop the cluster on one node:

Note: stopping/starting the cluster should be done as the root user.

#crsctl stop crs

To start the cluster on one node:

#crsctl start crs

To check the status of cluster resources:

$crs_stat -t

(or, since crs_stat is deprecated from 11.2 onwards)

$crsctl status res -t

To check the cluster status on all nodes:

$crsctl check cluster -all


To relocate a resource in the cluster (see the usage sketch below):

crsctl relocate resource <res_name> -s node1 -n node3
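
For example, to move a hypothetical resource named myapp.vip to node3 (the resource name is illustrative; -f forces the relocation when dependent resources must be relocated along with it):

# crsctl relocate resource myapp.vip -n node3 -f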