Saturday 28 March 2015

OCR, Voting and OLR files


Overview:


Oracle Clusterware includes two important components that manage configuration and node membership: Oracle Cluster Registry (OCR), which also includes the local component Oracle Local Registry (OLR), and voting disks.
·       OCR manages Oracle Clusterware and Oracle RAC database configuration information
·       OLR resides on every node in the cluster and manages Oracle Clusterware configuration information for each particular node
·       Voting disks manage information about node membership. Each voting disk must be accessible by all nodes in the cluster for nodes to be members of the cluster
You can store OCR and voting disks on Oracle Automatic Storage Management (Oracle ASM), or a certified cluster file system.
Oracle Universal Installer for Oracle Clusterware 11g release 2 (11.2) does not support the use of raw or block devices. However, if you upgrade from a previous Oracle Clusterware release, then you can continue to use raw or block devices. Oracle recommends that you use Oracle ASM to store OCR and voting disks, and that you configure multiple voting disks during Oracle Clusterware installation to improve availability.


OCR:


OCR contains information about all Oracle resources in the cluster.
Oracle recommends that you configure:
·       At least three OCR locations, if OCR is configured on non-mirrored or non-redundant storage. Oracle strongly recommends that you mirror OCR if the underlying storage is not RAID. Mirroring can help prevent OCR from becoming a single point of failure.
·       At least two OCR locations if OCR is configured on an Oracle ASM disk group. You should configure OCR in two independent disk groups; typically these are the work area and the recovery area.
·       At least two OCR locations if OCR is configured on mirrored hardware or third-party mirrored volumes.
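An additional OCR location can be added or removed with the ocrconfig -add and -delete options, run as root. A minimal sketch, assuming a second Oracle ASM disk group named +OCRVOTE2 exists (the disk group name is illustrative):

```shell
# Add a second OCR location on another ASM disk group (run as root).
# "+OCRVOTE2" is a hypothetical disk group name.
ocrconfig -add +OCRVOTE2

# Remove an OCR location that is no longer needed.
ocrconfig -delete +OCRVOTE2
```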


Backing Up Oracle Cluster Registry:

Automatic backups: Oracle Clusterware automatically creates OCR backups every four hours, at the end of every day, and at the end of every week. Oracle Database always retains the last three backup copies of OCR. You cannot customize the backup frequency or the number of files that Oracle Database retains. These backups are performed by the CRSD process.

Manual backups: Use the following command on a node to force Oracle Clusterware to perform an OCR backup at any time.

ocrconfig -manualbackup

 The -manualbackup option is especially useful when you want to obtain a binary backup on demand, such as before you make changes to OCR.

NOTE: The OCRCONFIG executable is located in the $GRID_HOME/bin directory.  

OCRCONFIG utility:

Use the following command to display the OCR backup files.

ocrconfig -showbackup

To list manual and automatic OCR backups separately, use the manual or auto flag as follows.

ocrconfig -showbackup manual
ocrconfig -showbackup auto

The default location for generating backups on Linux or UNIX systems is $GRID_HOME/cdata/cluster_name, where cluster_name is the name of your cluster.
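The backup location itself can be changed with the ocrconfig -backuploc option, run as root; a location accessible from all nodes is preferable so backups survive the loss of a single node. The path below is only an example:

```shell
# Change the directory where automatic OCR backups are written (run as root).
# /u01/app/grid/ocr_backups is a hypothetical shared path.
ocrconfig -backuploc /u01/app/grid/ocr_backups
```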

The OCRCONFIG utility creates a log file in $GRID_HOME/log/host_name/client

OCRCHECK Utility



The OCRCHECK utility displays the version of the OCR's block format, total space available and used space, OCRID, and the OCR locations that you have configured. OCRCHECK performs a block-by-block checksum operation for all of the blocks in all of the OCRs that you have configured. It also returns an individual status for each file and a result for the overall OCR integrity check. 

You can only use OCRCHECK when the Oracle Cluster Ready Services stack is ONLINE on all nodes in the cluster.


# ocrcheck

==============================================

Voting Disk:

Voting disks manage information about node membership. Each voting disk must be accessible by all nodes in the cluster for nodes to be members of the cluster.

Storing Voting Disks on Oracle ASM

Oracle ASM manages voting disks differently from other files that it stores. If you choose to store your voting disks in Oracle ASM, then Oracle ASM stores all the voting disks for the cluster in the disk group you choose.
Once you configure voting disks on Oracle ASM, you can only make changes to the voting disks' configuration using the crsctl replace votedisk command. This is true even in cases where there are no working voting disks.

Backing Up Voting Disks

In Oracle Clusterware 11g release 2 (11.2), you no longer have to back up the voting disk. The voting disk data is automatically backed up in OCR as part of any configuration change and is automatically restored to any voting disk added.

Restoring Voting Disks

Run the following command as root from only one node to start the Oracle Clusterware stack in exclusive mode, which does not require voting files to be present or usable:
 
# crsctl start crs -excl

Run the crsctl query css votedisk command to retrieve the list of voting files currently defined:
 
crsctl query css votedisk

This list may be empty if all voting disks were corrupted, or it may contain entries that are marked with status 3 or OFF.
If the voting disks are stored in Oracle ASM, then run the following command to migrate the voting disks to the Oracle ASM disk group you specify:
 
crsctl replace votedisk +asm_disk_group

If you did not store voting disks in Oracle ASM, then run the following command using the File Universal Identifier (FUID) obtained in the previous step:
 
$ crsctl delete css votedisk FUID

Add a voting disk, as follows:
 
$ crsctl add css votedisk path_to_voting_disk

Stop the Oracle Clusterware stack as root:
 
# crsctl stop crs

Restart the Oracle Clusterware stack in normal mode as root:
 
# crsctl start crs
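Putting the steps above together, a restore session for voting disks stored in Oracle ASM might look like the following sketch (the disk group name +CRS is an assumption; substitute your own):

```shell
# 1. Start the stack in exclusive mode on one node (as root);
#    no usable voting files are required in this mode.
crsctl start crs -excl

# 2. List the voting files currently defined (may be empty or OFF).
crsctl query css votedisk

# 3. Recreate the voting disks in the chosen ASM disk group.
#    "+CRS" is a hypothetical disk group name.
crsctl replace votedisk +CRS

# 4. Stop the stack and restart it in normal mode (as root).
crsctl stop crs
crsctl start crs
```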

======================================

OLR: 

In Oracle Clusterware 11g release 2 (11.2), each node in a cluster has a local registry for node-specific resources, called an Oracle Local Registry (OLR), that is installed and configured when Oracle Clusterware installs OCR. It contains manageability information about Oracle Clusterware, including dependencies between various services. Oracle High Availability Services uses this information. OLR is located on local storage on each node in a cluster.

Its default location is in the path $GRID_HOME/cdata/host_name.olr 
Check the OLR status on each node using the following command.
 
# ocrcheck -local
 
To view the contents of the OLR:
 
# ocrdump -local -stdout
 
To back up the OLR manually:
 
# ocrconfig -local -manualbackup
 
To view the contents of an OLR backup file:
 
ocrdump -local -backupfile olr_backup_file_name
 
To change the OLR backup location:
 
ocrconfig -local -backuploc new_olr_backup_path
 
To restore the OLR, follow these steps:
 
# crsctl stop crs
 
# ocrconfig -local -restore file_name
 
# ocrcheck -local
 
# crsctl start crs
 
$ cluvfy comp olr
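The OLR restore steps above can be sketched as a single session. The backup file name below is illustrative; list the actual OLR backups on the node with ocrconfig -local -showbackup:

```shell
# Stop Oracle Clusterware on the node (as root).
crsctl stop crs

# Restore the OLR from a backup file.
# The file name is hypothetical; list real backups with:
#   ocrconfig -local -showbackup
ocrconfig -local -restore /u01/app/11.2.0/grid/cdata/node1_backup.olr

# Verify the restored OLR, then restart Clusterware (as root).
ocrcheck -local
crsctl start crs

# As the grid user, verify OLR integrity with the cluster verify utility.
cluvfy comp olr
```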
