Tuesday 30 July 2013

Oracle Database 12c pre-installation steps

-Unzip the files.

#unzip linuxamd64_12c_database_1of2.zip
#unzip linuxamd64_12c_database_2of2.zip


-Add or amend the following lines in the "/etc/sysctl.conf" file.

#vi /etc/sysctl.conf

fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576



-Run the following command to change the current kernel parameters.

#/sbin/sysctl -p
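Once the parameters are loaded, it is worth confirming the live values meet the documented minimums. A minimal sketch of such a check, assuming a POSIX shell (the `check_min` helper name is made up for illustration; on a real host you would feed it `sysctl -n` output as shown in the comment):

```shell
#!/bin/sh
# check_min NAME CURRENT REQUIRED - report whether a kernel value meets a minimum
check_min() {
  if [ "$2" -ge "$3" ]; then
    echo "$1 OK ($2 >= $3)"
  else
    echo "$1 TOO LOW ($2 < $3)"
  fi
}

# On a real host: check_min fs.file-max "$(sysctl -n fs.file-max)" 6815744
check_min fs.file-max 6815744 6815744
check_min kernel.shmmni 4096 4096
```

Multi-field parameters such as kernel.sem would need each field compared separately.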


-Add the following lines to the "/etc/security/limits.conf" file.

#vi /etc/security/limits.conf

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
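After editing, the oracle entries can be pulled back out of the file to confirm they were saved as intended. A rough sketch using awk against a throwaway copy (the /tmp path is only for the demo; on a real host point it at /etc/security/limits.conf):

```shell
#!/bin/sh
# write a sample limits file and extract the oracle nofile limits from it
cat <<'EOF' > /tmp/limits_demo.conf
oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
EOF

# prints "soft 1024" and "hard 65536" for the nofile entries
awk '$1 == "oracle" && $3 == "nofile" { print $2, $4 }' /tmp/limits_demo.conf
```

The authoritative check is to log in as oracle and run `ulimit -n` (soft limit) and `ulimit -Hn` (hard limit).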

-Create the new groups and users.

#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#groupadd -g 54324 backupdba

#useradd -u 500 -g oinstall -G dba oracle

-Set the password for the oracle user.

#passwd oracle

-Set SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

#vi /etc/selinux/config

SELINUX=permissive (or SELINUX=disabled)

-Once the change is complete, restart the server.


-If you have the Linux firewall enabled, you will need to disable or configure it. To disable it, do the following (use the iptables commands on releases running the iptables service, or the firewalld commands on releases running firewalld).

# service iptables stop
# chkconfig iptables off

#systemctl status firewalld
#service firewalld stop
#systemctl disable firewalld

-Create the directories.

#mkdir -p /u01/app/oracle/product/12.1.0/db_1
#chown -R oracle:oinstall /u01
#chmod -R 775 /u01
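The same mkdir/chmod sequence can be rehearsed in a scratch directory to confirm the resulting mode bits before touching /u01. A sketch using a throwaway mktemp base (chown is left out here because it needs root and the oracle user):

```shell
#!/bin/sh
# rehearse the layout under a temporary base instead of /
base=$(mktemp -d)
mkdir -p "$base/u01/app/oracle/product/12.1.0/db_1"
chmod -R 775 "$base/u01"

# every level should now report mode 775
stat -c '%a' "$base/u01" "$base/u01/app/oracle/product/12.1.0/db_1"
```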

-Log in as the oracle user and start the installation.


$./runInstaller




Wednesday 17 July 2013

Views to find the Oracle database version

We can find the database version, and whether it is a 32-bit or 64-bit installation, using v$version:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production



SQL> select * from product_component_version;

PRODUCT                             VERSION      STATUS
----------------------------------- ------------ ------------
NLSRTL                              11.2.0.1.0   Production
Oracle Database 11g Enterprise Edit 11.2.0.1.0   Production
ion

PL/SQL                              11.2.0.1.0   Production
TNS for Linux:                      11.2.0.1.0   Production



SQL>  column comp_name format a30;
SQL> select comp_name,version from dba_registry;

COMP_NAME                      VERSION
------------------------------ ------------------------------
OWB                            11.2.0.1.0
Oracle Application Express     3.2.1.00.10
Oracle Enterprise Manager      11.2.0.1.0
OLAP Catalog                   11.2.0.1.0
Spatial                        11.2.0.1.0
Oracle Multimedia              11.2.0.1.0
Oracle XML Database            11.2.0.1.0
Oracle Text                    11.2.0.1.0
Oracle Expression Filter       11.2.0.1.0
Oracle Rules Manager           11.2.0.1.0
Oracle Workspace Manager       11.2.0.1.0

COMP_NAME                      VERSION
------------------------------ ------------------------------
Oracle Database Catalog Views  11.2.0.1.0
Oracle Database Packages and T 11.2.0.1.0
ypes

JServer JAVA Virtual Machine   11.2.0.1.0
Oracle XDK                     11.2.0.1.0
Oracle Database Java Packages  11.2.0.1.0
OLAP Analytic Workspace        11.2.0.1.0
Oracle OLAP API                11.2.0.1.0

18 rows selected.
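From a shell script, the numeric release can be pulled out of the v$version banner with a simple grep. A sketch against a hard-coded banner line (in practice you would capture the banner from sqlplus output first):

```shell
#!/bin/sh
# extract the dotted release number from a v$version banner line
banner="Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi"
release=$(printf '%s\n' "$banner" | grep -oE '[0-9]+(\.[0-9]+)+')
echo "$release"   # 10.2.0.1.0
```

The pattern requires at least one dot, so the "10g" marketing name is skipped and only the real release string matches.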

Thursday 11 July 2013

understanding Oracle Data Guard 11gR2



Introduction to Oracle Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. 




Protection Mode:
There are three protection modes for the primary database:
Maximum Availability: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no standby location is available, it acts in the same manner as maximum performance mode until a standby becomes available again.
Maximum Performance: Transactions on the primary commit as soon as redo information has been written to the online redo log. Transfer of redo information to the standby server is asynchronous, so it does not impact on performance of the primary.
Maximum Protection: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no suitable standby location is available, the primary database shuts down.

Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:
·         Redo Transport Services
Control the automated transfer of redo data from the production database to one or more archival destinations.
·         Apply Services
Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files, or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.
·         Role Transitions
Change the role of a database from a standby database to a primary database, or from a primary database to a standby database using either a switchover or a failover operation.

Redo Transport Services


How to Send Redo Data

On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. Although you cannot use both the archiver and log writer processes to send redo data to the same destination, you can choose to use the log writer process for some destinations, while archiver processes send redo data to other destinations.
Data Guard also uses the fetch archive log (FAL) client and server to send archived redo log files to standby destinations following a network outage, for automatic gap resolution, and resynchronization. 

Using Archiver Processes (ARCn) to Archive Redo Data


By default, redo transport services use ARCn processes to archive the online redo log files on the primary database. ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. You must use the LGWR process to transmit redo data to standby locations that operate in other data protection modes.
Enabling ARCn Processes to Archive to Local or Remote Destinations
You specify attributes on the LOG_ARCHIVE_DEST_n initialization parameter to control the automated transfer of redo data from the primary database to other destinations. Because ARCn archiver processing is the default archival behavior, specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional.
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the maximum number of ARCn processes. By default, 4 archiver processes are invoked when the primary database instance starts and Oracle Database dynamically adjusts the number of processes to balance the archiver workload.
LOG_ARCHIVE_MAX_PROCESSES is a dynamic parameter, so it can be changed with the following command:
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 20;


 Archiving to Local Destinations Before Archiving to Remote Destinations


Archiving happens when a log switch occurs on the primary database:
·         On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).
·         On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (the MRP process) or SQL Apply (the LSP process) to apply the redo to the standby database.

Using the Log Writer Process (LGWR) to Archive Redo Data

Using the LGWR process differs from ARCn processing, because instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination will either be synchronous or asynchronous, based on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.

LOG_ARCHIVE_DEST_n Attributes for LGWR Archival Processing

- LGWR, SYNC, and ASYNC
You must specify the LGWR and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter to enable redo transport services to use the LGWR process to transmit redo data to remote archival destinations.
The SYNC attribute performs all network I/O synchronously, in conjunction with each write operation to the online redo log file, and waits for the network I/O to complete. This is the default network transmission setting.
The ASYNC attribute performs all network I/O asynchronously and control is returned to the executing application or user immediately, without waiting for the network I/O to complete.





In a Data Guard configuration that uses the LGWR process to synchronously transmit redo data to the standby system at the same time it is writing redo data to the online redo log file on the primary database:
  • On the primary database, the LGWR process submits the redo data to one or more network server (LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.
  • On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.
A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.

Apply Services

The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database, and also allow read-only access to the data.

The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data:
·         For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using the standard recovery techniques of an Oracle database, automatically keeping the physical standby database up to date.
·         For logical standby databases, Data Guard uses SQL Apply technology, which first transforms the received redo data into SQL statements and then executes them on the standby database.

By default, for a newly created standby database, the primary database is in maximum performance mode.

SELECT protection_mode FROM v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL>
The mode can be switched using the following commands. Note the alterations in the redo transport attributes.
-- Maximum Availability.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- Maximum Performance.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- Maximum Protection.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;

ALTER DATABASE OPEN;


Wednesday 10 July 2013

Active cloning in 11gr2 (Using RMAN DUPLICATE command)

Active Database cloning

Oracle 11g introduced the ability to create duplicate databases directly without the need for a backup. This is known as active database duplication. The process is similar to the backup-based duplication, with a few exceptions.
First, and most obviously, you don't need a backup of the source system, but it does have to be in ARCHIVELOG mode.

-The passwords in the password files must match on both servers, so remember to set the correct password when creating the password file on the destination server, or copy the source password file to the destination:

$scp orapworcl oracle@192.168.152.131:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwclone

Both the source and destination databases require entries in the "tnsnames.ora" file on the destination server.

-Configure the tnsnames.ora on the destination server for both the clone and the source (orcl) databases.

$vi tnsnames.ora

clone =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.131)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = clone)
    (UR=A)
    )
  )

orcl =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.132)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SID = orcl)
    (UR=A)
    )
  )
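The quickest sanity check for these entries is `tnsping clone` and `tnsping orcl` from the destination server. If you want to eyeball an entry from a script, the host can be grepped out; a rough sketch against a throwaway copy (the /tmp path is made up for the demo):

```shell
#!/bin/sh
# write a cut-down tnsnames-style entry and extract the HOST value for the clone alias
cat <<'EOF' > /tmp/tnsnames_demo.ora
clone =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.131)(PORT = 1521))
    )
  )
EOF

# prints "HOST = 192.168.152.131"
grep -A 3 '^clone =' /tmp/tnsnames_demo.ora | grep -oE 'HOST = [0-9.]+'
```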


The destination server requires a static listener configuration in its "listener.ora" file.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = clone)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.132)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

On the clone server, when connecting to RMAN you must use a connect string for both the target and auxiliary connections.

$vi /etc/oratab

clone:/u01/app/oracle/product/11.2.0/dbhome_1:N

:wq 
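The new oratab entry can be read back with awk, which is essentially what the oraenv/dbhome utilities do. A sketch against a throwaway copy (the /tmp path is only for the demo; on a real host use /etc/oratab):

```shell
#!/bin/sh
# write a one-line oratab-style file and look up the home for the clone SID
cat <<'EOF' > /tmp/oratab_demo
clone:/u01/app/oracle/product/11.2.0/dbhome_1:N
EOF

# prints the ORACLE_HOME for the "clone" entry
awk -F: '$1 == "clone" { print $2 }' /tmp/oratab_demo
```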

$export ORACLE_SID=clone
$export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$export PATH=$PATH:$ORACLE_HOME/bin

$rman TARGET sys/password@orcl AUXILIARY sys/password@clone

Include the FROM ACTIVE DATABASE clause in the DUPLICATE command:


RMAN> DUPLICATE TARGET DATABASE TO clone FROM ACTIVE DATABASE;



Friday 5 July 2013

Central and Local inventories for Oracle

Central Inventory
The Central Inventory contains the information relating to all Oracle products installed on a host. It contains the following files and folders:

  • Inventory File
  • Logs Directory
Inventory File
This file lists all the Oracle homes installed on the node. For each Oracle home, it also lists the Oracle home name, home index, and nodes on which the home is installed. It also mentions if the home is an Oracle Clusterware home or a removed Oracle home. It can only detect removed Oracle homes created using Oracle Universal Installer version 11.2 and later.
This file is present in the following location:
<central inventory location>/ContentsXML/inventory.xml
Logs Directory
The Central Inventory contains installation logs in the following location:
<central inventory location>/logs
The logs directory contains the logs corresponding to all installations performed on a particular node. You can also find a copy of the installation log in the $ORACLE_HOME/cfgtoollogs directory. The installation logs for an installation are identified by the timestamp associated with the log files. These files are generally saved in the following format:

<Name_of_Action><YYYY-MM-DD_HH-MM-SS{AM/PM}>.log
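A name in that shape can be reproduced with date's format codes; a quick sketch (the installActions prefix is just an example action name, not a guaranteed one):

```shell
#!/bin/sh
# compose a log file name in the <Name_of_Action><YYYY-MM-DD_HH-MM-SS{AM/PM}>.log shape
stamp=$(LC_ALL=C date '+%Y-%m-%d_%I-%M-%S%p')
echo "installActions${stamp}.log"
```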

Oracle Home Inventory

The Oracle home inventory, or local inventory, is present inside each Oracle home. It contains only information relevant to that particular Oracle home, and is located here:

$ORACLE_HOME/inventory

It contains the following files and folders:

  • Components File
  • Home Properties File
  • Other Folders
Components File
This file contains the details about third-party applications like Java Runtime Environment (JRE) required by different Java-based Oracle tools and components. 
This file is located here:
$ORACLE_HOME/inventory/ContentsXML/comps.xml
Home Properties File
This file contains the details about the node list, the local node name, and the Oracle Clusterware flag for the Oracle home. In a shared Oracle home, the local node information is not present.
This file is located here:

$ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml

Other Folders in the Oracle Home Inventory

Scripts
Contains the scripts used for the cloning operation.
ContentsXML
Contains the details of the components and libraries installed.
Templates
Contains the template files used for cloning.
oneoffs
Contains the details of the one-off patches applied.