Thursday, 24 October 2013

11g Active Data Guard - enabling Real-Time Query


Active Data Guard is a useful new feature in 11g (although it requires a separate license) that lets us query the standby database while redo is being applied to it. In earlier releases, we had to stop the log apply, open the database in read-only mode, and then restart the log apply once the database was taken out of read-only mode.
With Oracle 11g Active Data Guard, we can use the standby site to offload reporting and query-type applications without compromising on high availability.
How do we enable Active Data Guard?
If we are not using the Data Guard Broker, we need to open the standby database, set it in read only mode and then start the managed recovery as shown below.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1069252608 bytes
Fixed Size 2154936 bytes
Variable Size 847257160 bytes
Database Buffers 213909504 bytes
Redo Buffers 5931008 bytes
Database mounted.
Database opened.

SQL> recover managed standby database using current logfile disconnect;
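Once managed recovery is running with the database open read-only, real-time query is enabled. This can be confirmed from the standby; on 11g with Active Data Guard the expected value is READ ONLY WITH APPLY.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY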

ASM Fast Mirror Resync (11g new feature)

ASM Fast Mirror Resync Explained Experimentally


ASM fast mirror resync is a new 11g ASM feature that restores redundancy quickly after a transient disk path failure. With 10g ASM, if redundancy is used and a disk path fails, the disk is taken offline and, shortly after, dropped. To preserve redundancy, all the redundant mirror extent copies that existed on the failed disk are rebuilt on the remaining disks of the disk group, which is a very costly operation. With 11g ASM fast mirror resync, when a transient disk path failure occurs the disks are still taken offline, but they are not dropped during a period of time determined by the value of a new disk group attribute, DISK_REPAIR_TIME. ASM keeps track of all extents modified during the failure; after the failure is fixed, only the modified extents are resynchronized, so redundancy is restored very quickly.

It is important to note that this applies to transient failures, i.e. failures that occur while ASM tries to read from or write to an extent on a disk. There is no damaged disk or data corruption involved: your data remains intact on the disk, but it cannot be accessed because of a failure of a cable, controller, etc.
The DISK_REPAIR_TIME attribute of the disk group determines the acceptable outage duration, in other words the time you have to fix the failure. It is also the countdown timer after which ASM drops the disks that have been taken offline, so keep an eye on it while repairing the failure. If the remaining time is not enough, extend it for as long as you believe you can fix the failure and tolerate the reduced redundancy.
Note that it is always possible to take disks offline in advance, using the ALTER DISKGROUP ... OFFLINE statement, for planned preventative maintenance.
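For example, before planned maintenance on a failure group, its disks can be taken offline with an explicit drop timeout that overrides DISK_REPAIR_TIME (the 10h value here is just an illustration):

    alter diskgroup dg1 offline disks in failgroup dg1_fg02 drop after 10h;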
After this brief introduction, I would like to make this note more useful by doing something practical, rather than re-explaining concepts that are already covered properly in the Oracle documentation. That is why I have prepared the following scenario.

Scenario

I will create a transient disk path failure, fix the failure, and finally bring the failed disks back online. The objective is to set up ASM fast mirror resync and monitor it before and after a transient disk path failure. In this scenario I have used an 11gR2 Oracle database and an 11gR2 ASM instance on an OEL 5.4 box. If you would like to replay this scenario yourself, use a sandbox, in other words a box that has no significant importance to others; otherwise some of the tasks performed here may have terrible consequences that you would never want to happen to you.
Following tasks will be performed in the order given.
1. Perform prerequisite actions to use ASM fast mirror resync
2. Simulate a transient failure
3. Fix the transient failure
4. Monitor the Fast Mirror Resync in action
All the scripts used in the scenario are listed at the end of this post.

1. Perform prerequisite actions to use ASM fast mirror resync

As mentioned above, we need to use ASM redundancy (not external redundancy). Create a disk group with normal redundancy and a tablespace with a datafile in this disk group:
    create diskgroup dg1 normal redundancy
    failgroup dg1_fg01 disk
    '/dev/raw/raw1' name raw_disk1,
    '/dev/raw/raw2' name raw_disk2,
    '/dev/raw/raw5' name raw_disk5
    failgroup dg1_fg02 disk
    '/dev/raw/raw3' name raw_disk3,
    '/dev/raw/raw4' name raw_disk4,
    '/dev/raw/raw6' name raw_disk6;
 
    create tablespace asmfmr_ts datafile '+DG1/asmfmr_ts_01.dbf' size 128M;
The ASM disk group attributes COMPATIBLE.RDBMS and COMPATIBLE.ASM must be set to a minimum software release of 11.1, so a 10g database cannot use this feature. Set up fast mirror resync as below. Setting DISK_REPAIR_TIME is optional; it has a default value of 3.6h.
    alter diskgroup dg1 set attribute 'compatible.asm' = '11.2.0.0.0';
    alter diskgroup dg1 set attribute 'compatible.rdbms' = '11.2.0.0.0';
    alter diskgroup dg1 set attribute 'disk_repair_time'='5h';
List the new attributes we have just updated using the script dg_attribs.sql
SQL> @dg_attribs.sql
DG NAME              ATTRIB NAME                    VALUE                READ_ON
-------------------- ------------------------------ -------------------- -------
DG1                  disk_repair_time               5h                   N
DG1                  au_size                        1048576              Y
DG1                  sector_size                    512                  Y
DG1                  access_control.umask           066                  N
DG1                  compatible.rdbms               11.2.0.0.0           N
DG1                  cell.smart_scan_capable        FALSE                N
DG1                  access_control.enabled         FALSE                N
DG1                  compatible.asm                 11.2.0.0.0           N

2. Simulate a transient failure

As can be seen from the paths of the ASM disks below, I have used raw devices as ASM storage; each raw device corresponds to a disk in a disk group. Since we can change the file access permissions of raw devices, I can create a transient disk path failure by changing the read-write permissions of the raw devices.
Below is the status of the ASM disks just before the failure.
SQL> @chkasmdiskstat.sql
NAME            PATH            MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
--------------- --------------- ------- ------------ ------- -------- ---------- ------------
RAW_DISK6       /dev/raw/raw6   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02              0
RAW_DISK5       /dev/raw/raw5   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0
RAW_DISK4       /dev/raw/raw4   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02              0
RAW_DISK3       /dev/raw/raw3   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02              0
RAW_DISK2       /dev/raw/raw2   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0
RAW_DISK1       /dev/raw/raw1   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0
Below I simulate a transient disk path failure by changing the read-write permissions of the disks (raw devices) to 000.
From this point on, ASM cannot read from or write to these disks.
# chmod 000 /dev/raw/raw6 /dev/raw/raw4 /dev/raw/raw3
At this stage it is better to run a workload, to modify some of the existing redundant extent copies in the surviving failure group DG1_FG01. It will also make ASM realize that some disks are missing when it tries to mirror the redundant copies to extents on the failed disks.
SQL> @workload.sql
As can be seen below, ASM cannot access any disk in failure group DG1_FG02 and has taken these disks offline. The disk repair timer has also started. Beyond this information, you can find further details about the consequences of this failure in the ASM alert log.
SQL> @chkasmdiskstat.sql
NAME            PATH            MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP  REPAIR_TIMER
--------------- --------------- ------- ------------ ------- -------- ---------- ------------
RAW_DISK3                       MISSING UNKNOWN      OFFLINE NORMAL   DG1_FG02          18000
RAW_DISK4                       MISSING UNKNOWN      OFFLINE NORMAL   DG1_FG02          18000
RAW_DISK6                       MISSING UNKNOWN      OFFLINE NORMAL   DG1_FG02          18000
RAW_DISK5       /dev/raw/raw5   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0
RAW_DISK2       /dev/raw/raw2   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0
RAW_DISK1       /dev/raw/raw1   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01              0

3. Fix the transient failure

The fix is simply to revert the raw device read-write permissions to 660, which makes the raw devices accessible to ASM again.
# chmod 660 /dev/raw/raw6 /dev/raw/raw4 /dev/raw/raw3
Now it is time to take the disks online.
SQL> alter diskgroup dg1 online disks in failgroup dg1_fg02;
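If only a single disk had failed, it could be brought back individually; there is also a shorthand to online everything that is offline (disk names are from this scenario):

SQL> alter diskgroup dg1 online disk raw_disk3;
SQL> alter diskgroup dg1 online all;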

4. Monitor the Fast Mirror Resync in action

Below it can be seen that the stale extents of the disks taken online are being resynchronized. Look at the devices below with the value SYNCING in the column MODE_STATUS.
SQL> @chkasmdiskstat.sql
NAME            PATH            MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP   REPAIR_TIMER
--------------- --------------- ------- ------------ ------- -------  ----------- ------------
RAW_DISK6       /dev/raw/raw6   CACHED  MEMBER       SYNCING NORMAL   DG1_FG02           17297
RAW_DISK5       /dev/raw/raw5   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0
RAW_DISK4       /dev/raw/raw4   CACHED  MEMBER       SYNCING NORMAL   DG1_FG02           17297
RAW_DISK3       /dev/raw/raw3   CACHED  MEMBER       SYNCING NORMAL   DG1_FG02           17297
RAW_DISK2       /dev/raw/raw2   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0
RAW_DISK1       /dev/raw/raw1   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0
The resync operation is also represented by the status ONLINE in the OPERATION column of the V$ASM_OPERATION view. See below.
SQL> @chkasmoper.sql
GROUP_NUMBER OPERA STAT
------------ ----- ----
1            ONLIN RUN
After the resync operation completes, redundancy has been restored and the disks are online again. See below.
SQL> @chkasmdiskstat.sql
NAME            PATH            MOUNT_S HEADER_STATU MODE_ST STATE    FAILGROUP   REPAIR_TIMER
--------------- --------------- ------- ------------ ------- -------  ----------- ------------
RAW_DISK6       /dev/raw/raw6   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02               0
RAW_DISK5       /dev/raw/raw5   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0
RAW_DISK4       /dev/raw/raw4   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02               0
RAW_DISK3       /dev/raw/raw3   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG02               0
RAW_DISK2       /dev/raw/raw2   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0
RAW_DISK1       /dev/raw/raw1   CACHED  MEMBER       ONLINE  NORMAL   DG1_FG01               0

List of scripts used in this post

-- chkasmdiskstat.sql
-- Show ASM disks status
set pagesize 100
set linesize 200
col path for a15
col name for a15
col failgroup for a15
select disk_number,name,path,mount_status,header_status,mode_status,state,failgroup,repair_timer
from v$asm_disk;
 
-- dg_attribs.sql
-- show disk group attributes
col "dg name" for a20
col "attrib name" for a30
col value for a20
set pagesize 20
select dg.name  "dg name"
      ,a.name   "attrib name"
      ,a.value
      ,read_only
from v$asm_diskgroup dg,
     v$asm_attribute a
where dg.name = 'DG1'
and   dg.group_number = a.group_number;
 
-- chkasmoper.sql
-- show ongoing ASM operations
select * from v$asm_operation;
 
-- workload.sql
-- create a schema having a table using table space asmfmr_ts which has data files in the disk group DG1
-- This script is going to modify the contents of the extents on the disk group DG1
connect / as sysdba
create user asmfmr identified by asmfmr default tablespace asmfmr_ts;
grant create session, create table to asmfmr;
alter user asmfmr quota unlimited on asmfmr_ts;
connect asmfmr/asmfmr
drop table table_workload;
create table table_workload (col1 number, col2 varchar2(200));
begin
 insert into table_workload values ( 1,'workload data');
 for i in 1..20 loop
   insert into table_workload  select * from table_workload;
 end loop;
end;
/
commit;
select count(1) from table_workload;

exit

Tuesday, 13 August 2013

Materialized Views

What is a Materialized View?

    A materialized view is a replica of a target master from a single point in time. The master can be either a master table at a master site or a master materialized view at a materialized view site. Whereas in multimaster replication tables are continuously updated by other master sites, materialized views are updated from one or more masters through individual batch updates, known as refreshes, from a single master site or master materialized view site, as illustrated in the figure below. The arrows represent database links.

Materialized View Connected to a Single Master Site

Read-Only Materialized Views


You can make a materialized view read-only during creation by omitting the FOR UPDATE clause or disabling the equivalent option in the Replication Management tool.
SQL>CREATE MATERIALIZED VIEW hr.employees AS SELECT * FROM hr.employees@orc1.world;
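A read-only materialized view is refreshed on demand through the DBMS_MVIEW package; for example, a complete refresh of the view created above:

SQL>EXEC DBMS_MVIEW.REFRESH('hr.employees', 'C');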

Updatable Materialized Views


You can make a materialized view updatable during creation by including the FOR UPDATE clause or enabling the equivalent option in the Replication Management tool. For changes made to an updatable materialized view to be pushed back to the master during refresh, the updatable materialized view must belong to a materialized view group.

SQL>CREATE MATERIALIZED VIEW hr.departments FOR UPDATE AS SELECT * FROM hr.departments@orc1.world;

Writeable Materialized Views

A writeable materialized view is one that is created using the FOR UPDATE clause but is not part of a materialized view group. Users can perform DML operations on a writeable materialized view, but if you refresh the materialized view, then these changes are not pushed back to the master and the changes are lost in the materialized view itself. Writeable materialized views are typically allowed wherever fast-refreshable read-only materialized views are allowed.


Required Privileges for Materialized View Operations

Three distinct types of users perform operations on materialized views:
  • Creator: the user who creates the materialized view
  • Refresher: the user who refreshes the materialized view
  • Owner: the user who owns the materialized view. The materialized view resides in this user's schema.
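When one user plays all three roles, the baseline local grants look something like this (mv_user is a hypothetical user; appropriate access to the master via the database link is required as well):

SQL>GRANT CREATE SESSION, CREATE TABLE, CREATE MATERIALIZED VIEW TO mv_user;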

Tuesday, 30 July 2013

oracle database 12c pre installation steps

-Unzip the files.

#unzip linuxamd64_12c_database_1of2.zip
#unzip linuxamd64_12c_database_2of2.zip


-Add or amend the following lines in the "/etc/sysctl.conf" file.

#vi /etc/sysctl.conf

fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576



-Run the following command to change the current kernel parameters.

#/sbin/sysctl -p


-Add the following lines to the "/etc/security/limits.conf" file.

#vi /etc/security/limits.conf

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

-Create the new groups and users.

#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#groupadd -g 54324 backupdba

#useradd -u 500 -g oinstall -G dba oracle

-set password for oracle

#passwd oracle

-Set secure Linux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

#vi /etc/selinux/config

SELINUX=permissive
(or SELINUX=disabled to disable SELinux completely)

-Once the change is complete, restart the server


-If you have the Linux firewall enabled, you will need to disable or configure it. To disable it, do the following (iptables on older releases, firewalld on newer ones).

# service iptables stop
# chkconfig iptables off

#systemctl status firewalld
#service firewalld stop
#systemctl disable firewalld

- create directories

#mkdir -p /u01/app/oracle/product/12.1.0/db_1
#chown -R oracle:oinstall /u01
#chmod -R 775 /u01

-login as the oracle user and start the installation


$./runInstaller




Wednesday, 17 July 2013

Views to know oracle database version

We can find the database version and bit version using V$VERSION:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bi
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production



SQL> select * from product_component_version;

PRODUCT                             VERSION      STATUS
----------------------------------- ------------ ------------
NLSRTL                              11.2.0.1.0   Production
Oracle Database 11g Enterprise Edition  11.2.0.1.0   Production
PL/SQL                              11.2.0.1.0   Production
TNS for Linux:                      11.2.0.1.0   Production



SQL>  column comp_name format a30;
SQL> select comp_name,version from dba_registry;

COMP_NAME                      VERSION
------------------------------ ------------------------------
OWB                            11.2.0.1.0
Oracle Application Express     3.2.1.00.10
Oracle Enterprise Manager      11.2.0.1.0
OLAP Catalog                   11.2.0.1.0
Spatial                        11.2.0.1.0
Oracle Multimedia              11.2.0.1.0
Oracle XML Database            11.2.0.1.0
Oracle Text                    11.2.0.1.0
Oracle Expression Filter       11.2.0.1.0
Oracle Rules Manager           11.2.0.1.0
Oracle Workspace Manager       11.2.0.1.0

COMP_NAME                      VERSION
------------------------------ ------------------------------
Oracle Database Catalog Views  11.2.0.1.0
Oracle Database Packages and Types  11.2.0.1.0

JServer JAVA Virtual Machine   11.2.0.1.0
Oracle XDK                     11.2.0.1.0
Oracle Database Java Packages  11.2.0.1.0
OLAP Analytic Workspace        11.2.0.1.0
Oracle OLAP API                11.2.0.1.0

18 rows selected.

Thursday, 11 July 2013

understanding oracle dataguard 11gr2



Introduction to Oracle Data Guard
Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data. Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Then, if the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, minimizing the downtime associated with the outage. 




Protection Mode:
There are three protection modes for the primary database:
Maximum Availability: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no standby location is available, it acts in the same manner as maximum performance mode until a standby becomes available again.
Maximum Performance: Transactions on the primary commit as soon as redo information has been written to the online redo log. Transfer of redo information to the standby server is asynchronous, so it does not impact on performance of the primary.
Maximum Protection: Transactions on the primary do not commit until redo information has been written to the online redo log and the standby redo logs of at least one standby location. If no suitable standby location is available, the primary database shuts down.

Data Guard Services

The following sections explain how Data Guard manages the transmission of redo data, the application of redo data, and changes to the database roles:
·         Redo Transport Services
Control the automated transfer of redo data from the production database to one or more archival destinations.
·         Apply Services
Apply redo data on the standby database to maintain transactional synchronization with the primary database. Redo data can be applied either from archived redo log files or, if real-time apply is enabled, directly from the standby redo log files as they are being filled, without requiring the redo data to be archived first at the standby database.
·         Role Transitions
Change the role of a database from a standby database to a primary database, or from a primary database to a standby database using either a switchover or a failover operation.

Redo Transport Services


How to Send Redo Data

On the primary database, Oracle Data Guard uses archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. Although you cannot use both the archiver and log writer processes to send redo data to the same destination, you can choose to use the log writer process for some destinations, while archiver processes send redo data to other destinations.
Data Guard also uses the fetch archive log (FAL) client and server to send archived redo log files to standby destinations following a network outage, for automatic gap resolution, and resynchronization. 
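On the standby, gap resolution is driven by the FAL_SERVER initialization parameter (and FAL_CLIENT in older releases); the TNS service names below are placeholders:

SQL>ALTER SYSTEM SET FAL_SERVER='db11g_prim';
SQL>ALTER SYSTEM SET FAL_CLIENT='db11g_stby';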

Using Archiver Processes (ARCn) to Archive Redo Data


By default, redo transport services use ARCn processes to archive the online redo log files on the primary database. ARCn archival processing supports only the maximum performance level of data protection in Data Guard configurations. You must use the LGWR process to transmit redo data to standby locations that operate in other data protection modes.
Enabling ARCn Processes to Archive to Local or Remote Destinations
You specify attributes on the LOG_ARCHIVE_DEST_n initialization parameter to control the automated transfer of redo data from the primary database to other destinations. Because ARCn archiver processing is the default archival behavior, specifying the ARCH attribute on the LOG_ARCHIVE_DEST_n parameter is optional.
The LOG_ARCHIVE_MAX_PROCESSES initialization parameter specifies the maximum number of ARCn processes. By default, 4 archiver processes are invoked when the primary database instance starts and Oracle Database dynamically adjusts the number of processes to balance the archiver workload.
We can change by following command, it is dynamic parameter
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 20;


 Archiving to Local Destinations Before Archiving to Remote Destinations


Archiving happens when a log switch occurs on the primary database:
·         On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2).
·         On the remote destination, the remote file server process (RFS) will, in turn, write the redo data to an archived redo log file from a standby redo log file. Log apply services use Redo Apply (the MRP process) or SQL Apply (the LSP process) to apply the redo to the standby database.

Using the Log Writer Process (LGWR) to Archive Redo Data

Using the LGWR process differs from ARCn processing, because instead of waiting for the online redo log to switch at the primary database and then writing the entire archived redo log at the remote destination all at once, the LGWR process selects a standby redo log file at the standby site that reflects the log sequence number (and size) of the current online redo log file of the primary database. Then, as redo is generated at the primary database, it is also transmitted to the remote destination. The transmission to the remote destination will either be synchronous or asynchronous, based on whether the SYNC or the ASYNC attribute is set on the LOG_ARCHIVE_DEST_n parameter. Synchronous LGWR processing is required for the maximum protection and maximum availability modes of data protection in Data Guard configurations.

LOG_ARCHIVE_DEST_n Attributes for LGWR Archival Processing

- LGWR, SYNC, and ASYNC
You must specify the LGWR and SERVICE attributes on the LOG_ARCHIVE_DEST_n parameter to enable redo transport services to use the LGWR process to transmit redo data to remote archival destinations.
The SYNC attribute performs all network I/O synchronously, in conjunction with each write operation to the online redo log file, and waits for the network I/O to complete. This is the default network transmission setting.
The ASYNC attribute performs all network I/O asynchronously and control is returned to the executing application or user immediately, without waiting for the network I/O to complete.
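Putting these attributes together, a typical destination setting for synchronous LGWR transport looks like this (the service name and destination number are illustrative):

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby LGWR SYNC AFFIRM';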





The image above shows a Data Guard configuration that uses the LGWR process to synchronously transmit redo data to the standby system at the same time it is writing redo data to the online redo log file on the primary database:
  • On the primary database, the LGWR process submits the redo data to one or more network server (LNSn) processes, which then initiate the network I/O in parallel to multiple remote destinations. Transactions are not committed on the primary database until the redo data necessary to recover the transaction is received by all LGWR SYNC destinations.
  • On the standby system, the remote file server (RFS) receives redo data over the network from the LGWR process and writes the redo data to the standby redo log files.
A log switch on the primary database triggers a log switch on the standby database, causing ARCn processes on the standby database to archive the standby redo log files to archived redo log files on the standby database. Then, Redo Apply (MRP process) or SQL Apply (LSP process) applies the redo data to the standby database. If real-time apply is enabled, Data Guard recovers redo data directly from the current standby redo log file as it is being filled up by the RFS process.

Apply Services

The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database. It also allows read-only access to the data.

The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data:
·         For physical standby databases, Data Guard uses Redo Apply technology, which applies redo data on the standby database using standard recovery techniques of an Oracle database, as shown in 
 Automatic Updating of a Physical Standby Database

By default, for a newly created standby database, the primary database is in maximum performance mode.

SELECT protection_mode FROM v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

SQL>
The mode can be switched using the following commands. Note the alterations in the redo transport attributes.
-- Maximum Availability.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
-- Maximum Performance.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

-- Maximum Protection.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=db11g_stby AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;

ALTER DATABASE OPEN;

steps for setting up Data Guard 11gR2

Wednesday, 10 July 2013

Active cloning in 11gr2 (Using RMAN DUPLICATE command)

Active Database cloning

Oracle 11g introduced the ability to create duplicate databases directly without the need for a backup. This is known as active database duplication. The process is similar to the backup-based duplication, with a few exceptions.
First, and most obviously, you don't need a backup of the source system, but it does have to be in ARCHIVELOG mode.

->The passwords in the password files must match on both servers, so remember to set the correct password when creating the password file on the destination server, or copy the source password file to the destination:

$scp orapworcl oracle@192.168.152.131:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwclone
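Alternatively, the password file can be created fresh on the destination with the orapwd utility (the password shown is a placeholder and must match the SYS password of the source database):

$orapwd file=$ORACLE_HOME/dbs/orapwclone password=MySysPwd entries=10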

The destination server needs "tnsnames.ora" entries for both the source and the clone databases.

-Configure the tnsnames.ora on the destination server for both the clone and the source:

$vi tnsnames.ora

clone =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.131)(PORT = 1521))
    )
    (CONNECT_DATA =
      (ORACLE_SID = clone)
    (UR=A)
    )
  )

orcl =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.132)(PORT = 1521))
    )
    (CONNECT_DATA =
      (ORACLE_SID = orcl)
    (UR=A)
    )
  )


The destination server also requires a static listener configuration in the "listener.ora" file.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = clone)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = clone)
    )
  )
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.152.132)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

ADR_BASE_LISTENER = /u01/app/oracle

On the clone server, when connecting to RMAN you must use a connect string for both the target and the auxiliary connections.

$vi /etc/oratab

clone:/u01/app/oracle/product/11.2.0/dbhome_1:N

:wq 

$export ORACLE_SID=clone
$export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$export PATH=$PATH:$ORACLE_HOME/bin

$rman TARGET sys/password@orcl AUXILIARY sys/password@clone

Include the FROM ACTIVE DATABASE clause in the DUPLICATE command


RMAN>DUPLICATE DATABASE TO clone FROM ACTIVE DATABASE;
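In practice the command usually also renames files and builds the spfile for the clone instance; a sketch along these lines (the 'orcl'/'clone' string pairs are illustrative and must match your directory layout):

RMAN>DUPLICATE DATABASE TO clone
  FROM ACTIVE DATABASE
  SPFILE
    PARAMETER_VALUE_CONVERT 'orcl','clone'
    SET DB_FILE_NAME_CONVERT='orcl','clone'
    SET LOG_FILE_NAME_CONVERT='orcl','clone';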