Oracle Data Pump:
Oracle Data Pump is a newer, faster, and more flexible
alternative to the original "exp" and "imp" utilities used in
previous Oracle versions.
Roles:
Many Data Pump Export and Import operations require the user
to have the DATAPUMP_EXP_FULL_DATABASE role and/or the
DATAPUMP_IMP_FULL_DATABASE role. The DATAPUMP_EXP_FULL_DATABASE role
affects only export operations. The DATAPUMP_IMP_FULL_DATABASE role affects
import operations and operations that use the Import SQLFILE parameter.
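For example, a DBA (or any user allowed to grant these roles) could grant them to an ordinary user; the user name below is only an illustration:
SQL> GRANT DATAPUMP_EXP_FULL_DATABASE, DATAPUMP_IMP_FULL_DATABASE TO ramesh;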
Key Features:
Fast Performance:
Operations performed with the new Data Pump Export and
Import utilities are typically much faster than operations performed with the
original Export and Import utilities. With Data Pump Export, when the direct
path method of unloading is used, a single stream of data unload is about two
times faster than original Export. This is because the direct path API has been
modified to be even more efficient than before. Depending on the level of
parallelism, the performance improvement can be even greater.
With Data Pump Import, a single stream of data load is about
15-45 times faster than original Import. This is because original Import uses
only conventional mode inserts, whereas Data Pump Import uses the direct path
method of loading. As with Export, the performance improvement can be even
greater depending on the level of parallelism.
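A rough sketch of a parallel export, assuming Enterprise Edition (parallel Data Pump is an Enterprise Edition feature) and a directory object named dump like the one created later in this post; the %U wildcard lets Data Pump create as many dump files as it needs:
$ expdp system/xxxxx directory=dump dumpfile=full_db_%U.dmp logfile=full_db.log full=y parallel=4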
Improved Management
Restart:
Every Data Pump operation has a master table that is created
in the schema of the user running a Data Pump job. The master table maintains
information about all aspects of the job, such as the current state of every
object exported or imported and its location in the dump file set. In the event
of a planned or unplanned job stoppage, Data Pump knows which objects were
currently being worked on and whether or not they completed successfully.
Therefore, all stopped Data Pump jobs can be restarted without loss of data as
long as the master table and dump file set remain undisturbed while the job is
stopped.
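For instance, a stopped job can be located through the DBA_DATAPUMP_JOBS view and then restarted by attaching to it; the job name below (SYS_EXPORT_FULL_01) is just the typical system-generated name and is only an illustration:
SQL> select owner_name, job_name, state from dba_datapump_jobs;
$ expdp system/xxxxx attach=SYS_EXPORT_FULL_01
Export> START_JOB
Export> CONTINUE_CLIENT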
Object Selection:
A Data Pump job can exclude or include virtually any type of
object and any subset of objects within a type. The following client parameters
are used:
The EXCLUDE parameter
filters the metadata that is exported and imported by specifying objects and
object types to be excluded from the current operation. An optional name
qualifier can be used for finer selectivity within each object type specified.
The INCLUDE parameter
filters the metadata that is exported and imported by specifying objects and
object types to be included for the current operation. An optional name
qualifier can be used for finer selectivity within each object type specified.
The CONTENT parameter
specifies what is exported or imported: metadata only, data only, or both.
The QUERY parameter
filters data by specifying a clause for a SQL SELECT statement, which is
applied to all tables in the export job or to a specific table.
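A couple of minimal sketches using these filters (the schema and table names match the demo schema used later in this post; note that EXCLUDE and INCLUDE are mutually exclusive in a single job, and values containing quotes are easiest to supply through a parameter file to avoid shell escaping):
$ expdp ramesh/ramesh directory=dump dumpfile=ramesh_meta.dmp logfile=ramesh_meta.log content=metadata_only exclude=index
$ expdp ramesh/ramesh parfile=filters.par
where filters.par contains:
directory=dump
dumpfile=ramesh_qry.dmp
logfile=ramesh_qry.log
include=table:"IN ('TEST_TBL')"
query=test_tbl:"WHERE rownum <= 100"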
Monitoring and Estimating Capability:
In addition to the standard progress and error messages
printed by the client and to the log file, the new interactive STATUS command
displays cumulative status of the job, along with a description of the current
operation. An estimated completion percentage for the job is also returned. The
user can also specify a time period, in seconds, for an automatic status update
at specific intervals.
Because more than one client can be attached to a running
job, the user can start a job at work, detach from it, go home, reattach to the
job at home, and monitor it throughout the evening.
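A sketch of these monitoring options (the STATUS value is an interval in seconds; the job name is again the illustrative system-generated one):
$ expdp system/xxxxx directory=dump dumpfile=full_db.dmp logfile=full_db.log full=y status=60
and, after re-attaching from another client session:
$ expdp system/xxxxx attach=SYS_EXPORT_FULL_01
Export> STATUS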
The start of every export job now includes an estimate phase
in which the approximate amount of all data to be unloaded is determined. This
allows the user to allocate a sufficient amount of disk space for the dump file
set.
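If only the size estimate is wanted, the ESTIMATE_ONLY parameter can be used; no data is unloaded and no dump file is written, so only a directory for the log file is needed. A minimal sketch:
$ expdp system/xxxxx directory=dump logfile=estimate.log full=y estimate_only=y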
Network Mode:
Data Pump Export and Import both support a network mode in
which the job's source is a remote Oracle instance. When you perform an import
over the network, there are no dump files involved because the source is
another database, not a dump file set.
When you perform an export over the network, the source can
be a read-only database on another system. Dump files are written out on the
local system just as they are with a local (non-networked) export.
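A minimal sketch of a network-mode import, assuming a database link in the target database that points to the source instance; the link name (source_db), credentials, and TNS alias below are only illustrations:
SQL> CREATE DATABASE LINK source_db CONNECT TO system IDENTIFIED BY xxxxx USING 'source_tns';
$ impdp system/xxxxx directory=dump logfile=net_imp.log network_link=source_db schemas=ramesh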
Default Locations for Dump, Log, and SQL Files:
Because Data Pump is server-based, rather than client-based,
dump files, log files, and SQL files are accessed relative to server-based
directory paths. Data Pump requires you to specify directory paths as directory
objects. A directory object maps a name to a directory path on the file system.
For example, the following SQL statement creates a directory
object named dpump_dir1 that is mapped to a directory located at /u01/app/dumpfiles.
SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/dumpfiles';
The reason that a directory object is required is to ensure
data security and integrity. For example:
If you were allowed to specify a directory path location for
an input file, you might be able to read data that the server has access to
but that you should not be able to see.
If you were allowed to specify a directory path location for
an output file, the server might overwrite a file that you might not normally have
privileges to delete.
On Unix and Windows NT systems, a default directory object, DATA_PUMP_DIR, is created
at database creation or whenever the database dictionary is upgraded. By
default, it is available only to privileged users.
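You can check which directory objects already exist, and the operating system paths they map to, by querying the data dictionary:
SQL> SELECT directory_name, directory_path FROM dba_directories;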
If you are not a privileged user, before you can run Data
Pump Export or Data Pump Import, a directory object must be created by a
database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.
After a directory is created, the user creating the
directory object needs to grant READ or WRITE permission on the directory to
other users. For example, to allow the Oracle database to read and write files
on behalf of user hr in the directory named by dpump_dir1, the DBA must execute
the following command:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
Note that READ or WRITE permission to a directory object
only means that the Oracle database will read or write that file on your
behalf. You are not given direct access to those files outside of the Oracle
database unless you have the appropriate operating system privileges.
Similarly, the Oracle database requires permission from the operating system to
read and write files in the directories.
Original Export and Import versus Data Pump Export and Import:
If you are familiar with the original Export (exp) and Import
(imp) utilities,
it is important to understand that many of the concepts behind them do not
apply to Data Pump Export (expdp) and Data Pump Import (impdp). In
particular:
Data Pump Export and Import operate on a group of files
called a dump file set rather than on a single sequential dump file.
Data Pump Export and Import access files on the server
rather than on the client. This results in improved performance. It also means
that directory objects are required when you specify file locations.
Data Pump Export and Import use parallel execution rather
than a single stream of execution, for improved performance. This means that
the order of data within dump file sets and the information in the log files is
more variable.
Data Pump Export and Import represent metadata in the dump
file set as XML documents rather than as DDL commands. This provides improved
flexibility for transforming the metadata at import time.
Data Pump Export and Import are self-tuning utilities.
Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither
required nor supported by Data Pump Export and Import.
At import time there is no option to perform interim commits
during the restoration of a partition. This was provided by the COMMIT parameter in
original Import.
There is no option to merge extents when you re-create
tables. In original Import, this was provided by the COMPRESS parameter.
Instead, extents are reallocated according to storage parameters for the target
table.
Sequential media, such as tapes and pipes, are not supported.
The Data Pump method for moving data between different
database versions is different than the method used by original Export/Import.
With original Export, you had to run an older version of Export (exp) to
produce a dump file that was compatible with an older database version. With
Data Pump, you can use the current Export (expdp) version and
simply use the VERSION parameter to specify the target database
version.
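For example, to produce a dump file from a newer source database that can be imported into an 11.2 target (the target version here is only an illustration):
$ expdp system/xxxxx directory=dump dumpfile=full_v112.dmp logfile=full_v112.log full=y version=11.2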
When you are importing data into an existing table using
either APPEND or TRUNCATE, if any row
violates an active constraint, the load is discontinued and no data is loaded.
This is different from original Import, which logs any rows that are in
violation and continues with the load.
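A sketch of such an import into an existing table, using file and table names from the examples later in this post:
$ impdp system/xxxxx directory=dump dumpfile=tbl_exp.dmp logfile=tbl_append.log tables=ramesh.test_tbl table_exists_action=append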
Data Pump Export and Import consume more undo tablespace than
original Export and Import. This is due to additional metadata queries during
export and some relatively long-running master table queries during import. As
a result, for databases with large amounts of metadata, you may receive an
ORA-01555: snapshot too old error. To avoid this, consider adding additional
undo tablespace or increasing the value of the UNDO_RETENTION
initialization parameter for the database.
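For instance, UNDO_RETENTION can be raised dynamically; the value below (in seconds) is only an illustration and should be sized to your workload:
SQL> ALTER SYSTEM SET UNDO_RETENTION = 3600;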
If a table has compression enabled, Data Pump Import
attempts to compress the data being loaded. In contrast, the original Import
utility loaded data in such a way that even if a table had compression enabled,
the data was not compressed upon import.
Examples:
--------------
Export Full Database Mode:
Only users with the DBA role or the DATAPUMP_EXP_FULL_DATABASE role
can export in full database mode. In this example, an entire database is
exported to the dump file full_db.dmp.
Scenario:
Connect to the database and create a user.
SQL> create user ramesh identified by ramesh;
User created.
SQL> grant connect,resource to ramesh;
Grant succeeded.
SQL> create directory dump as '/u01/app/backup';
NOTE: Check whether the directory exists physically at the OS level. If it does not, create it:
$ mkdir /u01/app/backup
SQL> grant read,write on directory dump to system;
SQL> conn ramesh
Enter password:
Connected.
Now create some tables in this user's schema. I created the two tables shown below.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
TEST                           TABLE
TEST_TBL                       TABLE
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> create user siva identified by siva;
User created.
SQL> grant connect,resource to siva;
Grant succeeded.
SQL> conn siva
Enter password:
Connected.
Create a table in this user's schema as well.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EMPLOYEE                       TABLE
Now export the full database using the full=y parameter.
$ expdp system/xxxxx directory=dump dumpfile=full_db.dmp logfile=full_db.log full=y
After completing the export, drop the users.
SQL> drop user ramesh cascade;
User dropped.
SQL> drop user siva cascade;
User dropped.
Check whether the users still exist in the database:
SQL> select username from dba_users where username in ('RAMESH','SIVA');
no rows selected
Now import the dump file to restore the dropped users; table_exists_action=append tells impdp to append rows to any tables that already exist, roughly the role ignore=y played in original imp.
$ impdp system/xxxxx directory=dump dumpfile=full_db.dmp logfile=full_db_imp.log table_exists_action=append full=y
Now that the exported data has been imported, check the users in the database.
SQL> select username from dba_users where username IN ('RAMESH','SIVA');
USERNAME
------------------------------
RAMESH
SIVA
SQL> conn ramesh
Enter password:
Connected.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
TEST                           TABLE
TEST_TBL                       TABLE
User Mode:
To export a single user (schema), the SCHEMAS parameter is used (the original exp utility used the OWNER parameter for this).
Scenario:
Export the user using the SCHEMAS parameter.
$ expdp system/xxxxx directory=dump dumpfile=ramesh_data.dmp logfile=ramesh_data.log schemas=ramesh
Now drop the user in the database.
SQL> drop user ramesh cascade;
User dropped.
Now check whether the user still exists.
SQL> select username from dba_users where username='RAMESH';
no rows selected
Now import the dump file into the database so that the dropped user comes back.
[oracle@localhost ~]$ impdp system/xxxxx directory=dump dumpfile=ramesh_data.dmp logfile=ramesh.log
Once the import is done, check the user and its objects again:
SQL> select username from dba_users
where username='RAMESH';
USERNAME
------------------------------
RAMESH
SQL> select object_name,object_type from dba_objects where owner='RAMESH';
OBJECT_NAME          OBJECT_TYPE
-------------------- --------------------
TEST_TBL             TABLE
TEST                 TABLE
Exporting a Table:
To export a table, we use the TABLES parameter of the expdp utility.
In this example we take a backup of the table test_tbl, belonging to the ramesh schema, into the dump file tbl_exp.dmp.
[oracle@localhost ~]$ expdp system/xxxxx directory=dump dumpfile=tbl_exp.dmp logfile=tbl_exp.log tables=ramesh.test_tbl
Export: Release 11.2.0.1.0 - Production on Thu Jan 8 07:44:48 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
EXP-00056: ORACLE error 28002 encountered
ORA-28002: the password will expire within 7 days
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
Current user changed to RAMESH
. . exporting table                       TEST_TBL          5 rows exported
Export terminated successfully without warnings.
Now import the exported table into another user, siva. To do this, we need to use the REMAP_SCHEMA parameter, as shown below.
[oracle@localhost ~]$ impdp system/xxxxx directory=dump dumpfile=tbl_exp.dmp logfile=tbl_imp.log remap_schema=ramesh:siva tables=ramesh.test_tbl
Parameter File Method:
$ expdp system/xxxxx parfile=params.par
The params.par file contains the following information:
directory=dump
dumpfile=full_db.dmp
logfile=full_db.log
full=y
=========