A. Upgrading 11gR2 to 12c using full transportable export/import (TRANSPORTABLE=ALWAYS)
Full transportable export/import is a new feature in Oracle Database 12c that greatly simplifies the
process of database migration. Combining the ease of use of Oracle Data Pump with the performance
of transportable tablespaces, full transportable export/import gives you the ability to upgrade or
migrate to Oracle Database 12c in a single operation if your source database is at least Oracle Database
11g Release 2 (11.2.0.3). Full transportable export/import is a valuable tool for migrating to pluggable
databases, allowing you to take advantage of the cost savings and economies of scale inherent in
moving to a multitenant architecture. Note that when importing into a PDB, you must explicitly specify the service name of the PDB in the connect string for the impdp command.
STEP 1: Check the endianness of both platforms
To check the endianness of a platform, run the following query on each platform.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;
In this case, both Oracle Solaris x86 and Oracle Enterprise Linux have little endian format, so no
endian conversion is necessary.
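When planning a cross-platform move, the full list of platforms Oracle supports for transport, together with their endian formats, can be queried directly (a sketch; run from SQL*Plus on either database):

```sql
-- List every platform supported for tablespace transport, with its endian format.
-- A source/target pair with different endian formats requires RMAN CONVERT
-- before the data files can be plugged in.
SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT
FROM   V$TRANSPORTABLE_PLATFORM
ORDER  BY PLATFORM_NAME;
```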
STEP 2: Verify that the set of tablespaces to be transported is self-contained
SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);
Note that you must include all user tablespaces in the database when performing this check for a full
transportable export/import.
After invoking this PL/SQL procedure, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view.
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
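If the check reports a violation (for example, an index stored outside the tablespace set while its base table is inside it), one common fix is to move the offending object into one of the transported tablespaces and re-run the check. A hedged sketch, with hypothetical object names:

```sql
-- Hypothetical example: index HR.EMP_IX lives outside the transport set.
-- Rebuild it into hr_1, then repeat the containment check.
ALTER INDEX hr.emp_ix REBUILD TABLESPACE hr_1;
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
```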
STEP 3: Create a directory object in the source database
SQL> CREATE DIRECTORY dp_dir AS '/u01/app/datafiles';
STEP 4: Place the hr_1 and hr_2 tablespaces in read-only mode
The tablespaces to be transported must be in read-only mode for the duration of the export. In this
case we need to issue two commands on the source database.
SQL> ALTER TABLESPACE hr_1 READ ONLY;
SQL> ALTER TABLESPACE hr_2 READ ONLY;
The tablespaces can be returned to read-write status once the full transportable export has finished and
the tablespace data files have been copied to the destination system.
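Before starting the export, it is worth confirming that the tablespaces really are read-only:

```sql
-- Expect STATUS = 'READ ONLY' for both tablespaces before running expdp.
SELECT tablespace_name, status
FROM   dba_tablespaces
WHERE  tablespace_name IN ('HR_1', 'HR_2');
```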
STEP 5: Invoke full transportable export on the source database
Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role.
$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log
Note that the VERSION=12 parameter is required because the source database is Oracle Database 11g
Release 2 (11.2.0.3). This is the only time that a version number greater than the current version is
allowed by the expdp command. If the source database is Oracle Database 12c, with
COMPATIBLE=12.0 or higher, then the VERSION parameter is not required.
After the export command completes, the export log file shows a list of all of the tablespace data files
that need to be moved to the target.
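The same export can also be driven from a Data Pump parameter file, which avoids shell quoting and white-space problems (a sketch of the command above; the file name full_tts_exp.par is an assumption):

```text
full=y
transportable=always
version=12
directory=dp_dir
dumpfile=full_tts.dmp
metrics=y
exclude=statistics
encryption_password=secret123word456
logfile=full_tts_export.log
```

Invoke it with: expdp system/manager parfile=full_tts_exp.par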
STEP 6: Transport the tablespace data files and the export dump file from source to target
$ cd /u01/app/oracle/oradata/hr_pdb/
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_101.dbf .
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_201.dbf .
$ cp /net/<source-server>/u01/app/datafiles/full_tts.dmp .
STEP 7: Create a directory object on the destination database
Because we copied the data pump dump file to the oradata directory for HR_PDB, we will create a
directory object to point to that same directory for this import. This directory object must be created
by a user connected to the PDB container.
SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/oradata/hr_pdb';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
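Steps 6 through 8 assume that the HR_PDB pluggable database already exists and is open read/write in the destination CDB. If it does not, it can be created first; this is a sketch (the admin user name and password are placeholders):

```sql
-- Run as a common user (e.g. SYS) in the root container of the target CDB.
CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER pdb_admin IDENTIFIED BY change_me;
ALTER PLUGGABLE DATABASE hr_pdb OPEN;

-- Switch into the PDB before creating the directory object above.
ALTER SESSION SET CONTAINER = hr_pdb;
```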
STEP 8: Invoke full transportable import on the destination database
Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role.
$ impdp system/manager@hr_pdb directory=dp_dir \
dumpfile=full_tts.dmp logfile=full_tts_imp.log \
metrics=y \
encryption_password=secret123word456 \
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
'/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'
Note that while this example shows several parameters specified on the command line, in most cases use of a Data Pump parameter file is recommended to avoid problems with white space and quotation marks on the command line.
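Following that advice, the import above could be expressed as a parameter file (the name full_tts_imp.par is an assumption), which sidesteps quoting TRANSPORT_DATAFILES on the command line; repeated transport_datafiles entries accumulate:

```text
directory=dp_dir
dumpfile=full_tts.dmp
logfile=full_tts_imp.log
metrics=y
encryption_password=secret123word456
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf'
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'
```

Invoke it with: impdp system/manager@hr_pdb parfile=full_tts_imp.par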
After this statement executes successfully, the user tablespaces are automatically placed in read/write
mode on the destination database. Check the import log file to ensure that no unexpected error
occurred, and perform your normal post-migration validation and testing.
STEP 9: (Optional) Restore user tablespaces to read-write mode on the source database
After the full transportable export has finished, you can return the user-defined tablespaces to read-write mode at the source database if desired.
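On the source, this is simply the reverse of Step 4:

```sql
-- Return the transported tablespaces to read/write on the source database.
ALTER TABLESPACE hr_1 READ WRITE;
ALTER TABLESPACE hr_2 READ WRITE;
```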
B. General examples
---------- Export and import ----------
First create a directory object TEST pointing to /u01/dppump:
sqlplus / as sysdba
SQL> CREATE DIRECTORY test AS '/u01/dppump';
SQL> GRANT READ, WRITE ON DIRECTORY test TO public;
expdp \"SYS AS SYSDBA\" schemas=TEST directory=TEST dumpfile=test_062717.dmp logfile=test_062717_exp.log
impdp \"/ as sysdba\" directory=TEST dumpfile=test_062717.dmp logfile=test_062717_imp.log schemas=TEST TRANSFORM=oid:n
---------- Using a parameter file ----------
Parameter file exp.par for expdp (each parameter on its own line):
directory=expp
dumpfile=exp_test%U.dmp
logfile=test.log
cluster=n
tables=TEST.T1,TEST.T2
parallel=12
flashback_scn=82850177023
Run it in the background:
nohup expdp system/test parfile=exp.par &
Parameter file imp.par for impdp (note the dump file name matches the export above):
directory=expp
dumpfile=exp_test%U.dmp
logfile=test_imp.log
cluster=n
remap_tablespace=USERS:PROD,DEMO:PROD
remap_table=TEST.T1:TT1,TEST.T2:TT2
remap_schema=TEST:TEST1
parallel=8
partition_options=merge
transform=table_compression_clause:"COLUMN STORE COMPRESS FOR QUERY"
table_exists_action=replace
Run it in the background:
nohup impdp system/test parfile=imp.par &
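Jobs started in the background with nohup, as above, can be monitored or re-attached to from another session. A sketch:

```sql
-- List running Data Pump jobs and their states.
SELECT owner_name, job_name, operation, state
FROM   dba_datapump_jobs;
```

You can then re-attach interactively with impdp system/test attach=<job_name> (the job name is whatever JOB_NAME was set to, or the system-generated name shown by the query) and use commands such as STATUS or KILL_JOB.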
C. Major parameters for expdp and impdp in 12c
EXPDP parameters
Each entry below lists the parameter syntax, an overview, any limitations, and sample expdp/impdp commands.
ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE]
Overview: Instructs Export to use a particular method to unload data.
Limitations: If the NETWORK_LINK parameter is also specified, then direct path mode is not supported.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL ACCESS_METHOD=EXTERNAL_TABLE
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
Overview: Specifies which data to compress before writing to the dump file set.
Limitations: To make full use of all compression options, the COMPATIBLE initialization parameter must be set to at least 11.0.0. The METADATA_ONLY option can be used even with COMPATIBLE set to 10.2. Compression of data using ALL or DATA_ONLY is valid only in the Enterprise Edition of Oracle Database 11g or later, and requires the Oracle Advanced Compression option.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
CLUSTER=[YES | NO]
Overview: Determines whether Data Pump can use Oracle Real Application Clusters (Oracle RAC) resources and start workers on other Oracle RAC instances.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3
Example: impdp ORCL DIRECTORY=dpump_dir1 SCHEMAS=ORCL CLUSTER=NO PARALLEL=3 NETWORK_LINK=dbs1
COMPRESSION_ALGORITHM=[BASIC | LOW | MEDIUM | HIGH]
Overview: Specifies the compression algorithm to be used when compressing dump file data.
Limitations: Database compatibility must be set to 12.0.0 or later, and the Oracle Advanced Compression option must be enabled.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp COMPRESSION=DATA_ONLY COMPRESSION_ALGORITHM=LOW
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY]
Overview: Enables you to filter what Export unloads: data only, metadata only, or both.
Limitations: CONTENT=METADATA_ONLY cannot be used with the TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or with the QUERY parameter.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp CONTENT=METADATA_ONLY
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY
DIRECTORY=directory_object
Overview: Specifies the directory object used for the dump file set and log file. Users with access to the default DATA_PUMP_DIR directory object do not need to use the DIRECTORY parameter at all.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=dpump_dir2:expfull.log
DUMPFILE=[directory_object:]file_name [, ...]
Overview: Specifies the names, and optionally the directory objects, of the dump files. Substitution variables are supported; for example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.
Example: expdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp PARALLEL=3
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,exp2%U.dmp
ENCRYPTION=[ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE]
Overview: Specifies whether to encrypt data before writing it to the dump file set. The default depends on the combination of encryption-related parameters used. To enable encryption, specify either the ENCRYPTION or ENCRYPTION_PASSWORD parameter, or both. If only ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to ALL.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1 ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar
ESTIMATE=[BLOCKS | STATISTICS]
Overview: BLOCKS calculates the estimate by multiplying the number of database blocks used by the source objects by the appropriate block size. STATISTICS calculates the estimate using statistics for each table; for this method to be as accurate as possible, all tables should have been analyzed recently (with either the SQL ANALYZE statement or the DBMS_STATS PL/SQL package).
Limitations: If the export job involves compressed tables, the default size estimate is inaccurate with ESTIMATE=BLOCKS because it does not reflect that the data is stored in compressed form; use ESTIMATE=STATISTICS for a more accurate estimate. The estimate may also be inaccurate if the QUERY or REMAP_DATA parameter is used.
Example: expdp ORCL TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1 DUMPFILE=estimate_stat.dmp
Example: impdp ORCL TABLES=job_history NETWORK_LINK=source_database_link DIRECTORY=dpump_dir1 ESTIMATE=STATISTICS (only valid for NETWORK_LINK import)
ESTIMATE_ONLY=[YES | NO]
Overview: With ESTIMATE_ONLY=YES, Export estimates the space that would be consumed but quits without actually performing the export operation.
Example: expdp ORCL ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=ORCL
EXCLUDE=object_type[:name_clause] [, ...]
Overview: Enables you to filter the metadata that is exported by specifying objects and object types to be excluded from the export operation, for example EXCLUDE=FUNCTION, EXCLUDE=PROCEDURE, EXCLUDE=PACKAGE, or EXCLUDE=INDEX:"LIKE 'EMP%'".
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION
Example: expdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
Example: impdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
FILESIZE=integer[B | KB | MB | GB | TB]
Overview: Specifies the maximum size of each dump file. If the size is reached for any member of the dump file set, that file is closed and an attempt is made to create a new file, if the file specification contains a substitution variable or if additional dump files have been added to the job.
Limitations: The minimum size for a file is ten times the default Data Pump block size, which is 4 kilobytes. The maximum size for a file is 16 terabytes.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3MB
FLASHBACK_SCN=scn_value
Overview: Specifies the system change number (SCN) that Export will use to enable the Flashback Query utility.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
Example: impdp ORCL DIRECTORY=dpump_dir1 FLASHBACK_SCN=123456 NETWORK_LINK=source_database_link (only valid if the NETWORK_LINK parameter is specified)
FLASHBACK_TIME="TO_TIMESTAMP(time-value)"
Overview: The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_time.dmp FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:16:00', 'DD-MM-YYYY HH24:MI:SS')"
Example: with a parameter file containing FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:40:00', 'DD-MM-YYYY HH24:MI:SS')", you could then issue: impdp ORCL DIRECTORY=dpump_dir1 PARFILE=flashback_imp.par NETWORK_LINK=source_database_link
FULL=[YES | NO]
Overview: FULL=YES indicates that all data and metadata are to be exported. To perform a full export, you must have the DATAPUMP_EXP_FULL_DATABASE role.
Limitations: A full export does not, by default, export system schemas that contain Oracle-managed data and metadata (for example SYS, ORDSYS, and MDSYS). Grants on objects owned by the SYS schema are never exported.
Example: expdp ORCL DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=YES NOLOGFILE=YES
Example: impdp ORCL DUMPFILE=dpump_dir1:expfull.dmp FULL=YES LOGFILE=dpump_dir2:full_imp.log
INCLUDE=object_type[:name_clause] [, ...]
Overview: Enables you to filter the metadata that is exported by specifying objects and object types for the current export mode. The specified objects and all their dependent objects are exported; grants on these objects are also exported.
Example: expdp ORCL SCHEMAS=ORCL DUMPFILE=expinclude.dmp DIRECTORY=dpump_dir1 LOGFILE=expinclude.log INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')" INCLUDE=PROCEDURE INCLUDE=INDEX:"LIKE 'EMP%'"
Example: impdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=imp_include.par
LOGFILE=[directory_object:]file_name
Overview: Specifies the name, and optionally a directory object, for the log file. To perform a Data Pump export using Oracle Automatic Storage Management (Oracle ASM), you must specify a LOGFILE parameter that includes a directory object that does not use the Oracle ASM "+" notation.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp LOGFILE=hr_export.log
Example: impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir2 LOGFILE=imp.log DUMPFILE=dpump_dir1:expfull.dmp
LOGTIME=[NONE | STATUS | LOGFILE | ALL]
Overview: Controls timestamps on messages. NONE: no timestamps on status or log file messages (the default). STATUS: timestamps on status messages only. LOGFILE: timestamps on log file messages only. ALL: timestamps on both status and log file messages.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL TABLE_EXISTS_ACTION=REPLACE
METRICS=[YES | NO]
Overview: When METRICS=YES is used, the number of objects and the elapsed time are recorded in the Data Pump log file.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL METRICS=YES
Example: impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp METRICS=YES
NETWORK_LINK=source_database_link
Overview: Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance.
Limitations: Network exports do not support LONG columns. When transporting a database over the network using full transportable export, tables with LONG or LONG RAW columns that reside in administrative tablespaces (such as SYSTEM or SYSAUX) are not supported.
Example: expdp ORCL DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link DUMPFILE=network_export.dmp LOGFILE=network_export.log
Example: impdp ORCL TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT
PARALLEL=integer
Overview: Specifies the maximum number of processes of active execution operating on behalf of the job.
Limitations: Valid only in the Enterprise Edition of Oracle Database 11g or later. To export a table or table partition in parallel (using PQ slaves), you must have the DATAPUMP_EXP_FULL_DATABASE role.
Example: expdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
Example: impdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3
QUERY=[schema.][table_name:]query_clause
Overview: Allows you to filter the rows that are exported or imported with a WHERE clause.
Example: expdp ORCL QUERY=employees:"WHERE department_id > 10 AND salary > 10000" NOLOGFILE=YES DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
Example: suppose you have a parameter file, query_imp.par, containing QUERY=departments:"WHERE department_id < 120". You could then issue: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=query_imp.par NOLOGFILE=YES
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function
Overview: A common use for this option is to mask data when moving from a production system to a test system. For example, a column of sensitive customer data such as credit card numbers could be replaced with numbers generated by a REMAP_DATA function, allowing the data to retain its essential formatting and processing characteristics without exposing private data to unauthorized personnel.
Limitations: Remapping LOB column data of a remote table is not supported. Columns of the following types are not supported by REMAP_DATA: user-defined types, attributes of user-defined types, LONGs, REFs, VARRAYs, nested tables, BFILEs, and XMLType.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees REMAP_DATA=ORCL.employees.employee_id:ORCL.remap.minus10 REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=ORCL.employees REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
SAMPLE=[[schema_name.]table_name:]sample_percent
Overview: Allows you to specify a percentage of the data rows to be sampled and unloaded from the source database.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS=schema_name [, ...]
Overview: Specifies that you want to perform a schema-mode export. This is the default mode for Export.
Limitations: If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify only your own schema. The SYS schema cannot be used as a source schema for export jobs.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL,sh,oe
Example: impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log DUMPFILE=expdat.dmp
SERVICE_NAME=name
Overview: Used with CLUSTER=YES to restrict Data Pump workers to the instances associated with a service. If you start a Data Pump job on instance D and specify CLUSTER=YES and SERVICE_NAME=my_service, workers can be started on instances A, B, C, and D; even though instance D is not in my_service, it is included because it is the instance on which the job was started. If you start a job on instance A and specify CLUSTER=NO, then any SERVICE_NAME parameter you specify is ignored and all processes start on instance A.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_svname2.dmp SERVICE_NAME=sales
Example: impdp system DIRECTORY=dpump_dir1 SCHEMAS=ORCL SERVICE_NAME=sales NETWORK_LINK=dbs1
SOURCE_EDITION=edition_name
Overview: Specifies the database edition from which objects will be exported.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp SOURCE_EDITION=exp_edition EXCLUDE=USER
TABLES=[schema_name.]table_name[:partition_name] [, ...]
Overview: Specifies that you want to perform a table-mode export.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
Example (importing partitions): impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
TABLESPACES=tablespace_name [, ...]
Overview: In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data are unloaded.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4
TRANSPORT_TABLESPACES=tablespace_name [, ...]
Overview: Specifies a transportable-tablespace-mode export.
Limitations: The TRANSPORT_TABLESPACES parameter cannot be used in conjunction with the QUERY parameter.
Example: expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
TRANSPORTABLE=[ALWAYS | NEVER]
Overview: Specifies whether the transportable option should be used during the export.
Example: expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp TABLES=sh.sales2 TRANSPORTABLE=ALWAYS
Example: impdp system PARTITION_OPTIONS=DEPARTITION TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp
VERSION=[COMPATIBLE | LATEST | version_string]
Overview: Specifies the version of database objects to be exported. Dump files created on Oracle Database 11g releases with the Data Pump parameter VERSION=12 can only be imported on Oracle Database 12c Release 1 (12.1) and later.
Example: expdp ORCL TABLES=ORCL.employees VERSION=LATEST DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp NOLOGFILE=YES
Example: impdp ORCL FULL=Y DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link VERSION=12
VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], ...
Overview: Causes one or more views to be exported as tables. The parameter can be used by itself or along with the TABLES parameter; if either is used, Data Pump performs a table-mode export.
Limitations: Cannot be used with TRANSPORTABLE=ALWAYS. Tables created using VIEWS_AS_TABLES do not contain any hidden columns that were part of the specified view. The parameter does not support tables that have columns with a data type of LONG.
Example: expdp scott/tiger VIEWS_AS_TABLES=view1 DIRECTORY=data_pump_dir DUMPFILE=scott1.dmp
Example: impdp ORCL VIEWS_AS_TABLES=view1:view1_tab NETWORK_LINK=dblink1
IMPDP parameters
DATA_OPTIONS=[DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS | REJECT_ROWS_WITH_REPL_CHAR]
Overview: The SKIP_CONSTRAINT_ERRORS option specifies that the import operation should proceed even if non-deferred constraint violations are encountered. It logs any rows that cause non-deferred constraint violations, but does not stop the load for the data object experiencing the violation.
Example: impdp ORCL TABLES=employees CONTENT=DATA_ONLY DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE]
Overview: A value of DEPARTITION promotes each partition or subpartition to a new individual table; the default name of the new table is the concatenation of the table and partition name, or the table and subpartition name, as appropriate.
Limitations: If the export that created the dump file used the transportable method and a partition or subpartition was specified, then the import must use the DEPARTITION option. If the export used the transportable method, then the import cannot use PARTITION_OPTIONS=MERGE.
Example: impdp system TABLES=sh.sales PARTITION_OPTIONS=MERGE DIRECTORY=dpump_dir1 DUMPFILE=sales.dmp REMAP_SCHEMA=sh:scott
REMAP_DATAFILE=source_datafile:target_datafile
Overview: Changes the name of the source data file to the target data file name in all SQL statements where the source data file is referenced: CREATE TABLESPACE, CREATE LIBRARY, and CREATE DIRECTORY.
Example: with a parameter file payroll.par containing DIRECTORY=dpump_dir1, FULL=YES, DUMPFILE=db_full.dmp, and REMAP_DATAFILE="'DB1$:[HRDATA.PAYROLL]tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'", you can then issue: impdp ORCL PARFILE=payroll.par
REMAP_SCHEMA=source_schema:target_schema
Overview: Loads all objects from the source schema into a target schema.
Example: expdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp
Example: impdp system DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp REMAP_SCHEMA=ORCL:scott
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
Overview: Allows you to rename tables during an import operation.
Limitations: Only objects created by the import will be remapped; in particular, preexisting tables will not be remapped.
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=ORCL.employees REMAP_TABLE=ORCL.employees:emps
REMAP_TABLESPACE=source_tablespace:target_tablespace
Overview: Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
Example: impdp ORCL REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp
SKIP_UNUSABLE_INDEXES=[YES | NO]
Overview: Specifies whether Import skips loading tables that have indexes that were set to the Index Unusable state (by either the system or the user).
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log SKIP_UNUSABLE_INDEXES=YES
SQLFILE=[directory_object:]file_name
Overview: Writes all the SQL DDL that Import would have executed to a file, without executing it. If SQLFILE is specified, then the CONTENT parameter is ignored if it is set to either ALL or DATA_ONLY.
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp SQLFILE=dpump_dir2:expfull.sql
TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE]
Overview: Tells Import what to do if the table it is trying to create already exists.
Example: impdp ORCL TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLE_EXISTS_ACTION=REPLACE
TARGET_EDITION=name
Overview: Specifies the database edition into which objects should be imported.
Limitations: Only useful if there are two or more versions of the same versionable objects in the database. The job version must be 11.2 or later (see VERSION).
Example: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp TARGET_EDITION=exp_edition
TRANSFORM=transform_name:value[:object_type]
Overview: Enables you to alter the object creation DDL for objects being imported, for example DISABLE_ARCHIVE_LOGGING:[Y | N].
Example: impdp ORCL TABLES=ORCL.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TRANSFORM=STORAGE:N:table
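Since DISABLE_ARCHIVE_LOGGING is listed above but not demonstrated, here is a hedged example reusing the same dump file names as the STORAGE example:

```text
impdp ORCL TABLES=ORCL.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
```

Note that this transform has no effect if the database is running in FORCE LOGGING mode; logging remains enabled regardless of the setting.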