1. What are the new features of Data Pump in 12c?
//oracle-base.com/articles/12c/data-pump-enhancements-12cr1
2. In which case do we need to use the FLASHBACK_TIME or FLASHBACK_SCN parameter?
When you need a consistent export, for example when you do the initial load using Data Pump for GoldenGate.
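A minimal sketch (the schema, directory and dump file names are reused from the tests later in this post purely for illustration; the SCN value is a placeholder):
// capture the current SCN on the source, then export consistent as of that SCN
SQL> select current_scn from v$database;
expdp test_u/test_u directory=test_dir schemas=test_u dumpfile=exp_consistent.dmp flashback_scn=<scn_from_query_above>
// or, on recent releases, export consistent as of the current time
expdp test_u/test_u directory=test_dir schemas=test_u dumpfile=exp_consistent.dmp flashback_time=systimestamp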
3. During impdp with the full=y clause, do we need to create the user before import?
No. The user will be created automatically.
4. Which memory area do expdp and impdp mainly use?
The streams pool (STREAMS_POOL_SIZE).
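For example, the streams pool can be checked and, if needed, sized explicitly (the 128M value below is purely illustrative; with automatic memory management Oracle can grow it on demand):
SQL> show parameter streams_pool_size
SQL> alter system set streams_pool_size=128M scope=both;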
5. How can you improve the performance of Data Pump?
//ksadba.wordpress.com/2008/09/16/data-pump-performance-tuning-of-export-import/
//docs.oracle.com/cd/B28359_01/server.111/b28319/dp_perf.htm#i1007295
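The links above cover this in detail. As a minimal sketch (file names and degree of parallelism are illustrative), the usual levers are PARALLEL together with a %U substitution variable in DUMPFILE so each worker writes its own file, plus excluding statistics:
expdp test_u/test_u directory=test_dir schemas=test_u dumpfile=exp_perf_%U.dmp logfile=exp_perf.log parallel=4 exclude=statistics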
7. What happens internally when we set direct=y?
8. What is the difference between the old exp and expdp?
What is the difference between Traditional Export and Data Pump?
• Data Pump operates on a group of files called a dump file set, whereas traditional export operates on a single file.
• Data Pump accesses files on the server (using Oracle directory objects). Traditional export can access files on both the client and the server (without using Oracle directories).
• Traditional export (exp/imp) represents database metadata as DDL in the dump file, whereas Data Pump represents it in XML document format.
• Data Pump supports parallel execution, whereas exp/imp is single-stream.
• Data Pump does not support sequential media such as tapes, whereas traditional export does.
10. In 12c, does impdp re-create jobs in the new schema?
11. What are the boundaries/limits when importing edition-based objects using impdp in 12c?
12. What is the use of the TABLE_EXISTS_ACTION parameter in Data Pump?
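Assuming this refers to the TABLE_EXISTS_ACTION parameter, its values are SKIP, APPEND, TRUNCATE and REPLACE. A quick illustration (user, directory, dump file and table names are placeholders):
impdp scott/tiger directory=dpump1 dumpfile=scott1.dmp tables=emp table_exists_action=truncate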
13. What are the known issues in expdp/impdp in 12c?
14. How do you monitor an expdp/impdp process?
15. How do we tune impdp/expdp operations?
16. If we are exporting more than 2 TB of data, how is it processed internally when we only have a 5 GB SGA?
17. What are the processes involved in expdp/impdp at the back end?
Client process: this process is initiated by the client utility and makes calls to the Data Pump API. Because the job runs entirely on the server, this process is not required once the job has started.
Shadow process: when the client logs in to the database, a foreground (shadow) process is created. It services the client's Data Pump API requests, creates the master table, and creates the Advanced Queuing queues used for communication. Once the client process goes away, the shadow process also goes away.
Master control process: the MCP controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the job into various metadata and data load or unload units and hands them over to the worker processes.
Worker processes: the MCP creates worker processes based on the value of the PARALLEL parameter. The worker processes perform the tasks requested by the MCP.
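These processes and their jobs can be watched from the data dictionary, for example:
SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
SQL> select * from dba_datapump_sessions;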
18. On what basis will you determine the PARALLEL value? If we have a CPU count of 4, how much can we allocate?
19. What is the VIEWS_AS_TABLES parameter?
20. Specify cases where expdp fails and you have to use traditional exp.
21. Does impdp create indexes and constraints while importing? If not, what is involved in the process?
22. How do you find out the export level (schema, table, full etc.) when you only have a dump file to import?
23. How do you import only a specific schema and ignore the rest of the schemas?
24. Is impdp/expdp platform dependent?
25. Define a scenario where you need both expdp/impdp and RMAN in place.
26. What is the CONSISTENT option in exp?
27. Why is expdp faster than exp (or why is Data Pump faster than conventional export/import)?
28. What is the order of importing objects in impdp?
29. How do you import only metadata?
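A minimal sketch (names reused from the tests below for illustration): setting CONTENT=METADATA_ONLY exports or imports only the object definitions, with no row data.
expdp test_u/test_u directory=test_dir dumpfile=exp_meta.dmp schemas=test_u content=metadata_only
impdp test_u/test_u directory=test_dir dumpfile=exp_meta.dmp content=metadata_only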
30. What is the significance of the Data Pump VERSION parameter and the instance COMPATIBLE parameter while doing cross-release expdp/impdp?
Ans:-
Test 1
——-
Source – 12.1.0.2 with compatible set to 12.1.0.2
Destination – 11.2.0.2 with compatible 11.2.0
SQL> select con_id,open_mode,name from v$pdbs;
CON_ID OPEN_MODE NAME
———- ———- ——————————
2 READ ONLY PDB$SEED
3 READ WRITE GG1PDB1
SQL> alter session set container=gg1pdb1;
Session altered.
SQL> sho parameter compatible;
NAME TYPE VALUE
———————————— ———– —————————-
compatible string 12.1.0.2.0
noncdb_compatible boolean FALSE
SQL> create user test_u identified by test_u default tablespace users;
User created.
SQL> grant create session, resource to test_u;
Grant succeeded.
SQL> create directory test_dir as '/tmp/dir/';
Directory created.
SQL> grant read,write on directory test_dir to test_u;
Grant succeeded.
SQL> conn test_u/test_u@gg1pdb1
Connected.
SQL> create table table_export_test as select * from all_objects;
Table created.
SQL> select count(*) from table_export_test;
COUNT(*)
———-
73653
[oracle@machineGG1 dir]$ expdp test_u/test_u@gg1pdb1 dumpfile=exp_test.dmp logfile=exp_log.log tables=table_export_test directory=test_dir
Export: Release 12.1.0.2.0 – Production on Sun Feb 11 20:41:07 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting “TEST_U”.”SYS_EXPORT_TABLE_01″: test_u/********@gg1pdb1 dumpfile=exp_test.dmp logfile=exp_log.log tables=table_export_test directory=test_dir
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 11 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported “TEST_U”.”TABLE_EXPORT_TEST” 8.523 MB 73653 rows
Master table “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for TEST_U.SYS_EXPORT_TABLE_01 is:
/tmp/dir/exp_test.dmp
Job “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully completed at Sun Feb 11 20:41:29 2018 elapsed 0 00:00:22
Destination
[oracle@machineGG1 dir]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 11 20:52:14 2018
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SYS@orcl1> startup
ORACLE instance started.
Total System Global Area 2071076864 bytes
Fixed Size 2228032 bytes
Variable Size 1207959744 bytes
Database Buffers 855638016 bytes
Redo Buffers 5251072 bytes
Database mounted.
Database opened.
SYS@orcl1> create user test_u identified by test_u default tablespace users;
User created.
SYS@orcl1> grant create session, resource to test_u;
Grant succeeded.
SYS@orcl1> create directory test_dir as '/tmp/dir/';
Directory created.
SYS@orcl1> grant read,write on directory test_dir to test_u;
Grant succeeded.
[oracle@machineGG1 dir]$ impdp test_u/test_u@orcl1 dumpfile=exp_test.dmp logfile=imp_log.log tables=table_export_test directory=test_dir
Import: Release 11.2.0.2.0 – Production on Sun Feb 11 20:58:53 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 4.1 in dump file “/tmp/dir/exp_test.dmp”
// As expected, it failed when going from a higher to a lower version.
// Now take the export again, but with VERSION=10.2 instead of 11.2, to see if we really require VERSION=11.2 for importing into an 11g database.
[oracle@machineGG1 dir]$ expdp test_u/test_u@gg1pdb1 dumpfile=exp_test_1.dmp logfile=exp_log.log tables=table_export_test directory=test_dir version=10.2
Export: Release 12.1.0.2.0 – Production on Sun Feb 11 21:01:59 2018
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting “TEST_U”.”SYS_EXPORT_TABLE_01″: test_u/********@gg1pdb1 dumpfile=exp_test_1.dmp logfile=exp_log.log tables=table_export_test directory=test_dir version=10.2
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 11 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported “TEST_U”.”TABLE_EXPORT_TEST” 8.522 MB 73653 rows
Master table “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for TEST_U.SYS_EXPORT_TABLE_01 is:
/tmp/dir/exp_test_1.dmp
Job “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully completed at Sun Feb 11 21:02:10 2018 elapsed 0 00:00:09
SYS@orcl1> sho parameter compatible;
NAME TYPE VALUE
———————————— ———– —————————-
compatible string 11.2.0.0.0
[oracle@machineGG1 dir]$ impdp test_u/test_u@orcl1 dumpfile=exp_test_1.dmp logfile=imp_log.log tables=table_export_test directory=test_dir
Import: Release 11.2.0.2.0 – Production on Sun Feb 11 21:03:38 2018
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “TEST_U”.”SYS_IMPORT_TABLE_01″ successfully loaded/unloaded
Starting “TEST_U”.”SYS_IMPORT_TABLE_01″: test_u/********@orcl1 dumpfile=exp_test_1.dmp logfile=imp_log.log tables=table_export_test directory=test_dir
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “TEST_U”.”TABLE_EXPORT_TEST” 8.522 MB 73653 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job “TEST_U”.”SYS_IMPORT_TABLE_01″ successfully completed at 21:03:42
// All successful, so it is not required to set the Data Pump VERSION parameter to the exact target release.
// If the target's COMPATIBLE parameter is equal to or higher than the VERSION parameter used during export, the import will succeed.
Test 2
// Now let's set the VERSION parameter higher than COMPATIBLE and check if it succeeds.
// Source – 11.2.0.4 with compatible 11.2.0
// Destination – 10.2.0.5 with compatible 10.2.0
// I have used the VERSION parameter in the export as 10.3 to check whether our theory from Test 1 above is correct.
HOST1:/tmp/small_test>expdp test_u/Qwertyu_1234 directory=test_small tables=small_test logfile=new_exp.log version=10.3 dumpfile=exp_small_to_lower.dmp
Export: Release 11.2.0.4.0 – Production on Fri Feb 9 21:16:40 2018
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting “TEST_U”.”SYS_EXPORT_TABLE_01″: test_u/******** directory=test_small tables=small_test logfile=new_exp.log version=10.3 dumpfile=exp_small_to_lower.dmp
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported “TEST_U”.”SMALL_TEST” 43.28 KB 381 rows
Master table “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully loaded/unloaded
******************************************************************************
Dump file set for TEST_U.SYS_EXPORT_TABLE_01 is:
/tmp/small_test/exp_small_to_lower.dmp
Job “TEST_U”.”SYS_EXPORT_TABLE_01″ successfully completed at Fri Feb 9 21:16:47 2018 elapsed 0 00:00:06
// The export was taken from 11.2.0.4 with VERSION=10.3, and will now be imported into the 10.2 database.
-> impdp test_u/test_u directory=test_small tables=small_test logfile=new_imp.log dumpfile=exp_small_to_lower.dmp remap_tablespace=genuser:users_1
Import: Release 10.2.0.5.0 – 64bit Production on Friday, 09 February, 2018 21:24:10
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 – 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table “TEST_U”.”SYS_IMPORT_TABLE_01″ successfully loaded/unloaded
Starting “TEST_U”.”SYS_IMPORT_TABLE_01″: test_u/******** directory=test_small tables=small_test logfile=new_imp.log dumpfile=exp_small_to_lower.dmp remap_tablespace=genuser:users_1
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “TEST_U”.”SMALL_TEST” 43.28 KB 381 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job “TEST_U”.”SYS_IMPORT_TABLE_01″ successfully completed at 21:24:13
// It succeeded.
// This is because Data Pump checks only the major version number (for example, 10g, 11g, 12c), not the specific release number (for example, 10.1, 10.2, 11.1, 11.2, or 12.1).
31. How do you export/import without using an external directory?
32. What are the mandatory parameters in impdp and expdp?
33. How do you import into an already existing schema and tablespace?
34. Using normal exp/imp, how do you export from a higher version (11g) and import into a lower version (10g/9i)?
35. Does Data Pump address corruption of tables and indexes?
36. What happens when legacy commands are used with Data Pump?
Ans:- Oracle will automatically convert the legacy parameters into new ones that are compatible with expdp.
e.g.,
#> expdp system/passwd rows=n dumpfile=ddl.dmp    -- remember, rows=n was used with exp.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "rows=FALSE" Location: Command Line, Replaced with: "content=metadata_only"
Legacy Mode has set reuse_dumpfiles=true parameter.
Legacy Mode has set nologfile=true parameter.
Starting “SYSTEM”.”SYS_EXPORT_SCHEMA_01″: system/**** content=metadata_only dumpfile=test.dmp reuse_dumpfiles=true nologfile=true
As can be seen above, the legacy command is automatically converted into the new syntax by Oracle.
The parameter ROWS=N is replaced by CONTENT=METADATA_ONLY.
#> expdp system/passwd content=metadata_only dumpfile=test.dmp reuse_dumpfiles=true nologfile=true
For more info, please check
How To Use DataPump Export Legacy Mode (Doc ID 1366550.1)
37. Which process is responsible for writing data into the dump file?
38. How do we optimize the use of the streams pool to tune the performance of Data Pump?
39. How do you operate a Data Pump job from a client machine?
40. Is it possible to run a Data Pump job from a physical standby without affecting the primary database?
41. How do you schedule a Data Pump job using dbms_scheduler?
42. Why is the Data Pump file size hugely different with and without encryption?
43. In which circumstances would I use SQL*Loader instead of Data Pump Export and Import?
44. What is the maximum degree of parallelism we can use for Data Pump?
45. What role should the user have to perform Data Pump operations?
46. How do you resume Data Pump jobs if a server reboot happens? Is it possible to resume?
47. Which process is responsible for writing data into the dump file?
48. If you got a dump file to import and don't know the fromuser, how will you get that information?
49. What are the advantages and disadvantages of Data Pump over exp/imp?
50. You are observing an undo tablespace error during import; how will you avoid it?
51. How do you avoid redo generation while importing?
ANS - This is a new feature in 12c: we can disable archive logging during an import operation. It disables logging for tables, indexes, or both during the import job.
Logging is not completely disabled; only a small amount of redo is still generated. Also, don't forget that the database-level FORCE LOGGING setting overrides this feature.
Example:
– For schema
impdp scott/tiger DIRECTORY=dpump1 DUMPFILE=scott1.dmp schemas=scott TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
– For index no logging
impdp scott/tiger DIRECTORY=dpump1 DUMPFILE=scott1.dmp schemas=scott TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX
– Logging disabled for all objects except tables (tables keep logging)
impdp scott/tiger DIRECTORY=dpump1 DUMPFILE=scott1.dmp schemas=scott TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y TRANSFORM=DISABLE_ARCHIVE_LOGGING:N:TABLE
52. How do you get the estimated size of the dump file before exporting?
Use ESTIMATE_ONLY=YES with ESTIMATE=BLOCKS or ESTIMATE=STATISTICS, for example (user, schema and directory names reused from the tests above for illustration):
expdp test_u/test_u schemas=test_u directory=test_dir estimate_only=yes estimate=blocks
53. Can we allocate more than one worker to a single table while exporting?
54. What would the compression ratio/percentage be when we use the COMPRESSION parameter?
Roughly 80 percent, compared to dumps taken without compression.
55. What methods/techniques can we follow when copying a larger database (in TBs) using Data Pump?
We can use PARALLEL as the first option during import. We can exclude indexes and constraints during the import, because even though we use PARALLEL it is not used for index creation. We then create a SQL script for the indexes using the SQLFILE option and apply parallelism there, as sketched below.
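A rough sketch of that approach (file and script names are illustrative):
// load the data in parallel, skipping index and constraint creation
impdp test_u/test_u directory=test_dir dumpfile=exp_full_%U.dmp parallel=8 exclude=index exclude=constraint logfile=imp_data.log
// generate the index DDL into a script instead of executing it
impdp test_u/test_u directory=test_dir dumpfile=exp_full_%U.dmp include=index sqlfile=create_indexes.sql
// then edit create_indexes.sql to add PARALLEL clauses and run it manually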
56. What will the parameter COMPRESS do during export?
57. How will you confirm that an export dump file is valid?
58. How can we import objects without creating a dump file using impdp?
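One common answer to question 58 is the NETWORK_LINK parameter, which pulls data directly over a database link without an intermediate dump file. A minimal sketch, assuming a database link named source_db already exists and points at the source database (the DIRECTORY is still needed for the log file):
impdp test_u/test_u directory=test_dir network_link=source_db tables=table_export_test logfile=net_imp.log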
59. How can we change the parallelism of an export/import while the job is running?
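A typical answer: attach to the running job in interactive mode and issue the PARALLEL command (the job name below is illustrative; the real name can be taken from dba_datapump_jobs):
expdp test_u/test_u attach=SYS_EXPORT_SCHEMA_01
Export> parallel=8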
60. Which directory will be used if we don't specify the directory parameter?
Ans:- From 10gR2 onwards, while creating or upgrading a database, a default directory called DATA_PUMP_DIR is created. It is used if we forget or do not want to specify a directory name during expdp.
By default it points to $ORACLE_BASE/admin/$ORACLE_SID/dpdump.
For more info, please check –
10gR2 New Feature Default Directory DATA_PUMP_DIR (Doc ID 334590.1)
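The actual path on a given database can be verified from the data dictionary, for example:
SQL> select directory_name, directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';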
61. What are the 4 mechanisms Data Pump uses to move data in and out of a database?