EXPDP and IMPDP command reference 12c

 

A. A very useful method to upgrade 11gR2 to 12c using TRANSPORTABLE=ALWAYS

Full transportable export/import is a new feature in Oracle Database 12c that greatly simplifies the
process of database migration. Combining the ease of use of Oracle Data Pump with the performance
of transportable tablespaces, full transportable export/import gives you the ability to upgrade or
migrate to Oracle Database 12c in a single operation if your source database is at least Oracle Database
11g Release 2 (11.2.0.3). Full transportable export/import is a valuable tool for migrating to pluggable
databases, allowing you to take advantage of the cost savings and economies of scale inherent in
moving to a multitenant architecture. When importing into a PDB, you must explicitly specify the
service name of the PDB in the connect string for the impdp command.

STEP 1: Check the endianness of both platforms

To check the endianness of a platform, run the following query on each platform.

SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;

In this case, both Oracle Solaris x86 and Oracle Enterprise Linux have little endian format, so no
endian conversion is necessary.
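The same fact can be confirmed from the OS shell, independent of the database. A minimal sketch, assuming a POSIX shell with od available:

```shell
# Write the 16-bit value 0x0001 as two raw bytes (01 00), then let od read
# them back in host byte order: little-endian hosts report 0001, big-endian 0100.
bytes=$(printf '\001\000' | od -An -tx2 | tr -d ' \n')
if [ "$bytes" = "0001" ]; then
  echo "little endian"
else
  echo "big endian"
fi
```

V$TRANSPORTABLE_PLATFORM remains the authoritative source for Data Pump purposes; this is only a sanity check on the host itself.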

STEP 2: Verify that the set of tablespaces to be transported is self-contained

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);

Note that you must include all user tablespaces in the database when performing this check for a full
transportable export/import.
After invoking this PL/SQL procedure, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

STEP 3: Create a directory object in the source database

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/datafiles';

STEP 4: Place the hr_1 and hr_2 tablespaces in read-only mode

The tablespaces to be transported must be in read-only mode for the duration of the export. In this
case we need to issue two commands on the source database.

SQL> ALTER TABLESPACE hr_1 READ ONLY;
SQL> ALTER TABLESPACE hr_2 READ ONLY;

The tablespaces can be returned to read-write status once the full transportable export has finished and
the tablespace data files have been copied to the destination system.

STEP 5: Invoke full transportable export on the source database

Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role.

$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log

Note that the VERSION=12 parameter is required because the source database is Oracle Database 11g
Release 2 (11.2.0.3). This is the only time that a version number greater than the current version is
allowed by the expdp command. If the source database is Oracle Database 12c, with
COMPATIBLE=12.0 or higher, then the VERSION parameter is not required.
After the export command completes, the export log file shows a list of all of the tablespace data files
that need to be moved to the target.

STEP 6: Transport the tablespace data files and the export dump file from source to target

$ cd /u01/app/oracle/oradata/hr_pdb/
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_101.dbf .
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_201.dbf .
$ cp /net/<source-server>/u01/app/datafiles/full_tts.dmp .

STEP 7: Create a directory object on the destination database

Because we copied the data pump dump file to the oradata directory for HR_PDB, we will create a
directory object to point to that same directory for this import. This directory object must be created
by a user connected to the PDB container.

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/oradata/hr_pdb';
SQL> GRANT read, write ON DIRECTORY dp_dir TO system;

STEP 8: Invoke full transportable import on the destination database

Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role.
$ impdp system/manager@hr_pdb directory=dp_dir \
dumpfile=full_tts.dmp logfile=full_tts_imp.log \
metrics=y \
encryption_password=secret123word456 \
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
'/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'

Note that, while this example shows several parameters specified on the command line, in most cases
use of a data pump parameter file is recommended to avoid problems with white space and quotation
marks on the command line.
After this statement executes successfully, the user tablespaces are automatically placed in read/write
mode on the destination database. Check the import log file to ensure that no unexpected error
occurred, and perform your normal post-migration validation and testing.
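Following that recommendation, the step 8 command line could equivalently be driven from a parameter file; a sketch using the same file names and paths assumed in this example (the parfile name full_tts_imp.par is arbitrary):

```
# full_tts_imp.par
directory=dp_dir
dumpfile=full_tts.dmp
logfile=full_tts_imp.log
metrics=y
encryption_password=secret123word456
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf'
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'
```

and then:

$ impdp system/manager@hr_pdb parfile=full_tts_imp.par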

STEP 9: (Optional) Restore user tablespaces to read-write mode on the source database

After the full transportable export has finished, you can return the user-defined tablespaces to
read-write mode at the source database if desired.
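The commands mirror step 4, with the same tablespace names:

```sql
SQL> ALTER TABLESPACE hr_1 READ WRITE;
SQL> ALTER TABLESPACE hr_2 READ WRITE;
```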

B. General examples

----- Export and import -----

First, create a directory object TEST pointing to /u01/dppump:

sqlplus / as sysdba

SQL> CREATE DIRECTORY test AS '/u01/dppump';

SQL> GRANT read, write ON DIRECTORY test TO public;

expdp \"SYS AS SYSDBA\" schemas=TEST directory=TEST dumpfile=test_062717.dmp

impdp \"/ as sysdba\" directory=TEST dumpfile=test_062717.dmp logfile=test_062717_imp.log schemas=TEST TRANSFORM=oid:n

----- Using a par file: example -----

PAR file exp.par for expdp

directory=expp
dumpfile=exp_test%U.dmp
logfile=test.log
cluster=n
tables=TEST.T1,TEST.T2
parallel=12
flashback_scn=82850177023

nohup expdp system/test parfile=exp.par &

PAR file imp.par for impdp

directory=expp
dumpfile=exp_test%U.dmp
logfile=test_imp.log
cluster=n
remap_tablespace=USERS:PROD,DEMO:PROD
remap_table=TEST.T1:TT1,TEST.T2:TT2
remap_schema=TEST:TEST1
parallel=8
partition_options=MERGE
transform=table_compression_clause:"COLUMN STORE COMPRESS FOR QUERY"
table_exists_action=REPLACE

nohup impdp system/test parfile=imp.par &
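While either background job runs, its progress can be watched from inside the database using the standard dictionary view DBA_DATAPUMP_JOBS:

```sql
-- List active Data Pump jobs and their current state
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs;
```

You can also re-attach to a running job interactively with expdp system/test attach=<job_name>.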

 

C. All major parameters used for expdp and impdp in 12c

EXPDP

Each entry below gives the parameter, an overview, its limitations, and sample expdp/impdp commands.

ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE] Instructs Export to use a particular method to unload data. If the NETWORK_LINK parameter is also specified, then direct path mode is not supported.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL ACCESS_METHOD=EXTERNAL_TABLE
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE] Specifies which data to compress before writing to the dump file set.  To make full use of all these compression options, the COMPATIBLE initialization
parameter must be set to at least 11.0.0.
• The METADATA_ONLY option can be used even if the COMPATIBLE initialization
parameter is set to 10.2.
• Compression of data using ALL or DATA_ONLY is valid only in the Enterprise
Edition of Oracle Database 11g or later, and they require that the Oracle Advanced
Compression option be enabled.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
CLUSTER=[YES | NO] Determines whether Data Pump can use Oracle Real Application Clusters (Oracle
RAC) resources and start workers on other Oracle RAC instances.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3  impdp ORCL DIRECTORY=dpump_dir1 SCHEMAS=ORCL CLUSTER=NO PARALLEL=3 NETWORK_LINK=dbs1
COMPRESSION_ALGORITHM = [BASIC | LOW | MEDIUM | HIGH]
Specifies the compression algorithm to be used when compressing dump file data. Restrictions
• To use this feature, database compatibility must be set to 12.0.0 or later.
• This feature requires that the Oracle Advanced Compression option be enabled.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp COMPRESSION=DATA_ONLY
COMPRESSION_ALGORITHM=LOW
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY] Enables you to filter what Export unloads: data only, metadata only, or both. The CONTENT=METADATA_ONLY parameter cannot be used with the
TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or with
the QUERY parameter.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp CONTENT=METADATA_ONLY impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY
DIRECTORY=directory_object  Users with access to the default
DATA_PUMP_DIR directory object do not need to use the DIRECTORY parameter at all.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
LOGFILE=dpump_dir2:expfull.log
DUMPFILE=[directory_object:]file_name [, …] For example, exp%Uaa%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth.
expdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
exp2%U.dmp PARALLEL=3
 impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp
ENCRYPTION = [ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE] Default: The default value depends upon the combination of encryption-related
parameters that are used. To enable encryption, either the ENCRYPTION or
ENCRYPTION_PASSWORD parameter, or both, must be specified.
If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION
parameter defaults to ALL.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1
ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar
ESTIMATE=[BLOCKS | STATISTICS]  BLOCKS – The estimate is calculated by multiplying the number of database blocks
used by the source objects, times the appropriate block sizes.
• STATISTICS – The estimate is calculated using statistics for each table. For this
method to be as accurate as possible, all tables should have been analyzed
recently. (Table analysis can be done with either the SQL ANALYZE statement or
the DBMS_STATS PL/SQL package.)
 If the Data Pump export job involves compressed tables, then the default size
estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS
is used. This is because the size estimate does not reflect that the data was stored
in a compressed form. To get a more accurate size estimate for compressed tables,
use ESTIMATE=STATISTICS.
• The estimate may also be inaccurate if either the QUERY or REMAP_DATA
parameter is used.
expdp ORCL TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1
DUMPFILE=estimate_stat.dmp
impdp ORCL TABLES=job_history NETWORK_LINK=source_database_link
DIRECTORY=dpump_dir1 ESTIMATE=STATISTICS (Only valid for NETWORK_LINK import)
ESTIMATE_ONLY=[YES | NO] If ESTIMATE_ONLY=YES, then Export estimates the space that would be consumed but quits without actually performing the export operation.
expdp ORCL ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=ORCL
EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=INDEX:"LIKE 'EMP%'"
Enables you to filter the metadata that is exported by specifying objects and object
types to be excluded from the export operation.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,PACKAGE,FUNCTION
expdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
impdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
FILESIZE=integer[B | KB | MB | GB | TB] Specifies the maximum size of each dump file. If the size is reached for any member of
the dump file set, then that file is closed and an attempt is made to create a new file, if
the file specification contains a substitution variable or if additional dump files have
been added to the job.
• The minimum size for a file is ten times the default Data Pump block size, which
is 4 kilobytes.
• The maximum size for a file is 16 terabytes.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3MB
FLASHBACK_SCN=scn_value Specifies the system change number (SCN) that Export will use to enable the
Flashback Query utility.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632
impdp ORCL DIRECTORY=dpump_dir1 FLASHBACK_SCN=123456 NETWORK_LINK=source_database_link (only valid if the NETWORK_LINK parameter is specified)
FLASHBACK_TIME="TO_TIMESTAMP(time-value)" The SCN that most closely matches the specified time is found, and this SCN is used to enable the Flashback utility. The export operation is performed with data that is consistent up to this SCN. Sample export parameter file:
DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:16:00', 'DD-MM-YYYY HH24:MI:SS')"
For import, put FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:40:00', 'DD-MM-YYYY HH24:MI:SS')" in a parameter file. You could then issue the following command: > impdp ORCL DIRECTORY=dpump_dir1 PARFILE=flashback_imp.par NETWORK_LINK=source_database_link
FULL=[YES | NO] FULL=YES indicates that all data and metadata are to be exported. To perform a full
export, you must have the DATAPUMP_EXP_FULL_DATABASE role.
• A full export does not, by default, export system schemas that contain Oracle-managed data and metadata. Examples of system schemas that are not exported by default include SYS, ORDSYS, and MDSYS.
• Grants on objects owned by the SYS schema are never exported.
expdp ORCL DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=YES NOLOGFILE=YES impdp ORCL DUMPFILE=dpump_dir1:expfull.dmp FULL=YES
LOGFILE=dpump_dir2:full_imp.log
INCLUDE Enables you to filter the metadata that is exported by specifying objects and object
types for the current export mode. The specified objects and all their dependent objects
are exported. Grants on these objects are also exported.
SCHEMAS=ORCL
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
INCLUDE=PROCEDURE
INCLUDE=INDEX:"LIKE 'EMP%'"
impdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
PARFILE=imp_include.par
LOGFILE To perform a Data Pump Export using Oracle Automatic Storage Management (Oracle ASM), you must specify a LOGFILE parameter that includes a directory object that does not use the Oracle ASM + notation.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp LOGFILE=hr_export.log impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir2 LOGFILE=imp.log DUMPFILE=dpump_dir1:expfull.dmp
LOGTIME=[NONE | STATUS | LOGFILE | ALL] The available options are defined as follows:
• NONE–No timestamps on status or log file messages (same as default)
• STATUS–Timestamps on status messages only
• LOGFILE–Timestamps on log file messages only
• ALL–Timestamps on both status and log file messages
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL  impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL
TABLE_EXISTS_ACTION=REPLACE
METRICS=[YES | NO] When METRICS=YES is used, the number of objects and the elapsed time are recorded
in the Data Pump log file.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL METRICS=YES  impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp METRICS=YES
NETWORK_LINK Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance. Network exports do not support LONG columns.
• When transporting a database over the network using full transportable export,
tables with LONG or LONG RAW columns that reside in administrative tablespaces
(such as SYSTEM or SYSAUX) are not supported.
expdp ORCL DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
DUMPFILE=network_export.dmp LOGFILE=network_export.log
 impdp ORCL TABLES=employees DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT
PARALLEL=integer  This parameter is valid only in the Enterprise Edition of Oracle Database 11g or
later.
• To export a table or table partition in parallel (using PQ slaves), you must have
the DATAPUMP_EXP_FULL_DATABASE role.
 expdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
impdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log
JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3
QUERY = [schema.][table_name:] query_clause Sample export parameter file:
QUERY=employees:"WHERE department_id > 10 AND salary > 10000"
NOLOGFILE=YES
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp
For import, suppose you have a parameter file, query_imp.par, that contains the following:
QUERY=departments:"WHERE department_id < 120"
You could then issue: impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp PARFILE=query_imp.par NOLOGFILE=YES
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function A common use for this option is to
mask data when moving from a production system to a test system. For example, a
column of sensitive customer data such as credit card numbers could be replaced with
numbers generated by a REMAP_DATA function. This would allow the data to retain its
essential formatting and processing characteristics without exposing private data to
unauthorized personnel.
• Remapping LOB column data of a remote table is not supported.
• Columns of the following types are not supported by REMAP_DATA: user-defined types, attributes of user-defined types, LONGs, REFs, VARRAYs, nested tables, BFILEs, and XMLType.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees
REMAP_DATA=ORCL.employees.employee_id:ORCL.remap.minus10
REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=ORCL.employees REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
SAMPLE=[[schema_name.]table_name:]sample_percent Allows you to specify a percentage of the data rows to be sampled and unloaded from
the source database.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS=schema_name [, …] Specifies that you want to perform a schema-mode export. This is the default mode for Export.
• If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can
specify only your own schema.
• The SYS schema cannot be used as a source schema for export jobs.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL,sh,oe  impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp
SERVICE_NAME=name  If you start a Data Pump job on instance D and specify CLUSTER=YES and
SERVICE_NAME=my_service, then workers can be started on instances A, B, C,
and D. Even though instance D is not in my_service it is included because it is
the instance on which the job was started.
• If you start a Data Pump job on instance A and specify CLUSTER=NO, then any
SERVICE_NAME parameter you specify is ignored and all processes will start on
instance A.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_svname2.dmp SERVICE_NAME=sales  impdp system DIRECTORY=dpump_dir1 SCHEMAS=ORCL
SERVICE_NAME=sales NETWORK_LINK=dbs1
SOURCE_EDITION Specifies the database edition from which objects will be exported. expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp SOURCE_EDITION=exp_edition
EXCLUDE=USER
TABLES=[schema_name.]table_name[:partition_name] [, …] Specifies that you want to perform a table-mode export.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
> impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
The following example shows the use of the TABLES parameter to import partitions:
> impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp
TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
TABLESPACES=tablespace_name [, …] In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both object metadata and data are unloaded.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4
TRANSPORT_TABLESPACES=tablespace_name [, …] The TRANSPORT_TABLESPACES parameter cannot be used in conjunction with
the QUERY parameter.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
TRANSPORTABLE = [ALWAYS | NEVER]
expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp TABLES=sh.sales2 TRANSPORTABLE=ALWAYS
impdp system PARTITION_OPTIONS=DEPARTITION TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp
VERSION=[COMPATIBLE | LATEST | version_string] Dump files created on Oracle Database 11g releases with the Data Pump
parameter VERSION=12 can only be imported on Oracle Database 12c Release 1
(12.1) and later.
 expdp ORCL TABLES=ORCL.employees VERSION=LATEST DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp NOLOGFILE=YES
 impdp ORCL FULL=Y DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link VERSION=12
VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], … The VIEWS_AS_TABLES
parameter can be used by itself or along with the TABLES parameter. If either is used,
Data Pump performs a table-mode export.
• The VIEWS_AS_TABLES parameter cannot be used with the
TRANSPORTABLE=ALWAYS parameter.
• Tables created using the VIEWS_AS_TABLES parameter do not contain any
hidden columns that were part of the specified view.
• The VIEWS_AS_TABLES parameter does not support tables that have columns
with a data type of LONG.
expdp scott/tiger views_as_tables=view1 directory=data_pump_dir dumpfile=scott1.dmp  impdp ORCL VIEWS_AS_TABLES=view1:view1_tab NETWORK_LINK=dblink1

IMPDP

DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS |
REJECT_ROWS_WITH_REPL_CHAR]
The SKIP_CONSTRAINT_ERRORS option specifies that you want the import
operation to proceed even if non-deferred constraint violations are encountered. It
logs any rows that cause non-deferred constraint violations, but does not stop the
load for the data object experiencing the violation.
 impdp ORCL TABLES=employees CONTENT=DATA_ONLY
DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors
PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE] A value of departition promotes each partition or subpartition to a new individual
table. The default name of the new table will be the concatenation of the table and
partition name or the table and subpartition name, as appropriate.
• If the export operation that created the dump file was performed with the
transportable method and if a partition or subpartition was specified, then the
import operation must use the departition option.
• If the export operation that created the dump file was performed with the
transportable method, then the import operation cannot use
PARTITION_OPTIONS=MERGE.
impdp system TABLES=sh.sales PARTITION_OPTIONS=MERGE
DIRECTORY=dpump_dir1 DUMPFILE=sales.dmp REMAP_SCHEMA=sh:scott
REMAP_DATAFILE=source_datafile:target_datafile Changes the name of the source data file to the target data file name in all SQL
statements where the source data file is referenced: CREATE TABLESPACE, CREATE
LIBRARY, and CREATE DIRECTORY.
DIRECTORY=dpump_dir1
FULL=YES
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'DB1$:[HRDATA.PAYROLL]tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"
You can then issue the following command:
> impdp ORCL PARFILE=payroll.par
REMAP_SCHEMA=source_schema:target_schema Loads all objects from the source schema into a target schema. > expdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp
> impdp system DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp REMAP_SCHEMA=ORCL:scott
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename Allows you to rename tables during an import operation. Only objects created by the Import will be remapped. In particular, preexisting
tables will not be remapped.
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=ORCL.employees REMAP_TABLE=ORCL.employees:emps
REMAP_TABLESPACE=source_tablespace:target_tablespace Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
 impdp ORCL REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1
DUMPFILE=employees.dmp
SKIP_UNUSABLE_INDEXES=[YES | NO] Specifies whether Import skips loading tables that have indexes that were set to the
Index Unusable state (by either the system or the user).
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log
SKIP_UNUSABLE_INDEXES=YES
SQLFILE=[directory_object:]file_name If SQLFILE is specified, then the CONTENT parameter is ignored if it is set to either
ALL or DATA_ONLY.
 impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql
TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE] Tells Import what to do if the table it is trying to create already exists. impdp ORCL TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLE_EXISTS_ACTION=REPLACE
TARGET_EDITION=name Specifies the database edition into which objects should be imported.
• This parameter is only useful if there are two or more versions of the same versionable objects in the database.
• The job version must be 11.2 or later.
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp TARGET_EDITION=exp_edition
TRANSFORM = transform_name:value[:object_type]

DISABLE_ARCHIVE_LOGGING:[Y | N]
INMEMORY:[Y | N]
INMEMORY_CLAUSE:”string with a valid in-memory parameter”
LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]
OID:[Y | N]
PCTSPACE:some_number_greater_than_zero
SEGMENT_ATTRIBUTES:[Y | N]
SEGMENT_CREATION:[Y | N]
STORAGE:[Y | N]
TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]

impdp ORCL TABLES=ORCL.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp TRANSFORM=STORAGE:N:table

Oracle Database audit (standard and FGA) using SYSLOG capture

Oracle Database can be configured to log events into a database table, XML
files or syslog. To configure Oracle Database to log events using syslog:

Configure SYSLOG

1. Execute the following commands:

mkdir -p /var/log/oracledb/
touch /var/log/oracledb/oracledb.log

2. Add the following line to /etc/rsyslog.conf:

local1.info /var/log/oracledb/oracledb.log

3. Note: the separator between local1.info and /var/log/oracledb/oracledb.log must be a tab, not a space.

4. Restart the syslog service. On Linux:

service rsyslog stop

service rsyslog start
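Since a space instead of a tab silently breaks the routing, it is worth checking the separator. A self-contained sketch of the check (it builds a sample line in a temp file; on a real system, point grep at /etc/rsyslog.conf instead):

```shell
# The grep pattern embeds a literal tab via printf, so only a tab-separated
# selector/path pair matches.
conf=$(mktemp)
printf 'local1.info\t/var/log/oracledb/oracledb.log\n' > "$conf"
if grep -q "$(printf 'local1.info\t')" "$conf"; then
  echo "tab separator OK"
else
  echo "separator is not a tab"
fi
rm -f "$conf"
```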

Then configure FGA auditing at the database level if required. To enable FGA for a particular table:

STEP 1

Create a policy on the table and column to be audited:

BEGIN
 dbms_fga.add_policy
 (
 object_schema=>'CUST',
 object_name=>'CUST_DETAILS',
 policy_name=>'TEST_AUDIT',
 audit_column => 'PASSPORT',
 statement_types => 'UPDATE, DELETE, SELECT',
 audit_condition => 'PASSPORT IS NOT NULL'
 );
END;
/

 

Here we are asking to audit the PASSPORT column in the CUST_DETAILS table for SELECT, UPDATE and DELETE statements.

It will not audit operations where PASSPORT is NULL.

The policy name for this access control is TEST_AUDIT.

STEP 2

Check the policy details:

SQL> select policy_name, object_name, object_schema, policy_text, policy_column from dba_audit_policies
/

STEP 3

Now if anyone runs a statement referencing the PASSPORT column:

SQL> select * from cust_details
/

SQL> select PASSPORT from CUST_DETAILS
/

It will be logged in sys.fga_log$, exposed through the dba_fga_audit_trail view.

SQL> select timestamp, db_user, os_user,object_schema, object_name,sql_text from dba_fga_audit_trail
/

STEP 4

To purge the audit log:

SQL> delete from sys.fga_log$;

OR

SQL> delete from dba_fga_audit_trail;

STEP 5

To delete the policy

begin
 DBMS_FGA.DROP_POLICY
 (
 object_schema => 'CUST',
 object_name => 'CUST_DETAILS',
 policy_name => 'TEST_AUDIT'
 );
end;
/

DB level parameter change for enabling SYSLOG audit capturing

5. Log in to sqlplus and execute:

SQL> ALTER SYSTEM SET AUDIT_TRAIL=OS SCOPE=SPFILE;
SQL> ALTER SYSTEM SET AUDIT_SYS_OPERATIONS=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET AUDIT_SYSLOG_LEVEL='local1.info' SCOPE=SPFILE;

Note that local1.info must match the facility.level entry configured in /etc/rsyslog.conf.

The audit_file_dest parameter should be NULL.

6. Restart the Oracle database instance.

7. Log in to the database, execute some arbitrary SQL statements, and verify that /var/log/oracledb/oracledb.log is updated accordingly.

Sample syslog output:

Dec 21 22:06:26 localhost Oracle Audit[32231]: LENGTH: "313" SESSIONID:[7] "4902919" ENTRYID:[1] "9" STATEMENT:[2] "35" USERID:[2] "CW" USERHOST:[7] "GSYC622" TERMINAL:[7] "unknown" ACTION:[3] "103" RETURNCODE:[1] "0" OBJ$CREATOR:[2] "CW" OBJ$NAME:[15] "CW_CONFIG_AUDIT" SES$ACTIONS:[16] "------S---------" SES$TID:[5] "56505" OS$USERID:[7] "eprppak" DBID:[9] "673878994"

Script for checking tablespace growth

with t as (
 select ss.run_time,ts.name,round(su.tablespace_size*dt.block_size/1024/1024/1024,2) alloc_size_gb,
 round(su.tablespace_usedsize*dt.block_size/1024/1024/1024,2) used_size_gb
 from
 dba_hist_tbspc_space_usage su,
 (select trunc(BEGIN_INTERVAL_TIME) run_time,max(snap_id) snap_id from dba_hist_snapshot
 group by trunc(BEGIN_INTERVAL_TIME) ) ss,
 v$tablespace ts,
 dba_tablespaces dt
 where su.snap_id = ss.snap_id
 and su.tablespace_id = ts.ts#
 and ts.name =upper('USERS')
 and ts.name = dt.tablespace_name )
 select e.run_time,e.name,e.alloc_size_gb,e.used_size_gb curr_used_size_gb,
 b.used_size_gb prev_used_size_gb,
 case when e.used_size_gb > b.used_size_gb
 then to_char(e.used_size_gb - b.used_size_gb)
 when e.used_size_gb = b.used_size_gb
 then '***NO DATA GROWTH'
 when e.used_size_gb < b.used_size_gb
 then '******DATA PURGED' end variance
 from t e, t b
 where e.run_time = b.run_time + 1
 order by 1
/

Script to generate resize statements that reclaim space from autoextensible datafiles

set linesize 1000 pagesize 0 feedback off trimspool on
 with
 hwm as (
 -- get highest block id from each datafiles ( from x$ktfbue as we don't need all joins from dba_extents )
 select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
 from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
 ),
 hwmts as (
 -- join ts# with tablespace_name
 select name tablespace_name,relative_fno,hwm_blocks
 from hwm join v$tablespace using(ts#)
 ),
 hwmdf as (
 -- join with datafiles, put 5M minimum for datafiles with no extents
 select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes
 from hwmts right join dba_data_files using(tablespace_name,relative_fno)
 )
 select
 case when autoextensible='YES' and maxbytes>=bytes
 then -- we generate resize statements only if autoextensible can grow back to current size
 '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
 ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
 ||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
 else -- generate only a comment when autoextensible is off
 '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
 ||'M from '||to_char(ceil(bytes/1024/1024),999999)
 ||'M after setting autoextensible maxsize higher than current size for file '
 || file_name||' */'
 end SQL
 from hwmdf
 where
 bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
 order by bytes-hwm_bytes desc
 /
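To review the generated statements before running any of them, the script (saved here under an assumed name, reclaim_space.sql) can be wrapped with spool:

```sql
SQL> spool reclaim_datafiles.sql
SQL> @reclaim_space.sql
SQL> spool off
```

Then inspect reclaim_datafiles.sql and run only the resize statements you agree with.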

Implementing the GoldenGate plug-in monitor for Oracle Cloud Control 13cR2 (13.2.0.0.0)

This note describes the procedure for implementing the GoldenGate plug-in for Oracle Cloud Control 13cR2.

These versions are required for installing the plug-in:

  • Enterprise Manager Cloud Control 13c Bundle Patch 1 (13.2.0.0.0) and later
  • Oracle GoldenGate 12c (12.3.0.1.0) and later
  • Oracle GoldenGate Plug-in for EMCC Release 13c

http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html

  • Download, install and configure Oracle GoldenGate Monitor Agent 12.2.1.2.0

http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html
– Oracle GoldenGate Monitor 12.2.1.2.0 (425 MB)

Install Java 1.8 (or later) JDK on the servers where the GoldenGate instances will be running.

Transfer jdk-8u151-linux-x64.tar.gz in binary mode from your download location to the /home/oracle directory of the server:

cd /home/oracle

[oracle@xxx ~]$ tar xvf jdk-8u151-linux-x64.tar.gz

Set the JAVA_HOME variable to the location of the JDK installation and ensure that the PATH variable includes the $JAVA_HOME/bin (or $JAVA_HOME/jre/bin) directory.
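Before continuing, it is worth checking that the path really contains a JDK. A small defensive sketch (the install path is an assumption matching the tar extraction above):

```shell
# Return 0 and print the path if the directory contains an executable bin/java.
check_java_home() {
    if [ -x "$1/bin/java" ]; then
        echo "JDK found at $1"
    else
        echo "no JDK at $1" >&2
        return 1
    fi
}

# Assumed install path from the tar extraction above; adjust as needed.
check_java_home /home/oracle/jdk1.8.0_151 || echo "set JAVA_HOME before continuing"
```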

EMCLI configuration to manually upload the GoldenGate plug-in to the Cloud Control server

For instructions, navigate from the Cloud Control console:
Setup -> Command Line Interface

https://XXX:7803/em/public_lib_download/emcli/kit/emcliadvancedkit.jar

Log in to the Cloud Control application server and perform the following steps:

[oracle@xxx u01]# /u01/app/oemcc/middleware/oracle_common/jdk/bin/java -jar /home/oracle/emcliadvancedkit.jar -install_dir=/home/oracle/emcli

Oracle Enterprise Manager 13c Release 1.

Copyright (c) 2012, 2015 Oracle Corporation.  All rights reserved.

 

EM CLI Advanced install completed successfully.

Execute "emcli help sync" from the EM CLI home (the directory where you have installed EM CLI) for further instructions.

[oracle@xxx emcli]$ export JAVA_HOME=/u01/app/oemcc/middleware/oracle_common/jdk

[oracle@xxx emcli]$ export PATH=$JAVA_HOME/bin:$PATH

[oracle@xxx emcli]$ ./emcli login -username=sysman -password=sysm4n4dm1n

Error: No current OMS. Run setup to establish an OMS connection.

./emcli setup -url=https://10.49.3.22:7803/em -username=SYSMAN -trustall

Oracle Enterprise Manager 13c Release 1.

Copyright (c) 1996, 2015 Oracle Corporation and/or its affiliates. All rights reserved.

Enter password

Emcli setup successful

./emcli import_update -file=/home/oracle/13.2.1.0.0_oracle.fmw.gg_2000_0.opar -omslocal

Select Oracle GoldenGate under the Plug-in Name column and click the Download button
After the download is completed, the status will change from Available to Downloaded

Setup -> Extensibility -> Plug-ins -> Middleware

This step deploys the plug-in on the management server.

Log in to Enterprise Manager Cloud Control to complete the deployment:
a. Select Setup, Extensibility, Plug-ins to open the Plug-ins page.
b. Expand the Middleware folder.
c. Select Oracle GoldenGate, Deploy on, Management Servers… to start the
deployment process.
d. Enter the Repository SYS password and click Continue.
A series of prerequisite system checks begins.
e. Click Next after each system check completes to continue to the next check. Do this until all of the prerequisite checks are complete.
f. Click Next and then Deploy.

The next step is to deploy the management agent Plug-in on both the nodes of the RAC cluster where the GoldenGate instances are going to be running.

1. Select Setup, Extensibility, Plug-ins to open the Plug-ins page.
2. Expand the Middleware folder.
3. Select Oracle GoldenGate, Deploy on, Management Agent… to start the
deployment process.

4. Select the required version of the plug-in, then click Continue.

5. Select all the EM Agents where you want to install plug-in.
6. Click Continue then click Deploy.

Once the Enterprise Manager Plug-In for Oracle GoldenGate is deployed, an Oracle
GoldenGate item appears under Targets in Enterprise Manager Cloud Control.

Installation of the GoldenGate Monitor Agent on the server where GoldenGate is installed

 How to install/configure Oracle GoldenGate Monitor Agent 12.2.1.x with GoldenGate “core”? [VIDEO] (Doc ID 2171015.1)                                                                                               

 [oracle@xxx ~]$ export JAVA_HOME=/home/oracle/jdk1.8.0_151/

[oracle@xxx ~]$ export PATH=$JAVA_HOME/bin:$PATH

[oracle@xxx ~]$ java -version

java version "1.8.0_151"

Java(TM) SE Runtime Environment (build 1.8.0_151-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)

This step requires an X display, so use MobaXterm (or another X11-forwarding client).

[oracle@xxx ogg_home]$ java -jar fmw_12.2.1.2.0_ogg.jar

Launcher log file is /tmp/OraInstall2017-12-20_01-03-55AM/launcher2017-12-20_01-03-55AM.log.

Extracting the installer . . . . . Done

Checking if CPU speed is above 300 MHz.   Actual 2493.748 MHz    Passed

Checking monitor: must be configured to display at least 256 colors.   Actual 16777216    Passed

Checking swap space: must be greater than 512 MB.   Actual 17407 MB    Passed

Checking if this platform requires a 64-bit JVM.   Actual 64    Passed (64-bit not required)

Checking temp space: must be greater than 300 MB.   Actual 6675 MB    Passed

Patch 26982776: Oracle GoldenGate Monitor 12.2.1.2.171115 (PS2 BP3) (Cumulative) Install

How To Upgrade Existing Oracle GoldenGate Monitor Agent 12.1.3.x to version 12.1.3.0.4? [VIDEO] (Doc ID 2024198.1)

export ORACLE_HOME=/u01/app/oracle/ogg_home/oggmon

export PATH=$ORACLE_HOME/OPatch:$PATH

[oracle@xxx oggmon]$ opatch lsinv

Oracle Interim Patch Installer version 13.9.1.0.0

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

 

 

Oracle Home       : /u01/app/oracle/ogg_home/oggmon

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/ogg_home/oggmon/oraInst.loc

OPatch version    : 13.9.1.0.0

OUI version       : 13.9.1.0.0

Log file location : /u01/app/oracle/ogg_home/oggmon/cfgtoollogs/opatch/opatch2017-12-20_01-38-19AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/ogg_home/oggmon"

Lsinventory Output file location : /u01/app/oracle/ogg_home/oggmon/cfgtoollogs/opatch/lsinv/lsinventory2017-12-20_01-38-19AM.txt

--------------------------------------------------------------------------------

Local Machine Information::

Hostname:XXX

ARU platform id: 226

ARU platform description:: Linux x86-64

Interim patches (6) :

Patch  19030178     : applied on Wed Dec 20 01:12:44 CLST 2017

Unique Patch ID:  19234068

Patch description:  "One-off"

Created on 4 Aug 2015, 05:40:22 hrs UTC

Bugs fixed:

19030178

Patch  19154304     : applied on Wed Dec 20 01:12:12 CLST 2017

Unique Patch ID:  19278518

Patch description:  "One-off"

Created on 25 Aug 2015, 07:10:13 hrs UTC

Bugs fixed:

19154304

 

Patch  19632480     : applied on Wed Dec 20 01:11:41 CLST 2017

Unique Patch ID:  19278519

Patch description:  "One-off"

Created on 25 Aug 2015, 07:19:43 hrs UTC

Bugs fixed:

19632480

 

Patch  19795066     : applied on Wed Dec 20 01:11:12 CLST 2017

Unique Patch ID:  19149348

Patch description:  "One-off"

Created on 16 Jul 2015, 15:51:43 hrs UTC

Bugs fixed:

19795066

 

Patch  21663638     : applied on Wed Dec 20 01:10:42 CLST 2017

Unique Patch ID:  20477024

Patch description:  "One-off"

Created on 31 Aug 2016, 21:01:13 hrs UTC

Bugs fixed:

21663638

 

Patch  22754279     : applied on Wed Dec 20 01:10:12 CLST 2017

Unique Patch ID:  20383951

Patch description:  "One-off"

Created on 9 Jul 2016, 00:36:58 hrs UTC

Bugs fixed:

22754279

--------------------------------------------------------------------------------

OPatch succeeded.

[oracle@xxx ~]$ cd 26982776

[oracle@xxx 26982776]$ opatch apply

Oracle Interim Patch Installer version 13.9.1.0.0

Copyright (c) 2017, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/ogg_home/oggmon

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/ogg_home/oggmon/oraInst.loc

OPatch version    : 13.9.1.0.0

OUI version       : 13.9.1.0.0

Log file location : /u01/app/oracle/ogg_home/oggmon/cfgtoollogs/opatch/opatch2017-12-20_01-40-23AM_1.log

OPatch detects the Middleware Home as "/u01/app/oracle/ogg_home/oggmon"

Verifying environment and performing prerequisite checks…

OPatch continues with these patches:   26982776

Do you want to proceed? [y|n]

y

User Responded with: Y

All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.

(Oracle Home = '/u01/app/oracle/ogg_home/oggmon')

Is the local system ready for patching? [y|n]

y

User Responded with: Y

Backing up files…

Applying interim patch '26982776' to OH '/u01/app/oracle/ogg_home/oggmon'

ApplySession: Optional component(s) [ oracle.rcu.oggmon, 12.2.1.2.0 ] , [ oracle.rcu.oggmon, 12.2.1.2.0 ] , [ oracle.ogg.monitor.server, 12.2.1.2.0 ] , [ oracle.fmw.upgrade.oggmon, 12.2.1.2.0 ] , [ oracle.fmw.upgrade.oggmon, 12.2.1.2.0 ]  not present in the Oracle Home or a higher version is found.

 

Patching component oracle.ogg.monitor.agent, 12.2.1.2.0…

 

Patching component oracle.ogg.monitor.agent, 12.2.1.2.0…

Patch 26982776 successfully applied.

Log file location: /u01/app/oracle/ogg_home/oggmon/cfgtoollogs/opatch/opatch2017-12-20_01-40-23AM_1.log

OPatch succeeded.

Enable monitoring of GG from agent

How To Enable Monitoring For GoldenGate 12.3.x Targets Using Oracle Enterprise Manager 13c R2+? (Doc ID 2314622.1)

1. Create the agent instance

 cd /u01/app/oracle/ogg_home/oggmon/oggmon/ogg_agent

[oracle@XXX ogg_agent]$ export JAVA_HOME=/home/oracle/jdk1.8.0_151/

[oracle@XXX ogg_agent]$ export PATH=$JAVA_HOME/bin:$PATH

 

[oracle@XXX ogg_agent]$ ./createMonitorAgentInstance.sh

Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/ogg_home

Please enter absolute path of OGG Agent instance : /u01/app/oracle/ogg_home/instance1

Please enter unique name to replace timestamp in startMonitorAgent script (startMonitorAgentInstance_20171220014616.sh) :

Successfully created OGG Agent instance.

2. Create the Oracle Wallet

Add the password that the Oracle Management Agent will use to connect to the Oracle GoldenGate agent.
Navigate to the Oracle GoldenGate agent instance directory and run the pw_agent_util.sh script.

[oracle@XXX ogg_agent]$ cd /u01/app/oracle/ogg_home/instance1

[oracle@XXX instance1]$ ls -ltr

total 32

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 dirprm

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 backup

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 dirwlt

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 dircrt

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 dirchk

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 cfg

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 logs

drwxr-xr-x. 2 oracle oinstall 4096 Dec 20 01:46 bin

[oracle@XXX instance1]$ rm -rf dirwlt

[oracle@XXX instance1]$ cd bin

[oracle@XXX bin]$ ls -ltr

total 12

-rwxr--r--. 1 oracle oinstall 1242 Dec 20 01:46 pw_agent_util.sh

-rwxr--r--. 1 oracle oinstall  433 Dec 20 01:46 displayMonitorAgentVersion.sh

-rwxr--r--. 1 oracle oinstall  379 Dec 20 01:46 startMonitorAgentInstance_20171220014616.sh

[oracle@XXX bin]$ ./pw_agent_util.sh -jagentonly

Please create a password for Java Agent:

Please confirm password for Java Agent:

Dec 20, 2017 1:48:56 AM oracle.security.jps.JpsStartup start

INFO: Jps initializing.

Dec 20, 2017 1:48:57 AM oracle.security.jps.JpsStartup start

INFO: Jps started.

Wallet is created successfully.

3. Configure the GoldenGate instance for OEM 13cR2

Navigate to the Oracle GoldenGate agent instance directory (/u01/app/oracle/ogg_home/instance1) and edit the cfg/Config.properties file.
In our case we have changed these values:

agent.type.enabled=OEM

jagent.host=xxx

jagent.username=oracle

Edit the GLOBALS file under the GoldenGate home and add the parameter ENABLEMONITORING.
Create the datastore:

GGSCI (kens-racnode1) 1> CREATE DATASTORE
NOTE:
As indicated in Doc ID 2171015.1,

ALL “datastore” commands are deprecated in Oracle GoldenGate 12.3.x and above.

If using GoldenGate 12.3.x and above then DO NOT execute delete datastore, create datastore commands shown below or you will
receive “command not found” type errors.

GGSCI (xxx1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

JAGENT      STOPPED

PMSRVR      STOPPED

EXTRACT     RUNNING     DPRAF01     00:00:00      00:00:06

EXTRACT     RUNNING     EXTRAF01    00:00:00      00:00:04

stop *
stop manager

delete datastore  <-- confirm deletion of the datastore. NOT needed for OGG 12.3.x and above
create datastore  <-- use "create datastore mmap" instead IF GoldenGate is installed on shared disk. NOT needed for OGG 12.3.x and above

start manager
start *
start jagent  <-- in GoldenGate 12.3.x and above this command will also start PMSRVR
info all  <-- confirm "jagent" and all other processes are up and running

GGSCI (xxx) 6> info all

 

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

 

MANAGER     RUNNING

JAGENT      RUNNING

PMSRVR      RUNNING

EXTRACT     RUNNING     DPRAF01     00:00:00      00:00:03

EXTRACT     RUNNING     EXTRAF01    00:00:01      00:00:04

EXTRACT     STOPPED     EXTRAF02    00:00:00      18:28:33

Now we need to discover the GoldenGate processes from Cloud Control.

Login to OEM 13c R2 console and discover/promote the GoldenGate targets
https://xxx:7803/em/login.jsp

Go to
Setup (gear icon on top toolbar) -> Add Target -> Configure Auto Discovery -> Targets on Host
Select/highlight the "Host" -> click Discovery Modules -> select/highlight GoldenGateDiscovery
-> click Edit Parameters

Enter:
JAgent User Name: oracle
JAgent Password
JAgent Host Name: IP of the server where GoldenGate is installed

Click OK
Click OK

Select/highlight the "Host" -> click Discover Now

After the discovery procedure finishes -> click Close
Click the xx in the "Discovered Targets" column

In the "Auto Discovery Results" page, select the discovered GoldenGate target and click Promote.

You can select one of the targets associated with the "instance"; when you click "Promote",
it will bring up all the targets of the instance for a final "Promote" click.

Once the targets are promoted, click "Close".


Now you can check the GoldenGate target status via Targets -> GoldenGate.

 

Upgrade of an Oracle 11g RAC database to Oracle 12c RAC using DBUA

This document provides guidelines for upgrading Oracle 11g RAC Clusterware and Database to Oracle 12c RAC.

Author:-Saibal Ghosh

https://www.linkedin.com/in/saibal-ghosh-ccsk-prince2-%C2%AE-469b0a7/

1 Tasks Before Upgrade

1.1 Backup the Database:

Before we start the upgrade, it is a best practice to back up the database, the Oracle Cluster Registry (OCR), the Oracle database home, and the Grid home.

1.2  LINUX X86-64 DATABASE FEATURES PACKAGE REQUIREMENTS:-

The following packages (or later versions) need to be installed on the system for the upgrade to go through successfully:

 

binutils-2.23.52.0.1-12.el7.x86_64

compat-libcap1-1.10-3.el7.x86_64

gcc-4.8.2-3.el7.x86_64

gcc-c++-4.8.2-3.el7.x86_64

glibc-2.17-36.el7.i686

glibc-2.17-36.el7.x86_64

glibc-devel-2.17-36.el7.i686

glibc-devel-2.17-36.el7.x86_64

ksh

libaio-0.3.109-9.el7.i686

libaio-0.3.109-9.el7.x86_64

libaio-devel-0.3.109-9.el7.i686

libaio-devel-0.3.109-9.el7.x86_64

libgcc-4.8.2-3.el7.i686

libgcc-4.8.2-3.el7.x86_64

libstdc++-4.8.2-3.el7.i686

libstdc++-4.8.2-3.el7.x86_64

libstdc++-devel-4.8.2-3.el7.i686

libstdc++-devel-4.8.2-3.el7.x86_64

libXi-1.7.2-1.el7.i686

libXi-1.7.2-1.el7.x86_64

libXtst-1.2.2-1.el7.i686

libXtst-1.2.2-1.el7.x86_64

make-3.82-19.el7.x86_64

sysstat-10.1.5-1.el7.x86_64

 

Pluggable Authentication Modules for Linux (Linux PAM)

We need to install the latest Linux PAM (Pluggable Authentication Modules for Linux) library for our Linux distribution. PAM provides greater flexibility for system administrators to choose how applications authenticate users. On Linux, external scheduler jobs require PAM.

 

Oracle JDBC/OCI Drivers
We can use the following optional JDK versions with the Oracle
JDBC/OCI drivers; however, they are not required for the installation:
JDK 6 (Java SE Development Kit 1.6.0_21)
JDK 1.5.0-24 (JDK 5.0) with the JNDI extension

1.3  NETWORK TIME PROTOCOL SETTING:-

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.
We have two options for time synchronization: an operating system configured network time protocol (NTP), or Oracle Cluster Time Synchronization Service. Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If we use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd ) starts up in observer mode. If we do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server.
If we have NTP daemons on our server but we cannot configure them to synchronize time with a time server, and we want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then we need to deactivate and deinstall the NTP.
To deactivate the NTP service, we must stop the existing ntpd service, disable it from the initialization sequences, and remove the ntp.conf file. To complete these steps on Oracle Linux, we run the following commands as the root user:

# /sbin/service ntpd stop

# chkconfig ntpd off

# rm /etc/ntp.conf

 

Alternatively, rename it: mv /etc/ntp.conf /etc/ntp.conf.org

Also we need to remove the following file:

/var/run/ntpd.pid

This file maintains the pid for the NTP daemon. When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
To confirm that ctssd is active after upgrade, we need to enter the following command as the Grid installation owner:

$ crsctl check ctss

If we are using NTP, and we prefer to continue using it instead of Cluster Time Synchronization Service, then we need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward. Then we restart the network time protocol daemon after we complete this task.

 

To do this, on Oracle Linux, Red Hat Linux, and Asianux systems, we edit the /etc/sysconfig/ntpd
file to add the -x flag, as in the following example:

 

# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to 'yes' to sync hw clock after successful ntpdate

SYNC_HWCLOCK=no

# Additional options for ntpdate

NTPDATE_OPTIONS=""

Then, restart the NTP service.

# /sbin/service ntpd restart

1.4 CHECKING IF THE CVUQDISK PACKAGE FOR LINUX IS ALREADY INSTALLED:-

We use the following command to find if we have an existing version of the cvuqdisk package:

# rpm -qi cvuqdisk

We need to ensure that the above package is installed; otherwise we need to install it. We also need to ensure that the Oracle software owner user (oracle or grid) has the Oracle Inventory group (oinstall) as its primary group and is a member of the appropriate OSDBA group.

1.5  CHECKING RESOURCE LIMITS:

1. Log in as an installation owner.
2. We need to check the soft and hard limits for the file descriptor setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Sn

1024

$ ulimit -Hn

65536

3. We need to check the soft and hard limits for the number of processes available to a user.
 We need to ensure that the result is in the recommended range. For example:

$ ulimit -Su

2047

$ ulimit -Hu

16384

4. We need to check the soft limit for the stack setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Ss

10240

$ ulimit -Hs

32768
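The same checks can be scripted rather than eyeballed. A hedged Python sketch using the standard resource module; the minimum values are the ones recommended in the text above:

```python
import resource

# (resource limit, recommended soft minimum, recommended hard minimum)
CHECKS = {
    "open file descriptors": (resource.RLIMIT_NOFILE, 1024, 65536),
    "processes per user":    (resource.RLIMIT_NPROC, 2047, 16384),
}

for name, (rlim, soft_min, hard_min) in CHECKS.items():
    soft, hard = resource.getrlimit(rlim)
    soft_ok = soft == resource.RLIM_INFINITY or soft >= soft_min
    hard_ok = hard == resource.RLIM_INFINITY or hard >= hard_min
    status = "OK" if (soft_ok and hard_ok) else "TOO LOW"
    print(f"{name}: soft={soft} hard={hard} -> {status}")
```

Run this as each Oracle software installation owner, just as the ulimit checks above are repeated per owner.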

We need to repeat this procedure for each Oracle software installation owner.
As an example, the settings in the  Production Server are as follows:

 

 

2. Upgrade the Grid Infrastructure:

Step 1: Unset the following environment variables:

1.    ORACLE_BASE

2.    ORACLE_HOME

3.    GI_HOME

4.    TNS_ADMIN

5.    ORA_NLS10

Step 2: Check that there is enough space on the mount point and in /tmp, and that there is at least 4 GB of free space in the OCR/voting disk diskgroup, because the MGMTDB is created in that diskgroup.
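One hedged way to verify the diskgroup headroom (the diskgroup name below is an assumption; substitute your own OCR/voting diskgroup name):

```sql
-- Run as SYSASM/SYSDBA on the ASM instance; confirm FREE_MB leaves at
-- least 4 GB of headroom for the MGMTDB.
SELECT name, total_mb, free_mb
FROM   v$asm_diskgroup
WHERE  name = 'OCRVOTE';
```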
Step 3: Back up the Cluster and Oracle Homes and the OCR
Step 4: Check crs active and software versions are the same

  • crsctl query crs activeversion

  • crsctl query crs softwareversion

Step 5: Validate the Node readiness for the upgrade. Run a command similar to the following:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /orasw/app/11.2.0/grid -dest_crshome /orasw/app/12.1.0/grid -dest_version 12.1.0.2.0

Step 6: Start the Upgrade. The screenshots are self-explanatory.

Screen 1) we select the Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management option.

Screen 2) We choose the default language: English.

Screen 3) We select the nodes to form part of the RAC Cluster in the upgrade.

Screen 4)  EM Cloud Control is not selected at this point.

 

Screen 5)  The Operating System groups are selected from the drop down list.

Screen 6) Specifying the Oracle base and software location. We do not use the default software location, and thus we see a warning message.

Screen 7)  We prefer to manually run the configuration scripts and do not check ‘Automatically run configuration scripts’ checkbox.

Screen 8) In the Prerequisite Checks page we find that the swap space check fails. Therefore we manually increase the swap space and move on to the next step.

Screen 9)  This is the Summary page and we see a consolidated page of the information that is being taken into the upgrade process.

Screen 10) The pop-up appears for running the rootupgrade.sh script.

Screen 11) The Product installation is continuing.

 

Screen 12)  We get the message that the upgrade of the Grid Infrastructure was successful.

 

3. Installing the database software

We now need to install the database software. For this we need to run the runInstaller.

Screen 1) Beginning the process of installation of the database software. We choose not to provide an email id.

 

Screen 2) Installing only the database software.

Screen 3)  Oracle Real Application Clusters database installation.

Screen 4)  We select the nodes to form part of the cluster.

Screen 5) The default language is English.

Screen 6) We chose the Enterprise Edition.

Screen 7) The Oracle base and the software location.

Screen 8) The Operating System Groups.

Screen 9) We get the swap size /dev/shm warning, which is ignorable in this case.

Screen 10) The Summary page.

Screen 11) The final screen-the installation of the Oracle database software is successful.

 

Now, the final step would be to run the Database Upgrade Assistant to actually upgrade the database.

4.  Database Upgrade

Screen 2) We choose the database to upgrade.

Screen 3) The prerequisite checks.

Screen 4) Prerequisite checks continuing. On the next screen we take steps to recompile invalid objects.

Screen 4) Upgrade Options.

Step 5) Management Options.

Screen 6) We choose to have our own backup policy.

 

Screen 7) The Summary page.

Screen 8) The Progress page. The pop-up alert is because of uncompiled PL/SQL objects; since we have already planned to recompile invalid objects, this error is ignorable and we continue ahead.

 

Screen 8) The Progress page continues.

 Screen 9) The Final screen shows that the upgrade completed successfully.

The Database and Grid Infrastructure were successfully upgraded to Oracle 12.1.0.2.


 

ORA-01111: name for data file is unknown – rename to correct file

Error in dataguard alert log/start managed recovery process:-

SYS@XXX>alter database recover managed standby database;
alter database recover managed standby database
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 61 is unknown - rename to correct file
ORA-01110: data file 61: '/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061'
ORA-01157: cannot identify/lock data file 61 - see DBWR trace file
ORA-01111: name for data file 61 is unknown - rename to correct file
ORA-01110: data file 61: '/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061'

 

Solution:-

Check the exact size of the datafile on the primary for file_id=61.
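The lookup itself is not shown in the text; a hedged query to run on the primary (file 61 as in the error above):

```sql
-- On the PRIMARY: get the exact size in bytes of the file the standby
-- cannot name, to use in the CREATE DATAFILE command below.
SELECT file_name, bytes
FROM   dba_data_files
WHERE  file_id = 61;
```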

In Standby,

SYS@XXX>alter system set standby_file_management=manual;

System altered.

SYS@XXX>Alter database create datafile '/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061' as '+DATA/' size 34358689792;

Database altered.

SYS@XXX>alter system set standby_file_management=auto;

System altered.

Now you will be able to start managed recover process.

Active RMAN duplicate clone 12c using section size and compress backupset

Overview of New PULL method

The original "push" process is based on image copies. With Oracle Database 12c, a "pull" (or restore) process is based on backup sets. A connection is first established with the source database. The auxiliary instance then retrieves the required database files from the source database as backup sets. A restore operation is performed from the auxiliary instance. Therefore, fewer resources are used on the source database.

TNS connections are required on both the target and auxiliary instances. Based on the DUPLICATE clauses, RMAN dynamically determines which process to use (push or pull). This ensures that existing customized scripts continue to function.

  • When you specify USING BACKUPSET, RMAN uses the pull method.

  • When you specify SET ENCRYPTION before the DUPLICATE command, RMAN

automatically uses the pull method and creates backup sets. The backups sent to the destination are encrypted.

  • The SECTION SIZE clause divides data files into subsections that are restored in parallel across multiple channels on the auxiliary database. For an effective use of parallelization, allocate more AUXILIARY channels.

  • With the USING COMPRESSED BACKUPSET clause, the files are transferred as compressed backup sets. RMAN uses unused block compression while creating backups, thus reducing the size of backups that are transported over the network.

NOOPEN

You might duplicate a database with RMAN for various reasons. In earlier versions, a recovered duplicated database was automatically opened, and by default this behavior continues in Oracle Database 12c.
What is new is the option to finish the duplication with the database in a mounted but not opened state. This is useful when an attempt to open the database would produce errors, and whenever you want to modify initialization settings that are otherwise difficult to change.
For example, you may want to move the database to ASM, or you may be performing an upgrade, where the database must not be opened with RESETLOGS prior to running the upgrade scripts.
The NOOPEN option allows the duplication to create a new database as part of an upgrade procedure and leaves the database in a state ready for opening in upgrade mode and subsequent execution of the upgrade scripts.
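As a sketch of the syntax (connection details as in the duplication steps later in this note; treat this as illustrative rather than a tested command):

```sql
-- RMAN: finish the duplicate with the new database mounted but NOT opened.
DUPLICATE TARGET DATABASE TO rcat
  FROM ACTIVE DATABASE
  NOOPEN;
-- Open later yourself, e.g. ALTER DATABASE OPEN RESETLOGS;
-- or, for an upgrade, ALTER DATABASE OPEN RESETLOGS UPGRADE;
```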

Multisection backups are now also available for image copies.

 

Active duplication step

Create an init parameter file from the source and change the relevant parameters, such as control_files and db_name. A sample init.ora is provided below.

[oracle@rac1 dbs]$ cat initrcat.ora
*._catalog_foreign_restore=FALSE
*.audit_file_dest='/u01/app/product/admin/rcat/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='+DATA/controlrcat.clt'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='rcat'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4785m
*.diagnostic_dest='/u01/app/product'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rcatXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.optimizer_adaptive_features=FALSE
*.parallel_max_servers=8
*.parallel_min_servers=0
*.pga_aggregate_target=570m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.session_cached_cursors=1000
*.sga_target=1710m
*.shared_pool_size=629145600
*.undo_tablespace='UNDOTBS1'

 

Create required audit directory in target

mkdir -p /u01/app/product/admin/rcat/adump

Add a static entry to listener.ora under $GRID_HOME/network/admin:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = rcat)
      (ORACLE_HOME = /u01/app/product/ora12c/12.1.0/dbhome_1)
      (GLOBAL_DBNAME = rcat)
    )
  )

Add corresponding entry in tnsnames.ora

[oracle@rac1 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/product/ora12c/12.1.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

############Source###################

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

 

#######Target#################
RCAT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.101)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rcat)
      (UR=A)
    )
  )

 

The (UR=A) clause for TNS connect strings was created in response to an enhancement request.
It can be inserted into the CONNECT_DATA section of a TNS connect string to allow a privileged or administrative user to connect via the listener even when the service handler is blocking connections for non-privileged users. This feature was introduced in Oracle 10g.

If UR=A is not added, you will get the following error when connecting to the target in NOMOUNT state in the next steps:

ORA-12528: TNS:listener: all appropriate instances are blocking new connections

Create password file

cd $ORACLE_HOME/dbs

orapwd file=orapwrcat password=oracle

Now start target database in nomount

export ORACLE_SID=rcat

[oracle@rac1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Dec 9 23:20:41 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> shutdown abort;
ORACLE instance shut down.
SQL> startup nomount pfile='initrcat.ora';
ORACLE instance started.

Total System Global Area 1795162112 bytes
Fixed Size 2925456 bytes
Variable Size 805309552 bytes
Database Buffers 973078528 bytes
Redo Buffers 13848576 bytes

 

Verify that the password file is working as expected on the target:

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          1

Register in listener

SQL>alter system register;

Now connect to the target and auxiliary instances and start the duplicate using the new 12c options:

[oracle@rac1 dbs]$ rman target sys/oracle@orcl auxiliary sys/oracle@rcat

Recovery Manager: Release 12.1.0.2.0 – Production on Sat Dec 9 23:24:40 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1489144156)
connected to auxiliary database: RCAT (not mounted)

RMAN> CONFIGURE DEVICE TYPE disk PARALLELISM 4;
using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> duplicate target database to rcat from active database section size 500M using compressed backupset;

Starting Duplicate Db at 09-DEC-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=34 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=35 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=36 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=37 device type=DISK
current log archived

contents of Memory Script:
{
sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

contents of Memory Script:
{
sql clone "alter system set db_name =
''ORCL'' comment=
''Modified by RMAN duplicate'' scope=spfile";
sql clone "alter system set db_unique_name =
''RCAT'' comment=
''Modified by RMAN duplicate'' scope=spfile";
shutdown clone immediate;
startup clone force nomount
restore clone from service 'orcl' using compressed backupset
primary controlfile;
alter clone database mount;
}
executing Memory Script

sql statement: alter system set db_name = ''ORCL'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set db_unique_name = ''RCAT'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

Starting restore at 09-DEC-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=33 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=35 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=36 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=37 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=+DATA/controlrcat.clt
Finished restore at 09-DEC-17

database mounted

contents of Memory Script:
{
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
set newname for clone datafile 7 to new;
set newname for clone datafile 8 to new;
set newname for clone datafile 15 to new;
set newname for clone datafile 16 to new;
set newname for clone datafile 17 to new;
restore
from service 'orcl' section size
500 m using compressed backupset
clone database
;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA/RCAT/DATAFILE/system.313.962321229
channel ORA_AUX_DISK_1: restoring section 1 of 2
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00003 to +DATA/RCAT/DATAFILE/sysaux.325.962321231
channel ORA_AUX_DISK_2: restoring section 1 of 2
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00004 to +DATA/RCAT/DATAFILE/undotbs1.326.962321233
channel ORA_AUX_DISK_3: restoring section 1 of 1
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00005 to +DATA/RCAT/DATAFILE/system.327.962321241
channel ORA_AUX_DISK_4: restoring section 1 of 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:27
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00001 to +DATA/RCAT/DATAFILE/system.313.962321229
channel ORA_AUX_DISK_3: restoring section 2 of 2
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:01:25
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00003 to +DATA/RCAT/DATAFILE/sysaux.325.962321231
channel ORA_AUX_DISK_4: restoring section 2 of 2
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:02:03
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00006 to +DATA/RCAT/DATAFILE/users.328.962321353
channel ORA_AUX_DISK_2: restoring section 1 of 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:05
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00007 to +DATA/RCAT/DATAFILE/sysaux.329.962321359
channel ORA_AUX_DISK_2: restoring section 1 of 2
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:01:46
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00007 to +DATA/RCAT/DATAFILE/sysaux.329.962321359
channel ORA_AUX_DISK_3: restoring section 2 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:25
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA/RCAT/DATAFILE/undotbs2.330.962321375
channel ORA_AUX_DISK_1: restoring section 1 of 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:13
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00015 to +DATA/RCAT/DATAFILE/system.331.962321379
channel ORA_AUX_DISK_3: restoring section 1 of 1
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:55
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00016 to +DATA/RCAT/DATAFILE/sysaux.332.962321381
channel ORA_AUX_DISK_4: restoring section 1 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00016 to +DATA/RCAT/DATAFILE/sysaux.332.962321381
channel ORA_AUX_DISK_1: restoring section 2 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00017 to +DATA/RCAT/DATAFILE/my_tbs.333.962321401
channel ORA_AUX_DISK_1: restoring section 1 of 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:09
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:01:08
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:01:50
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:01:37
Finished restore at 09-DEC-17

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
restore clone force from service 'orcl' using compressed backupset
archivelog from scn 3152882;
switch clone datafile all;
}
executing Memory Script

Starting restore at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=47
channel ORA_AUX_DISK_2: starting archived log restore to default destination
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: restoring archived log
archived log thread=1 sequence=48
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:02
Finished restore at 09-DEC-17

datafile 1 switched to datafile copy
input datafile copy RECID=27 STAMP=962321483 file name=+DATA/RCAT/DATAFILE/system.313.962321229
datafile 3 switched to datafile copy
input datafile copy RECID=28 STAMP=962321483 file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231
datafile 4 switched to datafile copy
input datafile copy RECID=29 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233
datafile 5 switched to datafile copy
input datafile copy RECID=30 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/system.327.962321241
datafile 6 switched to datafile copy
input datafile copy RECID=31 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/users.328.962321353
datafile 7 switched to datafile copy
input datafile copy RECID=32 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359
datafile 8 switched to datafile copy
input datafile copy RECID=33 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375
datafile 15 switched to datafile copy
input datafile copy RECID=34 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/system.331.962321379
datafile 16 switched to datafile copy
input datafile copy RECID=35 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381
datafile 17 switched to datafile copy
input datafile copy RECID=36 STAMP=962321486 file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401

contents of Memory Script:
{
set until scn 3153106;
recover
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

starting media recovery

archived log for thread 1 with sequence 47 is already on disk as file +DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_47.336.962321481
archived log for thread 1 with sequence 48 is already on disk as file +DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_48.337.962321483
archived log file name=+DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_47.336.962321481 thread=1 sequence=47
archived log file name=+DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_48.337.962321483 thread=1 sequence=48
media recovery complete, elapsed time: 00:00:01
Finished recover at 09-DEC-17
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

contents of Memory Script:
{
sql clone "alter system set db_name =
''RCAT'' comment=
''Reset to original value by RMAN'' scope=spfile";
sql clone "alter system reset db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set db_name = ''RCAT'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset db_unique_name scope=spfile
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "RCAT" RESETLOGS ARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 SIZE 50 M ,
GROUP 2 SIZE 50 M
DATAFILE
'+DATA/RCAT/DATAFILE/system.313.962321229',
'+DATA/RCAT/DATAFILE/system.327.962321241',
'+DATA/RCAT/DATAFILE/system.331.962321379'
CHARACTER SET AL32UTF8

sql statement: ALTER DATABASE ADD LOGFILE

INSTANCE 'i2'
GROUP 3 SIZE 50 M ,
GROUP 4 SIZE 50 M

contents of Memory Script:
{
set newname for clone tempfile 1 to new;
set newname for clone tempfile 2 to new;
set newname for clone tempfile 3 to new;
switch clone tempfile all;
catalog clone datafilecopy "+DATA/RCAT/DATAFILE/sysaux.325.962321231",
"+DATA/RCAT/DATAFILE/undotbs1.326.962321233",
"+DATA/RCAT/DATAFILE/users.328.962321353",
"+DATA/RCAT/DATAFILE/sysaux.329.962321359",
"+DATA/RCAT/DATAFILE/undotbs2.330.962321375",
"+DATA/RCAT/DATAFILE/sysaux.332.962321381",
"+DATA/RCAT/DATAFILE/my_tbs.333.962321401";
switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file
renamed tempfile 2 to +DATA in control file
renamed tempfile 3 to +DATA in control file

cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231 RECID=1 STAMP=962321529
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233 RECID=2 STAMP=962321529
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/users.328.962321353 RECID=3 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359 RECID=4 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375 RECID=5 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381 RECID=6 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401 RECID=7 STAMP=962321530

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=962321529 file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=962321529 file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/users.328.962321353
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359
datafile 8 switched to datafile copy
input datafile copy RECID=5 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375
datafile 16 switched to datafile copy
input datafile copy RECID=6 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381
datafile 17 switched to datafile copy
input datafile copy RECID=7 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
sql clone “alter pluggable database all open”;
}
executing Memory Script

sql statement: alter pluggable database all open
Cannot remove created server parameter file
Finished Duplicate Db at 09-DEC-17
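The whole duplicate session above boils down to a short, repeatable command file. A minimal sketch, assuming the TNS aliases `orcl` (source) and `rcat` (clone) and the `sys/oracle` credentials used in this walkthrough; it only generates and prints the RMAN script, it does not run it:

```shell
# Hedged sketch: build the RMAN command file for the active-database
# duplicate shown above. TNS aliases and SECTION SIZE value are the
# examples from this section -- adjust for your environment.
cat > /tmp/dup_rcat.rman <<'EOF'
CONFIGURE DEVICE TYPE disk PARALLELISM 4;
DUPLICATE TARGET DATABASE TO rcat
  FROM ACTIVE DATABASE
  SECTION SIZE 500M
  USING COMPRESSED BACKUPSET;
EOF
# To execute (not run here):
#   rman target sys/oracle@orcl auxiliary sys/oracle@rcat cmdfile=/tmp/dup_rcat.rman
cat /tmp/dup_rcat.rman
```

SECTION SIZE splits large datafiles into multi-section backup sets so all four channels stay busy, and COMPRESSED BACKUPSET reduces the network traffic of the over-the-wire restore.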

Now create the password file in ASM.

Create the spfile in ASM.

Then register the database with srvctl:

srvctl add database -db rcat -oraclehome $ORACLE_HOME
srvctl modify database -db rcat -pwfile +DATA/pwdrcat.ora
srvctl modify database -db rcat -spfile +DATA/spfilercat.ora

Issue faced:

RMAN-11003: failure during parse/execution of SQL statement: alter system set db_unique_name = 'RCAT' comment= 'Modified by RMAN duplicate' scope=spfile
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

If the database was already added to srvctl before the duplicate, remove it first:

[oracle@rac1 dbs]$ srvctl remove database -d rcat

 

RMAN command reference

##Connecting RMAN##############

rman TARGET SYS/target_pwd@target_str # connects in NOCATALOG mode
rman TARGET / CATALOG rman/rman@rcat
rman TARGET / CATALOG rman/rman@rcat AUXILIARY sys/aux_pwd@aux_str

##Create user and catalog in RMAN database ########

CREATE USER rman_dba IDENTIFIED BY rman_dba TEMPORARY TABLESPACE temp DEFAULT TABLESPACE rman_dba QUOTA UNLIMITED ON rman_dba;
GRANT RECOVERY_CATALOG_OWNER TO rman_dba;
CREATE CATALOG;
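The CREATE USER above assumes the rman_dba tablespace already exists. A minimal sketch collecting the catalog-owner setup into one SQL script (the CREATE TABLESPACE line and its size are my assumption, not part of the original steps); the RMAN CREATE CATALOG step is then run separately as shown in the comment:

```shell
# Hedged sketch: catalog-owner setup from this section as one script.
# The CREATE TABLESPACE line is an assumed prerequisite; user name,
# password, and tablespace name are the examples used above.
cat > /tmp/create_catalog.sql <<'EOF'
CREATE TABLESPACE rman_dba DATAFILE SIZE 200M AUTOEXTEND ON;
CREATE USER rman_dba IDENTIFIED BY rman_dba
  TEMPORARY TABLESPACE temp
  DEFAULT TABLESPACE rman_dba
  QUOTA UNLIMITED ON rman_dba;
GRANT RECOVERY_CATALOG_OWNER TO rman_dba;
EOF
# Then, connected to the catalog database in RMAN:
#   rman catalog rman_dba/rman_dba@rcat
#   RMAN> CREATE CATALOG;
cat /tmp/create_catalog.sql
```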

## Register Database######

rman target / catalog rman_dba/rman_dba@<catalog>

register database;
select * from rc_database;

### Catalog copy in RMAN Catalog###

CATALOG BACKUPPIECE '/disk2/09dtq55d_1_2', '/disk2/0bdtqdou_1_1';
CATALOG DATAFILECOPY '/tmp/users01.dbf';
CATALOG RECOVERY AREA;
CHANGE CONTROLFILECOPY '/tmp/control01.ctl' UNCATALOG;
CHANGE DATAFILECOPY '/tmp/system01.dbf' UNCATALOG;
CHANGE CONTROLFILECOPY '/tmp/control01.ctl' UNAVAILABLE;
CHANGE COPY OF ARCHIVELOG SEQUENCE BETWEEN 1000 AND 1012 UNAVAILABLE;
CHANGE BACKUPSET 12 UNAVAILABLE;
CHANGE BACKUP OF SPFILE TAG 'TAG20020208T154556' UNAVAILABLE;
CHANGE DATAFILECOPY '/tmp/system01.dbf' AVAILABLE;
CHANGE BACKUPSET 12 AVAILABLE;
CHANGE BACKUP OF SPFILE TAG 'TAG20020208T154556' AVAILABLE;
CATALOG START WITH '/backup/MYSID/arch';

### Configure RMAN ######

CONFIGURE CHANNEL DEVICE TYPE sbt CLEAR;
CONFIGURE RETENTION POLICY CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

CONFIGURE DEFAULT DEVICE TYPE TO <DISK | SBT>;
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE DEVICE TYPE sbt PARALLELISM 2;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT 'd:\oracle\orclbackup\ora_df%t_s%s_s%p';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'd:\oracle\orclbackup\ora_cf%F';
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/tmp/%U';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+dgroup1/%F';
CONFIGURE DEVICE TYPE <DISK | SBT> BACKUP TYPE TO COMPRESSED BACKUPSET;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/disk1/%U', '/disk2/%U';
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE sbt TO 2;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE sbt TO 2;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 2;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
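CONFIGURE settings persist in the control file, so a common practice is to keep a baseline set in one command file and re-apply it after a restore or on new databases. A minimal sketch drawn from the list above (paths and retention values are examples, not recommendations); it only generates the command file:

```shell
# Hedged sketch: baseline persistent RMAN settings from this section,
# collected into one command file. Values are illustrative only.
cat > /tmp/rman_config.rman <<'EOF'
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 4 DAYS;
EOF
# Apply with: rman target / cmdfile=/tmp/rman_config.rman
cat /tmp/rman_config.rman
```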

###RMAN backup check view######

SELECT SID,
SERIAL#,
CONTEXT,
SOFAR,
TOTALWORK,
ROUND(SOFAR / TOTALWORK * 100, 2) “% COMPLETE”
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%' AND
TOTALWORK != 0 AND SOFAR <> TOTALWORK;
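The "% COMPLETE" column is just SOFAR divided by TOTALWORK, scaled to a percentage and rounded to two decimals. The same arithmetic reproduced in awk, with made-up SOFAR/TOTALWORK pairs for illustration:

```shell
# Hedged sketch: the ROUND(SOFAR/TOTALWORK*100, 2) arithmetic from the
# V$SESSION_LONGOPS query above. Input pairs are sample values, not
# real session data.
printf '3500 10000\n9999 10000\n' |
awk '{ printf "SOFAR=%d TOTALWORK=%d -> %.2f%% complete\n", $1, $2, ($1 / $2) * 100 }'
```

The `SOFAR <> TOTALWORK` predicate in the query filters out rows for operations that have already finished.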

COL in_sec FORMAT a10
COL out_sec FORMAT a10
COL TIME_TAKEN_DISPLAY FORMAT a10
SELECT SESSION_KEY,
OPTIMIZED,
COMPRESSION_RATIO,
INPUT_BYTES_PER_SEC_DISPLAY in_sec,
OUTPUT_BYTES_PER_SEC_DISPLAY out_sec,
TIME_TAKEN_DISPLAY
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;

SELECT FILE#, STATUS, ERROR, RECOVER, TABLESPACE_NAME, NAME
FROM V$DATAFILE_HEADER
WHERE RECOVER = 'YES'
OR (RECOVER IS NULL AND ERROR IS NOT NULL);

SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME,
BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .2
ORDER BY COMPLETION_TIME;

SELECT * FROM V$RECOVERY_FILE_DEST;
SELECT * FROM V$RECOVERY_AREA_USAGE;
SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;

### List and Report of backup ####

LIST BACKUP OF DATABASE;
LIST COPY OF DATAFILE 1, 2;
LIST BACKUP OF ARCHIVELOG FROM SEQUENCE 10;
LIST BACKUPSET OF DATAFILE 1;
LIST BACKUP;
LIST COPY;
LIST ARCHIVELOG;
LIST RESTORE POINT;
LIST EXPIRED;
LIST BACKUP SUMMARY;
LIST FAILURE;
LIST BACKUPSET TAG 'weekly_full_db_backup';
LIST BACKUPSET 213;
LIST COPY OF DATAFILE 2 COMPLETED BETWEEN '10-DEC-2002' AND '17-DEC-2002';
LIST BACKUP OF DATAFILE 1;

REPORT OBSOLETE;
REPORT SCHEMA;
REPORT NEED BACKUP;
REPORT NEED BACKUP RECOVERY WINDOW OF 2 DAYS DATABASE DEVICE TYPE sbt;
REPORT NEED BACKUP DEVICE TYPE DISK;
REPORT NEED BACKUP TABLESPACE TBS_3 DEVICE TYPE sbt;
REPORT OBSOLETE RECOVERY WINDOW OF 3 DAYS;
REPORT OBSOLETE REDUNDANCY 1;

## Crosscheck backup #####

CROSSCHECK BACKUP DEVICE TYPE DISK;
CROSSCHECK BACKUP DEVICE TYPE sbt;
CROSSCHECK BACKUP; # checks backup sets, proxy copies, and image copies
CROSSCHECK COPY OF DATABASE;
CROSSCHECK BACKUPSET 1338, 1339, 1340;
CROSSCHECK BACKUPPIECE TAG 'nightly_backup';
CROSSCHECK BACKUP OF ARCHIVELOG ALL SPFILE;
CROSSCHECK BACKUP OF DATAFILE '?/oradata/trgt/system01.dbf' COMPLETED AFTER 'SYSDATE-14';
CROSSCHECK CONTROLFILECOPY '/tmp/control01.ctl';
CROSSCHECK DATAFILECOPY 113, 114, 115;
CROSSCHECK PROXY 789;

### Delete backup ########

DELETE BACKUPPIECE 101;
DELETE CONTROLFILECOPY '/tmp/control01.ctl';
DELETE NOPROMPT ARCHIVELOG UNTIL SEQUENCE 300;
DELETE BACKUP TAG 'before_upgrade';
DELETE ARCHIVELOG ALL BACKED UP 3 TIMES TO sbt;
DELETE EXPIRED BACKUP;
DELETE OBSOLETE;

###Simple unix script #############

#!/bin/tcsh
# name: runbackup.sh
# usage: runbackup.sh <media_family> <format> <restore_point>
set media_family = $argv[1]
set format = $argv[2]
set restore_point = $argv[3]
rman @'/disk1/scripts/whole_db.cmd' USING $media_family $format $restore_point

% runbackup.sh archival_backup bck0906 FY06Q3

##Backup Database command #####

BACKUP DEVICE TYPE sbt BACKUPSET COMPLETED BEFORE 'SYSDATE-7' DELETE INPUT;
BACKUP DEVICE TYPE DISK COPIES 3 DATAFILE 7 FORMAT '/disk1/%U','?/oradata/%U','?/%U';
BACKUP AS BACKUPSET DATABASE;
BACKUP AS BACKUPSET DEVICE TYPE DISK DATABASE;
BACKUP AS BACKUPSET DEVICE TYPE SBT DATABASE;
BACKUP AS COPY DEVICE TYPE DISK DATABASE;
BACKUP AS BACKUPSET COPIES 1 DATAFILE 7 TAG mondaybkp;
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
BACKUP DEVICE TYPE sbt DATAFILE 1,2,3,4 DATAFILECOPY '/tmp/system01.dbf';
BACKUP AS COPY DB_FILE_NAME_CONVERT ('/maindisk/oradata/users','/backups/users_ts') TABLESPACE users;
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP AS BACKUPSET DATABASE FORMAT '/disk1/%U','/disk2/%U';
BACKUP AS BACKUPSET DEVICE TYPE DISK COPIES 3 INCREMENTAL LEVEL 0 DATABASE;
BACKUP DURATION 4:00 TABLESPACE users;
BACKUP DURATION 4:00 PARTIAL TABLESPACE users FILESPERSET 1;
BACKUP DURATION 4:00 PARTIAL MINIMIZE TIME DATABASE FILESPERSET 1;
BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;
BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
BACKUP DEVICE TYPE sbt ARCHIVELOG ALL DELETE ALL INPUT;
BACKUP INCREMENTAL LEVEL 1 TABLESPACE SYSTEM, tools;
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE TABLESPACE users;
BACKUP ARCHIVELOG FROM TIME 'SYSDATE-1';
BACKUP ARCHIVELOG FROM TIME 'SYSDATE-5' UNTIL TIME 'SYSDATE-1';

RUN
{
ALLOCATE CHANNEL disk1 DEVICE TYPE DISK FORMAT '/disk1/%d_backups/%U';
ALLOCATE CHANNEL disk2 DEVICE TYPE DISK FORMAT '/disk2/%d_backups/%U';
ALLOCATE CHANNEL disk3 DEVICE TYPE DISK FORMAT '/disk3/%d_backups/%U';
BACKUP AS COPY DATABASE;
}

RUN
{
CONFIGURE DEVICE TYPE DISK PARALLELISM 3;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk1/%d_backups/%U';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/disk2/%d_backups/%U';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '/disk3/%d_backups/%U';
BACKUP AS COPY DATABASE;
}

## Incremental updated backup #######

recover copy of database with tag 'tcstest3';
backup incremental level 1 tag 'tcstest3' for recover of copy with tag 'tcstest3' database;
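These two commands are normally scheduled together: each run takes a new level-1 incremental and merges the previous one into the level-0 image copy, so the copy rolls forward without ever re-running a full backup. A minimal sketch that generates the nightly command file (the tag `tcstest3` is the example from this section); it does not execute RMAN:

```shell
# Hedged sketch: incrementally updated backup from this section as one
# schedulable command file. On the first run RECOVER COPY has nothing
# to do and BACKUP ... FOR RECOVER OF COPY creates the level-0 copy.
cat > /tmp/merge_incr.rman <<'EOF'
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'tcstest3';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'tcstest3'
    DATABASE;
}
EOF
# Schedule with: rman target / cmdfile=/tmp/merge_incr.rman
cat /tmp/merge_incr.rman
```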

## Block change tracking ########

ALTER SYSTEM SET
DB_CREATE_FILE_DEST = '/disk1/bct/'
SCOPE=BOTH SID='*';

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/mydir/rman_change_track.f' REUSE;

COL STATUS FORMAT A8
COL FILENAME FORMAT A60
SELECT STATUS, FILENAME
FROM V$BLOCK_CHANGE_TRACKING;

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';

### Restore Preview #########

RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG ALL VALIDATE;
RESTORE DATABASE PREVIEW;
RESTORE ARCHIVELOG FROM TIME 'SYSDATE-7' PREVIEW;
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE ARCHIVELOG ALL PREVIEW RECALL;
REPAIR FAILURE PREVIEW;
VALIDATE DATABASE;
RUN{
ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
VALIDATE DATAFILE 1 SECTION SIZE 1200M;
}
VALIDATE DATAFILE 4 BLOCK 10 TO 13;
VALIDATE BACKUPSET 3;
RECOVER CORRUPTION LIST;

### Recover database flashback technology ####

SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

SELECT CURRENT_SCN FROM V$DATABASE;
SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

FLASHBACK DATABASE TO SCN 46963;
FLASHBACK DATABASE TO RESTORE POINT BEFORE_CHANGES;
FLASHBACK DATABASE TO TIME "TO_DATE('09/20/05','MM/DD/YY')";
ALTER DATABASE OPEN READ ONLY;

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;

SET UNTIL TIME 'Nov 15 2004 09:00:00';
SET UNTIL SEQUENCE 9923;
SET UNTIL RESTORE POINT before_update;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
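The SET UNTIL, RESTORE, and RECOVER steps above must run inside one RUN block so the until-clause applies to both the restore and the recovery. A minimal sketch generating such a command file (the restore point name `before_update` is the example from this section); it does not execute RMAN:

```shell
# Hedged sketch: database point-in-time recovery from this section as
# one command file. The until-target must be set inside the RUN block
# that performs the restore and recovery.
cat > /tmp/pitr.rman <<'EOF'
RUN {
  SET UNTIL RESTORE POINT before_update;
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;
EOF
# Run with the database MOUNTED: rman target / cmdfile=/tmp/pitr.rman
cat /tmp/pitr.rman
```

Because recovery stops before the end of the redo stream, the database must be opened with RESETLOGS, which starts a new incarnation.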

### Recover database to copy location and again recover back to original location #######

RUN{
SWITCH DATABASE TO COPY;
RECOVER DATABASE;
ALTER DATABASE OPEN;
}

run{
SET NEWNAME FOR DATAFILE 1 TO '/oracle/oradata/tcstest/system01.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/oracle/oradata/tcstest/undotbs01.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/oracle/oradata/tcstest/sysaux01.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/oracle/oradata/tcstest/users01.dbf';
SET NEWNAME FOR DATAFILE 5 TO '/oracle/oradata/tcstest/example01.dbf';
RESTORE DATABASE;
SWITCH DATAFILE ALL;
RECOVER DATABASE;
}

## Recover individual tablespace and datafile###

sql "alter tablespace working_data offline";
sql "alter database datafile 13 offline";
restore tablespace working_data;
restore datafile 13;
recover tablespace working_data;
recover datafile 13;
sql "alter tablespace working_data online";
sql "alter database datafile 13 online";

SWITCH DATAFILE 4 TO COPY;
RECOVER DATAFILE 4;

backup as copy datafile 4;

run{
SET NEWNAME FOR DATAFILE 4 TO '/oracle/oradata/tcstest/users01.dbf';
restore datafile 4;
switch datafile 4;
recover datafile 4;
}

SET NEWNAME FOR DATAFILE '/disk1/oradata/prod/users01.dbf'
TO '/disk2/users01.dbf';
RESTORE TABLESPACE users;
SWITCH DATAFILE ALL; # update control file with new filenames
RECOVER TABLESPACE users;

## Recover individual block #######

RECOVER DATAFILE 8 BLOCK 13 DATAFILE 2 BLOCK 199 FROM TAG mondayam;
RECOVER CORRUPTION LIST;

## Recover No-archivelog mode with catalog########

rman target sys/password catalog rcat_user/rcat_password@catalogdb
startup force nomount;
restore spfile from autobackup;
shutdown immediate;
startup nomount;
restore controlfile from autobackup;
alter database mount;
configure default device type to sbt;
configure channel 1 device type sbt parms = "env=(nb_ora_serv=mgtserv, nb_ora_client=cervantes)";
restore database;
recover database noredo;
alter database open resetlogs;

## Recover No-archivelog mode with no catalog####

rman target sys/password
startup nomount;
set dbid=2540040039;
restore controlfile from autobackup;
sql 'alter database mount';
restore database;
recover database noredo;
sql "alter database open resetlogs";

### Recover complete database loss–No catalog ###

rman target /
set dbid=204062491;
startup force nomount;
run {
allocate channel tape_1 type sbt
parms='env=(nb_ora_serv=rmsrv, nb_ora_client=cervantes)';
restore spfile from autobackup;
}

shutdown immediate;

startup nomount;

alter system set control_files='/u02/oradata/prod/control01.dbf',
'/u03/oradata/prod/control02.dbf' scope=spfile;

alter system set db_file_name_convert='/u04','/u02',
'/u05','/u02',
'/u06','/u03',
'/u07','/u03' scope=spfile;

alter system set log_file_name_convert='/u04','/u02',
'/u05','/u02',
'/u06','/u03',
'/u07','/u03' scope=spfile;

alter system set log_archive_dest_1=
'location=/u02/oradata/prod/arch' scope=spfile;

alter system set db_cache_size=300m scope=spfile;

alter system set shared_pool_size=200m scope=spfile;

shutdown immediate;

startup nomount;

run {
allocate channel tape_1 type sbt
parms='env=(nb_ora_serv=rmsrv, nb_ora_client=cervantes)';
restore controlfile from autobackup;
}

alter database mount;

configure default device type to sbt;

configure device type sbt parallelism 2;

configure auxiliary channel 1 device type sbt parms
= "env=(nb_ora_serv=mgtserv, nb_ora_client=cervantes)";

configure auxiliary channel 2 device type sbt parms
= "env=(nb_ora_serv=mgtserv, nb_ora_client=cervantes)";

list backup of archivelog from time 'sysdate-7';

restore database;

recover database until sequence <number>;

alter database open resetlogs;

## Recovery of database if inactive redo log is deleted####

SQL> startup mount;
SQL> alter database clear logfile group 2;
SQL> alter database open;

##Recovery after loss of controlfile #######

rman target /
startup nomount;
restore controlfile from autobackup;
alter database mount;
recover database;
alter database open resetlogs;

## Tablespace point in time recovery ########

The database must be open for tablespace point-in-time recovery.

recover tablespace "APP_DATA" until time
"to_date('2009-08-04 12:15:00','YYYY-MM-DD HH24:MI:SS')"
auxiliary destination '/opt/oracle/temp';

### Recovery after loss of current redolog file ###

RUN
{
# SET UNTIL TIME 'Nov 15 2002 09:00:00';
# SET UNTIL SCN 1000; # alternatively, specify SCN
SET UNTIL SEQUENCE 1; # alternatively, specify log sequence number
RESTORE DATABASE;
RECOVER DATABASE;
}

###Backup of CDB and PDB in 12c#####

--CDB backup--

export ORACLE_SID=ORCL1

[oracle@rac1 ~]$ rman target /

RMAN> backup database plus archivelog;

--CDB root backup--

RMAN> backup pluggable database "CDB$ROOT";

--Backup pluggable database--

backup pluggable database oem;

backup pluggable database oem plus archivelog;
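The three backup levels above (whole CDB, root only, single PDB) can be driven from one script. A minimal sketch using the SID and PDB name from this section (`ORCL1`, `oem`); it only generates the command file:

```shell
# Hedged sketch: CDB/PDB backups from this section as one command
# file. SID and PDB name are the examples used above.
export ORACLE_SID=ORCL1
cat > /tmp/pdb_backup.rman <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;          # whole CDB, all PDBs
BACKUP PLUGGABLE DATABASE "CDB$ROOT";     # root container only
BACKUP PLUGGABLE DATABASE oem;            # a single PDB
EOF
# Run with: rman target / cmdfile=/tmp/pdb_backup.rman
cat /tmp/pdb_backup.rman
```

Note that CDB$ROOT must be quoted because of the `$` in its name, and the heredoc delimiter is quoted so the shell does not try to expand it.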