Oracle 12c Gather Statistics Deep Dive

Some of the material below is adapted from my favorite blog, Tim Hall's ORACLE-BASE.

A. Basics of Optimizer Statistics

The optimizer cost model relies on statistics collected about the objects involved in a query, and the database and host where the query runs. Statistics are critical to the optimizer’s ability to pick the best execution plan for a SQL statement.

Table statistics
  What influences them: number of rows, number of blocks, average row length
  Views: DBA_TABLES, DBA_TAB_STATISTICS

Column statistics
  What influences them: number of distinct values, number of nulls, data distribution (histograms), extended statistics
  Views: DBA_TAB_COLS, DBA_TAB_COL_STATISTICS, USER_PART_COL_STATISTICS, USER_SUBPART_COL_STATISTICS

Index statistics
  What influences them: number of distinct values, number of leaf blocks, clustering factor
  Views: DBA_INDEXES, DBA_IND_STATISTICS

Histograms
  What influences them: data distribution in columns
  Views: DBA_TAB_HISTOGRAMS

Extended statistics
  What influences them: relationships between data stored in different columns of the same table
  Views: DBA_TAB_COL_STATISTICS

System statistics
  What influences them: I/O performance and utilization, CPU performance and utilization
  Views: SYS.AUX_STATS$
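
These views can be queried directly. A minimal sketch, assuming a hypothetical SCOTT.EMP table:

SQL> -- Table-level statistics
SQL> SELECT num_rows, blocks, avg_row_len, last_analyzed
     FROM   dba_tab_statistics
     WHERE  owner = 'SCOTT' AND table_name = 'EMP';

SQL> -- Column-level statistics, including any histogram type
SQL> SELECT column_name, num_distinct, num_nulls, histogram
     FROM   dba_tab_col_statistics
     WHERE  owner = 'SCOTT' AND table_name = 'EMP';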

 

B. Configuring Automatic Optimizer Statistics Collection Using Cloud Control

In Cloud Control, go to "Administration" -> "Oracle Scheduler" -> "Automated Maintenance Tasks".

 

You will see three default automatic maintenance tasks scheduled. Choose "Optimizer Statistics Gathering".

Now click on "Optimizer Statistics Gathering".

You can click on "Global Statistics Gathering Options" to view the configured parameters.

 

Use the "Configure" button to enable or disable the task or to change its maintenance window.

You can also check the status of automatic optimizer statistics collection from SQL*Plus:

SQL> COL CLIENT_NAME FORMAT a31
SQL> SELECT client_name, status
     FROM   dba_autotask_client
     WHERE  client_name = 'auto optimizer stats collection';

CLIENT_NAME                     STATUS
------------------------------- --------
auto optimizer stats collection ENABLED
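
The same task can be disabled or re-enabled from the command line with the DBMS_AUTO_TASK_ADMIN package. A minimal sketch, using the client name shown above:

SQL> BEGIN
       DBMS_AUTO_TASK_ADMIN.DISABLE(
         client_name => 'auto optimizer stats collection',
         operation   => NULL,
         window_name => NULL);
     END;
     /

SQL> BEGIN
       DBMS_AUTO_TASK_ADMIN.ENABLE(
         client_name => 'auto optimizer stats collection',
         operation   => NULL,
         window_name => NULL);
     END;
     /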

C. Manual Statistics Gathering: Standard Parameters

The main DBMS_STATS preferences are listed below. Scope codes: G = SET_GLOBAL_PREFS, D = SET_DATABASE_PREFS, S = SET_SCHEMA_PREFS, T = SET_TABLE_PREFS.

CASCADE
Determines if index stats should be gathered for the current table (TRUE, FALSE, AUTO_CASCADE). Default: DBMS_STATS.AUTO_CASCADE. Scope: G, D, S, T. Version: 10gR1+.

DEGREE
Degree of parallelism (integer or DEFAULT_DEGREE). Default: DBMS_STATS.DEFAULT_DEGREE. Scope: G, D, S, T. Version: 10gR1+.

ESTIMATE_PERCENT
Percentage of rows to sample when gathering stats (0.000001-100 or AUTO_SAMPLE_SIZE). Default: DBMS_STATS.AUTO_SAMPLE_SIZE. Scope: G, D, S, T. Version: 10gR1+.

METHOD_OPT
Controls column statistics collection and histogram creation. Default: FOR ALL COLUMNS SIZE AUTO. Scope: G, D, S, T. Version: 10gR1+.

NO_INVALIDATE
Determines if dependent cursors should be invalidated as a result of new stats on objects (TRUE, FALSE or AUTO_INVALIDATE). Default: DBMS_STATS.AUTO_INVALIDATE. Scope: G, D, S, T. Version: 10gR1+.

AUTOSTATS_TARGET
Determines which objects have stats gathered (ALL, ORACLE, AUTO). Default: AUTO. Scope: G. Version: 10gR2+.

GRANULARITY
The granularity of stats to be collected on partitioned objects (ALL, AUTO, DEFAULT, GLOBAL, 'GLOBAL AND PARTITION', PARTITION, SUBPARTITION). Default: AUTO. Scope: G, D, S, T. Version: 10gR2+.

PUBLISH
Determines if gathered stats should be published immediately or left in a pending state (TRUE, FALSE). Default: TRUE. Scope: G, D, S, T. Version: 11gR2+.

INCREMENTAL
Determines whether incremental stats will be used for global statistics on partitioned objects, rather than generated using table scans (TRUE, FALSE). Default: FALSE. Scope: G, D, S, T. Version: 11gR2+.

CONCURRENT
Should statistics be gathered on multiple objects at once, or one at a time (MANUAL, AUTOMATIC, ALL, OFF). Default: OFF. Scope: G. Version: 12cR1+.

GLOBAL_TEMP_TABLE_STATS
Should stats on global temporary tables be session-specific or shared between sessions (SHARED, SESSION). Default: SESSION. Scope: G, D, S. Version: 12cR1+.

INCREMENTAL_LEVEL
Which level of synopses should be collected for incremental partitioned statistics (TABLE, PARTITION). Default: PARTITION. Scope: G, D, S, T. Version: 12cR1+.

INCREMENTAL_STALENESS
How staleness of partition statistics is determined (USE_STALE_PERCENT, USE_LOCKED_STATS, NULL). Default: NULL. Scope: G, D, S, T. Version: 12cR1+.

TABLE_CACHED_BLOCKS
The number of blocks assumed to be cached in the buffer cache during calculation of the index clustering factor. Jonathan Lewis recommends "16" as a sensible value. Default: 1. Scope: G, D, S, T. Version: 12cR1+.

OPTIONS
Used for the OPTIONS parameter of the GATHER_TABLE_STATS procedure (GATHER, GATHER AUTO). Default: GATHER. Scope: G, D, S, T. Version: 12cR1+.
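
These preferences are set and read with the DBMS_STATS SET_*_PREFS and GET_PREFS procedures. A small sketch, assuming a hypothetical SCOTT.EMP table:

SQL> -- Global default for all objects
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('ESTIMATE_PERCENT','DBMS_STATS.AUTO_SAMPLE_SIZE');

SQL> -- Override for a single table
SQL> EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT','EMP','METHOD_OPT','FOR ALL COLUMNS SIZE AUTO');

SQL> -- Read back the effective value for that table
SQL> SELECT DBMS_STATS.GET_PREFS('METHOD_OPT','SCOTT','EMP') FROM dual;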

D. Some Standard Statistics Gathering Commands

Gather statistics:

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15, cascade => TRUE);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15, cascade => TRUE);
EXEC DBMS_STATS.gather_table_stats(ownname => 'C##TEST', tabname => 'TT1', method_opt => 'FOR COLUMNS C1');

Delete statistics:

EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMP');
EXEC DBMS_STATS.delete_column_stats('SCOTT', 'EMP', 'EMPNO');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMP_PK');

E. Determining When Optimizer Statistics Are Stale

A. First create a demo table and load some data.

SQL> set linesize 300;

SQL> create table tab1(c1 number(10),c2 char(100));

Table created.

SQL> insert into tab1 select rownum,'A' from dual connect by rownum<10000;

9999 rows created.

SQL> commit;

Commit complete.

B. The statistics are not yet populated.

SQL> select num_rows, blocks, stale_stats from dba_tab_statistics where table_name='TAB1';

  NUM_ROWS     BLOCKS STA
---------- ---------- ---

C. Let me gather statistics now.

SQL> exec dbms_stats.gather_table_stats('TEST','TAB1');

PL/SQL procedure successfully completed.

SQL> select num_rows, blocks, stale_stats from dba_tab_statistics where table_name='TAB1';

  NUM_ROWS     BLOCKS STA
---------- ---------- ---
      9999        244 NO

D. Now let me load more data.

SQL> insert into tab1 select rownum,'A' from dual connect by rownum<10000;

9999 rows created.

SQL> commit;

Commit complete.

E. DBA_TAB_STATISTICS still shows STALE_STATS = NO.

SQL> select num_rows, blocks, stale_stats from dba_tab_statistics where table_name='TAB1';

  NUM_ROWS     BLOCKS STA
---------- ---------- ---
      9999        244 NO

F. Now let me flush the database monitoring information from memory to disk.

SQL> BEGIN
       DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
     END;
     /

PL/SQL procedure successfully completed.

G. Now the statistics show STALE_STATS = YES.

SQL> select num_rows, blocks, stale_stats from dba_tab_statistics where table_name='TAB1';

  NUM_ROWS     BLOCKS STA
---------- ---------- ---
      9999        244 YES
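
The DML tracked by table monitoring, which drives the STALE_STATS flag (by default a table is considered stale once roughly 10% of its rows have changed), can also be inspected after the flush. A small sketch against the demo table above:

SQL> SELECT table_name, inserts, updates, deletes, truncated
     FROM   dba_tab_modifications
     WHERE  table_owner = 'TEST' AND table_name = 'TAB1';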

F. Gathering Statistics for Fixed Objects

Fixed objects are the dynamic performance tables (X$ tables) and their indexes. These objects record current database activity. Unlike other database tables, the database does not automatically use dynamic statistics for SQL statements referencing X$ tables when optimizer statistics are missing. Instead, the optimizer uses predefined default values. These defaults may not be representative and could potentially lead to a suboptimal execution plan. Thus, it is important to keep fixed object statistics current.
To gather fixed object statistics using GATHER_FIXED_OBJECTS_STATS:
  1. Start SQL*Plus, and connect to the database with the appropriate privileges for the procedure that you intend to run.
  2. Run the DBMS_STATS.GATHER_FIXED_OBJECTS_STATS procedure, specifying the desired parameters, as shown below.
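
A minimal example; Oracle recommends running this while the database has a representative workload active:

SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.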

G. Gathering Statistics for Volatile Tables Using Dynamic Statistics

A. Suppose table TAB2 is extremely volatile. We want dynamic statistics to be collected on the fly, so the optimizer does not rely on statistics that are immediately out of date.

First let me create the demo table.

SQL> create table tab2(c1 number(10),c2 char(100));

Table created.

Let me gather statistics now.

SQL> exec dbms_stats.gather_table_stats('TEST','TAB2');

PL/SQL procedure successfully completed.

 

SQL> select num_rows, blocks, stale_stats from dba_tab_statistics where table_name='TAB2';

  NUM_ROWS     BLOCKS STA
---------- ---------- ---
         0          0 NO

Now let me load data.

SQL> insert into tab2 select rownum,'A' from dual connect by rownum<10000;

9999 rows created.

SQL> commit;

Commit complete.

Now check that the execution plan does not show the proper cardinality (the statistics still say the table is empty):

SQL> set autotrace traceonly;

SQL> select * from tab2;

9999 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2156729920

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |   115 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TAB2 |     1 |   115 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Now we delete the statistics and lock them. With the statistics locked, neither the automatic job nor a manual gather will repopulate them when the table changes.

sqlplus test/test

SQL> exec dbms_stats.delete_table_stats('TEST','TAB2');

PL/SQL procedure successfully completed.

 

SQL> exec dbms_stats.lock_table_stats('TEST','TAB2');

PL/SQL procedure successfully completed.

Now, as there are no statistics, the optimizer uses dynamic sampling.

Note the following about dynamic statistics:

  • If dynamic statistics are not disabled, then the database may choose to use dynamic statistics when a SQL statement uses parallel execution.

  • If the OPTIMIZER_ADAPTIVE_FEATURES initialization parameter is true, then the optimizer uses dynamic statistics when relevant SQL plan directives exist. The database maintains the resulting statistics in the server result cache, making them available to other queries.
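
The dynamic sampling level itself is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter (0 disables it, 2 is the default, and 11 lets the 12c optimizer decide automatically). A quick sketch:

SQL> show parameter optimizer_dynamic_sampling

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
optimizer_dynamic_sampling           integer     2

SQL> alter session set optimizer_dynamic_sampling = 4;

Session altered.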

SQL> set autotrace traceonly;

SQL>  select * from tab2;

9999 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2156729920

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  8435 |   947K|    68   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TAB2 |  8435 |   947K|    68   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

H. Enabling Concurrent Statistics Gathering

Oracle Database 11g Release 2 (11.2.0.2) introduced a new statistics gathering mode, 'concurrent statistics gathering'. The goal of this mode is to enable a user to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. Gathering statistics on multiple tables and (sub)partitions concurrently can reduce the overall time it takes to gather statistics by allowing Oracle to fully utilize a multi-processor environment.

To enable concurrent statistics gathering, use the DBMS_STATS.SET_GLOBAL_PREFS procedure to set the CONCURRENT preference. Possible values are as follows:
  • MANUAL Concurrency is enabled only for manual statistics gathering.
  • AUTOMATIC Concurrency is enabled only for automatic statistics gathering.
  • ALL Concurrency is enabled for both manual and automatic statistics gathering.
  • OFF Concurrency is disabled for both manual and automatic statistics gathering. This is the default value.
This section explains how to enable concurrent statistics gathering.

Create a partitioned table.

SQL> create table t1 (id number, sometext varchar2(50), my_date date) tablespace data partition by hash (id) partitions 16;

Note that concurrency is OFF by default:

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
OFF

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_PLAN';

System altered.

Check that you have sufficient job queue processes:

SQL> show parameter job;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes                  integer     1000

Now change concurrency to 'ALL':

SQL> BEGIN
       DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','ALL');
     END;
     /

PL/SQL procedure successfully completed.

SQL> SELECT DBMS_STATS.GET_PREFS('CONCURRENT') FROM DUAL;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
ALL

Create a procedure to insert data, and another that submits it as 30 concurrent jobs (30 sessions will insert data in parallel):

SQL> create or replace procedure manyinserts as
       v_m number;
     begin
       for i in 1..1000000 loop
         select round(dbms_random.value() * 44444444444) + 1 into v_m from dual;
         insert /*+ new2 */ into t1 values (v_m, 'DOES THIS'||dbms_random.value(), sysdate);
       end loop;
       commit;
     end;
     /

Procedure created.

SQL> create or replace procedure manysessions as
       v_jobno number := 0;
     begin
       for i in 1..30 loop
         dbms_job.submit(v_jobno, 'manyinserts;', sysdate);
       end loop;
       commit;
     end;
     /

Procedure created.

SQL> exec manysessions;

PL/SQL procedure successfully completed.

Now gather the schema statistics from another session.

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('TEST');

PL/SQL procedure successfully completed.

Check the status of the concurrent gather tasks now:

SQL> SET LINESIZE 1000
SQL> COLUMN TARGET FORMAT a8
SQL> COLUMN TARGET_TYPE FORMAT a25
SQL> COLUMN JOB_NAME FORMAT a14
SQL> COLUMN START_TIME FORMAT a40
SQL> SELECT target, target_type, job_name,
            TO_CHAR(start_time, 'dd-mon-yyyy hh24:mi:ss')
     FROM   dba_optstat_operation_tasks
     WHERE  status = 'IN PROGRESS'
     AND    opid = (SELECT MAX(id)
                    FROM   dba_optstat_operations
                    WHERE  operation = 'gather_schema_stats');

TARGET   TARGET_TYPE               JOB_NAME       TO_CHAR(START_TIME,'DD-MON-YY
-------- ------------------------- -------------- ------------------------------
TEST.T1  TABLE                                    20-feb-2018 02:22:41
TEST.T1. TABLE PARTITION                          20-feb-2018 02:22:41
SYS_P786

I. Incremental Statistics

In a typical data warehousing environment, huge partitioned tables are common, and gathering statistics on such tables is a challenging task. For partitioned tables there are two types of statistics: global statistics and partition-level statistics. Gathering global statistics is a very expensive and resource-consuming operation, as it scans the whole table; this is why people often reduce estimate_percent to less than 1 percent. That does reduce the time taken to gather stats, but such a small sample may not represent the data distribution well. Gathering partition-level statistics is not as expensive, as it gathers only for the partitions where data has changed.
Traditionally, statistics are gathered in two phases:
  1. Scan the complete table to gather global statistics.
  2. Scan only the partitions where data has been changed.
Obviously, some global stats can be derived from partition-level stats; for example, the number of rows at the table level is just the sum of the number of rows of all the partitions. But a global statistic like NDV (number of distinct values), which is very important in calculating cardinality, cannot be derived so easily; the only way to derive it exactly is to scan the whole table. Incremental statistics address this by maintaining a synopsis (a compact statistical summary) for each partition, so that global statistics can be derived by merging synopses instead of rescanning the table.
These synopsis data are stored in the WRI$_OPTSTAT_SYNOPSIS$ and WRI$_OPTSTAT_SYNOPSIS_HEAD$ tables residing in the SYSAUX tablespace. WRI$_OPTSTAT_SYNOPSIS$ can grow enormously, as an individual synopsis entry is created for each hash, in proportion to the distinct values existing at the table, partition, and column level. WRI$_OPTSTAT_SYNOPSIS_HEAD$ has one record for every table, partition, and column. In the 11.1 release, gathering incremental statistics could take a long time for wide tables with many partitions, due to a delete statement working on the WRI$_OPTSTAT_SYNOPSIS$ table. In 11.2 this issue was resolved by range-hash partitioning the WRI$_OPTSTAT_SYNOPSIS$ table.
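
Putting this together, a minimal recipe to switch a partitioned table to incremental statistics might look as follows (hypothetical owner SALES_OWNER and table SALES; note that incremental global statistics require ESTIMATE_PERCENT to be left at AUTO_SAMPLE_SIZE):

EXEC DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER','SALES','INCREMENTAL','TRUE');
EXEC DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER','SALES','ESTIMATE_PERCENT','DBMS_STATS.AUTO_SAMPLE_SIZE');
EXEC DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER','SALES','GRANULARITY','AUTO');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SALES_OWNER','SALES');

The demonstration below walks through the same steps on a small interval-partitioned table.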

Create a range-partitioned table (with daily interval partitioning):

CREATE TABLE t_range
  (id          NUMBER,
   create_date DATE,
   name        CHAR(100))
PARTITION BY RANGE(create_date)
INTERVAL(NUMTODSINTERVAL(1,'DAY')) STORE IN (tbs1, tbs2, tbs3, tbs4)
  (PARTITION part_old VALUES LESS THAN (TO_DATE('04-FEB-2018', 'DD-MON-YYYY')) TABLESPACE tbs1);

Insert data into the table:

SQL> begin
       for i in 1..10 loop
         for j in 1..1000 loop
           insert into t_range values(j, sysdate+i, 'DEBASIS'||to_char(i));
         end loop;
       end loop;
     end;
     /

PL/SQL procedure successfully completed.

Let me gather the statistics now:

SQL> exec dbms_stats.gather_table_stats('SYS','T_RANGE');

Check the last-analyzed dates:

SQL> SELECT partition_name,
            to_char( last_analyzed, 'DD-MON-YYYY, HH24:MI:SS' ) last_analyze,
            num_rows
     FROM   dba_tab_partitions
     WHERE  table_name = 'T_RANGE'
     ORDER  BY partition_position;

PARTITION_NAME                 LAST_ANALYZE                     NUM_ROWS
------------------------------ ------------------------------ ----------
PART_OLD                       21-FEB-2018, 22:38:05                   0
SYS_P463                       21-FEB-2018, 22:38:05                1000
SYS_P464                       21-FEB-2018, 22:38:05                1000
SYS_P465                       21-FEB-2018, 22:38:05                1000
SYS_P466                       21-FEB-2018, 22:38:05                1000
SYS_P467                       21-FEB-2018, 22:38:05                1000
SYS_P468                       21-FEB-2018, 22:38:05                1000
SYS_P469                       21-FEB-2018, 22:38:05                1000
SYS_P470                       21-FEB-2018, 22:38:05                1000
SYS_P471                       21-FEB-2018, 22:38:05                1000
SYS_P472                       21-FEB-2018, 22:38:05                1000

Check the initial settings of the statistics-related preferences:

SQL> SELECT dbms_stats.get_prefs('INCREMENTAL','SYS','T_RANGE') "INCREMENTAL" FROM dual;

INCREMENTAL
--------------------------------------------------------------------------------
FALSE

SQL> SELECT dbms_stats.get_prefs('PUBLISH','SYS','T_RANGE') "PUBLISH" FROM dual;

PUBLISH
--------------------------------------------------------------------------------
TRUE

SQL> SELECT dbms_stats.get_prefs('ESTIMATE_PERCENT','SYS','T_RANGE') "ESTIMATE_PERCENT" FROM dual;

ESTIMATE_PERCENT
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

SQL> SELECT dbms_stats.get_prefs('GRANULARITY','SYS','T_RANGE') "GRANULARITY" FROM dual;

GRANULARITY
--------------------------------------------------------------------------------
AUTO

Now set INCREMENTAL to TRUE for the table and gather statistics once more, so that synopses are created:

exec dbms_stats.set_table_prefs('SYS','T_RANGE','INCREMENTAL','TRUE');
exec dbms_stats.gather_table_stats('SYS','T_RANGE');

Check the synopsis tables now; there is a synopsis entry for each partition and column:

SELECT o.name         "Table Name",
       p.subname      "Part",
       c.name         "Column",
       h.analyzetime  "Synopsis Creation Time"
FROM   WRI$_OPTSTAT_SYNOPSIS_HEAD$ h,
       OBJ$ o,
       USER$ u,
       COL$ c,
       ( ( SELECT TABPART$.bo#  BO#,
                  TABPART$.obj# OBJ#
           FROM   TABPART$ tabpart$ )
         UNION ALL
         ( SELECT TABCOMPART$.bo#  BO#,
                  TABCOMPART$.obj# OBJ#
           FROM   TABCOMPART$ tabcompart$ ) ) tp,
       OBJ$ p
WHERE  u.name = 'SYS' AND
       o.name = 'T_RANGE' AND
       tp.obj# = p.obj# AND
       h.bo# = tp.bo# AND
       h.group# = tp.obj# * 2 AND
       h.bo# = c.obj#(+) AND
       h.intcol# = c.intcol#(+) AND
       o.owner# = u.user# AND
       h.bo# = o.obj#
ORDER  BY 4,1,2,3
/
Table Name           Part                 Column               Synopsis
-------------------- -------------------- -------------------- ---------
T_RANGE              PART_OLD             CREATE_DATE          21-FEB-18
T_RANGE              PART_OLD             ID                   21-FEB-18
T_RANGE              PART_OLD             NAME                 21-FEB-18
T_RANGE              SYS_P463             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P463             ID                   21-FEB-18
T_RANGE              SYS_P463             NAME                 21-FEB-18
T_RANGE              SYS_P464             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P464             ID                   21-FEB-18
T_RANGE              SYS_P464             NAME                 21-FEB-18
T_RANGE              SYS_P465             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P465             ID                   21-FEB-18
T_RANGE              SYS_P465             NAME                 21-FEB-18
T_RANGE              SYS_P466             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P466             ID                   21-FEB-18
T_RANGE              SYS_P466             NAME                 21-FEB-18
T_RANGE              SYS_P467             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P467             ID                   21-FEB-18
T_RANGE              SYS_P467             NAME                 21-FEB-18
T_RANGE              SYS_P468             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P468             ID                   21-FEB-18
T_RANGE              SYS_P468             NAME                 21-FEB-18
T_RANGE              SYS_P469             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P469             ID                   21-FEB-18
T_RANGE              SYS_P469             NAME                 21-FEB-18
T_RANGE              SYS_P470             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P470             ID                   21-FEB-18
T_RANGE              SYS_P470             NAME                 21-FEB-18
T_RANGE              SYS_P471             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P471             ID                   21-FEB-18
T_RANGE              SYS_P471             NAME                 21-FEB-18
T_RANGE              SYS_P472             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P472             ID                   21-FEB-18
T_RANGE              SYS_P472             NAME                 21-FEB-18
T_RANGE              SYS_P473             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P473             ID                   21-FEB-18
T_RANGE              SYS_P473             NAME                 21-FEB-18

Now I will load data into just one partition:

SQL> begin
       for j in 1..1000 loop
         insert into t_range values(j, sysdate+1, 'DEBASIS'||to_char(j));
       end loop;
     end;
     /

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

Let me gather stats again

SQL> exec dbms_stats.gather_table_stats('SYS','T_RANGE');

PL/SQL procedure successfully completed.

We will notice that only the partition where we loaded data has been re-analyzed, avoiding a costly full scan of all partitions:

SQL> SELECT partition_name,
            to_char( last_analyzed, 'DD-MON-YYYY, HH24:MI:SS' ) last_analyze,
            num_rows
     FROM   dba_tab_partitions
     WHERE  table_name = 'T_RANGE'
     ORDER  BY partition_position;

PARTITION_NAME                 LAST_ANALYZE                     NUM_ROWS
------------------------------ ------------------------------ ----------
PART_OLD                       21-FEB-2018, 22:46:58                   0
SYS_P473                       21-FEB-2018, 22:46:58                1000
SYS_P463                       21-FEB-2018, 22:50:15                2000
SYS_P464                       21-FEB-2018, 22:46:58                1000
SYS_P465                       21-FEB-2018, 22:46:58                1000
SYS_P466                       21-FEB-2018, 22:46:58                1000
SYS_P467                       21-FEB-2018, 22:46:58                1000
SYS_P468                       21-FEB-2018, 22:46:58                1000
SYS_P469                       21-FEB-2018, 22:46:58                1000
SYS_P470                       21-FEB-2018, 22:46:58                1000
SYS_P471                       21-FEB-2018, 22:46:58                1000
SYS_P472                       21-FEB-2018, 22:46:58                1000

We can see the INCREMENTAL preference is still TRUE:

SQL> SELECT dbms_stats.get_prefs('INCREMENTAL','SYS','T_RANGE') "INCREMENTAL" FROM dual;

INCREMENTAL
--------------------------------------------------------------------------------
TRUE

Now let me turn INCREMENTAL statistics to FALSE again

SQL> exec dbms_stats.set_table_prefs('SYS','T_RANGE','INCREMENTAL','FALSE');

PL/SQL procedure successfully completed.

SQL> SELECT dbms_stats.get_prefs('INCREMENTAL','SYS','T_RANGE') "INCREMENTAL" FROM dual;

INCREMENTAL
--------------------------------------------------------------------------------
FALSE
SQL> SELECT o.name         "Table Name",
            p.subname      "Part",
            c.name         "Column",
            h.analyzetime  "Synopsis Creation Time"
     FROM   WRI$_OPTSTAT_SYNOPSIS_HEAD$ h,
            OBJ$ o,
            USER$ u,
            COL$ c,
            ( ( SELECT TABPART$.bo#  BO#,
                       TABPART$.obj# OBJ#
                FROM   TABPART$ tabpart$ )
              UNION ALL
              ( SELECT TABCOMPART$.bo#  BO#,
                       TABCOMPART$.obj# OBJ#
                FROM   TABCOMPART$ tabcompart$ ) ) tp,
            OBJ$ p
     WHERE  u.name = 'SYS' AND
            o.name = 'T_RANGE' AND
            tp.obj# = p.obj# AND
            h.bo# = tp.bo# AND
            h.group# = tp.obj# * 2 AND
            h.bo# = c.obj#(+) AND
            h.intcol# = c.intcol#(+) AND
            o.owner# = u.user# AND
            h.bo# = o.obj#
     ORDER  BY 4,1,2,3
     /
Table Name           Part                 Column               Synopsis
-------------------- -------------------- -------------------- ---------
T_RANGE              PART_OLD             CREATE_DATE          21-FEB-18
T_RANGE              PART_OLD             ID                   21-FEB-18
T_RANGE              PART_OLD             NAME                 21-FEB-18
T_RANGE              SYS_P464             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P464             ID                   21-FEB-18
T_RANGE              SYS_P464             NAME                 21-FEB-18
T_RANGE              SYS_P465             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P465             ID                   21-FEB-18
T_RANGE              SYS_P465             NAME                 21-FEB-18
T_RANGE              SYS_P466             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P466             ID                   21-FEB-18
T_RANGE              SYS_P466             NAME                 21-FEB-18
T_RANGE              SYS_P467             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P467             ID                   21-FEB-18
T_RANGE              SYS_P467             NAME                 21-FEB-18
T_RANGE              SYS_P468             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P468             ID                   21-FEB-18
T_RANGE              SYS_P468             NAME                 21-FEB-18
T_RANGE              SYS_P469             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P469             ID                   21-FEB-18
T_RANGE              SYS_P469             NAME                 21-FEB-18
T_RANGE              SYS_P470             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P470             ID                   21-FEB-18
T_RANGE              SYS_P470             NAME                 21-FEB-18
T_RANGE              SYS_P471             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P471             ID                   21-FEB-18
T_RANGE              SYS_P471             NAME                 21-FEB-18
T_RANGE              SYS_P472             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P472             ID                   21-FEB-18
T_RANGE              SYS_P472             NAME                 21-FEB-18
T_RANGE              SYS_P473             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P473             ID                   21-FEB-18
T_RANGE              SYS_P473             NAME                 21-FEB-18
T_RANGE              SYS_P463             CREATE_DATE          21-FEB-18
T_RANGE              SYS_P463             ID                   21-FEB-18
T_RANGE              SYS_P463             NAME                 21-FEB-18

36 rows selected.

SQL> commit;

Commit complete.

Let me gather statistics again

SQL> exec dbms_stats.gather_table_stats('SYS','T_RANGE');

PL/SQL procedure successfully completed.

Now note that the statistics were updated for all partitions of the table again; with INCREMENTAL set back to FALSE, the gather performs a costly full scan of all partitions:

SQL> SELECT partition_name,
            to_char( last_analyzed, 'DD-MON-YYYY, HH24:MI:SS' ) last_analyze,
            num_rows
     FROM   dba_tab_partitions
     WHERE  table_name = 'T_RANGE'
     ORDER  BY partition_position;

PARTITION_NAME                 LAST_ANALYZE                     NUM_ROWS
------------------------------ ------------------------------ ----------
PART_OLD                       21-FEB-2018, 23:06:47                   0
SYS_P473                       21-FEB-2018, 23:06:47                1000
SYS_P463                       21-FEB-2018, 23:06:47                3000
SYS_P464                       21-FEB-2018, 23:06:47                1000
SYS_P465                       21-FEB-2018, 23:06:47                1000
SYS_P466                       21-FEB-2018, 23:06:47                1000
SYS_P467                       21-FEB-2018, 23:06:47                1000
SYS_P468                       21-FEB-2018, 23:06:47                1000
SYS_P469                       21-FEB-2018, 23:06:47                1000
SYS_P470                       21-FEB-2018, 23:06:47                1000
SYS_P471                       21-FEB-2018, 23:06:47                1000
SYS_P472                       21-FEB-2018, 23:06:47                1000

12 rows selected.

J. SQL Plan Directives

SQL Plan Directives are one of the functionalities that make up Adaptive Query Optimization in Oracle Database 12c. SQL plan directives are like "extra notes" for the optimizer, to remind it that it previously selected a suboptimal plan, typically because of incorrect cardinality estimates. Incorrect cardinality estimates are often caused by issues like missing statistics, stale statistics, complex predicates or complex operators. Unlike SQL profiles, which are statement specific, SQL plan directives are linked to query expressions, so they can be used by several statements containing matching query expressions. Situations like missing histograms or missing extended statistics may result in SQL plan directives being generated.
The database manages SQL plan directives internally. Situations like automatic reoptimization may result in SQL plan directives being written to the SGA and later persisted to the SYSAUX tablespace, at which point they can be displayed using the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS views. Alternatively, existing SQL plan directives can be persisted manually using the DBMS_SPD package.
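
The DBMS_SPD package provides the manual controls. A small sketch; the directive ID shown is the one that appears in the demo below and would normally be looked up in DBA_SQL_PLAN_DIRECTIVES:

-- Persist directives from the SGA to SYSAUX immediately
EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

-- Drop a single directive by its ID
EXEC DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(796134444163473829);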

Let me create the test setup first.

sqlplus test/test

SQL> DROP TABLE tab1 PURGE;

Table dropped.

SQL> CREATE TABLE tab1 (
       id               NUMBER,
       gender           VARCHAR2(1),
       has_y_chromosome VARCHAR2(1),
       CONSTRAINT tab1_pk PRIMARY KEY (id),
       CONSTRAINT tab1_gender_chk CHECK (gender IN ('M', 'F')),
       CONSTRAINT tab1_has_y_chromosome_chk CHECK (has_y_chromosome IN ('Y', 'N'))
     );

Table created.

SQL> INSERT /*+ APPEND */ INTO tab1
     SELECT level, 'M', 'Y'
     FROM   dual
     CONNECT BY level <= 10;

10 rows created.

SQL> COMMIT;

Commit complete.

SQL> INSERT /*+ APPEND */ INTO tab1
     SELECT 10+level, 'F', 'N'
     FROM   dual
     CONNECT BY level <= 90;

90 rows created.

SQL> COMMIT;

Commit complete.

SQL> CREATE INDEX tab1_gender_idx ON tab1(gender);

Index created.

SQL> CREATE INDEX tab1_has_y_chromosome_idx ON tab1(has_y_chromosome);

Index created.

SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'TAB1');

PL/SQL procedure successfully completed.

Now check that no histograms have been collected:

SQL> COLUMN column_name FORMAT A20
SQL> SELECT column_id,
            column_name,
            histogram
     FROM   user_tab_columns
     WHERE  table_name = 'TAB1'
     ORDER BY column_id;

 COLUMN_ID COLUMN_NAME          HISTOGRAM
---------- -------------------- ---------------
         1 ID                   NONE
         2 GENDER               NONE
         3 HAS_Y_CHROMOSOME     NONE

Note the cardinality misestimate in the execution plan below. A cardinality mismatch often leads to a poor execution plan and hence poor performance.

SQL> SELECT /*+ GATHER_PLAN_STATISTICS */
            *
     FROM   tab1
     WHERE  gender = 'M'
     AND    has_y_chromosome = 'Y';

        ID G H
---------- - -
         1 M Y
         2 M Y
         3 M Y
         4 M Y
         5 M Y
         6 M Y
         7 M Y
         8 M Y
         9 M Y
        10 M Y

10 rows selected.

SQL> SET LINESIZE 200 PAGESIZE 100
SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  5t8y8p5mpb99j, child number 1
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        * FROM   tab1 WHERE  gender
= 'M' AND    has_y_chromosome = 'Y'

Plan hash value: 1552452781

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |     10 |00:00:00.01 |       4 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TAB1            |      1 |     25 |     10 |00:00:00.01 |       4 |
|*  2 |   INDEX RANGE SCAN                  | TAB1_GENDER_IDX |      1 |     50 |     10 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("HAS_Y_CHROMOSOME"='Y')
   2 - access("GENDER"='M')

21 rows selected.

We can also see that Oracle has marked this SQL statement as re-optimizable:

SQL> COLUMN sql_text FORMAT A40
SQL> COLUMN is_reoptimizable FORMAT A16
SQL> SELECT sql_text, is_reoptimizable
     FROM   v$sql
     WHERE  sql_id = '5t8y8p5mpb99j';

SQL_TEXT                                 IS_REOPTIMIZABLE
---------------------------------------- ----------------
SELECT /*+ GATHER_PLAN_STATISTICS */     Y
    * FROM   tab1 WHERE  gender = 'M' AN
D    has_y_chromosome = 'Y'
SELECT /*+ GATHER_PLAN_STATISTICS */     Y
    * FROM   tab1 WHERE  gender = 'M' AN
D    has_y_chromosome = 'Y'

At this point the optimizer has written SQL plan directives into the SGA, but they are not visible to us yet, as they have not been persisted to the SYSAUX tablespace.

SQL> SET LINESIZE 200
SQL> COLUMN dir_id FORMAT A20
SQL> COLUMN owner FORMAT A10
SQL> COLUMN object_name FORMAT A10
SQL> COLUMN col_name FORMAT A10
SQL> SELECT TO_CHAR(d.directive_id) dir_id, o.owner, o.object_name,
            o.subobject_name col_name, o.object_type, d.type, d.state, d.reason
     FROM   dba_sql_plan_directives d, dba_sql_plan_dir_objects o
     WHERE  d.directive_id = o.directive_id
     AND    o.owner = 'TEST'
     ORDER BY 1,2,3,4,5;

no rows selected

We can wait for the database to persist the SQL plan directives, or manually persist them using the DBMS_SPD package.

SQL> EXEC DBMS_SPD.flush_sql_plan_directive;
PL/SQL procedure successfully completed.

If we repeat the previous query, the SQL plan directives will be visible.

SQL> SET LINESIZE 200
SQL> COLUMN dir_id FORMAT A20
SQL> COLUMN owner FORMAT A10
SQL> COLUMN object_name FORMAT A10
SQL> COLUMN col_name FORMAT A10
SQL> SELECT TO_CHAR(d.directive_id) dir_id, o.owner, o.object_name,
            o.subobject_name col_name, o.object_type, d.type, d.state, d.reason
     FROM   dba_sql_plan_directives d, dba_sql_plan_dir_objects o
     WHERE  d.directive_id = o.directive_id
     AND    o.owner = 'TEST'
     ORDER BY 1,2,3,4,5;

DIR_ID               OWNER      OBJECT_NAM COL_NAME   OBJECT TYPE             STATE      REASON
-------------------- ---------- ---------- ---------- ------ ---------------- ---------- ------------------------------------
796134444163473829   TEST       TAB1       GENDER     COLUMN DYNAMIC_SAMPLING USABLE     SINGLE TABLE CARDINALITY MISESTIMATE
796134444163473829   TEST       TAB1                  TABLE  DYNAMIC_SAMPLING USABLE     SINGLE TABLE CARDINALITY MISESTIMATE

Now let me rerun the query; the execution plan shows the proper cardinality, with the help of the SQL plan directive:

SQL> SELECT /*+ GATHER_PLAN_STATISTICS */
            *
     FROM   tab1
     WHERE  gender = 'M'
     AND    has_y_chromosome = 'Y';

        ID G H
---------- - -
         1 M Y
         2 M Y
         3 M Y
         4 M Y
         5 M Y
         6 M Y
         7 M Y
         8 M Y
         9 M Y
        10 M Y

10 rows selected.

SQL> SET LINESIZE 200 PAGESIZE 100
SQL> SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format => 'allstats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
SQL_ID  5t8y8p5mpb99j, child number 2
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */        * FROM   tab1 WHERE  gender
= 'M' AND    has_y_chromosome = 'Y'

Plan hash value: 1552452781

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                 |      1 |        |     10 |00:00:00.01 |       4 |
|*  1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TAB1            |      1 |     10 |     10 |00:00:00.01 |       4 |
|*  2 |   INDEX RANGE SCAN                  | TAB1_GENDER_IDX |      1 |     10 |     10 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("HAS_Y_CHROMOSOME"='Y')
   2 - access("GENDER"='M')

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - statistics feedback used for this statement
   - 1 Sql Plan Directive used for this statement

27 rows selected.

K. Extended Statistics

In real-world data, there is often a relationship or correlation between the data stored in different columns of the same table. For example, consider a customers table where the values in a cust_state_province column are influenced by the values in a country_id column, because the state of California is only going to be found in the United States. If the Oracle optimizer is not aware of these real-world relationships, it can miscalculate the cardinality estimate when multiple columns from the same table are used in the where clause of a statement. With extended statistics you have an opportunity to tell the optimizer about these real-world relationships between the columns.
By creating extended statistics on a group of columns, the optimizer can determine a more accurate cardinality estimate when the columns are used together in a where clause of a SQL statement. You can use DBMS_STATS.CREATE_EXTENDED_STATS to define the column group you want to have statistics gathered on as a whole. Once the group has been established, Oracle will automatically maintain the statistics on that column group when statistics are gathered on the table.
If we continue with the initial example of the customers table: when the value of cust_state_province is 'CA', we know the value of country_id will be 52790 (the USA). There is also a skew in the data in these two columns; because the company used in the SH sample schema is based in San Francisco, the majority of rows in the table have the values 'CA' and 52790. Both the relationship between the columns and the skew in the data can make it difficult for the optimizer to calculate the cardinality of these columns correctly when they are used together in a query.

Let us connect to the SH schema:

SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341

It is evident from the execution plan that the cardinality estimate is incorrect (128 estimated versus 3341 actual):

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  f3rfbwhagqcss, child number 0
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |   128 |  2048 |   423   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

20 rows selected.

Prior to Oracle Database 11g, the only option open to us was to make the optimizer aware of the data skew in both the country_id column (most rows have 52790 as the value) and the cust_state_province column (most rows have 'CA' as the value). We can do this by gathering histograms on the skewed columns:

SQL> exec dbms_stats.gather_table_stats(user,'CUSTOMERS',method_opt=>'for all columns size skewonly');
PL/SQL procedure successfully completed.

Let me create extended statistics on the two columns used in the where clause:

SQL> select dbms_stats.create_extended_stats(null,'customers','(country_id,cust_state_province)') from dual;
SQL> col COLUMN_NAME format a30;
SQL> col HISTOGRAM format a30;
SQL> select column_name,histogram from user_tab_col_statistics where table_name='CUSTOMERS';

COLUMN_NAME                    HISTOGRAM
------------------------------ ------------------------------
SYS_STUJGVLRVH5USVDU$XNV4_IR#4 FREQUENCY
CUST_STATE_PROVINCE_ID         FREQUENCY
COUNTRY_ID                     FREQUENCY
CUST_MAIN_PHONE_NUMBER         HYBRID
CUST_INCOME_LEVEL              FREQUENCY
CUST_CREDIT_LIMIT              FREQUENCY
CUST_EMAIL                     HYBRID
CUST_TOTAL                     FREQUENCY
CUST_TOTAL_ID                  FREQUENCY
CUST_SRC_ID                    NONE
CUST_EFF_FROM                  FREQUENCY
CUST_EFF_TO                    NONE
CUST_VALID                     FREQUENCY
CUST_ID                        HYBRID
CUST_FIRST_NAME                HYBRID
CUST_LAST_NAME                 HYBRID
CUST_GENDER                    FREQUENCY
CUST_YEAR_OF_BIRTH             FREQUENCY
CUST_MARITAL_STATUS            FREQUENCY
CUST_STREET_ADDRESS            HYBRID
CUST_POSTAL_CODE               HYBRID
CUST_CITY                      HYBRID
CUST_CITY_ID                   HYBRID
CUST_STATE_PROVINCE            FREQUENCY
SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341

We can now see that the query uses dynamic sampling and the adaptive optimizer features, and the estimate is much closer:

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  f3rfbwhagqcss, child number 0
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |  3424 | 54784 |   423   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - 1 Sql Plan Directive used for this statement

25 rows selected.

Optionally, we can see a report of column usage:

SQL> set long 100000
SQL> set lines 120
SQL> set pages 0
SQL> select dbms_stats.report_col_usage(user,'CUSTOMERS') from dual;
LEGEND:
.......
EQ         : Used in single table EQuality predicate
RANGE      : Used in single table RANGE predicate
LIKE       : Used in single table LIKE predicate
NULL       : Used in single table is (not) NULL predicate
EQ_JOIN    : Used in EQuality JOIN predicate
NONEQ_JOIN : Used in NON EQuality JOIN predicate
FILTER     : Used in single table FILTER predicate
JOIN       : Used in JOIN predicate
GROUP_BY   : Used in GROUP BY expression
...............................................................................
###############################################################################
COLUMN USAGE REPORT FOR SH.CUSTOMERS
………………………………
  1. COUNTRY_ID : EQ EQ_JOIN
  2. CUST_CITY_ID : EQ_JOIN
  3. CUST_ID : EQ_JOIN
  4. CUST_STATE_PROVINCE : EQ
  5. CUST_STATE_PROVINCE_ID : EQ_JOIN
  6. CUST_TOTAL_ID : EQ_JOIN
  7. SYS_STUJGVLRVH5USVDU$XNV4_IR#4 : EQ
  8. (CUST_STATE_PROVINCE, COUNTRY_ID) : FILTER
###############################################################################
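
Rather than identifying column groups by hand, Oracle can also recommend them from a monitored workload via DBMS_STATS.SEED_COL_USAGE. A small sketch (the 300-second monitoring window is an arbitrary choice):

SQL> -- Record column usage for the next 300 seconds
SQL> EXEC DBMS_STATS.SEED_COL_USAGE(NULL, NULL, 300);

SQL> -- ... run the representative workload, then review usage:
SQL> SELECT DBMS_STATS.REPORT_COL_USAGE(user, 'CUSTOMERS') FROM dual;

SQL> -- Create the column groups Oracle recommends for the table
SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, 'CUSTOMERS') FROM dual;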

To see the effect of the extended statistics in isolation, I also need to turn off optimizer dynamic sampling and the optimizer adaptive features:

SQL> alter system set optimizer_dynamic_sampling=0;
SQL> alter system set optimizer_adaptive_features=false;
SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341
SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  f3rfbwhagqcss, child number 2
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   424 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    35 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |    13 |   455 |   424   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

Let me drop the extended statistics:

SQL> exec dbms_stats.drop_extended_stats(null,'customers','(country_id,cust_state_province)');
PL/SQL procedure successfully completed.

I will recreate the extended statistics:

SQL> select dbms_stats.create_extended_stats(null,'customers','(country_id,cust_state_province)') from dual;

DBMS_STATS.CREATE_EXTENDED_STATS(NULL,'CUSTOMERS','(COUNTRY_ID,CUST_STATE_PROVINCE)')
---------------------------------------------------------------------------------------------
SYS_STUJGVLRVH5USVDU$XNV4_IR#4

Let me gather statistics again

SQL> exec dbms_stats.gather_table_stats(user,'CUSTOMERS');
PL/SQL procedure successfully completed.

It will still show a bad cardinality estimate, because the histograms are gone:

SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  f3rfbwhagqcss, child number 2
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |  1115 | 17840 |   423   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

20 rows selected.

Now let me gather statistics again with the skewonly histogram option. The plan now shows the correct estimate:

SQL> exec dbms_stats.gather_table_stats(user,'CUSTOMERS',method_opt=>'for all columns size skewonly');

PL/SQL procedure successfully completed.

SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  74w1jdh78t4w8, child number 0
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |  3341 | 53456 |   423   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

L. How to Restore Old Statistics from History

Let me check what old statistics were collected during the previous test cases:

SQL> COL TABLE_NAME FORMAT a10
SQL> SELECT table_name,
            TO_CHAR(stats_update_time,'YYYY-MM-DD:HH24:MI:SS') AS stats_mod_time
     FROM   dba_tab_stats_history
     WHERE  table_name='CUSTOMERS'
     AND    owner='SH'
     ORDER  BY stats_update_time DESC;

TABLE_NAME STATS_MOD_TIME
---------- -------------------
CUSTOMERS  2018-02-22:03:30:57
CUSTOMERS  2018-02-22:03:24:07
CUSTOMERS  2018-02-22:03:20:27
CUSTOMERS  2018-02-22:03:00:37
CUSTOMERS  2018-02-22:02:16:48
CUSTOMERS  2018-02-22:02:01:26
CUSTOMERS  2018-02-22:01:42:21
CUSTOMERS  2018-02-07:12:15:38

8 rows selected.

Let me restore the older statistics now:

SQL> BEGIN
       DBMS_STATS.RESTORE_TABLE_STATS('SH','CUSTOMERS',
         TO_TIMESTAMP('2018-02-22:03:24:07','YYYY-MM-DD:HH24:MI:SS'));
     END;
     /

PL/SQL procedure successfully completed.

Now if we check the execution plan, it has gone back to the earlier estimates:

SQL> select count(1) from customers where cust_state_province='CA' and country_id=52790;

  COUNT(1)
----------
      3341

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID  f3rfbwhagqcss, child number 2
-------------------------------------
select count(1) from customers where cust_state_province='CA' and
country_id=52790

Plan hash value: 296924608

--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |   423 (100)|          |
|   1 |  SORT AGGREGATE    |           |     1 |    16 |            |          |
|*  2 |   TABLE ACCESS FULL| CUSTOMERS |  1115 | 17840 |   423   (1)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"=52790))

20 rows selected.
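
Note that statistics can only be restored as far back as the statistics history retention allows (31 days by default). The related calls, as a quick sketch:

SQL> -- How far back statistics can be restored
SQL> SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;
SQL> SELECT DBMS_STATS.GET_STATS_HISTORY_AVAILABILITY FROM dual;

SQL> -- Change the retention to 60 days
SQL> EXEC DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(60);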

M. Online Statistics Gathering

In Oracle Database 12c, online statistics gathering "piggybacks" statistics gathering onto direct-path data loading operations such as create table as select (CTAS) and insert as select (IAS). Gathering statistics as part of the data loading operation means no additional full data scan is required to have statistics available immediately after the data is loaded.
Online statistics gathering does not gather histograms or index statistics, as these types of statistics require additional data scans, which could have a large impact on the performance of the data load. To gather the necessary histogram and index statistics without re-gathering the base column statistics, use the DBMS_STATS.GATHER_TABLE_STATS procedure with the new OPTIONS parameter set to GATHER AUTO. Note that for performance reasons, GATHER AUTO builds histograms using a sample of rows rather than all rows in the table.
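
A minimal sketch, assuming a hypothetical SALES table to copy:

SQL> -- Online statistics are gathered during the direct-path load itself
SQL> CREATE TABLE sales_copy AS SELECT * FROM sales;

SQL> -- Base table and column statistics are already present, no extra gather needed
SQL> SELECT num_rows, last_analyzed
     FROM   user_tab_statistics
     WHERE  table_name = 'SALES_COPY';

SQL> -- Top up histograms and index statistics without re-gathering base stats
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'SALES_COPY', options => 'GATHER AUTO');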

Installation of Oracle 12c RAC Flex Cluster Infrastructure on VirtualBox

Introduction

This document will help guide you through the installation of Oracle 12cR1 RAC Flex Cluster infrastructure on VirtualBox, which is very popular because of its shared-storage simulation feature.
Below is the overall requirement in a nutshell.

We need 3 servers for now. I will install RAC on the first 2 servers, and the 3rd server will act as the DNS server. My desktop RAM is 16 GB.

Hostname: rac1.example.com

Physical IP: 192.168.50.110
Private IP: 192.168.1.101
RAM: 4 GB
/u01 mount point: 100 GB

Hostname: rac2.example.com

Physical IP: 192.168.50.111
Private IP: 192.168.1.102
RAM: 4 GB
/u01 mount point: 100 GB

Hostname: dns.example.com

Physical IP: 192.168.50.113

RAM: 1 GB
/ mount point: 20 GB

The SCAN name will be resolved by GNS. The name will be rac-scan.gns.rac.example.com.
Virtual IPs will be assigned from DHCP and DNS.
GNS name: gns.rac.example.com

Download Software

Download the following software.
Depending on your version of VirtualBox and Linux, there may be some slight variation in how the screenshots look.

 

VirtualBox Network Setup

We need to make sure a host-only network is configured and check/modify the IP range for that network. This will be the public network for our RAC installation.
  • Start VirtualBox from the menu.
  • Select the "File > Preferences" menu option.
  • Click "Network" in the left pane and click the "Host-only Networks" tab.
  • Click the "Adds new host-only network" button on the right side of the screen. Depending on the host OS, a network called "vboxnet0" or "VirtualBox Host-Only Ethernet Adapter" will be created.
  • Click the "Edits selected host-only network" button on the right side of the screen.
  • If you want to use a different subnet for your public addresses, you can change the network details here. Just make sure the subnet you choose doesn't match any real subnets on your network. I've decided to choose "192.168.50.X".
  • Go to the "DHCP Server" tab and do not enable DHCP, as we are building a separate DHCP server.

 

Virtual Machine Setup

Now we must define the two virtual RAC nodes. We can save time by defining one VM, then cloning it once the OS is installed.

 

Start VirtualBox and click the "New" button on the toolbar. Enter the name "rac1", OS "Linux" and Version "Oracle (64 bit)", then click the "Next" button.

Please allocate 4 GB of RAM.

On the next screens, select the following options:

Please select the "VDI" option.

Please select the first option, "Dynamically Allocated".

Please select your preferred location to create the disk and select a 30 GB size.

Now press the "Create" button and your virtual machine is ready.

The "rac1" VM will appear in the left-hand pane. Scroll down the details on the right and click on the "Network" link.
Make sure "Adapter 1" is enabled and set to "NAT", then click on the "Adapter 2" tab.

Make sure "Adapter 2" is enabled and set to "Host-only Adapter", then click on the "Adapter 3" tab.

Now in "Adapter 3", please select the following:

Move "Hard Disk" to the top of the boot order and uncheck the "Floppy" option, then click the "OK" button.

Guest Operating System Installation on all the VMs

Please follow any standard guide to install Oracle Linux 6.

You can choose a 100 GB mount point for /u01 on rac1 and rac2, and a 30 GB root mount point for the DNS server.

Configure the YUM Repository on all the VMs

With the new VM highlighted, click the "Start" button on the toolbar. On the "Select start-up disk" screen, choose the relevant Linux ISO image and click the "Start" button. Copy the installation media packages to a local folder; in my case, I am copying them to the /home/debasis/SOFT folder.
Then configure the yum repository by creating the following file. Log in as the root user and go to /etc/yum.repos.d:

[root@rac1 yum.repos.d]# cat rhel.repo
[rhel.repo]
gpgcheck = 0
enabled = 1
baseurl = file:///home/debasis/SOFT
name = rhel-repo

 

[root@rac1 yum.repos.d]# yum clean all
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cleaning repos: rhel.repo
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos

[root@rac1 yum.repos.d]# yum repolist
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
rhel.repo | 4.1 kB 00:00
(1/2): rhel.repo/group_gz | 137 kB 00:00
(2/2): rhel.repo/primary_db | 4.0 MB 00:00
repo id     repo name   status
rhel.repo   rhel-repo   4,986
repolist: 4,986

This part will help to configure DHCP and DNS on the name server (192.168.50.113).

Please set BOOTPROTO=dhcp in /etc/sysconfig/network-scripts/ifcfg-eth1.

Configure DHCP server:-

The DHCP server will be used to provide dynamic IPs for the VIPs and SCAN IPs. A package has to provide the service, and sure enough it is called dhcp. The dhcp client is normally installed already, which helps avoid confusion.

Step 1:-Install DHCP server

yum install dhcp*

Step 2:-Configure DHCP server

vi /etc/dhcp/dhcpd.conf

Please add following entry

ddns-update-style interim;
ignore client-updates;
subnet 192.168.50.0 netmask 255.255.255.0 {
 option subnet-mask              255.255.255.0;
 option domain-name              "rac.example.com";
 range  192.168.50.117 192.168.50.130;
 default-lease-time 21600;
 max-lease-time 43200;
 option routers 192.168.50.1;
 option ip-forwarding off;
 option domain-name-servers 192.168.50.113;
 option broadcast-address 192.168.50.255;
}

I am explaining the major parameters:

domain-name-servers is the name/DNS server IP (192.168.50.113).

routers is the default gateway (192.168.50.1).

range is 192.168.50.117 to 192.168.50.130, so all DHCP IPs will be assigned from that range.
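
To make the configuration effective, restart dhcpd and enable it at boot. The lease file shown below is the default location for this dhcp package, so the last command is a quick way to check which addresses are being handed out.

service dhcpd restart
chkconfig dhcpd on
cat /var/lib/dhcpd/dhcpd.leases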

Configure NAME server:-

DNS for GNS

We need to configure DNS first and then delegate the subdomain to GNS, which involves a little work on the DNS server. Also, we need to put a virtual IP address for GNS into DNS, but that address must not be in use. I also created a dedicated server for DNS and bind on 192.168.50.113. The configuration changes I made are as follows:

Step 1:-Install bind packages

yum install bind*

Step 2:-Make configuration change in /etc/named.conf

options
{
/* make named use port 53 for the source of all queries, to allow
* firewalls to block all ports except 53:
*/
query-source port 53;
query-source-v6 port 53;

// Put files that named is allowed to write in the data/ directory:
directory "/var/named"; // the default
dump-file "data/cache_dump.db";
statistics-file "data/named_stats.txt";
memstatistics-file "data/named_mem_stats.txt";

allow-transfer {"none";};
zone-statistics yes;
};

logging
{
channel default_debug {
file "data/named.run";
severity dynamic;
};
};

zone "example.com" IN {
type master;
file "/var/named/example.com.db";
allow-transfer { 192.168.50.113; };
};

zone "50.168.192.in-addr.arpa" IN {
type master;
file "/var/named/192.168.50.db";
};

// named.rfc1912.zones here…

Step 3:-Now you need to create example.com.db and 192.168.50.db under /var/named

[root@dns dhcp]#vi /var/named/example.com.db

$TTL 86400
$ORIGIN example.com.
@ 1D IN SOA dns.example.com. hostmaster.example.com. (
2002022401 ; serial
3H ; refresh
15 ; retry
1w ; expire
3h ; minimum
)
; main domain name servers
IN NS dns
dns IN A 192.168.50.113
; A record for mail server above
mail IN A 192.168.1.113

rac1 IN A 192.168.50.110
rac2 IN A 192.168.50.111
example.com. A 192.168.50.113

; sub-domain definitions
$ORIGIN rac.example.com.
@ IN NS gns.rac.example.com.
IN NS dns.example.com.
; sub-domain address records for name server only – glue record
gns IN A 192.168.50.115 ; 'glue' record

[root@dns dhcp]#vi /var/named/192.168.50.db

$TTL 86400 ; 24 hours could have been written as 24h or 1d
@ 1D IN SOA dns.example.com. hostmaster.example.com. (
2002022401 ; serial
3H ; refresh
15 ; retry
1w ; expire
3h ; minimum
)
IN NS dns.example.com.
dns IN A 192.168.50.113
110 IN PTR rac1.example.com.
111 IN PTR rac2.example.com.
115 IN PTR gns.rac.example.com.

Step 4:-Restart the named server

service named restart

Reference:-

DNS and DHCP Setup Example for Grid Infrastructure GNS (Doc ID 946452.1)

Please stop network manager for all servers so that /etc/resolv.conf will not be updated automatically.

service NetworkManager stop

chkconfig NetworkManager off

Please make the following entries in /etc/resolv.conf on all servers so that they use the name server:

options attempts:2

options timeout:1

search example.com rac.example.com

nameserver 192.168.50.113
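
Once named is running, you can verify forward and reverse resolution from any node; this assumes the bind-utils package (which provides nslookup) is installed.

nslookup rac1.example.com 192.168.50.113
nslookup 192.168.50.110 192.168.50.113
nslookup gns.rac.example.com 192.168.50.113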

VirtualBox Guest Additions installation for mouse and keyboard integration. You can do this for all VMs.

Please install the following packages so that the Guest Additions build correctly.

yum install kernel-devel*
yum install gcc*
Then click the “Run” button when the Guest Additions CD prompt appears.

Configure SSH now in ALL VM

Now you can ping, but you cannot ssh yet, so you need to configure ssh. Log in as root and set PermitRootLogin to yes in /etc/ssh/sshd_config.
[root@rac1 yum.repos.d]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
Do this step for all servers

Install the following packages in RAC1

yum install -y binutils.x86_64 glibc.x86_64 libgcc.x86_64 libstdc++.x86_64 libaio.x86_64 \
libXext.x86_64 libXtst.x86_64 libX11.x86_64 libXau.x86_64 ksh.x86_64 libxcb.x86_64 libXi.x86_64 \
make.x86_64 sysstat.x86_64 compat-libcap1.x86_64 compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 \
glibc-devel.x86_64 libstdc++-devel.x86_64 libaio-devel.x86_64 cloog-ppl.x86_64 cpp.x86_64 \
glibc-headers.x86_64 kernel-headers.x86_64 mpfr.x86_64 ppl.x86_64 xterm.x86_64 \
xorg-x11-xauth xclock xorg-x11-server-utils psmisc.x86_64

 

Now all kernel parameters need to be added in /etc/sysctl.conf in RAC1

 

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Run the following command to change the current kernel parameters.
/sbin/sysctl -p
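You can read the values back to confirm they are active, for example:

/sbin/sysctl fs.file-max kernel.shmmax net.ipv4.ip_local_port_range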

Add the following lines to a file called “/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf” file in RAC1.

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768
oracle   hard   memlock    134217728
oracle   soft   memlock    134217728
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728

Please create the below groups and the grid and oracle users in RAC1.

 

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54325 asmdba
groupadd -g 54328 asmadmin
groupadd -g 54329 asmoper
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba -d /home/grid -m -c "User for Oracle Grid Infrastructure Software" -s /bin/bash grid
useradd -u 54321 -g oinstall -G dba,asmdba,oper,asmadmin -d /home/oracle -m -c "User for Oracle Database Software" -s /bin/bash oracle

Please modify the files below to make RAC1 compatible for RAC installation.

/etc/selinux/config
SELINUX=permissive
/etc/systemd/logind.conf
RemoveIPC=no
/etc/sysconfig/network
NOZEROCONF=yes

Disable firewall services for all VM

service iptables stop
chkconfig iptables off

Please add the following entries in /etc/pam.d/login in RAC1

session required /lib64/security/pam_limits.so
session required pam_limits.so

Please create the following directories and change the permissions in RAC1

mkdir -p /u01/oemcc
mkdir -p /u01/app/grid
mkdir -p /u01/app/product
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chown -R grid:oinstall /u01/app/oraInventory
chown -R grid:oinstall /u01/app/grid
chmod -R 775 /u01/
chmod -R 770 /u01/app/oraInventory

Please add/modify .bash_profile under oracle user in /home/oracle in RAC1

 

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/12c/db_1
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH_64=$ORACLE_HOME/lib
export ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH LD_LIBRARY_PATH_64

TNS_ADMIN=$ORACLE_HOME/network/admin
LIB_PATH=$ORACLE_HOME/lib
SQLPATH=$ORACLE_HOME/lib
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_PROCDIR ORACLE_PATH ORACLE_PROC ORA_NLS10 NLS_LANG TNS_ADMIN LIB_PATH SQLPATH CLASSPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin:/etc:/usr/openwin/bin:/usr/sbin:/usr/ucb/bin:/usr/local/bin/:/usr/sfw/bin
export PATH
ORACLE_SID=ORCL1
export ORACLE_SID

Please add/modify .bash_profile under grid user in /home/grid in RAC1

 

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

export ORACLE_BASE=/u01/app/grid
ORACLE_HOME=$ORACLE_BASE/product/grid
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
LD_LIBRARY_PATH_64=$ORACLE_HOME/lib
export ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH LD_LIBRARY_PATH_64

TNS_ADMIN=$ORACLE_HOME/network/admin
LIB_PATH=$ORACLE_HOME/lib
SQLPATH=$ORACLE_HOME/lib
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_PROCDIR ORACLE_PATH ORACLE_PROC ORA_NLS10 NLS_LANG TNS_ADMIN LIB_PATH SQLPATH CLASSPATH

PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/bin:/etc:/usr/openwin/bin:/usr/sbin:/usr/ucb/bin:/usr/local/bin/:/usr/sfw/bin
export PATH
ORACLE_SID=+ASM1
export ORACLE_SID

 

Please add following entry in /etc/hosts in RAC1

/etc/hosts

127.0.0.1       localhost.localdomain   localhost
# Public
192.168.50.110   rac1.example.com  rac1
192.168.50.111   rac2.example.com rac2
# Private
192.168.1.101   rac1-priv
192.168.1.102   rac2-priv

 

Now you need to configure the shared storage, which is required for the ASM disks to be visible from both RAC nodes.

Open a command prompt on your host laptop after leaving the Linux console. You may choose either D: or C: to store the raw ASM disks; I am using C: here. Note that each disk must be created before it can be attached.
mkdir C:\VirtualBox\ol6-121-rac
cd C:\VirtualBox\ol6-121-rac
"c:\Program Files\Oracle\VirtualBox\VBoxManage" createhd --filename asm1.vdi --size 5120 --format VDI --variant Fixed
"c:\Program Files\Oracle\VirtualBox\VBoxManage" createhd --filename asm2.vdi --size 5120 --format VDI --variant Fixed
"c:\Program Files\Oracle\VirtualBox\VBoxManage" createhd --filename asm3.vdi --size 5120 --format VDI --variant Fixed
"c:\Program Files\Oracle\VirtualBox\VBoxManage" createhd --filename asm4.vdi --size 5120 --format VDI --variant Fixed
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach rac1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach rac1 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach rac1 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" modifyhd asm1.vdi --type shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" modifyhd asm2.vdi --type shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" modifyhd asm3.vdi --type shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" modifyhd asm4.vdi --type shareable
Please proceed with disk partitioning using fdisk after connecting with root user to node 1
[root@rac1 dev]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

Device does not contain a recognized partition table

Building a new DOS disklabel with disk identifier 0x184caae5.

 

Command (m for help): n

Partition type:

p   primary (0 primary, 0 extended, 4 free)

e   extended

Select (default p): p

Partition number (1-4, default 1):

First sector (2048-10485759, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759):

Using default value 10485759

Partition 1 of type Linux and of size 5 GiB is set

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

 

Please add following entry to enable SCSI

Add the following to the “/etc/scsi_id.config” file to configure SCSI devices as trusted. Create the file if it doesn’t already exist.
options=-g

 

Please identify the SCSI ID using the following command for each individual ASM disk that has been added and shared.

[root@rac1 dev]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB7a401c0f-c528452b
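
You can loop over all four shared disks to collect the IDs in one go; the device names sdb through sde assume the SATA attach order used above.

for disk in sdb sdc sdd sde; do
  echo -n "/dev/$disk: "
  /usr/lib/udev/scsi_id -g -u -d /dev/$disk
done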

Please create a file 99-ASM.rules under /etc/udev/rules.d and make the following entry. RESULT should be the SCSI ID retrieved in the previous step; repeat the line for each disk (asm-disk1 through asm-disk4) with its own RESULT value.

KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB1bb0c812-29a5f87c", NAME="asm-disk1", OWNER="oracle", GROUP="dba", MODE="0660"

Please restart udev

# /sbin/udevadm control --reload-rules
# /sbin/start_udev

Now you need to use partprobe to refresh new partitions

[root@rac1 dev]# /sbin/partprobe /dev/sdb1
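
If the rules matched, the named device nodes should now exist with the requested ownership; a quick check:

# ls -l /dev/asm-disk*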

Now you need to clone the RAC1 virtual machine to RAC2 using below steps

 

Remove all ASM storage devices from the storage option in VirtualBox and share the ASM disks already created and attached to RAC1.
Please stop both the RAC1 and RAC2 virtual machines and run the following command from a command prompt on your host laptop/desktop.
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach ora_linux6_12c_n2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium asm1.vdi --mtype shareable
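
The remaining shared disks are attached to the second node the same way, repeating for ports 2 through 4:

"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach ora_linux6_12c_n2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium asm2.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach ora_linux6_12c_n2 --storagectl "SATA" --port 3 --device 0 --type hdd --medium asm3.vdi --mtype shareable
"c:\Program Files\Oracle\VirtualBox\VBoxManage" storageattach ora_linux6_12c_n2 --storagectl "SATA" --port 4 --device 0 --type hdd --medium asm4.vdi --mtype shareable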

Please re-initialize all IPs on the cloned node after logging into the cloned VM (say RAC2).

Log in to the “rac2” virtual machine as the “root” user so we can reconfigure the network settings to match the following.

  • hostname: rac2.example.com
  • eth0: DHCP (*Not* Connect Automatically)
  • eth1: IP=192.168.50.111, Subnet=255.255.255.0, Gateway=192.168.50.1, DNS=192.168.50.113, Search=example.com (Connect Automatically)
  • eth2: IP=192.168.1.102, Subnet=255.255.255.0, Gateway=<blank>, DNS=<blank>, Search=<blank> (Connect Automatically)

Amend the hostname in the “/etc/sysconfig/network” file.

NETWORKING=yes
HOSTNAME=rac2.example.com

Check the MAC address of each of the available network connections. Don’t worry that they are listed as “eth3” to “eth5”. These are dynamically created connections because the MAC address of the “eth0” to “eth2” connections are incorrect.

# ifconfig -a | grep eth
eth3      Link encap:Ethernet  HWaddr 08:00:27:43:41:74  
eth4      Link encap:Ethernet  HWaddr 08:00:27:4B:4F:0F  
eth5      Link encap:Ethernet  HWaddr 08:00:27:E8:70:17
#

Edit the “/etc/sysconfig/network-scripts/ifcfg-eth0” file, amending only the HWADDR setting as follows and deleting the UUID entry. Note, the HWADDR value comes from the “eth3” interface displayed above. Please change the HWADDR and the interface name to “eth0”.

HWADDR=08:00:27:43:41:74

Edit the “/etc/sysconfig/network-scripts/ifcfg-eth1” file, amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the “eth4” interface displayed above. Please change the HWADDR and the interface name to “eth1”.

HWADDR=08:00:27:4B:4F:0F
IPADDR=192.168.50.111

Edit the “/etc/sysconfig/network-scripts/ifcfg-eth2” file, amending only the IPADDR and HWADDR settings as follows and deleting the UUID entry. Note, the HWADDR value comes from the “eth5” interface displayed above. Please change the HWADDR and the interface name to “eth2”.

HWADDR=08:00:27:E8:70:17
IPADDR=192.168.1.102

Also, please check the network-related rules in the /etc/udev/rules.d/90-persistent.rules file and change the interface names accordingly.

 

Please share the local folder where you have kept all the installation files, or you can FTP them to rac1.

Open MobaXterm to get the GUI locally.

Please unzip the grid installation binaries under /u01/app/grid.
Now you need to go to /u01/app/grid/grid.
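
From there the installer is launched as the grid user, for example:

[grid@rac1 ~]$ cd /u01/app/grid/grid
[grid@rac1 grid]$ ./runInstaller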

Now please wait; runInstaller will open the GUI. Select the “Install and Configure Oracle Grid Infrastructure for a Cluster” option, then click the “Next” button.

Accept the “Configure a Flex Cluster” option by clicking the “Next” button.

Select the “Advanced Installation” option, then click the “Next” button.

You can select your own language preference.

Please provide the cluster name, SCAN name and port. The GNS VIP should be 192.168.50.115 (this is configured in /var/named/example.com.db on the DNS server).

Enter the details of the second node in the cluster, then click the “OK” button.

Click the “SSH Connectivity…” button and enter the password for the “grid” user. Click the “Setup” button to configure SSH connectivity, and the “Test” button to test it once it is complete. Once the test is complete, click the “Next” button.
Please select the public and private network interfaces you defined earlier for the corresponding interface names.

ASM configuration should start now.

Please select 2 disks for the OCR, voting disk and management DB.
Allocation Unit Size: 4 MB (for any diskgroup you may create).

Please set the password for ASMSNMP

Please use following option.

Please select the group as below

Please select ORACLE_BASE and ORACLE_HOME

Please select oracle inventory as following

Please do not select automatic execution of root.sh; I prefer to run it manually.

The validation check results are shown below:-

Please install the following package, downloaded manually, on both the rac1 and rac2 nodes.

https://drive.google.com/open?id=0B8-3R318F39hNl9BOEVGekVuU0U

yum install compat-libstdc++-33-3.2.3-71.el7.x86_64.rpm

Please run the verification again.

As the installation progresses, you will get the following screen, where you are instructed to run the following scripts.

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

Please execute root.sh for rac1
[root@rac1 ~]# /u01/app/grid/product/grid/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME=  /u01/app/grid/product/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/product/grid/crs/install/crsconfig_params
2017/11/07 06:46:21 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/11/07 06:46:56 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/11/07 06:46:58 CLSRSC-363: User ignored prerequisites during installation
OLR initialization – successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
2017/11/07 06:47:56 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
2017/11/07 07:01:34 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/11/07 07:02:19 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster … succeeded
Please proceed with the next steps. It may take another 15 to 30 minutes to complete.

 

Please check the overall status using the below command:-
[root@rac1 ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle

[root@rac1 ~]# crsctl stat res -t

——————————————————————————–
Name           Target  State        Server                   State details
——————————————————————————–
Local Resources
——————————————————————————–
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
——————————————————————————–
Cluster Resources
——————————————————————————–
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       rac1                     169.254.9.67 192.168
                                                             .1.100,STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       rac1                     Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                     STABLE
——————————————————————————–

EXPDP and IMPDP command reference 12c

 

A.Very useful method to upgrade 11gR2 to 12c using transportable=always

Full transportable export/import is a new feature in Oracle Database 12c that greatly simplifies the
process of database migration. Combining the ease of use of Oracle Data Pump with the performance
of transportable tablespaces, full transportable export/import gives you the ability to upgrade or
migrate to Oracle Database 12c in a single operation if your source database is at least Oracle Database
11g Release 2 (11.2.0.3). Full transportable export/import is a valuable tool for migrating to pluggable
databases, allowing you to take advantage of the cost savings and economies of scale inherent in
moving to a multitenant architecture. You must explicitly specify the service name of the PDB in the
connect string for the impdp command.

STEP 1: Check the endianness of both platforms

To check the endianness of a platform, run the following query on each platform.

SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;

In this case, both Oracle Solaris x86 and Oracle Enterprise Linux have little endian format, so no
endian conversion is necessary.

STEP 2: Verify that the set of tablespaces to be transported is self-contained

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);

Note that you must include all user tablespaces in the database when performing this check for a full
transportable export/import.
After invoking this PL/SQL procedure, you can see all violations by selecting from the
TRANSPORT_SET_VIOLATIONS view.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

STEP 3: Create a directory object in the source database

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/datafiles';

STEP 4: Place the hr_1 and hr_2 tablespaces in read-only mode

The tablespaces to be transported must be in read-only mode for the duration of the export. In this
case we need to issue two commands on the source database.

SQL> ALTER TABLESPACE hr_1 READ ONLY;
SQL> ALTER TABLESPACE hr_2 READ ONLY;

The tablespaces can be returned to read-write status once the full transportable export has finished and
the tablespace data files have been copied to the destination system.
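
For reference, returning them to read-write once the copy is complete is the mirror of step 4:

SQL> ALTER TABLESPACE hr_1 READ WRITE;
SQL> ALTER TABLESPACE hr_2 READ WRITE;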

STEP 5: Invoke full transportable export on the source database

Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role.

$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log

Note that the VERSION=12 parameter is required because the source database is Oracle Database 11g
Release 2 (11.2.0.3). This is the only time that a version number greater than the current version is
allowed by the expdp command. If the source database is Oracle Database 12c, with
COMPATIBLE=12.0 or higher, then the VERSION parameter is not required.
After the export command completes, the export log file shows a list of all of the tablespace data files
that need to be moved to the target.

STEP 6: Transport the tablespace data files and the export dump file from source to target

$ cd /u01/app/oracle/oradata/hr_pdb/
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_101.dbf .
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_201.dbf .
$ cp /net/<source-server>/u01/app/datafiles/full_tts.dmp .

STEP 7: Create a directory object on the destination database

Because we copied the data pump dump file to the oradata directory for HR_PDB, we will create a
directory object to point to that same directory for this import. This directory object must be created
by a user connected to the PDB container.

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/oradata/hr_pdb';
SQL> GRANT read, write on directory dp_dir to system;

STEP 8: Invoke full transportable import on the destination database

Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role.
$ impdp system/manager@hr_pdb directory=dp_dir \
dumpfile=full_tts.dmp logfile=full_tts_imp.log \
metrics=y \
encryption_password=secret123word456 \
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
'/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'

Note that, while this example shows several parameters specified on the command line, in most cases
use of a data pump parameter file is recommended to avoid problems with white space and quotation
marks on the command line.
After this statement executes successfully, the user tablespaces are automatically placed in read/write
mode on the destination database. Check the import log file to ensure that no unexpected error
occurred, and perform your normal post-migration validation and testing.
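
As a sketch of that parameter-file recommendation, the same import could be driven by a file (the name full_tts_imp.par is illustrative):

directory=dp_dir
dumpfile=full_tts.dmp
logfile=full_tts_imp.log
metrics=y
encryption_password=secret123word456
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf','/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'

$ impdp system/manager@hr_pdb parfile=full_tts_imp.par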

STEP 9: (Optional) Restore user tablespaces to read-write mode on the source database

After the full transportable export has finished, you can return the user-defined tablespaces to
read-write mode on the source database if desired.

B.General Examples:-

————————————-Export and import————

Need to create directory TEST for /u01/dppump first

sqlplus / as sysdba

SQL>create directory TEST as '/u01/dppump';

SQL>grant read,write on directory TEST to public;

expdp \"SYS AS SYSDBA\" schemas=TEST directory=TEST dumpfile=test_062717.dmp

impdp \"/ as sysdba\" directory=TEST dumpfile=test_062717.dmp logfile=test_062717_imp.log schemas=TEST TRANSFORM=oid:n

——————Using par file example———————

PAR file exp.par for expdp

directory=expp
dumpfile=exp_test%U.dmp
logfile=test.log
cluster=n
tables=TEST.T1,TEST.T2
parallel=12
flashback_scn=82850177023

nohup expdp system/test parfile=exp.par &

PAR file imp.par for impdp

directory=expp
dumpfile=exp_test%U.dmp
logfile=test_imp.log
cluster=n
remap_tablespace=USERS:PROD
remap_tablespace=DEMO:PROD
remap_table=TEST.T1:TT1
remap_table=TEST.T2:TT2
remap_schema=TEST:TEST1
parallel=8
partition_options=merge
transform=table_compression_clause:"COLUMN STORE COMPRESS FOR QUERY"
table_exists_action=replace

nohup impdp system/test parfile=imp.par &
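
While either job runs in the background, you can monitor it from the database or re-attach to it with the client; the job name shown is the default Data Pump generates for a schema export, so yours may differ.

SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;

expdp system/test attach=SYS_EXPORT_SCHEMA_01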

 

C.All Major parameters used for expdp and impdp in 12c

EXPDP

Each entry below lists the parameter, an overview, any limitations, and sample EXPDP/IMPDP commands.

ACCESS_METHOD=[AUTOMATIC | DIRECT_PATH | EXTERNAL_TABLE] Instructs Export to use a particular method to unload data If the NETWORK_LINK parameter is also specified, then direct path mode is not
supported.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL
ACCESS_METHOD=EXTERNAL_TABLE
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE] Specifies which data to compress before writing to the dump file set.  To make full use of all these compression options, the COMPATIBLE initialization
parameter must be set to at least 11.0.0.
• The METADATA_ONLY option can be used even if the COMPATIBLE initialization
parameter is set to 10.2.
• Compression of data using ALL or DATA_ONLY is valid only in the Enterprise
Edition of Oracle Database 11g or later, and they require that the Oracle Advanced
Compression option be enabled.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=METADATA_ONLY
CLUSTER=[YES | NO] Determines whether Data Pump can use Oracle Real Application Clusters (Oracle
RAC) resources and start workers on other Oracle RAC instances.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_clus%U.dmp CLUSTER=NO PARALLEL=3  impdp ORCL DIRECTORY=dpump_dir1 SCHEMAS=ORCL CLUSTER=NO PARALLEL=3 NETWORK_LINK=dbs1
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}
Specifies the compression algorithm to be used when compressing dump file data. Restrictions
• To use this feature, database compatibility must be set to 12.0.0 or later.
• This feature requires that the Oracle Advanced Compression option be enabled.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp COMPRESSION=DATA_ONLY
COMPRESSION_ALGORITHM=LOW
CONTENT=[ALL | DATA_ONLY | METADATA_ONLY] Enables you to filter what Export unloads: data only, metadata only, or both. The CONTENT=METADATA_ONLY parameter cannot be used with the
TRANSPORT_TABLESPACES (transportable-tablespace mode) parameter or with
the QUERY parameter.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp CONTENT=METADATA_ONLY impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp CONTENT=METADATA_ONLY
DIRECTORY=directory_object  Users with access to the default
DATA_PUMP_DIR directory object do not need to use the DIRECTORY parameter at all.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=employees.dmp CONTENT=METADATA_ONLY impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
LOGFILE=dpump_dir2:expfull.log
DUMPFILE=[directory_object:]file_name [, …] For example, exp%Uaa
%U.dmp would resolve to exp01aa01.dmp, exp02aa02.dmp, and so forth
expdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp,
exp2%U.dmp PARALLEL=3
 impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=dpump_dir2:exp1.dmp, exp2%U.dmp
ENCRYPTION = [ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE] Default: The default value depends upon the combination of encryption-related
parameters that are used. To enable encryption, either the ENCRYPTION or
ENCRYPTION_PASSWORD parameter, or both, must be specified.
If only the ENCRYPTION_PASSWORD parameter is specified, then the ENCRYPTION
parameter defaults to ALL.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_enc.dmp JOB_NAME=enc1
ENCRYPTION=data_only ENCRYPTION_PASSWORD=foobar
ESTIMATE=[BLOCKS | STATISTICS]  BLOCKS – The estimate is calculated by multiplying the number of database blocks
used by the source objects, times the appropriate block sizes.
• STATISTICS – The estimate is calculated using statistics for each table. For this
method to be as accurate as possible, all tables should have been analyzed
recently. (Table analysis can be done with either the SQL ANALYZE statement or
the DBMS_STATS PL/SQL package.)
 If the Data Pump export job involves compressed tables, then the default size
estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS
is used. This is because the size estimate does not reflect that the data was stored
in a compressed form. To get a more accurate size estimate for compressed tables,
use ESTIMATE=STATISTICS.
• The estimate may also be inaccurate if either the QUERY or REMAP_DATA
parameter is used.
expdp ORCL TABLES=employees ESTIMATE=STATISTICS DIRECTORY=dpump_dir1
DUMPFILE=estimate_stat.dmp
impdp ORCL TABLES=job_history NETWORK_LINK=source_database_link
DIRECTORY=dpump_dir1 ESTIMATE=STATISTICS (Only valid for NETWORK_LINK import)
ESTIMATE_ONLY=[YES | NO] ESTIMATE_ONLY=YES, then Export estimates the space that would be consumed,
but quits without actually performing the export operation.
expdp ORCL ESTIMATE_ONLY=YES NOLOGFILE=YES SCHEMAS=ORCL
EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=INDEX:"LIKE 'EMP%' "
Enables you to filter the metadata that is exported by specifying objects and object
types to be excluded from the export operation.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_exclude.dmp EXCLUDE=VIEW,
PACKAGE, FUNCTION
expdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
impdp FULL=YES DUMPFILE=expfull.dmp EXCLUDE=SCHEMA:"='ORCL'"
FILESIZE=integer[B | KB | MB | GB | TB] Specifies the maximum size of each dump file. If the size is reached for any member of
the dump file set, then that file is closed and an attempt is made to create a new file, if
the file specification contains a substitution variable or if additional dump files have
been added to the job.
• The minimum size for a file is ten times the default Data Pump block size, which
is 4 kilobytes.
• The maximum size for a file is 16 terabytes.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_3m.dmp FILESIZE=3MB
FLASHBACK_SCN=scn_value Specifies the system change number (SCN) that Export will use to enable the
Flashback Query utility.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_scn.dmp FLASHBACK_SCN=384632  impdp ORCL DIRECTORY=dpump_dir1 FLASHBACK_SCN=123456
NETWORK_LINK=source_database_link(Only valid if
NETWORK_LINK parameter specified)
FLASHBACK_TIME=”TO_TIMESTAMP(time-value)” The SCN that most closely matches the specified time is found, and this SCN is used to
enable the Flashback utility. The export operation is performed with data that is
consistent up to this SCN.
DIRECTORY=dpump_dir1
DUMPFILE=hr_time.dmp
FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:16:00', 'DD-MM-YYYY HH24:MI:SS')"
For import, suppose a parameter file flashback_imp.par contains:
FLASHBACK_TIME="TO_TIMESTAMP('27-10-2012 13:40:00', 'DD-MM-YYYY HH24:MI:SS')"
You could then issue the following command: > impdp ORCL DIRECTORY=dpump_dir1 PARFILE=flashback_imp.par NETWORK_LINK=source_database_link
FULL=[YES | NO] FULL=YES indicates that all data and metadata are to be exported. To perform a full
export, you must have the DATAPUMP_EXP_FULL_DATABASE role.
• A full export does not, by default, export system schemas that contain Oracle-managed
data and metadata. Examples of system schemas that are not exported
by default include SYS, ORDSYS, and MDSYS.
• Grants on objects owned by the SYS schema are never exported.
expdp ORCL DIRECTORY=dpump_dir2 DUMPFILE=expfull.dmp FULL=YES NOLOGFILE=YES impdp ORCL DUMPFILE=dpump_dir1:expfull.dmp FULL=YES
LOGFILE=dpump_dir2:full_imp.log
INCLUDE Enables you to filter the metadata that is exported by specifying objects and object
types for the current export mode. The specified objects and all their dependent objects
are exported. Grants on these objects are also exported.
SCHEMAS=ORCL
DUMPFILE=expinclude.dmp
DIRECTORY=dpump_dir1
LOGFILE=expinclude.log
INCLUDE=TABLE:”IN (‘EMPLOYEES’, ‘DEPARTMENTS’)”
INCLUDE=PROCEDURE
INCLUDE=INDEX:”LIKE ‘EMP%'”
impdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
PARFILE=imp_include.par
LOGFILE To perform a Data Pump Export using Oracle Automatic Storage Management
(Oracle ASM), you must specify a LOGFILE parameter that includes a directory
object that does not include the Oracle ASM + notation.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp LOGFILE=hr_export.log impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir2 LOGFILE=imp.log DUMPFILE=dpump_dir1:expfull.dmp
LOGTIME=[NONE | STATUS | LOGFILE | ALL] The available options are defined as follows:
• NONE–No timestamps on status or log file messages (same as default)
• STATUS–Timestamps on status messages only
• LOGFILE–Timestamps on log file messages only
• ALL–Timestamps on both status and log file messages
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL  impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL LOGTIME=ALL
TABLE_EXISTS_ACTION=REPLACE
METRICS=[YES | NO] When METRICS=YES is used, the number of objects and the elapsed time are recorded
in the Data Pump log file.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL METRICS=YES  impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp METRICS=YES
NETWORK_LINK Enables an export from a (source) database identified by a valid database link. The data from the source database instance is written to a dump file set on the connected database instance. Network exports do not support LONG columns.
• When transporting a database over the network using full transportable export,
tables with LONG or LONG RAW columns that reside in administrative tablespaces
(such as SYSTEM or SYSAUX) are not supported.
expdp ORCL DIRECTORY=dpump_dir1 NETWORK_LINK=source_database_link
DUMPFILE=network_export.dmp LOGFILE=network_export.log
 impdp ORCL TABLES=employees DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link EXCLUDE=CONSTRAINT
PARALLEL=integer  This parameter is valid only in the Enterprise Edition of Oracle Database 11g or
later.
• To export a table or table partition in parallel (using PQ slaves), you must have
the DATAPUMP_EXP_FULL_DATABASE role.
 expdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_export.log
JOB_NAME=par4_job DUMPFILE=par_exp%u.dmp PARALLEL=4
impdp ORCL DIRECTORY=dpump_dir1 LOGFILE=parallel_import.log
JOB_NAME=imp_par3 DUMPFILE=par_exp%U.dmp PARALLEL=3
QUERY = [schema.][table_name:] query_clause QUERY=employees:”WHERE department_id > 10 AND salary > 10000″
NOLOGFILE=YES
DIRECTORY=dpump_dir1
DUMPFILE=exp1.dmp
 impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
PARFILE=query_imp.par NOLOGFILE=YES
Suppose you have a parameter file, query_imp.par, that contains the following:
QUERY=departments:"WHERE department_id < 120"
REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function A common use for this option is to
mask data when moving from a production system to a test system. For example, a
column of sensitive customer data such as credit card numbers could be replaced with
numbers generated by a REMAP_DATA function. This would allow the data to retain its
essential formatting and processing characteristics without exposing private data to
unauthorized personnel.
• Remapping LOB column data of a remote table is not supported.
• Columns of the following types are not supported by REMAP_DATA: User Defined
Types, attributes of User Defined Types, LONGs, REFs, VARRAYs, Nested Tables,
BFILEs, and XMLtype.
 expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=remap1.dmp TABLES=employees
REMAP_DATA=ORCL.employees.employee_id:ORCL.remap.minus10
REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=ORCL.employees REMAP_DATA=ORCL.employees.first_name:ORCL.remap.plusx
SAMPLE=[[schema_name.]table_name:]sample_percent Allows you to specify a percentage of the data rows to be sampled and unloaded from
the source database.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=sample.dmp SAMPLE=70
SCHEMAS=schema_name [, …] Specifies that you want to perform a schema-mode export. This is the default mode for
Export
• If you do not have the DATAPUMP_EXP_FULL_DATABASE role, then you can
specify only your own schema.
• The SYS schema cannot be used as a source schema for export jobs.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp SCHEMAS=ORCL,sh,oe  impdp ORCL SCHEMAS=ORCL DIRECTORY=dpump_dir1 LOGFILE=schemas.log
DUMPFILE=expdat.dmp
SERVICE_NAME=name  If you start a Data Pump job on instance D and specify CLUSTER=YES and
SERVICE_NAME=my_service, then workers can be started on instances A, B, C,
and D. Even though instance D is not in my_service it is included because it is
the instance on which the job was started.
• If you start a Data Pump job on instance A and specify CLUSTER=NO, then any
SERVICE_NAME parameter you specify is ignored and all processes will start on
instance A.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=hr_svname2.dmp SERVICE_NAME=sales  impdp system DIRECTORY=dpump_dir1 SCHEMAS=ORCL
SERVICE_NAME=sales NETWORK_LINK=dbs1
SOURCE_EDITION Specifies the database edition from which objects will be exported. expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp SOURCE_EDITION=exp_edition
EXCLUDE=USER
TABLES=[schema_name.]table_name[:partition_name] [, …] Specifies that you want to perform a table-mode export. expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp
TABLES=employees,jobs,departments
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tables_part.dmp
TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
> impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp TABLES=employees,jobs
The following example shows the use of the TABLES parameter to import partitions:
> impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp
TABLES=sh.sales:sales_Q1_2012,sh.sales:sales_Q2_2012
TABLESPACES=tablespace_name [, …] In tablespace mode, only the tables contained in a specified set of tablespaces are
unloaded. If a table is unloaded, then its dependent objects are also unloaded. Both
object metadata and data are unloaded
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLESPACES=tbs_1,tbs_2,tbs_3,tbs_4
TRANSPORT_TABLESPACES=tablespace_name [, …] The TRANSPORT_TABLESPACES parameter cannot be used in conjunction with
the QUERY parameter.
expdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
TRANSPORTABLE = [ALWAYS | NEVER] expdp sh DIRECTORY=dpump_dir1 DUMPFILE=tto1.dmp
TABLES=sh.sales2 TRANSPORTABLE=ALWAYS
impdp system PARTITION_OPTIONS=DEPARTITION
TRANSPORT_DATAFILES=oracle/dbs/tbs2 DIRECTORY=dpump_dir1
DUMPFILE=tto1.dmp REMAP_SCHEMA=sh:dp
VERSION=[COMPATIBLE | LATEST | version_string] Dump files created on Oracle Database 11g releases with the Data Pump
parameter VERSION=12 can only be imported on Oracle Database 12c Release 1
(12.1) and later.
 expdp ORCL TABLES=ORCL.employees VERSION=LATEST DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp NOLOGFILE=YES
 impdp ORCL FULL=Y DIRECTORY=dpump_dir1
NETWORK_LINK=source_database_link VERSION=12
VIEWS_AS_TABLES=[schema_name.]view_name[:table_name], … The VIEWS_AS_TABLES
parameter can be used by itself or along with the TABLES parameter. If either is used,
Data Pump performs a table-mode export.
• The VIEWS_AS_TABLES parameter cannot be used with the
TRANSPORTABLE=ALWAYS parameter.
• Tables created using the VIEWS_AS_TABLES parameter do not contain any
hidden columns that were part of the specified view.
• The VIEWS_AS_TABLES parameter does not support tables that have columns
with a data type of LONG.
expdp scott/tiger views_as_tables=view1 directory=data_pump_dir dumpfile=scott1.dmp  impdp ORCL VIEWS_AS_TABLES=view1:view1_tab NETWORK_LINK=dblink1

IMPDP

DATA_OPTIONS = [DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS |
REJECT_ROWS_WITH_REPL_CHAR]
The SKIP_CONSTRAINT_ERRORS option specifies that you want the import
operation to proceed even if non-deferred constraint violations are encountered. It
logs any rows that cause non-deferred constraint violations, but does not stop the
load for the data object experiencing the violation.
 impdp ORCL TABLES=employees CONTENT=DATA_ONLY
DUMPFILE=dpump_dir1:table.dmp DATA_OPTIONS=skip_constraint_errors
PARTITION_OPTIONS=[NONE | DEPARTITION | MERGE] A value of departition promotes each partition or subpartition to a new individual
table. The default name of the new table will be the concatenation of the table and
partition name or the table and subpartition name, as appropriate.
• If the export operation that created the dump file was performed with the
transportable method and if a partition or subpartition was specified, then the
import operation must use the departition option.
• If the export operation that created the dump file was performed with the
transportable method, then the import operation cannot use
PARTITION_OPTIONS=MERGE.
impdp system TABLES=sh.sales PARTITION_OPTIONS=MERGE
DIRECTORY=dpump_dir1 DUMPFILE=sales.dmp REMAP_SCHEMA=sh:scott
REMAP_DATAFILE=source_datafile:target_datafile Changes the name of the source data file to the target data file name in all SQL
statements where the source data file is referenced: CREATE TABLESPACE, CREATE
LIBRARY, and CREATE DIRECTORY.
DIRECTORY=dpump_dir1
FULL=YES
DUMPFILE=db_full.dmp
REMAP_DATAFILE="'DB1$:[HRDATA.PAYROLL]tbs6.dbf':'/db1/hrdata/payroll/tbs6.dbf'"
You can then issue the following command:
> impdp ORCL PARFILE=payroll.par
REMAP_SCHEMA=source_schema:target_schema Loads all objects from the source schema into a target schema. > expdp system SCHEMAS=ORCL DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp
> impdp system DIRECTORY=dpump_dir1 DUMPFILE=ORCL.dmp REMAP_SCHEMA=ORCL:scott
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename Allows you to rename tables during an import operation. Only objects created by the Import will be remapped. In particular, preexisting
tables will not be remapped.
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp
TABLES=ORCL.employees REMAP_TABLE=ORCL.employees:emps
REMAP_TABLESPACE=source_tablespace:target_tablespace Remaps all objects selected for import with persistent data in the source tablespace to
be created in the target tablespace.
 impdp ORCL REMAP_TABLESPACE=tbs_1:tbs_6 DIRECTORY=dpump_dir1
DUMPFILE=employees.dmp
SKIP_UNUSABLE_INDEXES=[YES | NO] Specifies whether Import skips loading tables that have indexes that were set to the
Index Unusable state (by either the system or the user).
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp LOGFILE=skip.log
SKIP_UNUSABLE_INDEXES=YES
SQLFILE=[directory_object:]file_name If SQLFILE is specified, then the CONTENT parameter is ignored if it is set to either
ALL or DATA_ONLY.
 impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
SQLFILE=dpump_dir2:expfull.sql
TABLE_EXISTS_ACTION=[SKIP | APPEND | TRUNCATE | REPLACE] Tells Import what to do if the table it is trying to create already exists. impdp ORCL TABLES=employees DIRECTORY=dpump_dir1 DUMPFILE=expfull.dmp
TABLE_EXISTS_ACTION=REPLACE
TARGET_EDITION=name Specifies the database edition into which objects should be imported • This parameter is only useful if there are two or more versions of the same
versionable objects in the database.
• The job version must be 11.2 or later. See “VERSION (page 3-65)”.
impdp ORCL DIRECTORY=dpump_dir1 DUMPFILE=exp_dat.dmp TARGET_EDITION=exp_edition
TRANSFORM = transform_name:value[:object_type]

DISABLE_ARCHIVE_LOGGING:[Y | N]
INMEMORY:[Y | N]
INMEMORY_CLAUSE:”string with a valid in-memory parameter”
LOB_STORAGE:[SECUREFILE | BASICFILE | DEFAULT | NO_CHANGE]
OID:[Y | N]
PCTSPACE:some_number_greater_than_zero
SEGMENT_ATTRIBUTES:[Y | N]
SEGMENT_CREATION:[Y | N]
STORAGE:[Y | N]
TABLE_COMPRESSION_CLAUSE:[NONE | compression_clause]

impdp ORCL TABLES=ORCL.employees DIRECTORY=dpump_dir1 DUMPFILE=hr_emp.dmp
TRANSFORM=STORAGE:N:table

Upgrade of Oracle 11g RAC database to Oracle 12c RAC DBUA

This document provides guidelines to upgrade Oracle 11g RAC clusterware and database to Oracle 12c RAC.

Author:-Saibal Ghosh

https://www.linkedin.com/in/saibal-ghosh-ccsk-prince2-%C2%AE-469b0a7/

1 Tasks Before Upgrade

1.1 Backup the Database:

Before we start the upgrade, it is a best practice to back up the database, the Oracle Cluster Registry (OCR), the Oracle database home and the Grid home.
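
A minimal sketch of such a backup, assuming RMAN with enough space in its configured backup destination and root access for the OCR copy (the tar target paths are illustrative):

$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

# ocrconfig -manualbackup

# tar -czf /backup/db_home.tar.gz $ORACLE_HOME
# tar -czf /backup/grid_home.tar.gz $GRID_HOME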

1.2  LINUX X86-64 DATABASE FEATURES PACKAGE REQUIREMENTS:-

The following packages, or later versions, need to be installed on the system for the upgrade to go through successfully:

 

binutils-2.23.52.0.1-12.el7.x86_64

compat-libcap1-1.10-3.el7.x86_64

gcc-4.8.2-3.el7.x86_64

gcc-c++-4.8.2-3.el7.x86_64

glibc-2.17-36.el7.i686

glibc-2.17-36.el7.x86_64

glibc-devel-2.17-36.el7.i686

glibc-devel-2.17-36.el7.x86_64

ksh

libaio-0.3.109-9.el7.i686

libaio-0.3.109-9.el7.x86_64

libaio-devel-0.3.109-9.el7.i686

libaio-devel-0.3.109-9.el7.x86_64

libgcc-4.8.2-3.el7.i686

libgcc-4.8.2-3.el7.x86_64

libstdc++-4.8.2-3.el7.i686

libstdc++-4.8.2-3.el7.x86_64

libstdc++-devel-4.8.2-3.el7.i686

libstdc++-devel-4.8.2-3.el7.x86_64

libXi-1.7.2-1.el7.i686

libXi-1.7.2-1.el7.x86_64

libXtst-1.2.2-1.el7.i686

libXtst-1.2.2-1.el7.x86_64

make-3.82-19.el7.x86_64

sysstat-10.1.5-1.el7.x86_64

 

Pluggable Authentication Modules for Linux (Linux PAM)

We need to install the latest Linux PAM (Pluggable Authentication Modules for Linux) library for our Linux distribution. PAM provides greater flexibility for system administrators to choose how applications authenticate users. On Linux, external scheduler jobs require PAM.

 

Oracle JDBC/OCI Drivers
We can use the following optional JDK versions with the Oracle
JDBC/OCI drivers; however, they are not required for the installation:
JDK 6 Update 10 (Java SE Development Kit 1.6.0_21)
JDK 1.5.0-24 (JDK 5.0) with the JNDI extension

1.3  NETWORK TIME PROTOCOL SETTING:-

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.
We have two options for time synchronization: an operating system configured network time protocol (NTP), or Oracle Cluster Time Synchronization Service. Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If we use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd ) starts up in observer mode. If we do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server.
If we have NTP daemons on our server but we cannot configure them to synchronize time with a time server, and we want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then we need to deactivate and deinstall the NTP.
To deactivate the NTP service, we must stop the existing ntpd service, disable it from the initialization sequences and remove the ntp.conf file. To complete these steps on Oracle Linux, we run the following commands as the root user:

# /sbin/service ntpd stop

# chkconfig ntpd off

# rm /etc/ntp.conf

 

or move /etc/ntp.conf to /etc/ntp.conf.org.

Also we need to remove the following file:

/var/run/ntpd.pid

This file maintains the pid for the NTP daemon. When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
To confirm that ctssd is active after upgrade, we need to enter the following command as the Grid installation owner:

$ crsctl check ctss

If we are using NTP, and we prefer to continue using it instead of Cluster Time Synchronization Service, then we need to modify the NTP configuration to set the -x flag, which prevents time from being adjusted backward. Then we restart the network time protocol daemon after we complete this task.

 

To do this, on Oracle Linux, Red Hat Linux, and Asianux systems, we edit the /etc/sysconfig/ntpd
file to add the -x flag, as in the following example:

 

# Drop root to id 'ntp:ntp' by default.

OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

# Set to ‘yes’ to sync hw clock after successful ntpdate

SYNC_HWCLOCK=no

# Additional options for ntpdate

NTPDATE_OPTIONS=""

Then, restart the NTP service.

# /sbin/service ntpd restart

1.4 CHECKING IF THE CVUQDISK PACKAGE FOR LINUX IS ALREADY INSTALLED:-

We use the following command to find if we have an existing version of the cvuqdisk package:

# rpm -qi cvuqdisk

We need to ensure that the above package is installed; otherwise we need to install it. We need to ensure that the Oracle software owner user (oracle or grid) has the Oracle Inventory group (oinstall) as its primary group and is a member of the appropriate OSDBA group.

1.5  CHECKING RESOURCE LIMITS:

1. Log in as an installation owner.
2. We need to check the soft and hard limits for the file descriptor setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Sn

1024

$ ulimit -Hn

65536

3. We need to check the soft and hard limits for the number of processes available to a user.
 We need to ensure that the result is in the recommended range. For example:

$ ulimit -Su

2047

$ ulimit -Hu

16384

4. We need to check the soft limit for the stack setting. We need to ensure that the result is in the recommended range. For example:

$ ulimit -Ss

10240

$ ulimit -Hs

32768

We need to repeat this procedure for each Oracle software installation owner.
As an example, the settings on the Production Server are as follows:

 

 

2.Upgrade the Grid Infrastructure:

Step1: Unset the following:

1.    ORACLE_BASE

2.    ORACLE_HOME

3.    GI_HOME

4.    TNS_ADMIN

5.    ORA_NLS10

Step 2: Check whether there is enough space on the mount point and /tmp. There must also be at least 4 GB of free space in the OCR and voting disk diskgroup, because the MGMTDB is created in that diskgroup.
Step 3: Back up the Cluster and Oracle Homes and the OCR
Step 4: Check crs active and software versions are the same

·         crsctl query crs activeversion

·         crsctl query crs softwareversion

Step 5: Validate the Node readiness for the upgrade. Run a command similar to the following:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /orasw/app/11.2.0/grid -dest_crshome /orasw/app/12.1.0/grid -dest_version 12.1.0.2.0

Step 6: Start the Upgrade. The screenshots are self-explanatory.

Screen 1) We select the Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management option.

Screen 2) We choose the default language: English.

Screen 3) We select the nodes to form part of the RAC Cluster in the upgrade.

Screen 4)  EM Cloud Control is not selected at this point.

 

Screen 5)  The Operating System groups are selected from the drop down list.

Screen 6) Specifying the Oracle base and software location. We do not use the default software location, and thus we see a warning message.

Screen 7) We prefer to run the configuration scripts manually, so we do not check the 'Automatically run configuration scripts' checkbox.

Screen 8) On the Prerequisite Checks page we find that the swap space check fails. Therefore we manually increase the swap space and move on to the next step.

Screen 9) This is the Summary page, where we see a consolidated view of the information being taken into the upgrade process.

Screen 10) The pop-up comes up for running the rootupgrade.sh script.

Screen 11) The Product installation is continuing.

 

Screen 12)  We get the message that the upgrade of the Grid Infrastructure was successful.

 

3. Installing the database software

We now need to install the database software. For this we need to run the runInstaller.

Screen 1) Beginning the process of installation of the database software. We choose not to provide an email id.

 

Screen 2) Installing only the database software.

Screen 3)  Oracle Real Application Clusters database installation.

Screen 4)  We select the nodes to form part of the cluster.

Screen 5) The default language is English.

Screen 6) We choose the Enterprise Edition.

Screen 7) The Oracle base and the software location.

Screen 8) The Operating System Groups.

Screen 9) We get the swap size and /dev/shm errors, which are ignorable in this case.

Screen 10) The Summary page.

Screen 11) The final screen: the installation of the Oracle database software is successful.

 

Now, the final step is to run the Database Upgrade Assistant to actually upgrade the database.

4.  Database Upgrade

Screen 2) We choose the database to upgrade.

Screen 3) The prerequisite checks.

Screen 3) Prerequisite checks continuing. On the next screen we take steps to recompile invalid objects.

Screen 4) Upgrade Options.

Screen 5) Management Options.

Screen 6) We choose to have our own backup policy.

 

Screen 7) The Summary page.

Screen 8) The Progress page. The pop-up alert is because of uncompiled PL/SQL objects; since we have already planned to recompile invalid objects, this error is ignorable and we continue.

 

Screen 8) The Progress page continues.

Screen 9) The final screen shows that the upgrade completed successfully.

The database and Grid Infrastructure were successfully upgraded to Oracle 12.1.0.2.

**********************************************************************

 

Active RMAN duplicate clone 12c using section size and compress backupset

Overview of the new PULL method

The original “push” process is based on image copies. With Oracle Database 12c, a “pull” (or restore) process based on backup sets is available. A connection is first established with the source database. The auxiliary instance then retrieves the required database files from the source database as backup sets, and a restore operation is performed from the auxiliary instance. Therefore, fewer resources are used on the source database.

TNS connectivity is required to both the target and auxiliary instances. Based on the DUPLICATE clauses, RMAN dynamically determines which process to use (push or pull). This ensures that existing customized scripts continue to function.

  • When you specify USING BACKUPSET, RMAN uses the pull method.

  • When you specify SET ENCRYPTION before the DUPLICATE command, RMAN automatically uses the pull method and creates backup sets. The backups sent to the destination are encrypted.

  • The SECTION SIZE clause divides data files into subsections that are restored in parallel across multiple channels on the auxiliary database. For effective parallelization, allocate more AUXILIARY channels.

  • With the USING COMPRESSED BACKUPSET clause, the files are transferred as compressed backup sets. RMAN uses unused block compression while creating backups, thus reducing the size of the backups transported over the network.

NOOPEN

You might duplicate a database with RMAN for various reasons. In earlier versions, a recovered duplicate database was automatically opened. By default, this behavior continues in Oracle Database 12c.
What is new is the option to finish the duplication with the database in a mounted but not opened state. This is useful when opening the database would produce errors, and whenever you want to modify initialization settings that are otherwise quite difficult to change.
For example, you may want to move the database to ASM, or you may be performing an upgrade, where the database must not be opened with RESETLOGS prior to running the upgrade scripts.
The NOOPEN option allows the duplication to create a new database as part of an upgrade procedure and leaves it in a state ready to be opened in upgrade mode for subsequent execution of the upgrade scripts.
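A minimal sketch of the clause (the database name cdbnew is illustrative):

RMAN> duplicate target database to cdbnew from active database noopen;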

Multisection backup is now also available for image copies.

 

Active duplication step

Create an init parameter file from the source and change the relevant parameters such as control_files, db_name, etc. A sample init.ora is provided below.

[oracle@rac1 dbs]$ cat initrcat.ora
*._catalog_foreign_restore=FALSE
*.audit_file_dest='/u01/app/product/admin/rcat/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='+DATA/controlrcat.clt'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='rcat'
*.db_recovery_file_dest='+DATA'
*.db_recovery_file_dest_size=4785m
*.diagnostic_dest='/u01/app/product'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=rcatXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.optimizer_adaptive_features=FALSE
*.parallel_max_servers=8
*.parallel_min_servers=0
*.pga_aggregate_target=570m
*.processes=300
*.remote_login_passwordfile='exclusive'
*.session_cached_cursors=1000
*.sga_target=1710m
*.shared_pool_size=629145600
*.undo_tablespace='UNDOTBS1'

 

Create the required audit directory on the target:

mkdir -p /u01/app/product/admin/rcat/adump

Add a static entry to listener.ora under $GRID_HOME/network/admin:

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = rcat)
(ORACLE_HOME = /u01/app/product/ora12c/12.1.0/dbhome_1)
(GLOBAL_DBNAME = rcat)
)
)

Add corresponding entry in tnsnames.ora

[oracle@rac1 admin]$ cat tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/product/ora12c/12.1.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

############Source###################

ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)

 

#######Target#################
RCAT =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.56.101)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = rcat)
(UR=A)
)
)

 

The (UR=A) clause for TNS connect strings was created in response to an enhancement request.
It can be inserted into the (CONNECT_DATA=) section of a TNS connect string and allows a privileged or administrative user to connect via the listener even when the service handler is blocking connections for non-privileged users. This feature has been available since Oracle 10g.

If (UR=A) is not added, you will get the following error when connecting to the target in NOMOUNT state in the next steps:

ORA-12528: TNS:listener: all appropriate instances are blocking new connections

Create password file

cd $ORACLE_HOME/dbs

orapwd file=orapwrcat password=oracle

Now start the target database in NOMOUNT:

export ORACLE_SID=rcat

[oracle@rac1 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Dec 9 23:20:41 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> shutdown abort;
  ORACLE instance shut down.
  SQL> startup nomount pfile='initrcat.ora';
  ORACLE instance started.

Total System Global Area 1795162112 bytes
Fixed Size 2925456 bytes
Variable Size 805309552 bytes
Database Buffers 973078528 bytes
Redo Buffers 13848576 bytes

 

Verify that the password file is working as expected on the target:

SQL> select * from v$pwfile_users;

USERNAME SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM CON_ID
—————————— —– —– —– —– —– —– ———-
SYS TRUE TRUE FALSE FALSE FALSE FALSE 1

Register in listener

SQL> alter system register;

Now connect to the target and auxiliary instances and start the duplicate using the new 12c options:

[oracle@rac1 dbs]$ rman target sys/oracle@orcl auxiliary sys/oracle@rcat

Recovery Manager: Release 12.1.0.2.0 – Production on Sat Dec 9 23:24:40 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1489144156)
connected to auxiliary database: RCAT (not mounted)

RMAN> CONFIGURE DEVICE TYPE disk PARALLELISM 4;
using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> duplicate target database to rcat from active database section size 500M using compressed backupset;

Starting Duplicate Db at 09-DEC-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=34 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=35 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=36 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=37 device type=DISK
current log archived

contents of Memory Script:
{
sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
shutdown clone immediate;
startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

contents of Memory Script:
{
sql clone "alter system set db_name =
''ORCL'' comment=
''Modified by RMAN duplicate'' scope=spfile";
sql clone "alter system set db_unique_name =
''RCAT'' comment=
''Modified by RMAN duplicate'' scope=spfile";
shutdown clone immediate;
startup clone force nomount
restore clone from service 'orcl' using compressed backupset
primary controlfile;
alter clone database mount;
}
executing Memory Script

sql statement: alter system set db_name = ''ORCL'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set db_unique_name = ''RCAT'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

Starting restore at 09-DEC-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=33 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=35 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=36 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=37 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=+DATA/controlrcat.clt
Finished restore at 09-DEC-17

database mounted

contents of Memory Script:
{
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 5 to new;
set newname for clone datafile 6 to new;
set newname for clone datafile 7 to new;
set newname for clone datafile 8 to new;
set newname for clone datafile 15 to new;
set newname for clone datafile 16 to new;
set newname for clone datafile 17 to new;
restore
from service 'orcl' section size
500 m using compressed backupset
clone database
;
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +DATA/RCAT/DATAFILE/system.313.962321229
channel ORA_AUX_DISK_1: restoring section 1 of 2
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00003 to +DATA/RCAT/DATAFILE/sysaux.325.962321231
channel ORA_AUX_DISK_2: restoring section 1 of 2
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00004 to +DATA/RCAT/DATAFILE/undotbs1.326.962321233
channel ORA_AUX_DISK_3: restoring section 1 of 1
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00005 to +DATA/RCAT/DATAFILE/system.327.962321241
channel ORA_AUX_DISK_4: restoring section 1 of 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:27
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00001 to +DATA/RCAT/DATAFILE/system.313.962321229
channel ORA_AUX_DISK_3: restoring section 2 of 2
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:01:25
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00003 to +DATA/RCAT/DATAFILE/sysaux.325.962321231
channel ORA_AUX_DISK_4: restoring section 2 of 2
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:02:03
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00006 to +DATA/RCAT/DATAFILE/users.328.962321353
channel ORA_AUX_DISK_2: restoring section 1 of 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:05
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00007 to +DATA/RCAT/DATAFILE/sysaux.329.962321359
channel ORA_AUX_DISK_2: restoring section 1 of 2
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:01:46
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00007 to +DATA/RCAT/DATAFILE/sysaux.329.962321359
channel ORA_AUX_DISK_3: restoring section 2 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:25
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to +DATA/RCAT/DATAFILE/undotbs2.330.962321375
channel ORA_AUX_DISK_1: restoring section 1 of 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:13
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: using compressed network backup set from service orcl
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00015 to +DATA/RCAT/DATAFILE/system.331.962321379
channel ORA_AUX_DISK_3: restoring section 1 of 1
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:55
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: using compressed network backup set from service orcl
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00016 to +DATA/RCAT/DATAFILE/sysaux.332.962321381
channel ORA_AUX_DISK_4: restoring section 1 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00016 to +DATA/RCAT/DATAFILE/sysaux.332.962321381
channel ORA_AUX_DISK_1: restoring section 2 of 2
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:17
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00017 to +DATA/RCAT/DATAFILE/my_tbs.333.962321401
channel ORA_AUX_DISK_1: restoring section 1 of 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:09
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:01:08
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:01:50
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:01:37
Finished restore at 09-DEC-17

sql statement: alter system archive log current
current log archived

contents of Memory Script:
{
restore clone force from service 'orcl' using compressed backupset
archivelog from scn 3152882;
switch clone datafile all;
}
executing Memory Script

Starting restore at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: using compressed network backup set from service orcl
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=47
channel ORA_AUX_DISK_2: starting archived log restore to default destination
channel ORA_AUX_DISK_2: using compressed network backup set from service orcl
channel ORA_AUX_DISK_2: restoring archived log
archived log thread=1 sequence=48
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:02
Finished restore at 09-DEC-17

datafile 1 switched to datafile copy
input datafile copy RECID=27 STAMP=962321483 file name=+DATA/RCAT/DATAFILE/system.313.962321229
datafile 3 switched to datafile copy
input datafile copy RECID=28 STAMP=962321483 file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231
datafile 4 switched to datafile copy
input datafile copy RECID=29 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233
datafile 5 switched to datafile copy
input datafile copy RECID=30 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/system.327.962321241
datafile 6 switched to datafile copy
input datafile copy RECID=31 STAMP=962321484 file name=+DATA/RCAT/DATAFILE/users.328.962321353
datafile 7 switched to datafile copy
input datafile copy RECID=32 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359
datafile 8 switched to datafile copy
input datafile copy RECID=33 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375
datafile 15 switched to datafile copy
input datafile copy RECID=34 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/system.331.962321379
datafile 16 switched to datafile copy
input datafile copy RECID=35 STAMP=962321485 file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381
datafile 17 switched to datafile copy
input datafile copy RECID=36 STAMP=962321486 file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401

contents of Memory Script:
{
set until scn 3153106;
recover
clone database
delete archivelog
;
}
executing Memory Script

executing command: SET until clause

Starting recover at 09-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4

starting media recovery

archived log for thread 1 with sequence 47 is already on disk as file +DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_47.336.962321481
archived log for thread 1 with sequence 48 is already on disk as file +DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_48.337.962321483
archived log file name=+DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_47.336.962321481 thread=1 sequence=47
archived log file name=+DATA/RCAT/ARCHIVELOG/2017_12_09/thread_1_seq_48.337.962321483 thread=1 sequence=48
media recovery complete, elapsed time: 00:00:01
Finished recover at 09-DEC-17
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes

contents of Memory Script:
{
sql clone "alter system set db_name =
''RCAT'' comment=
''Reset to original value by RMAN'' scope=spfile";
sql clone "alter system reset db_unique_name scope=spfile";
}
executing Memory Script

sql statement: alter system set db_name = ''RCAT'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset db_unique_name scope=spfile
Oracle instance started

Total System Global Area 1795162112 bytes

Fixed Size 2925456 bytes
Variable Size 822086768 bytes
Database Buffers 956301312 bytes
Redo Buffers 13848576 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "RCAT" RESETLOGS ARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 SIZE 50 M ,
GROUP 2 SIZE 50 M
DATAFILE
'+DATA/RCAT/DATAFILE/system.313.962321229',
'+DATA/RCAT/DATAFILE/system.327.962321241',
'+DATA/RCAT/DATAFILE/system.331.962321379'
CHARACTER SET AL32UTF8

sql statement: ALTER DATABASE ADD LOGFILE

INSTANCE 'i2'
GROUP 3 SIZE 50 M ,
GROUP 4 SIZE 50 M

contents of Memory Script:
{
set newname for clone tempfile 1 to new;
set newname for clone tempfile 2 to new;
set newname for clone tempfile 3 to new;
switch clone tempfile all;
catalog clone datafilecopy "+DATA/RCAT/DATAFILE/sysaux.325.962321231",
"+DATA/RCAT/DATAFILE/undotbs1.326.962321233",
"+DATA/RCAT/DATAFILE/users.328.962321353",
"+DATA/RCAT/DATAFILE/sysaux.329.962321359",
"+DATA/RCAT/DATAFILE/undotbs2.330.962321375",
"+DATA/RCAT/DATAFILE/sysaux.332.962321381",
"+DATA/RCAT/DATAFILE/my_tbs.333.962321401";
switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to +DATA in control file
renamed tempfile 2 to +DATA in control file
renamed tempfile 3 to +DATA in control file

cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231 RECID=1 STAMP=962321529
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233 RECID=2 STAMP=962321529
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/users.328.962321353 RECID=3 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359 RECID=4 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375 RECID=5 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381 RECID=6 STAMP=962321530
cataloged datafile copy
datafile copy file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401 RECID=7 STAMP=962321530

datafile 3 switched to datafile copy
input datafile copy RECID=1 STAMP=962321529 file name=+DATA/RCAT/DATAFILE/sysaux.325.962321231
datafile 4 switched to datafile copy
input datafile copy RECID=2 STAMP=962321529 file name=+DATA/RCAT/DATAFILE/undotbs1.326.962321233
datafile 6 switched to datafile copy
input datafile copy RECID=3 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/users.328.962321353
datafile 7 switched to datafile copy
input datafile copy RECID=4 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/sysaux.329.962321359
datafile 8 switched to datafile copy
input datafile copy RECID=5 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/undotbs2.330.962321375
datafile 16 switched to datafile copy
input datafile copy RECID=6 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/sysaux.332.962321381
datafile 17 switched to datafile copy
input datafile copy RECID=7 STAMP=962321530 file name=+DATA/RCAT/DATAFILE/my_tbs.333.962321401

contents of Memory Script:
{
Alter clone database open resetlogs;
}
executing Memory Script

database opened

contents of Memory Script:
{
sql clone "alter pluggable database all open";
}
executing Memory Script

sql statement: alter pluggable database all open
Cannot remove created server parameter file
Finished Duplicate Db at 09-DEC-17

Create the password file in ASM now.

Create the spfile.
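A minimal sketch of both steps, reusing the file names referenced by srvctl below (the dbuniquename syntax of orapwd assumes 12c):

$ orapwd file='+DATA/pwdrcat.ora' dbuniquename=rcat password=oracle
SQL> create spfile='+DATA/spfilercat.ora' from pfile='initrcat.ora';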

Now add the database in srvctl:

srvctl add database -db rcat -oraclehome $ORACLE_HOME
srvctl modify database -db rcat -pwfile +DATA/pwdrcat.ora
srvctl modify database -db rcat -spfile +DATA/spfilercat.ora

Issue faced:-

RMAN-11003: failure during parse/execution of SQL statement: alter system set db_unique_name = 'RCAT' comment= 'Modified by RMAN duplicate' scope=spfile
ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists

To avoid this, remove the database from srvctl if it was already added before the duplicate:

[oracle@rac1 dbs]$ srvctl remove database -d rcat

 

CDB and PDB concepts in Oracle 12c

Oracle introduced PDBs in 12c to address the following drawbacks of 11g.

 

In 11g, we sometimes needed to create numerous databases on a single host, which resulted in the side effects below and eventually slowed down the whole system.

1.Too many background processes.

2.High shared memory.

3.Many copies of Oracle metadata.

Now we will delve deeper into what the Oracle 12c PDB concept brings us:

1.Multiple databases in centralized managed platform.

a.Less instance overhead.

b.Less storage.

2.Fast and easy provisioning.

3.Time savings for patch and upgrade.

4.Separation of duties.

5.Ensures full backward compatibility with non-CDBs.

6.Fully operates on RAC.

Naming the containers

Multi-tenant Architecture

1.Redo logs are common. The redo contains annotated information identifying the PDB it pertains to.

2.Control files are common.

3.UNDO tablespace is common.

4.Temporary tablespaces are common, but each PDB can contain its own temporary tablespace.

5.Each container has its own dictionary in its SYSTEM and SYSAUX tablespaces.

6.Each PDB has its own tablespaces and datafiles.
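The containers and their open modes can be listed from the root, for example:

SQL> select con_id, name, open_mode from v$pdbs;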

Concepts of Containers

 

A CDB has new characteristics compared to non-CDBs:

  • Two containers:

– The root (CDB$ROOT)

– The seed PDB (PDB$SEED)

  • Several services: one per container

– Name of root service = name of the CDB (cdb1)

  • Common users in root and seed: SYS, SYSTEM, …

Common privileges granted to common users

  • Pre-defined common roles
  • Tablespaces and data files associated to each container:

– root:

— SYSTEM: system-supplied metadata and no user data

— SYSAUX

– seed: SYSTEM, SYSAUX

 

The root container can be connected to via OS authentication or via the root service name.

A PDB can be connected to only via a service name through TNS. Each PDB creates its own default service.
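For example, connecting to a PDB through its default service with EZConnect (the host and PDB names reuse examples from this document):

$ sqlplus system/oracle@//rac-scan:1521/pdb_orcl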

Automatic Diagnostic repository

 

Manual creation of a CDB. I do not recommend it; I recommend creating the CDB using DBCA.


The following parameters are the bare minimum.

a.Create the pfile:

cat initXCORMAN.ora

control_files='+DATA/XCORMAN/control01.ctl'

DB_NAME=XCORMAN

ENABLE_PLUGGABLE_DATABASE=TRUE

DB_CREATE_FILE_DEST='+DATA'

 

b.Create the database

 

export ORACLE_SID=XCORMAN

sqlplus / as sysdba

SQL> create database XCORMAN;

c.Close and open seed PDB

Set the session with a new parameter:

alter session set "_oracle_script"=true;

  1. Close and open the seed PDB:

 

alter pluggable database pdb$seed close;

alter pluggable database pdb$seed open;

d.Execute the post-database-creation scripts

  1. Execute catalog.sql and other post-creation scripts.

?/rdbms/admin/catalog.sql

?/rdbms/admin/catblock.sql

?/rdbms/admin/catproc.sql

?/rdbms/admin/catoctk.sql

?/rdbms/admin/owminst.plb

?/sqlplus/admin/pupbld.sql

e.Register into listener

SYS@XCORTST1> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.XX.11.XXX)(PORT=1590))';

System altered.

SYS@XCORTST1> alter system register;

System altered.

f.Check the listener status now.

Creation of CDB from DBCA

PDB creation overview

When a PDB is created from the seed, the operation:

  • Copies the data files from the PDB$SEED data files
  • Creates the SYSTEM and SYSAUX tablespaces
  • Creates a full catalog including metadata pointing to Oracle-supplied objects
  • Creates the common users: superuser SYS and SYSTEM
  • Creates a local user (the PDBA) granted the local PDB_DBA role
  • Creates a new default service

Command line PDB creation

SYS@XCORTST1> CREATE PLUGGABLE DATABASE my_new_pdb

ADMIN USER my_pdb_admin IDENTIFIED BY my_pdb_admin

ROLES = (dba)

DEFAULT TABLESPACE my_tbs

DATAFILE '+DATA' SIZE 50M AUTOEXTEND ON;

Pluggable database created.

SYS@XCORTST1> alter pluggable database my_new_pdb open;

Pluggable database altered.

The datafiles will be created in the directory below.

ASMCMD> pwd

+DATA/XCORTST/5F2CCEBE3E091BD4E053850B330AF8E1/DATAFILE

ASMCMD> ls -ltr

Type      Redund  Striped  Time             Sys  Name

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    MY_TBS.445.961378753

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    SYSAUX.443.961378749

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    SYSTEM.442.961378749

Creation of a pluggable database using SQL Developer and managing it from SQL Developer

Once a PDB is created (from the seed PDB or via the plugging or cloning methods) or closed, you can view the status of the new or closed PDB by querying the STATUS column of the CDB_PDBS view.
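For example:

SQL> select pdb_name, status from cdb_pdbs;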

If common users and roles were previously created, the new or closed PDB must be synchronized to retrieve the new common users and roles from the root. The synchronization is performed automatically when the PDB is opened in read/write mode.

If you open the PDB in read only mode, an error is returned.

When a PDB is opened, Oracle Database checks the compatibility of the PDB with the CDB. Each compatibility violation is either a warning or an error. If a compatibility violation is a warning, then the warning is recorded in the alert log, but the PDB is opened normally without displaying a warning message. If a compatibility violation is an error, then an error message is displayed when the PDB is opened, and the error is recorded in the alert log. You must correct the condition that caused each error. When there are errors, access to the PDB is limited to users with the RESTRICTED SESSION privilege so that the compatibility violations can be addressed. You can view descriptions of the violations by querying the PDB_PLUG_IN_VIOLATIONS view.
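For example (showing only unresolved violations):

SQL> select name, cause, type, message, status from pdb_plug_in_violations where status <> 'RESOLVED';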

Clone PDB from Existing PDB using command line

—clone pdb to another pdb—

SYS@XCORTST1> alter PLUGGABLE DATABASE "PDB_ORCL" open read only;

Pluggable database altered.

SYS@XCORTST1> create PLUGGABLE DATABASE "PDB_ORCL_CLONE" from "PDB_ORCL";

Pluggable database created.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE close;

Pluggable database altered.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE open;

Pluggable database altered.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE open read only;

 

SYS@XCORTST1> ALTER PLUGGABLE DATABASE pdb_orcl close;

 

Pluggable database altered.

Plug and unplug PDB to another CDB using command line

In Source CDB

SYS@XCORTST1> ALTER PLUGGABLE DATABASE pdb_orcl UNPLUG INTO '/tmp/pdb_orcl.xml';

Pluggable database altered.

SYS@XCORTST1> drop PLUGGABLE DATABASE pdb_orcl;

Pluggable database dropped.

Check whether the unplugged PDB can be plugged into the new CDB:

 

SET SERVEROUTPUT ON
DECLARE
compatible CONSTANT VARCHAR2(3) :=
CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/pdb_orcl.xml', pdb_name => 'pdb_orcl')
WHEN TRUE THEN 'YES'
ELSE 'NO'
END;
BEGIN
DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

In Target CDB

SYS@XCLONEST1> create PLUGGABLE DATABASE "PDB_ORCL" using '/tmp/pdb_orcl.xml' nocopy;

Pluggable database created.

Possible ways to convert a non-CDB to a PDB of another CDB

There are three possible methods to plug a non-CDB database into a CDB. Whichever method is used, you have to get the non-CDB into a transactionally consistent state and open it in restricted mode.

  • Either use transportable tablespaces (TTS), a full conventional export/import, or transportable database (TDB), provided that in the last case every user-defined object resides in a single user-defined tablespace.

  • Or use the DBMS_PDB package to construct an XML file describing the non-CDB data files, and plug the non-CDB into the CDB as a PDB. This method presupposes that the non-CDB is an Oracle 12c database.

  • Or use replication with GoldenGate.

Example: converting a non-CDB to a PDB of another CDB

In the non-CDB:

export ORACLE_SID=ORCL

sqlplus / as sysdba

shutdown immediate;

startup mount;

alter database open read only;

exec dbms_pdb.describe('/tmp/orcl.xml');

In the target CDB:

sqlplus / as sysdba

create pluggable database pdb1 using '/tmp/orcl.xml';

Run the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete unnecessary metadata from the PDB SYSTEM tablespace. This script must be run before the PDB can be opened for the first time, and it is required only when plugging in non-CDBs.

sqlplus / as sysdba

connect sys/xxx@pdb1 as sysdba

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb;
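After the script completes, the new PDB can be opened, for example:

SQL> alter pluggable database pdb1 open;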

 

Different restore and recovery scenario for PDB and CDB in Oracle 12c

Introduction

This document describes restore and recovery scenarios for CDBs and PDBs using Data Protector and RMAN.

1.New SYSBACKUP privilege in RMAN 12c

1.1 Create a common user in the CDB with the SYSBACKUP privilege

[oracle@rac1 ~]$ export ORACLE_SID=orcl1
 [oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sat Dec 9 19:56:36 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> create user c##dba_backup identified by test123;

User created.

SQL> grant sysbackup to c##dba_backup;

Grant succeeded.

 

1.2 Back up the whole database using the SYSBACKUP privilege

[oracle@rac1 ~]$ rman target '"c##dba_backup@orcl as sysbackup"'

Recovery Manager: Release 12.1.0.2.0 – Production on Sat Dec 9 19:58:38 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

target database Password:
connected to target database: ORCL (DBID=1489144156)

RMAN> backup database;

Starting backup at 09-DEC-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=105 instance=orcl1 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/ORCL/DATAFILE/system.278.961363313
input datafile file number=00003 name=+DATA/ORCL/DATAFILE/sysaux.277.961363267
input datafile file number=00004 name=+DATA/ORCL/DATAFILE/undotbs1.280.961363369
input datafile file number=00008 name=+DATA/ORCL/DATAFILE/undotbs2.291.961364193
input datafile file number=00006 name=+DATA/ORCL/DATAFILE/users.279.961363369
channel ORA_DISK_1: starting piece 1 at 09-DEC-17

 

2.Backup of CDB and PDB

—CDB backup–
export ORACLE_SID=ORCL1

[oracle@rac1 ~]$ rman target /

RMAN> backup database plus archivelog;
—CDB root backup—
RMAN> backup pluggable database "CDB$ROOT";
–Backup pluggable database—
RMAN>backup pluggable database oem;

RMAN>backup pluggable database oem plus archivelog;

3.Active database duplication and other new RMAN 12c features

See the earlier section "Active RMAN duplicate clone 12c using section size and compress backupset".

4.Loss of temp file scenario for CDB and PDB

When you create a user, you can specify a temporary tablespace for the user. If none is specified, the default temporary tablespace of the PDB is used; if the PDB has no default temporary tablespace, the temporary tablespace of the CDB is used.
If a temp file belonging to the CDB temporary tablespace is lost or damaged, an error occurs during the execution of any SQL statement that requires that temporary space.
SQL> select * from dba_source order by 1;

274137 rows selected.

Execution Plan
----------------------------------------------------------

ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 201
ORA-01110: data file 201: '/u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_temp_do2ljro0_.tmp'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

4.1 Create a temp file in CDB

SQL> alter tablespace temp add tempfile;

Tablespace altered.

NB: The temporary datafile will be automatically re-created once the DB is restarted.

4.2 Temporary tablespace lost for PDB

[oracle@XXX ~]$ sqlplus system/oracle@ORA12CPD1

 

SQL> set autotrace traceonly;
SQL> select * from dba_source order by 1;

285357 rows selected.

Execution Plan
———————————————————-
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 203
ORA-01110: data file 203:
‘/u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf
_temp_do2lvkg7_.dbf’
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

4.3 Add tempfile to temp tablespace in PDB and then drop corrupted temp file

SQL> alter tablespace temp add tempfile;

Tablespace altered.

SQL> alter tablespace temp drop tempfile '/u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_temp_do2lvkg7_.dbf';

Tablespace altered.

SQL> select * from dba_source order by 1;

285357 rows selected.

NB: The PDB can be opened with missing temp files. If any of the temporary files do not exist when the PDB is opened, they are not created automatically; they are automatically recreated at CDB startup.

5. Loss of control file in the CDB (a PDB does not have its own control file)

Failure Scenario

[oracle@xxx ~]$ rm /u01/app/oracle/oradata/ORA12CP/controlfile/o1_mf_do2ljh6z_.ctl
[oracle@xxx ~]$ sqlplus / as sysdba

 

SQL> select count(1) from dba_objects;

COUNT(1)
———-
72648

SQL> create table t1(c1 char(100));

Table created.
SQL> insert into t1 select 'A' from dual connect by rownum<100000;
 insert into t1 select 'A' from dual connect by rownum<100000
 *
 ERROR at line 1:
 ORA-00210: cannot open the specified control file
 ORA-00202: control file:
 '/u01/app/oracle/oradata/ORA12CP/controlfile/o1_mf_do2ljh6z_.ctl'
 ORA-27041: unable to open file
 Linux-x86_64 Error: 2: No such file or directory
 Additional information: 3
SQL> select count(1) from dba_objects;

COUNT(1)
———-
72649

5.1 Recover the CONTROLFILE now (the database was previously restored to an old incarnation)

[oracle@XXX ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 – Production on Tue Jun 27 15:02:13 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559, not open)

RMAN> list incarnation;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
——- ——- ——– —————- — ———- ———-
1 1 ORA12CP 1429060559 PARENT 1 26-JAN-17
2 2 ORA12CP 1429060559 PARENT 1408558 26-JUN-17
3 3 ORA12CP 1429060559 CURRENT 1499533 26-JUN-17

RMAN> reset database to incarnation 2;

database reset to incarnation 2

[oracle@XXX ~]$ rman target / catalog rco/rco@DBADB

Recovery Manager: Release 12.2.0.1.0 – Production on Tue Jun 27 14:58:14 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559, not open)
connected to recovery catalog database

RMAN> run {
 2> allocate channel 'dev_0' type 'sbt_tape'
 3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
 4> recover database;
 5> alter database open resetlogs;
 6> }

allocated channel: dev_0
channel dev_0: SID=34 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting recover at 27-JUN-17

starting media recovery
media recovery failed
released channel: dev_0
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/27/2017 15:04:16
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover
if needed start until cancel using backup controlfile
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_system_do2lf6n8_.dbf'

Since the incarnation was reset, you need to restore as well as recover the database:

RMAN> run {
 2> allocate channel 'dev_0' type 'sbt_tape'
 3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
 4> restore database;
 5> recover database;
 6> alter database open resetlogs;
 7> }

allocated channel: dev_0
channel dev_0: SID=42 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

skipping datafile 5; already restored to file /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_system_do2ljv4y_.dbf
skipping datafile 6; already restored to file /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_sysaux_do2ljv4q_.dbf
skipping datafile 8; already restored to file /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_undotbs1_do2ljv4z_.dbf
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00001 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_system_do2lf6n8_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_2:947689345:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_2:947689345:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00003 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_sysaux_do2lgmrx_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_3:947689360:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_3:947689360:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00010 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_sysaux_do2lvkg6_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_4:947689367:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_4:947689367:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00014 to /u01/app/oracle/oradata/ORA12CP/52E1B5C5F134583AE0537904330A679A/datafile/o1_mf_sysaux_do2lvqd8_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_5:947689374:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_5:947689374:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00009 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do2lvkfl_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_8:947689396:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_8:947689396:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00013 to /u01/app/oracle/oradata/ORA12CP/52E1B5C5F134583AE0537904330A679A/datafile/o1_mf_system_do2lvqd7_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_9:947689403:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_9:947689403:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00011 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_undotbs1_do2lvkg7_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_11:947689413:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_11:947689413:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00015 to /u01/app/oracle/oradata/ORA12CP/52E1B5C5F134583AE0537904330A679A/datafile/o1_mf_undotbs1_do2lvqd8_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_12:947689416:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_12:947689416:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00004 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_undotbs1_do2lhdxt_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_13:947689419:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_13:947689419:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00007 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_users_do2lhg0j_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_14:947689422:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_14:947689422:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00012 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_users_do2lvpy8_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_15:947689429:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_15:947689429:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00016 to /u01/app/oracle/oradata/ORA12CP/52E1B5C5F134583AE0537904330A679A/datafile/o1_mf_users_do2lvxsx_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_16:947689433:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_16:947689433:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery
media recovery failed
released channel: dev_0
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 06/27/2017 15:13:38
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover
if needed start until cancel using backup controlfile
ORA-00283: recovery session canceled due to errors
ORA-19912: cannot recover to target incarnation 2

[oracle@xxx ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 – Production on Tue Jun 27 15:16:04 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559, not open)

RMAN> list incarnation;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key Inc Key DB Name DB ID STATUS Reset SCN Reset Time
——- ——- ——– —————- — ———- ———-
1 1 ORA12CP 1429060559 PARENT 1 26-JAN-17
2 2 ORA12CP 1429060559 CURRENT 1408558 26-JUN-17
3 3 ORA12CP 1429060559 ORPHAN 1499533 26-JUN-17

RMAN> reset database to incarnation 3;
RMAN> run {
 2> allocate channel 'dev_0' type 'sbt_tape'
 3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
 4> recover database;
 5> alter database open resetlogs;
 6> }

allocated channel: dev_0
channel dev_0: SID=34 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do3p7ltn_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/onlinelog/o1_mf_2_do3fzo1m_.log
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cv24b_.arc thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cv24b_.arc RECID=26 STAMP=947776675
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5cwg82_.arc thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5cwg82_.arc RECID=27 STAMP=947776718
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do3p7ltn_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/onlinelog/o1_mf_2_do3fzo1m_.log thread=1 sequence=2
media recovery complete, elapsed time: 00:00:25
Finished recover at 27-JUN-17

Statement processed
released channel: dev_0

5.2 Restore database control file current incarnation

[oracle@cdv1pradmdbv01 ~]$ rm /u01/app/oracle/oradata/ORA12CP/controlfile/o1_mf_do2ljh6z_.ctl
[oracle@cdv1pradmdbv01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jun 27 15:26:11 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 – 64bit Production

SQL> shutdown abort;
 ORACLE instance shut down.

SQL> startup nomount;
 ORACLE instance started.

Total System Global Area 1828716544 bytes
Fixed Size 8793928 bytes
Variable Size 553648312 bytes
Database Buffers 1258291200 bytes
Redo Buffers 7983104 bytes

[oracle@cdv1pradmdbv01 ~]$ rman target /

connected to target database: ORA12CP (not mounted)

RMAN> restore controlfile from autobackup;

Starting restore at 27-JUN-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK

recovery area destination: /u01/app/oracle/fast_recovery_area/ORA12cP
database name (or database unique name) used for search: ORA12CP
channel ORA_DISK_1: AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/autobackup/2017_06_27/o1_mf_s_947776764_do5cxx44_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/autobackup/2017_06_27/o1_mf_s_947776764_do5cxx44_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/u01/app/oracle/oradata/ORA12CP/controlfile/o1_mf_do2ljh6z_.ctl
output file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/controlfile/o1_mf_do2ljh8f_.ctl
Finished restore at 27-JUN-17

 

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1

RMAN> run {
 allocate channel 'dev_0' type 'sbt_tape'
 parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
 recover database;
 }2> 3> 4> 5>

allocated channel: dev_0
channel dev_0: SID=35 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/onlinelog/o1_mf_1_do3fznnn_.log
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/onlinelog/o1_mf_1_do3fznnn_.log thread=1 sequence=1
media recovery complete, elapsed time: 00:00:00
Finished recover at 27-JUN-17
released channel: dev_0

RMAN> alter database open RESETLOGS;

Statement processed

RMAN> list incarnation;

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       ORA12CP  1429060559       PARENT  1          26-JAN-17
2       2       ORA12CP  1429060559       PARENT  1408558    26-JUN-17
3       3       ORA12CP  1429060559       PARENT  1499533    26-JUN-17
4       4       ORA12CP  1429060559       PARENT  1589073    27-JUN-17
5       5       ORA12CP  1429060559       CURRENT 1591011    27-JUN-17

RMAN> alter pluggable database all open;

Statement processed

6.RESTORE TABLESPACE SCENARIO (CDB)

If the missing or corrupted data file belongs to the root container's SYSTEM or UNDO tablespace, the CDB instance must be shut down and media recovery is required. In a RAC environment, you would shut down all instances of the CDB.
This means that all PDBs will be closed.
The CDB must be mounted before restoring and recovering the missing root data file.
After the root data file is recovered, open the CDB and all the PDBs.
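
A minimal sketch of that whole sequence (the full session output follows in 6.1); the channel parms are the same Data Protector settings used throughout this post, and you would substitute SYSTEM or UNDOTBS1 as appropriate:

SQL> shutdown immediate;
SQL> startup mount;

[oracle@cdv1pradmdbv01 ~]$ rman target /

RMAN> run {
2> allocate channel 'dev_0' type 'sbt_tape'
3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
4> restore tablespace 'SYSTEM';
5> recover tablespace 'SYSTEM';
6> }
RMAN> alter database open;
RMAN> alter pluggable database all open;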

6.1 RESTORE SYSTEM AND UNDO TABLESPACE FOR ROOT (CDB)

[oracle@cdv1pradmdbv01 ~]$ sqlplus / as sysdba

SQL> shutdown immediate;

ORACLE instance shut down.
 SQL> startup mount;
ORACLE instance started.

Total System Global Area 1828716544 bytes
Fixed Size 8793928 bytes
Variable Size 553648312 bytes
Database Buffers 1258291200 bytes
Redo Buffers 7983104 bytes
Database mounted.

[oracle@cdv1pradmdbv01 ~]$ rman target / catalog rco/rco@DBADB

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jun 27 15:37:49 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559, not open)
connected to recovery catalog database

RMAN> run {
2> allocate channel 'dev_0' type 'sbt_tape'
3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
4> restore tablespace 'UNDOTBS1';
5> recover tablespace 'UNDOTBS1';
6> }

allocated channel: dev_0
channel dev_0: SID=41 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00004 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_undotbs1_do2lhdxt_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_13:947689419:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_13:947689419:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5f37t5_.arc thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5f37t5_.arc RECID=37 STAMP=947777960
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5f4nw9_.arc thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5f4nw9_.arc RECID=38 STAMP=947778004
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc thread=1 sequence=1
media recovery complete, elapsed time: 00:00:03
Finished recover at 27-JUN-17
released channel: dev_0

RMAN> alter database open;

Statement processed
starting full resync of recovery catalog
full resync complete

RMAN> alter pluggable database all open;

Statement processed

NB: If the missing or corrupted file belongs to the root container's SYSTEM or UNDO tablespace, the CDB instance must be shut down and media recovery is required; all PDBs are closed as well.
After the root data file is recovered, the CDB and the PDBs can be opened.
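
If you are unsure whether a damaged file belongs to the root, RMAN's REPORT SCHEMA lists every data file with its tablespace and container, so you can decide which scenario applies (a quick check, nothing assumed beyond a target connection):

RMAN> report schema;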


6.2 RESTORE SYSAUX TABLESPACE FOR ROOT IN CDB

There is no need to stop the CDB or the PDBs: this recovery can be done while the database is online, as sketched below.
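
Because the database stays open, the damaged tablespace is normally taken offline before the restore; the run block in the demo below then brings it back online as its last step. The OFFLINE step is assumed to have been issued beforehand, e.g.:

SQL> alter tablespace SYSAUX offline immediate;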
[oracle@cdv1pradmdbv01 ~]$ rman target / catalog rco/rco@DBADB

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jun 27 16:10:13 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559)
connected to recovery catalog database

RMAN> run {
2> allocate channel 'dev_0' type 'sbt_tape'
3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
4> restore tablespace 'SYSAUX';
5> recover tablespace 'SYSAUX';
6> alter tablespace SYSAUX online;
7> }

allocated channel: dev_0
channel dev_0: SID=58 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00003 to /u01/app/oracle/oradata/ORA12CP/datafile/o1_mf_sysaux_do2lgmrx_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_3:947689360:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_3:947689360:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:46
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5h134y_.arc thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5h134y_.arc RECID=42 STAMP=947779940
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5h2hq4_.arc thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5h2hq4_.arc RECID=43 STAMP=947779983
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc thread=1 sequence=1
media recovery complete, elapsed time: 00:00:03
Finished recover at 27-JUN-17
starting full resync of recovery catalog
full resync complete

Statement processed
released channel: dev_0

7.RESTORE TABLESPACE SCENARIO (PDB)

7.1 Restore missing system tablespace in PDB

If the missing or corrupted data file belongs to a PDB, and more specifically to its SYSTEM tablespace, the CDB must be closed, unless the PDB itself was already closed when the failure occurred.
Media recovery of the pluggable database, tablespace, or data file is required before the PDB can be reopened.
If the PDB was closed at the time of the failure, users can keep working in the other PDBs during the PDB recovery.
If the PDB was still open at the time of the failure, users cannot work in any PDB, because the CDB has to be shut down and left in MOUNT state.
The restore and recovery must be issued from the root; you can first confirm which PDB owns the damaged file, as shown below.
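
A minimal sketch for mapping a file number to its container from the root (file 9 matches the SYSTEM data file used later in Option 2; adjust to your failure):

SQL> select d.file#, p.name pdb_name, d.name file_name
  2  from v$datafile d, v$pdbs p
  3  where d.con_id = p.con_id and d.file# = 9;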
Option 1: Recover the whole PDB. The CDB needs to be in MOUNT state.
[oracle@cdv1pradmdbv01 ~]$ sqlplus


Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate;
 Database closed.
 Database dismounted.
 ORACLE instance shut down.

SQL> startup mount;
 ORACLE instance started.

Total System Global Area 1828716544 bytes
Fixed Size 8793928 bytes
Variable Size 553648312 bytes
Database Buffers 1258291200 bytes
Redo Buffers 7983104 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@cdv1pradmdbv01 ~]$ rman target /


connected to target database: ORA12CP (DBID=1429060559, not open)

RMAN> run {
2> allocate channel 'dev_0' type 'sbt_tape'
3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
4> restore pluggable database ORA12CPD1;
5> recover pluggable database ORA12CPD1;
6> alter database open;
7> alter pluggable database ORA12CPD1 open;
8> }

using target database control file instead of recovery catalog
allocated channel: dev_0
channel dev_0: SID=41 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00010 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_sysaux_do2lvkg6_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_4:947689367:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_4:947689367:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00009 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do2lvkfl_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_8:947689396:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_8:947689396:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00011 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_undotbs1_do2lvkg7_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_11:947689413:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_11:947689413:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00012 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_users_do2lvpy8_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_15:947689429:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_15:947689429:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5l1gjw_.arc thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5l1gjw_.arc RECID=44 STAMP=947783023
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5l2t4x_.arc thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5l2t4x_.arc RECID=45 STAMP=947783066
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc thread=1 sequence=1
media recovery complete, elapsed time: 00:00:06
Finished recover at 27-JUN-17

RMAN> alter database open;

Statement processed

RMAN> alter pluggable database ORA12CPD1 open;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 06/27/2017 17:05:18
ORA-65019: pluggable database ORA12CPD1 already open

NB: This error can be ignored here: the PDB had already been opened by step 7> of the run block above, so the second OPEN simply finds it open.

Option 2: Recover only the corrupted SYSTEM tablespace. The CDB needs to be in MOUNT state. Note the container-qualified notation used below: from the root, a tablespace inside a PDB is addressed as PDB_NAME:TABLESPACE_NAME (here ORA12CPD1:system).
[oracle@cdv1pradmdbv01 ~]$ rm /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do2lvkfl_.dbf
[oracle@cdv1pradmdbv01 ~]$ sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jun 27 17:09:48 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> shutdown immediate;
 ORA-01116: error in opening database file 9
 ORA-01110: data file 9: '/u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do2lvkfl_.dbf'
 ORA-27041: unable to open file
 Linux-x86_64 Error: 2: No such file or directory
 Additional information: 3

SQL> shutdown abort;

ORACLE instance shut down.

SQL> startup mount;

ORACLE instance started.

Total System Global Area 1828716544 bytes
Fixed Size 8793928 bytes
Variable Size 553648312 bytes
Database Buffers 1258291200 bytes
Redo Buffers 7983104 bytes
Database mounted.

[oracle@cdv1pradmdbv01 ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jun 27 17:11:18 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORA12CP (DBID=1429060559, not open)

RMAN> run {
 2> allocate channel 'dev_0' type 'sbt_tape'
 3> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
 4> restore tablespace ORA12CPD1:system;
 5> recover tablespace ORA12CPD1:system;
 6> }

using target database control file instead of recovery catalog
allocated channel: dev_0
channel dev_0: SID=41 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

channel dev_0: starting datafile backup set restore
channel dev_0: specifying datafile(s) to restore from backup set
channel dev_0: restoring datafile 00009 to /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do2lvkfl_.dbf
channel dev_0: reading from backup piece ora12cpt<ORA12cP_8:947689396:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_8:947689396:1>.dbf tag=TAG20170626T150225
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5llvyw_.arc thread=1 sequence=2
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5llvyw_.arc RECID=46 STAMP=947783580
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5lnb83_.arc thread=1 sequence=3
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5lnb83_.arc RECID=47 STAMP=947783626
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc thread=1 sequence=1
media recovery complete, elapsed time: 00:00:04
Finished recover at 27-JUN-17
released channel: dev_0

RMAN> shutdown immediate;

database dismounted
Oracle instance shut down

RMAN> startup;

connected to target database (not started)
Oracle instance started
database mounted
database opened
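
Whether the PDBs reopen automatically after this restart depends on their saved state (ALTER PLUGGABLE DATABASE ... SAVE STATE, available from 12.1.0.2); it is worth checking from the root and opening them if needed:

SQL> show pdbs
SQL> alter pluggable database all open;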


8.Point-in-time recovery of a pluggable database (PDB)

If you need to recover a PDB to a point in time in the past beyond the flashback retention, flashback is not possible, so a point-in-time recovery is necessary.
Recovering a PDB to a point in time does not affect the rest of the CDB: the CDB stays open, and so do all the other PDBs. After recovering a PDB to a specified point in time, you open the PDB with the RESETLOGS option, which creates a new incarnation of the PDB. The PDB RESETLOGS does not perform a RESETLOGS of the CDB.
• The PDB record in the control file is updated.
• Each redo record carries the PDB ID in its header; this is how recovery knows which redo applies to which PDB. Redo logs are shared by all PDBs: redo from every PDB is written to a single set of redo logs.
Conceptually, a PDB RESETLOGS is similar to a database RESETLOGS.
After recovery, the old backups of the PDB remain valid and can be used if a media failure occurs. After restoring/recovering a PDB to a past point in time, you cannot open the PDB read-only; a read-write open with RESETLOGS is required.
A PDB incarnation is a sub-incarnation of the CDB incarnation. For example, if the CDB is at incarnation 5 and a PDB is at incarnation 3, the fully specified incarnation number of the PDB is (5, 3). The initial incarnation of a PDB is 0. To view the incarnations of a PDB, query the V$PDB_INCARNATION view.

The CDB and the other PDBs remain unaffected.
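
For example, a sketch of that query from the root (CON_ID 3 is an assumption for the first user-created PDB; column names are as documented for 12c, so verify in your release):

SQL> select con_id, pdb_incarnation#, status, incarnation_scn, incarnation_time
  2  from v$pdb_incarnation
  3  where con_id = 3;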

RMAN> run {
2> set until time = "TO_DATE('06/27/2017 17:34:00','MM/DD/YYYY HH24:MI:SS')";
3> allocate channel 'dev_0' type 'sbt_tape'
4> parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=ORA12cP,OB2BARLIST=1498520660,OB2BARHOSTNAME=cdv1pradmdbv01.tdeprdcl.internal)';
5> restore pluggable database ORA12CPD1;
6> recover pluggable database ORA12CPD1 auxiliary destination '/u01/app/oracle/oradata';
7> alter pluggable database ORA12CPD1 open resetlogs;
8> }

executing command: SET until clause

allocated channel: dev_0
channel dev_0: SID=8 device type=SBT_TAPE
channel dev_0: Data Protector A.09.00/110

Starting restore at 27-JUN-17

skipping datafile 9; already restored to file /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_system_do5lkhfq_.dbf
skipping datafile 10; already restored to file /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_sysaux_do2lvkg6_.dbf
skipping datafile 11; already restored to file /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_undotbs1_do2lvkg7_.dbf
skipping datafile 12; already restored to file /u01/app/oracle/oradata/ORA12CP/52E1B5675376582DE0537904330A562D/datafile/o1_mf_users_do2lvpy8_.dbf
Finished restore at 27-JUN-17

Starting recover at 27-JUN-17

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_4_do314vlc_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_26/o1_mf_1_5_do3fzn9p_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5cx8k2_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5cx8x2_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_1_do5dh20l_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5ndy7b_.arc
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=2
channel dev_0: reading from backup piece ora12cpt<ORA12cP_18:947689444:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_18:947689444:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:45
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_2_do5nns56_.arc RECID=50 STAMP=947785690
channel dev_0: starting archived log restore to default destination
channel dev_0: restoring archived log
archived log thread=1 sequence=3
channel dev_0: reading from backup piece ora12cpt<ORA12cP_19:947689447:1>.dbf
channel dev_0: piece handle=ora12cpt<ORA12cP_19:947689447:1>.dbf tag=TAG20170626T150404
channel dev_0: restored backup piece 1
channel dev_0: restore complete, elapsed time: 00:00:35
channel default: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ORA12cP/ORA12CP/archivelog/2017_06_27/o1_mf_1_3_do5np4dk_.arc RECID=51 STAMP=947785732
media recovery complete, elapsed time: 00:00:06
Finished recover at 27-JUN-17

Statement processed
released channel: dev_0
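
After the RESETLOGS open at step 7>, it is worth confirming that the PDB is back in read-write mode, and that a new PDB incarnation was recorded (reuse the V$PDB_INCARNATION query shown earlier). A quick check from the root:

SQL> select name, open_mode from v$pdbs where name = 'ORA12CPD1';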