Database backup and restore after manual catalog RMAN backup

1. Please take a backup of the primary database ORCL:

rman target / nocatalog log=/tmp/rman_bkp.log << EOF1
run
{
backup as compressed backupset database format '/opt/app/oratest1/bkp/ORCL_%U';
backup as compressed backupset archivelog all format '/opt/app/oratest1/bkp/ORCL_ARCH_%U';
}
exit;
EOF1

Please copy the backup pieces to the backup location /opt/app/oratest1/bkp on the server where RCATT will be restored.

2. Please start the RCATT database in NOMOUNT mode.

3. Please create the controlfile from the backup. DB_CREATE_FILE_DEST should be set to the diskgroup, and the control file should be placed in the diskgroup location as control.ctl.

CREATE CONTROLFILE SET DATABASE "RCATT" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '+DATA1/rcatt/onlinelog/group_1.261.825435443' SIZE 50M BLOCKSIZE 512,
GROUP 2 '+DATA1/rcatt/onlinelog/group_2.262.825435443' SIZE 50M BLOCKSIZE 512,
GROUP 3 '+DATA1/rcatt/onlinelog/group_3.263.825435445' SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'+DATA1/rcatt/datafile/system.256.825435379',
'+DATA1/rcatt/datafile/sysaux.257.825435379',
'+DATA1/rcatt/datafile/undotbs1.258.825435381',
'+DATA1/rcatt/datafile/users.259.825696133',
'+DATA1/rcatt/datafile/orcl.266.825438401'
CHARACTER SET WE8MSWIN1252
;

4. Please catalog the backup pieces and restore:

rman target /
RMAN> catalog start with '/opt/app/oratest1/bkp';
RMAN> restore database;
RMAN> list backup;    (Please identify the highest archived log sequence)
RMAN> recover database until sequence 26;    (Please specify the highest sequence)

Please ignore the following error

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 09/11/2013 17:02:13
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 16 and starting SCN of 1545450

5. Please open the database with the RESETLOGS option.

SQL> alter database open resetlogs;

ORACLE OPTIMIZER COST CALCULATIONS BASIC OVERVIEW

Cost Calculation for Full table Scan

How FTS cost depends on system statistics

Gather system statistics manually under load:

The following command starts gathering system statistics.
SQL> execute DBMS_STATS.GATHER_SYSTEM_STATS('START');

Please wait for the system to warm up under the workload, then stop gathering system statistics.

These are called workload statistics. The MBRC will be calculated automatically, depending on the number of full table scans occurring on the database.

SQL> execute DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

SQL> select * from aux_stats$;

SNAME            PNAME         PVAL1      PVAL2
---------------- ------------- ---------- ----------------
SYSSTATS_INFO    STATUS                   AUTOGATHERING
SYSSTATS_INFO    DSTART                   01-15-2018 04:44
SYSSTATS_INFO    DSTOP                    01-15-2018 04:46
SYSSTATS_INFO    FLAGS         0
SYSSTATS_TEMP    SBLKRDS       46416
SYSSTATS_TEMP    SBLKRDTIM     22465.055
SYSSTATS_TEMP    MBLKRDS       3864
SYSSTATS_TEMP    MBLKRDTIM     4291.249
SYSSTATS_TEMP    CPUCYCLES     69720
SYSSTATS_TEMP    CPUTIM        23054
SYSSTATS_TEMP    JOB           14
SYSSTATS_TEMP    CACHE_JOB     16
SYSSTATS_TEMP    MBRTOTAL      65755
SYSSTATS_MAIN    CPUSPEEDNW    3024
SYSSTATS_MAIN    IOSEEKTIM     10
SYSSTATS_MAIN    IOTFRSPEED    4096
SYSSTATS_MAIN    SREADTIM
SYSSTATS_MAIN    MREADTIM
SYSSTATS_MAIN    CPUSPEED      3024
SYSSTATS_MAIN    MBRC
SYSSTATS_MAIN    MAXTHR
SYSSTATS_MAIN    SLAVETHR

Not all values may be populated, depending on the nature of the workload.

Example of FTS cost calculation (system statistics gathered on load)

sreadtim : average time to read a single block, in milliseconds.
mreadtim : average time to read multiple blocks (one multiblock I/O), in milliseconds.
cpuspeed : CPU cycles per second, in millions.
MBRC : average multiblock read count for sequential reads, in blocks.

CPU_COST and IO_COST in PLAN_TABLE after gathering system statistics on load:

C##TEST@TESTDB1> select count(1) from tab1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1117438016

--------------------------------------------------------------------
| Id | Operation          | Name | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------
|  0 | SELECT STATEMENT   |      |     1 |   113K  (2)| 00:00:05 |
|  1 | SORT AGGREGATE     |      |     1 |            |          |
|  2 | TABLE ACCESS FULL  | TAB1 | 1999K |   113K  (2)| 00:00:05 |
--------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
285857 consistent gets
0 physical reads
0 redo size
545 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

So the CPU cost is about 2% of the total cost, and the I/O cost about 98%.

10053 trace generation:

SYS@TESTDB1> alter session set events '10053 trace name context forever';

Session altered.

SYS@TESTDB1> select count(1) from c##test.tab1;

  COUNT(1)
----------
   1999998

SYS@TESTDB1> alter session set events '10053 trace name context off';

Session altered.

Cost parameters in the 10053 trace:

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using dictionary system stats.
Using WORKLOAD Stats
CPUSPEED: 2942 millions instructions/sec
SREADTIM: 0.621000 milliseconds
MREADTIM: 0.974000 milliseconds
MBRC: 4 blocks
MAXTHR: -1 bytes/sec
SLAVETHR: -1 bytes/sec

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: TAB1 Alias: TAB1
#Rows: 1999998 SSZ: 0 LGR: 0 #Blks: 286592 AvgRowLen: 1012.00 NEB: 0 ChainCnt: 0.00 SPC: 0 RFL: 0 RNF: 0 CBK: 0 CHR: 0 KQDFLG: 1
#IMCUs: 0 IMCRowCnt: 0 IMCJournalRowCnt: 0 #IMCBlocks: 0 IMCQuotient: 0.000000
=======================================
SPD: BEGIN context at query block level

Access path analysis for TAB1
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for TAB1[TAB1]
SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
Table: TAB1 Alias: TAB1
Card: Original: 1999998.000000 Rounded: 1999998 Computed: 1999998.000000 Non Adjusted: 1999998.000000
Scan IO Cost (Disk) = 112377.000000
Scan CPU Cost (Disk) = 2340947432.480000
Total Scan IO Cost = 112377.000000 (scan (Disk))
= 112377.000000
Total Scan CPU Cost = 2340947432.480000 (scan (Disk))
= 2340947432.480000
Access Path: TableScan
Cost: 113658.319374 Resp: 113658.319374 Degree: 0
Cost_io: 112377.000000 Cost_cpu: 2340947432
Resp_io: 112377.000000 Resp_cpu: 2340947432
Best:: AccessPath: TableScan
Cost: 113658.319374 Degree: 1 Resp: 113658.319374 Card: 1999998.000000 Bytes: 0.000000

Example of FTS cost calculation

Formula for FTS (system statistics gathered on load):-

iocost := (no. of blocks / MBRC) * (mreadtim / sreadtim)
cpucost := #cpu cycles / (cpuspeed * (sreadtim * 1000))    (#cpu cycles is the Total Scan CPU Cost)
No. of blocks := 286592
MBRC := 4
mreadtim := 0.974000
sreadtim := 0.621000
iocost := (286592/4) * (0.974000/0.621000) = 112377
cpucost := 2340947432 / (2942 * (0.621000 * 1000)) = 1281
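As a quick sanity check, the formula above can be reproduced in a few lines of Python. This is a sketch, not anything Oracle ships; the helper name fts_cost is ours. Plugging in the traced values lands within a couple of units of the 112377 and 1281 figures from the trace, the small difference being the optimizer's internal rounding.

```python
import math

# Hypothetical helper reproducing the workload-statistics FTS cost formula.
# Inputs are the values quoted from the 10053 trace in this section.
def fts_cost(blocks, mbrc, mreadtim, sreadtim, cpu_cycles, cpuspeed):
    io_cost = (blocks / mbrc) * (mreadtim / sreadtim)
    # cpuspeed is in millions of cycles per second, sreadtim in milliseconds,
    # so cpuspeed * sreadtim * 1000 is the cycle budget of one single-block read
    cpu_cost = cpu_cycles / (cpuspeed * sreadtim * 1000)
    return io_cost, cpu_cost

io, cpu = fts_cost(286592, 4, 0.974, 0.621, 2340947432, 2942)
print(round(io), round(cpu))  # within a couple of units of 112377 and 1281
```

Note that io + cpu also comes out within a few units of the 113658.32 total cost reported in the trace, which confirms that the CPU component contributes only about 1-2% here.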

 

You can disable the CPU cost component using the following command:
alter system set "_optimizer_cost_model"=io;

You can test the explain plan as in a lower version of Oracle with the following command:
SQL> alter system set optimizer_features_enable='11.2.0';

Example of FTS (system statistics gathered on no load)

Formula for FTS (system statistics gathered on no load):-

sreadtim := ioseektim + db_block_size/iotfrspeed
mreadtim := ioseektim + db_file_multiblock_read_count*db_block_size/iotfrspeed
No. of blocks = 710
db_file_multiblock_read_count = 16 (set at session level)
sreadtim := 10 + 8192/4096 = 12
mreadtim := 10 + 16*8192/4096 = 42
iocost := ceil((710/16)*(42/12)) + 1 = 156 + 1 = 157
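The no-load arithmetic can be checked with a short Python sketch. The variable names are ours; ioseektim=10 and iotfrspeed=4096 are the standard no-workload defaults assumed in the formulas above.

```python
import math

# No-workload system statistics defaults assumed in this example
ioseektim = 10        # ms
iotfrspeed = 4096     # bytes per ms
block_size = 8192     # db_block_size
mbrc = 16             # db_file_multiblock_read_count
blocks = 710

sreadtim = ioseektim + block_size / iotfrspeed         # 10 + 2  = 12
mreadtim = ioseektim + mbrc * block_size / iotfrspeed  # 10 + 32 = 42
iocost = math.ceil((blocks / mbrc) * (mreadtim / sreadtim)) + 1
print(sreadtim, mreadtim, iocost)  # 12.0 42.0 157
```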

Example of FTS in parallel

Formula for FTS in parallel:-
10g cost at degree N = ceil(serial cost/(0.9*N))
N = degree of parallelism
iocost = ceil(157/(0.9*4)) = 44
If _optimizer_percent_parallel=75, then the cost is 75% of the parallel cost + 25% of the serial cost.
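With the parentheses made explicit, the parallel formula is easy to verify. The serial cost of 157 is the no-load figure computed earlier; the function name is ours.

```python
import math

# 10g parallel FTS cost: serial cost scaled down by 0.9 * degree
def parallel_cost(serial_cost, degree):
    return math.ceil(serial_cost / (0.9 * degree))

print(parallel_cost(157, 4))  # 44
```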

Cardinality Calculation

Step 1: We need to create table and index as below:

Create table t1(c1 number(10), c2 char(100), c3 number(10));
Create index ind_c3 on t1(c3);
Create index ind_c1_c3 on t1(c1, c3);

Step 2: Please insert some data as below:-

insert into t1 select 1,'A',1 from dual connect by rownum<1001;
insert into t1 select 2,'A',1 from dual connect by rownum<1001;
insert into t1 select 3,'A',1 from dual connect by rownum<1001;
insert into t1 select 4,'B',2 from dual connect by rownum<1001;
insert into t1 select 5,'C',NULL from dual connect by rownum<1001;
insert into t1 select 5,'D',14 from dual connect by rownum<1001;

Step 3: Please analyze the table with the compute statistics clause.

Case 1: Select * from t1 where c1=1
Cardinality = num_rows * density, where density = 1/no. of distinct values

Case 2: Select * from t1 where c1 in (1,2)
Cardinality = no. of values in the IN list * individual equality cardinality
(each value contributes num_rows * density)

Case 3: Select * from t1 where c1<=2
Cardinality = num_rows * [(limit - low val)/(high val - low val) + 1/no. of distinct values]

Case 4: Select * from t1 where c1>=2
Cardinality = num_rows * [(high val - limit)/(high val - low val) + 1/no. of distinct values]

Case 5: Select * from t1 where c2=2
If there are null values in the column, then
Cardinality = (num_rows - no. of nulls) * density, where density = 1/no. of distinct values

Case 6: Select * from t1 where c1=1 and c2=1
Cardinality = (num_rows - no. of nulls) * density of c1 * density of c2

Case 7: Select * from t1 where c1 between X and Y
Cardinality = num_rows * [(Y - X)/(high val - low val) + 1/no. of distinct values + 1/no. of distinct values]
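Assuming t1 was loaded exactly as shown above (six inserts of 1000 rows each, so five distinct c1 values between 1 and 5), the first few cases work out as follows. This is our arithmetic sketch of the rules, not optimizer output.

```python
# Statistics implied by the inserts above
num_rows = 6000
num_distinct = 5      # distinct values of c1: 1..5
low, high = 1, 5      # low and high value of c1
density = 1 / num_distinct

# Case 1: c1 = 1
card_eq = num_rows * density                               # 1200

# Case 2: c1 in (1, 2): each IN value adds one equality cardinality
card_in = 2 * num_rows * density                           # 2400

# Case 3: c1 <= 2
card_le = num_rows * ((2 - low) / (high - low) + density)  # 2700

print(round(card_eq), round(card_in), round(card_le))
```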

Cost Calculation for Index based table Scan

Index range scan with one predicate in the where clause

Formula for an index range scan:-
Cost := blevel + ceiling(leaf_blocks * effective index selectivity)
blevel = 1
leaf_blocks = 10
selectivity = 1/no. of distinct values = 1/3
Cost := 1 + ceiling(10 * 1/3) = 4 (the plan shows 4; a strict application of the formula gives 5, as the optimizer's internal rounding differs slightly)

SQL> select c3 from t1 where c3=1;

Execution Plan
-----------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|  0 | SELECT STATEMENT |        | 1667 |  3334 |      4  (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN | IND_C3 | 1667 |  3334 |      4  (0)| 00:00:01 |
-----------------------------------------------------------------------------

Index range scan with more than one predicate in the where clause

Formula for an index range scan:-
Cost := blevel + ceiling(leaf_blocks * effective index selectivity1 * effective index selectivity2)
blevel = 1
leaf_blocks = 14
selectivity1 = 1/no. of distinct values = 1/3
selectivity2 = 1/no. of distinct values = 1/5
Cost := 1 + ceiling(14 * 1/3 * 1/5) = 2

SQL> select c3 from t1 where c1=1 and c3=1;

------------------------------------------------------------------------------
| Id | Operation        | Name      | Rows | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|  0 | SELECT STATEMENT |           |  333 |  1332 |      2  (0)| 00:00:01 |
|* 1 | INDEX RANGE SCAN | IND_C1_C3 |  333 |  1332 |      2  (0)| 00:00:01 |
------------------------------------------------------------------------------

Formula for an index based table scan:-

C##TEST@TESTDB1> select index_name,blevel,distinct_keys,clustering_factor,num_rows,leaf_blocks from user_indexes;

INDEX_NAME   BLEVEL  DISTINCT_KEYS  CLUSTERING_FACTOR  NUM_ROWS  LEAF_BLOCKS
-----------  ------  -------------  -----------------  --------  -----------
T1_I1             2            500               9743     10000         1111

Cost := blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)
blevel = 2
leaf_blocks = 1111
clustering_factor = 9743
selectivity = 1/distinct_keys = 1/500 = 0.002
Cost := 2 + ceiling(1111*0.002) + ceiling(9743*0.002) = 2 + 3 + 20 = 25
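These numbers can be verified directly; this is a sketch using the T1_I1 statistics shown above.

```python
import math

# T1_I1 statistics from the user_indexes output above
blevel = 2
leaf_blocks = 1111
clustering_factor = 9743
distinct_keys = 500
sel = 1 / distinct_keys  # effective selectivity = 0.002

cost = (blevel
        + math.ceil(leaf_blocks * sel)          # 3
        + math.ceil(clustering_factor * sel))   # 20
print(cost)  # 25
```

The result matches both the hand calculation and the resc_io value of 25 in the 10053 trace further down.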

select
    small_vc
from
    t1
where
    n1 = 2
and ind_pad = rpad('x',40)
and n2 = 3
;

19 rows selected.

 

----- Plan Table -----

============
Plan Table
============
------------------------------------------------------+-----------------------------------+
| Id | Operation                           | Name  | Rows | Bytes | Cost | Time      |
------------------------------------------------------+-----------------------------------+
| 0  | SELECT STATEMENT                    |       |      |       |   25 |           |
| 1  | TABLE ACCESS BY INDEX ROWID BATCHED | T1    |   20 |  1160 |   25 | 00:00:01  |
| 2  | INDEX RANGE SCAN                    | T1_I1 |   20 |       |    5 | 00:00:01  |
------------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("N1"=2 AND "IND_PAD"='x ' AND "N2"=3)

Content of other_xml column
===========================
db_version : 12.1.0.2
parse_schema : C##TEST
plan_hash_full : 162118435
plan_hash : 3320414027
plan_hash_2 : 162118435
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
DB_VERSION('12.1.0.2')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_RS_ASC(@"SEL$1" "T1"@"SEL$1" ("T1"."N1" "T1"."IND_PAD" "T1"."N2"))
BATCH_TABLE_ACCESS_BY_ROWID(@"SEL$1" "T1"@"SEL$1")
END_OUTLINE_DATA
*/

10053 trace shows:-

****** Costing Index T1_I1
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
ColGroup Usage:: PredCnt: 3 Matches Full: #1 Partial: Sel: 0.002000
SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
ColGroup Usage:: PredCnt: 3 Matches Full: #1 Partial: Sel: 0.002000
Access Path: index (AllEqRange)
Index: T1_I1
resc_io: 25.000000 resc_cpu: 186236
ix_sel: 0.002000 ix_sel_with_filters: 0.002000
Cost: 25.005132 Resp: 25.005132 Degree: 1
Best:: AccessPath: IndexRange
Index: T1_I1
Cost: 25.005132 Degree: 1 Resp: 25.005132 Card: 20.000000 Bytes: 0.000000

***************************************

Calculation For Fast Full Index Scan

Formula for a fast full index scan:-
iocost := (no. of leaf blocks / MBRC) * (mreadtim / sreadtim)
cpucost := #cpu cycles / (cpuspeed * (sreadtim * 1000))

Formula for an index skip scan:-

iocost := no. of distinct values of the skipped column * (blevel + leaf_blocks * density of the column in the where clause)

optimizer_index_cost_adj:-

You can use the parameter optimizer_index_cost_adj to adjust the index scan cost: simply multiply the index cost formula by optimizer_index_cost_adj/100. The default value in 10g is 100, so by default it has no effect on the cost calculation.

For example:
Cost := (blevel + ceiling(leaf_blocks * effective index selectivity) + ceiling(clustering_factor * effective table selectivity)) * optimizer_index_cost_adj/100
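The scaling itself is trivial but easy to sketch. The helper name is ours, and the optimizer's internal rounding of the adjusted cost is not modelled here.

```python
# optimizer_index_cost_adj scales the computed index cost; the default is 100
def adjusted_cost(index_cost, cost_adj=100):
    return index_cost * cost_adj / 100

print(adjusted_cost(25))      # 25.0 -- the default of 100 has no effect
print(adjusted_cost(25, 50))  # 12.5 -- index paths look half as expensive
```

Lowering the parameter below 100 makes index access paths look cheaper relative to full scans, which is why it is sometimes used as a blunt instrument to favour indexes.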

Calculation For Index Scan with Histogram

Formula for an index scan (with histogram):-

Cost := blevel + ceiling(leaf_blocks * effective index selectivity)
blevel = 4
leaf_blocks = 3142
selectivity = 999/21992
Cost := 4 + ceiling(3142 * 999/21992) = 147

SQL> exec dbms_stats.gather_table_stats(user,'IT1',method_opt=>'for columns size 7 c1',estimate_percent=>100);

We have generated a frequency histogram on column c1, so the cardinality (999) exactly matches the number of rows with the value 'F'. Without the histogram, the cardinality was calculated as 21992/7 = 3142 and a full table scan was chosen.

 

SQL> select c1 from it1 where c1='F';

999 rows selected.

-----------------------------------------------------------------------------
| Id | Operation        | Name   | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|  0 | SELECT STATEMENT |        |  999 |  976K |    147  (0)| 00:00:02 |
|* 1 | INDEX RANGE SCAN | IT1_C1 |  999 |  976K |    147  (0)| 00:00:02 |
-----------------------------------------------------------------------------
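The cost of 147 in the plan can be re-derived from the histogram selectivity. This is our sketch; 999 is the row count for value 'F' and 21992 the table row count quoted above.

```python
import math

# Index range scan cost with the frequency-histogram statistics quoted above
blevel = 4
leaf_blocks = 3142
selectivity = 999 / 21992   # rows for value 'F' / total rows

cost = blevel + math.ceil(leaf_blocks * selectivity)
print(cost)  # 147
```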

Undo Advisor helps to estimate the undo tablespace size and undo retention to avoid ORA-1555

Prepared by: Nurullah Sharif

Scope: Undo Advisor

Undo Advisor helps to estimate the undo tablespace size and also advises on undo retention.

SQL> @db
NAME      OPEN_MODE
--------- --------------------
COLLPROD  READ WRITE

 

Undo retention is 900 seconds, which is 15 minutes:

SQL> sho parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS2

Undo tablespace Size:

SQL> @tblspc
TABLESPACE_NAME                ALLOCATED_MB    USED_MB FREE_SPACE_MB PERCENT_USED
------------------------------ ------------ ---------- ------------- ------------
SYSAUX                                  300    -88.625       388.625       -29.54
SYSTEM                                  300    -55.625       355.625       -18.54
DATA                                  30720        109         30611          .35
UNDOTBS2                                125       5.75        119.25          4.6
UNDOTBS1                                140    18.8125      121.1875        13.44
USERS                                     5     1.3125        3.6875        26.25
SYSTEM                                  870    514.375       355.625        59.12
SYSAUX                                 1350    961.375       388.625        71.21

8 rows selected.

 

Note: To make the undo tablespace fixed-size, Oracle suggests that you first allow enough time after database creation to run a full workload, thus allowing the undo tablespace to grow to its minimum required size to handle the workload. Then, you can use the Undo Advisor to determine, if desired, how much larger to set the size of the undo tablespace to allow for long-running queries and Oracle Flashback operations.

Historical information in memory:

SQL> set serveroutput on
SQL> DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  dbms_output.put_line('=================================================================');
  dbms_output.put_line('The Minimum size of the undo tablespace required is : '||utbsiz_in_MB||' MB');
  dbms_output.put_line('=================================================================');
END;
/
==============================================
The Minimum size of the undo tablespace required is : 72 MB
==============================================

PL/SQL procedure successfully completed.

The function undo_info is used to get information about the undo tablespace of the current instance.

It returns the undo tablespace name, the maximum possible size, whether it is auto-extensible, the current undo retention value, and whether the undo tablespace has guaranteed undo retention.

SQL> set serveroutput on
 SQL> DECLARE
 tbs_name VARCHAR2(30);
 tbs_size NUMBER(10);
 tbs_autoextend BOOLEAN;
 tbs_retention NUMBER(5);
 tbs_guarantee BOOLEAN;
 undo_adv BOOLEAN;
 BEGIN
 dbms_output.put_line('=====================================================================');
 undo_adv := dbms_undo_adv.undo_info(tbs_name, tbs_size, tbs_autoextend, tbs_retention, tbs_guarantee);
 If undo_adv=TRUE then
 dbms_output.put_line('UNDO Tablespace Name : ' || tbs_name);
 dbms_output.put_line('UNDO tablespace is '|| CASE WHEN tbs_autoextend THEN 'Auto Extensiable' ELSE 'Fixed Size' END);
 If tbs_autoextend=TRUE then dbms_output.put_line('UNDO Tablespace Maximum size (MB) is : ' || TO_CHAR(tbs_size));
 else dbms_output.put_line('UNDO Tablespace Fixed size (MB) is : ' || TO_CHAR(tbs_size));
 end if;
 dbms_output.put_line('Undo Retention is ' || TO_CHAR(tbs_retention)||' Seconds' ||' Equivelant to ' ||round((tbs_retention/60),2) ||' Minutes');
 dbms_output.put_line('Retention : '||CASE WHEN tbs_guarantee THEN 'Guaranteed ' ELSE 'Not Guaranteed' END);
 else dbms_output.put_line('Function undo_info can only run if parameters undo_management is auto');
 end if;
 dbms_output.put_line('=====================================================================');
 END;
 /

==============================================

UNDO Tablespace Name : UNDOTBS2

UNDO tablespace is Auto Extensiable

UNDO Tablespace Maximum size (MB) is : 32768

Undo Retention is 900 Seconds Equivelant to 15 Minutes

Retention : Not Guaranteed

==============================================

PL/SQL procedure successfully completed.

 

Function longest_query returns the length (in seconds) of the longest-running query for a given period.

SQL> SELECT 'The Length of the Longest Query in Memory is ' || dbms_undo_adv.longest_query LONGEST_QUERY FROM dual;
LONGEST_QUERY
--------------------------------------------------------------------------------
The Length of the Longest Query in Memory is 1472

 

The Output using Start/End time :

SQL> SELECT 'The Length of the Longest Query During This Time Range is ' ||dbms_undo_adv.longest_query(SYSDATE-1/24, SYSDATE) LONGEST_QUERY FROM dual;

LONGEST_QUERY

The Length of the Longest Query During This Time Range is 1080

Function required_retention returns the value of undo_retention required to satisfy the longest query, based on the available undo statistics.

SQL> SELECT 'The Required undo_retention using Statistics In Memory is ' || dbms_undo_adv.required_retention required_retention FROM dual;

REQUIRED_RETENTION
--------------------------------------------------------------------------------
The Required undo_retention using Statistics In Memory is 1472

The current retention is 900 seconds, while the required retention is 1472 seconds.

 

SQL> SELECT 'The Required undo_retention During This Time Range is ' ||dbms_undo_adv.required_retention(SYSDATE-1/24, SYSDATE) required_retention FROM dual;

REQUIRED_RETENTION
--------------------------------------------------------------------------------
The Required undo_retention During This Time Range is 1080

 

SQL> SELECT 'The Required undo tablespace size using Statistics In Memory is ' || dbms_undo_adv.required_undo_size(900) || ' MB' required_undo_size FROM dual;

REQUIRED_UNDO_SIZE
--------------------------------------------------------------------------------
The Required undo tablespace size using Statistics In Memory is 80 MB

 

Function undo_health is used to check whether there is any problem with the current undo_retention setting and undo tablespace size, based on the historical information of a given period, and provides a recommendation to fix the problem.

If the return value is 0, no problem is found. Otherwise, the parameters "problem" and "recommendation" contain the problem and the recommendation for fixing it.

 

The Output Parameters are :

 

problem: problem of the system. It can be, for example, "long running query may fail" or "undo tablespace cannot satisfy undo_retention".

recommendation: recommendation on fixing the problem found.

rationale: rationale for the recommendation.

retention: numerical value of retention, if the recommendation is to change retention.

utbsize: numerical value of the undo tablespace size in MB, if the recommendation is to change the undo tablespace size.

 

DECLARE
 prob VARCHAR2(100);
 reco VARCHAR2(100);
 rtnl VARCHAR2(100);
 retn PLS_INTEGER;
 utbs PLS_INTEGER;
 retv PLS_INTEGER;
 BEGIN
 retv := dbms_undo_adv.undo_health(prob, reco, rtnl, retn, utbs);
 dbms_output.put_line('=====================================================================');
 If retv=0 Then dbms_output.put_line('Problem: ' || prob || ' The undo tablespace is OK');
 ELSIF retv=2 Then dbms_output.put_line('Long running queries may fail , The recommendation is :' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('retention: ' || TO_CHAR(retn));
 ELSIF retv=3 Then
 dbms_output.put_line('The Undo tablespace cannot satisfy the longest query ,The recommendation is : ' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 dbms_output.put_line('retention: ' || TO_CHAR(retn));
 ELSIF retv=4 Then
 dbms_output.put_line('The System does not have an online undo tablespace , The recommendation is : ' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 ELSIF retv=1 Then
 dbms_output.put_line('The Undo tablespace cannot satisfy the specified undo_retention or The Undo tablespace cannot satisfy auto tuning undo retention , The recommendation is :' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 end if;
 dbms_output.put_line('=====================================================================');
 END;
 /

PL/SQL procedure successfully completed.

Needed undo tablespace size for the specified undo retention:-

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
 (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
 g.undo_block_per_sec) / (1024*1024)
 "NEEDED UNDO SIZE [MByte]"
 FROM (
 SELECT SUM(a.bytes) undo_size
 FROM v$datafile a,
 v$tablespace b,
 dba_tablespaces c
 WHERE c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#
 ) d,
 v$parameter e,
 v$parameter f,
 (
 SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
 undo_block_per_sec
 FROM v$undostat
 ) g
 WHERE e.name = 'undo_retention'
 AND f.name = 'db_block_size'
 /
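The query above boils down to retention x block size x peak undo generation rate. A back-of-the-envelope Python sketch of the same arithmetic follows; the peak rate of 10 undo blocks/sec is an assumed figure for illustration, not a value taken from this system.

```python
# Needed undo size = undo_retention * db_block_size * peak undo blocks/sec
undo_retention = 900        # seconds (the current setting)
db_block_size = 8192        # bytes
undo_blocks_per_sec = 10    # assumed peak rate from v$undostat

needed_bytes = undo_retention * db_block_size * undo_blocks_per_sec
needed_mb = needed_bytes / (1024 * 1024)
print(round(needed_mb, 1))  # 70.3
```

For a real sizing exercise, the MAX(undoblks/...) subquery above supplies the observed peak rate instead of an assumed one.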

How to check and implement best practice for Oracle Database 11g onwards

Author: Saibal Ghosh

About author:

I am an Oracle professional with twenty-plus years of experience and have deep knowledge and understanding of various facets of Oracle technology, including basic Oracle Database Administration, Performance Tuning, Real Application Clusters, Data Guard and Maximum Availability Architecture. I also work a lot in the Oracle Database Cloud space, and I believe that a lot of the future database work will be on the Cloud.

1   Scope and Purpose of the 360 Degree Audit

There are two purposes of this audit:

The primary purpose of the audit is to take a step back and look into the database to see whether anything is grossly misconfigured, whether some parameters have not been optimally set, whether something needs to be fixed immediately on priority, or whether any item needs attention in the short term, so that at the end of it we have the reassurance that the database has been studied diligently for any obvious shortcomings.

The secondary purpose of the audit is to ensure that we make an attempt at following the documented best practices to the extent practicable as well as try to ensure that we are able to leverage the functionalities of Oracle 11g extensively.

This audit does not claim to cover everything inside the database, but attempts to establish a general health and well-being check of the database, as well as point out any problem areas or potential problem areas.

2 Introduction

The ORCLPRD database is a two node RAC Cluster with the combined processing power of the two servers providing greater throughput and Oracle RAC scalability than is available from a single server. The same RAC setup is there in KOLKATA and DELHI.

The ORCLPRD database cluster comprises two interconnected computers or servers that appear as if they are one server to end users and applications. The Oracle RAC option with Oracle Database enables us to cluster Oracle databases. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.

Basically, the ORCLPRD Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle Database. Oracle Clusterware is a required component for using Oracle RAC that provides the infrastructure necessary to run Oracle RAC. Oracle Clusterware also manages resources, such as Virtual Internet Protocol (VIP) addresses, databases, listeners, services, and so on. In addition, Oracle Clusterware enables both non-clustered Oracle databases and Oracle RAC databases to use the Oracle high-availability infrastructure. Oracle Clusterware, along with Oracle Automatic Storage Management (Oracle ASM) (the two together comprise the Oracle Grid Infrastructure), enables us to achieve high availability of the Oracle database. We have extensively used these features to set up the ORCLPRD database.

The Oracle Database with the Oracle Real Application Clusters (RAC) option allows running multiple database instances on different servers in the cluster against a shared set of data files, also known as the database. The database spans multiple hardware systems and yet appears as a single unified database to the application. This enables the utilization of commodity hardware to reduce total cost of ownership and to provide a scalable computing environment that supports various application workloads. Oracle RAC is Oracle’s premier shared disk database clustering technology.

The basic database version and configuration is set out below:

 

Configuration          Value
---------------------  -------------------------------------
Name of the Database   ORCLPRD
Name of the Instances  ORCLPRD1, ORCLPRD2
Environment Setting    Set . oraenv to ORCLPRD1 or ORCLPRD2
ORACLE_BASE            /orasw/app/oracle
ORACLE_HOME            /orasw/app/oracle/product/11.2.0/db_1
GRID BASE              /orasw/app/grid
GRID HOME              /orasw/app/11.2.0/grid

 

The Operating System version is as follows:

The TPC Real Application Cluster setup is a two node RAC cluster, as shown in the screenshot below:

3 CLUSTERWARE CONFIGURATION

Oracle Clusterware includes a high availability framework that provides an infrastructure to manage any application. Oracle Clusterware ensures that the applications it manages start when the system starts and monitors the applications to ensure that they are always available. If a process fails then Oracle Clusterware attempts to restart the process using agent programs (agents). Oracle clusterware provides built-in agents so that we can use shell or batch scripts to protect and manage an application. Oracle Clusterware also provides preconfigured agents for some applications (for example for Oracle TimesTen In-Memory Database).

If a node in the cluster fails, then we can program processes that normally run on the failed node to restart on another node. The monitoring frequency, starting, and stopping of the applications and the application dependencies are configurable.

We also notice that the Cluster Time Synchronization daemon is running in OBSERVER mode, which implies that we are using NTP to synchronize time amongst the nodes in the RAC cluster.

See the screenshot below:

We have followed documented clusterware best practices, namely:

  • We are using DNS to enable Load Balancing. Load Balancing is enabled both at client end as well as server end.
  • We have installed the Oracle software on a local home.
  • Oracle Clusterware and Oracle ASM have been both installed in one home on a non-shared file system called the Grid Infrastructure home.
  • In general, SCAN name is used to resolve IP addresses to take advantage of client and server side load balancing.
  • The ability to migrate client connections to and from the nodes on which we are working is a critical aspect of planned maintenance. Migrating client connections should always be the first step in any planned maintenance activity requiring software shutdown (for example, when performing a rolling upgrade). The potential for problems increases if there are still active database connections when the service switchover commences. To enhance robustness and performance, it would be good to configure all of the following best practices:

 

  • Client is configured to receive FAN notifications and is properly configured for run time connection load balancing and Fast Connection Failover.
  • Oracle Clusterware stops services on the instance to be brought down or relocates services to an alternate instance.
  • Oracle Clusterware returns a Service-Member-Down event.
  • Client that is configured to receive FAN notifications receives a notification for a Service-Member-Down event and moves connections to other instances offering the service.
  • Mirror the Oracle Cluster Registry (OCR) and configure multiple voting disks with Oracle ASM (done in the case of the TPC databases).
  • It is strongly recommended that the services created are registered with the clusterware; this enhances the high-availability aspect of each service.

4  Current Database Memory Configuration on the System

Following is the main memory configuration for the ORCLPRD database. The exact same configuration exists for both the KOLKATA and DELHI nodes. All memory configurations have been done assuming a high load on the database. Automatic Memory Management is enabled on the database: we have set a target memory size initialization parameter (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The total memory that the instance uses remains relatively constant, based on the value of MEMORY_TARGET, and the instance automatically distributes memory between the system global area (SGA) and the instance program global area (instance PGA). As memory requirements change, the instance dynamically redistributes memory between the SGA and instance PGA.

Memory                 Size (GB)  Remarks
---------------------  ---------  ------------------------------------------------------------
memory_max_target      50         The maximum memory that will ever be available to the system.
memory_target          50         The memory that is currently accessible to the system.
sga_target             0          Not set, because we are using AMM.
pga_aggregate_limit    0          Need not be set, because we are using AMM.
Database Buffer Cache  22         Current size of the buffer cache.
result_cache_max_size  128 MB     Kept small, as in general we do not plan to use the Result Cache in our setup.


4.1    System Global Area (SGA) Configuration:

A system global area (SGA) is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, then the data in the instance’s SGA is shared among the users. Consequently, the SGA is sometimes called the shared global area.

An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.

The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance’s SGA, and several processes write to the SGA during execution of Oracle.

The SGA contains the following data structures:

  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Streams pool
  • Data dictionary cache
  • Other miscellaneous information

Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access; this is called the fixed SGA. No user data is stored here. The SGA also includes information communicated between processes, such as locking information.

Let’s check the total memory configuration along with the SGA. The total memory allocated for the instance is 50 GB, but we have not set any value for SGA_TARGET, which means we are letting Oracle determine the size of the SGA according to its needs. This is technically called Automatic Memory Management or AMM:
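As a quick cross-check, the relevant parameters can be read from v$parameter; a sketch, to be run as a privileged user:

```sql
-- Under AMM we expect MEMORY_TARGET/MEMORY_MAX_TARGET > 0 and
-- SGA_TARGET and PGA_AGGREGATE_TARGET both 0.
SELECT name, display_value
FROM   v$parameter
WHERE  name IN ('memory_target', 'memory_max_target',
                'sga_target', 'pga_aggregate_target');
```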

Note: If we are using AMM or ASMM then it is imperative that /dev/shm is set to a size at least equal to or greater than the MEMORY_TARGET parameter. In our case, this requirement has been fulfilled.  While /dev/shm is set to 127G, the MEMORY_TARGET is capped at 50G.

Recommendation: Where the SGA is greater than 8 GB, as is the case here, the recommendation is to use HugePages. Ref:

https://docs.oracle.com/cd/E37670_01/E37355/html/ol_about_hugepages.html

This needs to be seriously considered, as configuring HugePages gives a definite performance increase. Without HugePages, the operating system manages memory in 4 KB pages; when it allocates pages to the database System Global Area (SGA), the kernel must continually update its page table with the page lifecycle (dirty, free, mapped to a process, and so on) for each 4 KB page allocated to the SGA.

With HugePages, the operating system page table (virtual memory to physical memory mapping) is smaller, because each page table entry is pointing to pages from 2 MB to 256 MB.

Also, the kernel has fewer pages whose lifecycle must be monitored. For example, if you use HugePages on 64-bit hardware and want to map 256 MB of memory, you may need only one page table entry (PTE). Without HugePages, mapping 256 MB of memory requires 256 MB / 4 KB = 65,536 PTEs.
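The arithmetic above can be checked directly; a minimal sketch comparing 4 KB pages with 2 MB HugePages:

```shell
# Page-table entries needed to map 256 MB of memory:
# with 4 KB pages versus with 2 MB HugePages.
MAP_BYTES=$((256 * 1024 * 1024))
PTES_4K=$((MAP_BYTES / 4096))               # 65536 entries
PTES_2M=$((MAP_BYTES / (2 * 1024 * 1024)))  # 128 entries
echo "4K pages: $PTES_4K PTEs, 2M HugePages: $PTES_2M PTEs"
```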

HugePages provides the following advantages:

  • Increased performance through increased (Transaction Lookaside Buffer) TLB hits
  • Pages are locked in memory and never swapped out, which provides RAM for shared memory structures such as SGA
  • Contiguous pages are preallocated and cannot be used for anything else but for System V shared memory (for example, SGA)
  • Less bookkeeping work for the kernel for that part of virtual memory because of larger page sizes

Note: To set up HugePages we need to disable AMM (Automatic Memory Management). This constraint does not apply to ASMM (Automatic Shared Memory Management), which is automatically enabled when the SGA size is set to any value greater than 0.
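A rough sizing sketch for vm.nr_hugepages, assuming a move to ASMM with an illustrative 48 GB SGA and the default 2 MB HugePage size; on the real host, Oracle's hugepages_settings.sh script (MOS Doc ID 401749.1) should be used instead:

```shell
# Illustrative only: number of 2 MB HugePages needed for a 48 GB SGA.
SGA_GB=48
HUGEPAGE_KB=2048
NR_HUGEPAGES=$(( SGA_GB * 1024 * 1024 / HUGEPAGE_KB ))
echo "vm.nr_hugepages = $NR_HUGEPAGES"   # then persist in /etc/sysctl.conf
```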

4.2   PGA Configuration

As discussed earlier in the report, since Automatic Memory Management is enabled on the system, the parameter PGA_AGGREGATE_TARGET is also set to 0.

Let us check the PGA performance:
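One way to check PGA performance is to query v$pgastat; a sketch:

```sql
-- Overall PGA cache hit ratio; values above ~90% generally indicate
-- that work areas are being satisfied in memory.
SELECT value AS pga_cache_hit_pct
FROM   v$pgastat
WHERE  name = 'cache hit percentage';
```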

The PGA cache hit percentage is 96%, which is quite decent; at the current workload there is no PGA-related problem on the database.

Recommendation: There are no additional recommendations regarding the PGA configuration, as it has been optimally set up.

 

5 ASM MEMORY CONFIGURATION

5.1       Current setup

  • The minimum MEMORY_TARGET for Oracle ASM is 1 GB. If we set MEMORY_TARGET lower, then Oracle increases the value for MEMORY_TARGET to 1 GB automatically. In this case we are fine. However, there is a known bug – Bug 19434529 – ORA-4031 in ASM shared pool due to leaked “kffil” state objects (Doc ID 19434529.8)
  • The fix is to apply Patch: 19769480

Recommendation: There are no additional recommendations with regard to ASM memory.

5.2   STORAGE CONFIGURATION

Background: When Automatic Storage Management was introduced in Oracle 10gR1, it was marketed simply as the volume manager for the Oracle database. ASM was designed as a purpose-built, host-based volume manager and filesystem integrated with the Oracle database. It is built on the Stripe And Mirror Everything (SAME) methodology, which stripes and mirrors files across as many disks as possible and provides the ability to rebalance the file layout online whenever the physical storage configuration changes.

From the user’s perspective, ASM exposes a small number of disk groups. These disk groups consist of ASM disks and files that are striped across all the disks in a disk group. The disk groups are global in nature and database instances running individually or in clusters have a shared access to the disk groups and the files within them. The ASM instances communicate amongst themselves and form the ASM cluster.

5.2.1   Current setup:

Let us see the current disk group setup and the free space available:

Disk Group Name  Total Size (GB)  Currently Available Size (GB)
ORCL_DATA        600              193
ORCL_FRA         400              353
ORCL_REDO        50               42
OCR_VOTE         3                2

So we have 4 disk groups with ample free space available. Now let us look at the sizes of the disks that make up each disk group. The test is to confirm whether we have equi-sized disks in a disk group. As can be seen from the screenshot below, all the disks in a particular disk group are equi-sized. This is important from both a stability and a performance perspective.
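The disk group totals and per-disk sizes behind the screenshots can be pulled from the ASM instance; a sketch:

```sql
-- Disk group capacity and free space, in GB
SELECT name, ROUND(total_mb/1024) AS total_gb, ROUND(free_mb/1024) AS free_gb
FROM   v$asm_diskgroup;

-- Per-disk sizes; disks within a group should be equi-sized
SELECT group_number, name, total_mb
FROM   v$asm_disk
ORDER  BY group_number, name;
```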

Next we confirm that the COMPATIBLE.ASM value is set to 11.1 or higher for each disk group; a disk group's attributes are displayed in the V$ASM_ATTRIBUTE view output only when this is the case. Attributes that are designated as read-only (Y) can only be set during disk group creation.

Similarly, we check the software compatibility for each disk group. These are also properly configured:

We check the ASM_POWER_LIMIT parameter. This parameter indicates how fast a rebalance operation can proceed after a (planned) maintenance. Up to Oracle 11.2.0.1 the default value is 1 and the maximum value is 11; from Oracle 11.2.0.2 the maximum value is 1024. We can see from the following screenshot that we are using the default rebalancing value.

5.3   Best Practices & Recommendations:

 

The number of LUNs (Oracle ASM disks) for each disk group should be at least equal to four times the number of active I/O paths**. For example, if a disk group has two active I/O paths, then a minimum of eight LUNs should be used. The LUNs should be of equal size and performance for each disk group. Re: "Oracle ASM and Multi-Pathing Technologies" in My Oracle Support Note 294869.1 - this needs to be checked with the Storage Administrator.

 

**An I/O path is a distinct channel or connection between storage presenting LUNs and the server. An active I/O path is an I/O path on which the I/O load on a LUN is multiplexed through multipathing software.

It is strongly recommended that all Oracle ASM disks in a disk group have similar storage performance and availability characteristics. In storage configurations with mixed speed drives, such as flash memory and hard disk drives (HDD), I/O performance is constrained by the slowest speed drive. Re: https://docs.oracle.com/cd/E11882_01/server.112/e18951/asmprepare.htm#OSTMG02550

5.4   Storage Recommendations:

1.    Consider setting the ASM_POWER_LIMIT parameter to around 200 to speed up rebalancing operations in the case of planned maintenance.

2.    Minimize I/O contention between Oracle ASM disks and other applications by dedicating disks in Oracle ASM disk groups.
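The higher power can also be supplied for one specific operation instead of changing the instance-wide default; a sketch (the disk group name is from this report, and POWER values above 11 assume COMPATIBLE.ASM is 11.2.0.2 or higher):

```sql
-- One-off rebalance at higher power after a planned disk change
ALTER DISKGROUP ORCL_DATA REBALANCE POWER 200;

-- Or raise the instance-wide default (dynamic parameter)
ALTER SYSTEM SET ASM_POWER_LIMIT = 200;
```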

6  TABLESPACE CONFIGURATION

6.1    CURRENT SETUP

First, let us check the tablespaces, their associated datafiles, and the size of such files:

It is noticed that while most of the tablespaces are autoextensible, several are not. Additionally, files that have been added to tablespaces are randomly sized. This shows that the tablespaces are NOT being optimally managed.
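The autoextend status and file sizes referred to here can be listed with a query along these lines (a sketch):

```sql
-- Datafile sizes and autoextend status per tablespace
SELECT tablespace_name, file_name,
       ROUND(bytes/1024/1024/1024, 1) AS size_gb,
       autoextensible
FROM   dba_data_files
ORDER  BY tablespace_name, file_name;
```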

Let us see the free space available in the tablespaces. Space is being shown in GB:

There is ample free space in all the tablespaces, and since most of the tablespaces are autoextensible, space availability is reasonably taken care of on this system.

 

6.2    Best Practice & Recommendations:

There are a couple of recommendations here:

  1. All tablespaces may be made autoextensible. In that case the underlying files grow (or are configured at the outset) to ~32 GB before a new file needs to be added to the tablespace. Normal monitoring and threshold alarms need to be configured for both tablespace and storage usage.
  2. Alternatively, tablespaces can be made non-autoextensible, with datafiles of uniform size (perhaps 20 GB) added as needed. Monitoring of storage and tablespaces has to be done with the same rigour as before.
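Either option uses standard DDL; a sketch (the file names and tablespace name are illustrative, not taken from this system):

```sql
-- Option 1: let an existing datafile autoextend up to ~32 GB
ALTER DATABASE DATAFILE '+ORCL_DATA/orclprd/datafile/users_01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;

-- Option 2: add a uniformly sized 20 GB file with autoextend off
ALTER TABLESPACE users
  ADD DATAFILE '+ORCL_DATA' SIZE 20G AUTOEXTEND OFF;
```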

7    CONTROLFILES AND REDOLOG FILES

7.1    CURRENT SETUP

The controlfiles are multiplexed and kept at separate locations, but both inside ASM: one copy is in the ORCL_FRA diskgroup, while the other is in the ORCL_REDO diskgroup.

There are 4 redo log groups, each having 2 members, as can be seen from the screenshot below:

And the size of each logfile is 2048 MB, as can be seen below:

Let us take a sample of the number of log switches made in an hour:

At this point we can see the number of switches is just 3 per hour, which is fine.
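The switch rate sampled above can be reproduced at any time with a query like this (a sketch):

```sql
-- Log switches per hour over the last 24 hours
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;
```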

 

7.2     Best Practices & Recommendations:

 

Both controlfiles are multiplexed within ASM. Consider keeping one copy in ASM and the other on the Unix filesystem; or keep the current configuration but add a third multiplexed copy of the controlfile on the Unix filesystem.

Redo logs have been configured appropriately, and a sufficient number of groups has been created to ensure that we should never encounter the dreaded error:

“Checkpoint not complete, cannot allocate new log” message

 

8   NETWORKING AND SCAN CONFIGURATION

8.1                            Current Setup

Below is the Network and SCAN configuration for the KOLKATA site

KOLKATA

 

SCAN name: epedbp1-scan, Network: 1

 

Type of IP        IP Address     Remarks
SCAN IP 1         10.XX.26.102   Enabled
SCAN IP 2         10.XX.26.103   Enabled
SCAN IP 3         10.XX.26.104   Enabled
VIP               10.XX.26.101   Enabled
VIP               10.XX.26.107   Enabled
Client IP-Node 1  10.XX.5.187    Enabled
Client IP-Node 2  10.XX.5.155    Enabled
Private IP        192.168.5.1    Enabled
Private IP        192.168.5.2    Enabled

 

Let us check whether the round-robin algorithm is working while resolving the SCAN name. Round robin means the SCAN name resolves to a different IP address each time an attempt is made to connect to the database. This is load balancing at the connection level.

Additionally, we will check how the SCAN listeners are placed across the nodes. Since there are three SCAN listeners, typically two of them will be placed on one node and the third on the remaining node (in the case of a two-node RAC configuration). In an ideal situation there should not be any deviation from this arrangement.

Node 1

Node 2

Finally, let’s have a look at the VIPs

Thus, network and SCAN have been properly set up on the ORCLPRD database.

 

8.2  Best Practices & Recommendations:

The DNS server which resolves the SCAN name should be properly secured in the sense that only authorized personnel should have access to the server. Additionally, there should be a backup of the DNS server, in case there is a requirement for maintenance on the main DNS server, so that there is no interruption of service.

As a best practice the round robin algorithm ought to be enabled on the DNS server, so that connect time load balancing is automatically occurring and the connections are equitably distributed amongst the servers.

Finally, SCAN should be leveraged by using the SCAN name to connect instead of using physical IPS or VIPs. It was noticed that several connections were actually made using VIPS and other IPs-this should be strongly discouraged.

NO CHANGES SHOULD BE MADE ON THE NETWORK WITHOUT TAKING THE CONCERNED DBAs INTO CONFIDENCE, AS THE NETWORK IS A MAJOR COMPONENT OF A PROPERLY FUNCTIONING REAL APPLICATION CLUSTERS DATABASE SOLUTION.

9   BASIC HOUSEKEEPING

 

9.1     Alertlogs:

There seems to be no log rotation in place for the alert logs. The alert log has grown quite big, as can be seen in the screenshot below:

 

9.2     Listener Logs:

The listener logs are in .xml format. Being XML, many tools can read the files unambiguously, since the data is enclosed within meaningful tags. Additionally, the XML-format listener log is rotated: after reaching a certain threshold size the file is renamed to log_1.xml and a new log.xml is created - somewhat akin to the archived log concept for redo log files. So we are relieved of the task of deleting old listener logs lest they fill up the system. See the screenshot below:

 

 

However, Oracle still writes a version of the listener log in the old format. This log is still called listener.log, but it is created in a different directory - $ADR_BASE/tnslsnr/Hostname/listener/trace. Unfortunately there is no rotation scheme for this file, so it simply keeps growing and has to be trimmed manually.
Similarly, as part of housekeeping, the trace files (*.trc) generated on the system need to be trimmed from time to time. There is no automatic mechanism to delete these files on the system.

There is a cron job running which deletes old listener logs.

9.3    Backups:

The basic backup configuration scheduled through the Oracle Job Scheduler is as follows:

 

  • A Level 0 Backup is taken to ASM every day (02:05 hours).
  • A Level 1 Backup is taken to ASM every Monday, Tuesday, Wednesday, Friday and Saturday (00:05 hours).

The following has been set up in cron

  • A Full Backup to NBU is done every day (00:05 hours).
  • An archivelog deletion script runs every 4 hours compressing the archives and then deleting the input.

The RMAN configuration is set to keep one backup in addition to the current backup, as can be seen from the screenshot below:

9.4       Partitioning scheme:

The table MESSAGELOG has interval partitioning enabled on it (creating a new partition everyday) and an auto-dropping of older partitions coded on it.

This allows for optimized space usage and ease of administration, as long as we keep a small number of tables under this scheme. Currently we are keeping about three days' worth of data. We may sometimes see four partitions; this happens when the new partition has already been created but the older partition's scheduled drop time has not yet arrived.

9.5   Filesystem:

The Unix filesystem is shown below:

 

/orasw needs to be cleaned up as the available space on that directory is only about 17G and the directory is 78% used already. There seems to be no automatic deletion script in place and this could be a major issue in the scheme of things.

9.6    Best Practices & Recommendations:

Alert logs, trace files, and listener logs all need to be deleted from time to time. Using the Unix utility logrotate for this is a very good way of taking care of unwanted runaway growth of these files.
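A minimal logrotate sketch for the text alert log and the old-style listener log; the paths are assumptions and must be adapted to the host, and copytruncate is used because Oracle keeps these files open:

```text
# /etc/logrotate.d/oracle_orclprd (illustrative)
/u01/app/oracle/diag/rdbms/orclprd/ORCLPRD*/trace/alert_ORCLPRD*.log
/u01/app/oracle/diag/tnslsnr/*/listener/trace/listener.log {
    weekly
    rotate 4
    compress
    copytruncate
    missingok
    notifempty
}
```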

In the ORCLPRD database, alert logs, trace files and listener logs are NOT being managed that way. This needs to be taken care of. There are two observations regarding backups:

  1. All backups ought to be set up using the Oracle Job Scheduler because the scheduler operates at the database level, and the backup will still run if there is a problem on any of the nodes in the cluster as long as one node is up and running. This is not true if the backup is scheduled in the cron and the node on which the job is scheduled in the cron happens to be down.
  2. There is an RMAN level 0 backup taken every day, as well as an RMAN level 1 backup taken five days a week - this is a suboptimal configuration and a waste of computing resources (space, memory and CPU).

IT IS STRONGLY SUGGESTED THAT RESTORE AND RECOVERY ARE BOTH TESTED OUT, BOTH FROM ASM AS WELL AS FROM THE NET BACKUPS.

 

 

10   IMPORTANT PARAMETER SETTINGS:

Below is a set of generic parameter settings which are considered the most important ones from a RAC database optimization and performance management perspective. This list of parameters is by no means exhaustive, and it is suggested that one refer to the Oracle documentation (specifically the Performance Tuning Guide and the Administrator's Guide) for further information on parameter settings. The parameter settings need to be tested in a lab setup before being implemented in production.

  

  • Optimize Instance Recovery Time

Instance recovery is the process of recovering the redo thread from the failed instance. Instance recovery is different from crash recovery, which occurs when all instances accessing a database have failed. Crash recovery is the only type of recovery when an instance fails using a single-instance Oracle Database.

When using Oracle RAC, the SMON process in one surviving instance performs instance recovery of the failed instance.

In both Oracle RAC and single-instance environments, checkpointing is the internal mechanism used to bound Mean Time To Recover (MTTR). Checkpointing is the process of writing dirty buffers from the buffer cache to disk. With more aggressive checkpointing, less redo is required for recovery after a failure. The MTTR target on the ORCLPRD database is not set. Enable the MTTR Advisor by setting FAST_START_MTTR_TARGET to a value greater than zero.

The recommendation would be to set it to 300.
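The change can be made online; a sketch:

```sql
-- Enable the MTTR advisor with a 300-second target on all instances
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300 SCOPE = BOTH SID = '*';
```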

Please refer to http://www.oracle.com/goto/maa for best practices.

 

  • Maximize the Number of Processes Performing Transaction Recovery

The FAST_START_PARALLEL_ROLLBACK parameter determines how many processes are used for transaction recovery, which is done after redo application. Optimizing transaction recovery is important to ensure an efficient workload after an unplanned failure. If the system is not CPU bound, setting this parameter to HIGH is a best practice. This causes Oracle to use four times the CPU_COUNT (4 X CPU_COUNT) parallel processes for transaction recovery. The default setting for this parameter is LOW, or two times the CPU_COUNT (2 X CPU_COUNT). See the screenshot below:

 

 

Recommendation: consider setting the FAST_START_PARALLEL_ROLLBACK parameter to HIGH using:

ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK=HIGH SCOPE=BOTH;

When FAST_START_PARALLEL_ROLLBACK is set to HIGH, a system with a large number of CPUs will spawn many parallel recovery slaves, which can substantially increase the IOPS rate. The system should therefore not already be challenged for I/O before FAST_START_PARALLEL_ROLLBACK is set to HIGH.

The ORCLPRD system is not quite CPU bound as we can see from the following screenshot:

 

 

  • Ensure Asynchronous I/O Is Enabled

Under most circumstances, Oracle Database automatically detects if asynchronous I/O is available and appropriate for a particular platform and enables asynchronous I/O through the DISK_ASYNCH_IO initialization parameter. However, for optimal performance, it is always a best practice to ensure that asynchronous I/O is actually being used. Query the V$IOSTAT_FILE view to determine whether asynchronous I/O is used. In the case of ORCLPRD:
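The check can be run as follows (a sketch; the ASYNCH_IO column shows ASYNC_ON when asynchronous I/O is in effect):

```sql
-- Per-file async I/O status for datafiles
SELECT file_no, filetype_name, asynch_io
FROM   v$iostat_file
WHERE  filetype_name = 'Data File';
```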

 

  • Set LOG_BUFFER Initialization Parameter to 64 MB or higher.

In our case we are currently well below the 64 MB threshold:

However, without knowing the transaction rates, transaction sizes, number of users, and so on, it is difficult to say whether this in itself is a problem, but it would be good to be above the 64 MB threshold to be on the safe side.
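If the buffer is to be raised to the recommended value, note that LOG_BUFFER is a static parameter, so the change only takes effect after an instance restart; a sketch:

```sql
-- 64 MB = 67108864 bytes; requires a restart to take effect
ALTER SYSTEM SET LOG_BUFFER = 67108864 SCOPE = SPFILE SID = '*';
```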

 

  • Use Automatic Shared Memory Management and Avoid Memory Paging

For any systems with 4 GB or more memory, disable Automatic Memory Management by setting MEMORY_TARGET=0 and enable Automatic Shared Memory Management by setting SGA_TARGET.

 

Let us check this on our system:

 

Since MEMORY_TARGET is set but SGA_TARGET is not, Automatic Memory Management (AMM) is enabled on the system. As the best practice above suggests using Linux HugePages when the SGA is greater than 8 GB (which is the case here), the first step should be to move from Automatic Memory Management (AMM) to Automatic Shared Memory Management (ASMM).
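A sketch of the switch from AMM to ASMM; the 40 GB / 10 GB split is illustrative, chosen only to stay within the current 50 GB budget, and a rolling restart is required:

```sql
-- Disable AMM and enable ASMM; adjust values to the workload
ALTER SYSTEM SET MEMORY_TARGET = 0           SCOPE = SPFILE SID = '*';
ALTER SYSTEM RESET MEMORY_MAX_TARGET         SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET SGA_TARGET = 40G            SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 10G  SCOPE = SPFILE SID = '*';
```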

Additionally, the sum of the SGA and PGA memory allocations on the database server should always be less than the system's physical memory, and conservatively less than 75% of total system memory. However, PGA_AGGREGATE_TARGET is not a hard limit, and for some Data Warehouse or reporting applications the PGA memory can grow to be 3 x PGA_AGGREGATE_TARGET.

It is suggested that we monitor PGA memory and host-based memory utilization using Oracle Enterprise Manager, or by querying v$pgastat and operating systems statistics, to get an accurate understanding of memory utilization.

It is strongly recommended to move towards HugePages so that the ASM and database instances can use it for their SGAs. HugePages is a feature integrated into the Linux kernel from release 2.6. It provides an alternative to the 4 KB page size by offering bigger pages. Using HugePages saves memory by decreasing page table overhead, while making sure the memory is never paged to disk. This contributes to faster overall memory performance, and overall node stability also benefits from using HugePages.

Ensuring the entire SGA of a database instance is stored in HugePages can be accomplished by setting the init.ora parameter use_large_pages=only. Setting this parameter will ensure that an instance will start only when it can get all of its memory for SGA from HugePages. For this reason the setting use_large_pages=only is recommended for database instances.

For ASM instances, leave use_large_pages=true (the default value). This setting still ensures that HugePages are used when available, but also ensures that ASM, as part of Grid Infrastructure, starts even when HugePages are not configured or are insufficiently configured.

 

  • CURSOR SHARING

CURSOR_SHARING=SIMILAR has been deprecated as per MOS Note 1169017.1. Starting with Oracle 11g, Oracle implemented a more intelligent solution known as adaptive cursor sharing, making CURSOR_SHARING=SIMILAR obsolete.

So unless we have a very strong reason to do otherwise, it is best to stick with CURSOR_SHARING=EXACT.

 

  • DB_BLOCK_CHECKSUM

This parameter setting should be TYPICAL or FULL on all the databases. In our case it is set to TYPICAL, as can be seen from the following screenshot.

DB_BLOCK_CHECKSUM determines whether DBWn and the direct loader will calculate a checksum (a number calculated from all the bytes stored in the block) and store it in the cache header of every data block when writing it to disk. Checksums are verified when a block is read – only if this parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum before a change application from update/delete statements and recomputes it after the change is applied. In addition, Oracle gives every log block a checksum before writing it to the current log.

If this parameter is set to OFF, DBWn calculates checksums only for the SYSTEM tablespace, not for user tablespaces.

Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from reaching disk. Turning this feature on in TYPICAL mode causes only an additional 1% to 2% overhead; in FULL mode it causes 4% to 5% overhead. Oracle recommends setting DB_BLOCK_CHECKSUM to TYPICAL. For backward compatibility, the values TRUE (implying TYPICAL) and FALSE (implying OFF) are preserved.
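The current value can be verified with (a sketch):

```sql
-- Expect TYPICAL (or FULL) on all databases
SELECT value FROM v$parameter WHERE name = 'db_block_checksum';
```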

  • Use Oracle Enterprise Manager for better administration

Oracle Enterprise Manager Cloud Control enables us to use the Oracle Enterprise Manager console interface to discover Oracle RAC database and instance targets. We can use Oracle Enterprise Manager to configure Oracle RAC environment alerts. We can also configure special Oracle RAC database tests, such as global cache converts, consistent read requests, and so on.

Oracle Enterprise Manager distinguishes between database- and instance-level alerts in Oracle RAC environments. Alert thresholds for instance-level alerts, such as archive log alerts, can be set at the instance target level. This function enables us to receive alerts for the specific instance if performance exceeds our threshold. We can also configure alerts at the database level, such as setting alerts for tablespaces, to avoid receiving duplicate alerts at each instance. We can define blackouts (which are time periods in which database monitoring is suspended so that maintenance operations do not skew monitoring data or generate needless alerts) for all managed targets of an Oracle RAC database to prevent alerts from occurring while performing maintenance. So overall, we can leverage OEM to provide better manageability and administration of a database, clustered or otherwise.

 

11   SECURITY UPDATES & PATCHES

My Oracle Support provides patch recommendations for the following product lines, if we have a valid Customer Support Identifier Number.

Starting with the October 2013 Critical Patch Update, security fixes for Java SE are released under the normal Critical Patch Update schedule.

A pre-release announcement will be published on the Thursday preceding each Critical Patch Update release. Oracle will issue Security Alerts for vulnerability fixes deemed too critical to wait for distribution in the next Critical Patch Update.

 

  • Oracle Database
  • Oracle Application Server
  • Enterprise Manager Cloud Control (Management Agent/Oracle Management Service)
  • Oracle Fusion Applications
  • Oracle Fusion Middleware
  • Oracle SOA Suite
  • Oracle Real Applications Clusters
  • Oracle Real Applications Clusters with Clusterware
  • Oracle Dataguard
  • Oracle Exadata Database Machine and Exadata Storage Server
  • Oracle Exalogic
  • Oracle Coherence, Oracle WebLogic Server, Oracle JRockit for non- Exalogic systems
  • Oracle Business Process Management
  • Application Development Framework Runtime

 

Critical Patch Updates are collections of security fixes for Oracle products. They are available to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October. The next three dates are:

 

  • 17 January 2017
  • 18 April 2017
  • 18 July 2017

and the last date was 18 October 2016.

Additionally, we can also check for recommended patches, security updates, and bug fixes in the My Oracle Support page, and apply those from time to time.

 

12     ORACHK SCORE

We ran the orachk tool on the database. This tool analyzes all the information inside the database and, amongst other things, provides a report card with a score. In general, any score above 85 out of 100 is considered good and a validation that the database is functioning properly. See the screenshot below:

13    CONCLUSION

Please find below a summary of recommendations based on the audit done on the database. While these recommendations are made after due diligence and consideration, it is advised to do a test on a lab setup wherever applicable.

 

  • Apply the latest patches and bug fixes on the database, especially Patch 19769480, to prevent the known Bug 19434529 - ORA-4031 in ASM shared pool due to leaked "kffil" state objects (Doc ID 19434529.8).

 

  • Consider upgrading to Oracle 12c, as Premier Support for Oracle Database 11.2.0.4 has already ended (meaning Oracle will not offer fixes for new issues unless the extra-cost Extended Support has been purchased). Assuming Extended Support is available, it can be extended to DEC-2020, at the usual extra cost. This information can be found in MOS Note 742060.1. Oracle 12c offers numerous extra features plus an unparalleled ability to scale, which is why there is every reason to upgrade.

  

  • Free space under ORCL_FRA and ORCL_DATA is tight, and there could be major space-related issues if applications from both sites are moved to one site. Please consider increasing the storage in the ORCL_DATA and ORCL_FRA disk groups at the earliest.

 

  • Backup & recovery are sub-optimally configured, and it is very strongly suggested that backup, restore, and recovery are tested at each site, both from ASM and from the Net Backup.

  

  • Basic housekeeping has a lot of scope for improvement: alert logs, trace files and listener logs have to be deleted on schedule. Jobs can be set up through the Oracle Job Scheduler or in the crontab, or the UNIX logrotate facility can be used to control runaway growth of these files. The UNIX filesystem also has to be carefully monitored to check that free space in the directories remains under the threshold limit.

 

  • Consider implementing Linux HugePages to get a definite improvement in performance. This is especially significant where there is heavy transaction load and concurrency on the database.

 

  • Always connect using the SCAN name epedbp1-scan, as using the SCAN name as opposed to using an IP or a VIP results in a client side load balancing occurring. This in itself is not a major point, but when concurrency increases, it significantly helps in optimizing the connections that go to the database.

 

  • Many of the datafiles added to tablespaces are randomly sized. This is not a best practice. The suggestion is that henceforth any file added to a tablespace should be sized uniformly. Additionally, there is a mix of AUTOEXTENSIBLE and NON-AUTOEXTENSIBLE files in the tablespace setup; it is strongly recommended to make all the tablespaces AUTOEXTENSIBLE. There is also a lot of free space in these tablespaces - consider shrinking the datafiles to release space.
  • Consider implementing the parameters pointed out in the section relating to parameters.
  • Consider installing Oracle Enterprise Manager Express for a lightweight GUI for monitoring the DB.

 

 

 

Duplicate database for point-in-time recovery, using a backup location, from RAC to single instance

Prepared by:  Nurullah Sharif

Scope: Point in Time Recovery

 

Duplicate database for point-in-time recovery, using a backup location.

#PointInTimeRecovery #DuplicateDatabase #RestoreDatabaseUsingBackupLocation

 

We are using the full backup of 28-11-2017 and the archivelog backup of 29-11-2017.

 

Step 1: Take full backup and archive log backup.

On the target database:

rman target /

run
{
backup as compressed BACKUPSET incremental level 0 tag RMAN_BKP_SSPRODDB FORMAT '/nfs_bkpvol1/BACKUP/RMAN/SSPRODDB/full_db_%t_set%s_piece%p_dbid%I.rman' database;
backup current controlfile tag RMAN_BKP_SSPRODDB FORMAT '/nfs_bkpvol1/BACKUP/RMAN/SSPRODDB/ctl_%t_dbid%I.rman';
backup as compressed BACKUPSET archivelog all tag RMAN_BKP_SSPRODDB FORMAT '/nfs_bkpvol1/BACKUP/RMAN/SSPRODDB/arc_%t_set%s_piece%p_dbid%I.rman';
}

Transfer the backup files to the target environment to create the duplicate database.

Step 2: Create the pfile

The source database runs on RAC, so you must remove all cluster-related parameters and adapt the remainder to the configuration of the single-instance database.
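For reference, the cluster-specific entries in the source pfile shown below that typically have to go (or be reduced to a single value) for a single-instance duplicate are:

```
*.cluster_database=true                 # remove
SSPRODDB1.instance_number=1             # remove (both instance entries)
SSPRODDB2.instance_number=2             # remove
SSPRODDB1.thread=1                      # remove (both thread entries)
SSPRODDB2.thread=2                      # remove
*.remote_listener='scan:1521'           # remove
SSPRODDB2.undo_tablespace='UNDOTBS2'    # keep only one undo_tablespace entry
```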

SOURCE's PFILE:

SSPRODDB1.__db_cache_size=1006632960
SSPRODDB2.__db_cache_size=1174405120
SSPRODDB1.__java_pool_size=16777216
SSPRODDB2.__java_pool_size=16777216
SSPRODDB1.__large_pool_size=16777216
SSPRODDB2.__large_pool_size=16777216
SSPRODDB1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
SSPRODDB1.__pga_aggregate_target=1342177280
SSPRODDB2.__pga_aggregate_target=1342177280
SSPRODDB1.__sga_target=4026531840
SSPRODDB2.__sga_target=4026531840
SSPRODDB1.__shared_io_pool_size=536870912
SSPRODDB2.__shared_io_pool_size=536870912
SSPRODDB1.__shared_pool_size=2382364672
SSPRODDB2.__shared_pool_size=2214592512
SSPRODDB1.__streams_pool_size=33554432
SSPRODDB2.__streams_pool_size=33554432
*.audit_file_dest='/u01/app/oracle/admin/SSPRODDB/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/ssproddb/controlfile/current.285.916959529','+DATA/ssproddb/controlfile/current.284.916959529'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_domain=''
*.db_name='SSPRODDB'
*.db_recovery_file_dest_size=1099511627776
*.db_recovery_file_dest='+FRA'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=SSPRODDBXDB)'
SSPRODDB2.instance_number=2
SSPRODDB1.instance_number=1
*.log_archive_dest_1='location=+FRA'
*.log_archive_format='ARCH_%t_%s_%r.arc'
*.open_cursors=300
*.pga_aggregate_target=1342177280
*.processes=150
*.remote_listener='scan:1521'
*.remote_login_passwordfile='exclusive'
*.sga_target=4026531840
SSPRODDB2.thread=2
SSPRODDB1.thread=1
SSPRODDB2.undo_tablespace='UNDOTBS2'
SSPRODDB1.undo_tablespace='UNDOTBS1'

 

TARGET's PFILE:

 

SSP.__db_cache_size=201326592
SSP.__java_pool_size=16777216
SSP.__large_pool_size=33554432
SSP.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
SSP.__pga_aggregate_target=738197504
SSP.__sga_target=1409286144
SSP.__shared_io_pool_size=0
SSP.__shared_pool_size=1107296256
SSP.__streams_pool_size=33554432
*.audit_file_dest='/oradata1/SSP/adump'
*.audit_trail='db'
*.compatible='11.2.0.0.0'
*.control_files='/oradata1/SSP/control01.ctl','/oradata1/SSP/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='SSP'
*.diagnostic_dest='/oradata1/SSP'
*.memory_target=2048M
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='exclusive'
*.undo_tablespace='UNDOTBS1'

 

 

The target database name will be SSP.

Step 3: Start the database in NOMOUNT state and create the spfile.

Start the instance with the target pfile:

SQL> startup nomount pfile='/oradata1/SSP/ssppfile.ora';
ORACLE instance started.
Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes

SQL> create spfile from pfile='/oradata1/SSP/ssppfile.ora';
File created.

SQL> select instance_name, status from v$instance;
INSTANCE_NAME STATUS
—————- ————
SSP STARTED

SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.
Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes

SQL> select instance_name, status from v$instance;

INSTANCE_NAME STATUS
—————- ————
SSP STARTED

SQL> show parameter spfile

NAME TYPE VALUE
———————————— ———– ——————————
spfile string /u01/app/oracle/product/dbhome
_1/dbs/spfileSSP.ora

The database is now started with the newly created spfile.

 

Note: we need to recover the database until 29-11-2017 11:00 AM. To do this, we need some specific information from the source database: the log switch whose time is nearest to 11:00 AM.

To find it, run the command below on the source database.

 

select sequence#, to_char(completion_time, 'dd-mm-yyyy hh24:mi:ss'), to_char(next_time, 'dd-mm-yyyy hh24:mi:ss') from v$archived_log order by 2;
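To narrow the output, the same view can be filtered to the window around the target time; a sketch (the date literal is illustrative, and the syntax is valid on 11g):

```sql
-- First few log switches completed at or after 29-11-2017 11:00
select * from (
  select thread#, sequence#,
         to_char(completion_time, 'dd-mm-yyyy hh24:mi:ss') as completed
  from   v$archived_log
  where  completion_time >= to_date('29-11-2017 11:00:00', 'dd-mm-yyyy hh24:mi:ss')
  order  by completion_time
)
where rownum <= 5;
```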

We found that sequence 2281 was recorded at the time closest to our required time.

Step 4: Connect RMAN to the auxiliary instance and run the DUPLICATE command with point-in-time recovery using the UNTIL TIME clause.

rman auxiliary /

RMAN> run {
set newname for database to '/oradata1/SSP/%b';
duplicate target database to SSP nofilenamecheck backup location '/oradata1/RMAN/SSPROD'
UNTIL TIME "TO_DATE('2017-11-29 11:26:28', 'YYYY-MM-DD HH24:MI:SS')"
logfile
group 1 ('/oradata1/SSP/redo01.log') size 50M,
group 2 ('/oradata1/SSP/redo02.log') size 50M,
group 3 ('/oradata1/SSP/redo03.log') size 50M;
}

executing command: SET NEWNAME
Starting Duplicate Db at 04-DEC-17
contents of Memory Script:
{
sql clone “alter system set db_name =
”SSPRODDB” comment=
”Modified by RMAN duplicate” scope=spfile”;
sql clone “alter system set db_unique_name =
”SSP” comment=
”Modified by RMAN duplicate” scope=spfile”;
shutdown clone immediate;
startup clone force nomount
restore clone primary controlfile from ‘/oradata1/RMAN/SSPROD/ctl_961288234_dbid196682280.rman’;
alter clone database mount;
}
executing Memory Script
sql statement: alter system set db_name = ”SSPRODDB” comment= ”Modified by RMAN duplicate” scope=spfile
sql statement: alter system set db_unique_name = ”SSP” comment= ”Modified by RMAN duplicate” scope=spfile

Oracle instance shut down
Oracle instance started
Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes

Starting restore at 04-DEC-17
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=96 device type=DISK
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/oradata1/SSP/control01.ctl
output file name=/oradata1/SSP/control02.ctl
Finished restore at 04-DEC-17

database mounted

released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=96 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=127 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=156 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=189 device type=DISK
allocated channel: ORA_AUX_DISK_5
channel ORA_AUX_DISK_5: SID=221 device type=DISK
allocated channel: ORA_AUX_DISK_6
channel ORA_AUX_DISK_6: SID=3 device type=DISK
allocated channel: ORA_AUX_DISK_7
channel ORA_AUX_DISK_7: SID=34 device type=DISK
allocated channel: ORA_AUX_DISK_8
channel ORA_AUX_DISK_8: SID=66 device type=DISK
allocated channel: ORA_AUX_DISK_9
channel ORA_AUX_DISK_9: SID=97 device type=DISK
allocated channel: ORA_AUX_DISK_10
channel ORA_AUX_DISK_10: SID=128 device type=DISK
allocated channel: ORA_AUX_DISK_11
channel ORA_AUX_DISK_11: SID=159 device type=DISK
allocated channel: ORA_AUX_DISK_12
channel ORA_AUX_DISK_12: SID=190 device type=DISK
contents of Memory Script:
{
set until scn 126463226;
set newname for datafile 1 to
“/oradata1/SSP/system.280.916959463”;
set newname for datafile 2 to
“/oradata1/SSP/sysaux.281.916959463”;
set newname for datafile 3 to
“/oradata1/SSP/undotbs1.282.916959463”;
set newname for datafile 4 to
“/oradata1/SSP/users.283.916959463”;
set newname for datafile 5 to
“/oradata1/SSP/undotbs2.291.916959571”;
set newname for datafile 6 to
“/oradata1/SSP/users.294.940601841”;
set newname for datafile 7 to
“/oradata1/SSP/system.289.942830555”;
set newname for datafile 8 to
“/oradata1/SSP/sysaux.288.944822531”;
restore
clone database
;
}

executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME

Starting restore at 04-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
using channel ORA_AUX_DISK_7
using channel ORA_AUX_DISK_8
using channel ORA_AUX_DISK_9
using channel ORA_AUX_DISK_10
using channel ORA_AUX_DISK_11
using channel ORA_AUX_DISK_12

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /oradata1/SSP/system.280.916959463
channel ORA_AUX_DISK_1: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288205_set8943_piece1_dbid196682280.rman
channel ORA_AUX_DISK_2: starting datafile backup set restore
channel ORA_AUX_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_2: restoring datafile 00002 to /oradata1/SSP/sysaux.281.916959463
channel ORA_AUX_DISK_2: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288205_set8942_piece1_dbid196682280.rman
channel ORA_AUX_DISK_3: starting datafile backup set restore
channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00003 to /oradata1/SSP/undotbs1.282.916959463
channel ORA_AUX_DISK_3: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288205_set8946_piece1_dbid196682280.rman
channel ORA_AUX_DISK_4: starting datafile backup set restore
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00004 to /oradata1/SSP/users.283.916959463
channel ORA_AUX_DISK_4: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288206_set8947_piece1_dbid196682280.rman
channel ORA_AUX_DISK_5: starting datafile backup set restore
channel ORA_AUX_DISK_5: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_5: restoring datafile 00005 to /oradata1/SSP/undotbs2.291.916959571
channel ORA_AUX_DISK_5: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288206_set8948_piece1_dbid196682280.rman
channel ORA_AUX_DISK_6: starting datafile backup set restore
channel ORA_AUX_DISK_6: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_6: restoring datafile 00006 to /oradata1/SSP/users.294.940601841
channel ORA_AUX_DISK_6: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288206_set8949_piece1_dbid196682280.rman
channel ORA_AUX_DISK_7: starting datafile backup set restore
channel ORA_AUX_DISK_7: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_7: restoring datafile 00007 to /oradata1/SSP/system.289.942830555
channel ORA_AUX_DISK_7: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288205_set8944_piece1_dbid196682280.rman
channel ORA_AUX_DISK_8: starting datafile backup set restore
channel ORA_AUX_DISK_8: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_8: restoring datafile 00008 to /oradata1/SSP/sysaux.288.944822531
channel ORA_AUX_DISK_8: reading from backup piece /oradata1/RMAN/SSPROD/full_db_961288205_set8945_piece1_dbid196682280.rman
channel ORA_AUX_DISK_3: piece handle=/oradata1/RMAN/SSPROD/full_db_961288205_set8946_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_3: restored backup piece 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_5: piece handle=/oradata1/RMAN/SSPROD/full_db_961288206_set8948_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_5: restored backup piece 1
channel ORA_AUX_DISK_5: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_6: piece handle=/oradata1/RMAN/SSPROD/full_db_961288206_set8949_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_6: restored backup piece 1
channel ORA_AUX_DISK_6: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_7: piece handle=/oradata1/RMAN/SSPROD/full_db_961288205_set8944_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_7: restored backup piece 1
channel ORA_AUX_DISK_7: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_8: piece handle=/oradata1/RMAN/SSPROD/full_db_961288205_set8945_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_8: restored backup piece 1
channel ORA_AUX_DISK_8: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_4: piece handle=/oradata1/RMAN/SSPROD/full_db_961288206_set8947_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_4: restored backup piece 1
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: piece handle=/oradata1/RMAN/SSPROD/full_db_961288205_set8943_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_AUX_DISK_2: piece handle=/oradata1/RMAN/SSPROD/full_db_961288205_set8942_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_2: restored backup piece 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:35

Finished restore at 04-DEC-17

contents of Memory Script:
{
switch clone datafile all;
}
executing Memory Script
datafile 1 switched to datafile copy
input datafile copy RECID=9 STAMP=961823188 file name=/oradata1/SSP/system.280.916959463
datafile 2 switched to datafile copy
input datafile copy RECID=10 STAMP=961823188 file name=/oradata1/SSP/sysaux.281.916959463
datafile 3 switched to datafile copy
input datafile copy RECID=11 STAMP=961823188 file name=/oradata1/SSP/undotbs1.282.916959463
datafile 4 switched to datafile copy
input datafile copy RECID=12 STAMP=961823188 file name=/oradata1/SSP/users.283.916959463
datafile 5 switched to datafile copy
input datafile copy RECID=13 STAMP=961823188 file name=/oradata1/SSP/undotbs2.291.916959571
datafile 6 switched to datafile copy
input datafile copy RECID=14 STAMP=961823188 file name=/oradata1/SSP/users.294.940601841
datafile 7 switched to datafile copy
input datafile copy RECID=15 STAMP=961823188 file name=/oradata1/SSP/system.289.942830555
datafile 8 switched to datafile copy
input datafile copy RECID=16 STAMP=961823188 file name=/oradata1/SSP/sysaux.288.944822531

contents of Memory Script:
{
set until time “to_date(‘NOV 29 2017 11:26:28’, ‘MON DD YYYY HH24:MI:SS’)”;
recover
clone database
delete archivelog
;
}

executing Memory Script

executing command: SET until clause
Starting recover at 04-DEC-17
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_2
using channel ORA_AUX_DISK_3
using channel ORA_AUX_DISK_4
using channel ORA_AUX_DISK_5
using channel ORA_AUX_DISK_6
using channel ORA_AUX_DISK_7
using channel ORA_AUX_DISK_8
using channel ORA_AUX_DISK_9
using channel ORA_AUX_DISK_10
using channel ORA_AUX_DISK_11
using channel ORA_AUX_DISK_12

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=2278
channel ORA_AUX_DISK_1: reading from backup piece /oradata1/RMAN/SSPROD/arc_961288248_set8959_piece1_dbid196682280.rman
channel ORA_AUX_DISK_2: starting archived log restore to default destination
channel ORA_AUX_DISK_2: restoring archived log
archived log thread=2 sequence=1706
channel ORA_AUX_DISK_2: restoring archived log
archived log thread=1 sequence=2279
channel ORA_AUX_DISK_2: reading from backup piece /oradata1/RMAN/SSPROD/arc_961288248_set8960_piece1_dbid196682280.rman
channel ORA_AUX_DISK_3: starting archived log restore to default destination
channel ORA_AUX_DISK_3: restoring archived log
archived log thread=2 sequence=1707
channel ORA_AUX_DISK_3: reading from backup piece /oradata1/RMAN/SSPROD/arc_961288248_set8961_piece1_dbid196682280.rman
channel ORA_AUX_DISK_4: starting archived log restore to default destination
channel ORA_AUX_DISK_4: restoring archived log
archived log thread=1 sequence=2280
channel ORA_AUX_DISK_4: reading from backup piece /oradata1/RMAN/SSPROD/arc_961374649_set8977_piece1_dbid196682280.rman
channel ORA_AUX_DISK_5: starting archived log restore to default destination
channel ORA_AUX_DISK_5: restoring archived log
archived log thread=2 sequence=1708
channel ORA_AUX_DISK_5: restoring archived log
archived log thread=1 sequence=2281
channel ORA_AUX_DISK_5: reading from backup piece /oradata1/RMAN/SSPROD/arc_961374649_set8978_piece1_dbid196682280.rman
channel ORA_AUX_DISK_3: piece handle=/oradata1/RMAN/SSPROD/arc_961288248_set8961_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_3: restored backup piece 1
channel ORA_AUX_DISK_3: restore complete, elapsed time: 00:00:00
channel ORA_AUX_DISK_2: piece handle=/oradata1/RMAN/SSPROD/arc_961288248_set8960_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_2: restored backup piece 1
channel ORA_AUX_DISK_2: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_4: piece handle=/oradata1/RMAN/SSPROD/arc_961374649_set8977_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_4: restored backup piece 1
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:03
channel ORA_AUX_DISK_1: piece handle=/oradata1/RMAN/SSPROD/arc_961288248_set8959_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2278_916930730.dbf thread=1 sequence=2278
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1706_916930730.dbf thread=2 sequence=1706
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2278_916930730.dbf RECID=5 STAMP=961823193
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2279_916930730.dbf thread=1 sequence=2279
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1706_916930730.dbf RECID=4 STAMP=961823192
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1707_916930730.dbf thread=2 sequence=1707
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2279_916930730.dbf RECID=2 STAMP=961823189
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2280_916930730.dbf thread=1 sequence=2280
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1707_916930730.dbf RECID=1 STAMP=961823189
channel ORA_AUX_DISK_5: piece handle=/oradata1/RMAN/SSPROD/arc_961374649_set8978_piece1_dbid196682280.rman tag=RMAN_BKP_SSPRODDB
channel ORA_AUX_DISK_5: restored backup piece 1
channel ORA_AUX_DISK_5: restore complete, elapsed time: 00:00:08
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1708_916930730.dbf thread=2 sequence=1708
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2280_916930730.dbf RECID=3 STAMP=961823191
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2281_916930730.dbf thread=1 sequence=2281
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch1_2281_916930730.dbf RECID=6 STAMP=961823196
channel clone_default: deleting archived log(s)
archived log file name=/u01/app/oracle/product/dbhome_1/dbs/arch2_1708_916930730.dbf RECID=7 STAMP=961823196
media recovery complete, elapsed time: 00:00:05
Finished recover at 04-DEC-17
Oracle instance started

Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes

contents of Memory Script:
{
sql clone “alter system set db_name =
”SSP” comment=
”Reset to original value by RMAN” scope=spfile”;
sql clone “alter system reset db_unique_name scope=spfile”;
shutdown clone immediate;
startup clone nomount;
}

executing Memory Script

sql statement: alter system set db_name = ”SSP” comment= ”Reset to original value by RMAN” scope=spfile
sql statement: alter system reset db_unique_name scope=spfile
Oracle instance shut down
connected to auxiliary database (not started)

Oracle instance started

Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes

sql statement: CREATE CONTROLFILE REUSE SET DATABASE “SSP” RESETLOGS ARCHIVELOG
MAXLOGFILES 192
MAXLOGMEMBERS 3
MAXDATAFILES 1024
MAXINSTANCES 32
MAXLOGHISTORY 292
LOGFILE
GROUP 1 ( ‘/oradata1/SSP/redo01.log’ ) SIZE 50 M ,
GROUP 2 ( ‘/oradata1/SSP/redo02.log’ ) SIZE 50 M ,
GROUP 3 ( ‘/oradata1/SSP/redo03.log’ ) SIZE 50 M
DATAFILE
‘/oradata1/SSP/system.280.916959463’
CHARACTER SET AL32UTF8

contents of Memory Script:

{
set newname for tempfile 1 to
“/oradata1/SSP/temp.290.916959535”;
switch clone tempfile all;
catalog clone datafilecopy “/oradata1/SSP/sysaux.281.916959463”,
“/oradata1/SSP/undotbs1.282.916959463”,
“/oradata1/SSP/users.283.916959463”,
“/oradata1/SSP/undotbs2.291.916959571”,
“/oradata1/SSP/users.294.940601841”,
“/oradata1/SSP/system.289.942830555”,
“/oradata1/SSP/sysaux.288.944822531”;
switch clone datafile all;
}

executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /oradata1/SSP/temp.290.916959535 in control file

cataloged datafile copy
datafile copy file name=/oradata1/SSP/sysaux.281.916959463 RECID=1 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/undotbs1.282.916959463 RECID=2 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/users.283.916959463 RECID=3 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/undotbs2.291.916959571 RECID=4 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/users.294.940601841 RECID=5 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/system.289.942830555 RECID=6 STAMP=961823218
cataloged datafile copy
datafile copy file name=/oradata1/SSP/sysaux.288.944822531 RECID=7 STAMP=961823218

datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=961823218 file name=/oradata1/SSP/sysaux.281.916959463
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=961823218 file name=/oradata1/SSP/undotbs1.282.916959463
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=961823218 file name=/oradata1/SSP/users.283.916959463
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=961823218 file name=/oradata1/SSP/undotbs2.291.916959571
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=961823218 file name=/oradata1/SSP/users.294.940601841
datafile 7 switched to datafile copy
input datafile copy RECID=6 STAMP=961823218 file name=/oradata1/SSP/system.289.942830555
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=961823218 file name=/oradata1/SSP/sysaux.288.944822531

contents of Memory Script:

{
Alter clone database open resetlogs;
}

executing Memory Script

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/04/2017 05:06:59
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled

SQL> alter database open resetlogs;
alter database open resetlogs

*

ERROR at line 1:
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled
Please refer to
RMAN Duplicate from RAC backup fails ORA-38856 (Doc ID 334899.1)

SQL> alter system set "_no_recovery_through_resetlogs"=true scope=spfile;
System altered.

SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area 2137886720 bytes
Fixed Size 2230072 bytes
Variable Size 1929382088 bytes
Database Buffers 201326592 bytes
Redo Buffers 4947968 bytes
Database mounted.

SQL> alter database open resetlogs;
Database altered.

SQL> select name, open_mode from v$database;
NAME OPEN_MODE
——— ——————–
SSP READ WRITE
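Optionally, confirm the point the clone was recovered to: the RESETLOGS SCN of the new incarnation should line up with the "set until scn" value shown in the RMAN output (126463226 in this run):

```sql
-- The CURRENT incarnation row shows the SCN at which resetlogs was performed
select incarnation#, resetlogs_change#, status
from   v$database_incarnation
order  by incarnation#;
```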

 

 

 

CDB and PDB concepts in Oracle 12c

Oracle introduced PDBs in 12c to address the following drawbacks of 11g.

 

In 11g, we sometimes had to create numerous databases on a single host, which resulted in the side effects below and eventually slowed down the whole system:

1. Too many background processes.

2. High shared memory usage.

3. Many copies of Oracle metadata.

Now we will delve deeper into what the Oracle 12c PDB concepts bring us:

1. Multiple databases in a centrally managed platform.

a. Less instance overhead.

b. Less storage.

2. Fast and easy provisioning.

3. Time savings on patching and upgrades.

4. Separation of duties.

5. Full backward compatibility with non-CDBs.

6. Fully operational on RAC.

Naming the containers

Multi-tenant Architecture

1. Redo logs are common. The redo log contains annotated information identifying which PDB each change pertains to.

2. Control files are common.

3. The UNDO tablespace is common.

4. Temporary tablespaces are common, but each PDB can contain its own temp tablespace.

5. Each container has its own dictionary in its SYSTEM and SYSAUX tablespaces.

6. Each PDB has its own tablespaces and datafiles.
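The containers described above can be listed from the root; a minimal check (12c syntax, run as a common user in CDB$ROOT):

```sql
-- From CDB$ROOT: list every PDB (including PDB$SEED) and its open mode
select con_id, name, open_mode from v$pdbs;

-- Confirm which container the session is currently in (SQL*Plus command)
show con_name
```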

Concepts of Containers

 

A CDB has new characteristics compared to non-CDBs:

  • Two containers:

– The root (CDB$ROOT)

– The seed PDB (PDB$SEED)

  • Several services: one per container

– Name of root service = name of the CDB (cdb1)

  • Common users in root and seed: SYS,SYSTEM …

Common privileges granted to common users

  • Pre-defined common roles
  • Tablespaces and data files associated with each container:

– root:

— SYSTEM: system-supplied metadata and no user data

— SYSAUX

– seed: SYSTEM, SYSAUX

 

The root container can be connected to through OS authentication or via the root service name.

A PDB can only be connected to through its service name, via TNS. By default each PDB creates its own service.
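For example, connecting to a PDB through its default service; the hostname, port, credentials, and the PDB name my_new_pdb below are placeholders:

```sql
-- EZConnect to the PDB's default service:
--   sqlplus system/password@//dbhost:1521/my_new_pdb

-- Or, as a common user already connected to the root, switch containers:
alter session set container = MY_NEW_PDB;
show con_name
```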

Automatic Diagnostic Repository

 

Manual creation of a CDB. I do not recommend it; I recommend creating the CDB using DBCA.

oracle@cdv1proccmdbf01:XCORMAN[/u01/app/oracle/product/12.1.0/db_1/dbs]$

The following parameters are the basic ones.

a. Create the pfile:

cat initXCORMAN.ora

control_files='+DATA/XCORMAN/control01.ctl'

DB_NAME=XCORMAN

ENABLE_PLUGGABLE_DATABASE=TRUE

DB_CREATE_FILE_DEST='+DATA'

 

b. Create the database:

export ORACLE_SID=XCORMAN

sqlplus / as sysdba

SQL> create database XCORMAN;

c. Close and open the seed PDB

Set the session with a hidden parameter:

alter session set "_oracle_script"=true;

  1. Close and open the seed PDB:

 

alter pluggable database pdb$seed close;

alter pluggable database pdb$seed open;

d. Execute the post-database-creation scripts

  1. Execute catalog.sql and other post-creation scripts.

?/rdbms/admin/catalog.sql

?/rdbms/admin/catblock.sql

?/rdbms/admin/catproc.sql

?/rdbms/admin/catoctk.sql

?/rdbms/admin/owminst.plb

?/sqlplus/admin/pupbld.sql
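In a CDB these scripts normally have to run in every container, which is what the catcon.pl driver is for. A sketch, assuming the standard 12c script locations (the log directory and log-file base names are arbitrary):

```shell
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -l /tmp -b catalog_log catalog.sql
$ORACLE_HOME/perl/bin/perl catcon.pl -n 1 -l /tmp -b catproc_log catproc.sql
```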

e. Register with the listener

SYS@XCORTST1> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=10.XX.11.XXX)(PORT=1590))';

System altered.

SYS@XCORTST1> alter system register;

System altered.

f. Check the listener status now.

Creation of CDB from DBCA

PDB creation overview

When a PDB is created from the seed, the operation:

  • Copies the data files from the PDB$SEED data files
  • Creates the SYSTEM and SYSAUX tablespaces
  • Creates a full catalog, including metadata pointing to Oracle-supplied objects
  • Creates the common users: superuser SYS and SYSTEM
  • Creates a local user (the PDBA) granted the local PDB_DBA role
  • Creates a new default service

Command line PDB creation

SYS@XCORTST1> CREATE PLUGGABLE DATABASE my_new_pdb

ADMIN USER my_pdb_admin IDENTIFIED BY my_pdb_admin

ROLES = (dba)

DEFAULT TABLESPACE my_tbs

DATAFILE '+DATA' SIZE 50M AUTOEXTEND ON;

Pluggable database created.

SYS@XCORTST1> alter pluggable database my_new_pdb open;

Pluggable database altered.

The datafiles will be created in the directory below.

ASMCMD> pwd

+DATA/XCORTST/5F2CCEBE3E091BD4E053850B330AF8E1/DATAFILE

ASMCMD> ls -ltr

Type      Redund  Striped  Time             Sys  Name

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    MY_TBS.445.961378753

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    SYSAUX.443.961378749

DATAFILE  UNPROT  COARSE   NOV 30 01:00:00  Y    SYSTEM.442.961378749

Creation of pluggable database using SQL Developer and manage from sql developer

Once a PDB is created (from the seed PDB, or by plugging in or cloning), or after it is closed, you can view its status by querying the STATUS column of the CDB_PDBS view.
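A sketch of that check, joining the dictionary status to the current open mode:

```sql
-- Status as recorded in the data dictionary vs. the current open mode
select p.pdb_name, p.status, v.open_mode
from   cdb_pdbs p
join   v$pdbs   v on v.name = p.pdb_name
order  by p.pdb_name;
```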

If common users and roles had been previously created, the new or closed PDB must be synchronized to retrieve the new common users and roles from the root. The synchronization is performed automatically when the PDB is opened in read/write mode.

If you open the PDB in read only mode, an error is returned.

When a PDB is opened, Oracle Database checks the compatibility of the PDB with the CDB. Each compatibility violation is either a warning or an error. If a compatibility violation is a warning, it is recorded in the alert log, and the PDB is opened normally without displaying a warning message. If a compatibility violation is an error, an error message is displayed when the PDB is opened, the error is recorded in the alert log, and you must correct the condition that caused it. When there are errors, access to the PDB is limited to users with the RESTRICTED SESSION privilege so that the compatibility violations can be addressed. You can view descriptions of the violations by querying the PDB_PLUG_IN_VIOLATIONS view.
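For example, the outstanding violations can be listed with:

```sql
-- Warnings and errors recorded at plug-in / open time
select name, cause, type, message, status
from   pdb_plug_in_violations
where  status <> 'RESOLVED'
order  by time;
```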

Clone PDB from Existing PDB using command line

—clone pdb to another pdb—

SYS@XCORTST1> alter PLUGGABLE DATABASE "PDB_ORCL" open read only;

Pluggable database altered.

SYS@XCORTST1> create PLUGGABLE DATABASE "PDB_ORCL_CLONE" from "PDB_ORCL";

Pluggable database created.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE close;

Pluggable database altered.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE open;

Pluggable database altered.

SYS@XCORTST1> alter pluggable database PDB_ORCL_CLONE open read only;

 

SYS@XCORTST1> ALTER PLUGGABLE DATABASE pdb_orcl close;

 

Pluggable database altered.

Plug and unplug PDB to another CDB using command line

In Source CDB

SYS@XCORTST1> ALTER PLUGGABLE DATABASE pdb_orcl UNPLUG INTO '/tmp/pdb_orcl.xml';

Pluggable database altered.

SYS@XCORTST1> drop PLUGGABLE DATABASE pdb_orcl;

Pluggable database dropped.

Check whether the unplugged PDB can be plugged into the new CDB:

 

SET SERVEROUTPUT ON
DECLARE
compatible CONSTANT VARCHAR2(3) :=
CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(pdb_descr_file => '/tmp/pdb_orcl.xml', pdb_name => 'pdb_orcl')
WHEN TRUE THEN 'YES'
ELSE 'NO'
END;
BEGIN
DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

In Target CDB

SYS@XCLONEST1> create PLUGGABLE DATABASE "PDB_ORCL" using '/tmp/pdb_orcl.xml' nocopy;

Pluggable database created.

Possible way to Convert non-CDB to PDB of another CDB

There are three possible methods to plug a non-CDB database into a CDB.

Whichever method is used, you first have to get the non-CDB into a transactionally consistent state and open it in restricted mode.

  • Either use transportable tablespaces (TTS), full conventional export/import, or transportable database (TDB), provided that in the last case every user-defined object resides in a single user-defined tablespace.

  • Or use the DBMS_PDB package to construct an XML file describing the non-CDB data files and plug the non-CDB into the CDB as a PDB. This method presupposes that the non-CDB is an Oracle 12c database.

  • Or use replication with GoldenGate.

Convert non-CDB to PDB of another CDB example

In the non-CDB

export ORACLE_SID=ORCL

sqlplus / as sysdba

shutdown immediate;

startup mount;

alter database open read only;

exec dbms_pdb.describe('/tmp/orcl.xml');

In target CDB

sqlplus / as sysdba

create pluggable database pdb1 using '/tmp/orcl.xml';

Run the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script to delete unnecessary metadata from the PDB SYSTEM tablespace. This script must be run before the PDB can be opened for the first time, and it is required only when plugging in non-CDBs.

sqlplus / as sysdba

connect sys/xxx@pdb1 as sysdba

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

 

Index creation progress in oracle

----------Progress index creation--------------

set lines 200
 col "Index Operation" for a60
 select sess.sid as "Session ID", sql.sql_text as "Index Operation",
 longops.totalwork, longops.sofar,
 longops.elapsed_seconds/60 as "Runtime Mins",
 longops.time_remaining/60 as "ETA Mins"
 from v$session sess, v$sql sql, v$session_longops longops
 where
 sess.sid=longops.sid
 and sess.sql_address = sql.address
 and sess.sql_address = longops.sql_address
 and sess.status = 'ACTIVE'
 and longops.totalwork > longops.sofar
 and sess.sid not in ( SELECT sys_context('USERENV', 'SID') SID FROM DUAL)
 and upper(sql.sql_text) like '%INDEX%';
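The totalwork and sofar columns from v$session_longops translate directly into a percent-complete figure. A minimal shell sketch of that arithmetic, using hypothetical values:

```shell
# Hypothetical values as they might come back from v$session_longops
TOTALWORK=10000
SOFAR=4200

# Same arithmetic the view's columns support: sofar / totalwork * 100
awk -v t="$TOTALWORK" -v s="$SOFAR" 'BEGIN { printf "%.1f%% complete\n", s / t * 100 }'
# → 42.0% complete
```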

Step by step Upgrade cloud control from 13cR1 to 13cR2 (13.2.0.0.0)

Prerequisites for Upgrading to Enterprise
Manager Cloud Control 13c Release 2

1. First, apply the latest PSU patch to the GRID and ORACLE homes.

In my case, the latest patch at the time was:

Patch 24412235: GRID INFRASTRUCTURE PATCH SET UPDATE 12.1.0.2.161018 (OCT2016)

Prepatch operation log file location: /u01/app/product/12.1.0/grid/cfgtoollogs/crsconfig/hapatch_2016-12-19_01-58-57PM.log
CRS service brought down successfully on home /u01/app/product/12.1.0/grid
Start applying binary patch on home /u01/app/oracle/product/12.1.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db_1
Start applying binary patch on home /u01/app/product/12.1.0/grid
/u01/app/product/12.1.0/grid/cfgtoollogs/opatchauto/opatchau
Binary patch applied successfully on home /u01/app/product/12.1.0/grid
Starting CRS service on home /u01/app/product/12.1.0/grid
Postpatch operation log file location: /u01/app/product/12.1.0/grid/cfgtoollogs/crsconfig/hapatch_2016-12-19_02-06-48PM.log
CRS service started successfully on home /u01/app/product/12.1.0/grid
Starting database service on home /u01/app/oracle/product/12.1.0/db_1
Database service successfully started on home /u01/app/oracle/product/12.1.0/db_1
Preparing home /u01/app/oracle/product/12.1.0/db_1 after database service restarted
No step execution required...
Prepared home /u01/app/oracle/product/12.1.0/db_1 successfully after database service restarted
Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db_1
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db_1
Verifying patches applied on home /u01/app/product/12.1.0/grid
Patch verification completed with warning on home /u01/app/product/12.1.0/grid
Verifying patches applied on home /u01/app/oracle/product/12.1.0/db_1
Patch verification completed with warning on home /u01/app/oracle/product/12.1.0/db_1
OPatchAuto successful.

-------------------------------- Summary --------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:orcldb
SIDB Home:/u01/app/oracle/product/12.1.0/db_1
Summary:

==Following patches were SKIPPED:

Patch: /u01/app/grid/24412235/21436941
Reason: This patch is not applicable to this specified target type - "oracle_database"

Patch: /u01/app/grid/24412235/24007012
Reason: This patch is not applicable to this specified target type - "oracle_database"

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/grid/24412235/23854735
Log: /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_13-59-18PM_1.log

Patch: /u01/app/grid/24412235/24006101
Log: /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_13-59-18PM_1.log

Host:orcldb
SIHA Home:/u01/app/product/12.1.0/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/grid/24412235/21436941
Log: /u01/app/product/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_14-01-50PM_1.log

Patch: /u01/app/grid/24412235/23854735
Log: /u01/app/product/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_14-01-50PM_1.log

Patch: /u01/app/grid/24412235/24006101
Log: /u01/app/product/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_14-01-50PM_1.log

Patch: /u01/app/grid/24412235/24007012
Log: /u01/app/product/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-12-19_14-01-50PM_1.log

OPatchauto session completed at Mon Dec 19 14:11:39 2016
Time taken to complete the session 15 minutes, 53 seconds

Optimizer Adaptive Feature Disabling Requirements

Ensure that you disable the optimizer adaptive feature (optimizer_adaptive_features=FALSE) in the Oracle Database that houses the Management Repository. To do so, follow these steps:

1. Disable the optimizer adaptive feature by setting the optimizer_adaptive_features parameter to FALSE. To do so, run the following SQL command:
alter system set optimizer_adaptive_features=false scope=both;

2. Restart the database.

3. Verify that the changes have taken effect. To do so, run the following SQL command:
show parameter adaptive;
You should see the following output:
NAME                        TYPE    VALUE
--------------------------- ------- -----
optimizer_adaptive_features boolean FALSE

 

Login and Logoff Trigger Setting Verification Requirements

SQL> SELECT owner, trigger_name FROM sys.dba_triggers WHERE triggering_event LIKE 'LOGOFF%' AND status='ENABLED';

OWNER
——————————————————————————–
TRIGGER_NAME
——————————————————————————–
GSMADMIN_INTERNAL
GSMLOGOFF

SQL> alter trigger GSMADMIN_INTERNAL.GSMLOGOFF disable;

Trigger altered.

Selectively Skipping Some Job Type Updates for Reduced Downtime
of Your Enterprise Manager System

 

To skip or postpone some job types from being upgraded, follow these steps:
1. Identify the job types that you want to exclude from being upgraded during the
downtime.
To do so, as a SYSMAN user, log in to the database that houses the Oracle
Management Repository, and run the following query. Particularly look for the
job types that have a large number of active executions.

SELECT job_type, COUNT(1) as n_execs
FROM MGMT_JOB_EXEC_SUMMARY
JOIN MGMT_JOB_TYPE_INFO USING (job_type_id)
WHERE status NOT IN (3,4,5,8,18,19,23)
GROUP BY job_type
HAVING COUNT(1) > 5000
ORDER BY COUNT(1) DESC;

2. Exclude the job types you identified.
To do so, run the following INSERT against the MGMT_PARAMETERS table. For each job type you want to exclude, you must run one INSERT statement. For example, if you have three job types to exclude, run the INSERT statement three times, each with one of the job types.
INSERT INTO MGMT_PARAMETERS(parameter_name, parameter_value)
VALUES ('mgmt_job_skip_job_type_upg.1', '<job type>');
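For several job types, the INSERT above is repeated once per type. A hedged shell sketch that generates the statements, using hypothetical job type names and assuming the parameter-name suffix increments (.1, .2, ...) as the .1 in the example suggests:

```shell
# Generate one INSERT per job type to exclude.
# The job type names below are hypothetical placeholders, and the
# incrementing .1/.2/... suffix is an assumption based on the '.1'
# shown in the documentation example.
i=1
for jt in "CollectMetrics" "LoaderJob"; do
  echo "INSERT INTO MGMT_PARAMETERS(parameter_name, parameter_value) VALUES ('mgmt_job_skip_job_type_upg.$i', '$jt');"
  i=$((i+1))
done
```

Feed the generated statements to SQL*Plus as the SYSMAN user, one per job type.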

 

EMKEY Copy Requirements

[IN CASE OF MULTI-OMS UPGRADE, PERFORM THIS STEP ONLY FOR THE
FIRST OMS UPGRADE]

Ensure that you copy the emkey from the existing OMS to the existing Management
Repository. To do so, run the following command on the OMS you are about to
upgrade. Here, <ORACLE_HOME> refers to the Oracle home of the OMS.

[oracle@orcl middleware]$ /u01/app/oemcc/middleware/bin/emctl config emkey -copy_to_repos -sysman_pwd sysm4n4dm1n
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".

Check status now

[oracle@orcl middleware]$ /u01/app/oemcc/middleware/bin/emctl status emkey
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".

 

Stop Cloud Control now

Stop Cloud Control

export OMS_HOME=/u01/app/oemcc/middleware
export AGENT_HOME=/u01/app/oemcc/agent/agent_inst

$OMS_HOME/bin/emctl stop oms -all
$AGENT_HOME/bin/emctl stop agent
$OMS_HOME/bin/emctl extended oms jvmd stop -all
$OMS_HOME/bin/emctl extended oms adp stop -all

Now you can start Installation of oracle cloud control 13cR2

1. Create the directory where the new software will be installed:

/u01/app/oemcc13cR2/Middleware

2. Run the installer from the download and unzip location.

$ chmod u+x em13200_linux64.bin
$ ./em13200_linux64.bin

Uncheck the security updates checkbox and click the "Next" button.
Click the "Yes" button on the subsequent warning dialog.

If you wish to check for updates, enter the required details, or check the “Skip” option and click the “Next” button.

If you have performed the prerequisites as described, the installation should pass all prerequisite checks. Click the “Next” button.

Select the “Upgrade an existing Enterprise Manager System” option. Select the “One-System Upgrade” option. Select the OMS to be upgraded, then click the “Next” button.

Enter the new middleware home location, I used “/u01/app/oemcc13cR2/Middleware”, then click the “Next” button.

Enter the passwords for the SYS and SYSMAN users and check both the check boxes, then click the “Next” button.

If you want to change the ASM password, you can use the following:

orapwd file='+DATA' dbuniquename='OEMCC01' password=xxx

 

Now you may see the warning "The upgrade process has detected MD5 certificate usage for some agent-OMS communication".

I referred to the MOS note below for securing all clients and the OMS:

EM 13.2: SHA2 Certificate Pre-upgrade Verification for OMS & Agent (Doc ID 2179909.1)

Please create CA certificates

Download the zip file SecureCommunicationScan.zip to the OMS host. The archive contains three folders (/bin, /data, /metadata).

Unzip SecureCommunicationScan.zip into a SecureCommunicationScan directory.

Install/setup of Deployment Procedure

Register Deployment Procedure

$OMS_HOME/bin/emctl register oms metadata -service procedures -core -sysman_pwd XXXXX -file /home/oracle/SecureCommunicationScan/metadata/procedure/Secure_communications_scan.xml

Register Software Library

emctl register oms metadata -service swlib -file "/home/oracle/SecureCommunicationScan/metadata/procedure/Secure_communications_scan.xml" -core

Execution of Deployment Procedure:-

Create an input file CheckForMD5Usage.input using the template below, replacing the sample entries with values from your environment.

#*****BEGIN FILE*******
agentBeingTested.0.name=appl:1838
agentBeingTested.0.type=oracle_emd
agentBeingTested.0.defaultHostCred=NAME:SYSMAN:appl
agentBeingTested.1.name=orcldb:1838
agentBeingTested.1.type=oracle_emd
agentBeingTested.1.defaultHostCred=NAME:SYSMAN:orcl
#Add more agents
Mgmt_Rep_Outer.0.name=Management Services and Repository
Mgmt_Rep_Outer.0.type=oracle_emrep
Mgmt_Rep_Outer.0.defaultHostCred=NAME:SYSMAN:CREDNAME
util_loc=/home/oracle/SecureCommunicationScan/
dbusername=sysman
dbpasswd=XXX
connectstr=(DESCRIPTION\=(ADDRESS_LIST\=(ADDRESS\=(PROTOCOL\=TCP)(HOST\=orcldb)(PORT\=1613)))(CONNECT_DATA\=(SID\=OEMCC01)))
#*******END FILE********

 

$OMS_HOME/bin/emcli submit_procedure -name=CheckForMD5Usage -input_file=data:/home/oracle/CheckForMD5Usage.input

Run the command below to create a new OEM Certificate Authority

[oracle@orcl ~]$ $OMS_HOME/bin/emctl secure createca
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Creating CA… Started.
Enter Enterprise Manager Root (SYSMAN) Password :
Successfully created CA with ID 2

Please stop OMS and restart

[oracle@orcl ~]$ $OMS_HOME/bin/emctl stop oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Stopping Oracle Management Server…
Oracle Management Server Successfully Stopped
Oracle Management Server is Down
JVMD Engine is Down

[oracle@orcl ~]$ $OMS_HOME/bin/emctl start oms
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Starting Oracle Management Server…
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server …
BI Publisher Server Already Started
BI Publisher Server is Up

Check the login now with SYSMAN

[oracle@orcl ~]$ /u01/app/oemcc/middleware/bin/emcli login -username=SYSMAN
Enter password :

Login successful

Please secure all clients now

[oracle@orcl ~]$ /u01/app/oemcc/middleware/bin/emcli get_ca_info -details > agentlist.txt

Check the agentlist.txt file, then secure the agents using the command below for each host where an agent is running.

[oracle@orcl ~]$ /u01/app/oemcc/middleware/bin/emcli secure_agents -agt_names="appl:3872" -username="oracle" -password="XXX"
Summary :
Number of valid agents provided for secure : 1, Filtered : 0, Selected : 1
Details :
Number of agents provided for secure (input) : 1
After merging and removing invalid/duplicate agents : 1

Job “SECUREAGENTS JOB 2016-Dec-22 01:48:37” submitted for securing 1 agents
Job ID: 443A783632C06169E0531224540AF536
Execution ID: 443A783632C36169E0531224540AF536

Now check the list again and generate a report showing which clients are already secured.

[oracle@orcl ~]$ /u01/app/oemcc/middleware/bin/emcli get_ca_info -details

Info about CA with ID: 1
CA is not configured
Signature algorithm : sha512
Key strength : 1024
DN: CN=orcl, C=US, ST=CA, L=EnterpriseManager on orcl, OU=EnterpriseManager on orcl, O=EnterpriseManager on orcl
Serial# : -2835457060041615466
Valid From: Wed Nov 09 09:50:34 PET 2016
Valid Till: Sun Nov 08 09:50:34 PET 2026
There are no Agents registered with CA ID 1

Info about CA with ID: 2
CA is configured
Signature algorithm : sha512
Key strength : 1024
DN: CN=orcl, C=US, ST=CA, L=CA2, OU=EnterpriseManager on orcl, O=EnterpriseManager on orcl
Serial# : -4458307202098057612
Valid From: Tue Dec 20 03:12:48 PET 2016
Valid Till: Sat Dec 19 03:12:48 PET 2026
Number of Agents registered with CA ID 2 is 16
app:3872
dbl:3872

Now secure the OMS:

[oracle@orcl ~]$ $OMS_HOME/bin/emctl secure oms -console
Oracle Enterprise Manager Cloud Control 13c Release 1
Copyright (c) 1996, 2015 Oracle Corporation. All rights reserved.
Securing OMS… Started.
Enter Enterprise Manager Root (SYSMAN) Password :
Enter Agent Registration Password :
Securing OMS… Successful
Restart OMS

Please proceed next.

Please check which plugins will be upgraded.

Select any additional plug-ins you want to deploy, then click the “Next” button.

Enter the WebLogic details, then click the “Next” button. Just add a number on to the end of the OMS Instance Base Location specified by default. I used “/u01/app/oemcc13cR2/gc_inst”.

The BI Publisher shared storage location option is left disabled.

Accept the default ports by clicking the “Next” button.

Please review information, click the “Upgrade” button.

Wait while the installation and configuration take place.

When prompted, run the root scripts, then click the “OK” button.

Make note of the URLs, then click the "Close" button to exit the installer.

This information is also available at:

/u01/app/oemcc13cR2/Middleware/install/setupinfo.txt

Please review the URL now.

See the following for information pertaining to your Enterprise Manager installation:

Use the following URL to access:

1. Enterprise Manager Cloud Control URL: https://orcl:7803/em
2. Admin Server URL: https://orcl:7102/console
3. BI Publisher URL: https://orcl:9803/xmlpserver

The following details need to be provided while installing an additional OMS:

1. Admin Server Host Name: orcl
2. Admin Server Port: 7102

You can find the details on ports used by this deployment at : /u01/app/oemcc13cR2/Middleware/install/portlist.ini

NOTE:
An encryption key has been generated to encrypt sensitive data in the Management Repository. If this key is lost, all encrypted data in the Repository becomes unusable.

A backup of the OMS configuration is available in /u01/app/oemcc13cR2/gc_inst/em/EMGC_OMS1/sysman/backup on host orcl. See Cloud Control Administrators Guide for details on how to back up and recover an OMS.

NOTE: This backup is valid only for the initial OMS configuration. For example, it will not reflect plug-ins installed later, topology changes like the addition of a load balancer, or changes to other properties made using emctl or emcli. Backups should be created on a regular basis to ensure they capture the current OMS configuration. Use the following command to backup the OMS configuration:
/u01/app/oemcc13cR2/Middleware/bin/emctl exportconfig oms -dir <backup dir>

Please upgrade the agent now

 

Run root.sh from the following directory on the OEMCC Cloud Control server.

cd /u01/app/oemcc/agent/agent_13.2.0.0.0
sh root.sh

For the other managed servers, run root.sh in each agent home:

[root@olg1prbscsdbf02 agent_13.2.0.0.0]# cd /u01/oemcc/agent_13.2.0.0.0

[root@olg1prbscsdbf02 agent_13.2.0.0.0]# sh root.sh

Reference:-

EM 13.2: SHA2 Certificate Pre-upgrade Verification for OMS & Agent (Doc ID 2179909.1)


Oracle timestamp showing the wrong time zone when connecting remotely over TNS

Time stamp problem:-

sqlplus system/****@ORCL

SQL> alter session set nls_date_format='dd-mm-yyyy hh:mi:ss';

SQL> select instance_name from v$instance;

SQL> select sysdate, current_timestamp, systimestamp, localtimestamp from dual;

The queries above will show inconsistent times.

How to Fix:-

1.1 Stop the apps and the DB cluster/HAS.

su - root

cd /orasw/app/grid/product/11.2.0/grid/bin

./crsctl stop crs -f

su - grid

cd /orasw/app/grid/product/11.2.0/grid/bin
./crsctl stop has

1.2 Correct the TZ setting

Correct the TZ setting in the file $GRID_HOME/crs/install/s_crsconfig_<hostname>_env.txt.

Comment out the old TZ value (#TZ=GMT-08:00) and add the new TZ value (TZ=US/Pacific). Do not modify any other value.
su - root
cd /orasw/app/grid/product/11.2.0/grid/crs/install/
vi s_crsconfig_<hostname>_env.txt
TZ=US/Pacific
#TZ=GMT-08:00
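The clusterware processes read local time through the TZ environment variable set in this file, which is why correcting it changes the timestamps the database reports. A quick illustration of the same wall-clock instant rendered under two TZ values (GNU date assumed):

```shell
# The same epoch instant rendered under two TZ settings; only the
# environment variable differs, not the underlying clock.
TZ=UTC date -d @1480000000 '+%Z %H:%M'         # UTC rendering
TZ=US/Pacific date -d @1480000000 '+%Z %H:%M'  # Pacific rendering (PST, UTC-8)
```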

1.3 Restart the cluster and DB/HAS

su - root
cd /orasw/app/grid/product/11.2.0/grid/bin
./crsctl start crs

For single-instance HAS:

su - grid

cd /orasw/app/grid/product/11.2.0/grid/bin
./crsctl start has

Checking the timestamp now:-

sqlplus system/****@ORCL

SQL> alter session set nls_date_format='dd-mm-yyyy hh:mi:ss';

SQL> select instance_name from v$instance;

SQL> select sysdate, current_timestamp, systimestamp, localtimestamp from dual;

The queries above will now show consistent times.

PostgreSQL Database Backup to S3 and Restore from S3 using simple shell script

The following topic discusses how to back up and restore PostgreSQL databases directly to and from S3, leveraging the AWS S3-IA storage class as backup storage for any number of PostgreSQL databases running under one instance.

This applies to PostgreSQL instances running on EC2 or on on-premises servers.

Step 1:- Create a new mount point or directory on the database server to use as a staging location for the backup files before they are moved to S3.

Example:

 

mkdir /postgresql_bkp

chown postgres:postgres /postgresql_bkp

chmod -R 700 /postgresql_bkp

Step 2:- Now Create a bucket in S3 and inside the bucket create a directory.

Example: s3://phi-postgresql-backup/postgresql01-pgbackup

Step 3:- Now install the AWS CLI on the DB server under the postgres user, following the AWS guide:

http://docs.aws.amazon.com/cli/latest/userguide/awscli-install-linux.html

Configure AWS CLI

http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

Step 4:- Now configure the password of the postgres user so that when pg_dump or psql runs from a script, the password can be picked up from the environment.

In the postgres user's home directory, create a .pgpass file:

vi $HOME/.pgpass

localhost:5432:postgres:postgres:somepassword
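One detail worth adding: libpq ignores a .pgpass file unless it is readable only by its owner, so lock the permissions down after creating it:

```shell
# .pgpass is silently ignored by libpq unless its mode is 0600 (or
# stricter), so restrict it right after creation:
touch "$HOME/.pgpass"
chmod 600 "$HOME/.pgpass"
stat -c '%a' "$HOME/.pgpass"   # → 600
```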

In .bash_profile, make the following entries:

vi $HOME/.bash_profile

[ -f /etc/profile ] && source /etc/profile
PGDATA=/var/lib/pgsql/9.6/data
export PGDATA
PATH=$PATH:/usr/pgsql-9.6/bin
export PATH
PGPASSFILE=/var/lib/pgsql/.pgpass
export PGPASSFILE
PGPASSWORD=somepassword
export PGPASSWORD
SCRIPTS=/var/lib/pgsql/9.6/data/scripts
export SCRIPTS
# If you want to customize your settings,
# Use the file below. This is not overridden
# by the RPMS.
[ -f /var/lib/pgsql/.pgsql_profile ] && source /var/lib/pgsql/.pgsql_profile
export PATH=~/.local/bin:$PATH

Step 5:- Now we will create the scripts to back up the PostgreSQL databases.

In Step 4, we configured $SCRIPTS to point to the directory /var/lib/pgsql/9.6/data/scripts, where we will place our scripts.

The backup script will do the following:-

=> Back up all the databases running in the PostgreSQL instance.

=> Move the backup files one by one to the S3 bucket in the S3-IA storage class and remove each backup file from local storage.

=> Delete backup files older than 7 days from the S3 bucket.

cd $SCRIPTS

vi pg_dump_s3.sh

#!/bin/bash

set -e

# Database credentials
PG_HOST="localhost"
PG_USER="postgres"

# S3
S3_PATH="phi-postgresql-backup/postgresql01-pgbackup"

# Get the list of databases
dbs=`psql -l -t | cut -d'|' -f1 | sed -e 's/ //g' -e '/^$/d'`

# Vars
NOW=$(date +"%m-%d-%Y-at-%H-%M-%S")
#DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
DIR="/postgresql_bkp/pgsql_backup_logical"
SCRIPT="/var/lib/pgsql/9.6/data/scripts"

echo "PostgreSQL Database Logical Backup is started and stored in s3://phi-postgresql-backup/postgresql01-pgbackup"
for db in $dbs; do
  if [ "$db" != "template0" ] && [ "$db" != "template1" ]; then
    # Dump database
    pg_dump -Fc -h $PG_HOST -U $PG_USER $db > $DIR/"$NOW"_"$db".dump
    # Copy to S3
    aws s3 cp $DIR/"$NOW"_"$db".dump s3://$S3_PATH/"$NOW"_"$db".dump --storage-class STANDARD_IA
    echo "* Database:" $db "is archived at timestamp:" $NOW
    # Delete local file
    rm $DIR/"$NOW"_"$db".dump
  fi
done

# Delete old files
echo "Following backups older than 7 days are deleted from s3://phi-postgresql-backup/postgresql01-pgbackup";
$SCRIPT/s3_autodelete.sh $S3_PATH "7 days"

vi s3_autodelete.sh

#!/bin/bash

# Usage:
# ./s3_autodelete.sh bucket/path "7 days"

set -e

# Maximum date (will delete all files older than this date)
maxDate=`date +%s --date="-$2"`

# Loop through the files
aws s3 ls s3://$1/ | while read -r line; do
  # Get file creation date
  createDate=`echo $line | awk '{print $1" "$2}'`
  createDate=`date -d "$createDate" +%s`

  if [[ $createDate -lt $maxDate ]]
  then
    # Get file name
    fileName=`echo $line | awk '{print $4}'`
    if [[ $fileName != "" ]]
    then
      echo "* Delete $fileName";
      aws s3 rm s3://$1/$fileName
    fi
  fi
done;
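The retention check above boils down to comparing two epoch-second values. The same logic with fixed, hypothetical dates standing in for the live `aws s3 ls` output (GNU date assumed):

```shell
# The cutoff in s3_autodelete.sh is "now minus the retention window";
# here both dates are fixed so the comparison is deterministic.
maxDate=$(date -d "2018-01-08" +%s)              # cutoff: 7 days before a notional 2018-01-15
createDate=$(date -d "2018-01-01 09:30:00" +%s)  # listing date of one backup object

if [ "$createDate" -lt "$maxDate" ]; then
  echo "older than cutoff - would be deleted"
fi
```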

Step 6:- Now schedule pg_dump_s3.sh in cron to run every day, so the latest backup goes to S3 and old backups are deleted from S3.
sudo crontab -u postgres -e

0 23 * * * /var/lib/pgsql/9.6/data/scripts/pg_dump_s3.sh > /var/lib/pgsql/9.6/data/log/pg_dump_s3-`date +\%Y\%m\%d\%H\%M\%S` 2>&1
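Note that % is a special character in a crontab entry (unescaped, it is treated as a newline), which is why the date format in the log file name is written with backslashes. When the command runs, the escaped format resolves to an ordinary timestamp; shown here with a fixed instant so the result is reproducible (GNU date assumed):

```shell
# What \%Y\%m\%d\%H\%M\%S in the crontab becomes at run time:
date -d "2016-12-22 01:48:37" "+%Y%m%d%H%M%S"   # → 20161222014837
```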

Step 7:- This step explains the restore from S3. The only assumption made here is that the DB name is a single word, which the script derives from the backup file name itself. If the DB name contains any special characters, the line in the restore script that derives dbName must be changed accordingly.

The script will do the following:-

=> From the list of available backups in the S3 bucket, the script will identify the database name and the backup file.

=> Fetch the backup from S3 to local storage.

=> Drop old database and create new database with the same owner, tablespace and privileges.

=> Restore the dump to database.
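The name derivation in the first step can be seen in isolation. A sketch of the same awk pipeline used in pg_restore_s3.sh, applied to a hypothetical dump file in the <timestamp>_<dbname>.dump format the backup script produces:

```shell
# Hypothetical dump name in the backup script's %m-%d-%Y-at-%H-%M-%S format
DUMP_OBJECT="01-15-2018-at-23-00-01_salesdb.dump"

# Split on "_" to drop the timestamp, then on "." to drop the extension
dbName=$(echo "$DUMP_OBJECT" | awk -F _ '{print $2}' | awk -F . '{print $1}')
echo "$dbName"   # → salesdb
```

This is also why a DB name containing "_" or "." would break the extraction and require adjusting that line.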

cd $SCRIPTS

vi pg_restore_s3.sh
#!/bin/bash
set -e

# Usage:
# ./pg_restore_s3.sh backup_file_name

AWS_BUCKET="phi-postgresql-backup/postgresql01-pgbackup"
DUMP_OBJECT=$1
DIR="/postgresql_bkp/pgsql_backup_logical"

echo "Postgres restore from s3 - looking for dump in s3 at s3://phi-postgresql-backup/postgresql01-pgbackup"
if [ -n "${DUMP_OBJECT}" ]; then
  objectSet=$(aws s3 ls s3://${AWS_BUCKET}/${DUMP_OBJECT} | awk '{print $4}')
  if [ -z "$objectSet" ]; then
    echo "Backup file not found in s3 bucket"
  else
    echo "Downloading dump from s3 - $DUMP_OBJECT"
    aws s3 cp s3://phi-postgresql-backup/postgresql01-pgbackup/${DUMP_OBJECT} ${DIR}/${DUMP_OBJECT}
    # Derive the database name from the dump file name
    dbName=$(echo $DUMP_OBJECT | awk -F _ '{print $2}' | awk -F . '{print $1}')
    echo "dropping old database $dbName"
    dbPgOwner=$(echo "SELECT r.rolname FROM pg_catalog.pg_database d JOIN pg_catalog.pg_roles r ON d.datdba = r.oid JOIN pg_catalog.pg_tablespace t ON d.dattablespace = t.oid WHERE d.datname = '$dbName';" | psql 2>&1)
    dbOwner=$(echo $dbPgOwner | awk '{print $3}')
    dbPgTblspc=$(echo "SELECT t.spcname FROM pg_catalog.pg_database d JOIN pg_catalog.pg_roles r ON d.datdba = r.oid JOIN pg_catalog.pg_tablespace t ON d.dattablespace = t.oid WHERE d.datname = '$dbName';" | psql 2>&1)
    dbTblspc=$(echo $dbPgTblspc | awk '{print $3}')
    PRE_RESTORE_PSQL="GRANT ALL PRIVILEGES ON DATABASE $dbName TO $dbOwner; REVOKE CONNECT ON DATABASE $dbName FROM PUBLIC;"
    dropResult=$(echo "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = '$dbName'; \
DROP DATABASE $dbName;" | psql 2>&1)
    if echo $dropResult | grep "other session using the database" > /dev/null; then
      echo "RESTORE FAILED - another database session is preventing drop of database $dbName"
      exit 1
    fi
    createResult=$(echo "CREATE DATABASE $dbName OWNER $dbOwner TABLESPACE = $dbTblspc;" | psql 2>&1)
    echo "postgres restore from s3 - filling target database with dump"
    if [ -n "$PRE_RESTORE_PSQL" ]; then
      echo "postgres restore from s3 - executing pre-restore psql"
      printf %s "$PRE_RESTORE_PSQL" | psql
    fi
    pg_restore -d $dbName $DIR/$DUMP_OBJECT
  fi
  echo "Postgres restore from s3 - complete - $DUMP_OBJECT $dbName"
else
  echo "Please pass the backup dump file name as script argument"
fi

Using the scripts above, you can store backups of multiple PostgreSQL databases directly in S3 and restore any particular dump file to PostgreSQL without manual effort.