ORA-01111: name for data file is unknown – rename to correct file

Error in the Data Guard alert log when starting the managed recovery process:-

SYS@XXX>alter database recover managed standby database;
alter database recover managed standby database
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-01111: name for data file 61 is unknown – rename to correct file
ORA-01110: data file 61: ‘/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061’
ORA-01157: cannot identify/lock data file 61 – see DBWR trace file
ORA-01111: name for data file 61 is unknown – rename to correct file
ORA-01110: data file 61: ‘/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061’

 

Solution:-

Check the exact size of the datafile in primary for file_id=61
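For example (a sketch; adjust the file_id to the one reported in the error), run this on the primary and note the BYTES value, which is used as the SIZE clause of the CREATE DATAFILE command below:

SQL> SELECT file_id, file_name, bytes FROM dba_data_files WHERE file_id = 61;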

In Standby,

SYS@XXX>alter system set standby_file_management=manual;

System altered.

SYS@XXX>alter database create datafile '/u01/app/oracle/product/12.1.0/db_1/dbs/UNNAMED00061' as '+DATA/' size 34358689792;

Database altered.

SYS@XXX>alter system set standby_file_management=auto;

System altered.

Now you will be able to start the managed recovery process.
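For example, the standard command is (shown here for completeness):

SQL> alter database recover managed standby database disconnect from session;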

Restore Archivelog from particular sequence to another location using tape backup

Error in DR alert log:-

FAL[client]: Failed to request gap sequence
GAP – thread 2 sequence 9432-9473
DBID 2085418592 branch 924361120
FAL[client]: All defined FAL servers have been attempted.
————————————————————
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that’s sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
————————————————————

Now, in the primary, do the following steps to restore the missing archive logs:

export ORACLE_SID=INSTANCE_NAME

rman target / catalog rcat/***@RCAT

run {
 allocate channel 'dev_00' type 'sbt_tape' parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OCRMPD1,OB2BARLIST=CDV-PRD-DB-OCRMPD1_MENSUAL)';
 allocate channel 'dev_01' type 'sbt_tape' parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OCRMPD1,OB2BARLIST=CDV-PRD-DB-OCRMPD1_MENSUAL)';
 allocate channel 'dev_02' type 'sbt_tape' parms 'SBT_LIBRARY=/opt/omni/lib/libob2oracle8_64bit.so,ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=OCRMPD1,OB2BARLIST=CDV-PRD-DB-OCRMPD1_MENSUAL)';
 set archivelog destination to '/NFSPREPROD/NEW';
 RESTORE ARCHIVELOG FROM sequence 20453 thread 1;

RESTORE ARCHIVELOG FROM sequence 9432 thread 2;
 }

Monitoring steps for a Logical Standby Database

1.V$LOGSTDBY_PROCESS displays dynamic information about what is happening to the Data Guard log apply services. This view is helpful when diagnosing performance problems during the logical application of archived redo logs to the standby database, and it can be helpful for other problems. This view is for logical standby databases only.

 

INPUT QUERY:-

 

SELECT STATUS FROM V$LOGSTDBY_PROCESS WHERE TYPE = 'READER';

SELECT * FROM V$LOGSTDBY_PROCESS;

 

2.V$LOGMNR_DICTIONARY_LOAD displays information about LogMiner dictionaries for all active LogMiner sessions on the system. Each query of this view will return one row for each attached LogMiner session

 

INPUT QUERY:-

 

SELECT PERCENT_DONE, COMMAND
FROM V$LOGMNR_DICTIONARY_LOAD
WHERE SESSION_ID = (SELECT SESSION_ID FROM V$LOGSTDBY_STATE);

 

3. The following query can be used to check the apply process and which archived logs are currently being mined.

 

INPUT QUERY:-

 

select thread# trd, sequence#,
first_change#, next_change#,
dict_begin beg, dict_end end,
to_char(timestamp, 'hh:mi:ss') timestamp,
(case when l.next_change# < p.read_scn then 'YES'
when l.first_change# < p.applied_scn then 'CURRENT'
else 'NO' end) applied
from dba_logstdby_log l, dba_logstdby_progress p
order by thread#, first_change#;

 

4. The DBA_LOGSTDBY_EVENTS view records interesting events that occurred during the operation of SQL Apply. By default, the view records the most recent 100 events. However, you can change the number of recorded events by calling the DBMS_LOGSTDBY.APPLY_SET procedure. If SQL Apply stops unexpectedly, the reason for the problem is also recorded in this view.
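For example, to retain more events (a sketch; MAX_EVENTS_RECORDED is the documented parameter name, the value 200 is only an example, and depending on version SQL Apply may need to be stopped before changing it):

SQL> EXECUTE DBMS_LOGSTDBY.APPLY_SET('MAX_EVENTS_RECORDED', '200');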

INPUT QUERY:-

 

SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS ORDER BY EVENT_TIME, COMMIT_SCN;

 

5.V$LOGSTDBY displays dynamic information about what is happening to the Data Guard log apply services. This view is very helpful when diagnosing performance problems during the logical application of archived redo logs to the standby database, and it can be helpful for other problems. This view is for logical standby databases only

 

INPUT QUERY:-

SELECT TYPE, STATUS, HIGH_SCN  FROM V$LOGSTDBY;

 

6.The DBA_LOGSTDBY_LOG view provides dynamic information about archived logs being processed by SQL Apply

 INPUT QUERY:-

select a.sequence_received,a.thread,b.sequence_applied,b.thread from (select max(sequence#) sequence_received,thread# thread from dba_logstdby_log group by thread#) a,
(select max(sequence#) sequence_applied,thread# thread from dba_logstdby_log where applied='YES' group by thread#) b where a.thread=b.thread;

 

7.This query is used to check latency between primary and logical standby database.

 

INPUT QUERY:-

 

select latest_time,applied_time,mining_time, (latest_time-applied_time)*24*3600 gap_in_second from V$logstdby_progress;

 

 

 

Logical Standby Database setup step by step from RAC to Single Instance in 11g


 

Primary DB: CLRVMS (ip:10.3.248.5 and 10.3.248.8)

Logical Standby DB: CLRVMS_B(ip:10.3.248.129)

 

1. Check whether the primary database is in archivelog mode; it must be in archivelog mode.

 

select log_mode from v$database;

LOG_MODE

————

ARCHIVELOG

 

2. Enable Force Logging:

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

 

3. Configure standby redo logs on the PRIMARY.

 

Check the online redo log files and their sizes on the primary:

SQL>SELECT GROUP#,BYTES FROM V$LOG;

Create standby redo logs on the primary. The number of standby redo log groups should be at least the number of online redo log groups (one additional group per thread is recommended), and each group must be the same size as the online redo logs:

 

ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 524288000;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 524288000;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 SIZE 524288000;

ALTER DATABASE ADD STANDBY LOGFILE GROUP 8 SIZE 524288000;

 

Verify that the standby redo log file groups were created (repeat this check after the standby database has been created):

SQL> SELECT GROUP#,THREAD#,SEQUENCE#,ARCHIVED,STATUS,bytes FROM V$STANDBY_LOG;

SQL> select group#,bytes from v$standby_log;

4. Add the following Data Guard related entries to the existing spfile or pfile of the primary database CLRVMS to configure the logical standby database (in particular log_archive_config, log_archive_dest_2, log_archive_dest_state_2, fal_client, fal_server, db_file_name_convert, log_file_name_convert and standby_file_management).

 

CLRVMS2.__db_cache_size=4496293888
CLRVMS1.__db_cache_size=4429185024
CLRVMS2.__java_pool_size=33554432
CLRVMS1.__java_pool_size=33554432
CLRVMS1.__large_pool_size=100663296
CLRVMS2.__large_pool_size=33554432
CLRVMS1.__oracle_base=’/opt/app/oracle’#ORACLE_BASE set from environment
CLRVMS2.__oracle_base=’/opt/app/oracle’#ORACLE_BASE set from environment
CLRVMS2.__pga_aggregate_target=3120562176
CLRVMS1.__pga_aggregate_target=3120562176
CLRVMS2.__sga_target=5804916736
CLRVMS1.__sga_target=5804916736
CLRVMS2.__shared_io_pool_size=0
CLRVMS1.__shared_io_pool_size=0
CLRVMS2.__shared_pool_size=1140850688
CLRVMS1.__shared_pool_size=1140850688
CLRVMS2.__streams_pool_size=33554432
CLRVMS1.__streams_pool_size=33554432
*._kghdsidx_count=1
*._log_deletion_policy=’ALL’
*._shared_pool_reserved_pct=10
*.audit_file_dest=’/opt/app/oracle/admin/CLRVMSTT/adump’
*.audit_trail=’DB’
*.cluster_database=true
*.compatible=’11.2.0.0.0′
*.control_files=’+LOG/clrvms/controlfile/current.763.792022797′
*.db_block_size=8192
*.db_create_file_dest=’+DATA’
*.db_create_online_log_dest_1=’+LOG’
*.db_domain=”
*.db_file_name_convert=’+DATA1′,’+DATA’
*.db_name=’CLRVMS’
*.db_recovery_file_dest=’+LOG’
*.db_recovery_file_dest_size=536870912000
*.deferred_segment_creation=FALSE
*.diagnostic_dest=’/opt/app/oracle’
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLRVMSXDB)’
*.fal_client=’CLRVMS’
*.fal_server=’CLRVMS_B’
CLRVMS2.instance_number=2
CLRVMS1.instance_number=1
*.job_queue_processes=10
*.log_archive_config=’DG_CONFIG=(CLRVMS,CLRVMS_B)’
*.log_archive_dest_1=’LOCATION=+LOG’
*.log_archive_dest_2=’SERVICE=CLRVMS_B VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CLRVMS_B’
*.log_archive_dest_state_2=’ENABLE’
*.log_archive_format=’%t_%s_%r.dbf’
*.log_file_name_convert=’+LOG1′,’+LOG’
*.memory_max_target=8912896000
*.memory_target=6878658560
CLRVMS2.memory_target=8925478912
CLRVMS1.memory_target=8925478912
*.nls_length_semantics=’CHAR’
*.open_cursors=4000
*.processes=600
*.query_rewrite_enabled=’TRUE’
*.query_rewrite_integrity=’TRUSTED’
*.remote_listener=’vms-db-scan.nmc.vms:1521′
*.remote_login_passwordfile=’exclusive’
*.sec_case_sensitive_logon=FALSE
*.sga_target=0
CLRVMS2.sga_target=0
CLRVMS1.sga_target=0
*.shared_pool_size=0
CLRVMS1.shared_pool_size=0
CLRVMS2.shared_pool_size=1073741824
*.standby_file_management=’AUTO’
CLRVMS2.thread=2
CLRVMS1.thread=1
*.undo_retention=28800
CLRVMS1.undo_tablespace=’UNDOTBS1′
CLRVMS2.undo_tablespace=’UNDOTBS2′

 

5. Create the parameter file for the standby from the primary database and change the relevant parameters, as shown in the sample below:

CREATE PFILE='/tmp/initCLRVMS_B.ora' from spfile;

 

Sample Standby init.ora file will look below:-

CLRVMS_B.__db_cache_size=1845493760
CLRVMS_B.__java_pool_size=16777216
CLRVMS_B.__large_pool_size=16777216
CLRVMS_B.__oracle_base=’/data/app/oracle11g’#ORACLE_BASE set from environment
CLRVMS_B.__pga_aggregate_target=1493172224
CLRVMS_B.__sga_target=2801795072
CLRVMS_B.__shared_io_pool_size=0
CLRVMS_B.__shared_pool_size=872415232
CLRVMS_B.__streams_pool_size=16777216
*._kghdsidx_count=1
*._shared_pool_reserved_pct=10
*.audit_file_dest=’/data/app/oracle11g/admin/CLRVMS_B/adump’
*.audit_trail=’DB’
*.compatible=’11.2.0.0.0′
*.control_files=’+LOG1/clrvms/controlfile/control01.ctl’#Restore Controlfile
*.db_block_size=8192
*.db_create_file_dest=’+DATA1′
*.db_create_online_log_dest_1=’+LOG1′
*.db_domain=”
*.db_file_name_convert=’+DATA’,’+DATA1′
*.db_name='CLRVMS' # DB_NAME must still match the primary here; it is changed only by the later RECOVER TO LOGICAL STANDBY step
*.db_recovery_file_dest=’+LOG1′
*.db_recovery_file_dest_size=536870912000
*.db_unique_name=’CLRVMS_B’
*.deferred_segment_creation=FALSE
*.diagnostic_dest=’/data/app/oracle11g’
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLRVMSXDB)’
*.fal_client=’CLRVMS_B’
*.fal_server=’CLRVMS’
*.filesystemio_options=’setall’
*.job_queue_processes=5
*.log_archive_config=’DG_CONFIG=(CLRVMS,CLRVMS_B)’
*.log_archive_dest_1=’LOCATION=+LOG1′
*.log_archive_dest_2=’SERVICE=CLRVMS LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=CLRVMS’
*.log_archive_format=’%t_%s_%r.dbf’
*.log_file_name_convert=’+LOG’,’+LOG1′
*.memory_max_target=4G
*.memory_target=4G
*.nls_length_semantics=’CHAR’
*.open_cursors=4000
*.processes=600
*.query_rewrite_enabled=’TRUE’
*.query_rewrite_integrity=’TRUSTED’
*.remote_login_passwordfile=’exclusive’
*.sec_case_sensitive_logon=FALSE
*.standby_file_management=’AUTO’
*.undo_management=’AUTO’
*.undo_retention=28800

 

6. Connect using RMAN to take a backup of the primary database:

 

export ORACLE_SID=CLRVMS1

rman target / nocatalog log=/tmp/rman_bkp.log << EOF1
run
{
backup as compressed backupset database format '/export/home/oracle/stdby/CLRVMS_%U';
backup as compressed backupset archivelog all format '/export/home/oracle/stdby/CLRVMS_ARCH_%U';
backup current controlfile for standby format '/export/home/oracle/stdby/stdby.ctl';
}
exit;
EOF1

7. Copy the backup files (backup sets, standby controlfile, archive logs and password file) to the same location on the standby database server.

 

8. Start the standby database in NOMOUNT mode.

 

On standby server

 

export ORACLE_SID=CLRVMS_B

SQL>startup nomount

9. Create a password file for the standby database:

 

export ORACLE_SID=CLRVMS_B
orapwd file=orapwCLRVMS_B  password=******
OR

copy the Primary DB password file and rename it to standby db name.

Ex:-

cp orapwCLRVMS1 orapwCLRVMS_B

 

10. Check the checksums of both password files; they should be identical:

 

cksum orapwCLRVMS1

cksum orapwCLRVMS_B

 

11. Establish connectivity between the primary (both nodes) and the standby. Add the following entries to the primary tnsnames.ora:

CLRVMS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vms-db-scan.nmc.vms)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CLRVMS)
)
)

CLRVMS_B =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.248.129)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CLRVMS_B)
)
)

NOTE: Use TNSPING <servicename> to check the connectivity between PRIMARY and STANDBY

Please add following entry in standby tnsnames.ora

CLRVMS_B =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.3.248.129)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CLRVMS_B)
)
)

CLRVMS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vms-db-scan.nmc.vms)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = CLRVMS)
)
)

 

 

12. Start the STANDBY instance and use RMAN duplicate to create the standby database. NOTE: make sure the ASM instance is also running.

$export ORACLE_SID=CLRVMS_B

sqlplus / as sysdba

SQL> startup nomount
SQL> create spfile from pfile='/tmp/initCLRVMS_B.ora';
SQL> exit

 

$export ORACLE_SID=CLRVMS_B

rman target sys/******@CLRVMS auxiliary /

RMAN> RUN {
allocate auxiliary channel C1 device type disk;
duplicate target database for standby;
}

13. Create SRLs (standby redo logs) on the standby.

 

NOTE:SIZE OF STANDBY LOGFILE SHOULD BE SAME AS ONLINE LOGFILE

For Example,

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 100M;

 

N.B.: This is not necessary if the standby logfiles already exist.

 

14. Enable redo transport to the standby from the primary:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

 

15. To enable real-time redo apply on the standby:

SQL> alter database recover managed standby database cancel;

SQL> alter database recover managed standby database using current logfile disconnect;

16. Make sure that the physical standby is in sync with the primary database.

    Use following query on Standby to check:

SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied"
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

Thread Last Sequence Received Last Sequence Applied

———- ———————- ———————

1                 233996                233996

2                 282421                282421

 

17. Stop redo apply on the physical standby database (CLRVMS_B):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

  

18. Build a LogMiner dictionary in the redo data on the primary database:

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
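Before running the build, you can optionally check for open transactions that would delay it. A sketch using the standard GV$TRANSACTION and GV$SESSION views:

SQL> SELECT s.inst_id, s.sid, s.serial#, t.start_time
     FROM gv$transaction t, gv$session s
     WHERE t.ses_addr = s.saddr AND t.inst_id = s.inst_id;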

19. Convert to a Logical Standby Database

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY <new db_name>; (in mount state)

(The control file will be modified with the new database name.)

Ex:

 

SQL> ALTER DATABASE RECOVER TO LOGICAL STANDBY CLRVMS_B;

 

The redo logs will be applied until the Log Miner dictionary is found in the log files. That could take several minutes. If a dictionary build is not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement from another SQL session.

 

20. Shut down and start up the logical standby database in MOUNT stage:

SQL> SHUTDOWN;
SQL> STARTUP MOUNT;

 

 

21. Open the logical standby database:

SQL> ALTER DATABASE OPEN RESETLOGS;

22. Start logical apply on the standby:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

 

23. Skip the schemas/tables that do not need to be replicated.

 

Skipping a table from SQL Apply: if the SQL Apply process hangs on a particular table and you want to skip that table, do the following:

 

  1. First check whether you can skip apply for the SYS.AUD$ table. If yes, set up a skip rule so that transactions related to that table are not applied.
    e.g.:
    stop logical apply
    SQL> exec dbms_logstdby.skip('DML','CLARITY','TABLE_NAME');

start logical apply
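For completeness, the stop/skip/start sequence referenced above looks like this (CLARITY and TABLE_NAME are placeholders from the example; substitute your own schema and table):

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXEC DBMS_LOGSTDBY.SKIP('DML','CLARITY','TABLE_NAME');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;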

 

24. Check whether the logical standby is in sync with the primary:

 

SQL>select a.sequence_received,a.thread,b.sequence_applied,b.thread from (select max(sequence#) sequence_received,thread# thread from dba_logstdby_log group by thread#) a,

(select max(sequence#) sequence_applied,thread# thread from dba_logstdby_log where applied='YES' group by thread#) b where a.thread=b.thread;

 

SEQUENCE_RECEIVED          THREAD SEQUENCE_APPLIED          THREAD

—————– ————— —————- —————

283420               2           283420               2

235031               1           235031               1

 

FAST START FAILOVER (FSFO) CONFIGURATION USING FAR SYNC NODE AS OBSERVER IN DATAGUARD USING DGMGRL


 

1. For the Data Guard DGMGRL configuration and the Far Sync setup, please follow the link below:

http://clouddba.co/step-step-oracle-active-data-guard-far-sync-rac/

 

2. Check the current database status:

DGMGRL> show database  ebilpd1;

 

Database – ebilpd1

 

Role:               PRIMARY

Intended State:     TRANSPORT-ON

Instance(s):

EBILPD11

EBILPD12

 

Database Status:

SUCCESS

 

DGMGRL> show database ebildr1;

Database – ebildr1

Role:               PHYSICAL STANDBY

Intended State:     APPLY-ON

Transport Lag:      0 seconds (computed 0 seconds ago)

Apply Lag:          0 seconds (computed 0 seconds ago)

Average Apply Rate: 5.00 KByte/s

Real Time Query:    OFF

Instance(s):

EBILDR11 (apply instance)

EBILDR12

 

Database Status:

SUCCESS

 

3. Enable SYNC transport between the primary and the standby, and set the fast-start failover targets:

 

EDIT DATABASE EBILDR1 SET PROPERTY RedoRoutes = '(LOCAL : EBILPD1 SYNC)';

EDIT DATABASE EBILPD1 SET PROPERTY RedoRoutes = '(LOCAL : ORCLFS SYNC ALT=(EBILDR1 SYNC FALLBACK))';

edit database EBILDR1 set property FastStartFailoverTarget='EBILPD1';

edit database EBILPD1 set property FastStartFailoverTarget='EBILDR1';

 

4. Enable Flashback Database on both the primary and the standby (fast-start failover requires it):

SQL>alter database flashback on;
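You can confirm it on each database with the standard check:

SQL> select flashback_on from v$database;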

 

5. Now enable fast-start failover from DGMGRL after logging in to the primary:

 

export ORACLE_SID=EBILPD1

dgmgrl /

DGMGRL>  enable fast_start failover ;

Enabled.

 

6. We will get the error below because the observer has not been set up yet:

DGMGRL> show configuration

 

Configuration – DGconfig

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

Error: ORA-16820: fast-start failover observer is no longer observing this database

orclfs  – Far sync instance

ebildr1 – (*) Physical standby database

Error: ORA-16820: fast-start failover observer is no longer observing this database

Fast-Start Failover: ENABLED

 

Configuration Status:

ERROR   (status updated 38 seconds ago)

 

7. Now connect to the Far Sync server from DGMGRL (invoked on the primary) and start the observer.

 

[oracle@EBILPD1 XXX trace]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

 

Copyright (c) 2000, 2013, Oracle. All rights reserved.

 

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> connect sys@orclfs

Password:

Connected as SYSDBA.

DGMGRL> START OBSERVER;

Observer started

 

 

8. From another terminal, log in to the primary node and check the status of the DGMGRL configuration.

[oracle@EBILPD1 xxx ~]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

 

Copyright (c) 2000, 2013, Oracle. All rights reserved.

 

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> show configuration;

 

Configuration – DGconfig

 

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

orclfs  – Far sync instance

ebildr1 – (*) Physical standby database

 

Fast-Start Failover: ENABLED

 

Configuration Status:

SUCCESS   (status updated 53 seconds ago)

 

Step by Step Oracle Active Data Guard Far Sync RAC


Active Data Guard Far Sync, a new capability with Oracle Database 12c, eliminates compromise by extending zero data loss protection to a replica database located at any distance from the primary database. Active Data Guard Far Sync accomplishes this with minimal expense or complexity compared to other extended distance data protection and availability solutions.

We need to ensure zero data loss for various mission-critical databases with minimum impact on production. Oracle Data Guard ensures zero data loss with the 'SYNC' redo transport attribute.

How 'SYNC' works in Data Guard:-

'SYNC' ensures that committed data is written to disk at the standby site as part of the commit. On the primary, the redo generated at commit time is first written to the redo buffer and is also transmitted to a standby redo log, so that the committed data is protected by the standby. The primary does not acknowledge the commit until the redo has been written to the standby redo log at the DR site.

Performance impact of ‘SYNC’

As the distance between primary and standby increases, however, the total round-trip time required to acknowledge the remote log file write can reach a point where it has too great of an impact on database performance to make it practical to support zero data loss protection.

Oracle Active Data Guard Far Sync Feature

 

  • A very slim/lightweight special Data Guard instance with no physical data files of its own; it acts as a middle layer between a primary database and all its standby databases configured far away.

Use instance caging in order to achieve the smallest possible footprint. Reducing CPU_COUNT during testing had no effect on the performance of the Far Sync instance. MAA tests determined that a 300 MB SGA with CPU_COUNT=1 on Linux was sufficient for Far Sync (CPU_COUNT=1, SGA_TARGET=300M).

  • Owns only a server parameter file (spfile), a standby control file and a set of standby redo logs (SRLs). The Far Sync instance should be in MOUNT mode.
  • The purpose of a Far Sync instance is to receive redo from the primary database synchronously and transport it to the remote destinations (a maximum of 29) asynchronously in real time, providing zero data loss failover capability.
  • Supports the Maximum Availability and Maximum Performance protection modes.
  • Advised to deploy/configure it close to the primary database location, roughly 30-150 miles away, to avoid network latency while shipping redo.
  • The best recommended option when standby databases are placed thousands of kilometers away from the primary site.
  • Its sole purpose is to offload/minimize the performance overhead on the primary database.
  • Part of the Active Data Guard option, which requires a separate license.
  • Its role cannot be converted to primary or to any other type of standby database.
  • Consumes minimal server resources: CPU, disk, memory, etc.

 

 

               Primary                  FarSync                  Standby
IP             10.84.1.1, 10.84.1.2     10.84.1.3, 10.84.1.4     10.84.1.5, 10.84.1.6
Database Name  EBILPD1                  ORCLFS                   EBILDR1
Instance Type  RAC                      RAC                      RAC

 

1. Create a pfile from the spfile in the primary database.
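A sketch of the command (the /tmp path and file name are only an example location):

SQL> CREATE PFILE='/tmp/initEBILPD1.ora' FROM SPFILE;

The resulting pfile will look like the following: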

EBILPD12.__data_transfer_cache_size=0
EBILPD11.__data_transfer_cache_size=0
EBILPD12.__db_cache_size=1442840576
EBILPD11.__db_cache_size=2013265920
EBILPD12.__java_pool_size=234881024
EBILPD11.__java_pool_size=234881024
EBILPD12.__large_pool_size=5469372416
EBILPD11.__large_pool_size=5469372416
EBILPD11.__oracle_base=’/u01/app/oracle’#ORACLE_BASE set from environment
EBILPD12.__oracle_base=’/u01/app/oracle’#ORACLE_BASE set from environment
EBILPD12.__pga_aggregate_target=2583691264
EBILPD11.__pga_aggregate_target=2583691264
EBILPD12.__sga_target=12884901888
EBILPD11.__sga_target=12884901888
EBILPD12.__shared_io_pool_size=167772160
EBILPD11.__shared_io_pool_size=167772160
EBILPD12.__shared_pool_size=4429185024
EBILPD11.__shared_pool_size=3858759680
EBILPD12.__streams_pool_size=0
EBILPD11.__streams_pool_size=0
*.audit_file_dest=’/u01/app/oracle/admin/EBILPD1/adump’
*.audit_trail=’db’
*.cluster_database=true
*.compatible=’12.1.0.2.0′
*.control_files=’+REDO_BILLP/EBILPD1/CONTROLFILE/current.256.928072537′,’+FRA/EBILPD1/CONTROLFILE/current.265.928072537′
*.db_block_size=8192
*.db_create_file_dest=’+DATA_BILLP’
*.db_create_online_log_dest_1=’+REDO_BILLP’
*.db_create_online_log_dest_2=’+FRA’
*.db_domain=’world.internal’
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name=’EBILPD1′
*.db_recovery_file_dest=’+FRA’
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest=’/u01/app/oracle’
*.dispatchers='(PROTOCOL=TCP) (SERVICE=EBILPD1XDB)’
*.dml_locks=20272
EBILPD11.instance_number=1
EBILPD12.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1=’LOCATION=+ARCH_BILLP’
*.log_archive_format=’log_%d_%t_%s_%r.arc’
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.parallel_max_servers=2590
*.parallel_min_servers=576
*.pga_aggregate_limit=9437184000
*.pga_aggregate_target=2583691264
*.processes=3000
*.remote_login_passwordfile=’exclusive’
*.sga_max_size=12884901888
*.sga_target=12884901888
*.shared_pool_reserved_size=50331648
*.standby_file_management=’AUTO’
EBILPD12.thread=2
EBILPD11.thread=1
*.undo_retention=3600
EBILPD12.undo_tablespace=’UNDOTBS2′
EBILPD11.undo_tablespace=’UNDOTBS1′
*.use_large_pages=’ONLY’

2. The pfile needs to be changed and copied to the Far Sync node. The following parameters are changed to make the Far Sync pfile ready:

cat initORCLFS1.ora

*.audit_file_dest=’/u01/app/oracle/admin/ORCLFS/adump’
*.audit_trail=’db’
*.cluster_database=true
*.compatible=’12.1.0.2.0′
*.control_files=’+DATA_REP/ORCLFS/control.ctl’
*.db_block_size=8192
*.db_create_file_dest='+DATA_REP'
*.db_create_online_log_dest_1='+REDO_REP'
*.db_create_online_log_dest_2=’+FRA’
*.db_domain=’world.internal’
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name=’EBILPD1′
*.db_recovery_file_dest=’+FRA’
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest=’/u01/app/oracle’
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLFSXDB)’
*.dml_locks=20272
ORCLFS1.instance_number=1
ORCLFS2.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=+ARCH_REP'
*.log_archive_format=’log_%d_%t_%s_%r.arc’
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.parallel_max_servers=2590
*.parallel_min_servers=576
*.pga_aggregate_limit=9437184000
*.pga_aggregate_target=2583691264
*.processes=3000
*.remote_login_passwordfile=’exclusive’
*.sga_max_size=12884901888
*.sga_target=12884901888
*.shared_pool_reserved_size=50331648
*.standby_file_management=’AUTO’
ORCLFS2.thread=2
ORCLFS1.thread=1
*.undo_retention=3600
ORCLFS2.undo_tablespace=’UNDOTBS2′
ORCLFS1.undo_tablespace=’UNDOTBS1′
*.use_large_pages=’ONLY’
*.db_unique_name='ORCLFS'

3. The pfile needs to be changed and copied to the standby node. The following parameters are changed to make the standby pfile ready:

cat initEBILDR11.ora

*.audit_file_dest=’/u01/app/oracle/admin/EBILDR1/adump’
*.audit_trail=’db’
*.cluster_database=true
*.compatible=’12.1.0.2.0′
*.control_files=’+FRA/EBILDR1/control.ctl’
*.db_block_size=8192
*.db_create_file_dest='+DATA_CRM'
*.db_create_online_log_dest_1='+REDO_CRM'
*.db_create_online_log_dest_2=’+FRA’
*.db_domain=’world.internal’
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name=’EBILPD1′
*.db_recovery_file_dest=’+FRA’
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest=’/u01/app/oracle’
*.dispatchers='(PROTOCOL=TCP) (SERVICE=EBILDR1XDB)’
EBILDR11.instance_number=1
EBILDR12.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=+ARCH_CRM'
*.log_archive_format=’log_%d_%t_%s_%r.arc’
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.remote_login_passwordfile=’exclusive’
*.sga_max_size=16884901888
*.sga_target=16884901888
*.standby_file_management=’AUTO’
EBILDR12.thread=2
EBILDR11.thread=1
*.undo_retention=3600
EBILDR12.undo_tablespace=’UNDOTBS2′
EBILDR11.undo_tablespace=’UNDOTBS1′
#*.use_large_pages=’ONLY’
*.db_unique_name='EBILDR1'

4. The TNS entries need to be added/changed and copied to all nodes in $ORACLE_HOME/network/admin/tnsnames.ora:

 

EBILDR1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = DR-scan.world.internal)(PORT = 1591))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = EBILDR1.world.internal)
)
)

EBILPD1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = PROD-scan.world.internal)(PORT = 1591))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = EBILPD1.world.internal)
)
)

ORCLFS =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST =FS-scan.world.internal)(PORT = 1591))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCLFS.world.internal)
)
)

 

5. Copy the password file from the primary to the Far Sync and standby ASM disk groups.

In Primary

su - grid

ASMCMD> pwget --dbuniquename EBILPD1
+DATA_BILLP/EBILPD1/PASSWORD/pwdebilpd1.256.928072397

ASMCMD> pwcopy +DATA_BILLP/EBILPD1/PASSWORD/pwdebilpd1.256.928072397 /tmp/
copying +DATA/PrimaryDB/PASSWORD/pwdPrimaryDB.276.913723271 -> /tmp/pwdPrimaryDB.276.913723271

In FarSync host:-

su - grid

scp -pr oracle@10.84.44.6:/tmp/pwdebilpd1.256.928072397 /tmp/

ASMCMD>pwcopy /tmp/pwdebilpd1.256.928072397 +DATA_REP/pwdORCLFS.ora

In standby host:-

su - grid

scp -pr oracle@10.84.44.6:/tmp/pwdebilpd1.256.928072397 /tmp/

ASMCMD>pwcopy /tmp/pwdebilpd1.256.928072397 +DATA_CRM/pwdEBILDR1.ora

Create spfile from pfile in both FarSync and standby

In FarSync

SQL> create spfile='+DATA_REP/ORCLFS/spORCLFS.ora' from pfile;

In Standby

SQL> create spfile='+DATA_CRM/EBILDR1/spfileEBILDR1.ora' from pfile;

6. Add database and instance resources with SRVCTL for both the standby and the Far Sync:

In FarSync after login using oracle user

srvctl add database -d ORCLFS -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount -pwfile +DATA_REP/pwdORCLFS.ora -spfile '+DATA_REP/ORCLFS/spORCLFS.ora'

srvctl add instance -d ORCLFS -i ORCLFS1 -n prfsdbf01
srvctl add instance -d ORCLFS -i ORCLFS2 -n prfsdbf02

In standby after login using oracle user

srvctl add database -d EBILDR1 -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount -pwfile +DATA_CRM/pwdEBILDR1.ora -spfile '+DATA_CRM/EBILDR1/spfileEBILDR1.ora'

srvctl add instance -d EBILDR1 -i EBILDR11 -n prdrdbf01
srvctl add instance -d EBILDR1 -i EBILDR12 -n prdrdbf02

7. Take a backup of the control file for Far Sync and copy it to the Far Sync node:

SQL>ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/orclfs.ctl';

scp -pr *.ctl  oracle@10.84.44.6:/tmp/

The following steps need to be done on the Far Sync node to restore the Far Sync control file:

rman target /

restore controlfile to '+DATA_REP/ORCLFS/control.ctl' from '/tmp/orclfs.ctl';

8. Add standby redo log files in the Far Sync instance and check its status:

-- add standby redo logs --

set serveroutput on;
-- Generate one ADD STANDBY LOGFILE statement more than the number of online log groups per thread
declare
  log_num  number;
  log_size number;
begin
  for j in (select inst_id i from gv$instance) loop
    select count(*) into log_num from v$log where thread# = j.i;
    select max(bytes) into log_size from gv$log where inst_id = j.i;
    for i in 1..(log_num + 1) loop
      dbms_output.put_line('ALTER DATABASE ADD STANDBY LOGFILE THREAD '||j.i||' SIZE '||log_size||';');
    end loop;
  end loop;
end;
/

SQL> select database_role from v$database;

DATABASE_ROLE
—————-
FAR SYNC

SQL> select DB_UNIQUE_NAME,DEST_ROLE from V$DATAGUARD_CONFIG;

no rows selected

SQL>alter system set log_archive_config='DG_CONFIG=(EBILPD1,ORCLFS,EBILDR1)';
SQL>alter system set log_archive_dest_2='service=EBILDR1 ASYNC valid_for=(standby_logfiles,standby_role) db_unique_name=EBILDR1';

SQL> ALTER DATABASE SET STANDBY TO MAXIMIZE AVAILABILITY;
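You can verify the resulting protection settings with the standard query:

SQL> SELECT protection_mode, protection_level FROM v$database;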

9. Enable forced logging on the primary.

In order to implement the standby database, enable 'Forced Logging'. This option ensures that even if a 'nologging' operation is performed, force logging takes precedence and all operations are logged into the redo logs.

SQL> alter database force logging;
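And verify it:

SQL> select force_logging from v$database;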

 

10. In the primary, add the following Data Guard related parameters:

SQL>alter system set log_archive_dest_2='service=ORCLFS SYNC AFFIRM alternate=log_archive_dest_3 valid_for=(online_logfiles,primary_role) db_unique_name=ORCLFS';
SQL>alter system set log_archive_dest_3='service=EBILDR1 SYNC max_failure=1 alternate=log_archive_dest_2 valid_for=(online_logfiles,primary_role) db_unique_name=EBILDR1';
SQL>alter system set log_archive_dest_state_3=alternate;
SQL>alter system set log_archive_config='DG_CONFIG=(EBILPD1,ORCLFS,EBILDR1)';
SQL>alter system set fal_server='EBILDR1';
SQL>alter system set standby_file_management='AUTO';

SQL> ALTER DATABASE SET STANDBY TO MAXIMIZE AVAILABILITY;

11. Take an RMAN backup on the primary, to be restored on the standby:

export ORACLE_SID=EBILPD1

run
{
allocate channel ch1 type disk format '/oraclecd/rman/Primary_bkp_for_stndby_%U';
allocate channel ch2 type disk format '/oraclecd/rman/Primary_bkp_for_stndby_%U';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
backup current controlfile for standby;
}

 

12. On the standby, copy the backup and restore it:

export ORACLE_SID=EBILDR1

rman target sys/***@EBILPD1 auxiliary /

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

After that please open the standby database for active data guard and start managed recovery.

RMAN>alter database open;

RMAN>alter database recover managed standby database disconnect;

13. Now check the alert logs and verify synchronization.

For Oracle RDBMS version 12.1, set the parameter "_redo_transport_stall_time"=60 in all instances within the configuration for the best return to synchronization after a node outage with an ALTERNATE Far Sync configuration.
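A sketch of the command (this is a hidden parameter, so it must be double-quoted; if it is not dynamically modifiable in your version, use SCOPE=SPFILE and restart the instances):

SQL> alter system set "_redo_transport_stall_time"=60 scope=both sid='*';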

DGMGRL configuration

Configure and enable the Data Guard broker using the commands below (stop the broker, point the configuration files to ASM, then start it):

alter system set dg_broker_start=false sid='*';

-- Place the DG broker configuration files in ASM in the case of RAC
alter system set dg_broker_config_file1='+FRA/dr1.dat' sid='*' scope=both;
alter system set dg_broker_config_file2='+FRA/dr2.dat' sid='*' scope=both;

alter system set dg_broker_start=true sid='*';

Reset the transport parameters that were set manually earlier (i.e. without DGMGRL, as in the steps above), so that the broker can manage them:

alter system set log_archive_dest_2='';
alter system set log_archive_dest_3='';
alter system set log_archive_config='';
alter system set fal_server='';

Now log in to the primary and create the DGMGRL configuration:

export ORACLE_SID=EBILPD1

dgmgrl /
create configuration 'DGconfig' as primary database is EBILPD1 connect identifier is EBILPD1;
ADD FAR_SYNC ORCLFS as connect identifier is ORCLFS;
add database EBILDR1 as connect identifier is EBILDR1;
edit FAR_SYNC ORCLFS set property RedoRoutes='(EBILPD1 : EBILDR1 ASYNC)';
EDIT DATABASE EBILDR1 SET PROPERTY LogXptMode = 'SYNC';
EDIT DATABASE EBILPD1 SET PROPERTY MaxFailure = 0;
EDIT FAR_SYNC ORCLFS SET PROPERTY MaxFailure = 1;
EDIT DATABASE EBILDR1 SET PROPERTY MaxFailure = 0;
EDIT DATABASE EBILDR1 SET PROPERTY RedoRoutes = '(LOCAL : EBILPD1 ASYNC)';

EDIT DATABASE EBILPD1 SET PROPERTY RedoRoutes = '(LOCAL : ORCLFS SYNC ALT=(EBILDR1 ASYNC FALLBACK))';

DISABLE CONFIGURATION;
EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

ENABLE CONFIGURATION;

Check status

[oracle@EBILPD1 olg1prbscsdbf01 ~]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> show configuration;

Configuration – DGconfig

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

orclfs  – Far sync instance

ebildr1 – Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 57 seconds ago)

Now all the Data Guard related parameters are set up automatically by the broker.

In Primary,

log_archive_dest_2 string service=”orclfs”, SYNC AFFIRM delay=0 optional compression=disable max_failure=1 max_connections=1 reopen=300 db_unique_name=”orclfs” net_timeout=30,alternate=LOG_ARCHIVE_DEST_3 v
alid_for=(online_logfile,all_roles)

log_archive_dest_3 string service=”ebildr1″, ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_c
onnections=1 reopen=300 db_unique_name=”ebildr1″ net_timeout=30, alternate=LOG_ARCHIVE_DEST_2 valid_for=(online_logfile,all_roles)

log_archive_dest_state_3 string ALTERNATE

  In FarSync

log_archive_dest_2 string service=”ebildr1″, ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name=”ebildr1″ net_timeout=30, valid_for=(standby_logfile,all_roles)

SQL> show parameter fal;

NAME TYPE VALUE
———————————— ———– ——————————
fal_client string
fal_server string ebilpd1, ebildr1

In standby

SQL> show parameter fal;

NAME TYPE VALUE
———————————— ———– ——————————
fal_client string
fal_server string ebilpd1, orclfs

 

Reference

Oracle MAA white paper: farsync-2267608.pdf

 

 

 

 

 

 

 

 

Script to check applied archive log sync between two RAC databases

1. First create the shell script, with all environment variables adjusted to your environment.

vi dataguard_dr_sync_check.sh

#!/bin/bash
cd /u01/app/oracle/shells

for k in ORCL
do
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_SID=$k
$ORACLE_HOME/bin/sqlplus -S monitor@"$k"/xxx << EOF >> dataguard_dr_sync_check_orcl_`date +\%d\%m\%Y\%H`.log
@dataguard_dr_sync_check_orcl.sql
exit
EOF
done

2. For a two-node RAC, the dataguard_dr_sync_check_orcl.sql script is shown below. For four nodes, just add two more SELECTs for the extra threads and UNION them.

SET HEADING OFF
 SELECT 'SYNC CHECK OF '||NAME||' & '||SUBSTR(NAME,0,LENGTH(NAME)-3)||'EXA1' FROM V$DATABASE;
 SET HEADING ON
 PROMPT ===================================================
 SELECT THREAD, LOG_ARCHIVED,LOG_APPLIED, LOG_ARCHIVED-LOG_APPLIED LOG_GAP
 FROM
 ( SELECT THREAD# THREAD FROM V$ARCHIVED_LOG WHERE THREAD#=1),
 ( SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=1),
 ( SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1),
 ( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID=3 AND APPLIED='YES' and THREAD#=1 )
 UNION
 SELECT THREAD, LOG_ARCHIVED,LOG_APPLIED, LOG_ARCHIVED-LOG_APPLIED LOG_GAP
 from
 ( SELECT THREAD# THREAD FROM V$ARCHIVED_LOG WHERE THREAD#=2 ),
 ( SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=2 ),
 ( SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=2 ),
 ( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID=3 AND APPLIED='YES' and THREAD#=2);

ORACLE DATA GUARD FOR RAC 12C USING SEPARATE REPLICATION NETWORK



Revision history

Rev. Resp Date Note.
PA1   07.10.2015 Initial Version

1    Environment

1.1   Hardware

In below table you can find information about hardware on which Oracle components will be installed:
  • Hardware details

Parameter Value Value Value Value
Hostname Kolkata1 Kolkata2 Delhi1 Delhi2
Server type Physical Physical Physical Physical
Role primary primary standby standby
Physical ip 10.51.11.7 10.51.11.8 10.51.11.36 10.51.11.37
Dataguard physical ip 10.51.20.69 10.51.20.70 10.75.20.101 10.75.20.102
Instance name ORCLPD11 ORCLPD12 ORCLDR11 ORCLDR12
 

1.2                      Software

In the table below you can find information about the software installed:
  • Software details
Parameter Value Value Value Value
Hostname Kolkata1 Kolkata2 Delhi1 Delhi2
Operating system RHEL 7.2 RHEL 7.2 RHEL 7.2 RHEL 7.2
Oracle 12.1.0.2.0 12.1.0.2.0 12.1.0.2.0 12.1.0.2.0

1.3        Networking

(This step is only required if you have a separate replication network for DR traffic. It is not necessary if there is no separate DR replication network.)

1.3.1          Primary Site

Network in the primary site for DR (primary database)
su - root

. oraenv

+ASM1

oifcfg iflist -p -n

10.51.20.64 is the subnet IP which we got from above command.

10.51.20.71 and 10.51.20.72 are VIP address.

srvctl add network -k 2 -S 10.51.20.64/255.255.255.240/bond3.161 -w static -v

srvctl add vip -n Kolkata1 -A 10.51.20.71/255.255.255.240/bond3.161 -k 2
srvctl add vip -n Kolkata2 -A 10.51.20.72/255.255.255.240/bond3.161 -k 2

crsctl status res -t

srvctl start vip -n Kolkata1
srvctl start vip -n Kolkata2

 

 

1.3.2       Disaster recovery Site

Network in Secondary for DR(standby database)
su - root

. oraenv

+ASM1

oifcfg iflist -p -n

10.75.20.96 is the subnet IP which we got from the above command.

10.51.20.103 and 10.51.20.104 are VIP address.

srvctl add network -k 2 -S 10.75.20.96/255.255.255.240/bond3.161 -w static -v

srvctl add vip -n Delhi1 -A 10.75.20.103/255.255.255.240/bond3.161 -k 2
srvctl add vip -n Delhi2 -A 10.75.20.104/255.255.255.240/bond3.161 -k 2

crsctl status res -t

srvctl start vip -n Delhi1
srvctl start vip -n Delhi2

 

1.4    LISTENER_DG both for primary and standby
Please use netca to create listener LISTENER_DG dedicated to handle DR traffic

1.4.1      Listener Configuration Kolkata1

Oracle Network Configuration Kolkata1
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCLPD1.tdeprdcl.internal)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ORCLPD11)
    )
  )
 [grid@Kolkata1 admin]$ lsnrctl start LISTENER_DG
 LSNRCTL for Linux: Version 12.1.0.2.0 – Production on 28-MAR-2017 09:13:21
 Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 Starting /u01/app/product/12.1.0/grid/bin/tnslsnr: please wait…
 TNSLSNR for Linux: Version 12.1.0.2.0 – Production
System parameter file is /u01/app/product/12.1.0/grid/network/admin/listener.ora
Log messages written to /u01/app/grid/diag/tnslsnr/Kolkata1/listener_dg/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
 connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
————————
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 – Production
Start Date                28-MAR-2017 09:13:21
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/Kolkata1/listener_dg/alert/log.xml
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
Services Summary…
Service “ORCLPD1.tdeprdcl.internal” has 1 instance(s).
  Instance “ORCLPD11”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

1.4.2                   Listener Configuration Kolkata2

Oracle Network Configuration Kolkata2
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCLPD1.tdeprdcl.internal)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ORCLPD12)
    )
  )
 
[grid@Kolkata2 ~]$ lsnrctl start LISTENER_DG
 
LSNRCTL for Linux: Version 12.1.0.2.0 – Production on 28-MAR-2017 09:15:24
 
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 
Starting /u01/app/product/12.1.0/grid/bin/tnslsnr: please wait…
 
TNSLSNR for Linux: Version 12.1.0.2.0 – Production
System parameter file is /u01/app/product/12.1.0/grid/network/admin/listener.ora
Log messages written to /u01/app/grid/diag/tnslsnr/Kolkata2/listener_dg/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
————————
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 – Production
Start Date                28-MAR-2017 09:15:24
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/Kolkata2/listener_dg/alert/log.xml
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
Services Summary…
Service “ORCLPD1.tdeprdcl.internal” has 1 instance(s).
  Instance “ORCLPD12”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

1.4.3    Listener Configuration Delhi1

Oracle Network Configuration Delhi1
 
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCLDR1.tdeprdcl.internal)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ORCLDR11)
    )
  )
 
[grid@Delhi1 admin]$ lsnrctl start LISTENER_DG
 
LSNRCTL for Linux: Version 12.1.0.2.0 – Production on 28-MAR-2017 09:20:34
 
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 
Starting /u01/app/product/12.1.0/grid/bin/tnslsnr: please wait…
 
TNSLSNR for Linux: Version 12.1.0.2.0 – Production
System parameter file is /u01/app/product/12.1.0/grid/network/admin/listener.ora
Log messages written to /u01/app/grid/diag/tnslsnr/Delhi1/listener_dg/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
————————
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 – Production
Start Date                28-MAR-2017 09:20:34
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/Delhi1/listener_dg/alert/log.xml
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
Services Summary…
Service “ORCLDR1.tdeprdcl.internal” has 1 instance(s).
  Instance “ORCLDR11”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

1.4.4     Listener Configuration Delhi2

Oracle Network Configuration Delhi1
SID_LIST_LISTENER_DG =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCLDR1.tdeprdcl.internal)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db_1)
      (SID_NAME = ORCLDR12)
    )
  )
 
[grid@Delhi2 ~]$ lsnrctl start LISTENER_DG
 
LSNRCTL for Linux: Version 12.1.0.2.0 – Production on 28-MAR-2017 09:23:40
 
Copyright (c) 1991, 2014, Oracle.  All rights reserved.
 
Starting /u01/app/product/12.1.0/grid/bin/tnslsnr: please wait…
 
TNSLSNR for Linux: Version 12.1.0.2.0 – Production
System parameter file is /u01/app/product/12.1.0/grid/network/admin/listener.ora
Log messages written to /u01/app/grid/diag/tnslsnr/Delhi2/listener_dg/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
 
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_DG)))
STATUS of the LISTENER
————————
Alias                     LISTENER_DG
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 – Production
Start Date                28-MAR-2017 09:23:40
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/product/12.1.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/Delhi2/listener_dg/alert/log.xml
Listening Endpoints Summary…
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_DG)))
Services Summary…
Service “ORCLDR1.tdeprdcl.internal” has 1 instance(s).
  Instance “ORCLDR12”, status UNKNOWN, has 1 handler(s) for this service…
The command completed successfully

1.5    TNS Configuration

1.5.1    TNS Configuration for kolkata1 and kolkata2
Configure /u01/app/oracle/product/12.1.0/db_1/network/admin/tnsnames.ora as below
Oracle TNS Configuration For Kolkata1 and Kolkata2
 

ORCLDR1 =
          (description=
           (load_balance=on)
            (address=(protocol=tcp)(host=10.75.20.103)(port=1592))
            (address=(protocol=tcp)(host=10.75.20.104)(port=1592))
           (connect_data=
     (service_name=ORCLDR1.tdeprdcl.internal)))
 
ORCLPD1 =
          (description=
           (load_balance=on)
            (address=(protocol=tcp)(host=10.51.20.71)(port=1592))
            (address=(protocol=tcp)(host=10.51.20.72)(port=1592))
           (connect_data=
     (service_name=ORCLPD1.tdeprdcl.internal)))

  

1.5.2   TNS Configuration for delhi1 and delhi2

Oracle TNS Configuration For For delhi1 and delhi2
ORCLPD1 =
          (description=
           (load_balance=on)
            (address=(protocol=tcp)(host=10.51.20.71)(port=1592))
            (address=(protocol=tcp)(host=10.51.20.72)(port=1592))
           (connect_data=
     (service_name=ORCLPD1.tdeprdcl.internal)))
 
ORCLDR1 =
          (description=
           (load_balance=on)
            (address=(protocol=tcp)(host=10.75.20.103)(port=1592))
            (address=(protocol=tcp)(host=10.75.20.104)(port=1592))
           (connect_data=
     (service_name=ORCLDR1.tdeprdcl.internal)))

 

 

2 Dataguard Setup Detail

2.1  Architecture

On the primary database (ORCLPD1), Data Guard uses the following processes:
  • Log writer (LGWR) – collects transaction redo information and updates the online redo logs. In synchronous mode, it ships redo information directly to the remote file server (RFS) process on the physical standby database and waits for confirmation before proceeding. In asynchronous mode, it ships the redo information without waiting; LGWR submits the network I/O request to the log-write network server (LNSn) process for that destination.
  • Archiver (ARCn) – creates copies of the online redo logs locally for use in primary database recovery. An archiver process can also ship the redo stream to the RFS process while simultaneously archiving the online log, and it is responsible for proactively detecting and resolving gaps on the physical standby database.
  • Fetch archive log (FAL) – exists only on the physical standby database: FAL provides a client/server mechanism for resolving gaps detected in the range of archived redo logs that are generated at the primary database and received at the standby. This process is started only when needed and shuts down as soon as it is finished, so it is very likely you will not see it running.
Note: You can configure a primary database to ship redo information to a single standby database by using either LGWR or ARCn, but not both.
On the standby database (ORCLDR1), Data Guard uses the following processes:
  • Remote file server (RFS) – receives redo information from the primary. RFS can write the redo into standby redo logs or directly to archived redo logs. Each LNSn and ARCn process from the primary database has its own RFS process.
  • Archiver (ARCn) – archives the standby redo logs.
  • Managed recovery process (MRP) – exists only for a physical standby database; MRP applies archived redo log information to the physical standby. If you start managed recovery with the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE statement, the foreground session performs the recovery; if you add the optional DISCONNECT FROM SESSION clause, the MRP background process starts instead. If you use the Data Guard broker to manage your standby databases, the broker always starts the MRP background process for a physical standby database.
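On the standby, these processes can be observed with the standard view (a sketch):

SQL> SELECT process, status, thread#, sequence#, block# FROM v$managed_standby;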

2.2     Enable forced logging

In order to implement the standby database, enable 'Forced Logging'. This option ensures that even if a 'nologging' operation is performed, force logging takes precedence and all operations are logged into the redo logs.
SQL> alter database force logging;
Database altered.

2.3    Configure a standby redo log in Primary

A standby redo log is added to enable the Data Guard Maximum Availability and Maximum Protection modes. It is important to configure the standby redo logs (SRLs) with the same size as the online redo logs.
The number of standby redo log groups should be one greater than the number of online redo log groups per thread. If you have a two-node RAC and each thread has 4 redo log groups, then the number of standby redo log groups will be 5 per thread. Standby redo logs should not be multiplexed.
alter system set standby_file_management=manual scope=both sid='*';
alter database add standby logfile thread 1 group 9 '+FRA' size 512m;
alter database add standby logfile thread 1 group 10 '+FRA' size 512m;
alter database add standby logfile thread 1 group 11 '+FRA' size 512m;
alter database add standby logfile thread 1 group 12 '+FRA' size 512m;
alter database add standby logfile thread 1 group 13 '+FRA' size 512m;
alter database add standby logfile thread 2 group 14 '+FRA' size 512m;
alter database add standby logfile thread 2 group 15 '+FRA' size 512m;
alter database add standby logfile thread 2 group 16 '+FRA' size 512m;
alter database add standby logfile thread 2 group 17 '+FRA' size 512m;
alter database add standby logfile thread 2 group 18 '+FRA' size 512m;
alter system set standby_file_management=auto scope=both sid='*';
 

2.4 Enable archivelog mode in Primary

Check whether archivelog mode is enabled in the database:
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     3
Next log sequence to archive   5
Current log sequence           5
If archivelog mode is not enabled, execute the commands below on one instance, keeping the other RAC node down:
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size                  2096632 bytes
Variable Size             419430920 bytes
Database Buffers         1174405120 bytes
Redo Buffers               14680064 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL>alter database open;

2.5   Create directories adump in Standby

mkdir -p /u01/app/oracle/admin/ORCLDR1/adump

2.6 Copy password file using ASM command from primary to standby.

In Primary
ASMCMD> pwcopy +DATA_BILLP/ORCLPD1/PASSWORD/pwdORCLpd1.256.926689449 /tmp/pwdORCLpd1
In Standby
pwcopy /tmp/pwdORCLpd1 +DATA_BILLP/pwdORCLdr1
copying +DATA_ORCLP/ORCLPD1/PASSWORD/pwdORCLpd1.256.926689449 -> /tmp/pwdORCLpd1

2.7  Add database and instance in CRS using srvctl in standby

srvctl add database -d ORCLDR1 -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount
srvctl add instance -d ORCLDR1 -i ORCLDR11 -n Delhi1
srvctl add instance -d ORCLDR1 -i ORCLDR12 -n Delhi2
In Standby,to add password file
srvctl add database -d ORCLDR1 -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount -pwfile +DATA_ORCLP/pwdORCLdr1
To modify password file
srvctl modify database -d ORCLDR1 -pwfile +DATA_ORCLP/pwdORCLdr1

2.8 Copy pfile and start instance in each node of standby with pfile

copy pfile from /tmp to $ORACLE_HOME/dbs and rename the parameter file to init$ORACLE_SID.ora in each node and startup nomount mode for each node of DR.
Please change db_unique_name and SID name in pfile from DR side.
[oracle@Delhi1 ~]$ sqlplus / as sysdba
$SQL*Plus: Release 12.1.0.2.0 Production on Mon Apr 3 09:34:01 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile=’$ORACLE_HOME/dbs/init$ORACLE_SID.ora’;
If you face permission denied issue in ASM,please include oracle under asmadmin
 /usr/sbin/usermod -G oinstall,asmdba,dba,oper,asmadmin,asmoper oracle

2.9 Check that you can connect to RMAN with the SYS user on each node individually

2.10 Now we need to take an RMAN backup, either to an ASM diskgroup or to any other mount point.

run
{
sql “alter system switch logfile”;
allocate channel ch1 type disk format ‘+DATA/ORCLPD1/BKP/Primary_bkp_for_stndby_%U’;
allocate channel ch2 type disk format ‘+DATA/ORCLPD1/BKP/Primary_bkp_for_stndby_%U’;
BACKUP AS COMPRESSED BACKUPSET DATABASE;
backup current controlfile for standby;
sql “alter system archive log current”;
}
Non-ASM Backup
run
{
sql "alter system switch logfile";
allocate channel ch1 type disk format '/U01/ORCLPD1/BKP/Primary_bkp_for_stndby_%U';
allocate channel ch2 type disk format '/U01/ORCLPD1/BKP/Primary_bkp_for_stndby_%U';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
backup current controlfile for standby;
sql "alter system archive log current";
}

2.11 Manual backup and duplicate for standby (this is not an active duplicate)

If you use an NFS share as the backup mount point, you can use it directly; there is no need to copy the ASM backup pieces as above.
Log in to the DR database:
export ORACLE_SID=ORCLDR11
sqlplus / as sysdba
startup nomount pfile=$ORACLE_HOME/dbs/initTEST.ora
rman target sys/****@ORCLPD1 auxiliary /
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

2.12  Automated backup and restoration to DR (This is active duplicate)

rman target sys/test#123@ORCLPD11 auxiliary sys/test#123@ORCLDR11
run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel prim type disk;
allocate auxiliary channel aux type disk;
duplicate target database for standby from active database spfile
parameter_value_convert 'ORCLPD1','ORCLDR1'
set db_file_name_convert='ORCLPD1','ORCLDR1'
set log_file_name_convert='ORCLPD1','ORCLDR1'
set log_archive_max_processes='10'
set db_unique_name='ORCLDR1'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(ORCLPD1,ORCLDR1)'
set log_archive_dest_1='location=+ARCH_BILLP valid_for=(all_logfiles,all_roles) db_unique_name=ORCLDR1'
set log_archive_dest_2='service=ORCLPD1 async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=ORCLPD1'
set log_archive_config='dg_config=(ORCLPD1,ORCLDR1)';
sql channel aux "alter database add standby logfile thread 1 group 9 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 1 group 10 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 1 group 11 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 1 group 12 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 1 group 13 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 2 group 14 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 2 group 15 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 2 group 16 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 2 group 17 ''+FRA'' size 512m";
sql channel aux "alter database add standby logfile thread 2 group 18 ''+FRA'' size 512m";
sql channel prim "alter system archive log current";
sql channel aux "alter database recover managed standby database disconnect";
}
Note:- If the ASM diskgroup names on the standby differ from the primary, you also need to add the parameters below to the above script:
SET CONTROL_FILES='+FRA','+DATA_ORCLD'
set db_create_file_dest='+DATA_ORCLD'
SET log_archive_dest_1='LOCATION=+FRA'

2.13  Re-create standby logfiles

sqlplus / as sysdba
alter database drop standby logfile group 9;
alter database drop standby logfile group 10;
alter database drop standby logfile group 11;
alter database drop standby logfile group 12;
alter database drop standby logfile group 13;
alter database drop standby logfile group 14;
alter database drop standby logfile group 15;
alter database drop standby logfile group 16;
alter database drop standby logfile group 17;
alter database drop standby logfile group 18;
alter database add standby logfile thread 1 group 9 '+fra' size 512M;
alter database add standby logfile thread 1 group 10 '+fra' size 512M;
alter database add standby logfile thread 1 group 11 '+fra' size 512M;
alter database add standby logfile thread 1 group 12 '+fra' size 512M;
alter database add standby logfile thread 1 group 13 '+fra' size 512M;
alter database add standby logfile thread 2 group 14 '+fra' size 512M;
alter database add standby logfile thread 2 group 15 '+fra' size 512M;
alter database add standby logfile thread 2 group 16 '+fra' size 512M;
alter database add standby logfile thread 2 group 17 '+fra' size 512M;
alter database add standby logfile thread 2 group 18 '+fra' size 512M;
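You can verify the newly added standby redo logs with a quick sanity check against the standard view:
SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;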
 

2.14 Create the spfile and register it for the RAC instances

[oracle@Delhi1 ~]$ sqlplus / as sysdba
SQL> show parameter spfile;
NAME                                 TYPE        VALUE
———————————— ———– ——————————
spfile                               string
SQL> create spfile='+DATA_BSCS/spfileORCLDR1.ora' from pfile;
File created.
srvctl modify database -d ORCLDR1 -spfile +DATA_ORCLP/spfileORCLDR1.ora
[oracle@Delhi1 ~]$ srvctl start instance -d ORCLDR1 -i ORCLDR11
PRCR-1013 : Failed to start resource ora.orcldr1.db
PRCR-1064 : Failed to start resource ora.orcldr1.db on node Delhi1
CRS-5017: The resource action “ora.orcldr1.db start” encountered the following error:
ORA-00205: error in identifying control file, check alert log for more info
. For details refer to “(:CLSN00107:)” in “/u01/app/grid/diag/crs/Delhi1/crs/trace/crsd_oraagent_oracle.trc”.
CRS-2674: Start of ‘ora.orcldr1.db’ on ‘Delhi1’ failed
Check the restore log file to identify where the control files were actually created.
Then start the instance in NOMOUNT mode and update the control_files parameter:
alter system set control_files='+REDO_ORCLP/ORCLDR1/CONTROLFILE/current.316.941562301','+FRA/ORCLDR1/CONTROLFILE/current.330.941562301' scope=spfile;

2.15 DGMGRL configuration

In Primary:-
alter system set dg_broker_start=false sid='*';
alter system set dg_broker_config_file1='+FRA/dr1ORCLPD1.dat' sid='*';
alter system set dg_broker_config_file2='+FRA/dr2ORCLPD1.dat' sid='*';
alter system set dg_broker_start=true sid='*';
In DR:-
alter system set dg_broker_start=false sid='*';
alter system set dg_broker_config_file1='+FRA/dr1ORCLDR1.dat' sid='*';
alter system set dg_broker_config_file2='+FRA/dr2ORCLDR1.dat' sid='*';
alter system set dg_broker_start=true sid='*';
dgmgrl
DGMGRL> connect /
CREATE CONFIGURATION dg_config_orcl AS PRIMARY DATABASE IS ORCLPD1 connect identifier is ORCLPD1;
ADD DATABASE ORCLDR1 AS CONNECT IDENTIFIER IS ORCLDR1;
SHOW CONFIGURATION;
ENABLE CONFIGURATION;
SHOW CONFIGURATION;
validate database ORCLPD1;
validate database ORCLDR1;
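Once the configuration is enabled, you can also monitor transport and apply status from the broker (standard DGMGRL commands):
SHOW DATABASE ORCLDR1;
SHOW DATABASE VERBOSE ORCLDR1;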

2.16 Change the spfile if required to reflect DR-related changes at the DR site (Delhi). (Not required if you used DGMGRL.)

In Primary:-
alter system set log_archive_config='dg_config=(ORCLPD1,ORCLDR1)';
alter system set standby_file_management=AUTO;
alter system set log_archive_dest_2='service="ORCLDR1" LGWR ASYNC NOAFFIRM delay=0 optional max_failure=0 max_connections=1 reopen=300 db_unique_name="ORCLDR1" net_timeout=30 valid_for=(all_logfiles,primary_role)';
alter system set fal_server='ORCLDR1';
In Standby:-
alter system set log_archive_config='dg_config=(ORCLPD1,ORCLDR1)';
alter system set standby_file_management=AUTO;
alter system set log_archive_dest_2='service="ORCLPD1" LGWR ASYNC NOAFFIRM delay=0 optional max_failure=0 max_connections=1 reopen=300 db_unique_name="ORCLPD1" net_timeout=30 valid_for=(all_logfiles,primary_role)';
alter system set fal_server='ORCLPD1';

2.17   Start the managed recovery process  if not already started.

sqlplus / as sysdba
alter database recover managed standby database using current logfile disconnect;
If you want to set up Active Data Guard instead (Active Data Guard requires a separate license):
1.Mount the database.
2.Open the database.
3.Start log apply using following command:-
alter database recover managed standby database using current logfile disconnect;
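To confirm that the MRP process is running and which sequence it is applying, a quick check on the standby (V$MANAGED_STANDBY is available in 12.1):
SQL> select process, status, thread#, sequence#, block# from v$managed_standby where process like 'MRP%';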

2.18 Check the status of redo log apply on the standby database (execute this query on the primary)

SELECT ARCH.THREAD# “Thread”, ARCH.SEQUENCE# “Last Sequence Received”, APPL.SEQUENCE# “Last Sequence Applied”
FROM
(SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
(SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
WHERE
ARCH.THREAD# = APPL.THREAD#
ORDER BY 1;

2.19   Check the alert log files in both standby and primary.
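If you are not sure where the alert log is located, you can ask the database itself (standard V$DIAG_INFO view):
SQL> select value from v$diag_info where name = 'Diag Trace';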

2.20 The process to switch over and fail over Data Guard

During an actual DR failover, when the production primary is permanently unavailable, the DR standby becomes the primary and a new standby database has to be created on DR side node 2.
  • Switchover:
Before performing a switchover, verify the state of Data Guard on both instances with the following SQL queries:
SQL> ALTER SESSION SET nls_date_format='DD-MON-YYYY HH24:MI:SS';
SQL> SELECT sequence#, first_time, next_time, applied
FROM v$archived_log
ORDER BY sequence#;
59 08-OCT-14 08-OCT-14 YES
60 08-OCT-14 08-OCT-14 YES
61 08-OCT-14 08-OCT-14 YES
SQL> select dest_name,status,error from v$archive_dest where dest_name='LOG_ARCHIVE_DEST_2';
DEST_NAME
——————————————————————————–
STATUS ERROR
——— —————————————————————–
LOG_ARCHIVE_DEST_2
VALID
SQL> select message from v$dataguard_status;
Note: This query returns messages describing the current Data Guard status.
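Optionally, you can also check the switchover readiness reported by each database (standard V$DATABASE columns):
SQL> select database_role, switchover_status from v$database;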
After confirming that Data Guard is running smoothly, we can initiate the switchover by issuing the following commands:
On Primary database:
Step-1>>
Connect to Primary database and convert primary database to standby.
[oracle@pr ~]$ sqlplus / as sysdba
SQL> alter database commit to switchover to standby;
Database altered.
Step-2>>
Shutdown primary database:
SQL> shutdown immediate;
Step-3>>
Startup nomount old primary database as new standby database:
SQL> startup nomount
ORACLE instance started.
Total System Global Area 972898304 bytes
Fixed Size 2219272 bytes
Variable Size 805307128 bytes
Database Buffers 159383552 bytes
Redo Buffers 5988352 bytes
SQL> alter database mount standby database;
Database altered.
SQL> alter database recover managed standby database disconnect from session;
Database altered.
Verify database role on old primary database:
SQL> select name,open_mode,database_role from v$database;
NAME OPEN_MODE DATABASE_ROLE
——— ——————– —————-
RTS MOUNTED PHYSICAL STANDBY
On Standby database:
Step-4>>
On original standby database, Convert old standby database to primary database:
[oracle@dr ~]$ sqlplus / as sysdba
Step-5>>
Convert old standby database as primary and shutdown database:
SQL> alter database commit to switchover to primary;
Database altered.
SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Step-6>>
Startup old standby database as primary database:
SQL> startup
ORACLE instance started.
Total System Global Area 972898304 bytes
Fixed Size 2219272 bytes
Variable Size 717226744 bytes
Database Buffers 247463936 bytes
Redo Buffers 5988352 bytes
Database mounted.
Database opened.
Verify database role on old standby database:
SQL> select name,open_mode,database_role from v$database;
NAME OPEN_MODE DATABASE_ROLE
——— ——————– —————-
RTS READ WRITE PRIMARY
The switchover has completed successfully: the old primary database has become the standby and the old standby database has become the primary.
Note
To test the above switchover activity, generate a few archive logs on the primary database and verify that they are transferred to and applied on the standby database, for example:
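On the new primary:
SQL> alter system switch logfile;
Then on the standby, confirm the new sequence has been received and applied (a quick check):
SQL> select max(sequence#) from v$archived_log where applied = 'YES';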
Switchback:
To switch back, follow the same six steps above in the opposite direction.
  • FAILOVER:

In short, a failover is performed when the production (primary) database is lost; it activates the standby database as the new primary. It is not reversible: after a failover, the old primary must be re-created as a standby database. What to do in case of a failover:
(Important note: Kolkata is the primary server and Delhi is the standby server)
1. [Kolkata] If the primary database is still accessible and running, flush any remaining redo to the standby database:
SQL> alter system flush redo to standby_db_name;
SQL> alter system archive log current;
If you do not receive an error, you can continue with step 5; in that case the system can be opened with zero data loss. If you receive an error, continue with step 2 to open the system with minimal data loss.
2. [Delhi] Run the following query to find the last applied archive log sequence number for each thread:
SQL> SELECT UNIQUE THREAD# AS THREAD, MAX(SEQUENCE#) OVER (PARTITION BY thread#) AS LAST from V$ARCHIVED_LOG;
3. [Delhi] If you can access archive logs that were not shipped to the standby, copy them to the standby and register them with the standby database. This must be done for every thread:
SQL> alter database register physical logfile '/oracle/ora11g/dbs/arch/TALIP_991834413_1_102.arc';
4. [Delhi] Check the standby database for a redo gap. If there is a gap, copy and register the missing archive log files, and repeat until the gap query returns no rows:
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
SQL> alter database register physical logfile '/oracle/ora11g/dbs/arch/TALIP_991834413_1_101.arc';
5. [Delhi] Stop the redo apply process on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
6. [Delhi] Finish applying the archive logs copied from the primary:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
If you get an error, it means some redo logs have not been applied; revisit steps 2 to 4. Alternatively, you can continue with the following command, in which case you open the database in step 8:
SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
If you get no error, continue with step 7.
7. [Delhi] Switch the standby database to the primary role:
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
8. [Delhi] Open the database:
SQL> ALTER DATABASE OPEN;
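After the failover, verify the new role on the Delhi database in the same way as after the switchover:
SQL> select name, open_mode, database_role from v$database;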
3 References

Case Study on Network Bandwidth Calculation in 12c dataguard environment and testing with redo generation rate

Data loading:-

create table t(c1 char(2000),c2 char(2000));

insert into t select 'A','B' from dual connect by rownum<1000000;

commit;
The loading of 8 GB of data took about 160 seconds.
The redo generation rate of the 2-node RAC database, as calculated by the following query, is:
SYS@EBILPD11> select thread#,sequence#,blocks*block_size/1024/1024 MB,(next_time-first_time)*86400 sec, blocks*block_size/1024/1024/((next_time-first_time)*86400) "MB/s" from v$archived_log where ((next_time-first_time)*86400<>0) and first_time between  to_date('2017/11/15 06:00:00','YYYY/MM/DD HH24:MI:SS') and to_date('2017/11/15 07:00:00','YYYY/MM/DD HH24:MI:SS') and dest_id=2 order by first_time;

THREAD#  SEQUENCE#         MB        SEC       MB/s
------- ---------- ---------- ---------- ----------
      1       2264 56.8867188          2 28.4433594
      1       2265  56.984375          2 28.4921875
      1       2266 72.2739258          3 24.0913086
      2       2252 .004882813          3 .001627604
      1       2516 56.8862305          1 56.8862305
      1       2517 56.8833008          1 56.8833008
      1       2518 66.6621094          1 66.6621094
      2       2253 .008789063          3 .002929688
      1       2519 56.8852539          1 56.8852539
      1       2520 56.8828125          1 56.8828125
 

Average redo generation rate: about 50 MB (megabytes) per second

 

Now Bandwidth calculation:-

Required bandwidth = ((Redo rate bytes per sec. / 0.75) * 8) / 1,000,000 = bandwidth in Mbps

So our calculated requirement is as follows:

((50,000,000 / 0.75) * 8) / 1,000,000 ≈ 533 Mbps
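As a cross-check, the same formula can be applied directly in SQL to the peak per-archived-log redo rate over the last day (a sketch based on the v$archived_log query above; the 0.75 factor and dest_id filter follow that example, and the time window is illustrative):
select round((max(blocks*block_size/((next_time-first_time)*86400)) / 0.75) * 8 / 1000000, 1) as required_mbps
from v$archived_log
where (next_time-first_time)*86400 <> 0
and dest_id=2
and first_time > sysdate - 1;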

We did not find any lag when we ran the query below on the standby, as our WAN bandwidth is 2 Gbps.
SQL> select name,value,time_computed,datum_time from v$dataguard_stats where name='transport lag';

 

NAME             VALUE          TIME_COMPUTED         DATUM_TIME
---------------  -------------  --------------------  --------------------
transport lag    +00 00:00:00   11/15/2017 08:17:47   11/15/2017 08:17:47

 

 

Network transport lag

We diagnosed the network transport lag using the command below, which needs to be enabled during the testing on both the primary and the DR database:
alter system set events '16421 trace name context forever, level 1';
This generates a TT00 trace file under the diagnostic trace (dump) destination.
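When the test is complete, the tracing event can be switched off again using the standard event syntax:
alter system set events '16421 trace name context off';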

vi EBILPD11_tt00_16385.trc

File transfer time (micro seconds)       1663153

Begin Statistics:-

DESTINATION 2 – OCI REQUEST

Total count  – OCI REQUEST             122

Total time   – OCI REQUEST             38498

Average time – OCI REQUEST             315

LOG_ARCHIVE_DEST_2 – NETWORK SEND

Total count  – NETWORK SEND            122

Total time   – NETWORK SEND            19605

Average time – NETWORK SEND            160

Total data buffers queued              121

Total data buffers completed           121

Total bytes written                    59650048

Average network send size (blocks)     962

Average network send buffers           1.00

Average buffer turnaround time         6289

Throughput (MB/s)                         34.20

DESTINATION 2 – NETWORK NO-STALL REAP

Total count  – NETWORK NO-STALL REAP   18

Total time   – NETWORK NO-STALL REAP   192

Average time – NETWORK NO-STALL REAP   10

Total network layer time               58295

Percentage of time in network          3.51

—–

Total count  – DISK READ               121

Total time   – DISK READ               30848

Average time – DISK READ               254

Total count  – BUFFER RELEASE          121

Total time   – BUFFER RELEASE          94

Average time – BUFFER RELEASE          0

Total disk layer time                  30942

    Percentage of time in disk layer       1.86

—–

Total count  – SLEEP                   68

Total time   – SLEEP                   1402529

Average time – SLEEP                   20625

Total DG layer time                    1573916

Percentage of time in DG layer         94.63

 

End statistics:-

LOG_ARCHIVE_DEST_2 – NETWORK SEND

Total count  – NETWORK SEND            59

Total time   – NETWORK SEND            8109

Average time – NETWORK SEND            137

Total data buffers queued              58

Total data buffers completed           58

Total bytes written                    59648000

Average network send size (blocks)     2008

Average network send buffers           1.00

Average buffer turnaround time         80387

Throughput (MB/s)                        101.80

DESTINATION 2 – NETWORK NO-STALL REAP

Total count  – NETWORK NO-STALL REAP   53

Total time   – NETWORK NO-STALL REAP   665

Average time – NETWORK NO-STALL REAP   12

DESTINATION 2 – NETWORK STALL REAP

Total count  – NETWORK STALL REAP      26

Total time   – NETWORK STALL REAP      403534

Average time – NETWORK STALL REAP      15520

Total network layer time               439520

Percentage of time in network          78.66

—–

Total count  – DISK READ               58

Total time   – DISK READ               72206

Average time – DISK READ               1244

Total count  – BUFFER RELEASE          58

Total time   – BUFFER RELEASE          70

Average time – BUFFER RELEASE          1

Total disk layer time                  72276

Percentage of time in disk layer       12.93

—–

Total DG layer time                    46968

Percentage of time in DG layer         8.41

 

The above output was taken from a test run in which no transport lag was seen. In the end-statistics report we spent about 79% of the time in the network layer, about 13% in the disk layer, and about 8% in the Data Guard (DG) layer, mainly due to sleeps. These statistics show we are approaching the congestion point, as the percentage of time spent in the network is increasing while the share of the DG layer is dropping. Remember that the begin statistics, captured before the data load started, showed only about 3% of the time in the network.

 

References

 

How To Calculate The Required Network Bandwidth Transfer Of Redo In Data Guard Environments (Doc ID 736755.1)

Measuring Network Capacity using oratcptest (Doc ID 2064368.1)

http://www.oracle.com/technetwork/database/availability/async-2587521.pdf