DBMS_CRYPTO example

SQL> CONN SYS/SYS AS SYSDBA
Connected.
SQL> set serveroutput on
SQL>
SQL> set linesize 121
SQL>
SQL> DECLARE
  l_credit_card_no VARCHAR2(19) := '1234-5678-9012-3456';
  l_ccn_raw        RAW(128) := utl_raw.cast_to_raw(l_credit_card_no);
  l_key            RAW(128) := utl_raw.cast_to_raw('abcdefgh');
  l_encrypted_raw  RAW(2048);
  l_decrypted_raw  RAW(2048);
BEGIN
  dbms_output.put_line('Original : ' || l_credit_card_no);

  l_encrypted_raw := dbms_crypto.encrypt(l_ccn_raw,
                                         dbms_crypto.des_cbc_pkcs5, l_key);

  -- l_encrypted_raw is already RAW, so it can be passed to RAWTOHEX directly
  dbms_output.put_line('Encrypted : ' || RAWTOHEX(l_encrypted_raw));

  l_decrypted_raw := dbms_crypto.decrypt(src => l_encrypted_raw,
                                         typ => dbms_crypto.des_cbc_pkcs5,
                                         key => l_key);

  dbms_output.put_line('Decrypted : ' || utl_raw.cast_to_varchar2(l_decrypted_raw));
END;
/
Original : 1234-5678-9012-3456
Encrypted : 808C5AD47222E01EC3893AB3745CFCE96A6C5AD4BEECEBA6
Decrypted : 1234-5678-9012-3456

PL/SQL procedure successfully completed.
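The example above uses single DES, which is weak by modern standards. DBMS_CRYPTO also provides AES; the variant below is a sketch of the same flow using AES-256 in CBC mode with PKCS5 padding. The hard-coded 32-byte key is for illustration only — in practice generate keys with DBMS_CRYPTO.RANDOMBYTES and store them securely.

```sql
DECLARE
  l_src  RAW(128) := utl_raw.cast_to_raw('1234-5678-9012-3456');
  -- AES-256 requires a 256-bit (32-byte) key; hard-coded here only for the demo
  l_key  RAW(32)  := utl_raw.cast_to_raw('01234567890123456789012345678901');
  l_mode PLS_INTEGER := dbms_crypto.encrypt_aes256
                      + dbms_crypto.chain_cbc
                      + dbms_crypto.pad_pkcs5;
  l_enc  RAW(2048);
BEGIN
  l_enc := dbms_crypto.encrypt(src => l_src, typ => l_mode, key => l_key);
  dbms_output.put_line('Encrypted : ' || RAWTOHEX(l_enc));
END;
/
```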

FAST START FAILOVER (FSFO) CONFIGURATION USING FAR SYNC NODE AS OBSERVER IN DATAGUARD USING DGMGRL


 

1. For the Data Guard DGMGRL configuration and Far Sync setup, please follow the link below:

http://clouddba.co/step-step-oracle-active-data-guard-far-sync-rac/

 

2. Check the current database status:

DGMGRL> show database  ebilpd1;

 

Database – ebilpd1

 

Role:               PRIMARY

Intended State:     TRANSPORT-ON

Instance(s):

EBILPD11

EBILPD12

 

Database Status:

SUCCESS

 

DGMGRL> show database ebildr1;

Database – ebildr1

Role:               PHYSICAL STANDBY

Intended State:     APPLY-ON

Transport Lag:      0 seconds (computed 0 seconds ago)

Apply Lag:          0 seconds (computed 0 seconds ago)

Average Apply Rate: 5.00 KByte/s

Real Time Query:    OFF

Instance(s):

EBILDR11 (apply instance)

EBILDR12

 

Database Status:

SUCCESS

 

3. Enable SYNC redo transport between primary and standby, and set the fast-start failover targets:

 

EDIT DATABASE EBILDR1 SET PROPERTY RedoRoutes = '(LOCAL : EBILPD1 SYNC)';

EDIT DATABASE EBILPD1 SET PROPERTY RedoRoutes = '(LOCAL : ORCLFS SYNC ALT=(EBILDR1 SYNC FALLBACK))';

edit database EBILDR1 set property FastStartFailoverTarget='EBILPD1';

edit database EBILPD1 set property FastStartFailoverTarget='EBILDR1';

 

4. Enable flashback on both primary and standby:

SQL>alter database flashback on;
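Flashback status can then be verified on each database:

```sql
SELECT flashback_on FROM v$database;
```

It should return YES on both primary and standby before fast-start failover is enabled.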

 

5. Now enable fast-start failover from DGMGRL after logging in to the primary:

 

export ORACLE_SID=EBILPD1

dgmgrl /

DGMGRL>  enable fast_start failover ;

Enabled.

 

6. But we will get the error below because the observer is not set up yet:

DGMGRL> show configuration

 

Configuration – DGconfig

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

Error: ORA-16820: fast-start failover observer is no longer observing this database

orclfs  – Far sync instance

ebildr1 – (*) Physical standby database

Error: ORA-16820: fast-start failover observer is no longer observing this database

Fast-Start Failover: ENABLED

 

Configuration Status:

ERROR   (status updated 38 seconds ago)

 

7. Now connect to the Far Sync instance from DGMGRL (invoked on the primary host) and start the observer:

 

[oracle@EBILPD1 XXX trace]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

 

Copyright (c) 2000, 2013, Oracle. All rights reserved.

 

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> connect sys@orclfs

Password:

Connected as SYSDBA.

DGMGRL> START OBSERVER;

Observer started

 

 

8. From another terminal, log in to the primary node and check the status of the DGMGRL configuration:

[oracle@EBILPD1 xxx ~]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

 

Copyright (c) 2000, 2013, Oracle. All rights reserved.

 

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> show configuration;

 

Configuration – DGconfig

 

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

orclfs  – Far sync instance

ebildr1 – (*) Physical standby database

 

Fast-Start Failover: ENABLED

 

Configuration Status:

SUCCESS   (status updated 53 seconds ago)

 

Step by Step Oracle Active Data Guard Far Sync RAC


Active Data Guard Far Sync, a new capability with Oracle Database 12c, eliminates compromise by extending zero data loss protection to a replica database located at any distance from the primary database. Active Data Guard Far Sync accomplishes this with minimal expense or complexity compared to other extended distance data protection and availability solutions.

We need to ensure zero data loss for various mission-critical databases with minimal impact on production. Oracle Data Guard provides zero data loss through the SYNC transport attribute.

How SYNC works in Data Guard:

With SYNC, committed data is written to disk at the standby site as part of the commit. On the primary, the redo generated by a commit is first written to the redo buffer and then transmitted to a standby redo log, so the committed data is protected at the standby. The commit on the primary does not complete until the redo has been written to a standby redo log at the DR site.

Performance impact of SYNC

As the distance between primary and standby increases, however, the total round-trip time required to acknowledge the remote log file write can reach a point where it has too great of an impact on database performance to make it practical to support zero data loss protection.

Oracle Active Data Guard Far Sync Feature

 

  • Is a very lightweight, special-purpose Data Guard instance with no physical data files of its own; it acts as a middle layer between a primary database and standby databases configured far away.

Use instance caging in order to achieve the smallest possible SGA. Reducing the CPU_COUNT during testing had no effect on the performance of the Far Sync instance. MAA tests determined that a 300 MB SGA with CPU_COUNT=1 on Linux was sufficient for Far Sync (CPU_COUNT=1, SGA_TARGET=300M).

  • Owns only a server parameter file (spfile), a standby control file and a set of standby redo logs (SRLs). The Far Sync instance stays in mount mode.
  • Its purpose is to receive redo from the primary database synchronously and transport it to the remote destinations (a maximum of 29) asynchronously in real time, providing zero data loss failover capability.
  • Supports the maximum availability and maximum performance protection modes.
  • Should be deployed close to the primary database location, roughly 30-150 miles, to avoid network latency while shipping redo.
  • The best option when standby databases are placed thousands of kilometers away from the primary site.
  • Its sole purpose is to offload and minimize performance overhead on the primary database.
  • Part of the Active Data Guard option, which requires a separate license.
  • Its role can't be converted either to primary or to any other type of standby database.
  • Consumes minimal server resources (CPU, disk, memory, etc.).

 

 

                Primary                 FarSync                 Standby
IP              10.84.1.1, 10.84.1.2    10.84.1.3, 10.84.1.4    10.84.1.5, 10.84.1.6
Database Name   EBILPD1                 ORCLFS                  EBILDR1
Instance Type   RAC                     RAC                     RAC

 

  1. Take a pfile from the spfile on the primary database. The pfile will look like the following:

EBILPD12.__data_transfer_cache_size=0
EBILPD11.__data_transfer_cache_size=0
EBILPD12.__db_cache_size=1442840576
EBILPD11.__db_cache_size=2013265920
EBILPD12.__java_pool_size=234881024
EBILPD11.__java_pool_size=234881024
EBILPD12.__large_pool_size=5469372416
EBILPD11.__large_pool_size=5469372416
EBILPD11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
EBILPD12.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
EBILPD12.__pga_aggregate_target=2583691264
EBILPD11.__pga_aggregate_target=2583691264
EBILPD12.__sga_target=12884901888
EBILPD11.__sga_target=12884901888
EBILPD12.__shared_io_pool_size=167772160
EBILPD11.__shared_io_pool_size=167772160
EBILPD12.__shared_pool_size=4429185024
EBILPD11.__shared_pool_size=3858759680
EBILPD12.__streams_pool_size=0
EBILPD11.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/EBILPD1/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+REDO_BILLP/EBILPD1/CONTROLFILE/current.256.928072537','+FRA/EBILPD1/CONTROLFILE/current.265.928072537'
*.db_block_size=8192
*.db_create_file_dest='+DATA_BILLP'
*.db_create_online_log_dest_1='+REDO_BILLP'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='world.internal'
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name='EBILPD1'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=EBILPD1XDB)'
*.dml_locks=20272
EBILPD11.instance_number=1
EBILPD12.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=+ARCH_BILLP'
*.log_archive_format='log_%d_%t_%s_%r.arc'
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.parallel_max_servers=2590
*.parallel_min_servers=576
*.pga_aggregate_limit=9437184000
*.pga_aggregate_target=2583691264
*.processes=3000
*.remote_login_passwordfile='exclusive'
*.sga_max_size=12884901888
*.sga_target=12884901888
*.shared_pool_reserved_size=50331648
*.standby_file_management='AUTO'
EBILPD12.thread=2
EBILPD11.thread=1
*.undo_retention=3600
EBILPD12.undo_tablespace='UNDOTBS2'
EBILPD11.undo_tablespace='UNDOTBS1'
*.use_large_pages='ONLY'

  2. The pfile needs to be changed and copied to the Far Sync node. The following parameters are changed to make the Far Sync pfile ready:

cat initORCLFS1.ora

*.audit_file_dest='/u01/app/oracle/admin/ORCLFS/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+DATA_REP/ORCLFS/control.ctl'
*.db_block_size=8192
*.db_create_file_dest='+DATA_REP'
*.db_create_online_log_dest_1='+REDO_REP'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='world.internal'
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name='EBILPD1'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLFSXDB)'
*.dml_locks=20272
ORCLFS1.instance_number=1
ORCLFS2.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=+ARCH_REP'
*.log_archive_format='log_%d_%t_%s_%r.arc'
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.parallel_max_servers=2590
*.parallel_min_servers=576
*.pga_aggregate_limit=9437184000
*.pga_aggregate_target=2583691264
*.processes=3000
*.remote_login_passwordfile='exclusive'
*.sga_max_size=12884901888
*.sga_target=12884901888
*.shared_pool_reserved_size=50331648
*.standby_file_management='AUTO'
ORCLFS2.thread=2
ORCLFS1.thread=1
*.undo_retention=3600
ORCLFS2.undo_tablespace='UNDOTBS2'
ORCLFS1.undo_tablespace='UNDOTBS1'
*.use_large_pages='ONLY'
*.db_unique_name='ORCLFS'

  3. The pfile needs to be changed and copied to the standby node. The following parameters are changed to make the standby pfile ready:

cat initEBILDR11.ora

*.audit_file_dest='/u01/app/oracle/admin/EBILDR1/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='12.1.0.2.0'
*.control_files='+FRA/EBILDR1/control.ctl'
*.db_block_size=8192
*.db_create_file_dest='+DATA_CRM'
*.db_create_online_log_dest_1='+REDO_CRM'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='world.internal'
*.db_file_multiblock_read_count=8
*.db_files=2048
*.db_name='EBILPD1'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=536870912000
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=EBILDR1XDB)'
EBILDR11.instance_number=1
EBILDR12.instance_number=2
*.job_queue_processes=8
*.log_archive_dest_1='LOCATION=+ARCH_CRM'
*.log_archive_format='log_%d_%t_%s_%r.arc'
*.log_buffer=2147483648
*.memory_max_target=0
*.memory_target=0
*.open_cursors=500
*.open_links=10
*.open_links_per_instance=100
*.remote_login_passwordfile='exclusive'
*.sga_max_size=16884901888
*.sga_target=16884901888
*.standby_file_management='AUTO'
EBILDR12.thread=2
EBILDR11.thread=1
*.undo_retention=3600
EBILDR12.undo_tablespace='UNDOTBS2'
EBILDR11.undo_tablespace='UNDOTBS1'
#*.use_large_pages='ONLY'
*.db_unique_name='EBILDR1'

  4. The TNS entries below need to be added to $ORACLE_HOME/network/admin/tnsnames.ora and copied to all nodes.

 

EBILDR1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = DR-scan.world.internal)(PORT = 1591))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = EBILDR1.world.internal)
    )
  )

EBILPD1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = PROD-scan.world.internal)(PORT = 1591))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = EBILPD1.world.internal)
    )
  )

ORCLFS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = FS-scan.world.internal)(PORT = 1591))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLFS.world.internal)
    )
  )
 

  5. Copy the password file from the primary to ASM on both the Far Sync and standby clusters.

In Primary

su – grid

ASMCMD> pwget --dbuniquename EBILPD1
+DATA_BILLP/EBILPD1/PASSWORD/pwdebilpd1.256.928072397

ASMCMD> pwcopy +DATA_BILLP/EBILPD1/PASSWORD/pwdebilpd1.256.928072397 /tmp/
copying +DATA_BILLP/EBILPD1/PASSWORD/pwdebilpd1.256.928072397 -> /tmp/pwdebilpd1.256.928072397

On the Far Sync host:

su – grid

scp -pr oracle@10.84.44.6:/tmp/pwdebilpd1.256.928072397 /tmp/

ASMCMD>pwcopy /tmp/pwdebilpd1.256.928072397 +DATA_REP/pwdORCLFS.ora

On the standby host:

su – grid

scp -pr oracle@10.84.44.6:/tmp/pwdebilpd1.256.928072397 /tmp/

ASMCMD>pwcopy /tmp/pwdebilpd1.256.928072397 +DATA_CRM/pwdEBILDR1.ora

Create spfile from pfile in both FarSync and standby

In FarSync

SQL> create spfile='+DATA_REP/ORCLFS/spORCLFS.ora' from pfile;

In Standby

SQL> create spfile='+DATA_CRM/EBILDR1/spfileEBILDR1.ora' from pfile;

  6. Add SRVCTL configuration for both the standby and the Far Sync database.

In FarSync after login using oracle user

srvctl add database -d ORCLFS -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount -pwfile +DATA_REP/pwdORCLFS.ora -spfile '+DATA_REP/ORCLFS/spORCLFS.ora'

srvctl add instance -d ORCLFS -i ORCLFS1 -n prfsdbf01
srvctl add instance -d ORCLFS -i ORCLFS2 -n prfsdbf02

In standby after login using oracle user

srvctl add database -d EBILDR1 -o /u01/app/oracle/product/12.1.0/db_1 -r PHYSICAL_STANDBY -s mount -pwfile +DATA_CRM/pwdEBILDR1.ora -spfile '+DATA_CRM/EBILDR1/spfileEBILDR1.ora'

srvctl add instance -d EBILDR1 -i EBILDR11 -n prdrdbf01
srvctl add instance -d EBILDR1 -i EBILDR12 -n prdrdbf02

  7. Take a backup of the Far Sync control file on the primary and copy it to the Far Sync node.

SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/orclfs.ctl';

scp -pr *.ctl  oracle@10.84.44.6:/tmp/

The following steps need to be done on the Far Sync node to restore the Far Sync control file:

rman target /

restore controlfile to '+DATA_REP/ORCLFS/control.ctl' from '/tmp/orclfs.ctl';

  8. Add standby redo log files on the Far Sync instance and check its status.

-- add standby logs --

set serveroutput on;
begin
  for t in (select thread#, count(*) cnt, max(bytes) max_bytes
            from v$log
            group by thread#) loop
    -- generate one more standby log group than online log groups per thread
    for i in 1 .. t.cnt + 1 loop
      dbms_output.put_line('ALTER DATABASE ADD STANDBY LOGFILE THREAD ' ||
                           t.thread# || ' SIZE ' || t.max_bytes || ';');
    end loop;
  end loop;
end;
/
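The block above only prints the generated DDL; the statements are then run by hand. For two threads with 2 GB online logs, the generated commands would look like this (sizes illustrative):

```sql
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 2147483648;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 2147483648;
```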

SQL> select database_role from v$database;

DATABASE_ROLE
—————-
FAR SYNC

SQL> select DB_UNIQUE_NAME,DEST_ROLE from V$DATAGUARD_CONFIG;

no rows selected

SQL>alter system set log_archive_config='DG_CONFIG=(EBILPD1,ORCLFS,EBILDR1)';
SQL>alter system set log_archive_dest_2='service=EBILDR1 ASYNC valid_for=(standby_logfiles,standby_role) db_unique_name=EBILDR1';

SQL> ALTER DATABASE SET STANDBY TO MAXIMIZE AVAILABILITY;

  9. Enable force logging on the primary.

To implement a standby database, enable 'force logging'. This option ensures that even if a 'nologging' operation is performed, force logging takes precedence and all operations are logged into the redo logs.

SQL> alter database force logging;
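Whether force logging is active can be confirmed with:

```sql
SELECT force_logging FROM v$database;
```

It should return YES before continuing.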

 

  10. On the primary, add the following Data Guard-related parameters.

SQL>alter system set log_archive_dest_2='service=ORCLFS SYNC AFFIRM alternate=log_archive_dest_3 valid_for=(online_logfiles,primary_role) db_unique_name=ORCLFS';
SQL>alter system set log_archive_dest_3='service=EBILDR1 SYNC max_failure=1 alternate=log_archive_dest_2 valid_for=(online_logfiles,primary_role) db_unique_name=EBILDR1';
SQL>alter system set log_archive_dest_state_3=alternate;
SQL>alter system set log_archive_config='DG_CONFIG=(EBILPD1,ORCLFS,EBILDR1)';
SQL>alter system set fal_server='EBILDR1';
SQL>alter system set standby_file_management='AUTO';

SQL> ALTER DATABASE SET STANDBY TO MAXIMIZE AVAILABILITY;

  11. Take an RMAN backup on the primary, to be restored on the standby.

export ORACLE_SID=EBILPD1

run
{
allocate channel ch1 type disk format '/oraclecd/rman/Primary_bkp_for_stndby_%U';
allocate channel ch2 type disk format '/oraclecd/rman/Primary_bkp_for_stndby_%U';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
backup current controlfile for standby;
}

 

  12. On the standby, copy the backup and restore it.

export ORACLE_SID=EBILDR1

rman target sys/***@EBILPD1 auxiliary /

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

After that, open the standby database for Active Data Guard and start managed recovery.

RMAN>alter database open;

RMAN>alter database recover managed standby database disconnect;
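Apply progress can then be watched from the standby, for example:

```sql
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';
```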

  13. Now check the alert logs and verify synchronization.

For Oracle RDBMS version 12.1, set the parameter "_redo_transport_stall_time"=60 in all instances within the configuration for the best return to synchronization after a node outage with an ALTERNATE Far Sync configuration.

DGMGRL configuration

Enable the Data Guard broker using the commands below. Keep the broker stopped while relocating its configuration files:

alter system set dg_broker_start=false sid='*';

-- In a RAC, place the broker configuration files in ASM --
alter system set dg_broker_config_file1='+FRA/dr1.dat' sid='*' scope=both;
alter system set dg_broker_config_file2='+FRA/dr2.dat' sid='*' scope=both;

alter system set dg_broker_start=true sid='*';

If these parameters were set manually rather than through DGMGRL (as in the steps above), reset them before letting the broker manage transport:

alter system set log_archive_dest_2='';
alter system set log_archive_dest_3='';
alter system set log_archive_config='';
alter system set fal_server='';

Now login to primary and start DGMGRL configuration

export ORACLE_SID=EBILPD1

dgmgrl /
create configuration 'DGconfig' as primary database is EBILPD1 connect identifier is EBILPD1;
ADD FAR_SYNC ORCLFS as connect identifier is ORCLFS;
add database EBILDR1 as connect identifier is EBILDR1;
edit FAR_SYNC ORCLFS set property RedoRoutes='(EBILPD1 : EBILDR1 ASYNC)';
EDIT DATABASE EBILDR1 SET PROPERTY LogXptMode = 'SYNC';
EDIT DATABASE EBILPD1 SET PROPERTY MaxFailure = 0;
EDIT FAR_SYNC ORCLFS SET PROPERTY MaxFailure = 1;
EDIT DATABASE EBILDR1 SET PROPERTY MaxFailure = 0;
EDIT DATABASE EBILDR1 SET PROPERTY RedoRoutes = '(LOCAL : EBILPD1 ASYNC)';

EDIT DATABASE EBILPD1 SET PROPERTY RedoRoutes = '(LOCAL : ORCLFS SYNC ALT=(EBILDR1 ASYNC FALLBACK))';

DISABLE CONFIGURATION;
EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

ENABLE CONFIGURATION;

Check status

[oracle@EBILPD1 olg1prbscsdbf01 ~]$ dgmgrl /

DGMGRL for Linux: Version 12.1.0.2.0 – 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type “help” for information.

Connected as SYSDG.

DGMGRL> show configuration;

Configuration – DGconfig

Protection Mode: MaxAvailability

Members:

ebilpd1 – Primary database

orclfs  – Far sync instance

ebildr1 – Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:

SUCCESS   (status updated 57 seconds ago)

Now all the Data Guard-related parameters will be set up automatically by the broker.

In Primary,

log_archive_dest_2       string  service="orclfs", SYNC AFFIRM delay=0 optional compression=disable max_failure=1 max_connections=1 reopen=300 db_unique_name="orclfs" net_timeout=30, alternate=LOG_ARCHIVE_DEST_3 valid_for=(online_logfile,all_roles)

log_archive_dest_3       string  service="ebildr1", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="ebildr1" net_timeout=30, alternate=LOG_ARCHIVE_DEST_2 valid_for=(online_logfile,all_roles)

log_archive_dest_state_3 string  ALTERNATE

In FarSync,

log_archive_dest_2       string  service="ebildr1", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="ebildr1" net_timeout=30, valid_for=(standby_logfile,all_roles)

SQL> show parameter fal;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      ebilpd1, ebildr1

In standby

SQL> show parameter fal;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      ebilpd1, orclfs

 

Reference

Oracle MAA white paper: farsync-2267608.pdf

Clean *.trc, *.trm and *.aud files older than 1 day from the Oracle home

#!/bin/bash

. /home/oracle/.bash_profile

export log=/u01/app/oracle/shells/clean_rdbms_audit.log

echo $ORACLE_HOME | tee $log
echo "DELETING AUDIT LOGS FROM RDBMS" | tee -a $log
date | tee -a $log
df -m $ORACLE_HOME | tee -a $log
/bin/find /u01/app/oracle/ -type f -name "*.trc" -mtime +1 -exec rm {} \;
find $ORACLE_BASE/admin -type f -name "*.aud" -mtime +5 -exec rm {} \;
find $ORACLE_BASE/diag/rdbms -type f -name "*.trc" -mtime +1 -exec rm {} \;
find $ORACLE_BASE/diag/rdbms -type f -name "*.trm" -mtime +1 -exec rm {} \;
find $ORACLE_BASE/diag/rdbms -name "*_lmhb_*.tr?" -exec rm {} \;
sleep 10
df -m $ORACLE_HOME | tee -a $log
echo "END OF DELETING AUDIT LOGS" | tee -a $log
date | tee -a $log
exit 0
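The `-mtime +1` tests above select files last modified more than one day ago. A minimal, self-contained demo of that selection logic (the paths are throwaway, created just for the demo):

```shell
#!/bin/bash
# Demonstrate how find -mtime +1 picks only files older than 1 day.
set -e
demo=$(mktemp -d)
touch "$demo/fresh.trc"                    # modified now: NOT matched
touch -d "3 days ago" "$demo/stale.trc"    # modified 3 days ago: matched
find "$demo" -type f -name "*.trc" -mtime +1
rm -rf "$demo"
```

Only `stale.trc` is printed, which is exactly the set of files the cleanup script would delete.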

Tablespace usage considering autoextend on

This script lists tablespaces from all databases on the server that are more than 85 percent full. It takes the autoextend clause into account: usage is reported relative to the maximum autoextended size, so a tablespace appears only when its growth is approaching that limit.

1. The shell script:

for j in ORCL1 ORCL2
do
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
export ORACLE_SID=$j
#$ORACLE_HOME/bin/sqlplus -S monitor/m0n1t0r@"$j" << EOF >> tb_check_all_`date +\%d\%m\%Y\%H`.log
$ORACLE_HOME/bin/sqlplus -S monitor/m0n1t0r@"$j" << EOF >> tb_check_all.log
@tb_check.sql
exit
EOF
done

cat tb_check_all.log | grep -v "rows" > tb1_check_all.log
awk '{if (++dup[$0] == 1) print $0;}' tb1_check_all.log > tb_check_all_`date +\%d\%m\%Y\%H`.log
rm tb*all.log

2. The content of tb_check.sql:

SET FEEDBACK OFF
select name DB_NAME, TABLESPACE_NAME, round((TABLESPACE_SIZE*8192)/1024/1024/1024,0) "SIZE", round((USED_SPACE*8192)/1024/1024/1024,0) "USED", round(USED_PERCENT,0) "%USED" from dba_tablespace_usage_metrics, v$database where used_percent>85;

Script to sync check of applied archive log between 2 RAC databases

1. First create the shell script, adjusting the environment variables to your environment:

vi dataguard_dr_sync_check.sh
 #!/bin/bash
 cd /u01/app/oracle/shells

for k in ORCL
 do
 export ORACLE_BASE=/u01/app/oracle
 export ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
 export ORACLE_SID=$k
 $ORACLE_HOME/bin/sqlplus -S monitor@"$k"/xxx << EOF >> dataguard_dr_sync_check_orcl_`date +\%d\%m\%Y\%H`.log
 @dataguard_dr_sync_check_orcl.sql
 exit
 EOF
 done

2. For a 2-node RAC, dataguard_dr_sync_check_orcl.sql is shown below. For 4 nodes, add two more subqueries for the extra threads and UNION them.

SET HEADING OFF
 SELECT 'SYNC CHECK OF '||NAME||' & '||SUBSTR(NAME,0,LENGTH(NAME)-3)||'EXA1' FROM V$DATABASE;
 SET HEADING ON
 PROMPT ===================================================
 SELECT THREAD, LOG_ARCHIVED,LOG_APPLIED, LOG_ARCHIVED-LOG_APPLIED LOG_GAP
 FROM
 ( SELECT THREAD# THREAD FROM V$ARCHIVED_LOG WHERE THREAD#=1),
 ( SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=1),
 ( SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=1),
 ( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID=3 AND APPLIED='YES' and THREAD#=1 )
 UNION
 SELECT THREAD, LOG_ARCHIVED,LOG_APPLIED, LOG_ARCHIVED-LOG_APPLIED LOG_GAP
 from
 ( SELECT THREAD# THREAD FROM V$ARCHIVED_LOG WHERE THREAD#=2 ),
 ( SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES' and THREAD#=2 ),
 ( SELECT MAX(SEQUENCE#) LOG_APPLIED FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES' and THREAD#=2 ),
 ( SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID=3 AND APPLIED='YES' and THREAD#=2);

RMAN incremental backup: cumulative and differential

RMAN Incremental backup

An incremental backup copies only the datafile blocks that have changed since a specified previous backup.

1. This strategy can be used to build incrementally updated backups: the incremental backups are applied to an image copy, rolling it forward so that an up-to-date copy is always available.

2. It reduces the time needed to take a backup.

3. It saves network bandwidth.

  4. To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries and their changes cannot be reproduced with media recovery. They do, however, change data blocks and so are captured by incremental backups.
  5. To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole database backup every time, you can make incremental backups.

As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups if the database is open; if the database is in NOARCHIVELOG mode, then you can only make incremental backups after a consistent shutdown.

Level 0 and Level 1 Incremental Backups

A level 0 incremental backup copies all blocks containing data and is the base for an incremental backup strategy.

The difference between a full backup and a level 0 backup is:

A level 0 backup is recorded as part of the incremental strategy and can serve as the parent of level 1 backups; a full backup cannot.

Why cumulative backups can be preferable to differential

Cumulative backups are preferable to differential backups when recovery time is more important than disk space, because during recovery each differential backup must be applied in succession. Use cumulative incremental backups instead of differential, if enough disk space is available to store cumulative incremental backups.

 

Differential Incremental Backups

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

 

  • Sunday

An incremental level 0 backup backs up all blocks that have ever been in use in this database.

  • Monday – Saturday

On each day from Monday through Saturday, a differential incremental level 1 backup backs up all blocks that have changed since the most recent incremental backup at level 1 or 0. So, the Monday backup copies blocks changed since Sunday level 0 backup, the Tuesday backup copies blocks changed since the Monday level 1 backup, and so forth.

  • The cycle is repeated for the next week.

Cumulative Incremental Backups

BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0

In this example, the following occurs:

  • Sunday

An incremental level 0 backup backs up all blocks that have ever been in use in this database.

  • Monday – Saturday

A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday through Saturday backs up all blocks changed since the Sunday backup.

  • The cycle is repeated for the next week.

Basic incremental backup strategy

You can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is taken weekly, and a differential level 1 is taken daily. In this scheme, you never have to apply more than a day's worth of redo for complete recovery.
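Sketched as RMAN commands (the scheduling itself would come from cron or your backup tool):

```sql
-- Monthly: level 0 base backup
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Weekly: cumulative level 1 (all changes since the level 0)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;

-- Daily: differential level 1 (changes since the most recent level 1 or 0)
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```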

Making Incremental Backups: BACKUP INCREMENTAL

After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:

BACKUP INCREMENTAL LEVEL 0 DATABASE;

This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:

BACKUP INCREMENTAL LEVEL 1 TABLESPACE SYSTEM DATAFILE 'ora_home/oradata/trgt/tools01.dbf';

This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.

BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE  TABLESPACE users;

 

Enabling and Disabling Change Tracking to improve performance of incremental backup

To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database. Then issue the following SQL statement to enable change tracking:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

You can also create the change tracking file in a location you choose yourself, using the following SQL statement:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/mydir/rman_change_track.f' REUSE;

The REUSE option tells Oracle to overwrite any existing file with the specified name.

To disable change tracking, use this SQL statement:

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

If the change tracking file was stored in the database area, then it is deleted when you disable change tracking.

Checking Whether Change Tracking is Enabled

From SQL*Plus, you can query V$BLOCK_CHANGE_TRACKING.STATUS to determine whether change tracking is enabled, and if it is, query V$BLOCK_CHANGE_TRACKING.FILENAME to display the filename.
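Both columns can be read in one query:

```sql
SELECT status, filename, bytes
FROM   v$block_change_tracking;
```

STATUS shows ENABLED or DISABLED; FILENAME and BYTES are populated only while tracking is enabled.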

 

ASM overview and commands

 

Automatic Storage Management (ASM)

A.Overview of Automatic Storage Management (ASM)

Automatic Storage Management (ASM) simplifies administration of Oracle-related files by allowing the administrator to reference disk groups rather than individual disks and files, which are managed by ASM. The ASM functionality is an extension of the Oracle Managed Files (OMF) functionality that also includes striping and mirroring to provide balanced and secure storage. The ASM functionality can be used in combination with existing raw and cooked file systems, along with OMF and manually managed files.
The ASM functionality is controlled by an ASM instance. This is not a full database instance, just the memory structures and as such is very small and lightweight.
The main components of ASM are disk groups, each of which comprises several physical disks that are controlled as a single unit. The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The locations and names of the files are controlled by ASM, but user-friendly aliases and directory structures can be defined for ease of reference.
The level of redundancy and the granularity of the striping can be controlled using templates. Default templates are provided for each file type stored by ASM, but additional templates can be defined as needed.
Failure groups are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups so individual files are written to two locations.
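As a hypothetical example, a two-way mirrored disk group with two failure groups (the disk paths are made up) could be created like this:

```sql
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/oracleasm/disk1', '/dev/oracleasm/disk2'
  FAILGROUP controller2 DISK '/dev/oracleasm/disk3', '/dev/oracleasm/disk4';
```

With NORMAL redundancy, each extent is mirrored across the two failure groups, so the loss of either controller leaves a complete copy of the data.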
In summary ASM provides the following functionality:
  • Manages groups of disks, called disk groups.
  • Manages disk redundancy within a disk group.
  • Provides near-optimal I/O balancing without any manual tuning.
  • Enables management of database objects without specifying mount points and filenames.
  • Supports large files.
It is worth taking a quick look at the relevant section of the documentation to familiarize yourself with the basic requirements and recommendations for ASM.


B.Initialization Parameters and ASM Instance Creation

The initialization parameters that are of specific interest for an ASM instance are:
  • INSTANCE_TYPE – Set to ASM or RDBMS depending on the instance type. The default is RDBMS. This is the only mandatory parameter.
  • DB_UNIQUE_NAME – Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
  • ASM_POWER_LIMIT – The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
  • ASM_DISKGROUPS – The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter. It is ignored during a NOMOUNT startup. By default, the disk groups containing the OCR, voting disk and spfile are mounted automatically, along with any disk groups listed in this parameter.
  • ASM_DISKSTRING – Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL allowing all suitable disks to be considered.
Incorrect usage of parameters in ASM or RDBMS instances results in ORA-15021 errors.
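For illustration, the rebalance power and discovery string can be changed dynamically from the ASM instance, while ASM_DISKGROUPS is normally maintained in the spfile. The values below are assumptions for a sketch, not recommendations.

```sql
-- Run from the ASM instance; all values are illustrative.
ALTER SYSTEM SET asm_power_limit = 5;
ALTER SYSTEM SET asm_diskstring = '/dev/oracleasm/disks/*';
ALTER SYSTEM SET asm_diskgroups = 'DATA', 'FRA' SCOPE=SPFILE;
```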
To create an ASM instance, first create a file called "init+ASM.ora" in the "/tmp" directory containing the following information.
INSTANCE_TYPE=ASM
Next, using SQL*Plus connect to the idle instance.
export ORACLE_SID=+ASM
sqlplus / as sysdba
Create an spfile using the contents of the “init+ASM.ora” file.
SQL> CREATE SPFILE FROM PFILE='/tmp/init+ASM.ora';
File created.
Finally, start the instance with the NOMOUNT option.
SQL> startup nomount
ASM instance started
Total System Global Area  125829120 bytes
Fixed Size                  1301456 bytes
Variable Size             124527664 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
SQL>
The ASM instance is now ready to use for creating and mounting disk groups. To shutdown the ASM instance issue the following command.

SQL> shutdown

ASM instance shutdown

SQL>

Once an ASM instance is present disk groups can be used for the following parameters in database instances (INSTANCE_TYPE=RDBMS) to allow ASM file creation:
  • DB_CREATE_FILE_DEST
  • DB_CREATE_ONLINE_LOG_DEST_n
  • DB_RECOVERY_FILE_DEST
  • CONTROL_FILES
  • LOG_ARCHIVE_DEST_n
  • LOG_ARCHIVE_DEST
  • STANDBY_ARCHIVE_DEST
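As a minimal sketch, assuming disk groups named DATA and FRA exist, a database instance can be directed to create new files in ASM like this:

```sql
-- Run from the database (RDBMS) instance; disk group names are hypothetical.
ALTER SYSTEM SET db_create_file_dest = '+DATA';
ALTER SYSTEM SET db_recovery_file_dest_size = 100G; -- size must be set before the destination
ALTER SYSTEM SET db_recovery_file_dest = '+FRA';
```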

C.Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:
  • FORCE – Performs a SHUTDOWN ABORT before restarting the ASM instance.
  • MOUNT – Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
  • NOMOUNT – Starts the ASM instance without mounting any disk groups.
  • OPEN – This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:
  • NORMAL – The ASM instance waits for all connected ASM instances and SQL sessions to exit then shuts down.
  • IMMEDIATE – The ASM instance waits for any SQL transactions to complete then shuts down. It doesn’t wait for sessions to exit.
  • TRANSACTIONAL – Same as IMMEDIATE.
  • ABORT – The ASM instance shuts down instantly.
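Putting the two lists together, a routine restart of an ASM instance is typically just the following sketch (run from SQL*Plus connected to the ASM instance):

```sql
-- From the ASM instance (ORACLE_SID=+ASM).
SHUTDOWN IMMEDIATE
STARTUP MOUNT  -- mounts the disk groups listed in ASM_DISKGROUPS
```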

D.Administering ASM Disk Groups

There are a few basic points to consider when planning to use ASM:
  • In most cases you will only need two disk groups (DATA and FRA), where DATA holds all database related files and FRA holds the fast recovery area, including multiplexed copies of online redo logs and controlfiles. Typically, the FRA disk group will be twice the size of the DATA disk group, since it must hold all backups.
  • Oracle recommends a minimum of 4 LUNs per disk group, with LUNs using hardware RAID and external redundancy if possible.
  • All LUNs within a disk group should be the same size and have the same performance characteristics.
  • LUNs should be made up from disks dedicated to Oracle, not shared with other applications.
Now let’s look at basic administration of disk groups.

Disks

Disk groups are created using the CREATE DISKGROUP statement. This statement allows you to specify the level of redundancy:
  • NORMAL REDUNDANCY – Two-way mirroring, requiring two failure groups.
  • HIGH REDUNDANCY – Three-way mirroring, requiring three failure groups.
  • EXTERNAL REDUNDANCY – No mirroring for disks that are already protected using hardware mirroring or RAID. If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.
In addition, failure groups and preferred names for disks can be defined. If the NAME clause is omitted the disks are given a system generated name like "disk_group_1_0001". The FORCE option can be used to move a disk from another disk group into this one.
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK
'/devices/diska1' NAME diska1,
'/devices/diska2' NAME diska2
FAILGROUP failure_group_2 DISK
'/devices/diskb1' NAME diskb1,
'/devices/diskb2' NAME diskb2;
Disk groups can be deleted using the DROP DISKGROUP statement.

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Disks can be added or removed from disk groups using the ALTER DISKGROUP statement. Remember that the wildcard “*” can be used to reference disks so long as the resulting string does not match a disk already used by an existing disk group.
-- Add disks.
ALTER DISKGROUP disk_group_1 ADD DISK
'/devices/disk*3',
'/devices/disk*4';
-- Drop a disk.

ALTER DISKGROUP disk_group_1 DROP DISK diska2;

Disks can be resized using the RESIZE clause of the ALTER DISKGROUP statement. The statement can be used to resize individual disks, all disks in a failure group or all disks in the disk group. If the SIZE clause is omitted the disks are resized to the size of the disk returned by the OS.
-- Resize a specific disk.

ALTER DISKGROUP disk_group_1 RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.

ALTER DISKGROUP disk_group_1 RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.

ALTER DISKGROUP disk_group_1 RESIZE ALL SIZE 100G;

The UNDROP DISKS clause of the ALTER DISKGROUP statement allows pending disk drops to be undone. It will not revert drops that have completed, or disk drops associated with the dropping of a disk group.

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Disk groups can be rebalanced manually using the REBALANCE clause of the ALTER DISKGROUP statement. If the POWER clause is omitted the ASM_POWER_LIMIT parameter value is used. Rebalancing is only needed when the speed of the automatic rebalancing is not appropriate.

ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;

Disk groups are mounted at ASM instance startup and unmounted at ASM instance shutdown. Manual mounting and dismounting can be accomplished using the ALTER DISKGROUP statement as seen below.

ALTER DISKGROUP ALL DISMOUNT;

ALTER DISKGROUP ALL MOUNT;

ALTER DISKGROUP disk_group_1 DISMOUNT;

ALTER DISKGROUP disk_group_1 MOUNT;

Templates

Templates are named groups of attributes that can be applied to the files within a disk group. The following examples show how templates can be created, altered and dropped.
-- Create a new template.

ALTER DISKGROUP disk_group_1 ADD TEMPLATE my_template ATTRIBUTES (MIRROR FINE);

-- Modify a template.

ALTER DISKGROUP disk_group_1 ALTER TEMPLATE my_template ATTRIBUTES (COARSE);

-- Drop a template.

ALTER DISKGROUP disk_group_1 DROP TEMPLATE my_template;

Available attributes include:
  • UNPROTECTED – No mirroring or striping regardless of the redundancy setting.
  • MIRROR – Two-way mirroring for normal redundancy and three-way mirroring for high redundancy. This attribute cannot be set for external redundancy.
  • COARSE – Specifies lower granularity for striping. This attribute cannot be set for external redundancy.
  • FINE – Specifies higher granularity for striping. This attribute cannot be set for external redundancy.

Directories

A directory hierarchy can be defined using the ALTER DISKGROUP statement to support ASM file aliasing. The following examples show how ASM directories can be created, modified and deleted.
-- Create a directory.

ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.

ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.

ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;

Aliases

Aliases allow you to reference ASM files using user-friendly names, rather than the fully qualified ASM filenames.

-- Create an alias using the fully qualified filename.

ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'

FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.

ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'

FOR '+disk_group_1.342.3';

-- Rename an alias.

ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'

TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.

ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';

Attempting to drop a system alias results in an error.

Files

Files are not deleted automatically if they are created using aliases, as they are not Oracle Managed Files (OMF), or if a recovery is done to a point-in-time before the file was created. For these circumstances it is necessary to manually delete the files, as shown below.

-- Drop file using an alias.

ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop file using a numeric form filename.

ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop file using a fully qualified filename.

ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking Metadata

The internal consistency of disk group metadata can be checked in a number of ways using the CHECK clause of the ALTER DISKGROUP statement.

-- Check metadata for a specific file.

ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.

ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

-- Check metadata for a specific disk in the disk group.

ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.

ALTER DISKGROUP disk_group_1 CHECK ALL;

E.ASM Views

 

The ASM configuration can be viewed using the V$ASM_% views, which often contain different information depending on whether they are queried from the ASM instance or a dependent database instance.

View | In ASM Instance | In DB Instance
V$ASM_ALIAS | Displays a row for each alias present in every disk group mounted by the ASM instance. | Returns no rows.
V$ASM_CLIENT | Displays a row for each database instance using a disk group managed by the ASM instance. | Displays a row for the ASM instance if the database has open ASM files.
V$ASM_DISK | Displays a row for each disk discovered by the ASM instance, including disks which are not part of any disk group. | Displays a row for each disk in disk groups in use by the database instance.
V$ASM_DISKGROUP | Displays a row for each disk group discovered by the ASM instance. | Displays a row for each disk group mounted by the local ASM instance.
V$ASM_FILE | Displays a row for each file in each disk group mounted by the ASM instance. | Displays no rows.
V$ASM_OPERATION | Displays a row for each long running operation executing in the ASM instance. | Displays no rows.
V$ASM_TEMPLATE | Displays a row for each template present in each disk group mounted by the ASM instance. | Displays a row for each template present in each disk group mounted by the ASM instance with which the database instance communicates.

F.ASM Filenames

 

There are several ways to reference ASM files. Some forms are used during creation and some for referencing ASM files. The forms for file creation are incomplete, relying on ASM to create the fully qualified name, which can be retrieved from the supporting views. The forms of the ASM filenames are summarised below.

Filename Type | Format
Fully Qualified ASM Filename | +dgroup/dbname/file_type/file_type_tag.file.incarnation
Numeric ASM Filename | +dgroup.file.incarnation
Alias ASM Filename | +dgroup/directory/filename
Alias ASM Filename with Template | +dgroup(template)/alias
Incomplete ASM Filename | +dgroup
Incomplete ASM Filename with Template | +dgroup(template)

SQL and ASM

ASM filenames can be used in place of conventional filenames for most Oracle file types, including controlfiles, datafiles, logfiles etc. For example, the following command creates a new tablespace with a datafile in the disk_group_1 disk group.

CREATE TABLESPACE my_ts DATAFILE '+disk_group_1' SIZE 100M AUTOEXTEND ON;

G.Migrating to ASM Using RMAN

 

The following method shows how a primary database can be migrated to ASM from a disk based backup:

  • Disable change tracking (only available in Enterprise Edition) if it is currently being used.

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

  • Shutdown the database.

SQL> SHUTDOWN IMMEDIATE

  • Modify the parameter file of the target database as follows:
    • Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
    • Remove the CONTROL_FILES parameter from the spfile so the control files will be moved to the DB_CREATE_* destination and the spfile gets updated automatically. If you are using a pfile the CONTROL_FILES parameter must be set to the appropriate ASM files or aliases.
  • Start the database in nomount mode.

RMAN> STARTUP NOMOUNT

  • Restore the controlfile into the new location from the old location.

RMAN> RESTORE CONTROLFILE FROM 'old_control_file_name';

  • Mount the database.

RMAN> ALTER DATABASE MOUNT;

  • Copy the database into the ASM disk group.

RMAN> BACKUP AS COPY DATABASE FORMAT '+disk_group';

  • Switch all datafiles to the new ASM location.

RMAN> SWITCH DATABASE TO COPY;

  • Open the database.

RMAN> ALTER DATABASE OPEN;

  • Create new redo logs in ASM and delete the old ones.
  • Enable change tracking if it was being used.

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

H.Oracle 11g Features

 

New Disk Group Compatibility Attributes

Oracle 11g ASM includes two new compatibility attributes that determine the version of the ASM and database software that can use specific disk groups:

  • COMPATIBLE.ASM – The minimum version of the ASM software that can access the disk group. In 11g, the default setting is 10.1.
  • COMPATIBLE.RDBMS – The minimum COMPATIBLE database initialization parameter setting for any database instance that uses the disk group. In 11g, the default setting is 10.1.

The compatibility versions of a disk group can only be increased, not decreased. If you have increased the version by mistake, you will need to create a new disk group.

 

The disk group compatibility attributes can be set during disk group creation by adding the ATTRIBUTE clause to the CREATE DISKGROUP command.

 

CREATE DISKGROUP data DISK '/dev/raw/*'

ATTRIBUTE 'compatible.asm' = '11.1';

CREATE DISKGROUP data DISK '/dev/raw/*'

ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

 

The disk group compatibility attributes for existing disk groups can be altered using the SET ATTRIBUTE clause to the ALTER DISKGROUP command.

 

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';

ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';

 

The current compatibility settings are available from the V$ASM_DISKGROUP and V$ASM_ATTRIBUTE views.

 

COLUMN name FORMAT A10

COLUMN compatibility FORMAT A20

COLUMN database_compatibility FORMAT A20

SELECT group_number, name, compatibility, database_compatibility FROM v$asm_diskgroup;

GROUP_NUMBER NAME       COMPATIBILITY        DATABASE_COMPATIBILI

------------ ---------- -------------------- --------------------

1 DATA       11.1.0.0.0           11.1.0.0.0

1 row selected.

SQL>

COLUMN name FORMAT A20

COLUMN value FORMAT A20

SELECT group_number, name, value FROM v$asm_attribute ORDER BY group_number, name;

GROUP_NUMBER NAME                 VALUE

------------ -------------------- --------------------

1 au_size              1048576

1 compatible.asm       11.1.0.0.0

1 compatible.rdbms     11.1

1 disk_repair_time     3.6h

4 rows selected.

SQL>

I.Fast Mirror Resync

 

During transient disk failures within a failure group, ASM keeps track of the changed extents that need to be applied to the offline disk. Once the disk is available, only the changed extents are written to resynchronize the disk, rather than overwriting the contents of the entire disk. This can speed up the resynchronization process considerably.

 

Fast mirror resync is only available when the disk group compatibility attributes are set to 11.1 or higher.

ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'compatible.asm' = '11.1';

ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'compatible.rdbms' = '11.1';

 

ASM drops disks if they remain offline for more than 3.6 hours. The disk group's default time limit is altered by changing the DISK_REPAIR_TIME attribute, with a unit of minutes (M or m) or hours (H or h).

-- Set using the hours unit of time.

ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '4.5h';

-- Set using the minutes unit of time.

ALTER DISKGROUP disk_group_1 SET ATTRIBUTE 'disk_repair_time' = '300m';

The DROP AFTER clause of the ALTER DISKGROUP command is used to override the disk group default DISK_REPAIR_TIME.

-- Use the default DISK_REPAIR_TIME for the disk group.

ALTER DISKGROUP disk_group_1 OFFLINE DISK D1_0001;

-- Override the default DISK_REPAIR_TIME.

ALTER DISKGROUP disk_group_1 OFFLINE DISK D1_0001 DROP AFTER 30m;

If a disk goes offline during a rolling upgrade, the timer is not started until after the rolling upgrade is complete.

 

J.Rolling Upgrade

 

Clustered ASM instances for 11g onwards can be upgraded using a rolling upgrade. The ASM cluster is placed in rolling upgrade mode by issuing the following command from one of the nodes.

ALTER SYSTEM START ROLLING MIGRATION TO '11.2.0.0.0';

Once the cluster is in rolling upgrade mode each node in turn can be shutdown, upgraded and started. The cluster runs in a mixed version environment until the upgrade is complete. In this state, the cluster is limited to the following operations:

  • Mount and dismount of the disk groups.
  • Open, close, resize, and delete of database files.
  • Access to local fixed views and fixed packages.

The current status of the ASM cluster can be determined using the following query.

SELECT SYS_CONTEXT('sys_cluster_properties', 'cluster_state') FROM dual;

Once the last node is upgraded, the rolling upgrade is stopped by issuing the following command, which checks all ASM instances are at the appropriate version, turns off rolling upgrade mode and restarts any pending rebalance operations.

ALTER SYSTEM STOP ROLLING MIGRATION;

Restrictions and miscellaneous points about the rolling upgrade process include:

  • The Oracle clusterware must be fully patched before an ASM rolling upgrade is started.
  • Rolling upgrades are only available from 11g onwards, so this method is not suitable for 10g to 11g upgrades.
  • This method can be used to rollback to the previous version if the rolling upgrade fails before completion.
  • If the upgrade fails, any rebalancing operations must complete before a new upgrade can be attempted.
  • New instances joining the cluster during a rolling upgrade are automatically placed in rolling upgrade mode.
  • If all instances in a cluster are stopped during a rolling upgrade, once the instances restart they will no longer be in rolling upgrade mode. The upgrade must be initiated as if it were a new process.

K.SYSASM Privilege and OSASM OS Group

 

The introduction of ASM moved the management of storage away from system administrators and into DBA territory. Unfortunately this isn't how every company operates. As a result, some system administrators were required to have access to privileged users to handle disk storage. Oracle addresses this issue with the introduction of the SYSASM privilege and the OSASM operating system group, which provide two mechanisms to enable the separation of storage and database administration duties.

 

Users can be created in the ASM instance in a similar manner to database users. Granting these users the SYSASM privilege allows them to connect to the ASM instance and perform administration tasks. First, connect to the ASM instance.

 

$ export ORACLE_SID=+ASM

$ sqlplus / as sysasm

Next, create a new user in the ASM instance and grant it the SYSASM privilege. The user is now able to connect using SYSASM.

SQL> CREATE USER asm_user_1 IDENTIFIED by asm_password;

User created.

SQL> GRANT SYSASM TO asm_user_1;

Grant succeeded.

SQL> CONN asm_user_1/asm_password AS SYSASM

Connected.

SQL>

Alternatively, assigning an operating system user to the OSASM group (asmadmin) allows them to connect as SYSASM using OS authentication. The following example creates a new OS user assigned to the OSASM group (asmadmin), which is immediately able to connect using SYSASM.

# useradd tim_hall -G asmadmin

# su - tim_hall

$ export ORACLE_SID=+ASM

$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1

$ $ORACLE_HOME/bin/sqlplus / as sysasm

SQL*Plus: Release 11.1.0.6.0 – Production on Fri Aug 8 16:48:37 2008

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 – Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

L. Scalability and Performance Enhancements

 

ASM files are stored in a disk group as a collection of extents. In Oracle 10g each extent mapped directly to one allocation unit (AU), but in Oracle 11g an extent can be made up of one or more allocation units. As files get larger, the extent size can grow to reduce the size of the extent map describing the file, thus saving memory.

 

When the disk group compatibility attributes are set to 11.1 or higher, the extent size will automatically grow as the file grows. The first 20,000 extents match the allocation unit size (1*AU). The next 20,000 extents are made up of 8 allocation units (8*AU). Beyond that point, the extent size becomes 64 allocation units (64*AU).

 

In addition to the automatic expansion of the extent sizes, Oracle 11g also allows control over the allocation unit size using the ATTRIBUTE clause in the CREATE DISKGROUP statement, with values ranging from 1M to 64M.

CREATE DISKGROUP disk_group_2

EXTERNAL REDUNDANCY

DISK '/dev/sde1'

ATTRIBUTE 'au_size' = '32M';

The combination of expanding extent sizes and larger allocation units should result in increased I/O performance for very large databases.

 

M.New ASMCMD Commands and Options

 

The ASM command line utility includes several new commands. The following example output assumes you have already started the utility using the "-p" option.

$ asmcmd -p

ASMCMD [+] >

The usage notes for each command are available by issuing the "help <command>" command from within the asmcmd utility, so I will avoid displaying all this information here.

 

The lsdsk command lists information about ASM disks from the V$ASM_DISK_STAT and V$ASM_DISK views. The summary usage is shown below.

lsdsk [-ksptcgHI] [-d <diskgroup_name>] [pattern]

Examples of the output from the basic command and the "-k" option are shown below.

ASMCMD [+] > lsdsk

Path

/dev/sdc1

/dev/sdd1

/dev/sde1

ASMCMD [+] > lsdsk -d data -k

Total_MB  Free_MB  OS_MB  Name       Failgroup  Library  Label  UDID  Product  Redund   Path

8189     6961   8189  DATA_0000  DATA_0000  System                         UNKNOWN  /dev/sdc1

8189     6961   8189  DATA_0001  DATA_0001  System                         UNKNOWN  /dev/sdd1

8189     6950   8189  DATA_0002  DATA_0002  System                         UNKNOWN  /dev/sde1

ASMCMD [+] >

The cp command allows files to be copied between ASM and local or remote destinations. The summary usage is shown below.

cp [-ifr] <[\@connect_identifier:]src> <[\@connect_identifier:]tgt>

The following example copies the current USERS datafile from ASM to the local file system.

ASMCMD [+] > cp +DATA/db11g/datafile/users.273.661514191 /tmp/users.dbf

source +DATA/db11g/datafile/users.273.661514191

target /tmp/users.dbf

copying file(s)…

file, /tmp/users.dbf, copy committed.

ASMCMD [+] >

There seems to be some concern that the cp command doesn't work for control files.

 

The md_backup command makes a copy of the metadata for one or more disk groups. The summary usage is shown below.

md_backup [-b location_of_backup] [-g dgname [-g dgname …]]

An example of the command is shown below. The resulting file contains all the metadata needed to recreate the ASM setup.

ASMCMD [+] > md_backup -b /tmp/backup.txt -g data

Disk group to be backed up: DATA

ASMCMD [+] >

The md_restore command allows you to restore a disk group from the metadata created by the md_backup command. It also allows a certain amount of manipulation of the final disk groups during the restore. The summary usage is shown below.

md_restore -b <backup_file> [-li]

[-t (full)|nodg|newdg] [-f <sql_script_file>]

[-g '<diskgroup_name>,<diskgroup_name>,...']

[-o '<old_diskgroup_name>:<new_diskgroup_name>,...']

A straight restore of the backup shown previously is shown below.

ASMCMD [+] > md_restore -b /tmp/backup.txt -t full -g data

The remap command repairs a range of physical blocks on disk. The contents of each block are not validated, so only blocks exhibiting read errors are repaired. The summary usage is shown below.

remap <disk group name> <disk name> <block range>

An example of the command is shown below.

ASMCMD [+] > remap data data_0001 5000-5999

For detailed usage information see the ASMCMD Command Reference.

 

N.Copy a backup from one ASM disk group to an ASM disk group on a remote host

 

asmcmd

cd +EOCDATA/backup

cp primary_bkp_for_stndby_bjr03dba_1_1 sys/oraeocprod@172.22.203.170.+ASM1:+EOCFRA/bkp/primary_bkp_for_stndby_bjr03dba_1_1

 

O.Preferred Read Failure Groups

 

In Oracle 10g, ASM always reads the primary copy of the mirrored extent set. This isn’t a problem when both nodes and both failure groups are all located in the same site, but it can be inefficient for extended clusters, causing needless network traffic. Oracle 11g allows each node to define a preferred failure group, allowing nodes in extended clusters to access local failure groups in preference to remote ones.

 

To configure preferred read failure groups the disk group compatibility attributes must be set to 11.1 or higher. Once the compatibility options are correct, the ASM_PREFERRED_READ_FAILURE_GROUPS parameter is set to the preferred failure groups for each node.

SELECT name, failgroup FROM v$asm_disk;

NAME                           FAILGROUP

------------------------------ ------------------------------

DATA_0000                      DATA_0000

DATA_0001                      DATA_0001

DATA_0002                      DATA_0002

3 rows selected.

SQL>

ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'data.data_0000', 'data.data_0001', 'data.data_0002';

P.Fast Rebalance

 

The "ALTER DISKGROUP ... MOUNT" statement allows disk groups to be mounted in restricted mode.

SQL> SELECT name FROM v$asm_diskgroup;

NAME

------------------------------

DATA

SQL> ALTER DISKGROUP data DISMOUNT;

Diskgroup altered.

SQL> ALTER DISKGROUP data MOUNT RESTRICTED;

Diskgroup altered.

SQL> ALTER DISKGROUP data DISMOUNT;

Diskgroup altered.

SQL> ALTER DISKGROUP data MOUNT;

Diskgroup altered.

SQL>

In a RAC environment, a disk group mounted in RESTRICTED mode can only be accessed by a single instance. The restricted disk group is not available to any ASM clients, even on the node where it is mounted.

 

Using RESTRICTED mode improves the performance of rebalance operations in a RAC environment as it eliminates the need for the lock and unlock extent map messaging that occurs between ASM instances. Once the rebalance operation is complete, the disk group should be dismounted then mounted in NORMAL mode (the default).

 

Q.Miscellaneous Disk Group Maintenance Enhancements

  • The CREATE DISKGROUP and ALTER DISKGROUP commands include a new ATTRIBUTE clause (compatible, disk_repair_time, au_size).
  • The CHECK clause of the ALTER DISKGROUP command has been simplified so there are only two options, NOREPAIR and REPAIR, with NOREPAIR as the default. Summarized errors are displayed, with full error messages written to the alert log.

ALTER DISKGROUP data CHECK; -- Like NOREPAIR

ALTER DISKGROUP data CHECK NOREPAIR;

ALTER DISKGROUP data CHECK REPAIR;

  • Disk groups can now be mounted in restricted mode, which can improve performance of some maintenance tasks.
  • The ALTER DISKGROUP command now includes ONLINE and OFFLINE clauses so disks can be taken offline for repair before being brought back online.

-- Individual disks.

ALTER DISKGROUP data OFFLINE DISK 'disk_0000', 'disk_0001';

ALTER DISKGROUP data ONLINE DISK 'disk_0000', 'disk_0001';

-- Failure groups.

ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP 'fg_0000';

ALTER DISKGROUP data ONLINE DISKS IN FAILGROUP 'fg_0000';

-- Bring online all disks in the disk group.

ALTER DISKGROUP data ONLINE ALL;

  • Disk groups that can’t be mounted by ASM can now be dropped using the FORCE keyword of the DROP DISKGROUP command.

DROP DISKGROUP data FORCE;

Unix basic commands and shell scripts for DBA and SYSADMIN

DIRECTORY STRUCTURE IN UNIX

 

Directory Contains
bin/ Required Boot-time binaries
boot/ Boot configuration files for the OS loader and kernel image
dev/ Device files
etc/ System configuration files and scripts
home/ User/Sub branch directories
lib/ Main OS shared libraries and kernel modules
lost+found/ Storage directory for "recovered" files
mnt/ Temporary point to connect devices to
proc/ Pseudo directory structure containing information about the kernel, currently running processes and resource allocation
root/ Linux (non-standard) home directory for the root user. Alternate location being the / directory itself
sbin/ System administration binaries and tools
tmp/ Location of temporary files
usr/ Difficult to define – it contains almost everything else including local binaries, libraries, applications and packages (including X Windows)
var/ Variable data, usually machine specific. Includes spool directories for mail and news

 

GENERAL UNIX COMMANDS

 

TO SEE WHERE COMMANDS ARE LOCATED

echo $PATH

/bin:/usr/bin:/usr/local/bin:$HOME/bin

which ls

SEEING HELP COMMAND

date --help

info date

man date

whatis date

Extended documentation: /usr/share/doc

TO CHANGE CONSOLE

Ctrl+alt+(F1-F7)

 

CHANGING DIRECTORY

cd /home/deb/work

cd ..[One level up]

LISTING DIRECTORIES AND FILES

ls /

ls -al [All files, including hidden files]

ls -l [Long listing]

ls -R [Recurses through sub-directories]

ls -d */ [Listing directories only]

ls -ltr | grep ^d [Listing directories only]

List of command options

Option Action

 

-a list hidden files

-d list the name of the current directory

-F show directories with a trailing '/', executable files with a trailing '*'

-g show group ownership of file in long listing

-i print the inode number of each file

-l long listing giving details about files and directories

-R list all subdirectories encountered

-t sort by time modified instead of name

COPYING FILES

cp source destination

cp -p source destination [Preserves mode, ownership and timestamps]

cp -R source destination [Recursively copies the files]

cp ~helper/tmp/for_john tmp/comments [To copy a file from another user’s directory using a relative pathname]

cp /usr/lib/more.help . [To copy a file from another directory to the current working directory, preserving the file name]

MOVING AND RENAMING

mv file1 file2 [ Renaming]

mv source destination [ Move]

REMOVING FILES AND DIRECTORIES

rm chap[1-2] [wildcard is used]

rm -i chap1 [Removing interactively]

rm -rf mydir [Deleting files or directories forcefully]

rmdir deb [removing empty directory]

TO CHECK FILE TYPES

file deb.sh

Displaying the first few lines of a file

head /usr/local/doc/vi.txt[To display the first 10 lines of a file]

head -5 *.xdh[To display the first 5 lines of several files]

 

Displaying the last part of a file

 

tail fred[To display the last 10 lines of a text file]

tail -50c help.txt[To display the last 50 characters in a file]

tail -c -512 foo[Copies last 512 bytes from foo]

tail -c +512 foo[Copies everything starting at byte 512]

 

HANDLING PRINTERS

 

lp deb.lis[Printing file]

lp -d laser deb.lis[If there are more printers]

lp -n3 deb.lis[Printing 3 copies]

lpstat[To view print queue]

cancel laser[To cancel printing of current printer]

cancel pr1-320[Cancel job with request id pr1-320]

lpq[To see print que job number]

lprm 31[Removes job number 31]

lpr -p laser deb.lis[Prints on printer laser]

lpstat -c[To see pending jobs]

 

COUNTING LINES

 

wc deb.txt

wc -c[No. of characters]

wc -l[No. of lines]

 

COMPRESSING FILES

 

compress deb.txt

uncompress deb.txt

gzip deb

gunzip deb.gz
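The gzip/gunzip pair round-trips a file without loss; a minimal sketch using a throwaway temp file (the file name and contents are made up for illustration):

```shell
# Round-trip a file through gzip and gunzip and confirm the content survives.
tmpdir=$(mktemp -d)
printf 'hello compression\n' > "$tmpdir/deb"

gzip "$tmpdir/deb"        # replaces deb with deb.gz
gunzip "$tmpdir/deb.gz"   # restores deb and removes deb.gz

cat "$tmpdir/deb"
```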

 

USING TAR COMMAND

 

# display contents of a file
tar tvf myfile.tar

# display contents of a diskette (Solaris)
volcheck
tar tvf /vol/dev/rdiskette0/unnamed_floppy

# copy files to a tar file
tar cvf myfile.tar *.sql

# format floppy, and copy files to it (Solaris)
fdformat -U -b floppy99
tar cvf /vol/dev/rdiskette0/floppy99 *.sql

# append files to a tar file
tar rvf myfile.tar *.txt

# extract files from a tar file to the current dir
tar xvf myfile.tar
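The create/list/extract cycle above can be rehearsed end to end in a scratch directory (the file names are illustrative):

```shell
# tar round trip: create an archive, list it, extract it elsewhere.
workdir=$(mktemp -d)
cd "$workdir"
printf 'select 1;\n' > a.sql
printf 'select 2;\n' > b.sql

tar cf myfile.tar a.sql b.sql   # c = create, f = archive file name
tar tf myfile.tar               # t = list the table of contents

mkdir restore && cd restore
tar xf ../myfile.tar            # x = extract into the current directory
cat a.sql
```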

 

SORTING

 

sort /etc/passwd
sort in order based on the whole line

sort -r /etc/passwd
reverse the order of the sort

sort +2n -t: /etc/passwd
sort on third field, where field delimiter is : (skip the first two fields)

sort +2n -t: -n /etc/passwd
same sort but treat the field as numbers not ASCII characters

sort -o telnos telnos (sorts telnos and replaces its contents with the sorted output)
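Note that the `+2n` field syntax above is obsolete; POSIX sort spells the same thing with -k. A small sketch with made-up passwd-style data:

```shell
# -t: sets the field delimiter, -k3,3n sorts numerically on the third field only.
tmp=$(mktemp)
printf 'carol:x:30\nalice:x:7\nbob:x:102\n' > "$tmp"

sort -t: -k3,3n "$tmp"
# alice:x:7
# carol:x:30
# bob:x:102
```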

 

DUPLICATE ROW BY UNIQUE COMMAND

 

uniq names
remove duplicate lines from names and display them on the screen

uniq names uniq.names
remove duplicate lines from names and put the result into uniq.names

uniq -d names
display all duplicate lines
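One caveat worth remembering: uniq only collapses *adjacent* duplicates, so unsorted input should be piped through sort first. A quick sketch:

```shell
tmp=$(mktemp)
printf 'dave\nann\ndave\nann\n' > "$tmp"

uniq "$tmp" | wc -l         # 4 -- no two duplicate lines are adjacent
sort "$tmp" | uniq | wc -l  # 2 -- sorting makes the duplicates adjacent
```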

 

Using symbolic links

 

ln -s source linkname

ln -s reports/reportA publications/my_report

 

TO DETERMINE DISK USAGE

 

# display disk free, in KB
df -kt

# display disk usage, in KB for directory
du -k mydir

# display directory disk usage, sort by largest first
du -ahx .|sort -rh|head -20

 

 

FIND Command

Find files by name

$ find ~ -name "*.txt" -print

To find all ‘*.txt’ files in the current directory you are in and any sub-directories:

$ find . -name "*.txt" -print

To find all filenames with at least one upper case character, in your current directory

and any sub-directories:

$ find . -name "[A-Z]*" -print

To find files in /etc directory that begin with host, and any other characters after

that:

$ find /etc -name "host*" -print

To find all files in your $HOME directory:

$ find ~ -name "*" -print or find . -print

 

To bring the system to zero per cent response time, start at the root level and drill

through all directories listing ‘all’ files. If you want to remain on good terms with

your sys admin be very careful with this option!

$ find / -name "*" -print

Find all files that begin with two lower case characters, followed by two numbers,

followed by .txt. The following find command could return a file called ax37.txt.

$ find . -name "[a-z][a-z][0-9][0-9].txt" -print

 

Find files by perm mode

 

To find files with a permission of 755 which is read, write and executable by

owner and read and execute by everyone else,

$ find . -perm 755 -print

 

To find files that are read, write and executable for everyone (watch out for these) use

this find command. Put a dash in front of the octal number. The perm stands for

permissions and 007 is based on the notation you use for the chmod (absolute) mode.

$ find . -perm -007 -print

 

Ignoring directories

 

To display the files in the apps directory when you do not want find to search

in /apps/bin:

$ find /apps -path "/apps/bin" -prune -o -print

 

Find files by user and nouser

 

To find files by owner, you supply the actual login name of that user. To find files

owned by user dave in your $HOME directory:

$ find ~ -user dave -print

 

To find files owned by uucp in /etc:

$ find /etc -user uucp -print

 

To find files of a user that has been deleted, use the -nouser option. This

finds files that have no valid userid in the /etc/passwd file. You do not have to

supply a user name with this option; find will go off and do all the work. To find

all files not owned by a current user in the /home directory tree:

$ find /home -nouser -print

 

 

Find files by group and nogroup

 

Like the user and nouser option, find has the same options for groups. To find all

files with the group membership of ‘accts’ that are in the /apps directory:

$ find /apps -group accts -print

To find all files that have no valid group membership just supply the -nogroup

option. Here find starts at the root of the file system.

$ find / -nogroup -print

 

Find files by modification times

 

To find all files that have been modified in the last five days:

$ find / -mtime -5 -print

To find files in /var/adm directory that have not been modified in the last three

days:

$ find /var/adm -mtime +3 -print

 

Finding files that are newer or older in days or minutes

 

$ find . -newer age.awk ! -newer belts.awk -exec ls -l {} \;

  

Delete old trace files from DIAG location of Oracle Database

find /u01/app/oracle -type f -name "*.trc" -mtime +1 -exec rm {} \;

find /u01/app/oracle -type f -name "*.trm" -mmin +720 -exec rm {} \;

find /u01/app/oracle -type f -name "*.aud" -mmin +720 -exec rm {} \;
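The same cleanup can be rehearsed safely against a scratch directory before pointing it at a real diag destination (the directory and file names below are stand-ins; touch -t backdates a file so -mtime +1 has something to match):

```shell
diag=$(mktemp -d)
touch -t 202001010000 "$diag/old.trc"   # backdated well past one day
touch "$diag/new.trc"                   # modified just now

find "$diag" -type f -name "*.trc" -mtime +1 -print           # lists only old.trc
find "$diag" -type f -name "*.trc" -mtime +1 -exec rm {} \;   # removes it
```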

  VI EDITOR

List of insert commands

 

To do this … Command

Insert text after the cursor                             a

Insert text before the cursor                           i

Append text at the end of  the current line     A

Insert text at the start of the current line        I

Open a new line below the current line         o

Open a new line above the current line         O

 

Recovering files after a system crash

 

vi -r help.xdh

 

Viewing a file

view filename

 

Moving the cursor along a line

Press the ESC key to enter command mode before using these com-mands.

To move to Do this …

 

next character                        l

previous character                 h

next word                              w

next n words                         nw

previous word                       b

previous n words                  nb

end of current word              e

start of current line                0 (zero)

end of current line                  $

Moving the cursor between lines

Press the ESC key to enter command mode before using these com-mands.

To move to Do this …

 

next line down                        j

(same column)

start of next line down           +

previous line                           k

(same column)

start of previous line                -

Deleting characters

Press the ESC key to enter command mode before using these com-mands.

To delete Do this

 

current character                   x

previous character                dh

Deleting words and lines

Press the ESC key to enter command mode before using these com-mands.

To delete Do this

current word                         dw

previous word                       db

entire line                               dd

to end of line                         d$

to start of line                        d0 (zero)

next n lines                           ndd

 

To get back a word or line that you have just deleted enter the command:

p

Searching for text

Press the ESC key to enter command mode before using these com-mands.

To search Do this …

forward for a pattern /pattern

backward for a pattern ?pattern

repeat previous search                          n

repeat previous search in reverse direction     N

Examples of replacing text

To replace one word in the current line with another:

:s/that/which

 

To replace every instance of one word in the current line with

another:

:s/destroy/delete/g

This replaces every occurrence of the word “destroy” with the

word “delete” in the current line.

 

To replace every occurrence of a word throughout the file, with

another:

:g/insert/s//add/g

This replaces every occurrence of “insert” with “add”.

To remove ^M (carriage return) characters from a text file (type Ctrl-V then Ctrl-M to enter the ^M):

:%s/^M//g

 

UNIX Scheduler and Process

CRONTAB

 1st column Minutes 0–59

2nd column Hour 0–23 (0 is midnight)

3rd column Day_of_month 1–31

4th column Month 1–12

5th column Weekday 0–6 (0 is Sunday)

6th column Command to run

 

Here are some examples of crontab entries:

 

30 21 * * * /apps/bin/cleanup.sh

 

runs a script called cleanup.sh in /apps/bin at 9:30 every night.

Cron and crontab 33

45 4 1,10,22 * * /apps/bin/backup.sh

runs a script called backup.sh in /apps/bin at 4:45 a.m., on the 1st, 10th and 22nd

of each month.

10 1 * * 6,0 /bin/find -name "core" -exec rm {} \;

runs a find command at 1:10 a.m. only on Saturday and Sunday.

0,30 18-23 * * * /apps/bin/dbcheck.sh

runs a script called dbcheck.sh in /apps/bin on the hour and at 30 minutes past the hour,

between 18:00 and 23:00.

0 23 * * 6 /apps/bin/qtrend.sh

runs a script called qtrend.sh in /apps/bin at 11:00 p.m. every Saturday.

 

The general format of the crontab is:

crontab [-u user] {-e | -l | -r}

where:

-u is the user login name

-e edits the crontab file

-l lists the crontab file

-r deletes the crontab file

 

Submitting a command to the background

 

$ find /etc -name "srm.conf" -print > find.dt 2>&1 &
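When a job runs in the background, the shell stores its process id in $!, which can be passed to wait to block until the job finishes. A minimal sketch:

```shell
sleep 1 &            # run in the background
bgpid=$!             # PID of the most recent background job
echo "started job $bgpid"

wait "$bgpid"        # block until the job completes
status=$?
echo "job exited with status $status"
```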

 

Checking the process using ps

 

$ ps x|grep 28305

28305 p1 S 0:00 sh /root/ps1

28350 p1 S 0:00 grep 28305

 

Killing a background job

 

kill -signal [process_number]

 

$ kill 28305

$ kill -9 28305

 

Submitting a job using nohup

 

nohup command >myout.file 2>&1 &

 

$ nohup ps1 &

 

 CONTROLLING ACCESS TO FILE SYSTEM

 

1.To display the permissions on a single file:

ls -l file1

-rw-r--r-- 2 unixjohn 3287 Apr 8 12:10 file1

2.This displays the following information about the file file1.

 

-rw-r--r-- – access permissions

2 – number of links to this file

unixjohn – owner

3287 – size in bytes

Apr 8 12:10 – date and time last modified

3.There are three types of permissions:

r – read the file or list files in the directory

w – write to the file or directory

x – execute the file or search the directory

4.Each of these permissions can be set for any one of three types of

user:

u – the user who owns the file (usually you)

g – members of the group the owner belongs to

o – all other users

The access permissions for all three types of user can be given as a

string of nine characters:

user      group      others

r w x         r w x            r w x

5.Changing permission

  1. To give yourself permission to execute a file that you own:

chmod u+x file1

This gives you execute permission for the file file1.

  2. To give members of your group permission to read a file:

chmod g+r file2

This gives the group permission to read the file file2.

  3. To give read permission to everyone for a particular type of file:

chmod a+r *.pub

This gives everyone permission to read all files with the extension

.pub.

  4. To give the group write and execute permission:

chmod g+wx $HOME/SCCS

This gives all members of the group permission to place files in

the directory $HOME/SCCS.

6.Setting access permissions numerically

There is a shorthand way of setting permissions by using octal numbers.

Read permission is given the value 4, write permission the value

2 and execute permission 1.

r w x

4 2 1

These values are added together for any one user category:

1 = execute only

2 = write only

3 = write and execute (1+2)

4 = read only

5 = read and execute (4+1)

6 = read and write (4+2)

7 = read and write and execute (4+2+1)

So any access permission can be expressed as three digits. For example:

user group others

chmod 640 file1 rw- r-- ---

chmod 754 file1 rwx r-x r--

chmod 664 file1 rw- rw- r--

7.Creating users and groups

useradd -u 210 -g dba -c "THE RDBMS" -d /home/oracle -s /bin/ksh -m oracle

groupadd -g 241 dba

Information is stored in /etc/passwd and /etc/shadow

8.Setting limits on file size

ulimit 234567

9.Modifying and removing users

usermod -s /bin/bash oracle

userdel oracle

10.Changing password

passwd

# change group to staff for this file
chgrp staff myfile

# change owner to jsmith for this file
chown jsmith myfile
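The numeric chmod form above can be verified directly from the mode string that ls -l prints; a small sketch against a throwaway temp file:

```shell
f=$(mktemp)
chmod 640 "$f"               # 6=rw- owner, 4=r-- group, 0=--- others
ls -l "$f" | cut -c1-10      # -rw-r-----
```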

 

Shell input and output

 

ECHO

 

$ echo "What is your name :\c"

$ read name

 

You can put variables within echo statements as well as escape codes. In this

example, the terminal bell is rung, your $HOME directory is displayed, and the

command tty is evaluated by the shell.

$ echo "\007your home directory is $HOME, you are connected on `tty`"

your home directory is /home/dave, you are connected on /dev/ttyp1

 

If it’s LINUX then . . .

You have to put a ‘-n’ after the echo to suppress the new line:

$ echo -n "What is your name :"

You have to put a ‘-e’ after echo for the escape code to work:

$ echo -e "\007your home directory is $HOME, you are connected on `tty`"

your home directory is /home/dave, you are connected on /dev/ttyp1

 

READ

 

read variable1 variable2 . . .

 

$ read name

Hello I am superman

$ echo $name

Hello I am superman

 

$ read name surname

John Doe

$ echo $name $surname

John Doe

 

$ pg var_test

#!/bin/sh

# var_test

echo "First Name :\c"

read name

echo "Middle Name :\c"

read middle

echo "Last name :\c"

read surname

 

If it’s LINUX then . . .

Remember to use the ‘-n’ echo option.

$ pg var_test

#!/bin/sh

# var_test

echo -n "First Name :"

read name

echo -n "Middle Name :"

read middle

echo -n "Last name :"

read surname

 

PIPES

 

$ who | awk '{print $1"\t"$2}'

matthew pts/0

louise pts/1

 

$ df -k | awk '{print $1}' | grep -v "Filesystem"

 

$ df -k | awk '{print $1}' | grep -v "Filesystem" | sed 's/\/dev\///g'

hda5

hda8

hda6

hdb5

hdb1

hda7

hda1

 

TEE

 

The tee command acts the way it is pronounced. It takes the input and sends

one copy to the standard output and another copy to a file. If you want to see

your output and save it to a file at the same time, then this is the command for

you.

The general format is:

tee [-a] files (-a appends to the files instead of overwriting them)

 

$ who | tee who.out

louise pts/1 May 20 12:58 (193.132.90.9)

matthew pts/0 May 20 10:18 (193.132.90.1)

cat who.out

louise pts/1 May 20 12:58 (193.132.90.9)

matthew pts/0 May 20 10:18 (193.132.90.1)

 

$ find etc usr/local home -depth -print | cpio -ovC65536 -O \

 

$ echo "myscript is now running, check out any errors...in myscript.log" | tee -a myscript.log

$ myscript | tee -a myscript.log

 

$ echo "stand-by disk cleanup starting in 1 minute" | tee /dev/console

$ sort myfile | tee -a accounts.log

$ myscript | tee -a accounts.log

 

 

Standard input, output and errors

 

Redirecting standard output

 

$ cat passwd | awk -F: '{print $1}' | sort 1>sort.out

$ ls -l | grep ^d >>files.out

$ ls account* >> files.out

 

Redirecting standard input

sort < names.txt

sort <names.txt > names.out

 

$ cat >> myfile <<MAYDAY

> Hello there I am using a $TERM terminal

> and my user name is $LOGNAME

> bye…

> MAYDAY

$ pg myfile

Hello there I am using a vt100 terminal

and my user name is dave

 

Redirecting standard error

 

To redirect standard errors you specify the file descriptor ‘2’. Let’s first look at an

example, since they always explain better. Here grep searches for a pattern called

‘trident’ on a file called missiles.

$ grep "trident" missiles

grep: missiles: No such file or directory

 

 

grep reports no such file, and sends its errors to the terminal which is the default.

Let’s now send all errors to a file, in fact to the system dustbin, ‘/dev/null’.

$ grep "trident" missiles 2>/dev/null

 

Combining standard output and error

 

$ cat account_qtr.doc account_end.doc 1> accounts.out 2> accounts.err

 

Merging standard output and standard error

 

When merging output and errors, the shell evaluates the command from left to

right, which is really all you need to know when putting mergers together. Here’s an

example.

$ cleanup >cleanup.out 2>&1

In the above example the script cleanup directs all output (>) to a file called

cleanup.out, and all errors (2), are directed to (>) the same place as the output

(&1), which is cleanup.out.

$ grep “standard” * > grep.out 2>&1

In the above example all output from the grep command is put into the output file

grep.out. When you use here documents, you will probably need to capture all the

output to a file in case errors are encountered. To do this you need to use 2>&1 as

part of the command. Here’s how to do it:

$ cat>> filetest 2>&1 <<MAYDAY

> This is my home $HOME directory

> MAYDAY
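One subtlety worth a demonstration: the shell processes redirections left to right, so 2>&1 must come after the file redirection if stderr is to land in the file. A sketch:

```shell
out=$(mktemp)

# Correct order: stdout is pointed at the file first, then 2>&1 copies
# stderr to the same place.
ls /no/such/path > "$out" 2>&1 || true
wc -l < "$out"       # the one-line error message landed in the file

# Reversed order: 2>&1 copies stderr to where stdout points *now* (the
# terminal), and only then is stdout moved -- the error escapes the file.
ls /no/such/path 2>&1 > "$out.rev" || true
```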

 

 

Command execution order

Using &&

 

Here’s the general format of &&:

command1 && command2

 

The operation of these commands is fairly straightforward. The command on

the left of the && (command1) must be returned true (successfully executed) if the

command on the right (command2) is to be executed; or to put it another way, ‘If

this command works’ && ‘then execute this command’.

 

$ cp justice.doc justice.bak && echo "if you are seeing this then cp was OK"

if you are seeing this then cp was OK

 

$ mv /apps/bin /apps/dev/bin && rm -r /apps/bin

In the above example, the directory /apps/bin is to be moved to /apps/dev/bin;

if this fails then the deletion of the directory /apps/bin will not happen.

 

In the following example, a file is to be sorted with the output going to a file

called quarter.sort; if this is successful, the file will be printed.

$ sort quarter_end.txt > quarter.sorted && lp quarter.sorted

 

Using ||

 

command1 || command2

The operation of the || is slightly different. If the command on the left of the ||

(command1) fails then execute the command on the right of the || (command2); or

to put it another way, ‘If this command fails’ || ‘then execute this command’.

Here’s another simple example that illustrates the use of the ||:

$ cp wopper.txt oops.txt || echo "if you are seeing this cp failed"

cp: wopper.txt: No such file or directory

if you are seeing this cp failed

The copy has failed so now the command on the right-hand side of the || is

executed.

A more practical example is this. I want to extract the first and fifth fields

from an accounts file and stick the output into a temp file. If the extraction does not

work, I want to be mailed.

$ awk '{print $1,$5}' acc.qtr > qtr.tmp || echo "Sorry the payroll extraction didn't work" | mail dave

 

You don’t always have to use system commands; here I run the script comet on a file

called month_end.txt. If the script bombs out, the shell should terminate (exit).

$ comet month_end.txt || exit
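The two operators combine naturally into a one-line success/failure report; a sketch (the file names are invented):

```shell
src=$(mktemp)

# && runs the success branch, || the failure branch.
result=$(cp "$src" "$src.bak" && echo "copy OK" || echo "copy failed")
echo "$result"

result2=$(cp /no/such/file "$src.bak2" 2>/dev/null && echo "copy OK" || echo "copy failed")
echo "$result2"
```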

 

 

Using GREP

 

Searching more than one file

If I wanted to search for the string “sort” in all my .doc files in my directory I

would do this:

$ grep "sort" *.doc

Or I could search through all the files that have the word “sort it”.

$ grep "sort it" *

 

Line matches

 

$ grep -c "48" data.f

4

 

Line numbers

 

$ grep -n "48" data.f

1:48 Dec 3BC1997 LPSX 68.00 LVX2A 138

2:483 Sept 5AP1996 USP 65.00 LVX2C 189

5:484 Nov 7PL1996 CAD 49.00 PLV2C 234

6:483 May 5PA1998 USP 37.00 KVM9D 644

 

Do not match

 

$ grep -v "48" data.f

47 Oct 3ZL1998 LPSX 43.00 KVM9D 512

219 Dec 2CC1999 CAD 23.00 PLV2C 68

216 Sept 3ZL1998 USP 86.00 KVM9E 234

 

Getting an exact match

 

$ grep '48\>' data.f

48 Dec 3BC1997 LPSX 68.00 LVX2A 138

 

Ignoring case

 

$ grep -i "sept" data.f

483 Sept 5AP1996 USP 65.00 LVX2C 189

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Pattern ranges

 

$ grep '48[34]' data.f

483 Sept 5AP1996 USP 65.00 LVX2C 189

484 nov 7PL1996 CAD 49.00 PLV2C 234

483 may 5PA1998 USP 37.00 KVM9D 644

 

Don’t match at the beginning of a line

 

$ grep '^[^48]' data.f

219 dec 2CC1999 CAD 23.00 PLV2C 68

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Trapping upper and lower cases

 

$ grep '[Ss]ept' data.f

483 Sept 5AP1996 USP 65.00 LVX2C 189

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Matching any characters

 

$ grep 'K...D' data.f

47 Oct 3ZL1998 LPSX 43.00 KVM9D 512

483 may 5PA1998 USP 37.00 KVM9D 644

 

$ grep '[A-Z][A-Z]..C' data.f

483 Sept 5AP1996 USP 65.00 LVX2C 189

219 dec 2CC1999 CAD 23.00 PLV2C 68

484 nov 7PL1996 CAD 49.00 PLV2C 234

 

Date searching

 

$ grep '5..199[68]' data.f

483 Sept 5AP1996 USP 65.00 LVX2C 189

483 may 5PA1998 USP 37.00 KVM9D 644

 

$ grep '[0-9]\{3\}[8]' data.f

47 Oct 3ZL1998 LPSX 43.00 KVM9D 512

483 may 5PA1998 USP 37.00 KVM9D 644

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Combining ranges

 

Staying with the use of the [ ] brackets to extract information, suppose we want to

get city codes where the first character could be any number, the second character

between 0 and 5 and the third between 0 and 6. If we use this pattern we’ll get the

following output:

$ grep ‘[0-9][0-5][0-6]’ data.f

48 Dec 3BC1997 LPSX 68.00 LVX2A 138

483 Sept 5AP1996 USP 65.00 LVX2C 189

47 Oct 3ZL1998 LPSX 43.00 KVM9D 512

219 dec 2CC1999 CAD 23.00 PLV2C 68

484 nov 7PL1996 CAD 49.00 PLV2C 234

483 may 5PA1998 USP 37.00 KVM9D 644

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Well, we certainly got a lot of information back. What we want is included but we

also got other records we didn’t want. However, looking at our pattern, all records

that have been returned are the correct ones according to our rules. We need to

specify that the pattern must start at the beginning of each line. We can use the ^ for

that.

$ grep ‘^[0-9][0-5][0-6]’ data.f

216 sept 3ZL1998 USP 86.00 KVM9E 234

 

Occurrences in a pattern

 

If we want to extract any row that has a number 4 repeated at least twice, we could

use this:

$ grep '4\{2,\}' data.f

483 may 5PA1998 USP 37.00 KVM9D 644

 

$ grep '9\{3,\}' data.f

219 dec 2CC1999 CAD 23.00 PLV2C 68

 

If you want to search for only so many occurrences then take the comma out.

This will search for only two occurrences of the number 9.

$ grep '9\{2\}' data.f


 

There may be a need to match between, say, two and six occurrences of a

number or maybe a letter. This will match between two and six occurrences of the

number 8 that ends with 3:

 

$ grep '8\{2,6\}3' myfile

83 – no match

888883 – match

8884 – no match

88883 – match

 

Blank lines

$ grep '^$' myfile

 

Matching special characters

 

If you wish to search for characters that have special meanings, like one of the

following, $ . ‘ ” * [ ] ^ ( ) | \ + ?, then you must place a \ in front of them. Suppose

you want to search for all lines that contain a period (.), you would do this:

$ grep '\.' myfile

 

Searching for ip addresses

Part of my job is looking after our DNS servers, which means maintaining a lot of

ip addresses that cover different networks. Our address ip file can contain over 200

addresses. Sometimes I want to look at just, say, the ‘nnn.nnn’ network addresses,

and forget about the rest that have only two digits in the second part, i.e. nnn.nn..

To extract all these nnn.nnn. addresses, use [0-9]\{3\}\.[0-9]\{3\}\. This expression is

saying any number repeated three times followed by a period, any number repeated

three times followed by a period.

$ grep '[0-9]\{3\}\.[0-9]\{3\}\.' Ipfile

 

Pattern matching with wildcards

Let’s take a look at the use of wildcards in using grep. Suppose we have a file like

this:

$ pg testfile

looks

likes

looker

long

Here is what is displayed when using the following grep pattern:

$ grep 'l.*s' testfile

looks

likes

$ grep 'l.*k.' testfile

looks

likes

$ grep 'ooo*' testfile

looks

If you want to find a word only at the end of a line, try this:

$ grep 'device$' *

That will search all files for lines that have the word ‘device’ at the end of each line.


 

If you give grep a filename that does not exist, it prints an error message saying so. If we

use the -s switch in grep we can silence these error messages.

$ grep -s "louise" /etc/password

 

Using grep on a string

grep is not only reserved for files; you can also use grep on strings. All you need to

do is echo the string then pipe it through to grep.

$ STR="Mary Joe Peter Pauline"

$ echo $STR | grep "Mary"

Mary Joe Peter Pauline

 

 

egrep

 

egrep stands for expression or extended grep, depending on whom you listen to.

egrep accepts the full range of regular expressions. One of the nice features of egrep is that you can store your strings in a file and pass them into egrep. We do this with the -f switch. If we create a file called grepstrings and then type 484 and 47 into it:

$ pg grepstrings

484

47

$ egrep -f grepstrings data.f

this would match all records with 484 or 47 in them. The -f switch really becomes

useful if you want to match a lot of patterns, and typing them in on the command

line becomes awkward.

If we want to search for store codes 3ZL or 2CC we can use the bar sign (|),

which means one or the other or both (all), separated by the bar sign.

$ egrep '(3ZL|2CC)' data.f

47 Oct 3ZL1998 LPSX 43.00 KVM9D 512

219 dec 2CC1999 CAD 23.00 PLV2C 68

216 sept 3ZL1998 USP 86.00 KVM9E 234

You can use as many bars as you want. If we wanted to see if users louise,

matty or pauline were logged into the system we could use the who command and

pipe the output through to egrep.

$ who | egrep '(louise|matty|pauline)'

louise pty8

matty tty02

pauline pty2

You can also exclude certain strings using the caret sign (^). If I wanted to see who

was on the system, but I did not want to know if users matty and pauline were on I

could do this:

$ who | egrep -v '^(matty|pauline)'

If you want to search a directory listing for files that were called shutdown,

shutdowns, reboot or reboots, this is easily accomplished with egrep.

$ egrep '(shutdown|reboot)s?' *
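On modern systems egrep survives as `grep -E`; the same alternation can be checked against some sample data:

```shell
tmp=$(mktemp)
printf 'shutdown\nshutdowns\nreboot\nreboots\nrestart\n' > "$tmp"

# s? makes the trailing s optional, so all four shutdown/reboot forms match.
grep -E '(shutdown|reboot)s?' "$tmp"
grep -cE '(shutdown|reboot)s?' "$tmp"   # 4
```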

 

USING AWK

 

$ pg grade.txt

M.Tansley 05/99 48311 Green 8 40 44

J.Lulu 06/99 48317 green 9 24 26

P.Bunny 02/99 48 Yellow 12 35 28

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

 

Printing all records

$ awk '{print $0}' grade.txt

 

Printing individual records

 

$ awk '{print $1,$4}' grade.txt

 

Printing report headers

 

$ awk 'BEGIN {print "Name Belt\n--------------------------------"}

{print $1"\t"$4}' grade.txt

Name Belt

--------------------------------

M.Tansley Green

J.Lulu green

P.Bunny Yellow

J.Troll Brown-3

L.Tansley Brown-2

 

Printing report trailers

$ awk 'BEGIN {print "Name\n-------"} {print $1} END {print "end-of-report"}'

grade.txt

 

Matching

 

$ awk '{if($4~/Brown/) print $0}' grade.txt

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

$ awk '$0 ~ /Brown/' grade.txt

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

Exact match

Note that $3~/48/ matches any third field containing 48; for a true exact match use $3=="48" instead, which would return only the P.Bunny record.

$ awk '{if($3~/48/) print $0}' grade.txt

M.Tansley 05/99 48311 Green 8 40 44

J.Lulu 06/99 48317 green 9 24 26

P.Bunny 02/99 48 Yellow 12 35 28

J.Troll 07/99 4842 Brown-3 12 26 26

 

Not matched

 

$ awk '$0 !~ /Brown/' grade.txt

M.Tansley 05/99 48311 Green 8 40 44

J.Lulu 06/99 48317 green 9 24 26

P.Bunny 02/99 48 Yellow 12 35 28

We could have targeted just the belt grade field ‘field-4’ and done the test this way:

$ awk '{if($4!~/Brown/) print $0}' grade.txt

M.Tansley 05/99 48311 Green 8 40 44

J.Lulu 06/99 48317 green 9 24 26

P.Bunny 02/99 48 Yellow 12 35 28

 

Less than

 

$ awk '{if ($6 < $7) print $1" Try better at the next comp"}' grade.txt

M.Tansley Try better at the next comp

J.Lulu Try better at the next comp

 

Any characters

$ awk '$1~/^...a/' grade.txt

M.Tansley 05/99 48311 Green 8 40 44

L.Tansley 05/99 4712 Brown-2 12 30 28

 

Match either

 

$ awk '$0~/(Yellow|Brown)/' grade.txt

P.Bunny 02/99 48 Yellow 12 35 28

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

AND

$ awk '{if ($1=="P.Bunny" && $4=="Yellow") print $0}' grade.txt

P.Bunny 02/99 48 Yellow 12 35 28

 

OR

$ awk '{if ($4=="Yellow" || $4~/Brown/) print $0}' grade.txt

P.Bunny 02/99 48 Yellow 12 35 28

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

awk built-in variables

 

ARGC The number of command-line arguments

ARGV The array of command-line arguments

ENVIRON Holds the current system environment variables in the array

FILENAME The name of the current file awk is scanning

FNR The record number in the current file

FS Sets the input field separator; same as the command-line -F option

NF Number of fields in the current record

NR The number of records read so far

OFS Output field separator

ORS Output record separator

RS Controls the record separator

 

NF, NR and FILENAME

 

$ awk '{print NF,NR,$0} END {print FILENAME}' grade.txt

7 1 M.Tansley 05/99 48311 Green 8 40 44

7 2 J.Lulu 06/99 48317 green 9 24 26

7 3 P.Bunny 02/99 48 Yellow 12 35 28

7 4 J.Troll 07/99 4842 Brown-3 12 26 26

7 5 L.Tansley 05/99 4712 Brown-2 12 30 28

grade.txt

 

$ awk '{if (NR > 0 && $4~/Brown/) print $0}' grade.txt

J.Troll 07/99 4842 Brown-3 12 26 26

L.Tansley 05/99 4712 Brown-2 12 30 28

 

Assigning input fields to field variable names

 

$ awk '{name=$1;belts=$4; if(belts~/Yellow/) print name" is belt "belts}' grade.txt

P.Bunny is belt Yellow

 

Comparing fields with values

 

$ awk '{if($6 < 27) print $0}' grade.txt

J.Lulu 06/99 48317 green 9 24 26

J.Troll 07/99 4842 Brown-3 12 26 26

 

 

$ awk 'BEGIN {BASELINE="27"} {if($6 < BASELINE) print $0}' grade.txt

J.Lulu 06/99 48317 green 9 24 26

J.Troll 07/99 4842 Brown-3 12 26 26

 

SED COMMAND

 

[root@debasis cls]# cat deb

debasis 1

debraj  1

debajyoti 1

debasis 1

 

[root@debasis cls]# sed 's/debasis/deb/g' deb

deb 1

debraj  1

debajyoti 1

deb 1

 

[root@debasis cls]# sed 's/[^ ]*/M&/' deb

Mdebasis 1

Mdebraj  1

Mdebajyoti 1

Mdebasis 1

 

[root@debasis cls]# sed 's/[^ ]*/&M/' deb

debasisM 1

debrajM  1

debajyotiM 1

debasisM 1

 


 

[root@debasis cls]# sed '/debasis/ s/1/2/' deb

debasis 2

debraj  1

debajyoti 1

debasis 2

 

[root@debasis cls]# sed '1 s/1/2/' deb

debasis 2

debraj  1

debajyoti 1

debasis 1

 

[root@debasis cls]# sed '1,2 s/1/2/' deb

debasis 2

debraj  2

debajyoti 1

debasis 1

 

[root@debasis cls]# sed -e 's/debasis/deb/' -e 's/debraj/deba/' deb

deb 1

deba  1

debajyoti 1

deb 1

 

[cls@debasis cls]$ sed '

> /debraj/ s/1/2/

> /debajyoti/ s/1/3/' deb

debasis 1

debraj  2

debajyoti 3

debasis 1

 

[cls@debasis cls]$ sed '/debraj/ d' deb

debasis 1

debajyoti 1

debasis 1

 

[cls@debasis cls]$ sed '1,3 d' deb

 

[cls@debasis cls]$ sed '/^deb/ d' deb

 

[cls@debasis cls]$ sed '/1$/ d' deb

 

[cls@debasis cls]$ sed '/debasis/ s/1/2/;/debraj/ d' deb

debasis 2

debajyoti 1

debasis 2

 

[cls@debasis cls]$ sed '$a\

> stop!!!\

> I am lost' deb

debasis 1

debraj  1

debajyoti 1

debasis 1

stop!!!

I am lost

 

[cls@debasis cls]$ sed '3a\

stop!!!\

I am lost' deb

debasis 1

debraj  1

debajyoti 1

stop!!!

I am lost

debasis 1

 

[cls@debasis cls]$ sed '3i\

stop!!!\

I am lost' deb

debasis 1

debraj  1

stop!!!

I am lost

debajyoti 1

debasis 1

 

[cls@debasis cls]$ sed '/debasis/ c\

> no use' deb

no use

debraj  1

debajyoti 1

no use
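The substitute, delete, append, and change operations above can be chained with -e in a single pass. A minimal sketch that recreates the deb file inline and applies two of the edits shown earlier:

```shell
# Recreate the deb sample file used above
printf 'debasis 1\ndebraj 1\ndebajyoti 1\ndebasis 1\n' > /tmp/deb_demo

# Two edits in one invocation: substitute on matching lines, delete others
sed -e '/debasis/ s/1/2/' -e '/debraj/ d' /tmp/deb_demo
```

The debasis lines come out with 2, and the debraj line is removed, matching the separate examples above.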

 

 

 ENVIRONMENT AND SHELL

 

Local variables

variable_name=value (no spaces around the = sign, and no leading $ when assigning)

 

Displaying a variable

 

$ GREAT_PICTURE="die hard"

$ echo ${GREAT_PICTURE}

die hard

$ DOLLAR=99

$ echo ${DOLLAR}

99

$ LAST_FILE=ZLPSO.txt

$ echo ${LAST_FILE}

ZLPSO.txt

 

Clearing a variable

 

unset variable_name

 

$ PC=enterprise

$ echo ${PC}

enterprise

$ unset PC

$ echo ${PC}

 

Displaying all local shell variables

 

$ set

PWD=/root

SHELL=/bin/sh

SHLVL=1

TERM=vt100

UID=7

USER=dave

DOLLAR=99

GREAT_PICTURE='die hard'

LAST_FILE=ZLPSO.txt

 

Using variables to hold arguments for system commands

 

$ SOURCE="/etc/passwd"

$ DEST="/tmp/passwd.bak"

$ cp ${SOURCE} ${DEST}
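The same pattern works inside a script, and quoting the expansions keeps paths with spaces safe. A minimal sketch using temporary files instead of /etc/passwd (the file names are generated by mktemp, not taken from the text above):

```shell
# Hold command arguments in variables, then use them with cp
SOURCE=$(mktemp)           # temporary source file
DEST=$(mktemp -u)          # temporary destination path (not yet created)
echo "hello" > "$SOURCE"
cp "$SOURCE" "$DEST"       # quoting protects paths containing spaces
cat "$DEST"
```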

 

Making a variable read-only

 

$ TAPE_DEV="/dev/rmt/0n"

$ echo ${TAPE_DEV}

/dev/rmt/0n

$ readonly TAPE_DEV

$ TAPE_DEV="/dev/rmt/1n"

sh: TAPE_DEV: read-only variable

 SHELL SCRIPTING

 

File status tests

-d This is a directory

-f This is a regular file

-L This is a symbolic link

-r This file is readable

-s This file has a size greater than zero, not empty

-w This file is writeable

-u This file has the suid bit set

-x This is executable

 

We will use both test methods to check whether the file scores.txt is writable, and

the last-status variable ($?) to inspect the result. Remember a zero status is OK; anything else is

an error.

$ ls -l scores.txt

-rw-r--r-- 1 dave admin 0 May 15 11:29 scores.txt

$ [ -w scores.txt ]

$ echo $?

0

$ test -w scores.txt

$ echo $?

0
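Several file status tests can be combined to drive a branch. A minimal sketch; the file name is created here for illustration and is not part of the examples above:

```shell
# Create a demo file, then branch on its attributes
FILE=/tmp/scores_demo.txt
touch "$FILE"

# -f: regular file, -w: writable (both from the table above)
if [ -f "$FILE" ] && [ -w "$FILE" ]; then
    echo "regular writable file"
fi
```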

 

cutting columns(-c)

 

cut -c 1-20 deb (show characters 1 through 20 of each line)

cut -c1 /etc/passwd
get the first character from every line

cut -c1,5,10-20 /etc/passwd
get the first, fifth character and every character between 10 and 20

cut -d: -f2 /etc/passwd
get the second field

cut -d: -f3- /etc/passwd
get all fields from the third on

cut -f1,3- telnos > deb

cut -d' ' -f2- file (delimiter as space)

cut -f1,3 telnos address > telnos.all (selecting particular columns from two files and writing them to one)

colrm 8 12 < twinkle (deleting columns 8 through 12)

 

pasting columns

 

paste abc efg >> abcefg

paste -d: abc efg >> abcefg

paste telnos - > telnos.new

 

using cut and paste to reorganize a file

 

cut -f1,3 telnos > temp

cut -f4- telnos > temp2

cut -f2 telnos | paste temp - temp2 > telnos.new
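The reorganization above can be sketched with an inline tab-separated file (the data is invented; the `-` in paste stands for the piped standard input):

```shell
# Sample tab-separated file with three columns
printf 'a1\ta2\ta3\nb1\tb2\tb3\n' > /tmp/telnos_demo

# Keep columns 1 and 3 together, then append column 2 at the end
cut -f1,3 /tmp/telnos_demo > /tmp/temp1
cut -f2 /tmp/telnos_demo | paste /tmp/temp1 - > /tmp/telnos_demo.new
cat /tmp/telnos_demo.new
```

Each output line now reads column1, column3, column2.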

 

 

Using logical operators with tests

 

-a Logical AND, true, if both sides of the operator are true

-o Logical OR, true, if either side of the operator is true

! Logical NOT, true, if the condition is false

 

-rw-r--r-- 1 root root 0 May 15 11:29 scores.txt

-rwxr-xr-- 1 root root 0 May 15 11:49 results.txt

The following example tests whether both files are writable.

$ [ -w results.txt -a -w scores.txt ]

$ echo $?

0
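The -o and ! operators work the same way inside the brackets. A minimal sketch; the demo files are created here rather than taken from the listing above:

```shell
# Create two readable demo files
touch /tmp/r_demo /tmp/s_demo

# -a: both conditions must hold
[ -r /tmp/r_demo -a -r /tmp/s_demo ]
echo "AND status: $?"

# !: true because the file does not exist
[ ! -r /tmp/nonexistent_demo_xyz ]
echo "NOT status: $?"
```

Both tests succeed, so each echo prints a status of 0.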

 

Testing strings

 

= The two strings are equal

!= The two strings are not equal

-z This string is null

-n This string is not null

 

To test if the environment variable EDITOR is empty:

$ [ -z "$EDITOR" ]

$ echo $?

1

No, it isn’t. Is it set to vi?

$ [ "$EDITOR" = "vi" ]

$ echo $?

0
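Quoting the variable inside the brackets matters: if EDITOR is unset, an unquoted $EDITOR expands to nothing and the test expression falls apart. A minimal sketch:

```shell
# With the variable unset, the quoted -z test still behaves correctly
unset EDITOR
[ -z "$EDITOR" ] && echo "EDITOR is empty"

# After assignment, the quoted equality test matches
EDITOR=vi
[ "$EDITOR" = "vi" ] && echo "EDITOR is vi"
```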

 

Testing numbers

 

-eq The two numbers are equal

-ne The two numbers are not equal

-gt The first number is greater than the second number

-lt The first number is less than the second number

-le The first number is less than or equal to the second number

-ge The first number is greater than or equal to the second number

 

$ NUMBER=130

$ [ "$NUMBER" -eq "130" ]

$ echo $?

0
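The numeric operators combine naturally in a branch. A minimal sketch; the values are arbitrary:

```shell
# -ge and -ne from the table above, joined with &&
NUMBER=130
if [ "$NUMBER" -ge 100 ] && [ "$NUMBER" -ne 200 ]; then
    echo "in range"
fi
```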

 

Using expr

 

$ expr 900 + 600

1500

$ expr 30 / 3

10

$ expr 30 / 3 / 2

5

 

Incrementing counters

 

expr does the incrementing of values when using loops. First the variable LOOP is initialized

to zero; then one is added to it on each pass. The backquotes mean

command substitution, which basically means take the output from the command

(expr) and put it in the variable LOOP.

$ LOOP=0

$ LOOP=`expr $LOOP + 1`
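Putting the increment inside a while loop gives the classic counter pattern. A minimal sketch (the loop bound of 3 is arbitrary):

```shell
# A counter loop built on expr, as described above
LOOP=0
while [ "$LOOP" -lt 3 ]
do
    LOOP=`expr $LOOP + 1`   # command substitution stores expr's output
    echo "pass $LOOP"
done
```

This prints pass 1, pass 2, and pass 3, one per line.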

 

 

Pattern matching

 

$ VALUE=accounts.doc

$ expr $VALUE : '.*'

12

 

Control flow structures

 

$ pg iftest

#!/bin/sh

# iftest

# this is a comment line, all comment lines start with a #

if [ "10" -lt "12" ]

then

# yes 10 is less than 12

echo "Yes, 10 is less than 12"

fi

 

Testing values of variables

 

$ pg iftest2

#!/bin/sh

# if test2

echo -n "Enter your name :"

read NAME

# did the user just hit return ????

if [ "$NAME" = "" ] ; then

echo "You did not enter any information"

fi
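An else branch completes the pattern. A minimal sketch; NAME is assigned here instead of being read from the terminal so the script is self-contained:

```shell
# Variant of iftest2 with an else branch; empty NAME simulates hitting return
NAME=""
if [ "$NAME" = "" ]; then
    echo "You did not enter any information"
else
    echo "Hello $NAME"
fi
```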

 

 

 

 

 

 

Basic oracle database creation and maintenance in windows and Linux

 

DB CREATION STEPS IN WINDOWS

SETTING AN ENVIRONMENT(FOR WINDOWS)

ORADIM -new -sid <new_sid> -intpwd oracle -startmode AUTO -pfile c:\oracle\admin\<new_sid>\pfile\init<new_sid>.ora

 

PASSWORD FILE MANAGEMENT

 

Check in the initclone.ora whether REMOTE_LOGIN_PASSWORDFILE is set to exclusive or shared. If it is set, then a valid password file should exist in ORACLE_HOME/dbs, or be created using orapwd as

orapwd file=/u01/oracle/V816/dbs/orapwV722 password=oracle entries=1

 STARTING UP WITH NOMOUNT OPTION

SQL>STARTUP NOMOUNT PFILE='c:\oracle\admin\mbb\pfile\init.ora';

THE CREATE DATABASE COMMAND in Windows NT:

SQL>CREATE DATABASE "mbb"
CONTROLFILE REUSE
LOGFILE GROUP 1
('c:/oracle/oradata/mbb/redo01.log')
SIZE 5M REUSE,
GROUP 2
('c:/oracle/oradata/mbb/redo02.log') SIZE 5M REUSE
MAXLOGFILES 4
MAXLOGMEMBERS 2
MAXLOGHISTORY 0
MAXDATAFILES 254
MAXINSTANCES 1
NOARCHIVELOG
CHARACTER SET "WE8MSWIN1252"
NATIONAL CHARACTER SET "AL16UTF16"
DATAFILE 'c:/oracle/oradata/mbb/system01.dbf' SIZE 80M
AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED
UNDO TABLESPACE UNDOTBS
DATAFILE 'c:/oracle/oradata/mbb/undo01.dbf' SIZE 35M
DEFAULT TEMPORARY TABLESPACE TEMP
TEMPFILE 'c:/oracle/oradata/mbb/temp01.dbf' SIZE 20M;

 

DB CREATION STEPS FOR LINUX

The Instance, the Database, and the Data Dictionary
==================================================

An Oracle server is an instance and a database; the two are separate, but connected. The instance is memory structures and processes, in your RAM and on your CPU(s); its existence is transient; it can be started and stopped. The database is files on disk; once created, it persists until it is deleted. Creating an instance is nothing more than building the memory structures and starting the processes. Creating a database is done by the instance as a once-off operation, and the instance can then open and close it many times subsequently. The database is worthless without the instance.

Within the database there is a set of tables and other segments called the data dictionary. The data dictionary describes all the logical and physical structures in the database, including all the segments that store user data.

The process of database creation is creating the bare minimum of physical structures needed to store the data dictionary, and then creating the data dictionary within them.

An instance is defined by an instance parameter file. The parameter file contains directives that define how the instance should be built in memory: the size of the memory structures, the behavior of the background processes. After building the instance, it is said to be in no mount mode. In no mount mode, the instance exists but has not connected to a database. Indeed, the database may not even exist at this point.

All parameters, either specified by the parameter file or implied, have defaults, except for one: the parameter DB_NAME. The DB_NAME parameter names the database to which the instance will connect. This name is also embedded in the controlfile. There is one parameter, CONTROL_FILES, that tells the instance the location of the controlfile. This parameter defines the connection between the instance and the database. When the instance reads the controlfile (which it will find by reading the CONTROL_FILES parameter), if there is a mismatch in database names, the database will not mount. In mount mode, the instance has successfully connected to the controlfile. If the controlfile is damaged or non-existent, it will be impossible to mount the database. The controlfile is small, but vital.

Within the controlfile, there are pointers to the other files (the online redo log files and the datafiles) that make up the rest of the database. Having mounted the database, the instance can open the database by locating and opening these other files. An open database is a database where the instance has opened all the available online redo log files and datafiles. Also within the controlfile, there is a mapping of datafiles to tablespaces. This lets the instance identify the datafile(s) that make(s) up the SYSTEM tablespace. In the SYSTEM tablespace, it will find the data dictionary. The data dictionary lets the instance resolve references to objects referred to in SQL code to the segments in which they reside, and work out where, physically, the objects are.

The creation of a database server must therefore involve these steps:

Create the instance.

Create the database.

Create the data dictionary.

In practice, the steps are divided slightly differently:

Create the instance.

Create the database and the data dictionary objects.

Create the data dictionary views.

The data dictionary as initially created with the database is fully functional but unusable. It has the capability for defining and managing user data but cannot be used by normal human beings because its structure is too abstruse. Before users (or DBAs) can actually use the database, a set of views must be created on top of the data dictionary that will present it in a human-understandable form.

The data dictionary itself is created by running a set of SQL scripts that exist in the ORACLE_HOME/rdbms/admin directory. These are called by the CREATE DATABASE command. The first is sql.bsq, which then calls several other scripts. These scripts issue a series of commands that create all the tables and other objects that make up the data dictionary.

The views and other objects that make the database usable are generated with more scripts in the ORACLE_HOME/rdbms/admin directory, prefixed with “cat”. Examples of these are catalog.sql and catproc.sql, which should always be run immediately after database creation. There are many other optional “cat” scripts that will enable certain features—some of these can be run at creation time; others might be run subsequently to install the features at a later date.

DB Installation in silent mode
==============================

nohup ./runInstaller -silent -force \
FROM_LOCATION=/home/oracle/database/stage/products.xml \
oracle.install.option=INSTALL_DB_SWONLY \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/opt/oracle/oraInventory \
ORACLE_HOME=/opt/oracle/11g/product/11.2.2 \
ORACLE_HOME_NAME="OraDb11g22_Home1" \
ORACLE_BASE=/opt/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.isCustomInstall=false \
oracle.install.db.DBA_GROUP=oinstall \
oracle.install.db.OPER_GROUP=oinstall \
DECLINE_SECURITY_UPDATES=true &

 

Using the DBCA to Create a Database
===================================

export DISPLAY=10.10.10.65:0.0

To launch the DBCA on Linux, first set the environment variables that should always be set for any Linux DBA session: ORACLE_BASE, ORACLE_HOME, PATH, and LD_LIBRARY_PATH. This is an example of a script that will do this:

export ORACLE_BASE=/u02/app/db11g
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

These are the steps to follow to create a database:

Create a parameter file and (optionally) a password file.

Use the parameter file to build an instance in memory.

Issue the CREATE DATABASE command. This will generate, as a minimum, a controlfile, two online redo log files, two datafiles for the SYSTEM and SYSAUX tablespaces, and a data dictionary.

Run SQL scripts to generate the data dictionary views and the supplied PL/SQL packages.

Run SQL scripts to generate the Enterprise Manager Database Control, and any options (such as Java) that the database will require.

On Windows systems, there is an additional step because Oracle runs as a Windows service. Oracle provides a utility, oradim.exe, to assist you in creating this service.

SETTING AN ENVIRONMENT(FOR LINUX)

 

% setenv ORACLE_SID mynewdb

(A) Change the working directory to /etc and edit the oratab file to add the entry for the CLONE instance.

(B) Setup login profile for the Oracle user having dba group.

ORACLE_SID=CLONE

export ORACLE_SID

Or run: . oraenv

The CREATE DATABASE Command
===========================

This is an example of the CreateDB.sql script:

connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool D:\oracle\app\admin\ocp11g\scripts\CreateDB.log
startup nomount pfile="D:\oracle\app\admin\ocp11g\scripts\init.ora";
CREATE DATABASE "ocp11g"
MAXINSTANCES 8
MAXLOGHISTORY 1
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
DATAFILE 'D:\oracle\app\oradata\ocp11g\system01.dbf'
SIZE 300M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE 'D:\oracle\app\oradata\ocp11g\sysaux01.dbf'
SIZE 120M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED
SMALLFILE DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE
'D:\oracle\app\oradata\ocp11g\temp01.dbf' SIZE 20M REUSE
AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
SMALLFILE UNDO TABLESPACE "UNDOTBS1" DATAFILE
'D:\oracle\app\oradata\ocp11g\undotbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON NEXT 5120K MAXSIZE UNLIMITED
CHARACTER SET WE8MSWIN1252
NATIONAL CHARACTER SET AL16UTF16
LOGFILE GROUP 1 ('D:\oracle\app\oradata\ocp11g\redo01.log') SIZE 51200K,
GROUP 2 ('D:\oracle\app\oradata\ocp11g\redo02.log') SIZE 51200K,
GROUP 3 ('D:\oracle\app\oradata\ocp11g\redo03.log') SIZE 51200K
USER SYS IDENTIFIED BY "&&sysPassword"
USER SYSTEM IDENTIFIED BY "&&systemPassword";
spool off

CREATE DATABASE USING DBCA SILENTLY

================================

dbca -silent \
-createDatabase \
-templateName /u01/app/oracle/product/12.1.0/db_1/assistants/dbca/templates/General_Purpose.dbc \
-gdbName EEOCMSC.tdenopcl.internal \
-sid EEOCMSC \
-SysPassword 3ricss0n \
-SystemPassword 3ricss0n \
-emConfiguration NONE \
-redoLogFileSize 100 \
-recoveryAreaDestination FRA \
-storageType ASM \
-asmSysPassword 3ricss0n \
-diskGroupName DATA \
-characterSet AL32UTF8 \
-nationalCharacterSet AL16UTF16 \
-totalMemory 4072 \
-databaseType MULTIPURPOSE

EMCTL start
===========

To start Database Control, use the emctl utility. This will be located in the ORACLE_HOME/bin directory. The three commands to start or stop Database Control and to check its status are

emctl start dbconsole
emctl stop dbconsole
emctl status dbconsole

The ORACLE_HOME and ORACLE_SID are needed so that emctl can find the Database Control configuration files. These are in three places: the directory ORACLE_HOME/sysman/config has general configuration directives that will apply to all Database Control instances running from the Oracle Home (one per database), and also ORACLE_HOME/hostname_sid/sysman/config and a similarly named directory beneath it.

To identify the port, you can use emctl. As shown in Figure 5–1, the output of emctl status dbconsole shows the port on which Database Control should be running. Alternatively, you can look in the file ORACLE_HOME/install/portlist.ini, which lists all the ports configured by the OUI and DBCA.

DB shutdown and startup
=======================

So use of the SYSDBA privilege logs you on to the instance as user SYS, the most powerful user in the database and the owner of the data dictionary. Use of the SYSOPER privilege connects you as a user PUBLIC. PUBLIC is not a user in any normal sense—he/she is a notional user with administration privileges, but (by default) with no privileges that let him/her see or manipulate data. You should connect with either of these privileges only when you need to carry out procedures that no normal user can do.

When the database is SHUTDOWN, all files are closed and the instance does not exist. In NOMOUNT mode, the instance has been built in memory (the SGA has been created and the background processes started, according to whatever is specified in its parameter file), but no connection has been made to a database. It is indeed possible that the database does not yet exist. In MOUNT mode, the instance locates and reads the database control file. In OPEN mode, all database files are located and opened and the database is made available for use by end users. The startup process is staged: whenever you issue a startup command, it will go through these stages. It is possible to stop the startup part way. For example, if your control file is damaged, or a multiplexed copy is missing, you will not be able to mount the database, but by stopping in NOMOUNT mode you may be able to repair the damage. Similarly, if there are problems with any datafiles or redo log files, you may be able to repair them in MOUNT mode before transitioning the database to OPEN mode.

At any stage, how does the instance find the files it needs, and exactly what happens? Start with NOMOUNT. When you issue a startup command, Oracle will attempt to locate a parameter file. There are three default filenames. On Unix they are

$ORACLE_HOME/dbs/spfileSID.ora
$ORACLE_HOME/dbs/spfile.ora
$ORACLE_HOME/dbs/initSID.ora
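The search order above can be sketched as a small shell loop: try each default name in turn and stop at the first file that exists. The ORACLE_HOME path and SID below are invented for illustration, not taken from any real installation:

```shell
# Hypothetical environment for illustration only
ORACLE_HOME=/u01/app/oracle/product/db_1
ORACLE_SID=demo

# Check the three default parameter file names in priority order
for f in "$ORACLE_HOME/dbs/spfile${ORACLE_SID}.ora" \
         "$ORACLE_HOME/dbs/spfile.ora" \
         "$ORACLE_HOME/dbs/init${ORACLE_SID}.ora"
do
    [ -f "$f" ] && { echo "startup would use $f"; break; }
done
echo "search finished"
```

With no such files on disk, the loop falls through and only the final message is printed; a real startup with no parameter file found would fail instead.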

Transactional No new user connections are permitted; existing sessions that are not in a transaction will be terminated; sessions currently in a transaction are allowed to complete the transaction and will then be terminated. Once all sessions are terminated, the database will shut down.

Immediate No new sessions are permitted, and all currently connected sessions are terminated. Any active transactions are rolled back, and the database will then shut down.

Abort As far as Oracle is concerned, this is the equivalent of a power cut. The instance terminates immediately. Nothing is written to disk, no file handles are closed, and there is no attempt to terminate transactions that may be in progress in any orderly fashion.

Shutdown and startup procedure
==============================

Use SQL*Plus to start an instance and open a database, then Database Control to shut it down. If the database is already open, do this in the other order. Note that if you are working on Windows, the Windows service for the database must be running. It will have a name of the form OracleServiceSID, where SID is the name of the instance.

Log on to the computer as a member of the operating system group that owns the ORACLE_HOME, and set the environment variables appropriately for ORACLE_HOME and PATH and ORACLE_SID, as described in Chapter 4.

Check the status of the database listener, and start it if necessary. From an operating system prompt:

lsnrctl status
lsnrctl start

Check the status of the Database Control console, and start it if necessary. From an operating system prompt:

emctl status dbconsole
emctl start dbconsole

Launch SQL*Plus, using the /nolog switch to prevent an immediate logon prompt:

sqlplus /nolog

Connect as SYS with operating system authentication:

connect / as sysdba

Start the instance only:

startup nomount;

Mount the database:

alter database mount;

Open the database:

alter database open;

Confirm that the database is open by querying a data dictionary view:

select count(*) from dba_data_files;

Parameter files
===============

select name,value from v$parameter order by name;
select name,value from v$spparameter order by name;

create spfile [='spfilename'] from pfile [='pfilename'];
create pfile [='pfilename'] from spfile [='spfilename'];

select s.name,s.value
from v$spparameter s join v$parameter p on s.name=p.name
where p.isbasic='TRUE' order by name;

SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"
FROM sys.v_$parameter WHERE name = 'spfile';

You can also use the V$SPPARAMETER view to check if you are using a PFILE or not: if the “value” column is NULL for all parameters, you are using a PFILE.

SQL> ALTER SYSTEM RESET timed_statistics SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET timed_statistics = TRUE SCOPE=SPFILE;

ISDEFAULT VARCHAR2(9) Indicates whether the parameter is set to the default value (TRUE) or the parameter value was specified in the parameter file (FALSE)

ISSES_MODIFIABLE VARCHAR2(5) Indicates whether the parameter can be changed with ALTER SESSION (TRUE) or not (FALSE)

ISSYS_MODIFIABLE VARCHAR2(9) Indicates whether the parameter can be changed with ALTER SYSTEM and when the change takes effect:
•IMMEDIATE – Parameter can be changed with ALTER SYSTEM regardless of the type of parameter file used to start the instance. The change takes effect immediately.
•DEFERRED – Parameter can be changed with ALTER SYSTEM regardless of the type of parameter file used to start the instance. The change takes effect in subsequent sessions.
•FALSE – Parameter cannot be changed with ALTER SYSTEM unless a server parameter file was used to start the instance. The change takes effect in subsequent instances.

ISINSTANCE_MODIFIABLE VARCHAR2(5) For parameters that can be changed with ALTER SYSTEM, indicates whether the value of the parameter can be different for every instance (TRUE) or whether the parameter must have the same value for all Real Application Clusters instances (FALSE). If the ISSYS_MODIFIABLE column is FALSE, then this column is always FALSE.

ISMODIFIED VARCHAR2(10) Indicates whether the parameter has been modified after instance startup:
•MODIFIED – Parameter has been modified with ALTER SESSION
•SYSTEM_MOD – Parameter has been modified with ALTER SYSTEM (which causes all the currently logged in sessions’ values to be modified)
•FALSE – Parameter has not been modified after instance startup

ISADJUSTED VARCHAR2(5) Indicates whether Oracle adjusted the input value to a more suitable value (for example, the parameter value should be prime, but the user input a non-prime number, so Oracle adjusted the value to the next prime number)

Viewing Information About the Database
======================================

DATABASE_PROPERTIES Displays permanent database properties
GLOBAL_NAME Displays the global database name
V$DATABASE Contains database information from the control file