PostgreSQL DBA part 2 :- How to connect to a PostgreSQL database

A. Connecting to PostgreSQL locally using a port

Identify the port in postgresql.conf under /opt/edb/as9.6/data and connect using psql.
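A minimal sketch of checking the configured port (assuming the EDB Advanced Server data directory above; the default port is 5444):

grep -n "port = " /opt/edb/as9.6/data/postgresql.conf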

There are prerequisites for allowing a local connection. The pg_hba.conf file is under /opt/edb/as9.6/data. Make sure the last field (the authentication method) is md5, not peer.

The content below is an example of what should be in pg_hba.conf:

# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5
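After editing pg_hba.conf, reload the configuration so the new rule takes effect. A minimal sketch, assuming the binary and data directory locations used above:

/opt/edb/as9.6/bin/pg_ctl -D /opt/edb/as9.6/data reload

(Alternatively, run select pg_reload_conf(); from an existing superuser session.)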

cd /opt/edb/as9.6/bin

-bash-4.1$ ./psql -p 5444 edb enterprisedb
 psql.bin (9.6.2.7)
 Type "help" for help.
Here the port is 5444, the database name is edb, and the connecting user is enterprisedb (in this example the enterprisedb user's password is also enterprisedb).

B. Allowing remote connections

  1. As a super user, open /etc/postgresql/9.6/main/pg_hba.conf (Ubuntu) or /var/lib/pgsql/9.6/data/pg_hba.conf (Red Hat) in a text editor.

  2. Scroll down to the connection rules. The local socket line may look like this:

    local   all             all                                      peer

  3. To allow remote connections, add (or change the relevant host line to):

    host    all             all             0.0.0.0/0               trust

    (Use md5 instead of trust if you want clients to authenticate with a password.)

  4. Save and close the file.

  5. In the same directory, open postgresql.conf.

Under the section on Connection Settings, add or replace the line that starts with listen_addresses to respond to all requests:

listen_addresses = '*'

Restart the PostgreSQL service now.
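A minimal sketch of the restart; the exact service name depends on your packaging (these are assumptions, adjust to your install):

sudo systemctl restart postgresql        # Ubuntu/Debian community packages
sudo systemctl restart postgresql-9.6    # Red Hat community packages
sudo systemctl restart edb-as-9.6        # EDB Advanced Server 9.6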

C. Use the psql command from a client system as follows (before that, you need to set up pg_hba.conf as above to allow remote connections)

psql -h PostgreSQL-IP-ADDRESS -U USERNAME -d DATABASENAME

To connect to the remote server at IP address 192.168.1.5 as the debasis user and open the edb database, use:
$ psql -h 192.168.1.5 -U debasis -d edb
Where,

  • -h 192.168.1.5 : Specifies the host name of the machine or IP address (192.168.1.5) on which the server is running.
  • -U debasis : Connect to the database as the debasis user instead of the default. You must have an account and permission to connect as the debasis user.
  • -d edb : Specifies the name of the database (edb) to connect to.

D. Connect using edb-psql

cd /opt/edb/as9.6/bin

-bash-4.1$ ./edb-psql edb
psql.bin (9.6.2.7)
Type "help" for help.

edb=#

Here edb is the database name.

DOCKER CONCEPT FOR DBA – PART 10 : How to secure a Docker private registry using an OpenSSL certificate and key

This document explains how to secure a private registry using an OpenSSL key and certificate, plus basic user authentication.

Before we can use a private repository, we will need to secure it and offer user authentication. Let’s create a self-signed certificate, use the ‘registry’ container by Docker to create basic user authentication, and then copy the files where they need to go on the hosting server.

root@deb-VirtualBox:~# mkdir certs
root@deb-VirtualBox:~# mkdir auth

Make an entry in /etc/hosts with your host IP:

169.254.156.101 myregistrydomain.com

Now let me create the directory where the registry's CA certificate will reside (the directory name must match the registry hostname and port):

root@deb-VirtualBox:~# mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000

Please install openssl if it is not installed

root@deb-VirtualBox:~/certs# apt-get install openssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
openssl is already the newest version (1.0.2g-1ubuntu13.5).
0 upgraded, 0 newly installed, 0 to remove and 113 not upgraded.

Now let me create a self-signed certificate and pull the registry Docker image.

root@deb-VirtualBox:~/certs# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /root/certs/dockerrepo.key -x509 -days 365 -out /root/certs/dockerrepo.crt -subj /CN=myregistrydomain.com
Generating a 4096 bit RSA private key
................................................................................................................................................................++
.....................................................................................................................................................................................................++
writing new private key to '/root/certs/dockerrepo.key'


root@deb-VirtualBox:~/certs# cd /etc/docker/certs.d/myregistrydomain.com:5000

root@deb-VirtualBox:/etc/docker/certs.d/myregistrydomain.com:5000# cp /root/certs/dockerrepo.crt ca.crt


root@deb-VirtualBox:/etc/docker/certs.d/myregistrydomain.com:5000# docker pull registry:2
2: Pulling from library/registry
81033e7c1d6a: Pull complete
b235084c2315: Pull complete
c692f3a6894b: Pull complete
ba2177f3a70e: Pull complete
a8d793620947: Pull complete
Digest: sha256:672d519d7fd7bbc7a448d17956ebeefe225d5eb27509d8dc5ce67ecb4a0bce54
Status: Downloaded newer image for registry:2

Let me create a user and password for basic authentication.

cd /root

root@deb-VirtualBox:~# docker run --entrypoint htpasswd registry:2 -Bbn test password > auth/htpasswd

Now that we have the security work done for our private registry, we can deploy and configure it for use. We will test it locally, and then log in and test via a remote system.

root@deb-VirtualBox:~# docker run -d -p 5000:5000 -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/dockerrepo.crt -e REGISTRY_HTTP_TLS_KEY=/certs/dockerrepo.key -v `pwd`/auth:/auth -e REGISTRY_AUTH=htpasswd -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2
0d007ea4f85da395b2de25b6a6c5c1178c06e4df8cf859bf5ef5808b655c30a2
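A minimal sketch to confirm the registry is up, serving TLS with our certificate and requiring the user created above (the empty catalog output is what a fresh registry would typically return):

curl --cacert /root/certs/dockerrepo.crt -u test:password https://myregistrydomain.com:5000/v2/_catalog
{"repositories":[]}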

Let me pull a demo Docker image to push to the secured registry and test.

root@deb-VirtualBox:~# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
f70adabe43c0: Pull complete
Digest: sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
Status: Downloaded newer image for busybox:latest

The image is now downloaded

root@deb-VirtualBox:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mywebserver latest 1d666b38b200 2 days ago 330MB
mywebserver v1 d749c49988b5 2 days ago 330MB
mywebserver1 v1 a23dab42b0ee 2 days ago 321MB
optimized v1 dd055168eb91 2 days ago 301MB
customimage v1 9528f83769a3 2 days ago 301MB
hello-world latest e38bc07ac18e 3 weeks ago 1.85kB
centos 6 70b5d81549ec 3 weeks ago 195MB
centos latest e934aafc2206 3 weeks ago 199MB
busybox latest 8ac48589692a 4 weeks ago 1.15MB
registry 2 d1fd7d86a825 3 months ago 33.3MB

Now let me tag the image with the registry name, log in, and push it to the self-signed, password-protected registry.

root@deb-VirtualBox:~# docker tag busybox myregistrydomain.com:5000/my-busybox

root@deb-VirtualBox:~# docker login myregistrydomain.com:5000
Username: test
Password:

root@deb-VirtualBox:~# docker push myregistrydomain.com:5000/my-busybox

Now let me test how we can pull the private Docker image on another host.

Make the same entry in /etc/hosts on the host from which you want to pull the private image.

Copy ca.crt from the first host to this server into the location /etc/docker/certs.d/myregistrydomain.com\:5000/ (a copy sketch follows after the directory is created below).

root@deb-VirtualBox2:~# cd /etc/docker
root@deb-VirtualBox2:~# mkdir certs.d
root@deb-VirtualBox2:~# cd /etc/docker/certs.d
root@deb-VirtualBox2:~# mkdir myregistrydomain.com\:5000/
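A minimal sketch of copying the certificate over, assuming the first host is reachable as myregistrydomain.com and the certificate was created under /root/certs as shown earlier:

root@deb-VirtualBox2:~# cd /etc/docker/certs.d/myregistrydomain.com\:5000/
root@deb-VirtualBox2:~# scp root@myregistrydomain.com:/root/certs/dockerrepo.crt ./ca.crt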

root@deb-VirtualBox2:~# chown root:root ca.crt

Now we are good to test whether we can pull the image from our private registry.

root@deb-VirtualBox2:~# docker login myregistrydomain.com:5000

root@deb-VirtualBox2:~# docker pull myregistrydomain.com:5000/my-busybox

Monitor real-time active session statistics with the OraPub tool

The tool was devised by Craig Shallahamer of OraPub and is excellent for monitoring what happens in sessions in real time.

1.First I need to download the OSM toolkit from OraPub. Then we need to run the following scripts to create some prerequisite tables under the sys/system schema.

a)osmprep.sql

-- ******************************************************
-- * Copyright Notice : (c)1998-2014,2015 OraPub, Inc.
-- * Filename : osmprep.sql 
-- * Author : Craig A. Shallahamer
-- * Original : 17-AUG-98
-- * Last Modified : 23-Dec-2014
-- * Description : OSM preperation script
-- * Usage : start osmprep.sql 
-- ******************************************************

prompt 
prompt OraPub System Monitor - Interactive (OSM-I) installation script.
prompt 
prompt (c)1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008 by OraPub, Inc.
prompt (c)2009,2010,2011,2012,2013,2014,2015 by OraPub, Inc.
prompt 
prompt There is absolutely no guarantee with this software. You may
prompt use this software at your own risk, not OraPub's risk. 
prompt No value is implied or stated.
prompt
prompt You may need to run $ORACLE_HOME/rdbms/admin/catblock.sql
prompt
prompt Connect as the user who will be using the OSM.
prompt
prompt Press ENTER to continue.
accept x

prompt
prompt Creating interim tables for delta calculations
prompt
-- Just for rtsysx8.sql
drop table o$sysstat;
create table o$sysstat as select * from v$sysstat where 1=0;
--
drop table system_event_snap;
create table system_event_snap as select * from v$system_event where 1=0;
drop table event_histogram_snap;
create table event_histogram_snap as select * from v$event_histogram where 1=0;
drop table op_sys_time_model_snap;
create table op_sys_time_model_snap as select * from v$sys_time_model where 1=0;
drop table op_os_cpu_snap;
create table op_os_cpu_snap as select stat_name,value from v$osstat where 1=0;

drop table o$system_event_snap;
create table o$system_event_snap as select * from v$system_event where 1=0;

drop table o$sys_time_model;
create table o$sys_time_model as select * from v$sys_time_model where 1=0;

drop table o$rtsysx_sql;
create table o$rtsysx_sql ( 
 sql_id varchar2(13),sql_address raw(8),cpu_time number, elapsed_time number,
 sorts number, executions number, parse_calls number, disk_reads number,
 buffer_gets number, rows_processed number, sql_text varchar2(1000)
);
create unique index o$rtsysx_sql on o$rtsysx_sql (sql_id);

-- rtsysx8.sql
drop table o$rtsysx_sql8;
create table o$rtsysx_sql8 ( 
 sql_address raw(8),cpu_time number, elapsed_time number,
 sorts number, executions number, parse_calls number, disk_reads number,
 buffer_gets number, rows_processed number, sql_text varchar2(1000)
);
create unique index o$rtsysx_sql8 on o$rtsysx_sql8 (sql_address);

-- rtcx.sql
drop table o$rtcx_snap;
create table o$rtcx_snap (category varchar2(100),time_cs number, waits number);

-- collection scripts (e.g., rss.sql, ash scripts,) need these two functions

create or replace function get_interval_s_fnc(i_intrvl interval day to second) return number is
begin
 return extract(day from i_intrvl) * 86400 +
 extract(hour from i_intrvl) * 3600 +
 extract(minute from i_intrvl) * 60 +
 extract(second from i_intrvl);
 exception
 when others then begin return null;
 end;
end;
/


prompt To categorize wait events for OSM reports, run:
prompt
prompt For pre-10g systems, run event_type_nc.sql
prompt For 10g and beyond, run event_type.sql

prompt
prompt Once you cateogrize the wait events, the installation is complete.
prompt
prompt Menu is osmi.sql 
prompt
prompt ENJOY!!
prompt

b)Then I will run another script, event_type.sql:

-- ********************************************************************
-- * Copyright Notice : (c)2001-2014 OraPub, Inc.
-- * Filename : event_type.sql - For 10g+ databases!!!!!
-- * Author : Craig A. Shallahamer
-- * Original : 11-may-01
-- * Last Update : 19-May-2015 (dealing with I/O events)
-- * Description : event_type.sql - Loads "event type" table
-- * which is used by other scripts.
-- * Usage : start event_type.sql
-- * This is usually run (manually) after osmprep.sql
-- ********************************************************************

prompt
prompt file: event_type.sql for Oracle 10g and beyond...
prompt
prompt About to categorize wait events for OSM reports.
prompt
prompt Press ENTER to re-create the o$event_type table.
accept x

-- Event TYPES are defined as follows:

-- ior - IO read related wait
-- iow - IO write related wait
-- other - "real" but not explicitly categorized
-- idle/bogus - idle events, usually not useful

drop table o$event_type
/
create table o$event_type (
event varchar2(64),
type varchar2(64)
)
/

insert into o$event_type
 select name,'other'
 from v$event_name
 where wait_class in ('Administrative','Application','Cluster','Concurrency',
 'Configuration','Other','Scheduler','Queuing','Scheduler')
/
insert into o$event_type
 select name,'bogus'
 from v$event_name
 where wait_class in ('Idle','Network')
/
insert into o$event_type
 select name,'ior'
 from v$event_name
 where wait_class in ('Commit','System I/O','User I/O')
 and name like '%read%'
/
insert into o$event_type
 select name,'iow'
 from v$event_name
 where wait_class in ('Commit','System I/O','User I/O')
 and name like '%write%'
/
insert into o$event_type
select name, 'other' from v$event_name
minus
select event, 'other' from o$event_type
/

-----------------------------------------------------------------------------------------
----- DO NOT REMOVE THE BELOW LINES as they make adjustments to the broad inserts above.
-----------------------------------------------------------------------------------------

-- select 'update o$event_type set type=''iow'' where event = '''||name||''';' from v$event_name where name like '%I/O%';

-- The below is not perfect cleary, but probably good enough... you still must think...

update o$event_type set type='iow' where event = 'Parameter File I/O';
update o$event_type set type='iow' where event = 'Disk file operations I/O';
update o$event_type set type='iow' where event = 'Disk file I/O Calibration';
update o$event_type set type='iow' where event = 'Datapump dump file I/O';
update o$event_type set type='iow' where event = 'dbms_file_transfer I/O';
update o$event_type set type='iow' where event = 'DG Broker configuration file I/O';
update o$event_type set type='iow' where event = 'Log archive I/O';
--update o$event_type set type='iow' where event = 'RMAN backup & recovery I/O';
update o$event_type set type='iow' where event like 'RMAN backup % recovery I/O';
update o$event_type set type='iow' where event = 'Standby redo I/O';
update o$event_type set type='iow' where event = 'RMAN Disk slave I/O';
update o$event_type set type='iow' where event = 'RMAN Tape slave I/O';
update o$event_type set type='iow' where event = 'DBWR slave I/O';
update o$event_type set type='iow' where event = 'LGWR slave I/O';
update o$event_type set type='iow' where event = 'Archiver slave I/O';
update o$event_type set type='iow' where event = 'db file async I/O submit';
update o$event_type set type='iow' where event = 'concurrent I/O completion';
update o$event_type set type='iow' where event = 'Archive Manager file transfer I/O';
--update o$event_type set type='iow' where event = 'resmgr:large I/O queued';
--update o$event_type set type='iow' where event = 'resmgr:small I/O queued';
update o$event_type set type='iow' where event = 'utl_file I/O';
update o$event_type set type='iow' where event = 'TEXT: File System I/O';
update o$event_type set type='iow' where event = 'ASM Fixed Package I/O';
update o$event_type set type='iow' where event = 'ASM Staleness File I/O';
update o$event_type set type='iow' where event = 'Data Guard: RFS disk I/O';


update o$event_type set type='iow' where event = 'free buffer waits';
update o$event_type set type='other' where event like 'latch%';
update o$event_type set type='other' where event like 'enq%';
update o$event_type set type='other' where event like 'cursor%';
update o$event_type set type='iow' where event like 'log%sync%';
update o$event_type set type='iow' where event like 'log%write%';
update o$event_type set type='iow' where event like 'db%write%';
update o$event_type set type='iow' where event like 'log%file%switch%';
update o$event_type set type='iow' where event like 'LGWR%';
update o$event_type set type='ior' where event like 'log%read%';
update o$event_type set type='ior' where event like 'db%read%';
update o$event_type set type='iow' where event like 'i/o%slave%wait';
update o$event_type set type='iow' where event = 'write complete waits';
update o$event_type set type='bogus' where event = 'wait for unread message on broadcast channel';
update o$event_type set type='bogus' where event = 'wait for unread message on multiple broadcast channels';
--update o$event_type set type='bogus' where event like 'db%file%async%submit%';
-- update o$event_type set type='bogus' where event like 'DLM%cross%inst%call%completion';
update o$event_type set type='bogus' where event like 'rdbms ipc reply';

-- for 12c
update o$event_type set type='bogus' where event like 'LGWR worker group idle';

-----------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------


commit;

col name format a50 trunc
col wait_class format a20
col type format a20

prompt
prompt OraPub Categorization Summary
prompt ------------------------------------------------

select distinct type,
count(*)
from o$event_type
group by type
order by type
/
select count(*) from o$event_type
/

prompt
prompt Oracle Categorization Summary
prompt ------------------------------------------------

select distinct wait_class, count(*) from v$event_name group by wait_class order by wait_class
/
select count(*) from v$event_name
/

2.Now the following script can be saved as /home/oracle/rss.sql.

-- ******************************************************
-- * Copyright Notice : (c)2014 OraPub, Inc.
-- * Filename : rss.sql - Realtime Session Sampler
-- * Author : Craig A. Shallahamer
-- * Original : 07-Jun-2014
-- * Last Modified : 15-Jul-2014
-- * Description : Collect and display session activity in realtime
-- * Usage : @rss sid_low sid_high serial#_low serial#_high cpu|wait|% event%|% delay_s
-- * @rss 0 999999 0 999999 % % 1
-- * @rss 0 999999 0 999999 cpu % 1
-- * @rss 0 999999 0 999999 wait % 1
-- * @rss 0 999999 0 999999 wait db%file%seq% 1
-- * @rss 328 328 1 1 % % 0.25
-- ******************************************************

set echo off verify off heading off
--set echo on verify on heading on

def sidLow=&1
def sidHigh=&2
def serLow=&3
def serHigh=&4
def state=&5
def wePartial=&6
def delay=&7

prompt OraPub Realtime Session Sampler - collection and display
prompt
prompt Every &delay second(s), session ID from &sidLow to &sidHigh and serial# from &serLow to &serHigh
prompt in a &state state will be sampled and displayed.
prompt When waiting, only display when wait event is like &wePartial
prompt 
prompt Output will be written to the /tmp/rss_sql.txt file.
prompt To stream output in realtime, in another window do: tail -f /tmp/rss_sql.txt
prompt
accept x prompt "To begin sampling press ENTER. Otherwise break out now."

create or replace directory temp_dir as '/tmp';

prompt To stop sampling, break out (e.g., CNTRL-C)
prompt
prompt Sampling started...

set serveroutput on

begin
declare
 type cv_typ is ref cursor;
 cv cv_typ;
 sql_to_run_v varchar2(999);

fHandle UTL_FILE.FILE_TYPE;

cntr_v number;
 next_sample_time_ts_var timestamp;

current_timestamp_v timestamp;
 sid_v number;
 serial#_v number;
 username_v varchar2(30);
 state_v varchar2(19);
 event_v varchar2(64);
 wecat_v varchar2(64);
 sql_id_v varchar2(13);
 p1_v number;
 p2_v number;
 p3_v number;

begin
 dbms_session.set_identifier('osm:i rss.sql');

fHandle := UTL_FILE.FOPEN('TEMP_DIR', 'rss_sql.txt', 'W');

UTL_FILE.PUT_LINE(fHandle, 'Starting sampling...');
 utl_file.fflush(fHandle);

cntr_v := 0;

while 1 = 1
 loop

cntr_v := cntr_v + 1;

select current_timestamp + interval '0 00:00:&delay' day to second
 into next_sample_time_ts_var
 from dual;

begin

open cv for
 select
 current_timestamp,
 sid,
 serial#,
 decode(username,'','-',username),
 decode(state,'WAITING','WAIT','CPU '),
 nvl(sql_id,'-'),
 decode(state,'WAITING',decode(o.type,'bogus','idle',o.type),'-'),
 decode(state,'WAITING',vs.event,'-') ,
 decode(state,'WAITING',p1,''),
 decode(state,'WAITING',p2,''),
 decode(state,'WAITING',p3,'')
 from v$session vs,
 o$event_type o
 where sid between &sidLow and &sidHigh
 and serial# between &serLow and &serHigh
 and decode(state,'WAITING','WAIT','CPU') like upper('&state%')
 and vs.event like '&wePartial%'
 and vs.event = o.event(+)
 and o.type != 'bogus';

loop
 fetch cv into
 current_timestamp_v, 
 sid_v,
 serial#_v,
 username_v,
 state_v,
 sql_id_v,
 wecat_v,
 event_v,
 p1_v, p2_v, p3_v
 ;
 exit when cv%NOTFOUND;

UTL_FILE.PUT_LINE(fHandle, lpad(trim(cntr_v),3,' ')||' '|| to_char(current_timestamp_v,'HH24:MI:SS:FF3')||' '||lpad(trim( sid_v ),5,' ')||lpad(trim(serial#_v),6,' ')||' '||lpad(trim(username_v),10,' ')||' '||state_v||' '||rpad(trim(sql_id_v),20,' ')||' '||rpad(trim(wecat_v),5,' ')||' '||rpad(trim(event_v),30,' ')||' ['||p1_v||','||p2_v||','||p3_v||']');
 utl_file.fflush(fHandle);

end loop;

EXCEPTION when others then
 cntr_v := cntr_v + 1;
 end;

dbms_lock.sleep( greatest( get_interval_s_fnc( next_sample_time_ts_var - current_timestamp ),0));

end loop;
 UTL_FILE.PUT_LINE(fHandle, 'Ending sampling...');
 UTL_FILE.FCLOSE(fHandle);
end;
end;
/

3.Now execute the script to take samples of either an individual session or many sessions.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql 10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row-level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Situation: I want to sample all sessions in my database.

SQL> @rss.sql 0 99999 0 99999 % % 0.125

Reference

http://blog.orapub.com/20140811/watch-oracle-db-session-activity-with-the-real-time-session-sampler.html

DOCKER CONCEPT FOR DBA – PART 9 : How to use a Docker image tar file to re-instantiate an image

1.Creating a Docker tar file from a Docker image. We can transfer that tar file and import it on another Docker host (a transfer sketch follows below).

root@deb-VirtualBox:~# docker image save mywebserver:v1>mywebserver.tar
root@deb-VirtualBox:~# ls -ltr mywebserver.tar
-rw-r--r-- 1 root root 330399232 May 2 19:53 mywebserver.tar
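A minimal sketch of moving the tar file to a second host, assuming it is reachable as otherhost; docker load is the usual counterpart of docker save, while the steps below demonstrate docker import:

root@deb-VirtualBox:~# scp mywebserver.tar root@otherhost:/root/
root@deb-VirtualBox:~# ssh root@otherhost "docker load -i /root/mywebserver.tar"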

2.Remove the existing docker image 

root@deb-VirtualBox:~# docker rmi -f mywebserver:v1
Untagged: mywebserver:v1

3.Please note the image is deleted

root@deb-VirtualBox:~# docker images

4.Now we can re-import the Docker image from the tar file. (Note: docker load is the usual counterpart of docker save; docker import, used below, is intended for filesystem tarballs created with docker export.)

root@deb-VirtualBox:~# docker import mywebserver.tar mywebserver:v1
sha256:d749c49988b578c7f3183eccdf135520f4b1e8778a3770c4ad9755f32d243b3f

root@deb-VirtualBox:~# docker import mywebserver.tar mywebserver:latest
sha256:1d666b38b200ec0c6c47399890b15ba856b3ae2811cd8c90080f1faa81ed9c56

5.I have 2 images of the mywebserver

root@deb-VirtualBox:~# docker images

DOCKER CONCEPT FOR DBA – PART 8 : How to build a Docker image from a Dockerfile

1.You need to create a Dockerfile first. Below is an example of a small test Dockerfile.

root@deb-VirtualBox:~# cat test_docker.file
 FROM centos:latest
 RUN yum update -y

Here please note that the FROM instruction pulls the centos image as the base, and the RUN instruction runs yum update on top of it.

2.Now I will build the Docker image.

The -t option is used to provide the tag name.

The -f option is used to provide the Dockerfile name.

You need to run the following command from the same path where your Dockerfile resides.

root@deb-VirtualBox:~# docker build -t customimage:v1 -f test_docker.file .
 Sending build context to Docker daemon 8.704kB
 Step 1/2 : FROM centos:latest
 latest: Pulling from library/centos
 469cfcc7a4b3: Pull complete
 Digest: sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
 Status: Downloaded newer image for centos:latest
 ---> e934aafc2206
 Step 2/2 : RUN yum update -y
 ---> Running in 21686d3260e4
 Loaded plugins: fastestmirror, ovl
 http://ftp.iitm.ac.in/centos/7.4.1708/updates/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://ftp.iitm.ac.in/centos/7.4.1708/updates/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
 Trying other mirror.
 Determining fastest mirrors
 * base: mirror.nbrc.ac.in
 * extras: mirror.nbrc.ac.in
 * updates: mirror.nbrc.ac.in
 No packages marked for update
 Removing intermediate container 21686d3260e4
 ---> 9528f83769a3
 Successfully built 9528f83769a3
 Successfully tagged customimage:v1

3.Now let me run the docker images command to check the details of our latest build.

root@deb-VirtualBox:~# docker images

Please note centos image has been pulled and on top of it our customimage has been built.

4.Now you can use the --pull and --no-cache options while building, to force a fresh pull of the base image and skip cached layers.

root@deb-VirtualBox:~# docker build --pull --no-cache -t optimized:v1 -f test_docker.file .
 Sending build context to Docker daemon 8.704kB
 Step 1/2 : FROM centos:latest
 latest: Pulling from library/centos
 Digest: sha256:989b936d56b1ace20ddf855a301741e52abca38286382cba7f44443210e96d16
 Status: Image is up to date for centos:latest
 ---> e934aafc2206
 Step 2/2 : RUN yum update -y
 ---> Running in b2319a7d1d89
 Loaded plugins: fastestmirror, ovl
 http://ftp.iitm.ac.in/centos/7.4.1708/updates/x86_64/repodata/repomd.xml: [Errno 12] Timeout on http://ftp.iitm.ac.in/centos/7.4.1708/updates/x86_64/repodata/repomd.xml: (28, 'Connection timed out after 30001 milliseconds')
 Trying other mirror.
 Determining fastest mirrors
 * base: del-mirrors.extreme-ix.org
 * extras: del-mirrors.extreme-ix.org
 * updates: del-mirrors.extreme-ix.org
 No packages marked for update
 Removing intermediate container b2319a7d1d89
 ---> dd055168eb91
 Successfully built dd055168eb91
 Successfully tagged optimized:v1

5.Now let me write a more complex Dockerfile.

root@deb-VirtualBox:~# cat mycomplexdockerfile.file
FROM centos:6
LABEL maintainer="debasis.tcs@gmail.com"
RUN yum update -y && yum install -y httpd net-tools
RUN mkdir -p /run/httpd
RUN rm -rf /run/http/* /tmp/httpd*
CMD echo "Remember to check your container IP address"
ENV ENVIRONMENT="production"
EXPOSE 80
ENTRYPOINT apachectl "-DFOREGROUND"

6.Let me explain the syntax which we used above:

FROM centos:6 (Pull the centos 6 image as the base)
LABEL maintainer="debasis.tcs@gmail.com" (LABEL replaces the deprecated MAINTAINER instruction)
RUN yum update -y && yum install -y httpd net-tools (Run yum update and install the required packages)
RUN mkdir -p /run/httpd (Create the directory Apache needs at runtime)
RUN rm -rf /run/http/* /tmp/httpd* (Remove leftover files)
CMD echo "Remember to check your container IP address" (Echo a reminder; with the shell-form ENTRYPOINT below, this CMD is ignored at run time)
ENV ENVIRONMENT="production" (Set an environment variable)
EXPOSE 80 (The Apache service will listen on port 80)
ENTRYPOINT apachectl "-DFOREGROUND" (Start Apache in the foreground when the container runs)

 

7.Building docker image

docker build -t mywebserver:v1 -f mycomplexdockerfile.file .

8.Please check the image again

root@deb-VirtualBox:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mywebserver v1 a23dab42b0ee 7 minutes ago 321MB
optimized v1 dd055168eb91 11 hours ago 301MB
customimage v1 9528f83769a3 11 hours ago 301MB
hello-world latest e38bc07ac18e 2 weeks ago 1.85kB
centos 6 70b5d81549ec 3 weeks ago 195MB
centos latest e934aafc2206 3 weeks ago 199MB
root@deb-VirtualBox:~# docker run -d --name mytestweb --rm mywebserver:v1
c0d3e30d1ab22523e7be4958f15cad6dca122a368306a23dd1573a958adabebd

9.Please check that the container is running now

root@deb-VirtualBox:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0d3e30d1ab2 mywebserver:v1 "/bin/sh -c 'apachec…" 45 seconds ago Up 44 seconds 80/tcp mytestweb

10.Let me check the IP address of the container

root@deb-VirtualBox:~# docker inspect mytestweb | grep -i ipaddress
 "SecondaryIPAddresses": null,
 "IPAddress": "172.17.0.2",
 "IPAddress": "172.17.0.2",

11.Let me check using elinks

root@deb-VirtualBox:~# elinks http://172.17.0.2

12.Now we can use a build argument (ARG) in the Dockerfile

root@deb-VirtualBox:~# cat mycomplexdockerfile_arg.file
ARG TARGETVERSION=6
FROM centos:${TARGETVERSION}
LABEL maintainer="debasis.tcs@gmail.com"
RUN yum update -y && yum install -y httpd net-tools
RUN mkdir -p /run/httpd
RUN rm -rf /run/http/* /tmp/httpd*
CMD echo "Remember to check your container IP address"
ENV ENVIRONMENT="production"
EXPOSE 80
ENTRYPOINT apachectl "-DFOREGROUND"

13.I will now rebuild the image from this Dockerfile (a sketch of overriding the ARG follows)

root@deb-VirtualBox:~# docker build -t mywebserver1:v1 -f mycomplexdockerfile_arg.file .
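A minimal sketch of overriding the build argument at build time (the target version 7 and tag v2 are assumptions for illustration):

root@deb-VirtualBox:~# docker build --build-arg TARGETVERSION=7 -t mywebserver1:v2 -f mycomplexdockerfile_arg.file .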

14.We can check the history of the commands executed during the docker build

root@deb-VirtualBox:~# docker image history mywebserver1:v1

15.You may use the --no-trunc option to see the full command line recorded for each layer (a sketch follows)
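A minimal sketch, using the image built above:

root@deb-VirtualBox:~# docker image history --no-trunc mywebserver1:v1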

PostgreSQL DBA part 1 :- Installation of PostgreSQL 9.6 in Linux

I will show how you can install PostgreSQL 9.6 on Linux 7. This document is based on guidance provided by EDB (EnterpriseDB, the enterprise distributor of PostgreSQL).

Step 1:- First we need to check the supported platforms, as below.

The Advanced Server 9.6 RPM packages are supported on the following platforms:
64 bit Linux:

Red Hat Enterprise Linux 6.x and 7.x

CentOS 6.x and 7.x

SLES 12

The Advanced Server 9.6 graphical (or interactive) installers are supported on the following platforms:

64 bit Linux:

Red Hat Enterprise Linux 6.x and 7.x

CentOS 6.x and 7.x

Oracle Enterprise Linux 6.x and 7.x

Ubuntu 14.04 LTS and 16.04 LTS

Debian 7 and 8

SUSE Linux Enterprise Server (SLES) 12.x

64 bit Windows:

Windows Server 2016

Windows Server 2012 R2 Server

Windows Server 2008 R2 Server

The Connectors (JDBC/.NET/ODBC/OCL) are supported on Windows 7 (32 bit client), Windows 8 and Windows 10 clients.

Step 2:- Let me change SELinux to permissive mode:

[root@debasiseric4 ~]# vi /etc/selinux/config
#SELINUX=enforcing
SELINUX=permissive

[root@debasiseric4 ~]# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
[root@debasiseric4 ~]# setenforce Permissive
[root@debasiseric4 ~]# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31

Step 3:-Now I will download PostgreSQL 9.6 from EDB

https://www.enterprisedb.com/enterprisedb-software-downloads

Please save the file to the local drive and transfer it (ftp/scp) to the server where PostgreSQL needs to be installed.

 

Step 4:- Please download JDK 1.8 and transfer it to the Linux host. After that, install/untar the JDK.

Step 5:- Now I will install PostgreSQL.

A.Please provide execute permission on the installer file (the first attempt below fails with Permission denied):
[root@debasiseric4 user]# ./edb-as96-meta-9.6.2.7-2-linux-x64.run
-bash: ./edb-as96-meta-9.6.2.7-2-linux-x64.run: Permission denied
[root@debasiseric4 user]# chmod 777 edb-as96-meta-9.6.2.7-2-linux-x64.run
B.Please add the new JDK 8 bin directory to the PATH.
[root@debasiseric4 jdk1.8.0_151]# export PATH=/home/user/jdk1.8.0_151/bin:$PATH
[root@debasiseric4 jdk1.8.0_151]# which java
/home/user/jdk1.8.0_151/bin/java
[root@debasiseric4 jdk1.8.0_151]# java -version
java version "1.8.0_151"
C.Now continue the installation
[root@debasiseric4 user]# ./edb-as96-meta-9.6.2.7-2-linux-x64.run
Language Selection  Please select the installation language
[1] English – English
[2] Japanese –
[3] Simplified Chinese –
[4] Traditional Chinese –
[5] Korean –
Please choose an option [1] :
Please choose an option [1] :
—————————————————————————-
Welcome to the EDB Postgres Advanced Server Setup Wizard.
—————————————————————————-
Please read the following License Agreement. You must accept the terms of this
agreement before continuing with the installation.
Press [Enter] to continue:
Limited Use Software License Agreement
Version 2.9
Do you accept this license? [y/n]: y
—————————————————————————-
User Authentication
This installation requires a registration with EnterpriseDB.com. Please enter
your credentials below. If you do not have an account, Please create one now on
https://www.enterprisedb.com/user-login-registration

Email [debasis.tcs@gmail.com]:

Password :

—————————————————————————-
Please specify the directory where EDB Postgres Advanced Server will be
installed.

Installation Directory [/opt/edb]:

—————————————————————————-
Select the components you want to install.

EDB Postgres Advanced Server [Y/n] :Y

EDB Postgres Connectors [Y/n] :Y

EDB Postgres Advanced Server Infinite Cache [Y/n] :Y

EDB Postgres Migration Toolkit [Y/n] :Y

EDB Postgres pgAdmin 4 [Y/n] :Y

EDB Postgres pgPool-II [Y/n] :Y

EDB Postgres pgPool-II Extensions [Y/n] :Y

EDB Postgres Advanced Server EDB*Plus [Y/n] :Y

EDB Postgres Advanced Server Slony Replication [Y/n] :Y

EDB Postgres pgBouncer [Y/n] :Y

Is the selection above correct? [Y/n]: Y

—————————————————————————-
Additional Directories

Please select a directory under which to store your data.

Data Directory [/opt/edb/as9.6/data]:

Please select a directory under which to store your Write-Ahead Logs.

Write-Ahead Log (WAL) Directory [/opt/edb/as9.6/data/pg_xlog]:

—————————————————————————-
Configuration Mode

EDB Postgres Advanced Server always installs with database compatibility features for Oracle(R) and maintains full PostgreSQL compliance. Select your style preference for installation defaults and samples.

The Oracle configuration will cause the use of certain objects (e.g. DATE data types, string operations, etc.) to produce results compatible with Oracle, create the same Oracle sample tables, and have the database match Oracle examples used in the documentation.

Configuration Mode

[1] Compatible with Oracle
[2] Compatible with PostgreSQL
Please choose an option [1] : 1

—————————————————————————-
Please provide a password for the database superuser (enterprisedb). A locked
Unix user account (enterprisedb) will be created if not present.

Password :
Retype Password :
—————————————————————————-
Additional Configuration

Please select the port number the server should listen on.

Port [5444]:

Select the locale to be used by the new database cluster.

Locale

[1] [Default locale]
[2] aa_DJ
[3] aa_DJ.iso88591
[4] aa_DJ.utf8
[5] aa_ER
[6] aa_ER@saaho
[7] aa_ER.utf8
[8] aa_ER.utf8@saaho
[9] aa_ET
[10] aa_ET.utf8
[11] af_ZA
[12] af_ZA.iso88591
[13] af_ZA.utf8
[14] am_ET
[15] am_ET.utf8
[16] an_ES
[17] an_ES.iso885915
[18] an_ES.utf8
[19] ar_AE
[20] ar_AE.iso88596
[21] ar_AE.utf8
[22] ar_BH
[23] ar_BH.iso88596
[24] ar_BH.utf8
[25] ar_DZ
[26] ar_DZ.iso88596
[27] ar_DZ.utf8
[28] ar_EG
[29] ar_EG.iso88596
[30] ar_EG.utf8
[31] ar_IN
[32] ar_IN.utf8
[33] ar_IQ
[34] ar_IQ.iso88596
[35] ar_IQ.utf8
[36] ar_JO
[37] ar_JO.iso88596
[38] ar_JO.utf8
[39] ar_KW
[40] ar_KW.iso88596
[41] ar_KW.utf8
[42] ar_LB
[43] ar_LB.iso88596
[44] ar_LB.utf8
[45] ar_LY
[46] ar_LY.iso88596
[47] ar_LY.utf8
[48] ar_MA
[49] ar_MA.iso88596
[50] ar_MA.utf8
[51] ar_OM
[52] ar_OM.iso88596
[53] ar_OM.utf8
[54] ar_QA
[55] ar_QA.iso88596
[56] ar_QA.utf8
[57] ar_SA
[58] ar_SA.iso88596
[59] ar_SA.utf8
[60] ar_SD
[61] ar_SD.iso88596
[62] ar_SD.utf8
[63] ar_SY
[64] ar_SY.iso88596
[65] ar_SY.utf8
[66] ar_TN
[67] ar_TN.iso88596
[68] ar_TN.utf8
[69] ar_YE
[70] ar_YE.iso88596
[71] ar_YE.utf8
[72] as_IN
[73] as_IN.utf8
[74] ast_ES
[75] ast_ES.iso885915
[76] ast_ES.utf8
[77] ayc_PE
[78] ayc_PE.utf8
[79] az_AZ
[80] az_AZ.utf8
[81] be_BY
[82] be_BY.cp1251
[83] be_BY@latin
[84] be_BY.utf8
[85] be_BY.utf8@latin
[86] bem_ZM
[87] bem_ZM.utf8
[88] ber_DZ
[89] ber_DZ.utf8
[90] ber_MA
[91] ber_MA.utf8
[92] bg_BG
[93] bg_BG.cp1251
[94] bg_BG.utf8
[95] bho_IN
[96] bho_IN.utf8
[97] bn_BD
[98] bn_BD.utf8
[99] bn_IN
[100] bn_IN.utf8
[101] bo_CN
[102] bo_CN.utf8
[103] bo_IN
[104] bo_IN.utf8
[105] bokmal
[106] bokmål
[107] br_FR
[108] br_FR@euro
[109] br_FR.iso88591
[110] br_FR.iso885915@euro
[111] br_FR.utf8
[112] brx_IN
[113] brx_IN.utf8
[114] bs_BA
[115] bs_BA.iso88592
[116] bs_BA.utf8
[117] byn_ER
[118] byn_ER.utf8
[119] C
[120] ca_AD
[121] ca_AD.iso885915
[122] ca_AD.utf8
[123] ca_ES
[124] ca_ES@euro
[125] ca_ES.iso88591
[126] ca_ES.iso885915@euro
[127] ca_ES.utf8
[128] ca_FR
[129] ca_FR.iso885915
[130] ca_FR.utf8
[131] ca_IT
[132] ca_IT.iso885915
[133] ca_IT.utf8
[134] catalan
[135] crh_UA
[136] crh_UA.utf8
[137] croatian
[138] csb_PL
[139] csb_PL.utf8
[140] cs_CZ
[141] cs_CZ.iso88592
[142] cs_CZ.utf8
[143] cv_RU
[144] cv_RU.utf8
[145] cy_GB
[146] cy_GB.iso885914
[147] cy_GB.utf8
[148] czech
[149] da_DK
[150] da_DK.iso88591
[151] da_DK.iso885915
[152] da_DK.utf8
[153] danish
[154] dansk
[155] de_AT
[156] de_AT@euro
[157] de_AT.iso88591
[158] de_AT.iso885915@euro
[159] de_AT.utf8
[160] de_BE
[161] de_BE@euro
[162] de_BE.iso88591
[163] de_BE.iso885915@euro
[164] de_BE.utf8
[165] de_CH
[166] de_CH.iso88591
[167] de_CH.utf8
[168] de_DE
[169] de_DE@euro
[170] de_DE.iso88591
[171] de_DE.iso885915@euro
[172] de_DE.utf8
[173] de_LU
[174] de_LU@euro
[175] de_LU.iso88591
[176] de_LU.iso885915@euro
[177] de_LU.utf8
[178] deutsch
[179] doi_IN
[180] doi_IN.utf8
[181] dutch
[182] dv_MV
[183] dv_MV.utf8
[184] dz_BT
[185] dz_BT.utf8
[186] eesti
[187] el_CY
[188] el_CY.iso88597
[189] el_CY.utf8
[190] el_GR
[191] el_GR.iso88597
[192] el_GR.utf8
[193] en_AG
[194] en_AG.utf8
[195] en_AU
[196] en_AU.iso88591
[197] en_AU.utf8
[198] en_BW
[199] en_BW.iso88591
[200] en_BW.utf8
[201] en_CA
[202] en_CA.iso88591
[203] en_CA.utf8
[204] en_DK
[205] en_DK.iso88591
[206] en_DK.utf8
[207] en_GB
[208] en_GB.iso88591
[209] en_GB.iso885915
[210] en_GB.utf8
[211] en_HK
[212] en_HK.iso88591
[213] en_HK.utf8
[214] en_IE
[215] en_IE@euro
[216] en_IE.iso88591
[217] en_IE.iso885915@euro
[218] en_IE.utf8
[219] en_IN
[220] en_IN.utf8
[221] en_NG
[222] en_NG.utf8
[223] en_NZ
[224] en_NZ.iso88591
[225] en_NZ.utf8
[226] en_PH
[227] en_PH.iso88591
[228] en_PH.utf8
[229] en_SG
[230] en_SG.iso88591
[231] en_SG.utf8
[232] en_US
[233] en_US.iso88591
[234] en_US.iso885915
[235] en_US.utf8
[236] en_ZA
[237] en_ZA.iso88591
[238] en_ZA.utf8
[239] en_ZM
[240] en_ZM.utf8
[241] en_ZW
[242] en_ZW.iso88591
[243] en_ZW.utf8
[244] es_AR
[245] es_AR.iso88591
[246] es_AR.utf8
[247] es_BO
[248] es_BO.iso88591
[249] es_BO.utf8
[250] es_CL
[251] es_CL.iso88591
[252] es_CL.utf8
[253] es_CO
[254] es_CO.iso88591
[255] es_CO.utf8
[256] es_CR
[257] es_CR.iso88591
[258] es_CR.utf8
[259] es_CU
[260] es_CU.utf8
[261] es_DO
[262] es_DO.iso88591
[263] es_DO.utf8
[264] es_EC
[265] es_EC.iso88591
[266] es_EC.utf8
[267] es_ES
[268] es_ES@euro
[269] es_ES.iso88591
[270] es_ES.iso885915@euro
[271] es_ES.utf8
[272] es_GT
[273] es_GT.iso88591
[274] es_GT.utf8
[275] es_HN
[276] es_HN.iso88591
[277] es_HN.utf8
[278] es_MX
[279] es_MX.iso88591
[280] es_MX.utf8
[281] es_NI
[282] es_NI.iso88591
[283] es_NI.utf8
[284] es_PA
[285] es_PA.iso88591
[286] es_PA.utf8
[287] es_PE
[288] es_PE.iso88591
[289] es_PE.utf8
[290] es_PR
[291] es_PR.iso88591
[292] es_PR.utf8
[293] es_PY
[294] es_PY.iso88591
[295] es_PY.utf8
[296] es_SV
[297] es_SV.iso88591
[298] es_SV.utf8
[299] estonian
[300] es_US
[301] es_US.iso88591
[302] es_US.utf8
[303] es_UY
[304] es_UY.iso88591
[305] es_UY.utf8
[306] es_VE
[307] es_VE.iso88591
[308] es_VE.utf8
[309] et_EE
[310] et_EE.iso88591
[311] et_EE.iso885915
[312] et_EE.utf8
[313] eu_ES
[314] eu_ES@euro
[315] eu_ES.iso88591
[316] eu_ES.iso885915@euro
[317] eu_ES.utf8
[318] fa_IR
[319] fa_IR.utf8
[320] ff_SN
[321] ff_SN.utf8
[322] fi_FI
[323] fi_FI@euro
[324] fi_FI.iso88591
[325] fi_FI.iso885915@euro
[326] fi_FI.utf8
[327] fil_PH
[328] fil_PH.utf8
[329] finnish
[330] fo_FO
[331] fo_FO.iso88591
[332] fo_FO.utf8
[333] français
[334] fr_BE
[335] fr_BE@euro
[336] fr_BE.iso88591
[337] fr_BE.iso885915@euro
[338] fr_BE.utf8
[339] fr_CA
[340] fr_CA.iso88591
[341] fr_CA.utf8
[342] fr_CH
[343] fr_CH.iso88591
[344] fr_CH.utf8
[345] french
[346] fr_FR
[347] fr_FR@euro
[348] fr_FR.iso88591
[349] fr_FR.iso885915@euro
[350] fr_FR.utf8
[351] fr_LU
[352] fr_LU@euro
[353] fr_LU.iso88591
[354] fr_LU.iso885915@euro
[355] fr_LU.utf8
[356] fur_IT
[357] fur_IT.utf8
[358] fy_DE
[359] fy_DE.utf8
[360] fy_NL
[361] fy_NL.utf8
[362] ga_IE
[363] ga_IE@euro
[364] ga_IE.iso88591
[365] ga_IE.iso885915@euro
[366] ga_IE.utf8
[367] galego
[368] galician
[369] gd_GB
[370] gd_GB.iso885915
[371] gd_GB.utf8
[372] german
[373] gez_ER
[374] gez_ER@abegede
[375] gez_ER.utf8
[376] gez_ER.utf8@abegede
[377] gez_ET
[378] gez_ET@abegede
[379] gez_ET.utf8
[380] gez_ET.utf8@abegede
[381] gl_ES
[382] gl_ES@euro
[383] gl_ES.iso88591
[384] gl_ES.iso885915@euro
[385] gl_ES.utf8
[386] greek
[387] gu_IN
[388] gu_IN.utf8
[389] gv_GB
[390] gv_GB.iso88591
[391] gv_GB.utf8
[392] ha_NG
[393] ha_NG.utf8
[394] hebrew
[395] he_IL
[396] he_IL.iso88598
[397] he_IL.utf8
[398] hi_IN
[399] hi_IN.utf8
[400] hne_IN
[401] hne_IN.utf8
[402] hr_HR
[403] hr_HR.iso88592
[404] hr_HR.utf8
[405] hrvatski
[406] hsb_DE
[407] hsb_DE.iso88592
[408] hsb_DE.utf8
[409] ht_HT
[410] ht_HT.utf8
[411] hu_HU
[412] hu_HU.iso88592
[413] hu_HU.utf8
[414] hungarian
[415] hy_AM
[416] hy_AM.utf8
[417] ia_FR
[418] ia_FR.utf8
[419] icelandic
[420] id_ID
[421] id_ID.iso88591
[422] id_ID.utf8
[423] ig_NG
[424] ig_NG.utf8
[425] ik_CA
[426] ik_CA.utf8
[427] is_IS
[428] is_IS.iso88591
[429] is_IS.utf8
[430] italian
[431] it_CH
[432] it_CH.iso88591
[433] it_CH.utf8
[434] it_IT
[435] it_IT@euro
[436] it_IT.iso88591
[437] it_IT.iso885915@euro
[438] it_IT.utf8
[439] iu_CA
[440] iu_CA.utf8
[441] iw_IL
[442] iw_IL.iso88598
[443] iw_IL.utf8
[444] ja_JP
[445] ja_JP.eucjp
[446] ja_JP.ujis
[447] ja_JP.utf8
[448] japanese
[449] japanese.euc
[450] ka_GE.utf8
[451] kk_KZ.utf8
[452] kl_GL
[453] kl_GL.iso88591
[454] kl_GL.utf8
[455] km_KH
[456] km_KH.utf8
[457] kn_IN
[458] kn_IN.utf8
[459] kok_IN
[460] kok_IN.utf8
[461] ko_KR
[462] ko_KR.euckr
[463] ko_KR.utf8
[464] korean
[465] korean.euc
[466] ks_IN
[467] ks_IN@devanagari
[468] ks_IN.utf8
[469] ks_IN.utf8@devanagari
[470] ku_TR
[471] ku_TR.iso88599
[472] ku_TR.utf8
[473] kw_GB
[474] kw_GB.iso88591
[475] kw_GB.utf8
[476] ky_KG
[477] ky_KG.utf8
[478] lb_LU
[479] lb_LU.utf8
[480] lg_UG
[481] lg_UG.iso885910
[482] lg_UG.utf8
[483] li_BE
[484] li_BE.utf8
[485] lij_IT
[486] lij_IT.utf8
[487] li_NL
[488] li_NL.utf8
[489] lithuanian
[490] lo_LA
[491] lo_LA.utf8
[492] lt_LT
[493] lt_LT.iso885913
[494] lt_LT.utf8
[495] lv_LV
[496] lv_LV.iso885913
[497] lv_LV.utf8
[498] mag_IN
[499] mag_IN.utf8
[500] mai_IN
[501] mai_IN.utf8
[502] mg_MG
[503] mg_MG.iso885915
[504] mg_MG.utf8
[505] mhr_RU
[506] mhr_RU.utf8
[507] mi_NZ
[508] mi_NZ.iso885913
[509] mi_NZ.utf8
[510] mk_MK
[511] mk_MK.iso88595
[512] mk_MK.utf8
[513] ml_IN
[514] ml_IN.utf8
[515] mni_IN
[516] mni_IN.utf8
[517] mn_MN
[518] mn_MN.utf8
[519] mr_IN
[520] mr_IN.utf8
[521] ms_MY
[522] ms_MY.iso88591
[523] ms_MY.utf8
[524] mt_MT
[525] mt_MT.iso88593
[526] mt_MT.utf8
[527] my_MM
[528] my_MM.utf8
[529] nan_TW@latin
[530] nan_TW.utf8@latin
[531] nb_NO
[532] nb_NO.iso88591
[533] nb_NO.utf8
[534] nds_DE
[535] nds_DE.utf8
[536] nds_NL
[537] nds_NL.utf8
[538] ne_NP
[539] ne_NP.utf8
[540] nhn_MX
[541] nhn_MX.utf8
[542] niu_NU
[543] niu_NU.utf8
[544] niu_NZ
[545] niu_NZ.utf8
[546] nl_AW
[547] nl_AW.utf8
[548] nl_BE
[549] nl_BE@euro
[550] nl_BE.iso88591
[551] nl_BE.iso885915@euro
[552] nl_BE.utf8
[553] nl_NL
[554] nl_NL@euro
[555] nl_NL.iso88591
[556] nl_NL.iso885915@euro
[557] nl_NL.utf8
[558] nn_NO
[559] nn_NO.iso88591
[560] nn_NO.utf8
[561] no_NO
[562] no_NO.ISO-8859-1
[563] norwegian
[564] nr_ZA
[565] nr_ZA.utf8
[566] nso_ZA
[567] nso_ZA.utf8
[568] nynorsk
[569] oc_FR
[570] oc_FR.iso88591
[571] oc_FR.utf8
[572] om_ET
[573] om_ET.utf8
[574] om_KE
[575] om_KE.iso88591
[576] om_KE.utf8
[577] or_IN
[578] or_IN.utf8
[579] os_RU
[580] os_RU.utf8
[581] pa_IN
[582] pa_IN.utf8
[583] pap_AN
[584] pap_AN.utf8
[585] pa_PK
[586] pa_PK.utf8
[587] pl_PL
[588] pl_PL.iso88592
[589] pl_PL.utf8
[590] polish
[591] portuguese
[592] POSIX
[593] ps_AF
[594] ps_AF.utf8
[595] pt_BR
[596] pt_BR.iso88591
[597] pt_BR.utf8
[598] pt_PT
[599] pt_PT@euro
[600] pt_PT.iso88591
[601] pt_PT.iso885915@euro
[602] pt_PT.utf8
[603] romanian
[604] ro_RO
[605] ro_RO.iso88592
[606] ro_RO.utf8
[607] ru_RU
[608] ru_RU.iso88595
[609] ru_RU.koi8r
[610] ru_RU.utf8
[611] russian
[612] ru_UA
[613] ru_UA.koi8u
[614] ru_UA.utf8
[615] rw_RW
[616] rw_RW.utf8
[617] sa_IN
[618] sa_IN.utf8
[619] sat_IN
[620] sat_IN.utf8
[621] sc_IT
[622] sc_IT.utf8
[623] sd_IN
[624] sd_IN@devanagari
[625] sd_IN.utf8
[626] sd_IN.utf8@devanagari
[627] se_NO
[628] se_NO.utf8
[629] shs_CA
[630] shs_CA.utf8
[631] sid_ET
[632] sid_ET.utf8
[633] si_LK
[634] si_LK.utf8
[635] sk_SK
[636] sk_SK.iso88592
[637] sk_SK.utf8
[638] slovak
[639] slovene
[640] slovenian
[641] sl_SI
[642] sl_SI.iso88592
[643] sl_SI.utf8
[644] so_DJ
[645] so_DJ.iso88591
[646] so_DJ.utf8
[647] so_ET
[648] so_ET.utf8
[649] so_KE
[650] so_KE.iso88591
[651] so_KE.utf8
[652] so_SO
[653] so_SO.iso88591
[654] so_SO.utf8
[655] spanish
[656] sq_AL
[657] sq_AL.iso88591
[658] sq_AL.utf8
[659] sq_MK
[660] sq_MK.utf8
[661] sr_ME
[662] sr_ME.utf8
[663] sr_RS
[664] sr_RS@latin
[665] sr_RS.utf8
[666] sr_RS.utf8@latin
[667] ss_ZA
[668] ss_ZA.utf8
[669] st_ZA
[670] st_ZA.iso88591
[671] st_ZA.utf8
[672] sv_FI
[673] sv_FI@euro
[674] sv_FI.iso88591
[675] sv_FI.iso885915@euro
[676] sv_FI.utf8
[677] sv_SE
[678] sv_SE.iso88591
[679] sv_SE.iso885915
[680] sv_SE.utf8
[681] swedish
[682] sw_KE
[683] sw_KE.utf8
[684] sw_TZ
[685] sw_TZ.utf8
[686] szl_PL
[687] szl_PL.utf8
[688] ta_IN
[689] ta_IN.utf8
[690] ta_LK
[691] ta_LK.utf8
[692] te_IN
[693] te_IN.utf8
[694] tg_TJ.utf8
[695] th_TH.utf8
[696] ti_ER
[697] ti_ER.utf8
[698] ti_ET
[699] ti_ET.utf8
[700] tig_ER
[701] tig_ER.utf8
[702] tk_TM
[703] tk_TM.utf8
[704] tl_PH
[705] tl_PH.iso88591
[706] tl_PH.utf8
[707] tn_ZA
[708] tn_ZA.utf8
[709] tr_CY
[710] tr_CY.iso88599
[711] tr_CY.utf8
[712] tr_TR
[713] tr_TR.iso88599
[714] tr_TR.utf8
[715] ts_ZA
[716] ts_ZA.utf8
[717] tt_RU
[718] tt_RU@iqtelif
[719] tt_RU.utf8
[720] tt_RU.utf8@iqtelif
[721] turkish
[722] ug_CN
[723] ug_CN.utf8
[724] uk_UA
[725] uk_UA.koi8u
[726] uk_UA.utf8
[727] unm_US
[728] unm_US.utf8
[729] ur_IN
[730] ur_IN.utf8
[731] ur_PK
[732] ur_PK.utf8
[733] uz_UZ
[734] uz_UZ@cyrillic
[735] uz_UZ.iso88591
[736] uz_UZ.utf8@cyrillic
[737] ve_ZA
[738] ve_ZA.utf8
[739] vi_VN
[740] vi_VN.utf8
[741] wa_BE
[742] wa_BE@euro
[743] wa_BE.iso88591
[744] wa_BE.iso885915@euro
[745] wa_BE.utf8
[746] wae_CH
[747] wae_CH.utf8
[748] wal_ET
[749] wal_ET.utf8
[750] wo_SN
[751] wo_SN.utf8
[752] xh_ZA
[753] xh_ZA.iso88591
[754] xh_ZA.utf8
[755] yi_US
[756] yi_US.cp1255
[757] yi_US.utf8
[758] yo_NG
[759] yo_NG.utf8
[760] yue_HK
[761] yue_HK.utf8
[762] zh_CN
[763] zh_CN.gb2312
[764] zh_CN.utf8
[765] zh_HK.utf8
[766] zh_SG
[767] zh_SG.gb2312
[768] zh_SG.utf8
[769] zh_TW.euctw
[770] zh_TW.utf8
[771] zu_ZA
[772] zu_ZA.iso88591
[773] zu_ZA.utf8
Please choose an option [1] :

Install sample tables and procedures. [Y/n]: Y

—————————————————————————-
Dynatune Dynamic Tuning:
Server Utilization

Please select the type of server to determine the amount of system resources
that may be utilized:

[1] Development (e.g. a developer’s laptop)
[2] General Purpose (e.g. a web or application server)
[3] Dedicated (a server running only EDB Postgres)
Please choose an option [2] :

—————————————————————————-
Dynatune Dynamic Tuning:
Workload Profile

Please select the type of workload this server will be used for:

[1] Transaction Processing (OLTP systems)
[2] General Purpose (OLTP and reporting workloads)
[3] Reporting (Complex queries or OLAP workloads)
Please choose an option [1] :

—————————————————————————-
Advanced Configuration

—————————————————————————-
PgBouncer Listening Port [6432]:

—————————————————————————-
Service Configuration

Autostart PgBouncer Service [Y/n]:Y

Autostart pgAgent Service [Y/n]:Y

Update Notification Service [Y/n]: Y

The Update Notification Service informs, downloads and installs whenever
security patches and other updates are available for your EDB Postgres Advanced
Server installation.

—————————————————————————-
Pre Installation Summary

Following settings will be used for installation:
Data Directory [/opt/edb/as9.6/data]:
Installation Directory: /opt/edb
Data Directory: /opt/edb/as9.6/data
WAL Directory: /opt/edb/as9.6/data/pg_xlog
Database Port: 5444
Database Superuser: enterprisedb
Operating System Account: enterprisedb
Database Service: edb-as-9.6
PgBouncer Listening Port: 6432
Press [Enter] to continue:
—————————————————————————-
Setup is now ready to begin installing EDB Postgres Advanced Server on your
computer.
Do you want to continue? [Y/n]: Y
—————————————————————————-
Please wait while Setup installs EDB Postgres Advanced Server on your computer.
Installing EDB Postgres Advanced Server
0% ______________ 50% ______________ 100%
########################################
Installing EDB Postgres Advanced Server …
Installing EDB Postgres Advanced Server pgAgent …
Installing EDB Postgres Connectors …
Installing EDB Postgres Migration Toolkit …
Installing EDB Postgres Advanced Server EDB*Plus …
Installing EDB Postgres Advanced Server Infinite Cache …
Installing EDB Postgres pgAdmin 4 …
Installing EDB Postgres Advanced Server Slony Replication …
Installing EDB Postgres pgPool-II …
Installing EDB Postgres pgPool-II Extensions …

Step 6:- Now I will do the post-installation steps:

A. Change the ownership of edb to enterprisedb:enterprisedb

[root@debasiseric4 ]# cd /opt/
[root@debasiseric4 opt]# chown -R enterprisedb.enterprisedb edb/
[root@debasiseric4 opt]# ll
total 8
drwxr-xr-x. 9 enterprisedb enterprisedb 4096 Mar 24 14:36 edb

B. Set profile

-bash-4.1$ pwd
/opt/edb/as9.6
-bash-4.1$ cat pgplus_env.sh

# Environment

export PATH=/opt/edb/as9.6/bin:$PATH
export EDBHOME=/opt/edb/as9.6
export PGDATA=/opt/edb/as9.6/data
export PGDATABASE=edb
# export PGUSER=enterprisedb
export PGPORT=5444
export PGLOCALEDIR=/opt/edb/as9.6/share/locale

-bash-4.1$ cat .profile
. pgplus_env.sh
-bash-4.1$ pwd
/opt/PostgresPlus/9.5AS

C. Stop and start the server using pg_ctl or the service script

-bash-4.1$ pg_ctl -D $PGDATA stop -mf
[root@debasiseric4 ~]# /etc/init.d/ppas-9.6 stop -mf

[root@debasiseric4 ~]# pg_ctl -D $PGDATA start -w
-bash-4.1$ /etc/init.d/ppas-9.6 start
pg_ctl: server is running (PID: 4869)
/opt/edb/as9.6/bin/edb-postgres "-D" "/opt/PostgresPlus/9.5AS/data"
-bash-4.1$ /etc/init.d/ppas-9.6 status
INFO: [PID: 4869]
INFO: [CMD:]
MSG: [ppas-9.6 is running]
INFO: [Please see service script file /var/log/ppas-9.6/ppas-9.6_script.log for details]

 

Oracle DBaaS 1z0-160 part 33 :- Patching an Oracle Cloud RAC database

Step 1:- Please check the available patch and run the precheck.

Step 2:- The precheck failed, indicating that the cloud tooling needs to be updated first.

Step 3:- Update the RDK (cloud tooling) using the command below.

[opc@pocracdemo1 ~]$ raccli update rdk -tag latest

{
  "jobId" : "41",
  "requestStatus" : "SUCCESS"
}

Step 4:-Please apply patch now

From command line

Apply a patch to the Oracle Database home using the apply patch subcommand with the -db option:

$ raccli apply patch -db -tag tag-name

Where tag-name is the name of the patch. To find out the tag name for the latest available patch, see What’s New for Oracle Database Cloud Service.

Oracle VirtualBox VM :- How to create network settings and interfaces in Oracle VirtualBox to reach the internet as well as connect from the host

This note describes how to connect to an Oracle VirtualBox VM directly from your host using ssh, as well as how to reach the internet from the VM.

I will use two types of VM network interface:

a)NAT network, for internet access from the VM

b)Host-only adapter, for ssh connections from the host to the VM

Step 1:- First I will go to File -> Preferences and check/create a NAT network with the settings below in Oracle VirtualBox:

 

Step 2:- Create a host-only network as well.

The DHCP server should not be enabled here.

Step 3:-Please create first network interface (NAT)

Step 4:-Please create host-only adapter interface

Step 5:- Please check the persistent net rules under /etc/udev/rules.d and modify the MAC address if needed. The MAC address should be the same as the one visible in Step 3 and Step 4.

Step 6:- Please create ifcfg-eth1 and ifcfg-eth2 using the settings below (a sample sketch follows). eth1 is for NAT and eth2 is for the host-only adapter. Set the HWADDR value to the MAC address from the previous steps.
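A minimal sketch of the two files for a RHEL/OEL-style system, assuming the host-only IP 169.254.156.100 used later in this note; the HWADDR values are placeholders you must replace with your adapters' real MAC addresses:

# /etc/sysconfig/network-scripts/ifcfg-eth1  (NAT, address via DHCP)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=dhcp
HWADDR=08:00:27:XX:XX:X1

# /etc/sysconfig/network-scripts/ifcfg-eth2  (host-only, static address)
DEVICE=eth2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=169.254.156.100
NETMASK=255.255.255.0
HWADDR=08:00:27:XX:XX:X2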

Step 7:- Now reboot your VM, or restart the individual network interfaces (eth1 and eth2) using the ifdown and ifup commands.

Step 8:-Please check ifconfig -a

Step 9:- Please enable root login

Set PermitRootLogin to yes in /etc/ssh/sshd_config, then restart sshd.

[root@rac1 yum.repos.d]# service sshd restart

Redirecting to /bin/systemctl restart sshd.service

Step 10:- Now you can check whether you can connect from the host using IP 169.254.156.100. You can also check that ping to google.com works after logging in to the VM.

Oracle DBaaS 1z0-160 part 32 :- On-premises database migration to Oracle Cloud using conventional Data Pump Export/Import


This is the simplest migration if you only need to move one schema to Oracle Cloud.

Source:- On-premises database (hostname: gnssrv01)

Target:- Oracle Cloud database (hostname: myclone)

Step 1:-On the on-premises database host, create an operating system directory to use for the on-premises database export files.

$ mkdir /u01/app/oracle/admin/orcl/dpdump/for_cloud

Step 2:-Create a directory object in the on-premises database to reference the operating system directory.

[oracle@gnssrv01 admin]$ sqlplus test/test@orclpdb
SQL*Plus: Release 12.2.0.1.0 Production on Thu Apr 26 07:38:24 2018

SQL> CREATE DIRECTORY dp_for_cloud AS '/u01/app/oracle/admin/orcl/dpdump/for_cloud';
Directory created.

Step 3:- Take an export dump of the schema. On the on-premises database host, invoke Data Pump Export as the SYSTEM user (or another user with the DATAPUMP_EXP_FULL_DATABASE role) and export the on-premises schema. Provide the password for the user when prompted.

oracle@gnssrv01 admin]$ expdp system/XXX@orclpdb SCHEMAS=test DIRECTORY=dp_for_cloud
Export: Release 12.2.0.1.0 - Production on Thu Apr 26 07:40:38 2018
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/********@orclpdb SCHEMAS=test DIRECTORY=dp_for_cloud
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "TEST"."MYTAB"                              9.613 MB   72620 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Apr 26 07:42:07 2018 elapsed 0 00:01:18

Step 4:-On the Database Cloud Service compute node, create a directory for the dump file.

$ mkdir /u01/app/oracle/admin/ORCL/dpdump/from_onprem

Step 5:-On the on-premises database host, use the SCP utility to transfer the dump file to the Database Cloud Service compute node.

Please follow below note on how to use private key to transfer using scp to remote host.

[oracle@gnssrv01 ~]$ scp -i my_priv_key /u01/app/oracle/admin/orcl/dpdump/for_cloud/expdat.dmp oracle@144.21.83.108:/u01/app/oracle/admin/ORCL/dpdump/from_onprem/expdat.dmp


Step 6:-On the Database Cloud Service compute node, invoke SQL*Plus and log in to the database as the SYSTEM user.

[oracle@myclone admin]$ sqlplus system/Bppimt#123@orclpdb
SQL*Plus: Release 12.2.0.1.0 Production on Fri Apr 27 03:32:50 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Last Successful login time: Fri Apr 27 2018 03:32:24 +00:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CREATE DIRECTORY dp_from_onprem AS '/u01/app/oracle/admin/ORCL/dpdump/from_onprem';
Directory created.

Step 7:-Now import the schema in oracle cloud database

[oracle@myclone admin]$ impdp system/Bppimt#123@orclpdb SCHEMAS=test DIRECTORY=dp_from_onprem

Import: Release 12.2.0.1.0 - Production on Fri Apr 27 03:34:00 2018

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/********@orclpdb SCHEMAS=test DIRECTORY=dp_from_onprem
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."MYTAB" 9.613 MB 72620 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at Fri Apr 27 03:34:38 2018 elapsed 0 00:00:29

Cache buffers chains (CBC) latch: simple queries to investigate

For a deeper understanding of CBC, please refer to the links below:

http://www.proligence.com/pres/ioug14/2014_431_Nanda_ppr.pdf

and

Video tutorial from ORAPUB.

I am just going to mention some useful queries to investigate or identify CBC latch contention.

Step 1:- You, as a user, fire a SQL query from the client (the demo table here is called cbc):

select * from cbc;

Step 2:- Oracle determines the file_id and block_id of the block, to locate it in the buffer cache or fetch it from physical disk. You can see them for a given row with:

select
col,
dbms_rowid.rowid_relative_fno(rowid) rfile#,
dbms_rowid.rowid_block_number(rowid) block#
from cbc;


Step 3:- For example, take file_id = 6 and block_id = 2288143. Oracle derives the data block address (DBA) from these:

SQL> select dbms_utility.make_data_block_address(6,2288143) from dual;

 

Step 4:- If there were only 3 chains, Oracle would use a modulo function that returns the remainder after dividing the DBA by 3:

SQL> select mod(27453967,3) from dual;

MOD(27453967,3)
---------------
1

So this block will be put into chain #1.

Simulation of CBC test:-

From one session, run the following:-
declare
 aa varchar2(1000);
 begin
 for i in 1..1000000 loop
 select count(1) into aa from cbc ;
 end loop;
 end;
 /
From another session, run the following:-
declare
 begin
 for i in 1..1000000 loop
 update CBC set c1='MAITY';
 end loop;
 commit;
 end;
 /
From oratop, I can see "latch: cache buffers chains" now.

Mechanism:-

1. Once Oracle has determined the file_id and block_id, it first computes a hash value from the DBA.
2. After determining the hash value, Oracle acquires the latch protecting that hash bucket and walks the chain, checking buffer headers to see whether the block is already in memory or needs to be fetched from disk.

When a block comes into the buffer cache, Oracle applies a hash function to its DBA to determine the buffer chain number and places the block in a buffer on that chain alone. Similarly, while looking up a specific buffer, Oracle applies the same hash function to the DBA, instantly knows the chain on which the buffer will be found, and walks that specific chain only. This makes accessing a buffer much faster than searching the entire cache.

 

Why CBC (cache buffers chains) contention occurs

Problem:-

a) When different sessions try to acquire the latch covering chain #1 (i.e. your buffer is very popular), they have to spin and sleep several times before getting it.
b) The same happens when different sessions need chain #1 and chain #2 but both chains are protected by the same latch.

Remember, a latch is exclusive: while one session is holding it, the other sessions have to queue (spinning and sleeping).

No problem:-

When different sessions acquire the latches for chain #1 and chain #4 and those chains are protected by different latches, there is no contention.

Now, how can we identify a CBC problem?

Step 5:- Let us find the latch number of "cache buffers chains":

SQL> select latch# from v$latch where name = 'cache buffers chains';

LATCH#
----------
228

Step 6:- If you check the values of the two hidden parameters that size the hash structure, you will see something like (a sketch of how to query them follows):
_db_block_hash_buckets 524288
_db_block_hash_latches 16384
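A minimal sketch of querying these underscore parameters (run as SYS, since X$ views are restricted):

select i.ksppinm name, v.ksppstvl value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm in ('_db_block_hash_buckets','_db_block_hash_latches');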

Step 7:- Now check the child latches with the most sleeps and gets by executing the following query, then re-run it after a one-minute gap and check whether sleeps and gets are increasing.

select * from v$latch_children where latch#=228 order by sleeps desc;

Step 8:- You can also identify the latch (and then the buffers) behind the cache buffers chains wait for a session experiencing the CBC problem. Let us say SID 208 is experiencing CBC contention.

select p1, p1raw, p1text
 from v$session where sid = 208;
Please take note of p1raw, which is the address of the child latch the session is waiting on.

Step 9:- Verify the gets, misses and sleeps for that child latch address from v$latch_children:

select gets, misses, sleeps, name
 from v$latch_children where addr = '000000014AFC7A70';

Step 10:- Also determine the touch count (tch) of the blocks protected by that latch.

select dbarfil, dbablk, tch
 from x$bh
 where hladdr = '000000014AFC7A70';

Identifying an overall CBC problem

Step 11:- You can also check from v$session_wait or v$active_session_history:

select p1raw,count(*) from v$session_wait where event like '%cache%buffers%' group by event, p1raw order by 2 desc

or

select p1, count(*)
 from v$active_session_history
 where sample_time > sysdate - 1/24
 and event = 'latch: cache buffers chains'
 group by event, p1
 order by 2 desc

Step 12:- The easiest way to find the object is to dump the block and get the object ID from the dump file. Here is how you dump a block (the file and block numbers below are examples):

alter system dump datafile 6 block min 220 block max 220;
Alternatively, we can get the file# and block number of the buffers protected by a given child latch address:
select hladdr, file#, dbablk, decode(state,1,'cur ',3,'CR',state) ST, tch
 from x$bh where hladdr in
 (select addr from (select addr from v$latch_children where addr='000000B9CA3336A0'
 order by sleeps, misses,immediate_misses desc )where rownum <2)

Step 13:- Now we can identify the segment name (a table or an index) using the query below.

select segment_name
 from dba_extents
 where file_id = 48
 and 94182 between block_id and block_id + blocks - 1
 and rownum = 1

Solving the problem:-

a) CBC latch waits are caused by the popularity of blocks among different processes. If you reduce the popularity, you reduce the chance that two processes will wait for the same buffer. Note: you can't completely eliminate the waits; you can only reduce them. To reduce them, reduce logical I/O. For instance, nested loops revisit the same object several times, causing the same buffers to be accessed repeatedly. If you rewrite the query to avoid nested loops, you significantly reduce the chance that one process will wait for the CBC latch.
b) You can also reduce the number of rows per block. The fewer the rows in a block, the less popular the block will be. You can reduce the number of rows in a block by increasing PCTFREE or using ALTER TABLE ... MINIMIZE RECORDS_PER_BLOCK (see the sketch after this list).
c) If that does not help, you can partition the table. That forces the data block address to be recomputed for each partition, making it more likely that the buffers will end up in different buffer chains and hence reducing competition for the same chain.
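A minimal sketch of the PCTFREE approach from option b), assuming the demo table cbc used earlier and a hypothetical index cbc_pk:

-- Leave most of each block free so fewer rows share a block
alter table cbc pctfree 90;
-- Rebuild the table so existing rows are redistributed with the new PCTFREE
alter table cbc move;
-- The move marks dependent indexes UNUSABLE, so rebuild them (index name is hypothetical)
alter index cbc_pk rebuild;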