Postgres Database Monitoring with Grafana Using a Dashboard Template

1. We need to make sure that Grafana is already installed.

2. We need the following PostgreSQL dashboard to be imported into Grafana:

https://grafana.com/dashboards/4164

3. Please download and install InfluxDB on the PostgreSQL server to be added to Grafana monitoring.

wget https://dl.influxdata.com/influxdb/releases/influxdb-1.7.4.x86_64.rpm

sudo yum localinstall influxdb-1.7.4.x86_64.rpm

4. Please configure and restart InfluxDB

Please change influxdb.conf

/etc/influxdb/influxdb.conf

We need to enable authentication:

# Determines whether user authentication is enabled over HTTP/HTTPS.

   auth-enabled = true

Please restart influxdb

# systemctl start influxdb

# systemctl status influxdb

# systemctl enable influxdb
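The auth-enabled edit can be scripted rather than done by hand; a minimal sketch against a throwaway copy (the [http] stanza below is a stand-in for the real /etc/influxdb/influxdb.conf):

```shell
# Flip auth-enabled to true in a scratch copy standing in
# for /etc/influxdb/influxdb.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
[http]
  # Determines whether user authentication is enabled over HTTP/HTTPS.
  # auth-enabled = false
EOF

# Uncomment the setting and set it to true
sed -i 's|# *auth-enabled *= *false|auth-enabled = true|' "$conf"
grep 'auth-enabled = true' "$conf"
```

Running the same sed against the real config (as root) and restarting influxdb has the same effect as the manual edit above.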

5. Please configure and restart Telegraf

Please create the telegraf user and create the database in InfluxDB:

# curl -XPOST "http://localhost:8086/query" --data-urlencode "q=CREATE USER telegraf WITH PASSWORD 'metricsmetricsmetricsmetrics' WITH ALL PRIVILEGES"

# influx -username 'telegraf' -password 'metricsmetricsmetricsmetrics'

[root@xxx ~]#  influx -username 'telegraf' -password 'metricsmetricsmetricsmetrics'

Connected to http://localhost:8086 version 1.7.4

InfluxDB shell version: 1.7.4

Enter an InfluxQL query

>create database telegraf

> SHOW DATABASES

name: databases

name

----

_internal

telegraf

Please install telegraf now

wget https://dl.influxdata.com/telegraf/releases/telegraf-1.9.4-1.x86_64.rpm

sudo yum localinstall telegraf-1.9.4-1.x86_64.rpm

  • Please configure telegraf.conf to enable PostgreSQL monitoring. Change the IP to your Postgres server's address accordingly.

/etc/telegraf/telegraf.conf

###############################################################################

#                            OUTPUT PLUGINS                                   #

###############################################################################

# Configuration for sending metrics to InfluxDB

[[outputs.influxdb]]

  ## The full HTTP or UDP URL for your InfluxDB instance.

  ##

  ## Multiple URLs can be specified for a single cluster, only ONE of the

  ## urls will be written to each interval.

  # urls = ["unix:///var/run/influxdb.sock"]

  # urls = ["udp://127.0.0.1:8089"]

   urls = ["http://10.x.x.x:8086"]

  ## The target database for metrics; will be created as needed.

   database = "telegraf"

  ## If true, no CREATE DATABASE queries will be sent.  Set to true when using

  ## Telegraf with a user without permissions to create databases or when the

  ## database already exists.

  # skip_database_creation = false

  ## Name of existing retention policy to write to.  Empty string writes to

  ## the default retention policy.  Only takes effect when using HTTP.

  # retention_policy = ""

  ## Write consistency (clusters only), can be: "any", "one", "quorum", "all".

  ## Only takes effect when using HTTP.

   write_consistency = "any"

  ## Timeout for HTTP messages.

  # timeout = "5s"

  ## HTTP Basic Auth

   username = "telegraf"

   password = "metricsmetricsmetricsmetrics"

# # Read metrics from one or many postgresql servers

 [[inputs.postgresql]]

#   ## specify address via a url matching:

#   ##   postgres://[pqgotest[:password]]@localhost[/dbname]\

#   ##       ?sslmode=[disable|verify-ca|verify-full]

#   ## or a simple string:

#   ##   host=localhost user=pqotest password=... sslmode=... dbname=app_production

#   ##

#   ## All connection parameters are optional.

#   ##

#   ## Without the dbname parameter, the driver will default to a database

#   ## with the same name as the user. This dbname is just for instantiating a

#   ## connection with the server and doesn't restrict the databases we are trying

#   ## to grab metrics for.

#   ##

address = "host=10.x.x.x user=pg password=pgdbpass dbname=postgres port=1523 sslmode=disable"
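Before restarting Telegraf it can help to sanity-check the libpq-style key=value string; a shell sketch that splits it into variables so a typo surfaces early (the placeholder host, user and port values are the ones from the config above):

```shell
# Hypothetical sanity check: split a libpq-style key=value
# connection string into pg_* shell variables.
addr="host=10.x.x.x user=pg password=pgdbpass dbname=postgres port=1523 sslmode=disable"

for pair in $addr; do
  key=${pair%%=*}     # text before the first '='
  val=${pair#*=}      # text after the first '='
  eval "pg_$key=\$val"
done

echo "connecting to $pg_host:$pg_port as $pg_user (db=$pg_dbname, sslmode=$pg_sslmode)"
```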

  • Please restart telegraf now

# systemctl restart telegraf

6. Now log in to Grafana and create three data sources:

  • InfluxDB data source configuration
  • PostgreSQL master node configuration
  • PostgreSQL slave node configuration

Then import the dashboard from step 2.

PostgreSQL DBA part 7: PostgreSQL full and incremental backup and restore using the BART 2.0 tool

BART is a backup and recovery tool for EDB Postgres databases.

EDB Backup and Recovery Tool (BART) is a key component of an enterprise-level Postgres-based data management strategy. BART implements retention policies and point-in-time recovery requirements for large-scale Postgres deployments. BART 2.0 adds block-level incremental backup.

Source: https://www.enterprisedb.com/products/edb-postgres-platform/edb-backup-and-recovery-tool

A. Full backup and restore using BART 2.0 for EDB

 

1. Prerequisite: this step is required to configure YUM so the BART tool can be installed from the internet.

First enable internet access on the server via the proxy settings provided by your sysadmin.
Go to /etc/profile and uncomment the last lines:

proxc="http://172.16.0.65:3128/"
HTTP_PROXY=$proxc
HTTPS_PROXY=$proxc
FTP_PROXY=$proxc
http_proxy=$proxc
https_proxy=$proxc
ftp_proxy=$proxc
export HTTP_PROXY HTTPS_PROXY FTP_PROXY http_proxy https_proxy ftp_proxy
Go to /etc/wgetrc and uncomment:

http_proxy = http://172.16.0.65:3128/
https_proxy = http://172.16.0.65:3128/
ftp_proxy = http://172.16.0.65:3128/

Go to /etc/yum.conf and uncomment:

proxy=http://172.16.0.65:3128
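All three files point at the same squid endpoint, so the settings can be driven from a single variable; a sketch using the 172.16.0.65:3128 proxy from the example:

```shell
# Single source of truth for the proxy endpoint used by
# /etc/profile, /etc/wgetrc and /etc/yum.conf
proxy="http://172.16.0.65:3128/"

export HTTP_PROXY="$proxy" HTTPS_PROXY="$proxy" FTP_PROXY="$proxy"
export http_proxy="$proxy" https_proxy="$proxy" ftp_proxy="$proxy"

# Lines that would go into /etc/wgetrc and /etc/yum.conf
printf 'http_proxy = %s\n' "$proxy"
printf 'proxy=%s\n' "${proxy%/}"   # yum.conf example has no trailing slash
```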

2. Configure yum repository for BART tool

2.1 Configure your /etc/yum.repos.d with the edb repository information
rpm -Uvh http://yum.enterprisedb.com/edbrepos/edb-repo-9.6-4.noarch.rpm
(** Download the mentioned RPM)
2.2 This RPM creates the edb.repo file in the /etc/yum.repos.d/ directory

2.3 Edit the /etc/yum.repos.d/edb.repo file to enable the repositories you want to use ([edbas96, enterprisedb-tools, enterprisedb-dependencies]) by changing enabled=0 to enabled=1, and specify your username/password in the connection string.

2.4 BART needs some extra packages, for which we configure the EPEL yum repositories below:

[root@rac1 yum.repos.d]# cat epel.repo
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[root@rac1 yum.repos.d]# cat epel-testing.repo
[epel-testing]
name=Extra Packages for Enterprise Linux 6 - Testing - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/testing/6/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-epel6&arch=$basearch
failovermethod=priority
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-testing-debuginfo]
name=Extra Packages for Enterprise Linux 6 - Testing - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/testing/6/$basearch/debug
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-debug-epel6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-testing-source]
name=Extra Packages for Enterprise Linux 6 - Testing - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/testing/6/SRPMS
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=testing-source-epel6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

3. Run the following yum install commands:

yum install epel-release*
yum install edb-as96-server-client (if installing on a host without the psql client)
yum install edb-bart20

4. Please set the profile now

 

Log in as the enterprisedb user from root and set the profile:

-bash-4.1$ vi .profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs
export PATH=/usr/edb/bart2.0/bin/:$PATH
PATH=$PATH:$HOME/bin

export PATH
alias config='vi /usr/edb/bart2.0/etc/bart.cfg'

. pgplus_env.sh

5. Please create directory where backup needs to be taken

[root@rac1 bin]# mkdir -p /u01/edb_bkp
[root@rac1 bin]# chmod -R 777 /u01/edb_bkp
[root@rac1 bin]# chown enterprisedb:enterprisedb /u01/edb_bkp

6. Edit the BART configuration file (/usr/edb/bart2.0/etc/bart.cfg)

 

[BART]
bart_host= enterprisedb@127.0.0.1
backup_path = /u01/edb_bkp
pg_basebackup_path = /u01/edb/as9.6/bin/pg_basebackup
logfile = /tmp/bart.log
scanner_logfile = /tmp/bart_scanner.log
xlog_method = stream
retention_policy = 1 BACKUPS

[PPAS96]
host = 127.0.0.1
port = 5444
user = enterprisedb
cluster_owner = enterprisedb
description = "PPAS 96 server"
backup_name = edb_%year-%month-%dayT%hour:%minute
archive_command='cp %p %a/%f'
#allow_incremental_backups=enabled
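The %year/%month/%day/%hour/%minute placeholders in backup_name map onto ordinary date fields, so date(1) can preview the name a backup will receive (a rough equivalent for illustration, not BART's own expansion code):

```shell
# BART expands backup_name = edb_%year-%month-%dayT%hour:%minute
# at backup time; date(1) can preview the same string.
name=$(date '+edb_%Y-%m-%dT%H:%M')
echo "$name"
```

This matches the BACKUP NAME values seen later in the transcripts, e.g. edb_2018-05-28T18:44.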

7. Enable passwordless SSH to the same user

-bash-4.1$ chown enterprisedb .ssh
-bash-4.1$ chgrp enterprisedb .ssh
-bash-4.1$ chmod 700 .ssh
-bash-4.1$
-bash-4.1$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/u01/edb/as9.6/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /u01/edb/as9.6/.ssh/id_rsa.
Your public key has been saved in /u01/edb/as9.6/.ssh/id_rsa.pub.
The key fingerprint is:
f7:72:e1:17:c3:5d:c9:3a:f8:e9:eb:9c:6e:e0:53:be enterprisedb@edb2prod
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . .|
| o.|
| .....|
| S ...o+ .|
| . +.ooo |
| o Bo. |
| =o+. |
| =Eo |
+-----------------+
-bash-4.1$ pwd
/u01/edb/as9.6
-bash-4.1$ cd .ssh/
-bash-4.1$ ll
total 12
-rw-------. 1 enterprisedb enterprisedb 1675 Nov 7 21:13 id_rsa
-rw-r--r--. 1 enterprisedb enterprisedb 403 Nov 7 21:13 id_rsa.pub
-rw-r--r--. 1 enterprisedb enterprisedb 1968 Nov 3 21:47 known_hosts
-bash-4.1$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
-bash-4.1$ chmod 600 ~/.ssh/authorized_keys
-bash-4.1$ ssh enterprisedb@127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
RSA key fingerprint is b7:b2:44:3f:5b:19:8a:7d:13:a3:a9:45:67:45:7d:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '127.0.0.1' (RSA) to the list of known hosts.
Last login: Thu Apr 20 10:33:36 2017 from 172.19.8.95

8. Add the appropriate entries to the pg_hba.conf file:

local all enterprisedb trust
host replication enterprisedb 127.0.0.1/32 trust
host template1 enterprisedb 127.0.0.1/32 trust

9. Now we can check whether the backup configuration is correct

-bash-4.1$ bart check-config
INFO: Verifying that pg_basebackup is executable
INFO: success - pg_basebackup(/u01/edb/as9.6/bin/pg_basebackup) returns version 9.600000

-bash-4.1$ bart check-config -s ppas96
INFO: Checking server ppas96
INFO: Verifying cluster_owner and ssh/scp connectivity
INFO: success
INFO: Verifying user, host, and replication connectivity
INFO: success
INFO: Verifying that user is a database superuser
INFO: success
INFO: Verifying that cluster_owner can read cluster data files
INFO: success
INFO: Verifying that you have permission to write to vault
INFO: success
INFO: /u01/edb_bkp/ppas96
INFO: Verifying database server configuration
INFO: success
INFO: Verifying that WAL archiving is working
INFO: waiting 30 seconds for /u01/edb_bkp/ppas96/archived_wals/000000020000000000000027
INFO: waiting 20 seconds for /u01/edb_bkp/ppas96/archived_wals/000000020000000000000027
INFO: waiting 10 seconds for /u01/edb_bkp/ppas96/archived_wals/000000020000000000000027
ERROR: can't verify archiving: required file has not reached archived_wals directory

(This error is expected at this point: archive_command is only set by the INIT step below and takes effect after a database server restart; the same check succeeds later in part B.)

10. Now we can initialize the catalog

-bash-4.1$ bart INIT -s PPAS96 -o
INFO: setting archive_command for server 'ppas96'
WARNING: archive_command is set. server restart is required

11. Take a full backup now

-bash-4.1$ bart backup -s ppas96 -F p
INFO: creating backup for server 'ppas96'
INFO: backup identifier: '1527547489386'
106132/106132 kB (100%), 1/1 tablespace

INFO: backup completed successfully
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1527547489386
BACKUP NAME: edb_2018-05-28T18:44
BACKUP PARENT: none
BACKUP LOCATION: /u01/edb_bkp/ppas96/1527547489386
BACKUP SIZE: 118.45 MB
BACKUP FORMAT: plain
BACKUP TIMEZONE: US/Eastern
XLOG METHOD: stream
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 000000020000000000000029
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-05-28 18:44:50 EDT
STOP TIME: 2018-05-28 18:44:53 EDT
TOTAL DURATION: 3 sec(s)

12. Verify the backup now

-bash-4.1$ bart show-backups
SERVER NAME BACKUP ID BACKUP NAME BACKUP PARENT BACKUP TIME BACKUP SIZE WAL(s) SIZE WAL FILES STATUS

ppas96 1527547489386 edb_2018-05-28T18:44 none 2018-05-28 18:44:53 EDT 118.45 MB 0.00 bytes 0 active
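When scripting around BART, the show-backups output can be parsed with awk; a minimal sketch using the captured row above (the column positions assume the space-separated layout shown):

```shell
# Extract the backup ID (second column) from a captured
# `bart show-backups` row for use in later restore commands.
row='ppas96 1527547489386 edb_2018-05-28T18:44 none 2018-05-28 18:44:53 EDT 118.45 MB 0.00 bytes 0 active'
id=$(echo "$row" | awk '{print $2}')
echo "$id"
```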

13. Create a demo database (tst) and a sample table. We will use them to verify that the database and table can be restored after a full restore.

psql
edb=# create database tst;
CREATE DATABASE

edb=# \c tst
You are now connected to database "tst" as user "enterprisedb".
tst=# create table article(id int);
CREATE TABLE
tst=# insert into article values (1);
INSERT 0 1
tst=# table article;
id
----
1
(1 row)

14. Create the full backup again

-bash-4.1$ bart backup -s ppas96 -F p
INFO: creating backup for server 'ppas96'
INFO: backup identifier: '1527547795678'
117720/117720 kB (100%), 1/1 tablespace

INFO: backup completed successfully
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1527547795678
BACKUP NAME: edb_2018-05-28T18:49
BACKUP PARENT: none
BACKUP LOCATION: /u01/edb_bkp/ppas96/1527547795678
BACKUP SIZE: 129.58 MB
BACKUP FORMAT: plain
BACKUP TIMEZONE: US/Eastern
XLOG METHOD: stream
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 00000002000000000000002B
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-05-28 18:49:59 EDT
STOP TIME: 2018-05-28 18:50:00 EDT
TOTAL DURATION: 1 sec(s)

15. Simulate a crash by renaming the data directory, then create an empty one

-bash-4.1$ cd /u01/edb/as9.6/
-bash-4.1$ mv data data_old
-bash-4.1$ mkdir -p /u01/edb/as9.6/data

16. Please restore the full backup now

-bash-4.1$ bart RESTORE -s PPAS96 -i 1527547795678 -p /u01/edb/as9.6/data/
INFO: restoring backup '1527547795678' of server 'ppas96'
INFO: base backup restored
INFO: archiving is disabled
INFO: permissions set on $PGDATA
INFO: restore completed successfully

17. Before starting the cluster, let's create a recovery.conf file.

Open /u01/edb/as9.6/data/recovery.conf and add the following line:

restore_command='cp /u01/edb/as9.6/data/archived_wals/%f %p'
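restore_command is run by PostgreSQL during recovery with %f replaced by the WAL file name and %p by the destination path. A small sketch of that expansion (the segment name and destination below are made-up examples, not values from this restore):

```shell
# Emulate how PostgreSQL expands %f (WAL file name) and
# %p (destination path) in restore_command at recovery time.
restore_command='cp /u01/edb/as9.6/data/archived_wals/%f %p'

f='000000020000000000000029'   # hypothetical WAL segment name
p='pg_xlog/RECOVERYXLOG'       # hypothetical destination path

cmd=$(printf '%s' "$restore_command" | sed "s|%f|$f|; s|%p|$p|")
echo "$cmd"
```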

18. Please restart the database

-bash-4.1$ pg_ctl -D /u01/edb/as9.6/data -mf start
server starting
-bash-4.1$ 2018-05-29 19:02:37 EDT LOG: redirecting log output to logging collector process
2018-05-29 19:02:37 EDT HINT: Future log output will appear in directory "pg_log".

19. After the recovery finishes, recovery.conf is automatically renamed to recovery.done; otherwise PostgreSQL would go into recovery mode again after a restart.

-bash-4.1$ cat recovery.done
restore_command='cp /u01/edb/as9.6/data/archived_wals/%f %p'

20. Now we will check whether the database and table were restored successfully by the full restore

-bash-4.1$ psql
psql.bin (9.6.2.7)
Type "help" for help.

edb=# \c tst
You are now connected to database "tst" as user "enterprisedb".
tst=# table article;
id
----
1
(1 row)

B. Incremental backup and restore using BART 2.0 for EDB

 

1. Enable allow_incremental_backups in the BART configuration file

Set the parameter allow_incremental_backups to enabled in the BART configuration file. Restart the server, and after that check whether the incremental backup feature has been enabled.

2. With BART 2.0 there is a new component called the "WAL scanner". The binary is in the same location as BART itself.

3. Start the WAL scanner

The WAL scanner (as the name implies) scans the WALs that are archived from your PostgreSQL instances to the BART host for changes, and then writes a so-called "modified block map" (MBM); this is the reason WAL compression is not supported with incremental backups. As this should happen as soon as the WALs arrive on the BART host, the WAL scanner should run all the time.
We need to start it in daemon mode:

-bash-4.1$ /usr/edb/bart2.0/bin/bart-scanner --daemon
-bash-4.1$ ps -ef|grep bart
54323 27284 1 0 20:30 ? 00:00:00 /usr/edb/bart2.0/bin/bart-scanner --daemon
54323 27285 27284 0 20:30 ? 00:00:00 /usr/edb/bart2.0/bin/bart-scanner --daemon
54323 27318 4555 0 20:30 pts/0 00:00:00 grep bart

4. Re-check the configuration settings:

-bash-4.1$ bart check-config -s PPAS96
INFO: Checking server PPAS96
INFO: Verifying cluster_owner and ssh/scp connectivity
INFO: success
INFO: Verifying user, host, and replication connectivity
INFO: success
INFO: Verifying that user is a database superuser
INFO: success
INFO: Verifying that cluster_owner can read cluster data files
INFO: success
INFO: Verifying that you have permission to write to vault
INFO: success
INFO: /u01/edb_bkp/ppas96
INFO: Verifying database server configuration
INFO: success
INFO: Verifying that WAL archiving is working
INFO: waiting 30 seconds for /u01/edb_bkp/ppas96/archived_wals/00000003000000000000002C
INFO: success
INFO: Verifying that bart-scanner is configured and running
INFO: success

5. Execute a full database backup after enabling the WAL scanner:

-bash-4.1$ bart backup -s PPAS96 --backup-name full0
INFO: creating backup for server 'ppas96'
INFO: backup identifier: '1527640567564'
pg_basebackup: WAL streaming can only be used in plain mode
Try "pg_basebackup --help" for more information.

ERROR: backup failed for server 'ppas96'
command failed with exit code 1
Try "pg_basebackup --help" for more information.

The backup failed because pg_basebackup can stream WAL only with the plain backup format, and this backup used the default tar format. Change xlog_method = fetch in the BART config file and execute the following commands:

-bash-4.1$ bart INIT -s PPAS96 -o
INFO: setting archive_command for server 'ppas96'
WARNING: archive_command is set. server restart is required
-bash-4.1$ bart backup -s PPAS96 --backup-name full0
INFO: creating backup for server 'ppas96'
INFO: backup identifier: '1527640746966'
134117/134117 kB (100%), 1/1 tablespace

INFO: backup completed successfully
INFO: backup checksum: 7de18e9077a038e2944ee4b0f92e2612 of base.tar
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1527640746966
BACKUP NAME: full0
BACKUP PARENT: none
BACKUP LOCATION: /u01/edb_bkp/ppas96/1527640746966
BACKUP SIZE: 130.97 MB
BACKUP FORMAT: tar
BACKUP TIMEZONE: US/Eastern
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 1
ChkSum File
7de18e9077a038e2944ee4b0f92e2612 base.tar

TABLESPACE(s): 0
START WAL LOCATION: 00000003000000000000002F
STOP WAL LOCATION: 00000003000000000000002F
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-05-29 20:39:08 EDT
STOP TIME: 2018-05-29 20:39:08 EDT
TOTAL DURATION: 0 sec(s)
-bash-4.1$ bart show-backups
SERVER NAME BACKUP ID BACKUP NAME BACKUP PARENT BACKUP TIME BACKUP SIZE WAL(s) SIZE WAL FILES STATUS

ppas96 1527640746966 full0 none 2018-05-29 20:39:09 EDT 130.98 MB 16.00 MB 1 active
ppas96 1527547795678 edb_2018-05-28T18:49 none 2018-05-28 18:50:00 EDT 129.58 MB 48.00 MB 3 active
ppas96 1527547489386 edb_2018-05-28T18:44 none 2018-05-28 18:44:53 EDT 118.45 MB 0.00 bytes 0 active

6. Now create a demo table and insert some rows

-bash-4.1$ psql
psql.bin (9.6.2.7)
Type "help" for help.

edb=# \c tst
You are now connected to database "tst" as user "enterprisedb".
tst=# create table chk_incr(c1 char(10));
CREATE TABLE
tst=# insert into chk_incr values('DEBASIS');
INSERT 0 1
tst=# table chk_incr;
c1
------------
DEBASIS
(1 row)

7. Please take an incremental backup now, named incr1

-bash-4.1$ bart backup -s ppas96 -F p --parent 1527640746966 --backup-name incr1
INFO: creating incremental backup for server 'ppas96'
INFO: checking mbm files /u01/edb_bkp/ppas96/archived_wals
INFO: new backup identifier generated 1527641049438
INFO: reading directory /u01/edb_bkp/ppas96/archived_wals
INFO: all files processed
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
INFO: incremental backup completed successfully
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1527641049438
BACKUP NAME: incr1
BACKUP PARENT: 1527640746966
BACKUP LOCATION: /u01/edb_bkp/ppas96/1527641049438
BACKUP SIZE: 33.34 MB
BACKUP FORMAT: plain
BACKUP TIMEZONE: US/Eastern
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 000000030000000000000031
STOP WAL LOCATION: 000000030000000000000032
BACKUP METHOD: pg_start_backup
BACKUP FROM: master
START TIME: 2018-05-29 20:44:10 EDT
STOP TIME: 2018-05-29 20:44:12 EDT
TOTAL DURATION: 2 sec(s)

8. Please insert another value into the table.

-bash-4.1$ psql
psql.bin (9.6.2.7)
Type "help" for help.

edb=# \c tst
You are now connected to database "tst" as user "enterprisedb".
tst=# insert into chk_incr values('MAITY');
INSERT 0 1
tst=# table chk_incr;
c1
------------
DEBASIS
MAITY
(2 rows)

tst=# \q

9. Please take another incremental backup, named incr2

-bash-4.1$ bart backup -s ppas96 -F p --parent 1527640746966 --backup-name incr2
INFO: creating incremental backup for server 'ppas96'
INFO: checking mbm files /u01/edb_bkp/ppas96/archived_wals
INFO: new backup identifier generated 1527641642651
INFO: reading directory /u01/edb_bkp/ppas96/archived_wals
INFO: all files processed
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
INFO: incremental backup completed successfully
INFO:
BACKUP DETAILS:
BACKUP STATUS: active
BACKUP IDENTIFIER: 1527641642651
BACKUP NAME: incr2
BACKUP PARENT: 1527640746966
BACKUP LOCATION: /u01/edb_bkp/ppas96/1527641642651
BACKUP SIZE: 33.34 MB
BACKUP FORMAT: plain
BACKUP TIMEZONE: US/Eastern
XLOG METHOD: fetch
BACKUP CHECKSUM(s): 0
TABLESPACE(s): 0
START WAL LOCATION: 000000030000000000000034
STOP WAL LOCATION: 000000030000000000000035
BACKUP METHOD: pg_start_backup
BACKUP FROM: master
START TIME: 2018-05-29 20:54:03 EDT
STOP TIME: 2018-05-29 20:54:04 EDT
TOTAL DURATION: 1 sec(s)

10. Please check all backups available now

-bash-4.1$ bart show-backups
SERVER NAME BACKUP ID BACKUP NAME BACKUP PARENT BACKUP TIME BACKUP SIZE WAL(s) SIZE WAL FILES STATUS

ppas96 1527641642651 incr2 full0 2018-05-29 20:54:04 EDT 33.34 MB active
ppas96 1527641049438 incr1 full0 2018-05-29 20:44:11 EDT 33.34 MB active
ppas96 1527640746966 full0 none 2018-05-29 20:39:09 EDT 130.98 MB 112.00 MB 7 active
ppas96 1527547795678 edb_2018-05-28T18:49 none 2018-05-28 18:50:00 EDT 129.58 MB 48.00 MB 3 active
ppas96 1527547489386 edb_2018-05-28T18:44 none 2018-05-28 18:44:53 EDT 118.45 MB 0.00 bytes 0 active
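The BACKUP PARENT column makes it easy to reconstruct which incrementals hang off which full backup; a sketch over the first four columns of the listing above:

```shell
# Rebuild the incremental chain from captured `bart show-backups`
# rows (columns: server, id, name, parent, ...).
rows='ppas96 1527641642651 incr2 full0
ppas96 1527641049438 incr1 full0
ppas96 1527640746966 full0 none'

# Keep only rows that have a parent and print child -> parent
chain=$(printf '%s\n' "$rows" | awk '$4 != "none" {print $3 " -> " $4}')
echo "$chain"
```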

11. Now I will restore the incr1 backup. First create an empty directory /u01/edb_incr1 owned by enterprisedb.

-bash-4.1$ bart restore -s ppas96 -i 1527641049438 -p /u01/edb_incr1/
INFO: restoring incremental backup '1527641049438' of server 'ppas96'
INFO: base backup restored
INFO: archiving is disabled
INFO: permissions set on $PGDATA
INFO: incremental restore completed successfully

12. Please stop the old instance and start a new instance from the path where the incremental backup was restored.

-bash-4.1$ pg_ctl -D /u01/edb/as9.6/data -mf stop
waiting for server to shut down.... done
server stopped
-bash-4.1$ pg_ctl -D /u01/edb_incr1 -mf start
server starting
-bash-4.1$ 2018-05-30 02:54:34 EDT LOG: redirecting log output to logging collector process
2018-05-30 02:54:34 EDT HINT: Future log output will appear in directory "pg_log".

13. Now we can verify the table data to confirm that the incr1 backup was restored successfully. Note that only the row inserted before incr1 is present.

-bash-4.1$ psql
psql.bin (9.6.2.7)
Type "help" for help.

edb=# \c tst
You are now connected to database "tst" as user "enterprisedb".
tst=# table chk_incr;
c1
------------
DEBASIS
(1 row)

PostgreSQL DBA part 6: PostgreSQL high availability setup using EFM


 

A. Introduction

This document provides an overview of a high availability setup for an EDB Postgres 9.5 database.

An EDB Postgres Failover Manager (EFM) cluster is composed of Failover Manager processes that reside on the following hosts on a network:

  1. A Master node – The Master node is the primary database server that is servicing database clients.
  2. One or more Standby nodes – A Standby node is a streaming replication server associated with the Master node.
  3. Witness Node (optional)- The Witness node confirms assertions of either the Master or a Standby in a failover scenario. A cluster does not need a dedicated witness node if the cluster contains three or more nodes; if you do not have a third cluster member that is a database host, you can add a dedicated Witness node.

 

B. Architecture (best practice)

The architecture for this environment is shown in the picture below. It has two application nodes (VM1 and VM2) in the primary site, which are connected to the EDB VIP. The application nodes run on JBoss 6.4 application servers; both servers run in standalone mode, pointing to the same metadata deployed at the same time (the VIP: 172.XX.XX.5:5444/edb).

EDB has two nodes (VM3 and VM4) in active and standby mode, pointed to by the VIP. VM3 and VM4 are connected in sync mode. For the DR site, two DB nodes are present, VM5 and VM6. Both are connected in async mode with the main DB node on the primary site.

The below configuration is made on an EDB test instance and consists of 1 master server and 1 standby server. The standby is in sync with the primary to ensure no data loss during failover.

But for our test case we simplified the installation: one primary site and a read replica (standby) in the secondary site. The whole test was done on AWS.

Below are the host details where the installation will be made:

NODE     IP            ROLE
edb1tst  172.31.20.93  Primary
edb2tst  172.31.20.82  Standby

C. Install PostgreSQL on both the primary and standby nodes using the guide below:

https://clouddba.co/postgresql-dba-part1-installation-of-postgresql-9-6-in-linux/

D. Install the EFM 2.1 RPM, the package provided by EDB for HA setup

[root@edb1tst home]# rpm -ivh ./edb_tools_redhat_rhel-6-x86_64/efm21-2.1.1-1.rhel6.x86_64.rpm

warning: ./edb_tools_redhat_rhel-6-x86_64/efm21-2.1.1-1.rhel6.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 7e30651c: NOKEY

Preparing...                ########################################### [100%]

1:efm21                  ########################################### [100%]

E. Modify /opt/edb/as9.6/data/pg_hba.conf to include entries for both the primary and standby nodes:

For Primary

# "local" is for Unix domain socket connections only
host all all 0.0.0.0/0 trust
local all all trust
# IPv4 local connections:
host edb all 172.31.20.93/32 trust
# IPv6 local connections:
##host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication enterprisedb trust
host prod all 172.31.20.93/32 trust
host prod all 172.31.20.82/32 trust
host replication enterprisedb 172.31.20.93/32 trust
host replication enterprisedb 172.31.20.82/32 trust

For Standby

# "local" is for Unix domain socket connections only
host all all 0.0.0.0/0 trust
local all all trust
# IPv4 local connections:
host edb all 172.31.20.82/32 trust
# IPv6 local connections:
##host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication enterprisedb trust
host prod all 172.31.20.93/32 trust
host prod all 172.31.20.82/32 trust
host replication enterprisedb 172.31.20.93/32 trust
host replication enterprisedb 172.31.20.82/32 trust

Please restart DB

pg_ctl -D /opt/edb/as9.6/data stop
pg_ctl -D /opt/edb/as9.6/data start
pg_ctl -D /opt/edb/as9.6/data status

F. Check and turn off the firewall on both the primary and standby servers

 

G. Encrypt the EFM password on both nodes. In this example both the username and the password are enterprisedb.

/usr/efm-2.1/bin/efm encrypt efm
This utility will generate an encrypted password for you to place in your
EFM cluster property file.
Please enter the password and hit enter:
Please enter the password again to confirm:
The encrypted password is: 2ae8d8d6fbfb742863990dc444e9434f
Please paste this into your cluster properties file.
        db.password.encrypted=2ae8d8d6fbfb742863990dc444e9434f

Troubleshooting, in case it fails:

### Error encountered

[root@edb2prod home]# efm encrypt efm
This utility will generate an encrypted password for you to place in your
EFM cluster property file.
Please enter the password and hit enter:
Please enter the password again to confirm:
Could not encode password: Illegal key size or default parameters
If full-strength encryption has not been installed in the current version of Java, search for "<vendor name> java full strength encryption" for more information.
### Solution
To fix this, download the "Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files 8" for JDK 1.8 from the link below and copy the jars to /usr/java/jdk1.8.0_121/jre/lib/security:
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
[root@edb2test security]# ll *jar*
-rw-r--r--. 1 root root 3035 Dec 20  2013 local_policy.jar
-rw-r--r--. 1 root root 3405 Apr 11 15:47 local_policy.jar_bkp
-rw-r--r--. 1 root root 3023 Dec 20  2013 US_export_policy.jar
-rw-r--r--. 1 root root 2920 Apr 11 15:47 US_export_policy.jar_bkp

[root@edb2test security]# efm encrypt efm
This utility will generate an encrypted password for you to place in your
EFM cluster property file.
Please enter the password and hit enter:
Please enter the password again to confirm:
The encrypted password is: 2ae8d8d6fbfb742863990dc444e9434f
Please paste this into your cluster properties file.
        db.password.encrypted=2ae8d8d6fbfb742863990dc444e9434f

H. STEP-BY-STEP IMPLEMENTATION PROCEDURE

Steps in PRIMARY server:

1. Add the below contents to .pgpass

su - enterprisedb -c "echo \"*:*:*:enterprisedb:enterprisedb\" > ~/.pgpass"
su - enterprisedb -c "chmod 600 ~/.pgpass"
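The two commands above can be dry-run against a scratch directory before touching the real home; a sketch (the scratch HOME stands in for the enterprisedb home, and the credentials are the example ones from this document):

```shell
# Create the .pgpass file in a scratch directory standing in for
# the enterprisedb home, then lock it down: libpq ignores the
# file unless its mode is 0600.
home=$(mktemp -d)
printf '*:*:*:enterprisedb:enterprisedb\n' > "$home/.pgpass"
chmod 600 "$home/.pgpass"
ls -l "$home/.pgpass"
```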

2. Set permissions and ownership

 -bash-4.1$ chown enterprisedb:enterprisedb data

-bash-4.1$ chmod -R 700 data

3. Check the status of the database server with pg_ctl as below:

 

-bash-4.1$ /opt/edb/as9.6/bin/pg_ctl -D /opt/edb/as9.6/data status

pg_ctl: server is running (PID: 683)

/opt/edb/as9.6/bin/edb-postgres "-D" "/opt/edb/as9.6/data"

NOTE: In this installation a single-node instance was already running on the system, and application connectivity was established. Since we are only converting that single installation to HA, we just checked the status above.

If this is a fresh installation then execute the below command:

 [root@rac4 as9.6]# su - enterprisedb -c "/opt/edb/as9.6/bin/initdb -D /opt/edb/as9.6/data"
 And then start the server:

/opt/edb/as9.6/bin/pg_ctl -D /opt/edb/as9.6/data start

4. Modify the efm.nodes file to include the standby server entries.

5. Modify the bash profile as below and encrypt the EFM password as shown in section G.

6. Export the below variables in the terminal as the root user, and set the password for the enterprisedb user

 

export YUM_USER=XXXXX
export YUM_PASSWORD=test#123
export MASTER_IP=172.31.20.93
export STDBY_IP=172.31.20.82
export YOUR_EMAIL=XX.XX@gmail.com
export PWD_ENCRYPT=2ae8d8d6fbfb742863990dc444e9434f
export PGDATA=/opt/edb/as9.6/data

 

su - enterprisedb -c "/opt/edb/as9.6/bin/psql -p 5444 -d edb -c \"alter user enterprisedb identified by enterprisedb\""

 7.Add the below entries at the end of conf file

vi /opt/edb/as9.6/data/postgresql.conf

 

wal_level = hot_standby
max_wal_senders = 3
hot_standby = on
archive_mode = on
archive_command = 'cd .'
logging_collector = on    (check whether it is already on)
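The settings above can be appended idempotently, so re-running the setup cannot create duplicate entries (a duplicate logging_collector line is exactly the error hit later in this procedure). A minimal sketch, where a scratch file stands in for /opt/edb/as9.6/data/postgresql.conf:

```shell
# Sketch: append a replication setting only when the parameter is not already
# present in the file. $CONF stands in for the real postgresql.conf.
CONF=$(mktemp)
set_param() {
  grep -q "^$1[[:space:]]*=" "$CONF" || echo "$1 = $2" >> "$CONF"
}
set_param wal_level hot_standby
set_param max_wal_senders 3
set_param hot_standby on
set_param archive_mode on
set_param archive_command "'cd .'"
set_param wal_level hot_standby   # second call is a no-op, no duplicate line
cat "$CONF"
```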

8.Restart the database again

pg_ctl -D /opt/edb/as9.6/data stop
pg_ctl -D /opt/edb/as9.6/data start
pg_ctl -D /opt/edb/as9.6/data status

9.Add the following entries in efm.properties file:

 

sed -i "s/db.user=/db.user=enterprisedb/" /etc/efm-2.1/efm.properties
sed -i "s/db.password.encrypted=/db.password.encrypted=$PWD_ENCRYPT/" /etc/efm-2.1/efm.properties
sed -i "s/db.port=/db.port=5444/" /etc/efm-2.1/efm.properties
sed -i "s/db.database=/db.database=edb/" /etc/efm-2.1/efm.properties
sed -i "s/admin.port=/admin.port=7809/" /etc/efm-2.1/efm.properties
sed -i "s/user.email=/user.email=$YOUR_EMAIL/" /etc/efm-2.1/efm.properties
sed -i "s/bind.address=/bind.address=$MASTER_IP:7800/" /etc/efm-2.1/efm.properties
sed -i "s/is.witness=/is.witness=false/" /etc/efm-2.1/efm.properties
sed -i "s/db.service.owner=/db.service.owner=enterprisedb/" /etc/efm-2.1/efm.properties
sed -i "s;db.recovery.conf.dir=;db.recovery.conf.dir=/opt/edb/as9.6/data;g" /etc/efm-2.1/efm.properties
sed -i "s/db.bin=/db.bin=\/opt\/edb\/as9.6\/bin/" /etc/efm-2.1/efm.properties
sed -i "s/auto.allow.hosts=false/auto.allow.hosts=true/" /etc/efm-2.1/efm.properties
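Because these sed substitutions only fire when the key is still empty, it helps to confirm afterwards that nothing was missed. A minimal sketch of the same substitution-and-check pattern, run against a scratch file standing in for /etc/efm-2.1/efm.properties:

```shell
# Sketch: apply the same style of sed edits to a scratch properties file and
# confirm no key was left empty (a key still ending in '=' was missed).
PROPS=$(mktemp)
printf 'db.user=\ndb.port=\nbind.address=\n' > "$PROPS"
MASTER_IP=172.31.20.93
sed -i "s/db.user=/db.user=enterprisedb/" "$PROPS"
sed -i "s/db.port=/db.port=5444/" "$PROPS"
sed -i "s/bind.address=/bind.address=$MASTER_IP:7800/" "$PROPS"
EMPTY=$(grep -c '=$' "$PROPS" || true)
echo "unset keys: $EMPTY"
```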

 10.Execute the below command to check whether we’re getting the edb prompt:

 

NOTE: We received an error while starting the services. To resolve it, edit the pingServerIp property in the efm.properties file to a locally reachable address: pingServerIp=172.19.18.95 (the server does not presently have internet connectivity, so the default ping target cannot be reached).

11.Restart the pg_ctl services

pg_ctl -D /opt/edb/as9.6/data stop
pg_ctl -D /opt/edb/as9.6/data start
pg_ctl -D /opt/edb/as9.6/data status

NOTE: We got errors from the postgresql.conf file (a duplicate logging_collector entry):

Resolution:

vi “/opt/edb/as9.6/data/postgresql.conf”

#logging_collector = on    (comment out this duplicate entry)

Steps in STANDBY server:

 

1.Export the variables below in terminal as root user:

export YUM_USER=XXXXX
export YUM_PASSWORD=test#123
export MASTER_IP=172.31.20.93
export STDBY_IP=172.31.20.82
export YOUR_EMAIL=XX.XX@gmail.com
export PWD_ENCRYPT=2ae8d8d6fbfb742863990dc444e9434f
export PGDATA=/opt/edb/as9.6/data

 2.Create the .pgpass file on the standby, the same as on the primary

 su - enterprisedb -c "echo \"*:*:*:enterprisedb:enterprisedb\" > ~/.pgpass"
 su - enterprisedb -c "chmod 600 ~/.pgpass"

 3.Take a backup of the data directory if one is already present:

Then execute the below commands:

[root@edb2tst as9.6]# mv data data_bkp

[root@edb2tst as9.6]# mkdir data

[root@edb2tst as9.6]# chown enterprisedb:enterprisedb data

[root@edb2tst as9.6]# chmod 0700 data

[root@edb2tst as9.6]# su - enterprisedb -c "/opt/edb/as9.6/bin/pg_basebackup -h 172.31.20.93 -p 5444 -R -D /opt/edb/as9.6/data"

NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
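The back-up-then-reseed ordering above can be wrapped in a small script so the old cluster is never overwritten. A minimal sketch, where a temp directory stands in for /opt/edb/as9.6 and the pg_basebackup call itself is illustrative and left commented out:

```shell
# Sketch: preserve any existing data directory before re-seeding the standby
# from the primary. $BASE stands in for /opt/edb/as9.6.
BASE=$(mktemp -d)
mkdir -p "$BASE/data"
touch "$BASE/data/postgresql.conf"     # pretend an old cluster exists
if [ -d "$BASE/data" ]; then
  mv "$BASE/data" "$BASE/data_bkp"     # keep the old cluster, never overwrite
fi
mkdir "$BASE/data"
chmod 700 "$BASE/data"
# su - enterprisedb -c "/opt/edb/as9.6/bin/pg_basebackup -h 172.31.20.93 -p 5444 -R -D $BASE/data"
ls "$BASE"
```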

4.Start Server

[root@edb2tst data]# su - enterprisedb -c "/opt/edb/as9.6/bin/pg_ctl -w -D /opt/edb/as9.6/data start"

5.Add the following entries in conf file:

[root@edb2tst data]# echo trigger_file=\'/opt/edb/as9.6/trigger.file\' >> /opt/edb/as9.6/data/recovery.conf

[root@edb2tst data]# echo recovery_target_timeline=\'latest\' >> /opt/edb/as9.6/data/recovery.conf

 6.Copy the efm.properties file from node 1 (primary) to the standby

 7.Edit the bind.address to the below entry on standby

bind.address=172.31.20.82:7800

Add the below entry to efm.nodes file

172.31.20.93:7800

I.Final steps to start EFM and check. These steps need to be done on both nodes

 

1.Edit both the efm.nodes and put the IP of all nodes in the cluster:

In primary

vi /etc/efm-2.1/efm.nodes

172.31.20.93:7800
172.31.20.82:7800

 In Standby

vi /etc/efm-2.1/efm.nodes

172.31.20.93:7800
172.31.20.82:7800

2.Stop and start the DB on both nodes

su - enterprisedb -c "/opt/edb/as9.6/bin/pg_ctl -D /opt/edb/as9.6/data  stop"

su - enterprisedb -c "/opt/edb/as9.6/bin/pg_ctl -D /opt/edb/as9.6/data  start"

3.Stop and start EFM services

service efm-2.1 stop

service efm-2.1 start
4.Now you can check cluster status
[root@ip-172-31-20-93 ~]# /usr/efm-2.1/bin/efm cluster-status efm
Cluster Status: efm
VIP:

Agent Type Address Agent DB Info
--------------------------------------------------------------
Standby 172.31.20.82 UP UP
Master 172.31.20.93 UP UP

Allowed node host list:
172.31.20.93 172.31.20.82

Membership coordinator: 172.31.20.82

Standby priority host list:
172.31.20.82

Promote Status:

DB Type Address XLog Loc Info
--------------------------------------------------------------
Master 172.31.20.93 0/700DDE0
Standby 172.31.20.82 0/700DDE0

Standby database(s) in sync with master. It is safe to promote.
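Scripted maintenance can be gated on that last line of the cluster-status output. A minimal sketch, where the `efm cluster-status efm` output is simulated with the transcript shown above:

```shell
# Sketch: only proceed with a planned promotion when EFM reports the standby
# is in sync. $STATUS simulates the output of `efm cluster-status efm`.
STATUS='Master 172.31.20.93 0/700DDE0
Standby 172.31.20.82 0/700DDE0
Standby database(s) in sync with master. It is safe to promote.'
if printf '%s\n' "$STATUS" | grep -q 'It is safe to promote'; then
  SAFE=yes
else
  SAFE=no
fi
echo "safe to promote: $SAFE"
```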

5.How to establish SYNC 

For establishing SYNCHRONOUS (i.e. SYNC) connectivity:

Modify the below parameter in the below file:

In primary:

-bash-4.1$ cd /opt/edb/as9.6/data

-bash-4.1$ vi postgresql.conf

synchronous_standby_names = 'eocapp'    (must match the application_name set in the standby's primary_conninfo)

 

In standby:

-bash-4.1$ vi recovery.conf

standby_mode = 'on'

primary_conninfo = 'user=enterprisedb password=enterprisedb host=172.19.18.95 port=5444 sslmode=prefer sslcompression=1 krbsrvname=postgres application_name = eocapp'

trigger_file='/opt/edb/as9.6/data/trigger.file' recovery_target_timeline=latest

 

PostgreSQL DBA part 5 :- Create user and grant privilege in postgresql

edb=# create user test identified by test;
CREATE ROLE
edb=# grant connect on database edb to test;
GRANT
edb=# grant usage on schema public to test;
GRANT
edb=# create table t(c1 char(10));
CREATE TABLE
edb=# insert into t values('DEB');
INSERT 0 1
edb=# commit;
COMMIT
edb=# grant select,insert on emp to test;
GRANT
edb=# \c edb test;
Password for user test:
You are now connected to database "edb" as user "test".
edb=> select * from emp;
 empno | ename |  job  | mgr  |      hiredate      |  sal   | comm | deptno
-------+-------+-------+------+--------------------+--------+------+--------
  7369 | SMITH | CLERK | 7902 | 17-DEC-80 00:00:00 | 800.00 |      |     20
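The user-and-grant steps above can be collected into one SQL script and replayed on other databases. A minimal sketch; the psql invocation is illustrative and left commented out:

```shell
# Sketch: write the CREATE USER / GRANT statements from this section to a
# reusable SQL file. The psql call that would execute it is commented out.
SQLFILE=$(mktemp)
cat > "$SQLFILE" <<'EOF'
CREATE USER test IDENTIFIED BY test;
GRANT CONNECT ON DATABASE edb TO test;
GRANT USAGE ON SCHEMA public TO test;
GRANT SELECT, INSERT ON emp TO test;
EOF
# /opt/edb/as9.6/bin/psql -p 5444 -d edb -f "$SQLFILE"
grep -c ';' "$SQLFILE"
```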

PostgreSQL DBA part 4 :-How to create database in postgresql

How to create database in postgresql

The CREATE DATABASE command can be used to create a database in a cluster.
−Syntax:
CREATE DATABASE name
    [ [ WITH ] [ OWNER [=] dbowner ]
               [ TEMPLATE [=] template ]
               [ ENCODING [=] encoding ]
               [ TABLESPACE [=] tablespace ]
               [ CONNECTION LIMIT [=] connlimit ] ]

1.Create a database named prod, owned by user enterprisedb. Please note enterprisedb is an OS user as well.

edb=# create database prod owner enterprisedb;
CREATE DATABASE

2.Connect to newly created database in postgresql.

edb=# \connect prod
You are now connected to database "prod" as user "enterprisedb".

3.Please check whether you can see newly created database

prod=# select datname from pg_database;
 datname
-----------
 postgres
 edb
 template1
 template0
 prod
(5 rows)

prod=# exit

PostgreSQL Parameters(The following are most common parameters in postgresql.conf)

A.Many Ways to set postgresql parameters

•Some parameters can be changed per session using the SET command.
•Some parameters can be changed at the user level using ALTER USER.
•Some parameters can be changed at the database level using ALTER DATABASE.
•The SHOW command can be used to see current settings.
•The pg_settings and pg_file_settings catalog views list settings information.

B.PostgreSQL Parameters(The following are most common parameters in postgresql.conf)

1.Connection Settings

 

 Parameter  Default Value  Description
listen_addresses localhost Specifies the addresses on which the server is to listen for connections. Use * for all.
port 5432 The port the server listens on.
max_connections 100 Maximum number of concurrent connections the server can support.
superuser_reserved_connections 3 Number of connection slots reserved for superusers.
unix_socket_directories /tmp Directory(ies) to be used for Unix-domain socket connections to the server.
unix_socket_permissions 0777 Access permissions of the Unix-domain socket.

2.Security and Authentication Settings

 

 Parameter  Default Value  Description
authentication_timeout 1 minute Maximum time to complete client authentication, in seconds.
ssl off Enables SSL connections.
ssl_ca_file Specifies the name of the file containing the SSL server certificate authority (CA).
ssl_cert_file Specifies the name of the file containing the SSL server certificate.
ssl_key_file Specifies the name of the file containing the SSL server private key.
ssl_ciphers List of SSL ciphers that may be used for secure connections.

3.Memory Settings

 

 Parameter  Default Value  Description
shared_buffers 128 MB Size of PostgreSQL shared buffer pool for a cluster.
temp_buffers 8 MB Amount of memory used by each backend for caching temporary table data.
work_mem 4 MB Amount of memory used for each sort or hash operation before switching to temporary disk files.
maintenance_work_mem 64 MB Amount of memory used for each index build or VACUUM.
temp_file_limit -1 (unlimited) Amount of disk space that a session can use for temporary files, in kilobytes. A transaction attempting to exceed this limit will be cancelled.

4.Query Planner Settings

 

 Parameter  Default Value  Description
random_page_cost 4.0 Estimated cost of a random page fetch, in abstract cost units. May need to be reduced to account for caching effects.
seq_page_cost 1.0 Estimated cost of a sequential page fetch, in abstract cost units. May need to be reduced to account for caching effects. Must always set random_page_cost >= seq_page_cost
effective_cache_size 4 GB Used to estimate the cost of an index scan. Rule of thumb is 75% of system memory.

5.Write Ahead Log Settings

 

 Parameter  Default value  Description
wal_level minimal Determines how much information is written to the WAL. Change this to enable replication. Other values are archive, logical and hot_standby.
fsync on Turn this off to make your database much faster, and silently cause arbitrary corruption in case of a system crash.
wal_buffers -1 (autotune) The amount of shared memory used for WAL data. The default setting of -1 selects a size equal to 1/32nd (about 3%) of shared_buffers.
min_wal_size 80 MB The WAL size below which WAL files are recycled rather than removed.
max_wal_size 1 GB The WAL size at which a checkpoint is forced. Controls the number of WAL segments (16 MB each) kept between checkpoints.
checkpoint_timeout 5 minutes Maximum time between checkpoints.
wal_compression off The WAL of Full Page write will be compressed and written.

6.Where To Log

log_destination - Valid values are combinations of stderr, csvlog, syslog, and eventlog, depending on platform.
logging_collector - Enables advanced logging features. csvlog requires logging_collector.
‒log_directory - Directory where log files are written. Requires logging_collector.
‒log_filename - Format of the log file name (e.g. postgresql-%Y-%m-%d.log). Allows regular log rotation. Requires logging_collector.
‒log_file_mode (default 0600) - On Unix systems this parameter sets the permissions for log files when logging_collector is enabled.
‒log_rotation_age - Automatically rotate logs after this much time. Requires logging_collector.
‒log_rotation_size - Automatically rotate logs when they get this big. Requires logging_collector.
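Since log_filename uses strftime escapes, date(1) can preview the file name the logging collector will produce. A small sketch; note %m is the month, while %M would be the minute:

```shell
# Sketch: expand the log_filename pattern with date(1) to preview the log
# file name the collector would create today.
PATTERN='postgresql-%Y-%m-%d.log'
FILENAME=$(date +"$PATTERN")
echo "$FILENAME"
```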

7.When To Log

 

Parameter Default value Description
client_min_messages NOTICE Messages of this severity level or above are sent to the client
log_min_messages WARNING Messages of this severity level or above are written to the server log.
log_min_error_statement ERROR When a message of this severity or higher is written to the server log, the statement that caused it is logged along with it.
log_min_duration_statement -1 (disabled) When a statement runs for at least this long, it is written to the server log, with its duration.

8.What to log

 

Parameter Default value Description
log_connections off Log successful connections to the server log.
log_disconnections off Log some information each time a session disconnects, including the duration of the session.
log_error_verbosity default Can also select “terse” or “verbose”.
log_duration off Log duration of each statement.
log_line_prefix Additional details to log with each line.
log_statement none Legal values are none, ddl, mod (DDL and all other data-modifying statements), or all.
log_temp_files -1 Log temporary files of this size or larger, in kilobytes.
log_checkpoints off Causes checkpoints and restartpoints to be logged in the server log.

9.Background Writer Settings

 

Parameter Default value Description
bgwriter_delay 200 ms Specifies time between activity rounds for the background writer.
bgwriter_lru_maxpages 100 Maximum number of pages that the background writer may clean per activity round.
bgwriter_lru_multiplier 2.0 Multiplier on buffers scanned per round. By default, if the system thinks 10 pages will be needed, it cleans 10 * bgwriter_lru_multiplier of 2.0 = 20.

10.Statement Behavior

 

Parameter Default value Description
search_path This parameter specifies the order in which schemas are searched. The default value is "$user", public.
default_tablespace Name of the tablespace in which objects are created by default.
temp_tablespaces Tablespace name(s) in which temporary objects are created.
statement_timeout 0 (disabled) Postgres will abort any statement that takes over the specified number of milliseconds. A value of zero (the default) turns this off.

11.Vacuum Cost Settings

 

Parameter Default value Description
vacuum_cost_delay 0 ms The length of time, in milliseconds, that the process will wait when the cost limit is exceeded.
vacuum_cost_page_hit 1 The estimated cost of vacuuming a buffer found in the PostgreSQL buffer pool.
vacuum_cost_page_miss 10 The estimated cost of vacuuming a buffer that must be read into the buffer pool.
vacuum_cost_page_dirty 20 The estimated cost charged when vacuum modifies a buffer that was previously clean.
vacuum_cost_limit 200 The accumulated cost that will cause the vacuuming process to sleep.

12.Autovacuum Settings

 

Parameter Default value Description
autovacuum on Controls whether the autovacuum launcher runs, and starts worker processes to vacuum and analyze tables.
log_autovacuum_min_duration -1 Autovacuum tasks running longer than this duration are logged. Can now be specified per table.
autovacuum_max_workers 3 Maximum number of autovacuum worker processes which may be running at one time.
autovacuum_work_mem default -1, to use maintenance_work_mem Maximum amount of memory used by each autovacuum worker.

 

PostgreSQL DBA part 3 :-How to start/stop postgresql database

A.Stopping postgresql database

pg_ctl can be used to stop a database cluster

•pg_ctl supports three modes of shutdown:
−smart: quit after all clients have disconnected
−fast (default): quit directly, with a proper shutdown
−immediate: quit without a complete shutdown; will lead to recovery on restart

•Syntax
−pg_ctl stop [-W] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]

-bash-4.1$ pwd
/opt/edb/as9.6/bin
-bash-4.1$ ./pg_ctl -D /opt/edb/as9.6/data -mf stop
waiting for server to shut down.... done
server stopped
-bash-4.1$ ps -ef|grep postgresql
54322 12838 12225 0 09:30 pts/2 00:00:00 grep postgresql

B.Starting postgresql database

pg_ctl can be used to start a database cluster

Choose a unique port for postmaster
•Change the port in postgresql.conf

•pg_ctl utility can be used to start a cluster
•Syntax
−pg_ctl start [options]
−Options
-D location of the database storage area
-l write (or append) server log to FILENAME
-w wait until operation completes
-t seconds to wait when using -w option
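The options above compose naturally into a small start wrapper. A minimal sketch; the paths are illustrative, and a DRY_RUN switch prints the assembled command instead of executing it:

```shell
# Sketch: assemble a pg_ctl start command from the options listed above.
# DRY_RUN=1 prints the command rather than running it; paths are illustrative.
PGDATA=/opt/edb/as9.6/data
LOGFILE=/tmp/startup.log
CMD="pg_ctl start -D $PGDATA -l $LOGFILE -w -t 60"
DRY_RUN=1
if [ "$DRY_RUN" = "1" ]; then
  echo "$CMD"
else
  $CMD
fi
```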

-bash-4.1$ ./pg_ctl -D /opt/edb/as9.6/data -mf start
server starting
-bash-4.1$ 2018-05-08 09:30:19 IST LOG: redirecting log output to logging collector process
2018-05-08 09:30:19 IST HINT: Future log output will appear in directory "pg_log".
-bash-4.1$ ps -ef|grep post
root 1947 1 0 May07 ? 00:00:00 /usr/libexec/postfix/master
postfix 1956 1947 0 May07 ? 00:00:00 qmgr -l -t fifo -u
postfix 12663 1947 0 09:13 ? 00:00:00 pickup -l -t fifo -u
54322 12841 1 0 09:30 pts/2 00:00:00 /opt/edb/as9.6/bin/edb-postgres -D /opt/edb/as9.6/data
54322 12842 12841 0 09:30 ? 00:00:00 postgres: logger process
54322 12844 12841 0 09:30 ? 00:00:00 postgres: checkpointer process
54322 12845 12841 0 09:30 ? 00:00:00 postgres: writer process
54322 12846 12841 0 09:30 ? 00:00:00 postgres: wal writer process
54322 12847 12841 0 09:30 ? 00:00:00 postgres: autovacuum launcher process
54322 12848 12841 0 09:30 ? 00:00:00 postgres: stats collector process
54322 12849 12841 0 09:30 ? 00:00:00 postgres: bgworker: dbms_aq launcher

C.pg_controldata can be used to view the control information for a database cluster

-bash-4.1$ ./pg_controldata /opt/edb/as9.6/data
pg_control version number: 960
Catalog version number: 201608131
Database system identifier: 6551197085840965172
Database cluster state: in production
pg_control last modified: Tue 08 May 2018 09:30:19 AM IST
Latest checkpoint location: 0/274D828
Prior checkpoint location: 0/274D7B8
Latest checkpoint's REDO location: 0/274D828
Latest checkpoint's REDO WAL file: 000000010000000000000002
Latest checkpoint's TimeLineID: 1
Latest checkpoint's PrevTimeLineID: 1
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0:2347
Latest checkpoint's NextOID: 24581
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 2326
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 0
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Latest checkpoint's oldestCommitTsXid:0
Latest checkpoint's newestCommitTsXid:0
Time of latest checkpoint: Tue 08 May 2018 09:29:55 AM IST
Fake LSN counter for unlogged rels: 0/1
Minimum recovery ending location: 0/0
Min recovery ending loc's timeline: 0
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
wal_level setting: minimal
wal_log_hints setting: off
max_connections setting: 100
max_worker_processes setting: 8
max_prepared_xacts setting: 0
max_locks_per_xact setting: 64
track_commit_timestamp setting: off
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 0

PostgreSQL DBA part2 :-How to connect postgresql database

A.Connecting postgresql locally using port

Please identify port in postgresql.conf under /opt/edb/as9.6/data and connect using psql

There are prerequisites for allowing local connections. The pg_hba.conf file is under /opt/edb/as9.6/data. Please make sure the last field of the relevant lines is md5, not peer.

The below content is for example what should be in pg_hba.conf

# "local" is for Unix domain socket connections only
local all all md5
# IPv4 local connections:
host all all 127.0.0.1/32 md5

cd /opt/edb/as9.6/bin

-bash-4.1$ ./psql -p 5444 edb enterprisedb
 psql.bin (9.6.2.7)
 Type "help" for help.
port:-5444
Database name:-edb
Password for enterprisedb user:-enterprisedb

B.Allowing remote connections

  1. As a super user, open /etc/postgresql/9.6/main/pg_hba.conf (Ubuntu) or /var/lib/pgsql/9.6/data/pg_hba.conf (Red Hat) in a text editor.

  2. Scroll down to the line that describes local socket connections. It may look like this:

    local   all             all                                      peer
    
  3. Change to:

    host    all             all             0.0.0.0/0               trust
    
    
    1. Save and close the file.

    2. In the same directory, open postgresql.conf.

Under the section on Connection Settings, add or replace the line that starts with listen_addresses to respond to all requests:

listen_addresses = '*'

Please restart postgresql service now.
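Before restarting, both edits can be sanity-checked with grep. A minimal sketch, where scratch files stand in for pg_hba.conf and postgresql.conf under the data directory:

```shell
# Sketch: verify the two remote-access settings are in place before a restart.
# Scratch files stand in for the real pg_hba.conf and postgresql.conf.
HBA=$(mktemp); PGCONF=$(mktemp)
echo 'host    all             all             0.0.0.0/0               trust' > "$HBA"
echo "listen_addresses = '*'" > "$PGCONF"
OK=yes
grep -Eq '^host[[:space:]]+all[[:space:]]+all[[:space:]]+0\.0\.0\.0/0' "$HBA" || OK=no
grep -q "listen_addresses = '\*'" "$PGCONF" || OK=no
echo "remote access configured: $OK"
```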

C.Use psql command from client system as follows(Before that you need to setup pg_hba.conf as above to allow remote connection)

psql -h PostgreSQL-IP-ADDRESS -U USERNAME -d DATABASENAME

Connect to remote server by IP address 192.168.1.5 and login using debasis user to connect to edb database, use:
$ psql -h 192.168.1.5 -U debasis -d edb
Where,

  • -h 192.168.1.5 : Specifies the host name of the machine or IP address (192.168.1.5) on which the server is running.
  • -U debasis : Connect to the database as the debasis username instead of the default. You must have account and permission to connect as debasis user.
  • -d edb : Specifies the name of the database (edb) to connect to.

D.Connect to edb sql

cd /opt/edb/as9.6/bin

-bash-4.1$ ./edb-psql edb
psql.bin (9.6.2.7)
Type “help” for help.

edb=#

Here edb is database name.

 

 

 

DOCKER CONCEPT FOR DBA– PART 10 : How to secure docker private repository using openssl certificate key

This document explains how to secure private repository using openssl key

Before we can use a private repository, we will need to secure it and offer user authentication. Let’s create a self-signed certificate, use the ‘registry’ container by Docker to create basic user authentication, and then copy the files where they need to go on the hosting server.

root@deb-VirtualBox:~# mkdir certs
root@deb-VirtualBox:~# mkdir auth

Make entry in /etc/hosts with your host IP

169.254.156.101 myregistrydomain.com

Now let me create directory where certificate will reside

root@deb-VirtualBox:~# mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000

Please install openssl if it is not installed

root@deb-VirtualBox:~/certs# apt-get install openssl
Reading package lists... Done
Building dependency tree
Reading state information... Done
openssl is already the newest version (1.0.2g-1ubuntu13.5).
0 upgraded, 0 newly installed, 0 to remove and 113 not upgraded.

Now let me create self signed certificate and pull registry docker image

root@deb-VirtualBox:~/certs# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /root/certs/dockerrepo.key -x509 -days 365 -out /root/certs/dockerrepo.crt -subj /CN=myregistrydomain.com
Generating a 4096 bit RSA private key
................................................................................................................................................................++
.....................................................................................................................................................................................................++
writing new private key to '/root/certs/dockerrepo.key'
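It is worth confirming that the CN baked into the self-signed certificate matches the registry hostname, since docker will reject a mismatch. A minimal sketch that generates a throwaway key and certificate in a temp directory (the real files live under /root/certs):

```shell
# Sketch: generate a throwaway self-signed cert and check its subject CN
# matches the registry hostname in /etc/hosts.
CERTDIR=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout "$CERTDIR/dockerrepo.key" -x509 -days 365 \
  -out "$CERTDIR/dockerrepo.crt" -subj /CN=myregistrydomain.com 2>/dev/null
SUBJECT=$(openssl x509 -in "$CERTDIR/dockerrepo.crt" -noout -subject)
echo "$SUBJECT"
```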


root@deb-VirtualBox:~/certs# cd /etc/docker/certs.d/myregistrydomain.com:5000

root@deb-VirtualBox:/etc/docker/certs.d/myregistrydomain.com:5000# cp /root/certs/dockerrepo.crt ca.crt


root@deb-VirtualBox:/etc/docker/certs.d/myregistrydomain.com:5000# docker pull registry:2
2: Pulling from library/registry
81033e7c1d6a: Pull complete
b235084c2315: Pull complete
c692f3a6894b: Pull complete
ba2177f3a70e: Pull complete
a8d793620947: Pull complete
Digest: sha256:672d519d7fd7bbc7a448d17956ebeefe225d5eb27509d8dc5ce67ecb4a0bce54
Status: Downloaded newer image for registry:2

Let me create password and user

cd /root

root@deb-VirtualBox:~# docker run --entrypoint htpasswd registry:2 -Bbn test password > auth/htpasswd

Now that we have the security work done for our private registry, we can deploy and configure it for use. We will test it locally, and then log in and test via a remote system.

root@deb-VirtualBox:~# docker run -d -p 5000:5000 -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/dockerrepo.crt -e REGISTRY_HTTP_TLS_KEY=/certs/dockerrepo.key -v `pwd`/auth:/auth -e REGISTRY_AUTH=htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:2
0d007ea4f85da395b2de25b6a6c5c1178c06e4df8cf859bf5ef5808b655c30a2

Let me pull demo docker image for securing it and test

root@deb-VirtualBox:~# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
f70adabe43c0: Pull complete
Digest: sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
Status: Downloaded newer image for busybox:latest

The image is now downloaded

root@deb-VirtualBox:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mywebserver latest 1d666b38b200 2 days ago 330MB
mywebserver v1 d749c49988b5 2 days ago 330MB
mywebserver1 v1 a23dab42b0ee 2 days ago 321MB
optimized v1 dd055168eb91 2 days ago 301MB
customimage v1 9528f83769a3 2 days ago 301MB
hello-world latest e38bc07ac18e 3 weeks ago 1.85kB
centos 6 70b5d81549ec 3 weeks ago 195MB
centos latest e934aafc2206 3 weeks ago 199MB
busybox latest 8ac48589692a 4 weeks ago 1.15MB
registry 2 d1fd7d86a825 3 months ago 33.3MB

Now let me tag with my new docker name and push with self signed password protected

root@deb-VirtualBox:~# docker tag busybox myregistrydomain.com:5000/my-busybox

root@deb-VirtualBox:~# docker login myregistrydomain.com:5000
Username: test
Password:

root@deb-VirtualBox:~# docker push myregistrydomain.com:5000/my-busybox

Now let me test how we can push the private docker image in another host

Make same entry in /etc/hosts in the host you want to pull private docker.

copy ca.crt from first host to this server in the location /etc/docker/certs.d/myregistrydomain.com\:5000/

root@deb-VirtualBox2:~# cd /etc/docker
root@deb-VirtualBox2:~# mkdir certs.d
root@deb-VirtualBox2:~# cd /etc/docker/certs.d
root@deb-VirtualBox2:~# mkdir myregistrydomain.com\:5000/

root@deb-VirtualBox2:~# chown root:root ca.crt

Now we are good to test if we can pull the image from hub

root@deb-VirtualBox2:~# docker login myregistrydomain.com:5000

root@deb-VirtualBox2:~# docker pull myregistrydomain.com:5000/my-busybox

Monitor real time active session statistics by ORAPUB tool

The tool was devised by Craig Shallahamer of OraPub and is excellent for monitoring what happens in sessions in real time.

1.First, download the OSM tool kit from OraPub. Then run the following scripts to create some prerequisite tables under the sys/system schema.

a)osmprep.sql

-- ******************************************************
-- * Copyright Notice : (c)1998-2014,2015 OraPub, Inc.
-- * Filename : osmprep.sql 
-- * Author : Craig A. Shallahamer
-- * Original : 17-AUG-98
-- * Last Modified : 23-Dec-2014
-- * Description : OSM preperation script
-- * Usage : start osmprep.sql 
-- ******************************************************

prompt 
prompt OraPub System Monitor - Interactive (OSM-I) installation script.
prompt 
prompt (c)1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008 by OraPub, Inc.
prompt (c)2009,2010,2011,2012,2013,2014,2015 by OraPub, Inc.
prompt 
prompt There is absolutely no guarantee with this software. You may
prompt use this software at your own risk, not OraPub's risk. 
prompt No value is implied or stated.
prompt
prompt You may need to run $ORACLE_HOME/rdbms/admin/catblock.sql
prompt
prompt Connect as the user who will be using the OSM.
prompt
prompt Press ENTER to continue.
accept x

prompt
prompt Creating interim tables for delta calculations
prompt
-- Just for rtsysx8.sql
drop table o$sysstat;
create table o$sysstat as select * from v$sysstat where 1=0;
--
drop table system_event_snap;
create table system_event_snap as select * from v$system_event where 1=0;
drop table event_histogram_snap;
create table event_histogram_snap as select * from v$event_histogram where 1=0;
drop table op_sys_time_model_snap;
create table op_sys_time_model_snap as select * from v$sys_time_model where 1=0;
drop table op_os_cpu_snap;
create table op_os_cpu_snap as select stat_name,value from v$osstat where 1=0;

drop table o$system_event_snap;
create table o$system_event_snap as select * from v$system_event where 1=0;

drop table o$sys_time_model;
create table o$sys_time_model as select * from v$sys_time_model where 1=0;

drop table o$rtsysx_sql;
create table o$rtsysx_sql ( 
 sql_id varchar2(13),sql_address raw(8),cpu_time number, elapsed_time number,
 sorts number, executions number, parse_calls number, disk_reads number,
 buffer_gets number, rows_processed number, sql_text varchar2(1000)
);
create unique index o$rtsysx_sql on o$rtsysx_sql (sql_id);

-- rtsysx8.sql
drop table o$rtsysx_sql8;
create table o$rtsysx_sql8 ( 
 sql_address raw(8),cpu_time number, elapsed_time number,
 sorts number, executions number, parse_calls number, disk_reads number,
 buffer_gets number, rows_processed number, sql_text varchar2(1000)
);
create unique index o$rtsysx_sql8 on o$rtsysx_sql8 (sql_address);

-- rtcx.sql
drop table o$rtcx_snap;
create table o$rtcx_snap (category varchar2(100),time_cs number, waits number);

-- collection scripts (e.g., rss.sql, ash scripts,) need these two functions

create or replace function get_interval_s_fnc(i_intrvl interval day to second) return number is
begin
 return extract(day from i_intrvl) * 86400 +
 extract(hour from i_intrvl) * 3600 +
 extract(minute from i_intrvl) * 60 +
 extract(second from i_intrvl);
 exception
 when others then begin return null;
 end;
end;
/
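The conversion this function performs is plain arithmetic: seconds = days*86400 + hours*3600 + minutes*60 + seconds. The same calculation sketched in shell:

```shell
# Sketch: the day/hour/minute/second arithmetic of get_interval_s_fnc in shell.
interval_s() {
  echo $(( $1 * 86400 + $2 * 3600 + $3 * 60 + $4 ))
}
SECS=$(interval_s 1 2 3 4)   # 1 day, 2 hours, 3 minutes, 4 seconds
echo "$SECS"
```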


prompt To categorize wait events for OSM reports, run:
prompt
prompt For pre-10g systems, run event_type_nc.sql
prompt For 10g and beyond, run event_type.sql

prompt
prompt Once you cateogrize the wait events, the installation is complete.
prompt
prompt Menu is osmi.sql 
prompt
prompt ENJOY!!
prompt

b)Then I will run another script

-- ********************************************************************
-- * Copyright Notice : (c)2001-2014 OraPub, Inc.
-- * Filename : event_type.sql - For 10g+ databases!!!!!
-- * Author : Craig A. Shallahamer
-- * Original : 11-may-01
-- * Last Update : 19-May-2015 (dealing with I/O events)
-- * Description : event_type.sql - Loads "event type" table
-- * which is used by other scripts.
-- * Usage : start event_type.sql
-- * This is usually run (manually) after osmprep.sql
-- ********************************************************************

prompt
prompt file: event_type.sql for Oracle 10g and beyond...
prompt
prompt About to categorize wait events for OSM reports.
prompt
prompt Press ENTER to re-create the o$event_type table.
accept x

-- Event TYPES are defined as follows:

-- ior - IO read related wait
-- iow - IO write related wait
-- other - "real" but not explicitly categorized
-- idle/bogus - idle events, usually not useful

drop table o$event_type
/
create table o$event_type (
event varchar2(64),
type varchar2(64)
)
/

insert into o$event_type
 select name,'other'
 from v$event_name
 where wait_class in ('Administrative','Application','Cluster','Concurrency',
 'Configuration','Other','Scheduler','Queuing','Scheduler')
/
insert into o$event_type
 select name,'bogus'
 from v$event_name
 where wait_class in ('Idle','Network')
/
insert into o$event_type
 select name,'ior'
 from v$event_name
 where wait_class in ('Commit','System I/O','User I/O')
 and name like '%read%'
/
insert into o$event_type
 select name,'iow'
 from v$event_name
 where wait_class in ('Commit','System I/O','User I/O')
 and name like '%write%'
/
insert into o$event_type
select name, 'other' from v$event_name
minus
select event, 'other' from o$event_type
/

-----------------------------------------------------------------------------------------
----- DO NOT REMOVE THE BELOW LINES as they make adjustments to the broad inserts above.
-----------------------------------------------------------------------------------------

-- select 'update o$event_type set type=''iow'' where event = '''||name||''';' from v$event_name where name like '%I/O%';

-- The below is not perfect cleary, but probably good enough... you still must think...

update o$event_type set type='iow' where event = 'Parameter File I/O';
update o$event_type set type='iow' where event = 'Disk file operations I/O';
update o$event_type set type='iow' where event = 'Disk file I/O Calibration';
update o$event_type set type='iow' where event = 'Datapump dump file I/O';
update o$event_type set type='iow' where event = 'dbms_file_transfer I/O';
update o$event_type set type='iow' where event = 'DG Broker configuration file I/O';
update o$event_type set type='iow' where event = 'Log archive I/O';
--update o$event_type set type='iow' where event = 'RMAN backup & recovery I/O';
update o$event_type set type='iow' where event like 'RMAN backup % recovery I/O';
update o$event_type set type='iow' where event = 'Standby redo I/O';
update o$event_type set type='iow' where event = 'RMAN Disk slave I/O';
update o$event_type set type='iow' where event = 'RMAN Tape slave I/O';
update o$event_type set type='iow' where event = 'DBWR slave I/O';
update o$event_type set type='iow' where event = 'LGWR slave I/O';
update o$event_type set type='iow' where event = 'Archiver slave I/O';
update o$event_type set type='iow' where event = 'db file async I/O submit';
update o$event_type set type='iow' where event = 'concurrent I/O completion';
update o$event_type set type='iow' where event = 'Archive Manager file transfer I/O';
--update o$event_type set type='iow' where event = 'resmgr:large I/O queued';
--update o$event_type set type='iow' where event = 'resmgr:small I/O queued';
update o$event_type set type='iow' where event = 'utl_file I/O';
update o$event_type set type='iow' where event = 'TEXT: File System I/O';
update o$event_type set type='iow' where event = 'ASM Fixed Package I/O';
update o$event_type set type='iow' where event = 'ASM Staleness File I/O';
update o$event_type set type='iow' where event = 'Data Guard: RFS disk I/O';


update o$event_type set type='iow' where event = 'free buffer waits';
update o$event_type set type='other' where event like 'latch%';
update o$event_type set type='other' where event like 'enq%';
update o$event_type set type='other' where event like 'cursor%';
update o$event_type set type='iow' where event like 'log%sync%';
update o$event_type set type='iow' where event like 'log%write%';
update o$event_type set type='iow' where event like 'db%write%';
update o$event_type set type='iow' where event like 'log%file%switch%';
update o$event_type set type='iow' where event like 'LGWR%';
update o$event_type set type='ior' where event like 'log%read%';
update o$event_type set type='ior' where event like 'db%read%';
update o$event_type set type='iow' where event like 'i/o%slave%wait';
update o$event_type set type='iow' where event = 'write complete waits';
update o$event_type set type='bogus' where event = 'wait for unread message on broadcast channel';
update o$event_type set type='bogus' where event = 'wait for unread message on multiple broadcast channels';
--update o$event_type set type='bogus' where event like 'db%file%async%submit%';
-- update o$event_type set type='bogus' where event like 'DLM%cross%inst%call%completion';
update o$event_type set type='bogus' where event like 'rdbms ipc reply';

-- for 12c
update o$event_type set type='bogus' where event like 'LGWR worker group idle';

-----------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------


commit;

col name format a50 trunc
col wait_class format a20
col type format a20

prompt
prompt OraPub Categorization Summary
prompt ------------------------------------------------

select type,
count(*)
from o$event_type
group by type
order by type
/
select count(*) from o$event_type
/

prompt
prompt Oracle Categorization Summary
prompt ------------------------------------------------

select wait_class, count(*) from v$event_name group by wait_class order by wait_class
/
select count(*) from v$event_name
/

2. Now the following script can be saved as /home/oracle/rss.sql

-- ******************************************************
-- * Copyright Notice : (c)2014 OraPub, Inc.
-- * Filename : rss.sql - Realtime Session Sampler
-- * Author : Craig A. Shallahamer
-- * Original : 07-Jun-2014
-- * Last Modified : 15-Jul-2014
-- * Description : Collect and display session activity in realtime
-- * Usage : @rss sid_low sid_high serial#_low serial#_high cpu|wait|% event%|% delay_s
-- * @rss 0 999999 0 999999 % % 1
-- * @rss 0 999999 0 999999 cpu % 1
-- * @rss 0 999999 0 999999 wait % 1
-- * @rss 0 999999 0 999999 wait db%file%seq% 1
-- * @rss 328 328 1 1 % % 0.25
-- ******************************************************

set echo off verify off heading off
--set echo on verify on heading on

def sidLow=&1
def sidHigh=&2
def serLow=&3
def serHigh=&4
def state=&5
def wePartial=&6
def delay=&7

prompt OraPub Realtime Session Sampler - collection and display
prompt
prompt Every &delay second(s), session ID from &sidLow to &sidHigh and serial# from &serLow to &serHigh
prompt in a &state state will be sampled and displayed.
prompt When waiting, only display when wait event is like &wePartial
prompt 
prompt Output will be written to the /tmp/rss_sql.txt file.
prompt To stream output in realtime, in another window do: tail -f /tmp/rss_sql.txt
prompt
accept x prompt "To begin sampling press ENTER. Otherwise break out now."

create or replace directory temp_dir as '/tmp';

prompt To stop sampling, break out (e.g., CNTRL-C)
prompt
prompt Sampling started...

set serveroutput on

begin
declare
 type cv_typ is ref cursor;
 cv cv_typ;
 sql_to_run_v varchar2(999);

fHandle UTL_FILE.FILE_TYPE;

cntr_v number;
 next_sample_time_ts_var timestamp;

current_timestamp_v timestamp;
 sid_v number;
 serial#_v number;
 username_v varchar2(30);
 state_v varchar2(19);
 event_v varchar2(64);
 wecat_v varchar2(64);
 sql_id_v varchar2(13);
 p1_v number;
 p2_v number;
 p3_v number;

begin
 dbms_session.set_identifier('osm:i rss.sql');

fHandle := UTL_FILE.FOPEN('TEMP_DIR', 'rss_sql.txt', 'W');

UTL_FILE.PUT_LINE(fHandle, 'Starting sampling...');
 utl_file.fflush(fHandle);

cntr_v := 0;

while 1 = 1
 loop

cntr_v := cntr_v + 1;

select current_timestamp + interval '0 00:00:&delay' day to second
 into next_sample_time_ts_var
 from dual;

begin

open cv for
 select
 current_timestamp,
 sid,
 serial#,
 decode(username,'','-',username),
 decode(state,'WAITING','WAIT','CPU '),
 nvl(sql_id,'-'),
 decode(state,'WAITING',decode(o.type,'bogus','idle',o.type),'-'),
 decode(state,'WAITING',vs.event,'-') ,
 decode(state,'WAITING',p1,''),
 decode(state,'WAITING',p2,''),
 decode(state,'WAITING',p3,'')
 from v$session vs,
 o$event_type o
 where sid between &sidLow and &sidHigh
 and serial# between &serLow and &serHigh
 and decode(state,'WAITING','WAIT','CPU') like upper('&state%')
 and vs.event like '&wePartial%'
 and vs.event = o.event(+)
 and o.type != 'bogus';

loop
 fetch cv into
 current_timestamp_v, 
 sid_v,
 serial#_v,
 username_v,
 state_v,
 sql_id_v,
 wecat_v,
 event_v,
 p1_v, p2_v, p3_v
 ;
 exit when cv%NOTFOUND;

UTL_FILE.PUT_LINE(fHandle,
 lpad(trim(cntr_v),3,' ')||' '||
 to_char(current_timestamp_v,'HH24:MI:SS:FF3')||' '||
 lpad(trim(sid_v),5,' ')||
 lpad(trim(serial#_v),6,' ')||' '||
 lpad(trim(username_v),10,' ')||' '||
 state_v||' '||
 rpad(trim(sql_id_v),20,' ')||' '||
 rpad(trim(wecat_v),5,' ')||' '||
 rpad(trim(event_v),30,' ')||
 ' ['||p1_v||','||p2_v||','||p3_v||']');
 utl_file.fflush(fHandle);

end loop;
close cv;

EXCEPTION when others then
 cntr_v := cntr_v + 1;
 end;

dbms_lock.sleep( greatest( get_interval_s_fnc( next_sample_time_ts_var - current_timestamp ),0));

end loop;
 UTL_FILE.PUT_LINE(fHandle, 'Ending sampling...');
 UTL_FILE.FCLOSE(fHandle);
end;
end;
/
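
Note that rss.sql calls get_interval_s_fnc, which is not defined in this listing; it is created by the OSM toolkit's osmprep.sql. If you are running rss.sql standalone, a minimal stand-in might look like the sketch below. This is an assumption about the function's behavior (converting a DAY TO SECOND interval to a number of seconds), not the OraPub original.

```sql
-- Hypothetical stand-in for the OSM-provided get_interval_s_fnc.
-- Converts an INTERVAL DAY TO SECOND value into seconds (with fraction).
create or replace function get_interval_s_fnc (
 interval_in in interval day to second
) return number
is
begin
 return extract(day    from interval_in) * 86400
      + extract(hour   from interval_in) * 3600
      + extract(minute from interval_in) * 60
      + extract(second from interval_in);
end;
/
```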

3. Now execute the script to take samples of either an individual session or many sessions.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql 10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see which sessions are waiting on a row-level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Situation: I want to sample all sessions in my database.

SQL> @rss.sql 0 99999 0 99999 % % 0.125
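
Based on the PUT_LINE format string in rss.sql, each sample written to /tmp/rss_sql.txt looks roughly like the line below. The values shown are illustrative only (a hypothetical sid, serial#, and sql_id), not real output:

```
  1 14:05:02:117    10    50      SCOTT WAIT 4vs9c9qmx5ut1        ior   db file sequential read        [4,128,1]
```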

Reference

http://blog.orapub.com/20140811/watch-oracle-db-session-activity-with-the-real-time-session-sampler.html