Oracle PGA (Program Global Area) in 12c

What is the PGA

The Program Global Area (PGA) is a private memory region that contains the data and control information for a server process. Only a server process can access the PGA. Oracle Database reads and writes information in the PGA on behalf of the server process. An example of such information is the run-time area of a cursor. Each time a cursor is executed, a new run-time area is created for that cursor in the PGA memory region of the server process executing that cursor.

For complex queries (such as decision support queries), a big portion of the run-time area is dedicated to work areas allocated by memory-intensive operators, including:

  • Sort-based operators, such as ORDER BY, GROUP BY, ROLLUP, and window functions
  • Hash-join
  • Bitmap merge
  • Bitmap create
  • Write buffers used by bulk load operations

A sort operator uses a work area (the sort area) to perform the in-memory sorting of a set of rows. Similarly, a hash-join operator uses a work area (the hash area) to build a hash table from its left input.

 

Work Area Sizes

Oracle Database enables you to control and tune the sizes of work areas. Generally, bigger work areas can significantly improve the performance of a particular operator at the cost of higher memory consumption. The available work area sizes include:

  • Optimal: A work area has optimal size when it is large enough to accommodate the input data and the auxiliary memory structures allocated by its associated SQL operator. This is the ideal size for the work area.
  • One-pass: A work area has one-pass size when it is below the optimal size and an extra pass must be performed over part of the input data. With one-pass size, the response time is increased.
  • Multi-pass: A work area has multi-pass size when it is below the one-pass threshold and multiple passes over the input data are needed. With multi-pass size, the response time increases dramatically because the work area is too small relative to the input data size.

For example, a serial sort operation that must sort 10 GB of data requires a little more than 10 GB to run as optimal size and at least 40 MB to run as one-pass size. If the work area is less than 40 MB, then the sort operation must perform several passes over the input data.

When sizing the work area, the goal is to have most work areas running with optimal size (more than 90%, or even 100% for pure OLTP systems), and only a small number of them running with one-pass size (less than 10%). Multi-pass executions should be avoided for the following reasons:

  • Multi-pass executions can severely degrade performance. A high number of multi-pass work areas has an exponentially adverse effect on the response time of the associated SQL operator.
  • Running one-pass executions does not require a large amount of memory. Only 22 MB is required to sort 1 GB of data in one-pass size.

Even for DSS systems running large sorts and hash-joins, the memory requirement for one-pass executions is relatively small. A system configured with a reasonable amount of PGA memory should not need to perform multiple passes over the input data.
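
To check how close a running instance is to this goal, the cumulative work area execution counts can be read from V$SYSSTAT. This is the same query used in the test cases later in this post; a healthy instance shows nearly all executions under 'workarea executions - optimal':

SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM  (SELECT name, value cnt, (SUM(value) over ()) total
       FROM   V$SYSSTAT
       WHERE  name LIKE 'workarea exec%');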

Setting the Initial Value for PGA_AGGREGATE_TARGET

Set the initial value of the PGA_AGGREGATE_TARGET initialization parameter based on the amount of available memory for the Oracle database instance. This value can then be tuned and dynamically modified at the instance level. By default, Oracle Database uses 20% of the SGA size for this value. However, this setting may be too low for a large DSS system.

To set the initial value for PGA_AGGREGATE_TARGET:

  1. Determine how much of the total physical memory to reserve for the operating system and other non-Oracle applications running on the same system. For example, you might decide to reserve 20% of the total physical memory for the operating system and other non-Oracle applications, dedicating 80% of the memory on the system to the Oracle database instance.
  2. Divide the remaining available memory between the SGA and the PGA:
    • For OLTP systems, the PGA memory typically makes up a small fraction of the available memory, leaving most of the remaining memory for the SGA. Oracle recommends initially dedicating 20% of the available memory to the PGA, and 80% to the SGA. Therefore, the initial value of the PGA_AGGREGATE_TARGET parameter for an OLTP system can be calculated as: PGA_AGGREGATE_TARGET = (total_mem * 0.8) * 0.2, where total_mem is the total amount of physical memory available on the system.
    • For DSS systems running large, memory-intensive queries, PGA memory can typically use up to 70% of the available memory. Oracle recommends initially dedicating 50% of the available memory to the PGA, and 50% to the SGA. Therefore, the initial value of the PGA_AGGREGATE_TARGET parameter for a DSS system can be calculated as: PGA_AGGREGATE_TARGET = (total_mem * 0.8) * 0.5, where total_mem is the total amount of physical memory available on the system.

For example, if an Oracle database instance is configured to run on a system with 4 GB of physical memory, and if 80% (or 3.2 GB) of the memory is dedicated to the Oracle database instance, then initially set PGA_AGGREGATE_TARGET to 640 MB for an OLTP system, or 1,600 MB for a DSS system.
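
PGA_AGGREGATE_TARGET can be changed dynamically, so a minimal sketch of applying the values computed above is simply:

-- OLTP case: (4 GB * 0.8) * 0.2 = 640 MB
ALTER SYSTEM SET pga_aggregate_target = 640M;

-- DSS case: (4 GB * 0.8) * 0.5 = 1600 MB
-- ALTER SYSTEM SET pga_aggregate_target = 1600M;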

About PGA_AGGREGATE_LIMIT

The PGA_AGGREGATE_LIMIT initialization parameter enables you to specify a hard limit on PGA memory usage. If the PGA_AGGREGATE_LIMIT value is exceeded, Oracle Database aborts or terminates the sessions or processes that are consuming the most untunable PGA memory in the following order:

  • Calls for sessions that are consuming the most untunable PGA memory are aborted.
  • If PGA memory usage is still over the PGA_AGGREGATE_LIMIT, then the sessions and processes that are consuming the most untunable PGA memory are terminated.

In determining the sessions and processes to abort or terminate, Oracle Database treats parallel queries as a single unit.

By default, the PGA_AGGREGATE_LIMIT parameter is set to the greater of 2 GB, 200% of the PGA_AGGREGATE_TARGET value, or 3 MB times the value of the PROCESSES parameter. However, it will not exceed 120% of the physical memory size minus the total SGA size. The default value is printed into the alert log. A warning message is printed in the alert log if the amount of physical memory on the system cannot be determined.
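
To see the value the instance actually derived, a quick check (output will differ per system):

SQL> show parameter pga_aggregate_limit

SELECT value/1024/1024 AS limit_mb
FROM   v$parameter
WHERE  name = 'pga_aggregate_limit';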

AUTOMATIC PGA MEMORY MANAGEMENT

Automatic PGA memory management resolves the problems of manual work area sizing by letting the DBA allocate one aggregate PGA target for all server processes, which is then distributed among SQL operations as required. In this mode, Oracle dynamically adapts the SQL memory allocation based on:

  • PGA memory available
  • SQL operator needs
  • System workload

With automatic PGA memory management, sizing of SQL work areas for all dedicated server sessions is automatic and all *_AREA_SIZE initialization parameters are ignored for these sessions. At any given time, the total amount of PGA memory available to active work areas on the instance is automatically derived from the parameter PGA_AGGREGATE_TARGET. This amount is set to the value of PGA_AGGREGATE_TARGET minus the PGA memory allocated for other purposes (for example, session memory). The resulting PGA memory is then allotted to individual active work areas based on their specific memory requirements.
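
This derivation can be observed directly in V$PGASTAT, where 'aggregate PGA auto target' is the portion of PGA_AGGREGATE_TARGET currently left for tunable work areas (the same statistics appear in the test cases below):

SELECT name, ROUND(value/1024/1024) AS mb
FROM   V$PGASTAT
WHERE  name IN ('aggregate PGA target parameter',
                'aggregate PGA auto target');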

To implement it, two parameters need to be set:

WORKAREA_SIZE_POLICY = AUTO
PGA_AGGREGATE_TARGET = target size of the PGA for the whole instance

Given the specified PGA target, Oracle itself tunes the size of each individual work area, depending on the workload and the memory available.

Earlier releases required the DBA to manually specify the maximum work area size for each type of SQL operator (for example, SORT_AREA_SIZE and HASH_AREA_SIZE).
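
For contrast, a sketch of that manual mode with illustrative sizes; with automatic management these session settings are unnecessary:

ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 1048576;   -- 1 MB per sort work area
ALTER SESSION SET hash_area_size = 2097152;   -- 2 MB per hash work area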

PGA Overallocation vs. tunable and non-tunable areas

The PGA has two components: tunable and untunable.

Untunable PGA consists of:

  • Context information of each session
  • Each open cursor
  • PL/SQL, OLAP or Java memory

This component of the PGA cannot be tuned: whatever memory is needed will be consumed, or else the operation fails.

Tunable PGA consists of the memory available to SQL work areas (used by various sort and hash operations):

  •  approx. 90% of PGA in DSS systems
  •  approx. 10% of PGA in OLTP systems

This component is tunable in the sense that the memory consumed may be less than what is needed; the operation will still complete, but it may spill to disk. Hence, increasing the available memory may improve the performance of such operations.

We can control the number of cursors with the init.ora OPEN_CURSORS parameter, but if this limit is exceeded we get an ORA-1000 error, which is clearly undesirable. See unpublished Note:1012266.6 – “Overview of ORA-1000 Maximum Number of Cursors Exceeded” for more information. More importantly, however, we have no control over the size of a cursor, and users may open very large cursors depending on their SQL or PL/SQL.
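
To watch how many cursors each session currently holds open against the OPEN_CURSORS limit, a sketch based on the 'opened cursors current' statistic:

SELECT s.sid, s.username, st.value AS open_cursors
FROM   v$sesstat st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'opened cursors current'
ORDER  BY st.value DESC;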
 
Also note that if we set PGA_AGGREGATE_TARGET too small to accommodate the non-tunable part of the PGA plus the minimum memory required to execute the tunable part, then Oracle cannot honour the PGA_AGGREGATE_TARGET value and will attempt to allocate extra memory. This is known as overallocation, and an estimate of it can be seen in the view V$PGA_TARGET_ADVICE under the column ESTD_OVERALLOC_COUNT.

Because PGA memory is divided into tunable and non-tunable areas, and only the tunable part is constrained by PGA_AGGREGATE_TARGET, the non-tunable part can grow to any size (until ORA-04030 is raised). Oracle records this activity in the ESTD_OVERALLOC_COUNT column of V$PGA_TARGET_ADVICE; a nonzero ESTD_OVERALLOC_COUNT is therefore caused by excessive requests for non-tunable memory.
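
A sketch of reading the advisory; rows where ESTD_OVERALLOC_COUNT is nonzero indicate candidate target sizes that would be too small to cover the non-tunable requirement:

SELECT ROUND(pga_target_for_estimate/1024/1024) AS target_mb,
       estd_pga_cache_hit_percentage            AS estd_hit_pct,
       estd_overalloc_count
FROM   v$pga_target_advice
ORDER  BY pga_target_for_estimate;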

Before each overallocation, Oracle probably first tries to deallocate less-used memory (for example, via an LRU algorithm). If that does not free enough memory, new memory is allocated.

OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. Under overallocation, session cursor caches could be subject to memory deallocation.

Test Case

Test case 1:-

First we will create a table of about 3 GB. My pga_aggregate_limit = 2G and pga_aggregate_target = 1G. We will sort the table to check the behavior of the PGA.

Initially, before running the query, let me capture the statistics.

col NAME for a40;
SELECT NAME, VALUE, UNIT FROM V$PGASTAT
where NAME in ('aggregate PGA target parameter', 'aggregate PGA auto target', 'total PGA inuse ',
'total PGA allocated', 'maximum PGA used for auto workareas', 'cache hit percentage', 'over allocation count');

NAME                                          VALUE UNIT
---------------------------------------- ---------- ------------
aggregate PGA target parameter           1073741824 bytes
aggregate PGA auto target                 821412864 bytes
total PGA allocated                       216955904 bytes
maximum PGA used for auto workareas               0 bytes
over allocation count                             0
cache hit percentage                            100 percent

6 rows selected.
SQL> show parameter pga;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 2G
pga_aggregate_target                 big integer 1G

SQL> select bytes/1024/1024/1024 from dba_segments where segment_name='T1';

BYTES/1024/1024/1024
--------------------
          3.31640625
SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          6002        100
workarea executions - onepass                                             0          0
workarea executions - multipass                                           0          0
SQL> exec dbms_workload_repository.create_snapshot;

SQL> COL NAME FOR A35
SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                  VALUE_MB
----------------------------------- ----------
aggregate PGA target parameter            1024
total PGA inuse                     144.376953
total PGA allocated                 196.063477
over allocation count                        0

Now I will run the query in another session

SQL> set autotrace traceonly;
SQL> select * from t1 order by c1;
2999999 rows selected.

At the same time, I will check what happens in the work area using the following query:

col sid for 99999
col operation for a20
col start_time for a20
col expected_size_MB for 99,999.9
col Act_Used_MB for 99,999.9
col Max_Used_MB for 99,999.9
col TEMP_Seg_MB for 999,999
select to_char(sql_exec_start, 'yyyy-mm-dd hh24:mi:ss') start_time, sql_id,
      to_number(decode(sid, 65535, NULL, sid)) sid,
      operation_type operation, trunc(expected_size/1024/1024) expected_size_MB,
      trunc(actual_mem_used/1024/1024) Act_Used_MB, trunc(max_mem_used/1024/1024) Max_Used_MB,
      number_passes pass, trunc(tempseg_size/1024/1024) TEMP_Seg_MB
FROM v$sql_workarea_active
ORDER BY 1,2;

START_TIME           SQL_ID           SID OPERATION            EXPECTED_SIZE_MB ACT_USED_MB MAX_USED_MB       PASS TEMP_SEG_MB
-------------------- ------------- ------ -------------------- ---------------- ----------- ----------- ---------- -----------
2018-02-08 07:37:42  0k6yvhdy619tc    357 SORT (v2)                        14.0        14.0        87.0          1       2,955

Let me capture the statistics again.

SQL> col NAME for a40;
SQL> SELECT NAME, VALUE, UNIT FROM V$PGASTAT
where NAME in ('aggregate PGA target parameter', 'aggregate PGA auto target', 'total PGA inuse ',
'total PGA allocated', 'maximum PGA used for auto workareas', 'cache hit percentage', 'over allocation count');

NAME                                          VALUE UNIT
---------------------------------------- ---------- ------------
aggregate PGA target parameter           1073741824 bytes
aggregate PGA auto target                 830739456 bytes
total PGA allocated                       203883520 bytes
maximum PGA used for auto workareas       107125760 bytes
over allocation count                             0
cache hit percentage                          51.45 percent

6 rows selected.
SQL> COL NAME FOR A35
SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                  VALUE_MB
----------------------------------- ----------
aggregate PGA target parameter            1024
total PGA inuse                     143.730469
total PGA allocated                 194.438477
over allocation count                        0
SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          6484        100
workarea executions - onepass                                             2          0
workarea executions - multipass                                           0          0

Test case 2:- Let me run the sort again.

SQL>  set autotrace traceonly;
SQL> select * from t1 order by c1;
2999999 rows selected.

Let me capture the statistics again

SQL> COL NAME FOR A35
SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                  VALUE_MB
----------------------------------- ----------
aggregate PGA target parameter            1024
total PGA inuse                     153.366211
total PGA allocated                 209.775391
over allocation count                        0
SQL> SELECT NAME, VALUE, UNIT FROM V$PGASTAT
where NAME in ('aggregate PGA target parameter', 'aggregate PGA auto target', 'total PGA inuse ',
'total PGA allocated', 'maximum PGA used for auto workareas', 'cache hit percentage', 'over allocation count');

NAME                                     VALUE UNIT
----------------------------------- ---------- ------------
aggregate PGA target parameter      1073741824 bytes
aggregate PGA auto target            823560192 bytes
total PGA allocated                  216297472 bytes
maximum PGA used for auto workareas  107125760 bytes
over allocation count                        0
cache hit percentage                     50.93 percent

6 rows selected.
SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          7095        100
workarea executions - onepass                                             4          0
workarea executions - multipass                                           0          0

AWR statistics related to PGA

 

 

Test Case 3:-

Let me increase the PGA_AGGREGATE_TARGET to 10GB.

SQL> set autotrace traceonly;

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> select * from t1 order by c1;

2999999 rows selected.

At the same time, I will check what happens in the work area using the following query:

SQL> col sid for 99999
SQL> col operation for a20
SQL> col start_time for a20
SQL> col expected_size_MB for 99,999.9
SQL> col Act_Used_MB for 99,999.9
SQL> col Max_Used_MB for 99,999.9
SQL> col TEMP_Seg_MB for 999,999
SQL> select to_char(sql_exec_start, 'yyyy-mm-dd hh24:mi:ss') start_time, sql_id,
      to_number(decode(sid, 65535, NULL, sid)) sid,
      operation_type operation, trunc(expected_size/1024/1024) expected_size_MB,
      trunc(actual_mem_used/1024/1024) Act_Used_MB, trunc(max_mem_used/1024/1024) Max_Used_MB,
      number_passes pass, trunc(tempseg_size/1024/1024) TEMP_Seg_MB
FROM v$sql_workarea_active
ORDER BY 1,2;

START_TIME           SQL_ID           SID OPERATION            EXPECTED_SIZE_MB ACT_USED_MB MAX_USED_MB       PASS TEMP_SEG_MB
-------------------- ------------- ------ -------------------- ---------------- ----------- ----------- ---------- -----------
2018-02-08 07:58:23  0k6yvhdy619tc    357 SORT (v2)                         1.0          .0       870.0          1       2,955
SQL> COL NAME FOR A35
SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                  VALUE_MB
----------------------------------- ----------
aggregate PGA target parameter           10240
total PGA inuse                     1081.51172
total PGA allocated                 1245.37207
over allocation count                        0

Let me capture statistics again after the query has executed.

SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          7711        100
workarea executions - onepass                                             8          0
workarea executions - multipass                                           0          0

SQL> SELECT NAME, VALUE, UNIT FROM V$PGASTAT
where NAME in ('aggregate PGA target parameter', 'aggregate PGA auto target', 'total PGA inuse ',
'total PGA allocated', 'maximum PGA used for auto workareas', 'cache hit percentage', 'over allocation count');

NAME                                     VALUE UNIT
----------------------------------- ---------- ------------
aggregate PGA target parameter      1.0737E+10 bytes
aggregate PGA auto target           9517178880 bytes
total PGA allocated                  218789888 bytes
over allocation count                        0
cache hit percentage                     50.57 percent

AWR report related to PGA:-

Test case 4:-

Now we will test the non-tunable component (a PL/SQL collection) and see the effect of PGA_AGGREGATE_LIMIT.

Let me set the PGA parameters to lower values.

SQL> show parameter pga;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 20G
pga_aggregate_target                 big integer 10G

SQL> alter system set pga_aggregate_target=1G;

System altered.

SQL> alter system set pga_aggregate_limit=2G;

System altered.

Now create the PL/SQL package and populate its PL/SQL table:

SQL> create or replace package demo_pkg as
type array is table of char(2000) index by binary_integer;
g_data array;
end;
/

SQL> begin
for i in 1 .. 200000
loop
demo_pkg.g_data(i) := 'x';
end loop;
end;
/

Let me capture the statistics

SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          7954        100
workarea executions - onepass                                             8          0
workarea executions - multipass                                           0          0
SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                                               VALUE_MB
---------------------------------------------------------------- ----------
aggregate PGA target parameter                                         1024
total PGA inuse                                                   604.05957
total PGA allocated                                              656.980469
over allocation count                                                     0

Now I will increase the loop count to 2,000,000 to simulate the error. This will cause the PL/SQL table to allocate more memory in the PGA.

SQL> begin
for i in 1 .. 2000000
loop
demo_pkg.g_data(i) := 'x';
end loop;
end;
/
begin
*
ERROR at line 1:
ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Let me capture statistics after the failure.

SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                                               VALUE_MB
---------------------------------------------------------------- ----------
aggregate PGA target parameter                                         1024
total PGA inuse                                                  593.813477
total PGA allocated                                              627.732422
over allocation count                                            .000016212
SQL> SELECT name profile, cnt, DECODE(total, 0, 0, ROUND(cnt*100/total)) percentage
FROM (SELECT name, value cnt, (SUM(value) over ()) total
FROM V$SYSSTAT
WHERE name LIKE 'workarea exec%');

PROFILE                                                                 CNT PERCENTAGE
---------------------------------------------------------------- ---------- ----------
workarea executions - optimal                                          8067        100
workarea executions - onepass                                             8          0
workarea executions - multipass                                           0          0

Test case 5:

Now I will increase pga_aggregate_limit to 10G and check the behavior. The PL/SQL block will run fine now.

SQL> alter system set pga_aggregate_limit=10G;

System altered.

SQL> exec dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

SQL> begin
for i in 1 .. 2000000
loop
demo_pkg.g_data(i) := 'x';
end loop;
end;
/

PL/SQL procedure successfully completed.

SQL> SELECT NAME, VALUE/1024/1024 VALUE_MB
FROM   V$PGASTAT
WHERE NAME IN ('aggregate PGA target parameter',
'total PGA allocated',
'total PGA inuse',
'over allocation count');

NAME                                                               VALUE_MB
---------------------------------------------------------------- ----------
aggregate PGA target parameter                                         1024
total PGA inuse                                                  4607.88965
total PGA allocated                                              4641.04688
over allocation count                                            .000023842

AWR report status:-

Reference:-

https://docs.oracle.com/database/121/TGDBA/tune_pga.htm#TGDBA95344

http://oracleinaction.com/tune-pga-i/

Indexes in 12c by example test cases

A. B-Tree Indexes

I have discussed the details in the blog post below.

Index internal oracle using simple test case

B. Invisible Indexes

Deciding which columns to index is hard. The primary key is automatically indexed and the foreign keys should also be indexed, but then what? Even more difficult is deciding which index to remove because it might be a bad index. Every time a record is inserted, all of the indexes have to be updated, and if the column of an index is updated, the index has to be updated. Use the /*+ use_invisible_indexes */ hint to access all invisible indexes.
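
Besides the hint, invisible index usage can be enabled for a whole session through the OPTIMIZER_USE_INVISIBLE_INDEXES parameter:

-- Make the optimizer consider invisible indexes in this session only
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;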

Test Case:-

SQL> create table t1 as select * from dba_objects;

SQL>create index I1_T1 on t1(OBJECT_ID);

SQL> select OBJECT_NAME from t1 where object_id=30;

---------------------------------------------------------------------------------------------

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |       |     1 |    30 |     2   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    30 |     2   (0)| 00:00:01 |

|*  2 |   INDEX RANGE SCAN                  | I1_T1 |     1 |       |     1   (0)| 00:00:01 |

---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("OBJECT_ID"=30)

SQL>alter index I1_T1 invisible;

SQL> select OBJECT_NAME from t1 where object_id=30;

--------------------------------------------------------------------------

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------

|   0 | SELECT STATEMENT  |      |     1 |    30 |   426   (1)| 00:00:01 |

|*  1 |  TABLE ACCESS FULL| T1   |     1 |    30 |   426   (1)| 00:00:01 |

--------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

1 - filter("OBJECT_ID"=30)

SQL> select /*+ use_invisible_indexes */ OBJECT_NAME from t1 where object_id=30;

---------------------------------------------------------------------------------------------

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |       |     1 |    30 |     2   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |     1 |    30 |     2   (0)| 00:00:01 |

|*  2 |   INDEX RANGE SCAN                  | I1_T1 |     1 |       |     1   (0)| 00:00:01 |

---------------------------------------------------------------------------------------------

2 - access("OBJECT_ID"=30)

SQL>  select index_name,visibility from dba_indexes where index_name='I1_T1';

INDEX_NAME                     VISIBILIT

------------------------------ ---------

I1_T1                          INVISIBLE

C. Multiple Types of Indexes on the Same Column(s)

There can be multiple types of indexes on the same column(s), but only one of them may be visible at a time, unless the other index is a function-based index (a function-based index isn’t really on the same column; it’s on a function of the column). This feature is great for variable workloads on the same table: you can use different types of indexes for batch, query, or data warehousing work at different times of the day or night.

Test case:-

In this test case we create both a bitmap and a b-tree index on the same column, but only one index will be visible at a time.

SQL> create index I2_t1 on t1(object_type);

SQL> alter index I2_t1 invisible;

SQL> create bitmap index T3_BM_T1 on t1(object_type);

SQL> alter index I2_T1 invisible;

SQL> alter index T3_BM_T1 visible;

SQL> select object_name from t1 where object_type='PROCEDURE';

Execution Plan

------------------------------------------------------------------------------------------------

| Id  | Operation                           | Name     | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |          |  2068 | 70312 |   281   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1       |  2068 | 70312 |   281   (0)| 00:00:01 |

|   2 |   BITMAP CONVERSION TO ROWIDS       |          |       |       |            |          |

|*  3 |    BITMAP INDEX SINGLE VALUE        | T3_BM_T1 |       |       |            |          |

------------------------------------------------------------------------------------------------

3 - access("OBJECT_TYPE"='PROCEDURE')

SQL> alter index T3_BM_T1 invisible;

SQL> alter index I2_T1 visible;

SQL> select object_name from t1 where object_type='PROCEDURE';

---------------------------------------------------------------------------------------------

| Id  | Operation                           | Name  | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |       |  2068 | 70312 |   100   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1    |  2068 | 70312 |   100   (0)| 00:00:01 |

|*  2 |   INDEX RANGE SCAN                  | I2_T1 |  2068 |       |     6   (0)| 00:00:01 |

---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("OBJECT_TYPE"='PROCEDURE')

It is possible to create two visible indexes on a column at once, if one of those indexes is a function-based index.

Using invisible indexes in 12c, you can now create multiple indexes on the same column or columns. This can be helpful with variable workloads: perhaps use a b-tree index for the daily workload and a reverse key index at night (see the sketch below). But keep in mind that all of these indexes must be maintained internally, which adds cost to INSERTs, DELETEs, and updates of the indexed columns.
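
A minimal sketch of such a switch, reusing the index names from the test case above (a reverse key alternative would be created with CREATE INDEX ... REVERSE, as in section G):

-- Daytime: b-tree visible, bitmap invisible
SQL> alter index T3_BM_T1 invisible;
SQL> alter index I2_T1 visible;

-- Night batch: swap them back
SQL> alter index I2_T1 invisible;
SQL> alter index T3_BM_T1 visible;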

D. Concatenated Indexes and Skip scan indexes

When a single index has multiple columns that are indexed, it is called a concatenated or composite index. While Oracle’s introduction of skip-scan index access has increased the optimizer’s options when using concatenated indexes, you should be careful when selecting the order of the columns in the index. In general, the leading column of the index should be the one most likely to be used in WHERE clauses and also the most selective column of the set.

Test case:-

SQL> create index I1_comp_t1 on t1(object_id,object_type);

SQL> drop index I2_T1;

SQL> drop index T3_BM_T1;

SQL> select object_name from t1 where object_id=30;

--------------------------------------------------------------------------------------------------

| Id  | Operation                           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |            |     1 |    30 |     3   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1         |     1 |    30 |     3   (0)| 00:00:01 |

|*  2 |   INDEX RANGE SCAN                  | I1_COMP_T1 |     1 |       |     2   (0)| 00:00:01 |

--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

--------------------------------------------------

2 - access("OBJECT_ID"=30)

Skip scan:-

As discussed in the section “Concatenated Indexes”, the index skip-scan feature enables the optimizer to use a concatenated index even if its leading column is not listed in the WHERE clause. Index skip-scans are faster than full scans of the index, requiring fewer reads to be performed.

SQL> select object_name from t1 where object_type='SEQUENCE';

--------------------------------------------------------------------------------------------------

| Id  | Operation                           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |            |  2068 | 70312 |   372   (0)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T1         |  2068 | 70312 |   372   (0)| 00:00:01 |

|*  2 |   INDEX SKIP SCAN                   | I1_COMP_T1 |  2068 |       |   314   (0)| 00:00:01 |

--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("OBJECT_TYPE"='SEQUENCE')

filter("OBJECT_TYPE"='SEQUENCE')

 

E. Index Fast full scan

The fast full scan can be used if all of the columns in the query are present in the index and the leading edge of the index is not part of the WHERE condition. During a fast full scan of an index, Oracle reads all of the leaf blocks in a b-tree index. The index is read sequentially, so multiple blocks can be read at once. The DB_FILE_MULTIBLOCK_READ_COUNT parameter in the initialization file controls the number of blocks that can be read in a single I/O. The fast full scan usually requires fewer physical I/Os than a full table scan, allowing the query to be resolved faster.
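
A quick way to inspect and, if appropriate, adjust the parameter (the value here is purely illustrative):

SQL> show parameter db_file_multiblock_read_count

SQL> alter system set db_file_multiblock_read_count = 128;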

Test case:-

SQL> create table t1(c1 number,c2 char(10),c3 date);

SQL>  insert into t1 select rownum,'A',sysdate from dual connect by rownum<100000;

SQL> commit;

SQL> insert into t1 select rownum,'B',sysdate from dual connect by rownum<100;

SQL> commit;

SQL> exec dbms_stats.gather_table_stats('C##TEST','T1');

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly;

SQL> select c1 from t1 where c2='B';

Execution Plan

--------------------------------------------------------------------------

| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------

|   0 | SELECT STATEMENT  |      | 50049 |   782K|   137   (1)| 00:00:01 |

|*  1 |  TABLE ACCESS FULL| T1   | 50049 |   782K|   137   (1)| 00:00:01 |

--------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

1 - filter("C2"='B')

SQL> create index I1_T1 on T1(c1,c2);

Index created.

SQL> exec dbms_stats.gather_table_stats('C##TEST','T1');

PL/SQL procedure successfully completed.

SQL> select c1 from t1 where c2='B';

Execution Plan

| Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |

------------------------------------------------------------------------------

|   0 | SELECT STATEMENT     |       |    98 |  1568 |   104   (1)| 00:00:01 |

|*  1 |  INDEX FAST FULL SCAN| I1_T1 |    98 |  1568 |   104   (1)| 00:00:01 |

------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

1 - filter("C2"='B')

F. Bitmap Index

Bitmap indexes are not generally recommended for online transaction processing (OLTP) applications. B-tree indexes contain a ROWID with the indexed value, so when updating tables and their indexes, Oracle has the ability to lock individual rows. Bitmap indexes are stored as compressed indexed values, which can contain a range of ROWIDs, so Oracle has to lock the entire range of ROWIDs for a given value. This type of locking has the potential to cause deadlock situations with certain types of DML statements. SELECT statements are not affected by this locking problem. A solution for updates is to drop the index, do the updating in batch during off-hours, and then rebuild the bitmap index (you could possibly also add or drop an index on the column(s) involved to make the update faster).
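
A sketch of that batch-maintenance approach, reusing the index name from the test case below; the batch DML itself is only indicated:

SQL> drop index B_test_bitmap;

-- ... run the batch updates during off-hours ...

SQL> create bitmap index B_test_bitmap on test_bitmap(c2);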

Bitmap indexes have several restrictions:

Bitmap indexes are not considered by the rule-based optimizer.

Performing an ALTER TABLE statement and modifying a column that has a bitmap index built on it invalidates the index.

Bitmap indexes do not contain any of the data from the column and cannot be used for any type of integrity checking.

Bitmap indexes cannot be declared as unique.

Bitmap indexes have a maximum length of 30 columns.

Test case:-

In this test case we will see how a bitmap index join can be more efficient than a b-tree index.

1. First with bitmap indexes

SQL> create table test_bitmap(c1 number,c2 char(100),c3 char(100));

SQL> insert into test_bitmap select rownum,'A','AA' from dual connect by rownum<1000000;

SQL> insert into test_bitmap select rownum,'B','BB'  from dual connect by rownum<100000;

SQL> insert into test_bitmap select rownum,'C','CC' from dual connect by rownum<10000;

SQL> create bitmap index B_test_bitmap on test_bitmap(c2);

SQL> create bitmap index B_test_bitmap1 on test_bitmap(c3);

SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_BITMAP');

SQL> select c3 from test_bitmap where c2='B';

99999 rows selected.

Elapsed: 00:00:00.36

Execution Plan

----------------------------------------------------------

Plan hash value: 892894935

--------------------------------------------------------------------------------------------------

| Id  | Operation                     | Name             | Rows  | Bytes | Cost (%CPU)| Time     |

--------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT              |                  |   369K|    71M|  8665   (1)| 00:00:01 |

|*  1 |  VIEW                         | index$_join$_001 |   369K|    71M|  8665   (1)| 00:00:01 |

|*  2 |   HASH JOIN                   |                  |       |       |            |          |

|   3 |    BITMAP CONVERSION TO ROWIDS|                  |   369K|    71M|    12   (0)| 00:00:01 |

|*  4 |     BITMAP INDEX SINGLE VALUE | B_TEST_BITMAP    |       |       |            |          |

|   5 |    BITMAP CONVERSION TO ROWIDS|                  |   369K|    71M|    35   (0)| 00:00:01 |

|   6 |     BITMAP INDEX FULL SCAN    | B_TEST_BITMAP1   |       |       |            |          |

--------------------------------------------------------------------------------------------------

2. With b-tree indexes.

SQL> create index I_TEST_BITMAP on TEST_BITMAP(C2);

Index created.

Elapsed: 00:00:02.98

SQL> create index I_TEST_BITMAP1 on TEST_BITMAP(C3);

Index created.

Elapsed: 00:00:02.09

SQL>  select c3 from test_bitmap where c2='B';

99999 rows selected.

Elapsed: 00:00:00.59

Execution Plan

------------------------------------------------------------------------------------------------------

| Id  | Operation                           | Name          | Rows  | Bytes | Cost (%CPU)| Time     |

-----------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT                    |               |   369K|    71M| 16675   (1)| 00:00:01 |

|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| TEST_BITMAP   |   369K|    71M| 16675   (1)| 00:00:01 |

|*  2 |   INDEX RANGE SCAN                  | I_TEST_BITMAP |   369K|       |  5788   (1)| 00:00:01 |

-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access("C2"='B')

Please note the performance improvement with the bitmap index join (elapsed 00:00:00.36 versus 00:00:00.59, at roughly half the cost).

G. Reverse key index

When sequential data is loaded, the index may encounter I/O-related bottlenecks: during the load, one part of the index, and hence one part of the disk, may be used much more heavily than any other part. To remedy this problem, you should store your index tablespaces on disk architectures that permit the files to be physically striped across multiple disks. Oracle provides reverse key indexes as another solution to this performance problem. When data is stored in a reverse key index, its values are reversed prior to being stored in the index. Thus, the values 1234, 1235, and 1236 are stored as 4321, 5321, and 6321. As a result, the index may update different index blocks for each inserted row.

In the test case below, I will show how buffer busy waits can be minimized when 30 parallel sessions insert data into a table using a sequence. There is an index on the primary key of the table.

Test case:-

1. Using a simple b-tree index

SQL> CREATE TABLESPACE data_assm DATAFILE '+DATA' SIZE 5G EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SQL> CREATE TABLESPACE ind_assm DATAFILE '+DATA' SIZE 5G EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SQL> create table t (id number, sometext varchar2(50),my_date date) tablespace data_assm;

SQL> create or replace procedure manyinserts as
begin
for i in 1..1000000 loop
insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?',sysdate);
end loop;
commit;
end;
/

Procedure created.

SQL> create or replace procedure manysessions as
v_jobno number:=0;
begin
FOR i in 1..30 LOOP
dbms_job.submit(v_jobno,'manyinserts;', sysdate);
END LOOP;
commit;
end;
/

Procedure created.

SQL> create index i_t on t(id) tablespace ind_assm;

Index created.

SQL> exec manysessions;   (This will start loading data from 30 parallel sessions)

SQL> exec dbms_stats.gather_table_stats('C##TEST','T');

PL/SQL procedure successfully completed.

SQL> EXEC dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

In AWR, the top 10 wait events:-

2. Using a reverse key index

SQL> CREATE TABLESPACE data_assm DATAFILE '+DATA' SIZE 5G EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SQL> CREATE TABLESPACE ind_assm DATAFILE '+DATA' SIZE 5G EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

SQL> create table t (id number, sometext varchar2(50),my_date date) tablespace data_assm;

SQL> create or replace procedure manyinserts as
begin
for i in 1..1000000 loop
 insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?',sysdate);
end loop;
commit;
end;
/

Procedure created.

SQL> create or replace procedure manysessions as
v_jobno number:=0;
begin
FOR i in 1..30 LOOP
dbms_job.submit(v_jobno,'manyinserts;', sysdate);
END LOOP;
commit;
end;
/

Procedure created.

SQL> create index i_t on t(id) reverse tablespace ind_assm;

Index created.

SQL> exec manysessions;   (This will start loading data from 30 parallel sessions)

SQL> exec dbms_stats.gather_table_stats('C##TEST','T');

PL/SQL procedure successfully completed.

SQL> EXEC dbms_workload_repository.create_snapshot;

PL/SQL procedure successfully completed.

AWR

 

H. Function-based index

You can create function-based indexes on your tables. Without function-based indexes, any query that performed a function on a column could not use that column’s index.

Test case:-

This test case will show when we need to create a function-based index, and how we can avoid one by modifying the WHERE clause.

create table test_func(c1 number,c2 date);

insert into test_func select rownum,sysdate from dual connect by rownum<100;

insert into test_func select rownum,sysdate-1 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-2 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-3 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-4 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-5 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-6 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-7 from dual connect by rownum<100;

insert into test_func select rownum,sysdate-8 from dual connect by rownum<100;

create index I_C2 on test_func(c2);

create index I_C2_FUNC on test_func(trunc(c2));

exec dbms_stats.gather_table_stats('C##TEST','TEST_FUNC');

SQL> select count(1) from test_func where trunc(c2)='03-FEB-18';

COUNT(1)

----------

100

Execution Plan



-------------------------------------------------------------------------------

| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |

-------------------------------------------------------------------------------

|   0 | SELECT STATEMENT  |           |     1 |     8 |     1   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE   |           |     1 |     8 |            |          |

|*  2 |   INDEX RANGE SCAN| I_C2_FUNC |    39 |   312 |     1   (0)| 00:00:01 |

-------------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - access(TRUNC(INTERNAL_FUNCTION("C2"))='03-FEB-18')

SQL> select count(1) from test_func where c2>'03-FEB-18' and c2<(to_date('03-FEB-18')+0.99999);

COUNT(1)

----------

100

Execution Plan

---------------------------------------------------------------------------

| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |

---------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |      |     1 |    16 |     1   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE    |      |     1 |    16 |            |          |

|*  2 |   FILTER           |      |       |       |            |          |

|*  3 |    INDEX RANGE SCAN| I_C2 |    29 |   464 |     1   (0)| 00:00:01 |

---------------------------------------------------------------------------

Predicate Information (identified by operation id):

---------------------------------------------------

2 - filter(TO_DATE('03-FEB-18')+.999988425925925925925925925925925925

9259>'03-FEB-18')

3 - access("C2">'03-FEB-18' AND "C2"<TO_DATE('03-FEB-18')+.9999884259

259259259259259259259259259259)

filter(TRUNC(INTERNAL_FUNCTION("C2"))>=TRUNC('03-FEB-18') AND

TRUNC(INTERNAL_FUNCTION("C2"))<=TRUNC(TO_DATE('03-FEB-18')+.999988425925

9259259259259259259259259259))

I. Index-organized table

Properties and restrictions

  • An IOT must contain a primary key.

  • Rows are accessed via a logical rowid and not a physical rowid like in heap-organized tables.

  • An IOT cannot be in a cluster.

  • An IOT cannot contain a column of LONG data type.

  • You cannot modify an IOT index property using ALTER INDEX (error ORA-25176), you must use an ALTER TABLE instead.

Advantages of an IOT

  • As an IOT has the structure of an index and stores all the columns of the row, accesses via primary key conditions are faster as they don’t need to access the table to get additional column values.

  • As an IOT has the structure of an index and is thus sorted in the order of the primary key, accesses of a range of primary key values are also faster.

  • As the index and the table are in the same segment, less storage space is needed.

  • In addition, as rows are stored in the primary key order, you can further reduce space with key compression (see the sketch after this list).

  • As all indexes on an IOT uses logical rowids, they will not become unusable if the table is reorganized.
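
As a minimal sketch of key compression on an IOT (the table and columns here are hypothetical; COMPRESS 1 compresses the single leading column of the primary key):

SQL> CREATE TABLE iot_demo (
       owner     VARCHAR2(30),
       obj_name  VARCHAR2(30),
       created   DATE,
       CONSTRAINT iot_demo_pk PRIMARY KEY (owner, obj_name)
     ) ORGANIZATION INDEX COMPRESS 1;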

Row overflow area

If some columns of the table are infrequently accessed, it is possible to offload them into another segment named the overflow area. An overflow segment will decrease the size of the main (or top) segment and will increase the performance of statements that do not need to access the columns in the overflow area. The overflow segment can reside in a tablespace different from the main segment.

Notes:

  • The overflow area can contain only columns that are not part of the primary key.

  • If a row cannot fit in a block, you must define an overflow area.

  • Consequently, the primary key values of an IOT must fit in a single block.

The columns of the table that are recorded in the overflow segment are defined using the PCTHRESHOLD and/or INCLUDING options of the OVERFLOW clause (see example below).

Test case:-

SQL> CREATE TABLE test_iot (c1 INTEGER PRIMARY KEY, c2 char(100),c3 date) ORGANIZATION INDEX INCLUDING c2 OVERFLOW;

SQL> insert into test_iot select rownum,'A',sysdate from dual connect by rownum<100000;

SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_IOT');

PL/SQL procedure successfully completed.

SQL> select count(1) from test_iot where c1=1;

COUNT(1)

----------

1

Execution Plan

----------------------------------------------------------------------------------------

| Id  | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |

----------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT   |                   |     1 |     5 |     2   (0)| 00:00:01 |

|   1 |  SORT AGGREGATE    |                   |     1 |     5 |            |          |

|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_92575 |     1 |     5 |     2   (0)| 00:00:01 |

----------------------------------------------------------------------------------------

Index internal oracle using simple test case

First of all, I will create a table with a char(1000) column so that 6-7 entries fit in one block.

Here are some relevant notes regarding index splits.

For Leaf block

kdxlespl: bytes of uncommitted data at time of block split that have been cleaned out
kdxlende: number of deleted entries
kdxlenxt: pointer to the next leaf block in the index structure via corresponding rba
kdxleprv: pointer to the previous leaf block in the index structure via corresponding rba
Kdxledsz: deleted space
kdxlebksz: usable block space (by default less than branch due to the additional ITL entry)

For branch block

kdxbrlmc: block address if index value is less than the first (row#0) value
kdxbrsno: last index entry to be modified
kdxbrbksz: size of usable block space

Notes on 50-50 Block Split

An index block split is a relatively expensive operation:
1. Allocate a new index block from the index freelist
2. Redistribute the block so that the lower half (by volume) of index entries remains in the current block, and move the other half into the new block
3. Insert the new index entry into the appropriate leaf block
4. Update the previously full block such that its “next leaf block pointer” (kdxlenxt) references the new block
5. Update the leaf block that was to the right of the previously full block such that its “previous leaf block pointer” (kdxleprv) also points to the new block
6. Update the branch block that references the full block and add a new entry to point to the new leaf block (effectively the lowest value in the new leaf block)

50-50 Root Block Split

The root block is just a special case of a branch block:
1. Allocate two new blocks from the freelist
2. Redistribute the entries in the root block such that half the entries are placed in one new block, and the other half in the other block
3. Update the root block such that it now references the two new blocks

The root block is always physically the same block, and a root block split is the only time the height of the index increases. Therefore an index must always be balanced. Always!! Suggestions that Oracle indexes become unbalanced are another silly myth, made by those that don’t understand index block splits. (These notes follow Richard Foote’s “Index Internals” presentation.)

90-10 Block Split

• If the new index entry to insert is the maximum value, a 90-10 block split is performed
• Reduces wastage of space for indexes with monotonically increasing values
• Rather than leaving behind half-empty blocks, full index blocks are generated
• I prefer to call them 99-1 block splits as 90-10 is misleading

1. Test case preparation:- We will see how many blocks have been allocated.

SQL> create table test_ind(c1 char(1000));
Table created.
SQL> insert into test_ind values('A');
1 row created.
SQL> insert into test_ind values('B');
1 row created.
SQL> insert into test_ind values('C');
1 row created.
SQL> insert into test_ind values('D');
1 row created.
SQL> insert into test_ind values('E');
1 row created.
SQL> insert into test_ind values('F');
1 row created.
SQL> commit;
Commit complete.
SQL> create index i_test_ind on test_ind(c1);
Index created.
SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_IND');
SQL> select EXTENT_ID,FILE_ID,BLOCK_ID,BLOCKS from dba_extents where SEGMENT_NAME='I_TEST_IND';
EXTENT_ID    FILE_ID   BLOCK_ID     BLOCKS
---------- ---------- ---------- ----------
        0          6        608          8

SQL> select object_id from dba_objects where object_name='I_TEST_IND';
OBJECT_ID
----------
     91946
SQL> alter session set events 'immediate trace name treedump level 91946';
Session altered.
SQL> alter system checkpoint;
System altered.
SQL> ALTER SYSTEM DUMP DATAFILE 6 block 611;
System altered.
SQL> ALTER SYSTEM DUMP DATAFILE 6 block 612;
System altered.
SQL> ALTER SYSTEM DUMP DATAFILE 6 block 613;
System altered.
SQL> ALTER SYSTEM DUMP DATAFILE 6 block 614;
System altered.
SQL> ALTER SYSTEM DUMP DATAFILE 6 block 615;

A. The treedump now shows the index structure (a single leaf block at this point):-

----- begin tree dump

leaf: 0x1800263 25166435 (0: row:6.6 avs:1918)

----- end tree dump

Now we can see the BLOCK_ID starts at 608. With ASSM (segment space management auto), add 3 to the BLOCK_ID to find the root block, because the first blocks hold space-management metadata (the offset can vary): 608 + 3 = 611.

 

B. Dump of block 611 (the root block; since only one index block is allocated, it is also the single leaf block)

So the root block is 611. We need to dump the contents of block 611 for deeper analysis.

*** 2018-01-31 05:29:46.417
Start dump data blocks tsn: 4 file#:6 minblk 611 maxblk 611
Block dump from cache:
Dump of buffer cache at level 4 for pdb=1 tsn=4 rdba=25166435
7FCD89CE8810 00000000 2046E883 20202020 20202020  [……F         ]
7FCD89CE8820 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FCD89CE8BF0 20202020 20202020 20202020 01062020  [              ..]
7FCD89CE8C00 005D0280 83000005 202045E8 20202020  [..]……E      ]
7FCD89CE8C10 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FCD89CE8FF0 80010620 04005D02 E8830000 20202044  [ ….]……D   ]
7FCD89CE9000 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FCD89CE93E0 20202020 02800106 0003005D 43E88300  [    ….]……C]
7FCD89CE93F0 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FCD89CE97D0 20202020 06202020 5D028001 00000200  [       ….]….]
7FCD89CE97E0 2042E883 20202020 20202020 20202020  [..B             ]
7FCD89CE97F0 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FCD89CE9BC0 20202020 20202020 01062020 005D0280  [          ….].]
7FCD89CE9BD0 83000001 202041E8 20202020 20202020  […..A          ]
7FCD89CE9BE0 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FCD89CE9FB0 20202020 20202020 20202020 80010620  [             …]
7FCD89CE9FC0 00005D02 00000000 00000000 00000000  [.]…………..]
7FCD89CE9FD0 00000000 00000000 00000000 00000000  […………….]
       Repeat 1 times
7FCD89CE9FF0 00000000 00000000 00000000 B8A70601  […………….]
Object id on Block? Y
seg/obj: 0x1672a  csc: 0x00.32b8a5  itc: 2  flg: E  typ: 2 – INDEX
    brn: 0  bdba: 0x1800260 ver: 0x01 opc: 0
    inc: 0  exflg: 0
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0000.000.00000000  0x00000000.0000.00  —-    0  fsc 0x0000.00000000
0x02   0xffff.000.00000000  0x00000000.0000.00  C—    0  scn 0x0000.0032b8a5
Leaf block dump
===============
header address 140520757035108=0x7fcd89ce8064
kdxcolev 0
KDXCOLEV Flags = – – –
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=— is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 6
kdxcofbo 48=0x30
kdxcofeo 1966=0x7ae
kdxcoavs 1918
kdxlespl 0
kdxlende 0
kdxlenxt 0=0x0
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[7021] flag: ——-, lock: 0, len=1011
.
.
row#5[1966] flag: ——-, lock: 0, len=1011
—– end of leaf block Logical dump —–
—– end of leaf block dump —–
End dump data blocks tsn: 4 file#: 6 minblk 611 maxblk 611

 

2. Test Case preparation:- To test an index block split when a new entry cannot be accommodated in the old leaf.

SQL> insert into test_ind values('G');

1 row created.

SQL> insert into test_ind values('J');

1 row created.

SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_IND');

PL/SQL procedure successfully completed.

Please note that LEAF_BLOCKS has increased to 2 and CLUSTERING_FACTOR is 2.

SQL> select sn.name, ms.value
  2  from v$statname sn, v$mystat ms
  3  where sn.statistic#=ms.statistic#
  4  and sn.name like '%leaf node %';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
leaf node splits                                                          1
leaf node 90-10 splits                                                    1

Please note that this is a 90-10 split because the new index entry was added after the maximum existing key.

A. The treedump will now show the branch and leaf block entries:-

----- begin tree dump
branch: 0x1800263 25166435 (0: nrow: 2, level: 1)
  leaf: 0x1800266 25166438 (-1: row:7.7 avs:905)
  leaf: 0x1800267 25166439 (0: row:1.1 avs:6983)
----- end tree dump

B. Dump of Branch block 611

Dump of buffer cache at level 4 for pdb=1 tsn=4 rdba=25166435
 
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000b.018.0000023d  0x01401354.0064.01  -BU-    1  fsc 0x0000.0032e067

C. Dump of Leaf block 614

      Repeat 55 times
7FF65F363420 41E88300 20202020 20202020 20202020  […A            ]
7FF65F363430 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FF65F363800 20202020 20202020 06202020 5D028001  [           ….]]
7FF65F363810 00000000 2042E883 20202020 20202020  [……B         ]
7FF65F363820 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FF65F363BF0 20202020 20202020 20202020 01062020  [              ..]
7FF65F363C00 005D0280 83000001 202043E8 20202020  [..]……C      ]
7FF65F363C10 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FF65F363FF0 80010620 02005D02 E8830000 20202044  [ ….]……D   ]
7FF65F364000 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FF65F3643E0 20202020 02800106 0003005D 45E88300  [    ….]……E]
7FF65F3643F0 20202020 20202020 20202020 20202020  [                ]
       Repeat 61 times
7FF65F3647D0 20202020 06202020 5D028001 00000400  [       ….]….]
7FF65F3647E0 2046E883 20202020 20202020 20202020  [..F             ]
7FF65F3647F0 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FF65F364BC0 20202020 20202020 01062020 005D0280  [          ….].]
7FF65F364BD0 83000005 202047E8 20202020 20202020  […..G          ]
7FF65F364BE0 20202020 20202020 20202020 20202020  [                ]
       Repeat 60 times
7FF65F364FB0 20202020 20202020 20202020 80010620  [             …]
7FF65F364FC0 06005D02 00000000 00000000 00000000  [.]…………..]
7FF65F364FD0 00000000 00000000 00000000 00000000  […………….]
       Repeat 1 times
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000b.018.0000023d  0x01401356.0064.01  CB–    0  scn 0x0000.0032e067
0x02   0x0014.014.0000023d  0x01400467.0060.01  C—    0  scn 0x0000.0032e069

D. Dump of Leaf block 615

7FF65F364030 00000000 00000002 00001FE8 00000001  […………….]
7FF65F364040 00000000 00000002 00000003 00000000  […………….]
7FF65F364050 00000000 00000000 00000000 00000000  […………….]
       Repeat 183 times
7FF65F364BD0 83000000 20204AE8 20202020 20202020  […..J          ]
7FF65F364BE0 20202020 20202020 20202020 20202020  [                ]
Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000b.018.0000023d  0x01401356.0064.02  CB–    0  scn 0x0000.0032e067
0x02   0x0014.014.0000023d  0x01400467.0060.03  C—    0  scn 0x0000.0032e069

3. Test Case preparation:- To test an index block split when an update cannot be accommodated in the old leaf.

SQL> update TEST_IND set c1='H' where c1='G';

1 row updated.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_IND');

PL/SQL procedure successfully completed.

SQL> select sn.name, ms.value
  2  from v$statname sn, v$mystat ms
  3  where sn.statistic#=ms.statistic#
  4  and sn.name like '%leaf node %';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
leaf node splits                                                          2
leaf node 90-10 splits                                                    1

Here a 50-50 block split happens because the value 'G' is updated to 'H'. An index update is internally a delete followed by an insert, and since 'H' is not the maximum key, the new entry lands mid-range and triggers a 50-50 split.

A. Dump of Leaf block 615

7F698E4E47D0 00000000 00000000 00000000 00030000 […………….]
7F698E4E47E0 2047E883 20202020 20202020 20202020 [..G ]
7F698E4E47F0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F698E4E4BC0 20202020 20202020 01062020 006B0280 [ ….k.]
7F698E4E4BD0 83000006 202048E8 20202020 20202020 […..H ]
7F698E4E4BE0 20202020 20202020 20202020 20202020 [ ]

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d73.016f.01 CB– 0 scn 0x0000.003c23ae
0x02 0x0011.01d.000002b0 0x01400878.0065.18 C— 0 scn 0x0000.003c23b4
row#0[6010] flag: —DS–, lock: 0, len=1011
row#1[7021] flag: ——-, lock: 0, len=1011

B. Dump of Leaf block 616

Repeat 55 times
7F698E4E3420 41E88300 20202020 20202020 20202020 […A ]
7F698E4E3430 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F698E4E3800 20202020 20202020 06202020 6B028001 [ ….k]
7F698E4E3810 00000000 2041E883 20202020 20202020 [……A ]
7F698E4E3820 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F698E4E3BF0 20202020 20202020 20202020 01062020 [ ..]
7F698E4E3C00 006B0280 83000000 202042E8 20202020 [..k……B ]
7F698E4E3C10 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F698E4E3FF0 80010620 01006B02 E8830000 20202043 [ ….k……C ]
7F698E4E4000 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F698E4E43E0 20202020 02800106 0002006B 44E88300 [ ….k……D]
7F698E4E43F0 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F698E4E47D0 20202020 06202020 6B028001 00000300 [ ….k….]
7F698E4E47E0 2045E883 20202020 20202020 20202020 [..E ]
7F698E4E47F0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F698E4E4BC0 20202020 20202020 01062020 006B0280 [ ….k.]
7F698E4E4BD0 83000004 202046E8 20202020 20202020 […..F ]
7F698E4E4BE0 20202020 20202020 20202020 20202020 [ ]

 

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d72.016f.01 -BU- 1 fsc 0x0000.003c23ae
0x02 0x0011.01d.000002b0 0x01400878.0065.17 —- 0 fsc 0x0000.00000000
row#0[1966] flag: ——-, lock: 0, len=1011
row#1[2977] flag: ——-, lock: 0, len=1011
row#2[3988] flag: ——-, lock: 0, len=1011
row#3[4999] flag: ——-, lock: 0, len=1011
row#4[6010] flag: ——-, lock: 0, len=1011
row#5[7021] flag: ——-, lock: 0, len=1011

C. Dump of Leaf block 617

7F698E4E4BD0 83000000 20204AE8 20202020 20202020 […..J ]
7F698E4E4BE0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F698E4E4FB0 20202020 20202020 20202020 80010620 [ …]
Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d73.016f.02 –U- 1 fsc 0x0000.003c23ae
0x02 0x0009.00b.00000b36 0x0140a425.014d.07 C— 0 scn 0x0000.003bdb96
row#0[7021] flag: ——-, lock: 0, len=1011
kdxleprv 25166460=0x180027c

4. Test Case preparation:- I will re-insert the value 'G', which was previously updated to 'H'.

In that case we can see the old index space is reused. There is no new allocation of a leaf block.

SQL> insert into TEST_IND values('G');

1 row created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('C##TEST','TEST_IND');

PL/SQL procedure successfully completed.

 

A. Dump of Leaf block 615

7FF4354A43E0 00000000 00000000 00000000 47E88302 [……………G]
7FF4354A43F0 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7FF4354A47D0 20202020 06202020 6E028001 00030000 [ ….n….]
7FF4354A47E0 2047E883 20202020 20202020 20202020 [..G ]
7FF4354A47F0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7FF4354A4BC0 20202020 20202020 01062020 006B0280 [ ….k.]
7FF4354A4BD0 83000006 202048E8 20202020 20202020 […..H

 

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d73.016f.01 CB– 0 scn 0x0000.003c23ae
0x02 0x0007.015.00000fb7 0x0142ef8c.011b.15 –U- 1 fsc 0x0000.003c4def

row#0[6010] flag: —DS–, lock: 0, len=1011

row#1[7021] flag: ——-, lock: 0, len=1011

row#2[7021] flag: ——-, lock: 0, len=1011

B. Dump of Leaf block 616

Repeat 55 times
7F0E2A993420 41E88300 20202020 20202020 20202020 […A ]
7F0E2A993430 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F0E2A993800 20202020 20202020 06202020 6B028001 [ ….k]
7F0E2A993810 00000000 2041E883 20202020 20202020 [……A ]
7F0E2A993820 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F0E2A993BF0 20202020 20202020 20202020 01062020 [ ..]
7F0E2A993C00 006B0280 83000000 202042E8 20202020 [..k……B ]
7F0E2A993C10 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F0E2A993FF0 80010620 01006B02 E8830000 20202043 [ ….k……C ]
7F0E2A994000 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F0E2A9943E0 20202020 02800106 0002006B 44E88300 [ ….k……D]
7F0E2A9943F0 20202020 20202020 20202020 20202020 [ ]
Repeat 61 times
7F0E2A9947D0 20202020 06202020 6B028001 00000300 [ ….k….]
7F0E2A9947E0 2045E883 20202020 20202020 20202020 [..E ]
7F0E2A9947F0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times
7F0E2A994BC0 20202020 20202020 01062020 006B0280 [ ….k.]
7F0E2A994BD0 83000004 202046E8 20202020 20202020 […..F ]
7F0E2A994BE0 20202020 20202020 20202020 20202020 [ ]
Repeat 60 times

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d72.016f.01 -BU- 1 fsc 0x0000.003c23ae
0x02 0x0011.01d.000002b0 0x01400878.0065.17 —- 0 fsc 0x0000.00000000

C. Dump of Leaf block 617

Repeat 435 times
7F0E2A994BD0 83000000 20204AE8 20202020 20202020 […..J ]
7F0E2A994BE0 20202020 20202020 20202020 20202020 [ ]

Itl Xid Uba Flag Lck Scn/Fsc
0x01 0x0003.017.00000b9d 0x01400d73.016f.02 –U- 1 fsc 0x0000.003c23ae
0x02 0x0009.00b.00000b36 0x0140a425.014d.07 C— 0 scn 0x0000.003bdb96

5. Test Case preparation:- Now we will test what happens to CLUSTERING_FACTOR and USED_SPACE.

CLUSTERING_FACTOR will remain the same.

SQL> analyze index I_TEST_IND validate structure;

Index analyzed.


Undo Advisor helps to estimate the undo tablespace size and the undo retention needed to avoid ORA-01555.

Prepared by: Nurullah Sharif

Scope: Undo Advisor

 

Undo Advisor helps to estimate the undo tablespace size and also advises on undo retention.

SQL> @db
NAME      OPEN_MODE
--------- --------------------
COLLPROD  READ WRITE

 

Undo retention is 900 seconds, which is 15 minutes.

SQL> sho parameter undo
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTBS2

Undo tablespace Size:

SQL> @tblspc
TABLESPACE_NAME                ALLOCATED_MB    USED_MB FREE_SPACE_MB PERCENT_USED
------------------------------ ------------ ---------- ------------- ------------
SYSAUX                                  300    -88.625       388.625       -29.54
SYSTEM                                  300    -55.625       355.625       -18.54
DATA                                  30720        109         30611          .35
UNDOTBS2                                125       5.75        119.25          4.6
UNDOTBS1                                140    18.8125      121.1875        13.44
USERS                                     5     1.3125        3.6875        26.25
SYSTEM                                  870    514.375       355.625        59.12
SYSAUX                                 1350    961.375       388.625        71.21

8 rows selected.

 

Note: To make the undo tablespace fixed-size, Oracle suggests that you first allow enough time after database creation to run a full workload, thus allowing the undo tablespace to grow to its minimum required size to handle the workload. Then, you can use the Undo Advisor to determine, if desired, how much larger to set the size of the undo tablespace to allow for long-running queries and Oracle Flashback operations.

Historical information in memory:

SQL> set serveroutput on
SQL> DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
  dbms_output.put_line('=================================================================');
  dbms_output.put_line('The Minimum size of the undo tablespace required is : '||utbsiz_in_MB||' MB');
  dbms_output.put_line('=================================================================');
end;
/
==============================================
The Minimum size of the undo tablespace required is : 72 MB
==============================================

PL/SQL procedure successfully completed.

Function undo_info is used to get information about the undo tablespace of the current instance.

It returns the undo tablespace name, its maximum possible size, whether it is auto-extensible, the current undo retention value, and whether the undo tablespace has guaranteed undo retention.

SQL> set serveroutput on
SQL> DECLARE
  tbs_name VARCHAR2(30);
  tbs_size NUMBER(10);
  tbs_autoextend BOOLEAN;
  tbs_retention NUMBER(5);
  tbs_guarantee BOOLEAN;
  undo_adv BOOLEAN;
BEGIN
  dbms_output.put_line('=====================================================================');
  undo_adv := dbms_undo_adv.undo_info(tbs_name, tbs_size, tbs_autoextend, tbs_retention, tbs_guarantee);
  If undo_adv=TRUE then
    dbms_output.put_line('UNDO Tablespace Name : ' || tbs_name);
    dbms_output.put_line('UNDO tablespace is '|| CASE WHEN tbs_autoextend THEN 'Auto Extensible' ELSE 'Fixed Size' END);
    If tbs_autoextend=TRUE then dbms_output.put_line('UNDO Tablespace Maximum size (MB) is : ' || TO_CHAR(tbs_size));
    else dbms_output.put_line('UNDO Tablespace Fixed size (MB) is : ' || TO_CHAR(tbs_size));
    end if;
    dbms_output.put_line('Undo Retention is ' || TO_CHAR(tbs_retention)||' Seconds' ||' Equivalent to ' ||round((tbs_retention/60),2) ||' Minutes');
    dbms_output.put_line('Retention : '||CASE WHEN tbs_guarantee THEN 'Guaranteed' ELSE 'Not Guaranteed' END);
  else dbms_output.put_line('Function undo_info can only run if parameter undo_management is AUTO');
  end if;
  dbms_output.put_line('=====================================================================');
END;
/

=====================================================================
UNDO Tablespace Name : UNDOTBS2
UNDO tablespace is Auto Extensible
UNDO Tablespace Maximum size (MB) is : 32768
Undo Retention is 900 Seconds Equivalent to 15 Minutes
Retention : Not Guaranteed
=====================================================================

PL/SQL procedure successfully completed.

 

Function longest_query returns the duration of the longest-running query for a given period.

SQL> SELECT 'The Length of the Longest Query in Memory is ' || dbms_undo_adv.longest_query LONGEST_QUERY FROM dual;
LONGEST_QUERY
--------------------------------------------------------------------------------
The Length of the Longest Query in Memory is 1472

 

The Output using Start/End time :

SQL> SELECT 'The Length of the Longest Query During This Time Range is ' ||dbms_undo_adv.longest_query(SYSDATE-1/24, SYSDATE) LONGEST_QUERY FROM dual;

LONGEST_QUERY

The Length of the Longest Query During This Time Range is 1080

Function required_retention returns the value of the undo_retention parameter required to satisfy the longest query, based on the undo statistics available.

SQL> SELECT 'The Required undo_retention using Statistics In Memory is ' || dbms_undo_adv.required_retention required_retention FROM dual;

REQUIRED_RETENTION

--------------------------------------------------------------------------------

The Required undo_retention using Statistics In Memory is 1472

The current retention is 900 seconds, while the required retention is 1472 seconds.

 

SQL> SELECT 'The Required undo_retention During This Time Range is ' ||dbms_undo_adv.required_retention(SYSDATE-1/24, SYSDATE) required_retention FROM dual;

REQUIRED_RETENTION

--------------------------------------------------------------------------------

The Required undo_retention During This Time Range is 1080

 

SQL> SELECT 'The Required undo tablespace size using Statistics In Memory is ' || dbms_undo_adv.required_undo_size(900) || ' MB' required_undo_size FROM dual;

REQUIRED_UNDO_SIZE

--------------------------------------------------------------------------------

The Required undo tablespace size using Statistics In Memory is 80 MB

 

Function undo_health is used to check whether there is any problem with the current undo_retention setting and undo tablespace size, based on the historical information of a given period, and to provide a recommendation to fix the problem.

If the return value is 0, no problem is found. Otherwise, the output parameters "problem" and "recommendation" describe the problem found and the recommendation for fixing it.

 

The Output Parameters are :

 

problem: the problem found on the system. It can be, for example: "long running query may fail" or "undo tablespace cannot satisfy undo_retention".

recommendation: recommendation on fixing the problem found.

rationale: rationale for the recommendation.

retention: numerical value of retention if recommendation is to change retention.

utbsize: numerical value of the undo tablespace size in MB, if the recommendation is to change the undo tablespace size.

 

DECLARE
 prob VARCHAR2(100);
 reco VARCHAR2(100);
 rtnl VARCHAR2(100);
 retn PLS_INTEGER;
 utbs PLS_INTEGER;
 retv PLS_INTEGER;
 BEGIN
 retv := dbms_undo_adv.undo_health(prob, reco, rtnl, retn, utbs);
 dbms_output.put_line('=====================================================================');
 If retv=0 Then dbms_output.put_line('Problem: ' || prob || ' The undo tablespace is OK');
 ELSIF retv=2 Then dbms_output.put_line('Long running queries may fail , The recommendation is :' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('retention: ' || TO_CHAR(retn));
 ELSIF retv=3 Then
 dbms_output.put_line('The Undo tablespace cannot satisfy the longest query ,The recommendation is : ' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 dbms_output.put_line('retention: ' || TO_CHAR(retn));
 ELSIF retv=4 Then
 dbms_output.put_line('The System does not have an online undo tablespace , The recommendation is : ' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 ELSIF retv=1 Then
 dbms_output.put_line('The Undo tablespace cannot satisfy the specified undo_retention or The Undo tablespace cannot satisfy auto tuning undo retention , The recommendation is :' || reco);
 dbms_output.put_line('rationale: ' || rtnl);
 dbms_output.put_line('undo tablespace size in MB : ' || TO_CHAR(utbs));
 end if;
 dbms_output.put_line('=====================================================================');
 END;
 /

PL/SQL procedure successfully completed.

Needed undo tablespace for specified undo retention:-

SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
 (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
 g.undo_block_per_sec) / (1024*1024)
 "NEEDED UNDO SIZE [MByte]"
 FROM (
 SELECT SUM(a.bytes) undo_size
 FROM v$datafile a,
 v$tablespace b,
 dba_tablespaces c
 WHERE c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#
 ) d,
 v$parameter e,
 v$parameter f,
 (
 SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
 undo_block_per_sec
 FROM v$undostat
 ) g
 WHERE e.name = 'undo_retention'
 AND f.name = 'db_block_size'
 /

How to check and implement best practice for Oracle Database 11g onwards

Author: Saibal Ghosh

About author:

I am an Oracle professional with twenty plus years of experience and have deep knowledge and understanding of various facets of Oracle technology including basic Oracle Database Administration, Performance Tuning, Real Application Clusters, Data Guard and Maximum Availability Architecture. I also work a lot in the Oracle Database Cloud space, and I believe that a lot of the future database work will be on the Cloud.

1   Scope and Purpose of the 360 Degree Audit

There are two purposes of this audit:

The primary purpose of the audit is to take a step back and look into the database for anything that is grossly misconfigured, for parameters that have not been optimally set, for anything that needs to be fixed immediately on priority, and for any item that needs attention in the short term, so that at the end of it we have the reassurance that the database has been studied diligently for any obvious shortcomings.

The secondary purpose of the audit is to ensure that we make an attempt at following the documented best practices to the extent practicable as well as try to ensure that we are able to leverage the functionalities of Oracle 11g extensively.

This audit does not claim to cover everything inside the database, but attempts to establish a general health and well-being check of the database, as well as point out any problem areas or potential problem areas.

2 Introduction

The ORCLPRD database is a two node RAC Cluster with the combined processing power of the two servers providing greater throughput and Oracle RAC scalability than is available from a single server. The same RAC setup is there in KOLKATA and DELHI.

The ORCLPRD database cluster comprises two interconnected computers or servers that appear as if they are one server to end users and applications. The Oracle RAC option with Oracle Database enables us to cluster Oracle databases. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.

Basically, the ORCLPRD Oracle Clusterware is a portable cluster management solution that is integrated with the Oracle Database. Oracle Clusterware is a required component for using Oracle RAC and provides the infrastructure necessary to run Oracle RAC. Oracle Clusterware also manages resources, such as Virtual Internet Protocol (VIP) addresses, databases, listeners, services, and so on. In addition, Oracle Clusterware enables both non-clustered Oracle databases and Oracle RAC databases to use the Oracle high-availability infrastructure. Oracle Clusterware, together with Oracle Automatic Storage Management (Oracle ASM), comprises the Oracle Grid Infrastructure, which enables us to achieve high availability of the Oracle database. We have extensively used these features to set up the ORCLPRD database.

The Oracle Database with the Oracle Real Application Clusters (RAC) option allows running multiple database instances on different servers in the cluster against a shared set of data files, also known as the database. The database spans multiple hardware systems and yet appears as a single unified database to the application. This enables the utilization of commodity hardware to reduce total cost of ownership and to provide a scalable computing environment that supports various application workloads. Oracle RAC is Oracle’s premier shared disk database clustering technology.

The basic database version and configuration is set out below:

 

Configuration           Value
Name of the Database    ORCLPRD
Name of the Instances   ORCLPRD1, ORCLPRD2
Environment Setting     Set via . oraenv to ORCLPRD1 or ORCLPRD2
ORACLE_BASE             /orasw/app/oracle
ORACLE_HOME             /orasw/app/oracle/product/11.2.0/db_1
GRID BASE               /orasw/app/grid
GRID HOME               /orasw/app/11.2.0/grid

 

The Operating System version is as follows:

The TPC Real Application Cluster setup is a two node RAC cluster, as shown in the screenshot below:

3 CLUSTERWARE CONFIGURATION

Oracle Clusterware includes a high availability framework that provides an infrastructure to manage any application. Oracle Clusterware ensures that the applications it manages start when the system starts and monitors the applications to ensure that they are always available. If a process fails then Oracle Clusterware attempts to restart the process using agent programs (agents). Oracle clusterware provides built-in agents so that we can use shell or batch scripts to protect and manage an application. Oracle Clusterware also provides preconfigured agents for some applications (for example for Oracle TimesTen In-Memory Database).

If a node in the cluster fails, then we can program processes that normally run on the failed node to restart on another node. The monitoring frequency, starting, and stopping of the applications and the application dependencies are configurable.

We also notice that the Cluster Time Synchronization daemon is running in OBSERVER mode, which implies that we are using NTP to synchronize time amongst the nodes in the RAC cluster.

See the screenshot below:

We have followed documented clusterware best practices, namely:

  • We are using DNS to enable load balancing. Load balancing is enabled both at the client end as well as the server end.
  • We have installed the Oracle software on a local home.
  • Oracle Clusterware and Oracle ASM have been both installed in one home on a non-shared file system called the Grid Infrastructure home.
  • In general, SCAN name is used to resolve IP addresses to take advantage of client and server side load balancing.
  • The ability to migrate client connections to and from the nodes on which we are working is a critical aspect of planned maintenance. Migrating client connections should always be the first step in any planned maintenance activity requiring software shutdown (for example, when performing a rolling upgrade). The potential for problems increases if there are still active database connections when the service switchover commences. To enhance robustness and performance, it would be good to configure all of the following best practices:

 

  • Client is configured to receive FAN notifications and is properly configured for run time connection load balancing and Fast Connection Failover.
  • Oracle Clusterware stops services on the instance to be brought down or relocates services to an alternate instance.
  • Oracle Clusterware returns a Service-Member-Down event.
  • Client that is configured to receive FAN notifications receives a notification for a Service-Member-Down event and moves connections to other instances offering the service.
  • Mirror the Oracle Cluster Registry (OCR) and configure multiple voting disks with Oracle ASM (done in the case of the TPC databases).
  • It is strongly recommended that the services created are configured into the clusterware, which enhances their high availability.

4  Current Database Memory Configuration on the System

Following is the main memory configuration for the ORCLPRD database. The exact same configuration exists for both the KOLKATA and DELHI nodes. All memory configurations have been done assuming a high load on the database. Automatic Memory Management is enabled on the database: we have set a target memory size (MEMORY_TARGET) and a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The total memory that the instance uses remains relatively constant, based on the value of MEMORY_TARGET, and the instance automatically distributes memory between the system global area (SGA) and the instance program global area (instance PGA). As memory requirements change, the instance dynamically redistributes memory between the SGA and instance PGA.

Parameter              Size    Remarks
memory_max_target      50 GB   The maximum memory that will ever be available to the instance.
memory_target          50 GB   The memory that is currently accessible to the instance.
SGA_TARGET             0       Not set, because we are using AMM.
PGA_AGGREGATE_LIMIT    0       Need not be set, because we are using AMM.
Database Buffer Cache  22 GB   Current size of the buffer cache.
RESULT_CACHE_MAX_SIZE  128 MB  Kept pretty small, as in general we do not plan to use the Result Cache in our setup.


4.1    System Global Area (SGA) Configuration:

A system global area (SGA) is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, then the data in the instance’s SGA is shared among the users. Consequently, the SGA is sometimes called the shared global area.

An SGA and Oracle processes constitute an Oracle instance. Oracle automatically allocates memory for an SGA when you start an instance, and the operating system reclaims the memory when you shut down the instance. Each instance has its own SGA.

The SGA is read/write. All users connected to a multiple-process database instance can read information contained within the instance’s SGA, and several processes write to the SGA during execution of Oracle.

The SGA contains the following data structures:

  • Database buffer cache
  • Redo log buffer
  • Shared pool
  • Java pool
  • Large pool (optional)
  • Streams pool
  • Data dictionary cache
  • Other miscellaneous information

Part of the SGA contains general information about the state of the database and the instance, which the background processes need to access; this is called the fixed SGA. No user data is stored here. The SGA also includes information communicated between processes, such as locking information.

Let's check the total memory configuration along with the SGA. The total memory allocated for the system is 50 GB, but we have not set any value for SGA_TARGET, which implies that we are letting Oracle determine the size of the SGA according to its needs. This is technically called Automatic Memory Management, or AMM:

Note: If we are using AMM or ASMM, then it is imperative that /dev/shm is set to a size at least equal to the MEMORY_TARGET parameter. In our case this requirement has been fulfilled: while /dev/shm is set to 127G, MEMORY_TARGET is capped at 50G.

Recommendation: Where the SGA is greater than 8G, as is the case here, the recommendation is to use HugePages. Ref:

https://docs.oracle.com/cd/E37670_01/E37355/html/ol_about_hugepages.html

This is something that needs to be seriously considered, as configuring HugePages gives a definite increase in performance. Without HugePages, the operating system keeps each 4 KB of memory as a page. When it allocates pages to the database System Global Area (SGA), the operating system kernel must continually update its page table with the page lifecycle (dirty, free, mapped to a process, and so on) for each 4 KB page allocated to the SGA.

With HugePages, the operating system page table (virtual memory to physical memory mapping) is smaller, because each page table entry is pointing to pages from 2 MB to 256 MB.

Also, the kernel has fewer pages whose lifecycle must be monitored. For example, if you use HugePages with 64-bit hardware and you want to map 256 MB of memory, you may need only one page table entry (PTE). If you do not use HugePages, then mapping 256 MB of memory requires 256 MB / 4 KB = 65,536 PTEs.

HugePages provides the following advantages:

  • Increased performance through increased Translation Lookaside Buffer (TLB) hits
  • Pages are locked in memory and never swapped out, which provides RAM for shared memory structures such as SGA
  • Contiguous pages are preallocated and cannot be used for anything else but for System V shared memory (for example, SGA)
  • Less bookkeeping work for the kernel for that part of virtual memory because of larger page sizes

Note: To set up HugePages we need to disable AMM (Automatic Memory Management). This constraint is not there if we are using ASMM (Automatic Shared Memory Management), which is automatically enabled if we are setting the size of SGA to any value greater than 0.

4.2   PGA Configuration

As discussed earlier in the report, since Automatic Memory Management is enabled on the system, the parameter PGA_AGGREGATE_TARGET is also set to 0.

Let us check the PGA performance:

The PGA cache hit percentage is 96%, which is pretty decent at the current workload; there is no PGA-related problem on the database.
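A quick way to verify this figure, a minimal sketch against v$pgastat:

SQL> select value
  from v$pgastat
 where name = 'cache hit percentage';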

Recommendation: There are no additional recommendations regarding the PGA configuration, as it has been optimally set up.

 

5 ASM MEMORY CONFIGURATION

5.1       Current setup

  • The minimum MEMORY_TARGET for Oracle ASM is 1 GB. If we set MEMORY_TARGET lower, then Oracle increases the value for MEMORY_TARGET to 1 GB automatically. In this case we are fine. However, there is a known bug: Bug 19434529 - ORA-4031 in ASM shared pool due to leaked "kffil" state objects (Doc ID 19434529.8)
  • The fix is to apply Patch: 19769480

Recommendation: There are no additional recommendations with regard to ASM memory.

5.2   STORAGE CONFIGURATION

Background: When Automatic Storage Management was introduced in Oracle 10gR1, it was simply marketed as the volume manager for the Oracle database. ASM was designed as a purpose-built, host-based volume manager and filesystem integrated with the Oracle database. It is built on the Stripe And Mirror Everything (SAME) methodology, which stripes and mirrors data across as many disks as possible and provides the ability to rebalance the file layout online whenever the physical storage configuration changes.

From the user’s perspective, ASM exposes a small number of disk groups. These disk groups consist of ASM disks and files that are striped across all the disks in a disk group. The disk groups are global in nature and database instances running individually or in clusters have a shared access to the disk groups and the files within them. The ASM instances communicate amongst themselves and form the ASM cluster.

5.2.1   Current setup:

Let us see the current disk group setup and the free space available:

Disk Group Name  Total Size (GB)  Currently Available Size (GB)
ORCL_DATA        600              193
ORCL_FRA         400              353
ORCL_REDO        50               42
OCR_VOTE         3                2

So we have 4 disk groups with ample free space available. Now, let us look into the size of the disks that make up the diskgroup. The test is to confirm whether we have equi-sized disks in a disk group. As we can see from the screenshot below, all the disks in a particular disk group are equi-sized. This is important both from a stability as well as performance perspective.
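The screenshot itself is not reproduced here; a minimal sketch of the kind of query behind such a check, comparing the smallest and largest disk in each group:

SQL> select g.name diskgroup, count(*) disks,
       min(d.total_mb) min_mb, max(d.total_mb) max_mb
  from v$asm_diskgroup g, v$asm_disk d
 where d.group_number = g.group_number
 group by g.name;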

Next we test whether the COMPATIBLE.ASM value is set to 11.1 or higher for the disk group to display in the V$ASM_ATTRIBUTE view output. Attributes that are designated as read-only (Y) can only be set during disk group creation.

Similarly, we check the software compatibility for each disk group. These are also properly configured:
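A minimal sketch of the underlying check (V$ASM_ATTRIBUTE is only populated for disk groups whose COMPATIBLE.ASM is 11.1 or higher, as noted above):

SQL> select g.name diskgroup, a.name attribute, a.value
  from v$asm_diskgroup g, v$asm_attribute a
 where a.group_number = g.group_number
   and a.name like 'compatible%';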

We check the ASM_POWER_LIMIT parameter. This parameter gives us an idea of how fast a rebalance operation can take place after performing (planned) maintenance. Until Oracle 11.2.0.1 the default value is 1 and the maximum value is 11, but from Oracle 11.2.0.2 the maximum value is 1024. We see from the following screenshot that we are using the default rebalancing value.
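Should a faster rebalance be desired (see the recommendation in section 5.4), a hedged sketch of the two ways to raise the power, either instance-wide on the ASM instance or per rebalance operation:

SQL> alter system set asm_power_limit=200 scope=both sid='*';
SQL> alter diskgroup ORCL_DATA rebalance power 200;

Power values above 11 require COMPATIBLE.ASM to be 11.2.0.2 or higher.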

5.3   Best Practices & Recommendations:

 

The number of LUNs (Oracle ASM disks) for each disk group should be at least equal to four times the number of active I/O paths**. For example, if a disk group has two active I/O paths, then a minimum of eight LUNs should be used. The LUNs should be of equal size and performance for each disk group. See "Oracle ASM and Multi-Pathing Technologies" in My Oracle Support Note 294869.1; this needs to be checked with the Storage Administrator.

 

**An I/O path is a distinct channel or connection between storage presenting LUNs and the server. An active I/O path is an I/O path in which the I/O load on a LUN is multiplexed through multipathing software.

It is strongly recommended that all Oracle ASM disks in a disk group have similar storage performance and availability characteristics. In storage configurations with mixed-speed drives, such as flash memory and hard disk drives (HDD), I/O performance is constrained by the slowest-speed drive. See: https://docs.oracle.com/cd/E11882_01/server.112/e18951/asmprepare.htm#OSTMG02550

5.4   Storage Recommendations:

  1. Consider setting the ASM_POWER_LIMIT parameter to around 200 to speed up rebalancing operations in case of a planned maintenance.
  2. Minimize I/O contention between Oracle ASM disks and other applications by dedicating disks in Oracle ASM disk groups.

6  TABLESPACE CONFIGURATION

6.1   CURRENT SETUP

First, let us check the tablespaces, their associated datafiles, and the size of such files:

It is noticed that while most of the tablespaces are autoextensible, several of them are not. Additionally, some of the files added to the tablespaces are randomly sized. This shows that the tablespaces are NOT being optimally managed.

Let us see the free space available in the tablespaces. Space is being shown in GB:

Space is pretty much available in all the tablespaces, and since most of the tablespaces are autoextensible, day-to-day space management is reasonably taken care of on this system.

 

6.2    Best Practice & Recommendations:

There are a couple of recommendations here:

  1. All tablespaces may be made autoextensible. In such cases the underlying files grow (or are configured at the outset) to ~32 GB before a new file needs to be added to the tablespace. We need to have normal monitoring and threshold alarms configured for both tablespace and storage monitoring; see the sketch after this list.
  2. Or, tablespaces can be made non-extensible and have datafiles of uniform size (maybe 20G) added. Monitoring of the storage and tablespace has to be done with the same rigor as before.
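A hedged sketch of option 1 for a single datafile (the file name here is illustrative, not an actual ORCLPRD file):

SQL> alter database datafile '+ORCL_DATA/ORCLPRD/DATAFILE/users01.dbf'
     autoextend on next 1g maxsize 32767m;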

7    CONTROLFILES AND REDOLOG FILES

7.1    CURRENT SETUP

Controlfiles are multiplexed and kept at separate locations, but both inside ASM: one copy is in the ORCL_FRA diskgroup, while the other copy is in the ORCL_REDO diskgroup.

There are 4 redo log groups, each having 2 members, as can be seen from the screenshot below:

And the size of each logfile is 2048 MB, as can be seen below:

Let us take a sample of the number of log switches made in an hour:

At this point we can see the number of switches is just 3 per hour, which is fine.
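A minimal sketch of how such a sample can be taken from the log history:

SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
  from v$log_history
 group by to_char(first_time,'YYYY-MM-DD HH24')
 order by 1;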

 

7.2     Best Practices & Recommendations:

 

Both controlfile copies are multiplexed within ASM. It can be considered to keep one copy in ASM and the other on the Unix filesystem, or to keep the same configuration but add an additional multiplexed copy of the controlfile on the Unix filesystem.

Redo logs have been configured appropriately, and a sufficient number of groups has been created to ensure that we should never encounter the dreaded error:

"Checkpoint not complete, cannot allocate new log"

 

8   NETWORKING AND SCAN CONFIGURATION

8.1   Current Setup

Below is the Network and SCAN configuration for the KOLKATA site

KOLKATA

 

SCAN name: epedbp1-scan, Network: 1

 

Type of IP        IP Address    Remarks
SCAN IP 1         10.XX.26.102  Enabled
SCAN IP 2         10.XX.26.103  Enabled
SCAN IP 3         10.XX.26.104  Enabled
VIP               10.XX.26.101  Enabled
VIP               10.XX.26.107  Enabled
Client IP-Node 1  10.XX.5.187   Enabled
Client IP-Node 2  10.XX.5.155   Enabled
Private IP        192.168.5.1   Enabled
Private IP        192.168.5.2   Enabled

 

Let us check whether the round robin algorithm is working while resolving the SCAN name. The round robin algorithm implies that the SCAN name resolves to a different IP address each time an attempt is made to connect to the database. This is load balancing at the connection level.

Additionally, we will run a check on how the SCAN listeners are placed on the nodes. Since there are three SCAN listeners, typically two of them ought to be placed on one node, while the other runs on the remaining node (in the case of a two node RAC configuration). In an ideal situation there should not be any deviation from a configuration of this sort.

Node 1

Node 2

Finally, let’s have a look at the VIPs

Thus, network and SCAN have been properly set up on the ORCLPRD database.

 

8.2  Best Practices & Recommendations:

The DNS server which resolves the SCAN name should be properly secured in the sense that only authorized personnel should have access to the server. Additionally, there should be a backup of the DNS server, in case there is a requirement for maintenance on the main DNS server, so that there is no interruption of service.

As a best practice the round robin algorithm ought to be enabled on the DNS server, so that connect time load balancing is automatically occurring and the connections are equitably distributed amongst the servers.

Finally, SCAN should be leveraged by using the SCAN name to connect instead of using physical IPs or VIPs. It was noticed that several connections were actually made using VIPs and other IPs; this should be strongly discouraged.

NO CHANGES ON THE NETWORK SHOULD BE MADE WITHOUT TAKING THE CONCERNED DBAs INTO CONFIDENCE, AS THE NETWORK IN A REAL APPLICATION CLUSTERS CONFIGURATION IS A MAJOR COMPONENT OF A PROPERLY FUNCTIONING CLUSTERED DATABASE SOLUTION.

9   BASIC HOUSEKEEPING

 

9.1     Alertlogs:

There seems to be no log rotation done for the alert logs. The alert log has grown pretty big, as can be seen in the screenshot below:

 

9.2     Listener Logs:

The listener logs are in .xml format. Being in XML, many tools can read the files unambiguously since the data is enclosed within meaningful tags. Additionally, the listener log files (the XML format) are now rotated: after reaching a certain threshold the file is renamed to log_1.xml and a new log.xml is created, somewhat akin to the archived log concept for redo log files. So we are relieved of the task of deleting old listener logs lest they fill up the system. See the screenshot below:

 

 

However, Oracle still continues to write a version of the listener logs in the old format. The old-format log is still called listener.log, but the directory it is created in is different: $ADR_BASE/tnslsnr/Hostname/listener/trace. Unfortunately there is no archiving scheme for this file, so it simply keeps growing and has to be manually trimmed.
Similarly, as part of the housekeeping, the tracefiles (*.trc) generated on the system need to be trimmed from time to time. There is no automatic mechanism to delete these files on the system.

There is a cron job running which deletes old Listener logs, but

9.3    Backups:

The basic backup configuration scheduled through the Oracle Job Scheduler is as follows:

 

  • A Level 0 Backup is taken to ASM every day (02:05 hours).
  • A Level 1 Backup is taken to ASM every Monday, Tuesday, Wednesday, Friday and Saturday (00:05 hours).

The following has been set up in cron

  • A Full Backup to NBU is done every day (00:05 hours).
  • An archivelog deletion script runs every 4 hours compressing the archives and then deleting the input.

The RMAN configuration is set to keep one backup in addition to the current backup, as can be seen from the screenshot below:
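Since the screenshot is not reproduced here, a hedged sketch of the equivalent RMAN commands (REDUNDANCY 2 keeps the current backup plus one older copy):

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> SHOW RETENTION POLICY;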

9.4       Partitioning scheme:

The table MESSAGELOG has interval partitioning enabled on it (creating a new partition everyday) and an auto-dropping of older partitions coded on it.

This allows for optimization of space usage and ease of administration, as long as we keep a small number of tables under this scheme. Currently we are keeping about three days' worth of data. We may sometimes see four partitions; this happens when the new partition has already been created but the older partition's scheduled drop time has not yet arrived.
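A minimal sketch of daily interval partitioning (the table and columns are illustrative, not the actual MESSAGELOG definition; the automatic dropping of old partitions would be handled by a separate scheduler job):

SQL> create table messagelog_demo (
       log_time date not null,
       msg      varchar2(4000)
     )
     partition by range (log_time)
     interval (numtodsinterval(1,'DAY'))
     (partition p0 values less than (to_date('2017-01-01','YYYY-MM-DD')));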

9.5   Filesystem:

The Unix filesystem is shown below:

 

/orasw needs to be cleaned up as the available space on that directory is only about 17G and the directory is 78% used already. There seems to be no automatic deletion script in place and this could be a major issue in the scheme of things.

9.6    Best Practices & Recommendations:

Alert logs, tracefiles, and listener logs all need to be deleted from time to time. Using the nifty Unix utility logrotate is a very good way of taking care of unwanted runaway growth of these files.

In the ORCLPRD database, alert logs, tracefiles, and listener logs are NOT being managed that way. This needs to be taken care of. There is also an observation regarding backups:

  1. All backups ought to be set up using the Oracle Job Scheduler because the scheduler operates at the database level, and the backup will still run if there is a problem on any of the nodes in the cluster as long as one node is up and running. This is not true if the backup is scheduled in the cron and the node on which the job is scheduled in the cron happens to be down.
  2. There is an RMAN level 0 backup being taken every day, as well as an RMAN level 1 backup taken five days a week. This is a suboptimal configuration and a waste of computing resources (space, memory and CPU), since a daily level 0 makes the incremental level 1 backups largely redundant.

IT IS STRONGLY SUGGESTED THAT RESTORE AND RECOVERY ARE BOTH TESTED OUT, BOTH FROM THE ASM LEVEL AS WELL AS FROM THE NET BACKUPS.

 

 

10   IMPORTANT PARAMETER SETTINGS:

Below is a set of generic parameter settings which are considered the most important ones from a RAC database optimization and performance management perspective. This list of parameters is by no means exhaustive, and it is suggested that one refer to the Oracle documentation (specifically the Performance Tuning Guide and the Administrator's Guide) for further information on parameter settings. The parameter settings need to be tested on a lab setup before being implemented in the production setup.

  

  • Optimize Instance Recovery Time

Instance recovery is the process of recovering the redo thread from the failed instance. Instance recovery is different from crash recovery, which occurs when all instances accessing a database have failed. Crash recovery is the only type of recovery when an instance fails using a single-instance Oracle Database.

When using Oracle RAC, the SMON process in one surviving instance performs instance recovery of the failed instance.

In both Oracle RAC and single-instance environments, checkpointing is the internal mechanism used to bound the Mean Time To Recover (MTTR). Checkpointing is the process of writing dirty buffers from the buffer cache to disk. With more aggressive checkpointing, less redo is required for recovery after a failure. The MTTR on the ORCLPRD database is not set. Enable the MTTR Advisory by setting FAST_START_MTTR_TARGET to a value greater than zero.

The recommendation would be to set it to 300.
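A minimal sketch of the change, per the recommendation above:

SQL> alter system set fast_start_mttr_target=300 scope=both sid='*';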

Please refer to http://www.oracle.com/goto/maa for best practices.

 

  • Maximize the Number of Processes Performing Transaction Recovery

The FAST_START_PARALLEL_ROLLBACK parameter determines how many processes are used for transaction recovery, which is done after redo application. Optimizing transaction recovery is important to ensure an efficient workload after an unplanned failure. If the system is not CPU bound, setting this parameter to HIGH is a best practice. This causes Oracle to use four times the CPU_COUNT (4 X CPU_COUNT) parallel processes for transaction recovery. The default setting for this parameter is LOW, or two times the CPU_COUNT (2 X CPU_COUNT). See the screenshot below:

 

 

Recommendation: consider setting the FAST_START_PARALLEL_ROLLBACK parameter to HIGH using:

ALTER SYSTEM SET FAST_START_PARALLEL_ROLLBACK=HIGH SCOPE=BOTH;

When FAST_START_PARALLEL_ROLLBACK is set to HIGH, a system with a large number of CPUs will spawn a lot of parallel recovery slaves, which can substantially increase the IOPS rate. One should therefore verify that the system is not already challenged for I/O before setting FAST_START_PARALLEL_ROLLBACK to HIGH.

The ORCLPRD system is not quite CPU bound as we can see from the following screenshot:

 

 

  • Ensure Asynchronous I/O Is Enabled

Under most circumstances, Oracle Database automatically detects if asynchronous I/O is available and appropriate for a particular platform and enables asynchronous I/O through the DISK_ASYNCH_IO initialization parameter. However, for optimal performance, it is always a best practice to ensure that asynchronous I/O is actually being used. Query the V$IOSTAT_FILE view to determine whether asynchronous I/O is used. In the case of ORCLPRD:
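A minimal sketch of the check (the ASYNCH_IO column reports ASYNC_ON for files using asynchronous I/O):

SQL> select distinct filetype_name, asynch_io
  from v$iostat_file
 where filetype_name in ('Data File','Log File');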

 

  • Set LOG_BUFFER Initialization Parameter to 64 MB or higher.

In our case we are well below the 64 MB threshold:

However, without knowing the transaction rates, the sizes of the transactions, the number of users, and so on, it is difficult to say whether this in itself is a problem, but it would be good to be above the 64 MB threshold to be on the safe side.
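LOG_BUFFER is a static parameter, so a change only takes effect after an instance restart; a hedged sketch (64 MB expressed in bytes):

SQL> alter system set log_buffer=67108864 scope=spfile sid='*';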

 

  • Use Automatic Shared Memory Management and Avoid Memory Paging

For any systems with 4 GB or more memory, disable Automatic Memory Management by setting MEMORY_TARGET=0 and enable Automatic Shared Memory Management by setting SGA_TARGET.

 

Let us check this on our system:

 

Since MEMORY_TARGET is set but SGA_TARGET is not, Automatic Memory Management (AMM) is in use on the system. As a best practice, Linux HugePages ought to be used when the SGA > 8 GB, which is the case here; as a first step we should therefore consider moving from Automatic Memory Management (AMM) to Automatic Shared Memory Management (ASMM).
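A hedged sketch of the switch from AMM to ASMM (the SGA and PGA sizes below are purely illustrative and must be derived from the actual workload; the changes take effect only after an instance restart):

SQL> alter system set memory_target=0 scope=spfile sid='*';
SQL> alter system set memory_max_target=0 scope=spfile sid='*';
SQL> alter system set sga_target=40g scope=spfile sid='*';
SQL> alter system set pga_aggregate_target=8g scope=spfile sid='*';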

Additionally, the sum of the SGA and PGA memory allocations on the database server should always be less than the system's physical memory, and conservatively should be less than 75% of total system memory. However, PGA_AGGREGATE_TARGET is not a hard limit, and for some data warehouse or reporting applications the PGA memory can grow to 3 x PGA_AGGREGATE_TARGET.

It is suggested that we monitor PGA memory and host-based memory utilization using Oracle Enterprise Manager, or by querying v$pgastat and operating systems statistics, to get an accurate understanding of memory utilization.

It is strongly recommended to move towards HugePages so that ASM and database instances can use it for their SGA. HugePages is a feature integrated into the Linux kernel from release 2.6. This feature provides an alternative to the 4K page size by offering bigger pages. Using HugePages has the benefit of saving memory resources by decreasing page table overhead while making sure the memory is not paged to disk. This contributes to faster overall memory performance. In addition, overall node stability benefits from using HugePages.

Ensuring the entire SGA of a database instance is stored in HugePages can be accomplished by setting the init.ora parameter use_large_pages=only. Setting this parameter ensures that an instance will start only when it can get all of its SGA memory from HugePages. For this reason the setting use_large_pages=only is recommended for database instances.

For ASM instances leave use_large_pages=true (the default value). This setting still ensures that HugePages are used when available, but also ensures that ASM as part of Grid Infrastructure starts when HugePages are not or insufficiently configured.
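A minimal sketch for the database instances (a static parameter, effective at the next startup and only once HugePages have been configured at the operating system level):

SQL> alter system set use_large_pages='ONLY' scope=spfile sid='*';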

 

  • CURSOR SHARING

CURSOR_SHARING=SIMILAR has been deprecated as per MOS Note 1169017.1. Starting with Oracle 11g, Oracle implemented a more intelligent solution known as adaptive cursor sharing, making CURSOR_SHARING=SIMILAR obsolete.

So unless we have a very strong reason to do otherwise, it is best to stick with CURSOR_SHARING=EXACT.

 

  • DB_BLOCK_CHECKSUM

This parameter setting should be TYPICAL or FULL on all the databases. In our case it is set to TYPICAL, as can be seen from the following screenshot.

DB_BLOCK_CHECKSUM determines whether DBWn and the direct loader will calculate a checksum (a number calculated from all the bytes stored in the block) and store it in the cache header of every data block when writing it to disk. Checksums are verified when a block is read – only if this parameter is TYPICAL or FULL and the last write of the block stored a checksum. In FULL mode, Oracle also verifies the checksum before a change application from update/delete statements and recomputes it after the change is applied. In addition, Oracle gives every log block a checksum before writing it to the current log.

If this parameter is set to OFF, DBWn calculates checksums only for the SYSTEM tablespace, but not for user tablespaces.

Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O systems. If set to FULL, DB_BLOCK_CHECKSUM also catches in-memory corruptions and stops them from making it to disk. Turning on this feature in TYPICAL mode causes only an additional 1% to 2% overhead; in FULL mode it causes 4% to 5% overhead. Oracle recommends setting DB_BLOCK_CHECKSUM to TYPICAL. For backward compatibility, the values TRUE (implying TYPICAL) and FALSE (implying OFF) are preserved.
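The parameter is dynamic, so should it ever need to be (re)set, a minimal sketch:

SQL> alter system set db_block_checksum='TYPICAL' scope=both sid='*';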

  • Use Oracle Enterprise Manager for better administration

Oracle Enterprise Manager Cloud Control enables us to use the Oracle Enterprise Manager console interface to discover Oracle RAC database and instance targets. We can use Oracle Enterprise Manager to configure Oracle RAC environment alerts. We can also configure special Oracle RAC database tests, such as global cache converts, consistent read requests, and so on.

Oracle Enterprise Manager distinguishes between database- and instance-level alerts in Oracle RAC environments. Alert thresholds for instance-level alerts, such as archive log alerts, can be set at the instance target level. This function enables us to receive alerts for the specific instance if performance exceeds our threshold. We can also configure alerts at the database level, such as setting alerts for tablespaces, to avoid receiving duplicate alerts at each instance. We can define blackouts (which are time periods in which database monitoring is suspended so that maintenance operations do not skew monitoring data or generate needless alerts) for all managed targets of an Oracle RAC database to prevent alerts from occurring while performing maintenance. So overall, we can leverage OEM to provide better manageability and administration of a database, clustered or otherwise.

 

11   SECURITY UPDATES & PATCHES

My Oracle Support provides patch recommendations for the following product lines, if we have a valid Customer Support Identifier Number.

Starting with the October 2013 Critical Patch Update, security fixes for Java SE are released under the normal Critical Patch Update schedule.

A pre-release announcement will be published on the Thursday preceding each Critical Patch Update release. Oracle will issue Security Alerts for vulnerability fixes deemed too critical to wait for distribution in the next Critical Patch Update.

 

  • Oracle Database
  • Oracle Application Server
  • Enterprise Manager Cloud Control (Management Agent/Oracle Management Service)
  • Oracle Fusion Applications
  • Oracle Fusion Middleware
  • Oracle SOA Suite
  • Oracle Real Applications Clusters
  • Oracle Real Applications Clusters with Clusterware
  • Oracle Dataguard
  • Oracle Exadata Database Machine and Exadata Storage Server
  • Oracle Exalogic
  • Oracle Coherence, Oracle WebLogic Server, Oracle JRockit for non- Exalogic systems
  • Oracle Business Process Management
  • Application Development Framework Runtime

 

Critical Patch Updates are collections of security fixes for Oracle products. They are available to customers with valid support contracts. They are released on the Tuesday closest to the 17th day of January, April, July and October. The next three dates are:

 

  • 17 January 2017
  • 18 April 2017
  • 18 July 2017

and the last date was 18 October 2016.

Additionally, we can also check for recommended patches, security updates, and bug fixes in the My Oracle Support page, and apply those from time to time.

 

12     ORACHK SCORE

We ran the orachk tool on the database. This tool analyzes the information inside the database and, amongst other things, provides a report card with a score. In general, any score above 85 out of 100 is considered a good score and a validation that the database is functioning properly. See the screenshot below:

13    CONCLUSION

Please find below a summary of recommendations based on the audit of the database. While these recommendations are made after due diligence and consideration, it is advisable to test them in a lab setup wherever applicable.

 

  • Apply the latest patches and bug fixes on the database, especially Patch # 19769480, to prevent the known bug: Bug 19434529 – ORA-4031 in ASM shared pool due to leaked "kffil" state objects (Doc ID 19434529.8).

 

  • Consider upgrading to Oracle 12c, as Premier Support for Oracle Database 11.2.0.4 has already ended (meaning Oracle will not offer fixes for new issues unless the extra-cost Extended Support offering has been purchased). Assuming Extended Support is available, coverage can be extended to DEC-2020, at the usual extra cost. This information can be found in MOS Note 742060.1. Oracle 12c offers numerous extra features plus an unparalleled ability to scale, which is why there is every reason to upgrade.

  

  • Free space under ORCL_FRA and ORCL_DATA is tight, and there could be major space-related issues if applications from both sites are moved to one site. Please consider increasing the storage in the disk groups ORCL_DATA and ORCL_FRA at the earliest.

 

  • Backup and recovery are sub-optimally configured. It is very strongly suggested that backup, restore, and recovery are tested at each site, both from ASM and from NetBackup.

  

  • Basic housekeeping has a lot of scope for improvement: alert logs, trace files, and listener logs have to be deleted on a schedule. Jobs can be set up through the Oracle Job Scheduler or in the crontab, or the UNIX logrotate facility can be used to control runaway growth of these files (see the housekeeping sketch after this list). The UNIX filesystem also has to be monitored carefully to check that space usage in these directories remains under the threshold limit.

 

  • Consider implementing Linux HugePages to get a definite improvement in performance. This is especially significant where there is heavy transaction load and concurrency on the database (see the HugePages sketch after this list).

 

  • Always connect using the SCAN name epedbp1-scan; using the SCAN name, as opposed to an IP or a VIP, results in client-side load balancing. This in itself is not a major point, but when concurrency increases it significantly helps in optimizing the connections that go to the database.

 

  • Many of the datafiles added to tablespaces are randomly sized, which is not a best practice. The suggestion is that henceforth any file added to a tablespace should be sized uniformly. Additionally, there is a mix of AUTOEXTENSIBLE and NON-AUTOEXTENSIBLE files in the tablespace setup; it is strongly recommended to make all the tablespaces AUTOEXTENSIBLE. Also, there is a lot of free space in these tablespaces: consider shrinking the datafiles to release space (see the datafile sketch after this list).
  • Consider implementing the parameters pointed out in the section relating to parameters.
  • Consider installing Oracle Enterprise Manager Express for a lightweight GUI for monitoring the database.
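Housekeeping sketch (referenced above): a minimal crontab example; the paths and retention periods are hypothetical and must be adapted to the actual diagnostic_dest layout.

# Purge *.trc trace files older than 14 days under the ADR base (hypothetical path):
0 2 * * * find /u01/app/oracle/diag -name "*.trc" -mtime +14 -delete
# ADR-aware purge via adrci (age is in minutes; 20160 = 14 days); if multiple
# ADR homes exist, a homepath must be set inside the exec string first:
0 3 * * * adrci exec="purge -age 20160 -type trace"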
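HugePages sketch (referenced above): a rough illustration assuming an SGA of about 8 GB and the default 2 MB hugepage size; the real page count must be computed from the actual SGA, for example with the hugepages_settings.sh script from MOS Doc ID 401749.1.

# /etc/sysctl.conf entry (hypothetical value: ~8 GB / 2 MB, plus headroom)
vm.nr_hugepages = 4200

# Apply and verify (the instance must be restarted to make use of the pages):
$ sysctl -p
$ grep Huge /proc/meminfo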
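Datafile sketch (referenced above): the file names are hypothetical and shown only to illustrate uniform autoextend settings and shrinking; real names come from DBA_DATA_FILES.

-- Enable autoextend with a uniform growth increment and a cap:
ALTER DATABASE DATAFILE '+ORCL_DATA/orcl/datafile/users01.dbf' AUTOEXTEND ON NEXT 1G MAXSIZE 32767M;
-- Shrink a file to release free space (fails with ORA-03297 if used data lies beyond the new size):
ALTER DATABASE DATAFILE '+ORCL_DATA/orcl/datafile/users01.dbf' RESIZE 10G;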

 

 

 

FLASHBACK DATABASE AND QUERIES

FLASHBACK FEATURES

 

Purging the Recycle Bin

 

purge table sales.woodscrews;

purge index sales.woodscrews_pk_idx;

purge tablespace sales;

purge recyclebin;

 

Undropping Objects in the Recycle Bin

 

flashback table ws_app.woodscrews to before drop;

select object_name, original_name, droptime, dropscn from user_recyclebin;

flashback table bonus to before drop rename to bonus_deb;

 

-----------Accidental Delete------------------------------------------

 

Accidentally dropped table MHC_PA_INT_PPA_SPN_RL_TBL from MHCPA schema

**********************************************************************

How to recover:

1) Log in to the schema from which the table was dropped.

2) SELECT * FROM recyclebin WHERE original_name = 'MHC_PA_INT_PPA_SPN_RL_TBL';

3) If you get multiple rows, check the drop time to identify the correct version.

4) FLASHBACK TABLE "BIN$pLplsuQ1FXDgRAAhKFcB9g==$0" TO BEFORE DROP;

5) Alternatively, restore under a new name: FLASHBACK TABLE XXX TO BEFORE DROP RENAME TO OLD_XXX;

 

Display recyclebin

show recyclebin;

 

Performing the Flashback Table from SQL

flashback table sales.woodscrew to timestamp
to_timestamp('2003-12-01 13:30:00','YYYY-MM-DD HH24:MI:SS');
flashback table sales.woodscrew to scn 751652;
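Note that FLASHBACK TABLE to an SCN or timestamp requires row movement to be enabled on the table beforehand; otherwise the operation fails with ORA-08189:

alter table sales.woodscrew enable row movement;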

 

To recover a table to a previous timestamp

******************************************

 

The following query retrieves the previous state of a table:

SELECT * FROM AR_RECEIPT_METHODS AS OF TIMESTAMP TO_TIMESTAMP('2011-09-09 14:30:00', 'YYYY-MM-DD HH24:MI:SS');

SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';

Session altered.

SQL> select sysdate from dual;

SYSDATE

-------------------

2011/08/30 00:43:01

SQL> flashback table MHCAP.MHC_INT_MASTRO_EBS_TBL to timestamp TO_TIMESTAMP('2011/08/30 00:20:01','YYYY/MM/DD HH24:MI:SS');

Flashback complete.

select versions_startscn, versions_operation, t.* from t1 versions between scn 1239229 and 1239236 t;

select dbms_flashback.get_system_change_number from dual;

 

Flashback Retention Target

alter system set db_flashback_retention_target=720;

Configuring Flashback Database

 

1. The database must be mounted (not open) and running in ARCHIVELOG mode; verify with:

select status from v$instance;

archive log list;

2. Set the flashback retention target to the desired value; we will use a 12-hour window (the value is specified in minutes).

alter system set db_flashback_retention_target=720
SCOPE=BOTH SID='*';

 

3. Set the values for DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE (the fast recovery area parameters).

 

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 2335825920 SCOPE=BOTH SID='*';

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u02/fra/' SCOPE=BOTH SID='*';

4. Turn flashback logging on.

alter database flashback on;

5. Turn flashback logging off for any tablespaces that do not require it.

alter tablespace sales_idx flashback off;

6. Open the database.

alter database open;

Flashback Database: Tuning

select estimated_flashback_size from v$flashback_database_log;

select oldest_flashback_scn, oldest_flashback_time

from v$flashback_database_log;

select * from v$flashback_database_stat;

Flashback example

  1. First, get the current SCN from the database. Because we are simply testing, we can prepare for the test by getting the current SCN prior to putting the fault into the database.

SQL> select current_scn from v$database;

CURRENT_SCN

—————-

885524

  2. Introduce the fault.

SQL> truncate table woodscrew;

Table truncated.

  3. Shut down the database, and then remount. The database must be mounted and not open for flashback.

SQL> connect / as sysdba

Connected.

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount;

  4. Issue the FLASHBACK command.

SQL> flashback database to scn 885524;

Flashback complete.

  5. Open the database read-only to confirm that the table has been flashed back to the appropriate SCN.

SQL> alter database open read only;

Database altered.

SQL> connect sales/sales;

Connected.

SQL> select count(*) from woodscrew;

COUNT(*)

———-

12

  6. Open the database with RESETLOGS.

SQL> connect / as sysdba

SQL> shutdown immediate;

SQL> startup mount;

SQL> alter database open resetlogs;

 

####### Recovering the Database Using Flashback Technology #########################

 

SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

 

SELECT CURRENT_SCN FROM V$DATABASE;

SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE FROM V$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';

SHUTDOWN IMMEDIATE;

STARTUP MOUNT;

 

FLASHBACK DATABASE TO SCN 46963;

FLASHBACK DATABASE TO RESTORE POINT BEFORE_CHANGES;

FLASHBACK DATABASE TO TIME "TO_DATE('09/20/05','MM/DD/YY')";

ALTER DATABASE OPEN READ ONLY;

 

SHUTDOWN IMMEDIATE

STARTUP MOUNT

ALTER DATABASE OPEN RESETLOGS;

 

SET UNTIL TIME 'Nov 15 2004 09:00:00';

SET UNTIL SEQUENCE 9923;

SET UNTIL RESTORE POINT before_update;

RESTORE DATABASE;

RECOVER DATABASE;

ALTER DATABASE OPEN RESETLOGS;
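These SET UNTIL variants are RMAN commands and, in practice, are wrapped in a RUN block with the database mounted. A minimal sketch (the date string and format are examples only):

RUN {
  SET UNTIL TIME "TO_DATE('2004-11-15 09:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
ALTER DATABASE OPEN RESETLOGS;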

Network Configuration: listener.ora and tnsnames.ora in Oracle

Oracle Network Configuration
===========================
In its most basic form, Oracle uses three files (listener.ora, tnsnames.ora & sqlnet.ora) for network configuration. This article gives an example of each file as a starting point for simple network configuration.

• Assumptions
• Listener.ora
• Tnsnames.ora
• Sqlnet.ora
• Testing

Assumptions
==========
The example files below are relevant for an Oracle installation and instance with the following values.

Parameter     Value
===================
HOST          hostname
ORACLE_HOME   /u01/app/oracle/product/9.2.0
ORACLE_SID    ORCL
DOMAIN        world

Listener.ora
============
The listener.ora file contains server-side network configuration parameters. It can be found in the "$ORACLE_HOME/network/admin" directory on the server. Here is an example of a listener.ora file from Windows 2000.

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = ORCL.WORLD)
(ORACLE_HOME = /u01/app/oracle/product/9.2.0)
(SID_NAME = ORCL)
)
)

After the "listener.ora" file is amended, the listener should be restarted or reloaded to allow the new configuration to take effect.

C:> lsnrctl stop
C:> lsnrctl start

C:> lsnrctl reload

Tnsnames.ora
============

The “tnsnames.ora” file contains client side network configuration parameters. It can be found in the “$ORACLE_HOME/network/admin” or “$ORACLE_HOME/net80/admin” directory on the client. This file will also be present on the server if client style connections are used on the server itself. Here is an example of a “tnsnames.ora” file.

ORCL.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL.WORLD)
)
)

Sqlnet.ora
==========

The “sqlnet.ora” file contains client side network configuration parameters. It can be found in the “$ORACLE_HOME/network/admin” or “$ORACLE_HOME/net80/admin” directory on the client. This file will also be present on the server if client style connections are used on the server itself. Here is an example of an “sqlnet.ora” file.

NAMES.DIRECTORY_PATH= (TNSNAMES, ONAMES, HOSTNAME)
NAMES.DEFAULT_DOMAIN = WORLD

# The following entry is necessary on Windows if OS authentication is required.
SQLNET.AUTHENTICATION_SERVICES= (NTS)

Testing
=======
Once the files are present in the correct location and amended as necessary, the configuration can be tested using SQL*Plus by attempting to connect to the database using the appropriate username (SCOTT), password (TIGER), and service (ORCL).

$ sqlplus scott/tiger@orcl
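Name resolution can also be checked with the tnsping utility, which confirms that the alias resolves and the listener responds (it does not validate credentials):

$ tnsping orcl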

Performance
============

The tcp.nodelay parameter in the protocol.ora file
==================================================

Oracle Net, by default, waits until the buffer is filled before transmitting data. Therefore, requests are not always sent immediately to their destinations. This is most common when large amounts of data are streamed from one end to another, and Oracle Net does not transmit the packet until the buffer is full. Adding a protocol.ora file and specifying tcp.nodelay to stop buffer-flushing delays can remedy this problem.

The protocol.ora file can be specified to indicate no data buffering for all TCP/IP implementations. The parameter can be used on both the client and server. The protocol.ora statement is:

tcp.nodelay = yes

Specifying this parameter causes TCP buffering to be skipped so that every request is sent immediately. Keep in mind, however, that network traffic can increase due to smaller and more frequent packet transmission, causing slowdowns in the network.

The tcp.nodelay parameter should be used only if TCP timeouts are encountered. Setting tcp.nodelay can cause a huge improvement in performance when there is high-volume traffic between database servers.

The automatic_ipc parameter of the sqlnet.ora file
===================================================

The automatic_ipc parameter bypasses the network layer, thereby speeding local connections to the database. When automatic_ipc=on, Oracle Net checks to see if a local database is defined by the same alias. If so, network layers are bypassed as the connection is translated directly to the local IPC connections. This is useful on database servers, but it’s absolutely useless for Oracle Net clients.

The automatic_ipc parameter should be used only on the database server when an Oracle Net connection must be made to the local database. If local connections are not needed or required, set this parameter to off; with this setting, all Oracle Net clients can improve performance.

The SDU and TDU parameters in the tnsnames.ora file
===================================================

The session data unit (SDU) and transport data unit (TDU) parameters are located in the tnsnames.ora and listener.ora files. SDU specifies the size of the packets to send over the network. Ideally, SDU should not exceed the size of the maximum transmission unit (MTU). MTU is a fixed value that depends on the actual network implementation used. Oracle recommends that SDU be set equal to MTU.

The TDU is the default packet size used within Oracle Net to group data together. The TDU parameter should ideally be a multiple of the SDU parameter. The default value for both SDU and TDU is 2,048, and the maximum value is 32,767 bytes.

The following guidelines apply to SDU and TDU:

• The SDU should never be set greater than TDU because you’ll waste network resources by shipping wasted space in each packet.

• If your users are connecting via modem lines, you may want to set SDU and TDU to smaller values because of the frequent resends that occur over modem lines.

• On fast network connections (T1 or T3 lines), you should set SDU and TDU equal to the MTU for your network. On standard Ethernet networks, the default MTU size is set to 1,514 bytes. On standard token ring networks, the default MTU size is 4,202.

• If the Multi-Threaded Server (MTS) is used, you must also set the MTS_DISPATCHERS parameter with the proper SDU/TDU configuration.

The SDU and TDU settings are a direct function of the connection speed between the hosts. For fast T1 lines, set SDU=TDU=MTU. For slower modem lines, experiment with smaller values of SDU and TDU.
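For illustration, here is the ORCL.WORLD alias from earlier with SDU and TDU added; the values are examples only and should be derived from your network's MTU:

ORCL.WORLD =
(DESCRIPTION =
(SDU = 1514)
(TDU = 1514)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = ORCL.WORLD)
)
)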

The queuesize parameter in the listener.ora file
===============================================

The number of requests the listener can store while Oracle is working to establish a connection is determined by the undocumented queuesize parameter. This parameter is used only for very high-volume databases, where the listener spawns thousands of connections per hour. The number of expected simultaneous connections should be equal to the size of the queuesize parameter. Here’s an example of this parameter in the listener.ora file:

LISTENER =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = marvin)
(PORT = 1521)
(QUEUESIZE = 32)
)
)

A disadvantage of this parameter is that it preallocates resources for anticipated requests, therefore using more system memory and resources. You may want to consider using MTS and prespawned Oracle connections if you have high-volume connections into a dedicated listener. Also, note that some versions of UNIX do not allow queues greater than five, and there are some restrictions on the MTS queue size.

Memory and Processes in Oracle: General Notes

MEMORY MANAGEMENT 11g
=====================

Oracle 11g internals part 1: Automatic Memory Management

PGA+SGA=MEMORY_TARGET

To find the shared memory IDs for my instance:

$ sysresv

To list the corresponding SysV shared memory segments:

$ ipcs -m

To inspect the mapped memory of an Oracle instance process (the SGA should definitely be mapped there):

$ pmap `pgrep -f lgwr`

$ ls -l /dev/shm

Removing Shared Memory

Sometimes after an instance crash you may have to remove Oracle’s shared memory segment(s) manually.

To see all shared memory segments that are allocated on the system, execute:
$ ipcs -m

------ Shared Memory Segments --------
key        shmid  owner  perms bytes      nattch status
0x8f6e2129 98305  oracle 600   77694523   0
0x2f629238 65536  oracle 640   2736783360 35
0x00000000 32768  oracle 640   2736783360 0      dest

In this example you can see that three shared memory segments have been allocated. The output also shows that shmid 32768 is an abandoned shared memory segment from a past ungraceful Oracle shutdown. Status “dest” means that this memory segment is marked to be destroyed. To find out more about this shared memory segment you can run:
$ ipcs -m -i 32768
Shared memory Segment shmid=32768
uid=500 gid=501 cuid=500 cgid=501
mode=0640 access_perms=0640
bytes=2736783360 lpid=3688 cpid=3652 nattch=0
att_time=Sat Oct 29 13:36:52 2005
det_time=Sat Oct 29 13:36:52 2005
change_time=Sat Oct 29 11:21:06 2005

To remove the shared memory segment, you could copy/paste the shmid and execute:
$ ipcrm -m 32768

pmap output reveals that Oracle 11g prefers to use /dev/shm for its shared memory implementation instead: there are multiple 16MB "files" mapped into the address space of the Oracle server processes.

Incidentally, if the MEMORY_MAX_TARGET parameter is larger than 1024 MB, Oracle's memory granule size is 16MB on Linux; otherwise it is 4MB.

Overall Tools:
top, vmstat, sar, ps, pstree, ipcs

CPU:
top, mpstat, tload, /proc/cpuinfo, x86info

Memory:
free, /proc/meminfo, slabtop, /proc/slabinfo, ipcs

I/O:
iostat, vmstat, sar

sar examples:

To display CPU utilization:
sar 3 100
To display paging activity:
sar -B 3 100
To display swapping activity:
sar -W 3 100
To display block I/O activity:
sar -b 3 100
To display block I/O activity for each block device:
sar -d 3 100
To display network activity:
sar -n DEV 3 100

PGA_AGGREGATE_TARGET itself is just a recommended number, left over from MEMORY_TARGET minus SGA_TARGET (if it is set). Oracle uses that number to decide how big a PGA it will "recommend" for sessions that are using WORKAREA_SIZE_POLICY=AUTO.

Manual PGA Memory Management
============================
In manual PGA memory management, the parameters that have the largest impact on the size of your
PGA, outside of the memory allocated by your session for PL/SQL tables and other variables, are:

• SORT_AREA_SIZE: The total amount of RAM that will be used to sort information
before swapping out to disk.
• SORT_AREA_RETAINED_SIZE: The amount of memory that will be used to hold sorted
data after the sort is complete. That is, if SORT_AREA_SIZE is 512KB and
SORT_AREA_RETAINED_SIZE is 256KB, your server process would use up to 512KB of
memory to sort data during the initial processing of the query. When the sort was
complete, the sorting area would “shrink” down to 256KB, and any sorted data
that did not fit in that 256KB would be written out to the temporary tablespace.
• HASH_AREA_SIZE: The amount of memory your server process can use to store hash
tables in memory. These structures are used during a hash join, typically when
joining a large set with another set. The smaller of the two sets would be hashed
into memory and anything that didn’t fit in the hash area region of memory would
be stored in the temporary tablespace by the join key.

These parameters control the amount of space Oracle will use to sort or hash data before writing (swapping) it to disk, and how much of that memory segment will be retained after the sort is done. The memory calculated as SORT_AREA_SIZE minus SORT_AREA_RETAINED_SIZE is generally allocated out of your PGA, and the SORT_AREA_RETAINED_SIZE value will be in your UGA. You can discover your current use of PGA and UGA memory and monitor its size by querying special Oracle V$ views, also referred to as dynamic performance views.

In Oracle9i, direct I/O to temporary space was not labeled as such, so we used a WHERE clause that included (and a.name like '%ga %' or a.name like '%physical % direct%') in it.

Automatic PGA Memory Management
===============================
Starting with Oracle9i Release 1, a new way to manage PGA memory was introduced that avoids using
the SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE, and HASH_AREA_SIZE parameters. It was introduced to
address a few issues:
• Ease of use: Much confusion surrounded how to set the proper *_AREA_SIZE
parameters. There was also much confusion over how those parameters actually
worked and how memory was allocated.

• Manual allocation was a "one-size-fits-all" method: Typically, as the number of users running similar applications against a database went up, the amount of memory used for sorting and hashing went up linearly as well. If 10 concurrent users with a sort area size of 1MB used 10MB of memory, 100 concurrent users would probably use 100MB, 1,000 would probably use 1,000MB, and so on. Unless the DBA was sitting at the console continually adjusting the sort/hash area size settings, everyone would pretty much use the same values all day long. Consider the previous example, where you saw for yourself how the physical I/O to temp decreased as the amount of RAM we allowed ourselves to use went up. If you run that example for yourself, you will almost certainly see a decrease in response time as the amount of RAM available for sorting increases. Manual allocation fixes the amount of memory to be used for sorting at a more or less constant number, regardless of how much memory is actually available. Automatic memory management allows us to use the memory when it is available; it dynamically adjusts the amount of memory we use based on the workload.

• Memory control: As a result of the previous point, it was hard, if not impossible, to
keep the Oracle instance inside a “box” memory-wise. You couldn’t control the
amount of memory the instance was going to use, as you had no real control over
the number of simultaneous sorts and hashes taking place. It was far too easy to
attempt to use more real memory (actual physical free memory) than was
available on the machine.
Enter automatic PGA memory management. Here, you first simply set up and size the SGA. The SGA
is a fixed-size piece of memory so you can very accurately see how big it is, and that will be its total size
(unless and until you change it). You then tell Oracle, “This is how much memory you should try to limit
yourself to across all work areas” (a new umbrella term for the sorting and hashing areas you use). Now,
you could in theory take a machine with 2GB of physical memory and allocate 768MB of memory to the
SGA and 768MB of memory to the PGA, leaving 512MB of memory for the OS and other processes. I say
“in theory” because it doesn’t work exactly that cleanly, but it’s close. Before I discuss why that’s true,
let’s take a look at how to set up automatic PGA memory management and turn it on.
The process of setting this up involves deciding on the proper values for two instance initialization
parameters:

• WORKAREA_SIZE_POLICY: This parameter may be set to either MANUAL, which will use the sort area and hash area size parameters to control the amount of memory allocated, or AUTO, in which case the amount of memory allocated will vary based on the current workload in the database. The default and recommended value is AUTO.

• PGA_AGGREGATE_TARGET: This parameter controls how much memory the instance
should allocate, in total, for all work areas used to sort or hash data. Its default
value varies by version and may be set by various tools such as the DBCA. In
general, if you are using automatic PGA memory management, you should
explicitly set this parameter.
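A minimal sketch of turning this on (the target size is illustrative only and must be derived from available memory):

SQL> alter system set workarea_size_policy = AUTO scope=both;
SQL> alter system set pga_aggregate_target = 1G scope=both;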

=============================Monitor================================

PGA usage statistics:

select * from v$pgastat;

select name, value
from v$pgastat
where name in ('aggregate PGA target parameter',
'aggregate PGA auto target',
'total PGA inuse',
'total PGA allocated',
'over allocation count',
'extra bytes read/written',
'cache hit percentage');

Determine a good setting for pga_aggregate_target:

select * from v$pga_target_advice order by pga_target_for_estimate;

Show the maximum PGA usage per process:

select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem) from v$process;

In V$SQL_WORKAREA, what we need to examine for tuning the PGA size are:

Optimal_executions - the number of times this work area ran in optimal mode, without interacting with the temporary tablespace.
Onepass_executions - the number of times this work area ran in one-pass mode, meaning the SQL execution needed to interact with the temporary tablespace once to finish.
Multipasses_executions - the number of times this work area ran below the one-pass memory requirement, meaning the SQL execution needed to interact with the temporary tablespace more than once to finish.

Here, we would like to get summary information about the execution criteria from the view v$sql_workarea.

SQL> select sum(OPTIMAL_EXECUTIONS) OPTIMAL,
sum(ONEPASS_EXECUTIONS) ONEPASS,
sum(MULTIPASSES_EXECUTIONS) MULTIPASSES
from v$sql_workarea
where POLICY='AUTO';

select n.name, sum(s.value) value
from v$sesstat s, v$statname n
where s.statistic#=n.statistic#
and n.name like 'workarea executions%'
group by n.name;

SGA
===

The Database Buffer Cache
========================

The database buffer cache is Oracle's work area for executing SQL. When updating data, users' sessions don't directly update the data on disk. The data blocks containing the data of interest are first copied into the database buffer cache. Changes (such as inserting new rows and deleting or modifying existing rows) are applied to these copies of the data blocks in the database buffer cache. The blocks will remain in the cache for some time afterward, until the buffer they are occupying is needed for caching another block.

When querying data, the data also goes via the cache. The session works out which blocks contain the rows of interest and copies them into the database buffer cache; the relevant rows are then transferred into the session’s PGA for further processing. And again, the blocks remain in the database buffer cache for some time afterward.

Take note of the term block. Datafiles are formatted into fixed-sized blocks. Table rows, and other data objects such as index keys, are stored in these blocks. The database buffer cache is formatted into memory buffers each sized to hold one block. Unlike blocks, rows are of variable length; the length of a row will depend on the number of columns defined for the table, whether the columns actually have anything in them, and if so, what. Depending on the size of the blocks (which is chosen by the DBA) and the size of the rows (which is dependent on the table design and usage), there may be several rows per block or possibly a row may stretch over several blocks. The structure of a data block will be described in the section “The Datafiles” later in this chapter.

Ideally, all the blocks containing data that is frequently accessed will be in the database buffer cache, therefore minimizing the need for disk I/O. As a typical use of the database buffer cache, consider an end user retrieving an employee record and updating it, with these statements:

select last_name, salary, job_id from employees where employee_id=100;
update employees set salary=salary * 1.1 where employee_id=100;
commit;

The user process will have prompted the user for the employee number and constructed the SELECT statement. The SELECT retrieves some details to be sent to the user process, where they will be formatted for display. To execute this statement, the session’s server process will read the data block containing the relevant row from a datafile into a buffer. The user process will then initiate a screen dialogue to prompt for some change to be made and verified; then the UPDATE statement and the COMMIT statement will be constructed and sent to the server process for execution. Provided that an excessive period of time has not elapsed, the block with the row will still be available in the cache when the UPDATE statement is executed. In this example, the buffer cache hit ratio will be 50 percent: two accesses of a block in the cache, but only one read of the block from disk. A well-tuned database buffer cache can result in a cache hit ratio well over 90 percent.

A buffer storing a block whose image in the cache is not the same as the image on disk is often referred to as a dirty buffer. A buffer will be clean when a block is first copied into it: at that point, the block image in the buffer is the same as the block image on disk. The buffer will become dirty when the block in it is updated. Eventually, dirty buffers must be written back to the datafiles, at which point the buffer will be clean again. Even after being written to disk, the block remains in memory; it is possible that the buffer will not be overwritten with another block for some time.

Note that there is no correlation between the frequency of updates to a buffer (or the number of COMMITs) and when it gets written back to the datafiles. The write to the datafiles is done by the database writer background process.

The size of the database buffer cache is critical for performance. The cache should be sized adequately for caching all the frequently accessed blocks (whether clean or dirty), but not so large that it caches blocks that are rarely needed. An undersized cache will result in excessive disk activity, as frequently accessed blocks are continually read from disk, used, overwritten by other blocks, and then read from disk again. An oversized cache is not so bad (so long as it is not so large that the operating system is having to swap pages of virtual memory in and out of real memory) but can cause problems; for example, startup of an instance is slower if it involves formatting a massive database buffer cache.

The database buffer cache is allocated at instance startup time. Prior to release 9i of the database it was not possible to resize the database buffer cache subsequently without restarting the database instance, but from 9i onward it can be resized up or down at any time. This resizing can be either manual, or (from release 10g onward) automatic according to workload, if the automatic mechanism has been enabled.
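For illustration, a manual resize and a quick look at the current size (a sketch; the size is an example, and under automatic SGA management the value acts as a minimum rather than a fixed size):

SQL> alter system set db_cache_size = 2G scope=both;
SQL> select name, block_size, current_size from v$buffer_pool;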

The Log Buffer

=============

The log buffer is a small, short-term staging area for change vectors before they are written to the redo log on disk. A change vector is a modification applied to something; executing DML statements generates change vectors applied to data. The redo log is the database’s guarantee that data will never be lost: whenever a data block is changed, the change vectors applied to the block are written out to the redo log, from where they can be extracted and applied to datafile backups if it is ever necessary to restore a datafile.

Redo is not written directly to the redo log files by session server processes. If it were, the sessions would have to wait for disk I/O operations to complete whenever they executed a DML statement. Instead, sessions write redo to the log buffer, in memory. This is much faster than writing to disk. The log buffer (which may contain change vectors from many sessions, interleaved with each other) is then written out to the redo log files. One write of the log buffer to disk may therefore be a batch of many change vectors from many transactions. Even so, the change vectors in the log buffer are written to disk in very nearly real time—and when a session issues a COMMIT statement, the log buffer write really does happen in real time. The writes are done by the log writer background process, the LGWR.

The log buffer is small (in comparison with other memory structures) because it is a very short-term storage area. Change vectors are inserted into it and are streamed to disk in near real time. There is no need for it to be more than a few megabytes at the most, and indeed making it much bigger than the default value can be seriously bad for performance. The default is determined by the Oracle server and is based on the number of CPUs on the server node.

It is not possible to create a log buffer smaller than the default. If you attempt to, it will be set to the default size anyway. It is possible to create a log buffer larger than the default, but this is often not a good idea. The problem is that when a COMMIT statement is issued, part of the commit processing involves writing the contents of the log buffer to the redo log files on disk. This write occurs in real time, and while it is in progress, the session that issued the COMMIT will hang. Commit processing is a critical part of the Oracle architecture. The guarantee that a committed transaction will never be lost is based on this: the commit-complete message is not returned to the session until the data blocks in the cache have been changed (which means that the transaction has been completed) and the change vectors have been written to the redo log on disk (and therefore the transaction could be recovered if necessary). A large log buffer means that potentially there is more to write when a COMMIT is issued, and therefore it may take a longer time before the commit-complete message can be sent, and the session can resume work.

The log buffer is allocated at instance startup, and it can never be resized subsequently without restarting the instance. It is a circular buffer. As server processes write change vectors to it, the current write address moves around. The log writer process writes the vectors out in batches, and as it does so, the space they occupied becomes available and can be overwritten by more change vectors. It is possible that at times of peak activity, change vectors will be generated faster than the log writer process can write them out. If this happens, all DML activity will cease (for a few milliseconds) while the log writer clears the buffer.

The process of flushing the log buffer to disk is one of the ultimate bottlenecks in the Oracle architecture. You cannot do DML faster than the LGWR can flush the change vectors to the online redo log files.
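To see the configured log buffer size and whether sessions have ever had to wait for space in it, a quick sketch:

SQL> show parameter log_buffer
SQL> select name, value from v$sysstat where name in ('redo buffer allocation retries', 'redo log space requests');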

The Shared Pool

==============

The shared pool is the most complex of the SGA structures. It is divided into dozens of substructures, all of which are managed internally by the Oracle server. This discussion of architecture will only mention four of the shared pool components, and these only briefly:

The library cache

The data dictionary cache

The PL/SQL area

The SQL query and PL/SQL function result caches

Some other structures will be described in later chapters. All the structures within the shared pool are automatically managed. Their size will vary according to the pattern of activity against the instance, within the overall size of the shared pool. The shared pool itself can be resized dynamically, either in response to the DBA's instructions or through being managed automatically.

The Library Cache

================

The library cache is a memory area for storing recently executed code, in its parsed form. Parsing is the conversion of code written by programmers into something executable, and it is a process which Oracle does on demand. By caching parsed code in the shared pool so that it can be reused without reparsing, performance can be greatly improved. Parsing SQL code takes time. Consider a simple SQL statement:

select * from employees where last_name='KING';

Before this statement can be executed, the Oracle server has to work out what it means, and how to execute it. To begin with, what is employees? Is it a table, a synonym, or a view? Does it even exist? Then the “*”—what are the columns that make up the employees table (if it is a table)? Does the user have permission to see the table? Answers to these questions and many others have to be found by querying the data dictionary.

Having worked out what the statement actually means, the server has to decide how best to execute it. Is there an index on the last_name column? If so, would it be quicker to use the index to locate the row, or to scan the whole table? More queries against the data dictionary... It is quite possible for a simple one-line query against a user table to generate dozens of queries against the data dictionary, and for the parsing of a statement to take many times longer than eventually executing it. The purpose of the library cache of the shared pool is to store statements in their parsed form, ready for execution. The first time a statement is issued, it has to be parsed before execution; the second time, it can be executed immediately. In a well-designed application, it is possible that statements may be parsed once and executed millions of times. This saves a huge amount of time.

The Data Dictionary Cache
========================

The data dictionary cache is sometimes referred to as the row cache. Whichever term you prefer, it stores recently used object definitions: descriptions of tables, indexes, users, and other metadata definitions. Keeping such definitions in memory in the SGA, where they are immediately accessible to all sessions, rather than each session having to read them repeatedly from the data dictionary on disk, enhances parsing performance.

The data dictionary cache stores object definitions so that when statements do have to be parsed, they can be parsed fast—without having to query the data dictionary. Consider what happens if these statements are issued consecutively:

select sum(salary) from employees;
select * from employees where last_name='KING';

Both statements must be parsed because they are different statements—but parsing the first SELECT statement will have loaded the definition of the employees table and its columns into the data dictionary cache, so parsing the second statement will be faster than it would otherwise have been, because no data dictionary access will be needed.

The PL/SQL Area
==============

Stored PL/SQL objects are procedures, functions, packaged procedures and functions, object type definitions, and triggers. These are all stored in the data dictionary, as source code and also in their compiled form. When a stored PL/SQL object is invoked by a session, it must be read from the data dictionary. To prevent repeated reading, the objects are then cached in the PL/SQL area of the shared pool.

The first time a PL/SQL object is used, it must be read from the data dictionary tables on disk, but subsequent invocations will be much faster, because the object will already be available in the PL/SQL area of the shared pool.

The SQL Query and PL/SQL Function Result Cache
=============================================

The result cache is a release 11g new feature. In many applications, the same query is executed many times, by either the same session or many different sessions. Creating a result cache lets the Oracle server store the results of such queries in memory. The next time the query is issued, rather than running the query the server can retrieve the cached result.

The result cache mechanism is intelligent enough to track whether the tables against which the query was run have been updated. If this has happened, the query results will be invalidated and the next time the query is issued, it will be rerun. There is therefore no danger of ever receiving an out-of-date cached result.

The PL/SQL result cache uses a similar mechanism. When a PL/SQL function is executed, its return value can be cached ready for the next time the function is executed. If the parameters passed to the function, or the tables that the function queries, are different, the function will be reevaluated, but otherwise, the cached value will be returned.

By default, use of the SQL query and PL/SQL function result cache is disabled, but if enabled programmatically, it can often dramatically improve performance. The cache is within the shared pool; unlike the other memory areas described previously, it does afford the DBA some control, as a maximum size can be specified.
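As a quick illustration, a sketch assuming the default RESULT_CACHE_MODE of MANUAL (hence the hint) and using the employees table to stand in for any frequently queried, rarely updated table; the size is an example:

SQL> alter system set result_cache_max_size = 64M scope=both;
SQL> select /*+ RESULT_CACHE */ department_id, sum(salary) from employees group by department_id;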

The Large Pool
==============

The large pool is an optional area that, if created, will be used automatically by various processes that would otherwise take memory from the shared pool. One major use of the large pool is by shared server processes, described in Chapter 6 in the discussion of the shared (or multithreaded) server. Parallel execution servers will also use the large pool, if there is one. In the absence of a large pool, these processes will use memory on the shared pool. This can cause bad contention for the shared pool: if shared servers or parallel servers are being used, a large pool should always be created. Some I/O processes may also make use of the large pool, such as the processes used by the Recovery Manager when it is backing up to a tape device.

Sizing the large pool is not a matter of performance. If a process needs large pool memory, it will fail with an error if that memory is not available. Allocating more memory than is needed will not make statements run faster. Furthermore, if a large pool exists, it will be used: it is not possible for a statement to start off by using the large pool and then revert to the shared pool if the large pool is too small.

The Java Pool
=============

The Java pool is only required if your application is going to run Java-stored procedures within the database: it is used for the heap space needed to instantiate the Java objects. However, a number of Oracle options are written in Java, so the Java pool is considered standard nowadays. Note that Java code is not cached in the Java pool: it is cached in the shared pool, in the same way that PL/SQL code is cached.

The optimal size of the Java pool is dependent on the Java application, and how many sessions are running it. Each session will require heap space for its objects. If the Java pool is undersized, performance may degrade due to the need to continually reclaim space. In an EJB (Enterprise JavaBean) application, an object such as a stateless session bean may be instantiated and used, and then remain in memory in case it is needed again: such an object can be reused immediately. But if the Oracle server has had to destroy the bean to make room for another, then it will have to be reinstantiated the next time it is needed. If the Java pool is chronically undersized, then the applications may simply fail.

The Streams Pool
================

The Streams pool is used by Oracle Streams. This is an advanced tool that is beyond the scope of the OCP examinations or this book, but for completeness a short description follows.

The mechanism used by Streams is to extract change vectors from the redo log and from these reconstruct the statements that were executed—or statements that would have the same effect. These statements are executed at the remote database. The processes that extract changes from redo and the processes that apply the changes need memory: this memory is the Streams pool. From database release 10g it is possible to create and to resize the Streams pool after instance startup; this creation and sizing can be completely automatic. With earlier releases it had to be defined at startup and was a fixed size.

The parameters that have the greatest effect on the overall size of the SGA are as follows:
• JAVA_POOL_SIZE: Controls the size of the Java pool.
• SHARED_POOL_SIZE: Controls the size of the shared pool (to some degree).
• LARGE_POOL_SIZE: Controls the size of the large pool.
• DB_*_CACHE_SIZE: Eight of these cache_size parameters control the sizes of the
various buffer caches available.
• LOG_BUFFER: Controls the size of the redo buffer (to some degree).
• SGA_TARGET: Used with automatic SGA memory management in Oracle 10g and
above.
• SGA_MAX_SIZE: Used to control the maximum size to which the SGA can be resized
while the database is up and running.
• MEMORY_TARGET: Used with automatic memory management (both PGA and SGA
automatic memory management).
• MEMORY_MAX_TARGET: Used to control the maximum amount of memory Oracle should
strive to use across both the PGA and SGA under automatic memory
management. This is really just a target; the PGA may exceed the optimum size if
the number of users increases beyond some level or sessions allocate large
untunable bits of memory, as demonstrated above.

Multiple Block Sizes
===================

ops$tkyte%ORA11GR2> create tablespace ts_16k
2 datafile '/tmp/ts_16k.dbf'
3 size 5m
4 blocksize 16k;
create tablespace ts_16k
*
ERROR at line 1:
ORA-29339: tablespace block size 16384 does not match configured block sizes
ops$tkyte%ORA11GR2> show parameter 16k

Monitor Scripts

==================

select pool, name, bytes
from v$sgastat
order by pool, name;

show parameter sga_target

select component, granule_size from v$sga_dynamic_components;

select tch, file#, dbablk,
case when obj = 4294967295
then 'rbs/compat segment'
else (select max( '('||object_type||') ' ||
owner || '.' || object_name ) ||
decode( count(*), 1, '', ' maybe!' )
from dba_objects
where data_object_id = X.OBJ )
end what
from (
select tch, file#, dbablk, obj
from x$bh
where state <> 0
order by tch desc
) x
where rownum <= 5;

Control Files in General

What Is a Control File?

Every Oracle database has a control file. A control file is a small binary file that records the physical structure of the database and includes:

• The database name

• Names and locations of associated datafiles and online redo log files

• The timestamp of the database creation

• The current log sequence number

• Checkpoint information

The control file must be available for writing by the Oracle database server whenever the database is open. Without the control file, the database cannot be mounted and recovery is difficult. The control file of an Oracle database is created at the same time as the database. By default, at least one copy of the control file is created during database creation. On some operating systems the default is to create multiple copies. You should create two or more copies of the control file during database creation. You might also need to create control files later, if you lose control files or want to change particular settings in the control files.

 

MULTIPLEXING CONTROL FILES USING INIT.ORA

1. Shut down the database.

2. Copy the control file to the new locations using operating system commands.

3. Change the initialization parameter file to include the new control file names in CONTROL_FILES:

control_files=("d:\oracle\oradata\frame\CONTROL01.CTL", "d:\oracle\oradata\frame\CONTROL02.CTL", "d:\oracle\oradata\frame\CONTROL03.CTL", "e:\backup\controlfile\CONTROL01.CTL")

4. Start up the database.

 

MULTIPLEXING CONTROL FILES USING SPFILES

1. ALTER SYSTEM SET control_files='d:\oracle\oradata\frame\CONTROL01.CTL','d:\oracle\oradata\frame\CONTROL02.CTL','d:\oracle\oradata\frame\CONTROL03.CTL','e:\backup\controlfile\CONTROL01.CTL' SCOPE=SPFILE;

2. Shut down the database.

3. Copy the existing control file to the new location.

4. Start up the instance.

 

CREATING NEW CONTROL FILES

You can create a new control file using the CREATE CONTROLFILE command. You need to create one when:

1. You have lost all copies of the control file.

2. You want to change any of the MAX clauses.

3. You want to change the database name.

You have to know the datafile names and redo log file names to create the control file.

Commands:

1. Start the database in NOMOUNT mode.

SQL> STARTUP NOMOUNT

 

2. SQL> CREATE CONTROLFILE REUSE DATABASE "FRAME" NORESETLOGS ARCHIVELOG

MAXLOGFILES 50

MAXLOGMEMBERS 5

MAXDATAFILES 100

MAXINSTANCES 1

MAXLOGHISTORY 113

LOGFILE

GROUP 1 'D:\ORACLE\ORADATA\FRAME\REDO01.LOG' SIZE 100M,

GROUP 2 'D:\ORACLE\ORADATA\FRAME\REDO02.LOG' SIZE 100M,

GROUP 3 'D:\ORACLE\ORADATA\FRAME\REDO03.LOG' SIZE 100M

DATAFILE

'D:\ORACLE\ORADATA\FRAME\SYSTEM01.DBF',

'D:\ORACLE\ORADATA\FRAME\UNDOTBS01.DBF',

'D:\ORACLE\ORADATA\FRAME\CWMLITE01.DBF',

'D:\ORACLE\ORADATA\FRAME\DRSYS01.DBF',

'D:\ORACLE\ORADATA\FRAME\EXAMPLE01.DBF',

'D:\ORACLE\ORADATA\FRAME\INDX01.DBF',

'D:\ORACLE\ORADATA\FRAME\TOOLS01.DBF',

'D:\ORACLE\ORADATA\FRAME\USERS01.DBF',

'D:\ORACLE\ORADATA\FRAME\FRAME.DBF'

CHARACTER SET WE8MSWIN1252

;

3. Open the database.

SQL> ALTER DATABASE OPEN;

 

Backing Up Control Files

  1. Back up the control file to a binary file (a duplicate of the existing control file) using the following statement:

ALTER DATABASE BACKUP CONTROLFILE TO '/oracle/backup/control.bkp' REUSE;

  2. Produce SQL statements that can later be used to recreate your control file:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
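The resulting script is written to the session's trace file; its exact location can be found with the following query (a sketch, valid from 11g onward):

SELECT value FROM v$diag_info WHERE name = 'Default Trace File';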