AWR report analysis 12c in depth-part 2

This document is a continuation of the step-by-step AWR report analysis series, and it will help you understand an AWR report more quickly.

Please refer to the first part:-

AWR report analysis in 12c depth-part 1

H.SQL Ordered by Elapsed Time in AWR REPORT

For A:-

For B:-

For A:-Total Elapsed Time = CPU Time + Wait Time. If a SQL statement appears in the total elapsed time area of the report, this means its CPU time plus its wait times placed it there. Excessive elapsed time could be due to excessive CPU usage or excessive wait times.
Please note that the first 3 rows have the highest elapsed time, hence we will focus our attention on analyzing those queries.
The first row clearly indicates the PL/SQL block we ran in the first part of this AWR analysis series. Its elapsed time is the cumulative sum of the insert and select statements it executed.
The PL/SQL block executed 10 times, taking 4,868 seconds per execution, which accounts for 58% of the total DB time (83,385 sec). It consumed 33% of the total CPU time (16,250 sec).
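
If you want to pull the same top-SQL-by-elapsed-time list yourself, a query like the one below against DBA_HIST_SQLSTAT is a minimal sketch (the :begin_snap and :end_snap bind variables are placeholders for your own snapshot range):

-- Top 10 SQL by elapsed time between two AWR snapshots (times are stored in microseconds).
SELECT sql_id,
       ROUND(SUM(elapsed_time_delta) / 1e6) AS elapsed_sec,
       ROUND(SUM(cpu_time_delta) / 1e6)     AS cpu_sec,
       SUM(executions_delta)                AS executions
  FROM dba_hist_sqlstat
 WHERE snap_id > :begin_snap
   AND snap_id <= :end_snap
 GROUP BY sql_id
 ORDER BY SUM(elapsed_time_delta) DESC
 FETCH FIRST 10 ROWS ONLY;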

I.SQL Ordered by CPU time in AWR REPORT

For A:-

For B:-

For A:-When a statement appears in the Total CPU Time area, this indicates it used excessive CPU cycles during its processing. Excessive CPU processing time can be caused by sorting, excessive function usage, or long parse times. An indicator that you should be looking at this section for SQL tuning candidates is a high CPU percentage in the service section for the service associated with this SQL.
In this section, the same 3 queries appear as the most CPU-consuming queries, and we need to take a close look at how to reduce their CPU time. The first row is the PL/SQL block, whose CPU time is the cumulative CPU time of the other two expensive statements (the insert and the select).
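
To dig into how a top consumer spends its CPU, you can pull its plan straight from AWR with DBMS_XPLAN; a minimal sketch, where '<sql_id>' is a placeholder for a SQL_ID shown in this section:

-- Display the plan(s) captured in AWR for the given SQL_ID.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('<sql_id>'));

Look for expensive sort, function call, or full scan steps in the plan, as these are the usual CPU burners mentioned above.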

J.SQL ordered by user I/O wait time in AWR REPORT

For A:-

For B:-

For A:-In this section, we need to identify which query is doing the most I/O. The gather_stats job needs to scan the tables, hence it accounts for a large share of the I/O time. Still, we need not worry much about I/O time here, as it is very small (123 sec) compared to the DB time (83,348 sec). Please check “Segments by table scans” to understand what may have contributed to your I/O time.
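
To cross-check the “Segments by table scans” figures outside the report, V$SEGMENT_STATISTICS can be queried directly; a minimal sketch (these are cumulative statistics since instance startup, not snapshot deltas):

-- Segments with the most scans since instance startup.
SELECT owner, object_name, value AS segment_scans
  FROM v$segment_statistics
 WHERE statistic_name = 'segment scans'
 ORDER BY value DESC
 FETCH FIRST 10 ROWS ONLY;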

K.SQL Ordered by Gets in AWR REPORT

For A:-

For B:-

For A:-High total buffer gets mean a SQL statement is reading a lot of data from the db block buffers. Buffer gets (logical I/O, or LIO) are OK, except when they become excessive, and each LIO may have incurred a physical I/O (PIO) to get the block into the buffer in the first place. Reducing buffer gets is very important and should not be ignored.
To get a block from the db block buffers, we have to latch it (i.e., to prevent someone from modifying the data structures we are currently reading from the buffer). Although latches are less persistent than locks, a latch is still a serialization device.
Also note that by lowering buffer gets you will require less CPU usage and less latching. Thus, to reduce excessive buffer gets, optimize the SQL to use appropriate indexes and reduce full table scans.
Please check the “Segments by Logical Reads” section of the AWR report.

For A:-

For B:-
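
To spot the heavy logical-I/O statements yourself, buffer gets per execution is the number to watch; a minimal sketch against V$SQL (cursors currently in the shared pool, not snapshot deltas):

-- Top 10 statements by buffer gets, with gets per execution.
SELECT sql_id,
       buffer_gets,
       executions,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
  FROM v$sql
 ORDER BY buffer_gets DESC
 FETCH FIRST 10 ROWS ONLY;

A statement with a modest total but a huge gets-per-exec figure is usually a better tuning candidate than one with a big total spread across millions of executions.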

L.SQL ordered by Reads in AWR REPORT.

For A:-

For B:-

For A:-High total disk reads mean a SQL statement is reading a lot of data from disk rather than finding it in the db block buffers. Excessive disk reads do cause performance issues. The usual norm is to increase the db buffer cache to allow more buffers and reduce ageing. High total disk reads go together with high physical reads, a low buffer cache hit ratio, and high I/O wait times. Higher disk I/O wait times can have a variety of causes (a busy or over-saturated SAN, slower underlying storage, low capacity on the HBA, and other hardware causes). The Statistics on I/O section in AWR, plus operating system diagnostic tools as simple as iostat, can help in identifying these issues. To reduce excessive disk reads, consider partitioning, use indexes, and look at optimizing the SQL to avoid excessive full table scans.
In our case the first query is doing physical reads of 879,299 blocks.
Please check the “Segments by Physical Reads” section of the AWR report.
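
The same DBA_HIST_SQLSTAT pattern used earlier works here as well, ordered by the disk-read delta instead; a minimal sketch (:begin_snap and :end_snap are again placeholders for your snapshot range):

-- Top 10 SQL by physical reads between two AWR snapshots.
SELECT sql_id,
       SUM(disk_reads_delta) AS disk_reads,
       SUM(executions_delta) AS executions
  FROM dba_hist_sqlstat
 WHERE snap_id > :begin_snap
   AND snap_id <= :end_snap
 GROUP BY sql_id
 ORDER BY SUM(disk_reads_delta) DESC
 FETCH FIRST 10 ROWS ONLY;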

M.SQL ordered by Executions in AWR REPORT.

For A:-

For B:-

High total executions need to be reviewed to see whether they are genuine executions or the result of loops in SQL code. Here we see the top 2 queries have the most executions, in this case due to the loop in our PL/SQL block.
In general, statements with high numbers of executions are usually being properly reused. However, there is always a chance of an unnecessary loop in PL/SQL, Java, or C#. Statements with a high number of executions and a high number of logical and/or physical reads are candidates for review, to be sure they are not being executed multiple times when a single execution would serve.
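
The pattern behind such numbers is usually row-by-row processing; a hypothetical sketch of the difference (t_copy is an illustrative table, not from the original test):

-- Case A style: the INSERT is executed once per row, inflating the execution count.
BEGIN
  FOR r IN (SELECT object_id, object_name FROM all_objects) LOOP
    INSERT INTO t_copy VALUES (r.object_id, r.object_name);
  END LOOP;
  COMMIT;
END;
/

-- Set-based equivalent: one execution of one INSERT does the same work.
INSERT INTO t_copy
SELECT object_id, object_name FROM all_objects;
COMMIT;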

N.SQL ordered by Parse Calls in AWR REPORT.

For A:-

For B:-

Whenever a statement is issued by a user or process, it undergoes a parse, regardless of whether it is already in the SQL pool. As explained under Parsing, the parse can be a hard parse or a soft parse. Excessive parse calls usually go with excessive executions. If the statement uses what are known as unsafe bind variables, then the statement will be reparsed each time. If the parse ratios in the report header are low, look here and in the version count areas.
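
Parse calls can be compared against executions to see whether cursors are being re-parsed instead of reused; a minimal sketch against V$SQL:

-- Statements whose parse calls approach their execution count deserve a look.
SELECT sql_id,
       parse_calls,
       executions,
       ROUND(100 * parse_calls / NULLIF(executions, 0)) AS parse_pct
  FROM v$sql
 ORDER BY parse_calls DESC
 FETCH FIRST 10 ROWS ONLY;

A parse_pct near 100 on a frequently executed statement typically points at literals or unsafe bind variables forcing a re-parse on every call.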

O.SQL ordered by Shareable Memory in AWR REPORT.

For A:-

For B:-

For A:-Shareable memory refers to the shared pool memory area in the SGA; hence this particular section of the AWR report lists the SQL statement cursors that consumed the maximum amount of the shared pool for their execution.
We need to be concerned if some query takes a large chunk of shareable memory.
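
To see the biggest shared pool consumers directly, V$SQLAREA exposes the sharable memory per cursor; a minimal sketch:

-- Top 10 cursors by shared pool memory; a high version_count alongside
-- high sharable_mem often indicates many child cursors for one statement.
SELECT sql_id,
       sharable_mem,
       version_count
  FROM v$sqlarea
 ORDER BY sharable_mem DESC
 FETCH FIRST 10 ROWS ONLY;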

P.Key Instance Activity stats in AWR REPORT.

For A:-

For B:-

DB Block changes:-Denotes frequent changes to database blocks due to heavy DML. Please check “Segments by Physical Writes” for more details in the AWR report.

This is supported by the following statistics:-
1.Execute count:-A very high number of executions.
2.Physical writes:-Huge physical writes. Please check “Segments by Physical Writes” for more details in the AWR report.
3.Redo size:-58 GB of redo was generated in total during the snapshot window. Search for “log switches” in the AWR report to check how many log switches happened while the snapshots were captured. We see 43 per hour, which is very bad; ideally it should be around 3-4 log switches per hour.
4.User commits:-The number of user commits is very high in case A, but low in case B (as the commit was done outside the loop).

For A:-

For B:-
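
The log switch rate mentioned under “Redo size” can be verified from V$LOG_HISTORY; a minimal sketch that counts switches per hour:

-- Log switches per hour; compare against the 3-4 per hour rule of thumb above.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;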

Q.Tablespace IO stat in AWR REPORT

Functions:-
LGWR generates 113 GB of data in total (redo data); that is 23 MB of data per second, and 1,047 IOPS for redo.
DBWR reads 28 MB in total and writes 21 GB of data; IOPS is 1 for reads and 301 for writes.
Direct reads account for 7 GB of data (the full table scan of the T table, which was a direct path read).
In total:-
8.7 GB of data read.
141 GB of data written.
1,358 IOPS, derived from the requests per second.

For A:-

For B:-

It is evident that DATA and UNDOTBS1 have undergone the most reads/writes during the snapshot period.
If the I/O activity is not properly distributed among your files, you may encounter performance bottlenecks during periods of high activity. As a rule of thumb, you don’t want more than 100 I/Os per second per 10,000 RPM disk (even with a RAID array). If the Av Rd(ms) column (average read time in milliseconds) is higher than 14 ms (given a fair amount of reading is being done), you may want to investigate, since most disks should provide at least this much performance. If rows show 1,000 ms or more in the Av Rd(ms) column or in the Av Buf Wt(ms) column (average buffer wait time in milliseconds), you probably have some type of I/O problem, and if a value shows as ###### (meaning it couldn’t fit in the size of the field), then you have a serious I/O problem of some kind (this can also be a formatting problem, but anything greater than 1,000 is a problem when a fair number of reads are being done).
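
The per-file figures behind this section can also be pulled from V$FILESTAT; a minimal sketch that aggregates to tablespace level (READTIM is in centiseconds, hence the factor of 10 to convert to milliseconds):

-- Reads, writes, and average read time in ms per tablespace since instance startup.
SELECT df.tablespace_name,
       SUM(fs.phyrds)                                              AS phys_reads,
       SUM(fs.phywrts)                                             AS phys_writes,
       ROUND(10 * SUM(fs.readtim) / NULLIF(SUM(fs.phyrds), 0), 1)  AS avg_rd_ms
  FROM v$filestat     fs
  JOIN dba_data_files df ON df.file_id = fs.file#
 GROUP BY df.tablespace_name
 ORDER BY phys_reads + phys_writes DESC;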