Redo log in general

Managing the Online Redo Log

 

Creating Online Redo Log Groups and Members

Creating Online Redo Log Groups

The following statement adds a new group of redo logs to the database:

SQL> ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500M;

You can also specify the number that identifies the group using the GROUP option:

SQL> ALTER DATABASE ADD LOGFILE GROUP 3 '/ORACLE/ORADATA/FRAME/REDO03.LOG' SIZE 500M;

In ASM

SQL> ALTER DATABASE ADD LOGFILE ('+FRA', '+REDO') SIZE 500M;

Creating Online Redo Log Members

SQL> ALTER DATABASE ADD LOGFILE MEMBER 'e:/backup/redolog/redo02.log' TO GROUP 2;

SQL> ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
     TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');

In ASM

SQL> ALTER DATABASE ADD LOGFILE MEMBER '+FRA' TO GROUP 2;

To Rename Online Redo Log Members

  1. Shut down the database.

SHUTDOWN

  2. Copy the online redo log files to the new location. Operating system files, such as online redo log members, must be copied using the appropriate operating system commands. See your operating system-specific documentation for more information about copying files.

mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo

mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo

  3. Start up the database and mount it, but do not open it.

CONNECT / as SYSDBA

STARTUP MOUNT

  4. Rename the online redo log members. Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database’s online redo log files.

ALTER DATABASE RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';

  5. Open the database for normal operation.

The online redo log alterations take effect when the database is opened.
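To finish, open the database and confirm the new member paths in V$LOGFILE (the paths returned should match the example locations used above):

SQL> ALTER DATABASE OPEN;

SQL> SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE ORDER BY GROUP#;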

 

Dropping Log Groups

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

GROUP# ARC STATUS
------ --- ----------------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE

Drop an online redo log group with the ALTER DATABASE statement and the DROP LOGFILE clause. A group can be dropped only when it is INACTIVE (and, if the database is in ARCHIVELOG mode, already archived), and an instance must always retain at least two log groups. The following statement drops redo log group number 3:

ALTER DATABASE DROP LOGFILE GROUP 3;

 

Dropping Online Redo Log Members

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';


 

Forcing Log Switches

ALTER SYSTEM SWITCH LOGFILE;

ALTER SYSTEM CHECKPOINT;
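ALTER SYSTEM SWITCH LOGFILE returns as soon as the log switch has started. If the previous log must be fully archived before you continue (before a backup step, for instance), ALTER SYSTEM ARCHIVE LOG CURRENT performs the switch and waits for the archiving to finish:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;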

 

Clearing an Online Redo Log File

ALTER DATABASE CLEAR LOGFILE GROUP 3;

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Use the UNARCHIVED form only when the group has not yet been archived, and take a full database backup afterward: the archived redo stream is broken at that point, so earlier backups can no longer be rolled forward through it.

 

Viewing Online Redo Log Information

V$LOG                                         Displays the redo log file information from the control file

V$LOGFILE                                     Identifies redo log groups and members and member status

V$LOG_HISTORY                                 Contains log history information
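For example, joining V$LOG and V$LOGFILE gives a one-screen picture of every group and member:

SQL> SELECT l.GROUP#, l.SEQUENCE#, l.BYTES/1024/1024 AS MB, l.STATUS, f.MEMBER
     FROM V$LOG l JOIN V$LOGFILE f ON f.GROUP# = l.GROUP#
     ORDER BY l.GROUP#;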

 

Why Can’t I Allocate a New Log?

 

Periodically, a message like the following will be found in alert.log on your server:

 

 

Thread 1 cannot allocate new log, sequence 1466

Checkpoint not complete

Current log# 3 seq# 1465 mem# 0: /…/…redo03.log

 

It might say Archival required instead of Checkpoint not complete, but the effect is pretty much the same. This is really something the DBA should be looking out for. This message will be written to alert.log on the server whenever the database attempts to reuse an online redo log file and finds that it can’t. This happens when DBWR has not yet finished checkpointing the data protected by the redo log or ARCH has not finished copying the redo log file to the archive destination. At this point, the database effectively halts as far as the end user is concerned. It stops cold. DBWR or ARCH will be given priority to flush the blocks to disk. Upon completion of the checkpoint or archival, everything goes back to normal. The reason the database suspends user activity is that there is simply no place to record the changes the users are making. Oracle is attempting to reuse an online redo log file, but because either the file would be needed to recover the database in the event of a failure (Checkpoint not complete), or the archiver has not yet finished copying it (Archival required), Oracle must wait (and the end users will wait) until the redo log file can safely be reused.

 

If you see that your sessions spend a lot of time waiting on a “log file switch,” “log buffer space,” or “log file switch checkpoint or archival incomplete,” you are most likely hitting this. You will notice it during prolonged periods of database modifications if your log files are sized incorrectly, or because DBWR and ARCH need to be tuned by the DBA or system administrator. I frequently see this issue with the “starter” database that has not been customized. The “starter” database typically sizes the redo logs far too small for any significant amount of work (including the initial database build of the data dictionary itself). As soon as you start loading up the database, you will notice that the first 1,000 rows go fast, and then things start going in spurts: 1,000 go fast, then hang, then go fast, then hang, and so on. These are the indications you are hitting this condition.
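A quick way to check for these waits from SQL is to query the system-wide wait events (event names as they appear in recent Oracle versions; TIME_WAITED is in centiseconds):

SQL> SELECT EVENT, TOTAL_WAITS, TIME_WAITED
     FROM V$SYSTEM_EVENT
     WHERE EVENT LIKE 'log file switch%' OR EVENT = 'log buffer space'
     ORDER BY TIME_WAITED DESC;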

 

There are several things you can do to solve this issue:

 

Make DBWR faster. Have your DBA tune DBWR by enabling ASYNC I/O, using DBWR I/O slaves, or using multiple DBWR processes. Look at the I/O on the system and see if one disk or a set of disks is “hot,” such that you therefore need to spread the data out. The same general advice applies for ARCH as well. The pros of this are that you get “something for nothing” here: increased performance without really changing any logic/structures/code. There really are no downsides to this approach.

 

Add more redo log files. This will postpone the Checkpoint not complete in some cases and, after a while, it will postpone the Checkpoint not complete so long that it perhaps doesn’t happen (because you gave DBWR enough breathing room to checkpoint). The same applies to the Archival required message. The benefit of this approach is the removal of the “pauses” in your system. The downside is it consumes more disk, but the benefit far outweighs any downside here.

 

Re-create the log files with a larger size. This will extend the amount of time between the time you fill the online redo log and the time you need to reuse it. The same applies to the Archival required message, if the redo log file usage is “bursty.” If you have a period of massive log generation (nightly loads, batch processes) followed by periods of relative calm, then having larger online redo logs can buy enough time for ARCH to catch up during the calm periods. The pros and cons are identical to the preceding approach of adding more files. Additionally, it may postpone a checkpoint from happening until later, since checkpoints happen at each log switch (at least), and the log switches will now be further apart.
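Because a log group cannot be resized in place, “re-create” means adding new, larger groups and then dropping the old, smaller ones. A minimal sketch, assuming the old groups are 1 and 2; the paths and the 1024M size are illustrative:

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 '/oracle/dbs/redo04.log' SIZE 1024M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 '/oracle/dbs/redo05.log' SIZE 1024M;

Then switch and checkpoint until V$LOG shows the old groups as INACTIVE, and drop them:

SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
SQL> ALTER DATABASE DROP LOGFILE GROUP 2;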

 

Make checkpointing happen more frequently and more continuously. Use a smaller block buffer cache (not entirely desirable) or various parameter settings such as FAST_START_MTTR_TARGET, LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT. This will force DBWR to flush dirty blocks more frequently. The benefit to this approach is that recovery time from a failure is reduced. There will always be less work in the online redo logs to be applied. The downside is that blocks may be written to disk more frequently if they are modified often. The buffer cache will not be as effective as it could be, and it can defeat the block cleanout mechanism discussed in the next section.
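For example, FAST_START_MTTR_TARGET takes a desired crash-recovery time in seconds and lets Oracle drive incremental checkpointing toward it (the 60-second target here is only illustrative), and V$INSTANCE_RECOVERY shows how the instance is tracking against it:

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;

SQL> SELECT TARGET_MTTR, ESTIMATED_MTTR FROM V$INSTANCE_RECOVERY;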

 

 

Block Cleanout

 

In this section, we’ll discuss block cleanouts, or the removal of “locking”-related information on the database blocks we’ve modified. This concept is important to understand when we talk about the infamous ORA-01555: snapshot too old error in a subsequent section.

 

If you recall from Chapter 6, “Locking and Latching,” we talked about data locks and how they are managed. I described how they are actually attributes of the data, stored on the block header. A side effect of this is that the next time that block is accessed, we may have to clean it out; in other words, remove the transaction information. This action generates redo and causes the block to become dirty if it wasn’t already, meaning that a simple SELECT may generate redo and may cause lots of blocks to be written to disk with the next checkpoint. Under most normal circumstances, however, this will not happen. If you have mostly small- to medium-sized transactions (OLTP), or you have a data warehouse that performs direct-path loads or uses DBMS_STATS to analyze tables after load operations, you’ll find the blocks are generally cleaned for you. If you recall from the earlier section titled “What Does a COMMIT Do?” one of the steps of COMMIT-time processing is to revisit some blocks if they are still in the SGA and if they are accessible (no one else is modifying them), and then clean them out. This activity is known as a commit cleanout and is the activity that cleans out the transaction information on our modified block. Optimally, our COMMIT can clean out the blocks so that a subsequent SELECT (read) will not have to clean them out. Only an UPDATE of this block would truly clean out our residual transaction information, and since the UPDATE is already generating redo, the cleanout is not noticeable.

 

We can force a cleanout to not happen, and therefore observe its side effects, by understanding how the commit cleanout works. In a commit list associated with our transaction, Oracle will record lists of blocks we have modified. Each of these lists is 20 blocks long, and Oracle will allocate as many of these lists as it needs, up to a point. If the sum of the blocks we modify exceeds 10 percent of the block buffer cache size, Oracle will stop allocating new lists. For example, if our buffer cache is set to cache 3,000 blocks, Oracle will maintain a list of up to 300 blocks (10 percent of 3,000). Upon COMMIT, Oracle will process each of these lists of 20 block pointers, and if the block is still available, it will perform a fast cleanout. So, as long as the number of blocks we modify does not exceed 10 percent of the number of blocks in the cache and our blocks are still in the cache and available to us, Oracle will clean them out upon COMMIT. Otherwise, it just skips them (i.e., does not clean them out).
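You can watch the skipped-cleanout case with a rough sketch like the following, assuming a scratch schema where you may create a table T and a buffer cache small enough that the UPDATE touches more blocks than the 10 percent limit tracks; the second reading of the session’s 'redo size' statistic grows after a plain SELECT, which is the cleanout at work:

SQL> CREATE TABLE t AS SELECT * FROM all_objects;
SQL> UPDATE t SET object_name = LOWER(object_name);
SQL> COMMIT;

SQL> SELECT s.value AS redo_size
     FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name = 'redo size';

SQL> SELECT COUNT(*) FROM t;

SQL> SELECT s.value AS redo_size
     FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name = 'redo size';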

 

Log Contention

 

This, like the cannot allocate new log message, is something the DBA must fix, typically in conjunction with the system administrator. However, it is something a developer might detect as well if the DBA isn’t watching closely enough.

 

If you are faced with log contention, what you might observe is a large wait time on the “log file sync” event and long write times evidenced in the “log file parallel write” event in a Statspack report. If you see this, you may be experiencing contention on the redo logs; they are not being written fast enough. This can happen for many reasons. One application reason (one the DBA can’t fix, but the developer must) is committing too frequently: committing inside of a loop doing INSERTs, for example (a sketch of this pattern appears after the list below). As demonstrated in the “What Does a COMMIT Do?” section, committing too frequently, aside from being a bad programming practice, is a surefire way to introduce lots of log file sync waits. Assuming all of your transactions are correctly sized (you are not committing more frequently than your business rules dictate), the most common causes for log file waits that I’ve seen are as follows:

 

Putting redo on a slow device: The disks are just performing poorly. It is time to buy faster disks.

 

Putting redo on the same device as other files that are accessed frequently: Redo is designed to be written with sequential writes and to be on dedicated devices. If other components of your system (even other Oracle components) are attempting to read and write to this device at the same time as LGWR, you will experience some degree of contention. Here, you want to ensure LGWR has exclusive access to these devices if at all possible.

 

Mounting the log devices in a buffered manner: Here, you are using a “cooked” file system (not RAW disks). The operating system is buffering the data, and the database is also buffering the data (redo log buffer). Double-buffering slows things down. If possible, mount the devices in a “direct” fashion. How to do this varies by operating system and device, but it is usually possible (an Oracle-side example follows this list).
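On the Oracle side, direct and asynchronous I/O on a cooked file system can often be requested with the FILESYSTEMIO_OPTIONS parameter; SETALL asks for both. Platform support varies, and the parameter is not dynamic, so it takes effect at the next instance restart:

SQL> ALTER SYSTEM SET FILESYSTEMIO_OPTIONS = SETALL SCOPE = SPFILE;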
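As for committing inside a loop, here is a minimal PL/SQL sketch of the pattern that floods LGWR with log file sync waits, and its fix; the table T with a numeric column X and the row count are purely illustrative:

-- The pattern to avoid: a log file sync wait on every iteration
SQL> BEGIN
       FOR i IN 1 .. 100000 LOOP
         INSERT INTO t (x) VALUES (i);
         COMMIT;
       END LOOP;
     END;
     /

-- Better: one COMMIT when the business transaction is complete
SQL> BEGIN
       FOR i IN 1 .. 100000 LOOP
         INSERT INTO t (x) VALUES (i);
       END LOOP;
       COMMIT;
     END;
     /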

 
