All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored and the transactions recovered to a point beyond the time of the last full, offline backup, the logs are required to roll the data forward to that point in time. There are two types of DB2 logging, and each provides a different level of recovery capability: circular logging and archive logging.
Circular logging is the default method when a new database is created.
In the database configuration, the LOGRETAIN and USEREXIT parameters are both set to OFF by default.
When circular logging is used, only full, offline backups of the database are allowed.
The database must be offline (no connections to the database) when a full backup is taken.
This logging method does not allow the database to roll forward through transactions performed after the last full backup operation.
All changes occurring since the last backup operation are lost if the database is restored.
This is also known as version recovery; that is, you can recover the database only as it existed at the time the backup was taken.
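The rotation behavior can be illustrated with a toy shell sketch. This is conceptual only — DB2 manages the log files itself; the modulo arithmetic simply shows how, once the last primary log fills, writing wraps back to the first:

```shell
# Conceptual sketch: circular logging cycles through the primary log files,
# overwriting the oldest one once all of them have been used.
LOGPRIMARY=3
for seq in 0 1 2 3 4 5; do
  echo "write cycle $seq reuses primary log file $(( seq % LOGPRIMARY ))"
done
```

Cycles 3, 4, and 5 land back on log files 0, 1, and 2 — which is exactly why transactions performed after the last full backup cannot be replayed under this method.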
db2 get db cfg for sample | grep -i "log retain"

Log retain for recovery status = NO
Log retain for recovery enabled (LOGRETAIN) = OFF

db2 get db cfg for sample | grep -i "user exit"

User exit for logging status = NO
User exit for logging enabled (USEREXIT) = OFF

You can combine the above commands if egrep is available on Windows:

db2 get db cfg for sample | egrep -i "log retain|user exit"
Archive logging is used to enable rollforward recovery of a database to ensure that transactions performed after the backup images were taken are not lost in the event that the database needs to be restored.
Enabling either the LOGRETAIN or the USEREXIT database configuration parameter results in the archive logging method.
Online backups of the database are allowed.
Table space backups are allowed.
Rollforward recovery can use both archived logs and active logs to rebuild a database, either to the end of the logs or to a specific point in time.
To archive logs, you can choose to have DB2 leave the log files in the active log path, then manually archive them (LOGRETAIN=ON), or you can edit and build one of the sample user exit programs to automate the archiving (USEREXIT=ON).
Archived logs are logs that were active but are no longer required for crash recovery.
The archived log files can be used to recover changes made after the backup was taken.
db2 update db cfg for sample using logretain ON userexit ON
db2 get db cfg for sample | egrep -i "log retain|user exit"

Log retain for recovery status = NO
Log retain for recovery enabled (LOGRETAIN) = RECOVERY
User exit for logging status = NO
User exit for logging enabled (USEREXIT) = ON
NOTE
You must take a full database backup right after enabling LOGRETAIN and USEREXIT to avoid the following error message on the subsequent connection to the database.
SQL1116N A connection to or activation of database "SAMPLE" cannot be made
because of BACKUP PENDING.  SQLSTATE=57019
After the database has been successfully backed up, the "Log retain for recovery status" will change from NO to RECOVERY.
db2 get db cfg for sample | egrep -i "log retain|user exit"

Log retain for recovery status = RECOVERY
Log retain for recovery enabled (LOGRETAIN) = RECOVERY
User exit for logging status = YES
User exit for logging enabled (USEREXIT) = ON
Without infinite active log files enabled, active logs are:
Used during crash recovery to prevent a system failure (such as a system power or application error) from leaving a database in an inconsistent state.
Located in the database log path directory.
NOTE
We will discuss infinite active logs later.
The RESTART DATABASE command uses the active logs to apply or undo the necessary transactions to change the database to a consistent and usable state. During crash recovery, all uncommitted changes recorded in the logs are rolled back, and changes that were committed but not yet written from the buffer pool to table space containers are redone. These actions ensure the integrity of the database and guarantee that no transactions will be lost.
Online archival log files contain information related to completed transactions no longer required for crash recovery. They reside in the same subdirectory path as the active log files.
Offline archival log files have been moved from the active log file directory. The method of moving these files could be either manual or a process that is invoked through a user exit program or a shell script. Archived log files can be placed offline simply by moving them to another directory, storing them on tape or elsewhere.
Taking online backups is supported only if the database is configured for archive logging. During an online backup operation, all activities against the database are logged. When an online backup image is restored, the logs must be rolled forward at least to the point in time at which the backup operation completed to guarantee the integrity of the data. For this to happen, the logs must be available when the database is restored. After an online backup is complete, DB2 forces the currently active log to be closed, and as a result, it will be archived (if USEREXIT is set to YES or ON). This ensures that your online backup has a complete set of archived logs available for recovery.
db2 backup database sample online to c:\dbbackup

Backup successful. The timestamp for this backup image is : 20020714115137
The USEREXIT database configuration parameter allows you to change where archived logs are stored. Changing the NEWLOGPATH parameter affects only where active logs are stored. To determine which log files in the database log path directory are archived logs, check the value of the "First active log file" in the database configuration. This field indicates the lowest numbered log file that is active. Those logs with sequence numbers less than the value displayed in the "First active log file" are archived logs and can be moved. You can check the value of this parameter by using the Control Center or by using the following command:
db2 get db cfg for sample | grep -i "first active log"

First active log file = S0000000.LOG
Percent of max active log space by transaction (MAX_LOG) = 0
Num. of active log files for 1 active UOW (NUM_LOG_SPAN) = 0
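As a sketch of that check: because the log names are zero-padded, a plain string comparison orders them, so any name that sorts below the first active log belongs to an archived log. The directory and the first-active-log value below are made up for illustration — read the real value from the database configuration:

```shell
# Sketch: list log files below the "First active log file"; those are
# archived logs and can safely be moved out of the log path.
logdir=/tmp/gr_logs
mkdir -p "$logdir"
touch "$logdir"/S0000001.LOG "$logdir"/S0000002.LOG \
      "$logdir"/S0000003.LOG "$logdir"/S0000004.LOG
FIRST_ACTIVE=S0000003.LOG     # hypothetical value from the config output

for f in "$logdir"/S*.LOG; do
  name=${f##*/}
  # Zero-padded names sort lexically, so string comparison orders them.
  if [[ "$name" < "$FIRST_ACTIVE" ]]; then
    echo "archived (safe to move): $name"
  fi
done
```

Here only S0000001.LOG and S0000002.LOG are reported; S0000003.LOG and above are still active and must stay put.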
The LOGPRIMARY parameter specifies the number of primary logs of size LOGFILSIZ that will be created. As you select the number of logs to configure, you must consider the size of each log and whether your application can handle a log-full condition.
A primary log requires the same amount of disk space, whether it is full or empty.
You can waste disk space if you configure more logs than you need.
You can encounter a log-full condition if you configure too few logs.
By default, the total active log space is limited to 256 GB (it is unlimited if LOGSECOND is set to -1).
db2 get db cfg for sample | grep -i logprimary

Number of primary log files (LOGPRIMARY) = 3
The LOGSECOND parameter specifies the maximum number of secondary log files that can be created and used for recovery, if needed.
If the primary log files become full, secondary log files are allocated, one at a time as needed, up to the maximum number specified by this parameter.
Once allocated, they are not deleted until the database is deactivated.
If this parameter is set to -1, the database is configured with infinite active log space.
There is no limit on the size or number of in-flight transactions running on the database when infinite active logs is set.
db2 get db cfg for sample | grep -i logsecond

Number of secondary log files (LOGSECOND) = 2
NOTE
The USEREXIT database configuration parameter must be set to YES or ON in order to set the LOGSECOND parameter to -1.
If this parameter is set to -1, crash recovery time may be increased because DB2 may need to retrieve archived log files.
The LOGFILSIZ parameter specifies the size of each configured log file, in 4-KB pages. The maximum log file size is 262,144 pages.
db2 get db cfg for sample | grep -i logfilsiz

Log file size (4KB) (LOGFILSIZ) = 250
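A quick way to see what these three parameters imply: the total active log space is (LOGPRIMARY + LOGSECOND) × LOGFILSIZ × 4 KB. The sketch below plugs in the default values shown in this section:

```shell
# Sketch: total active log space implied by the logging parameters.
# The values are the defaults shown in the configuration output above.
LOGPRIMARY=3
LOGSECOND=2
LOGFILSIZ=250        # in 4-KB pages
PAGE_BYTES=4096

total=$(( (LOGPRIMARY + LOGSECOND) * LOGFILSIZ * PAGE_BYTES ))
echo "total active log space: $total bytes ($(( total / 1024 )) KB)"
```

With the defaults this comes to 5,120,000 bytes (5000 KB) — small enough that most production workloads will need larger values.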
NOTE
Prior to Version 7, there was a 4-GB logical limit on the total active log space that you can configure. In Version 7, the upper limit for the active log space is 32 GB. In Version 8, there is no limit on the active log space.
The size of the log file has an impact on performance because there is a cost for switching from one log to another. So, from a pure performance perspective, the larger the log file size, the better. This parameter also indicates the log file size for archiving. In this case, a larger log file size is not necessarily better, because a larger log file size may increase the chance of failure or cause a delay in log shipping scenarios. When considering the active log space, it may be better to have a larger number of smaller log files. For example, if there are two very large log files and a transaction starts close to the end of one log file, only half of the log space remains available.
Assuming that you have an application that keeps the database open, or that the DBA has activated the database to minimize processing time when opening it, the log file size should be determined by the amount of time it takes to make offline archived log copies. When LOGRETAIN is set to YES or ON, minimizing log file loss is also an important consideration when setting the log size.
Log archiving will copy an entire log file. If you use a single large log file, you increase the time between archiving, and if the disk(s) containing the log fails, some transaction information will probably be lost. Decreasing the log file size increases the frequency of archiving but can reduce the amount of information loss in case of a media failure because the logs will be archived more frequently and will contain less work.
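As a rough sketch of this trade-off, the time between archive requests is the log file size divided by the peak log-fill rate. The 200 KB/s rate below is a made-up figure; measure your own peak workload:

```shell
# Rough sketch: how often a log file fills (and is queued for archiving)
# at a given peak logging rate. The rate is a hypothetical assumption.
LOGFILSIZ=250                    # 4-KB pages
log_kb=$(( LOGFILSIZ * 4 ))      # 1000 KB per log file
fill_rate_kb_s=200               # hypothetical peak log-fill rate
echo "seconds between archive requests: $(( log_kb / fill_rate_kb_s ))"
```

At that rate a 1000-KB log fills every 5 seconds — and up to 5 seconds of committed work sits on the unarchived log, which is the exposure window a media failure would create.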
The LOGBUFSZ parameter allows you to specify the amount of memory to use as a buffer for log records before writing these records to disk. The log buffer is written to disk when the first of the following events occurs:
A transaction commits.
The log buffer becomes full.
A one-second interval elapses.
db2 get db cfg for sample | grep -i logbufsz

Log buffer size (4KB) (LOGBUFSZ) = 8
Increasing the log buffer size results in more efficient I/O activity associated with logging, because the log records are written to disk less frequently, and more records are written each time.
The MINCOMMIT parameter allows you to delay the writing of log records from the log buffer to disk until a minimum number of commits has been performed, the buffer is filled, or a one-second interval elapses. This can:
Reduce the database manager overhead associated with writing log records.
Improve performance when you have multiple applications running against a database and many commits are requested by the applications within a very short period of time.
NOTE
This can slow down very short transactions, because each one will likely take up to 1 second to return.
The grouping of commits occurs only if the value of this parameter is greater than 1 and if the number of applications connected to the database is greater than the value of this parameter.
When commit grouping is in effect, application commit requests are held until either one second has elapsed or the number of commit requests equals the value of this parameter.
db2 get db cfg for sample | grep -i mincommit

Group commit count (MINCOMMIT) = 1
The database logs are initially created in SQLOGDIR, which is a subdirectory under the database directory.
You can change the location where active logs and future online archive logs are stored by changing the value of the NEWLOGPATH configuration parameter to point to a different directory or to a device.
If the database is configured for rollforward recovery, active logs stored in the current log path directory are not moved to the new location; the new log path takes effect only after the database is stopped and reactivated.
db2 get db cfg for sample | grep -i path

Changed path to log files (NEWLOGPATH) =
Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =

db2 update db cfg for sample using newlogpath c:\dblog\SAMPLE
db2 get db cfg for sample | grep -i path

Changed path to log files (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) =
Because you can change the log path location, the logs needed for rollforward recovery may exist in different directories or on different devices. The OVERFLOWLOGPATH can be used to specify the location of the logs during a rollforward operation to allow you to access logs in multiple locations.
The OVERFLOWLOGPATH parameter can be used for several functions, depending on your logging requirements. You can specify a location where DB2 can find log files that are needed for a rollforward operation. It is similar to the OVERFLOWLOGPATH option of the ROLLFORWARD command.
db2 update db cfg for sample using overflowlogpath e:\dblogo\SAMPLE
db2 get db cfg for sample | grep -i path

Changed path to log files (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
Overflow log path (OVERFLOWLOGPATH) = e:\dblogo\SAMPLE\NODE0000\
Mirror log path (MIRRORLOGPATH) = d:\dblogm\SAMPLE\NODE0000\
However, instead of specifying the OVERFLOWLOGPATH option for every ROLLFORWARD command issued, you can set this configuration parameter once. If both are used, the OVERFLOWLOGPATH option on the rollforward command will overwrite the OVERFLOWLOGPATH configuration parameter for that rollforward operation.
If LOGSECOND is set to -1, you can specify a directory for DB2 to store active log files retrieved from the archive. Active log files must be retrieved for rollback operations if they are no longer in the active log path.
If OVERFLOWLOGPATH is not specified, DB2 will retrieve the log files into the active log path. By specifying this parameter, you can provide additional resource for DB2 to store the retrieved log files. The benefit includes spreading the I/O cost to different disks and allowing more log files to be stored in the active log path.
If you have configured a raw device for the active log path, OVERFLOWLOGPATH must be configured if you want to enable infinite active log space.
NOTE
In a partitioned database environment, the database partition number is automatically appended to the path. This is done to maintain the uniqueness of the path in a multi-partitioned database configuration.
At the database level, mirroring log files helps protect a database from:
Accidental deletion of an active log
File corruption caused by hardware failure
Since the logs for a database can be a single point of failure, you should consider using the DB2 database configuration parameter, MIRRORLOGPATH, to specify a secondary path for the database to manage copies of the active log file; or you can use the operating system or hardware to mirror the volumes on which the logs are stored.
The MIRRORLOGPATH configuration parameter allows the database to write an identical second copy of log files to a different path. It is recommended that you place the secondary log path on a physically separate disk (preferably one that is also on a different disk controller). That way, the disk controller cannot be a single point of failure.
When MIRRORLOGPATH is first enabled, it will not actually be used until the database is stopped and reactivated. This is similar to the NEWLOGPATH configuration parameter. If there is an error writing to either the active log path or the mirror log path, the database will mark the failing path as bad, write a message to the administration notification log, and write subsequent log records only to the remaining good log path.
DB2 will not attempt to use the bad path again until the current log file is completed. When DB2 needs to open the next log file, it will again try writing to both locations, and if there are no errors, it will continue to use both log paths. If not, DB2 will not attempt to use the path again until the next log file is accessed for the first time. There is no attempt to synchronize the log paths, but DB2 keeps information about access errors that occur, so that the correct paths are used when log files are archived. If a failure occurs while writing to the remaining good path, the database will be shut down.
Active logs that are currently stored in the mirrored log path directory are not moved to the new location if the database is configured for rollforward recovery.
Since you can change the log path location, the logs needed for rollforward recovery may exist in different directories. You can change the value of this configuration parameter during a rollforward operation to allow you to access logs in multiple locations.
You must keep track of the location of the logs. Changes are not applied until the database is in a consistent state. The configuration parameter "Database is consistent" displays the status of the database.
To turn this configuration parameter off, set its value to DEFAULT.
db2 update db cfg for sample using mirrorlogpath d:\dblogm\SAMPLE
db2 get db cfg for sample | grep -i path

Changed path to log files (NEWLOGPATH) = c:\dblog\SAMPLE\NODE0000\
Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
Overflow log path (OVERFLOWLOGPATH) =
Mirror log path (MIRRORLOGPATH) = d:\dblogm\SAMPLE\NODE0000\
NOTE
This configuration parameter is not supported if the primary log path is a raw device.
The value specified for this parameter cannot be a raw device.
If your application creates and populates work tables from master tables and you are not concerned about the recoverability of these work tables because they can be easily recreated from the master tables, you may want to create the work tables specifying the NOT LOGGED INITIALLY parameter on the CREATE TABLE statement. You can achieve the same result for existing tables by using the ALTER TABLE statement with the NOT LOGGED INITIALLY parameter (if and only if they were created with the NOT LOGGED INITIALLY option).
Advantages of using the NOT LOGGED INITIALLY:
Any changes made to the table (including insert, delete, update, or create index operations) are not logged in the unit of work that creates the table, or in the unit of work in which the table is altered with the NOT LOGGED INITIALLY option.
Increases the performance of your application.
db2 connect to sample
db2 "create table tab_1 (col1 int, col2 char(20)) not logged initially"
db2 "alter table tab_1 activate not logged initially"
db2 "insert into tab_1 values (1,'MICHAEL')"
db2 "insert into tab_1 values (2,'TIFFANY')"
db2 "insert into tab_1 values (3,'JONATHAN')"
db2 "insert into tab_1 values (4,'JENNIFER')"
db2 "insert into tab_1 values (5,'DWAINE')"
db2 "insert into tab_1 values (6,'TOM')"
db2 COMMIT
db2 "delete from tab_1 where col1=6"
db2 ROLLBACK
NOTE
You can create more than one table with the NOT LOGGED INITIALLY parameter in the same unit of work.
Changes to the catalog tables and other user tables are still logged.
Because changes to the table are not logged, you should consider the following when deciding whether to use the NOT LOGGED INITIALLY table attribute:
All changes to the table will be flushed to disk at commit time. This means that the commit may take longer.
If the NOT LOGGED INITIALLY attribute is activated and an activity occurs that is not logged, the entire unit of work will be rolled back if a statement fails or a ROLLBACK TO SAVEPOINT is executed.
You cannot recover these tables when rolling forward. If the rollforward operation encounters a table that was created or altered with the NOT LOGGED INITIALLY option, the table is marked as unavailable. After the database is recovered, any attempt to access the table returns SQL1477N.
When a table is created, row locks are held on the catalog tables until a COMMIT is done. To take advantage of the no logging behavior, you must populate the table in the same unit of work in which it is created (or altered). This has implications for concurrency.
Declared global temporary tables (DGTTs) can now be logged, and logging is the default. If you plan to use DGTTs as work tables, note the following:
The create DGTT statement defines a temporary table for the current session.
The DGTT description does not appear in the system catalog.
The DGTT is not persistent and cannot be shared with other sessions.
Each session that defines a DGTT of the same name has its own unique description of the temporary table.
When the session terminates, the rows of the table are deleted, and the description of the temporary table is dropped.
Errors in operation during a unit of work using a DGTT do not cause the unit of work to be completely rolled back. However, an error in operation in a statement changing the contents of a DGTT will delete all the rows in that table.
A rollback of the unit of work (or a savepoint) will delete all rows in declared temporary tables that were modified in that unit of work (or savepoint).
Changes to the DGTT are not logged if you created the DGTT with the NOT LOGGED option.
db2 connect to sample
db2 "create user temporary tablespace global_temp_ts managed by system
    using ('c:\data\global_temp_ts') extentsize 16 prefetchsize 64"

Application:
...
declare global temporary table gtab_1 (col1 int, col2 char(20))
    with replace in global_temp_ts not logged;
insert into gtab_1 values (1,'SAN FRANCISCO');
update gtab_1 set col2='TIBURON' where col1=1;
...
Consider the following when managing database logs:
The numbering scheme for archived logs starts with S0000000.LOG, and continues through S9999999.LOG.
DB2 may reuse log names after restoring a database (with or without rollforward recovery).
There is an exception if you roll forward to end of logs.
The database manager ensures that an incorrect log is not applied during rollforward recovery, but it cannot detect the location of the required log if it has been archived.
You must ensure that the correct logs are available for rollforward recovery. When a rollforward operation completes successfully, the last log that was used is truncated, and logging begins with the next sequential log file.
Any log in the log path directory with a sequence number greater than the last log used for rollforward recovery is reused.
Any entries in the truncated log following the truncation point are lost.
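The zero-padded naming scheme described above can be reproduced with printf — the letter S followed by a seven-digit sequence number:

```shell
# Sketch: generate DB2 log file names for a few sequence numbers,
# from the first (S0000000.LOG) through the last (S9999999.LOG).
for n in 0 1 9999999; do
  printf 'S%07d.LOG\n' "$n"
done
```

The fixed width is what makes the names sort correctly as plain strings, which matters when you compare them against the "First active log file" value.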
If a database has not been activated, DB2 truncates the current log file when all applications have disconnected from the database. The next time an application connects to the database, DB2 starts logging to a new log file. If many small log files (i.e., smaller than the configured LOGFILSIZ) are being produced on your system, you may want to consider using the ACTIVATE DATABASE command. This not only saves the overhead of having to initialize the database when applications connect, but it also saves the overhead of having to allocate a large log file, truncate it, then allocate a new log file.
The USEREXIT database parameter causes the database manager to call a user exit program to archive and retrieve log files. The log files are archived in a location that is different from the active log path. If USEREXIT is set to ON, rollforward recovery is automatically enabled.
db2 get db cfg for sample | grep -i "user exit"

User exit for logging status = YES
User exit for logging enabled (USEREXIT) = ON
Consider the following when deciding whether to enable user exits:
The data transfer speed of the device you use to store offline archived logs and the software used to make the copies should, at a minimum, match the average rate at which the database manager fills log files. If the transfer speed cannot keep up with new log files being generated, you may run out of disk space if logging activity continues for a sufficient period of time. The amount of time it takes to run out of disk space is determined by the amount of free disk space. If this happens, DB2 database processing stops.
This is true unless the BLK_LOG_DSK_FUL variable is set.
The data transfer speed is most significant when using tape or an optical medium. Some tape devices require the same amount of time to copy a file, regardless of its size. You must determine the capabilities of your archiving device. Tape devices have other considerations. The frequency of the archiving request is important. For example, if the time taken to complete any copy operation is 5 minutes, the log should be large enough to hold 5 minutes of log data during your peak workload.
The tape device may have design limits that restrict the number of operations per day. This is very important when you determine the log file size.
The following considerations apply to calling a user exit program for archiving and retrieving log files:
The database configuration file parameter USEREXIT specifies whether the database manager invokes a user exit program to archive log files automatically or to retrieve log files during rollforward recovery of databases. A request to the user exit program to retrieve a log file is also made when the rollforward command needs a log file that is not found in the active log path.
When archiving, a log file name is passed to the user exit program when it is full, even if the log file is still active and is needed for normal processing. This allows copies of the log file to be moved away from volatile media as quickly as possible. The log file that was passed to the user exit program is retained in the log path directory until it is no longer needed for normal processing. At this point, the disk space is reused.
DB2 opens a log file in read mode when it starts a user exit program to archive the log file. A user exit program should never delete a log file after it is archived, because the file could still be active and needed for crash recovery. DB2 manages disk space reuse when log files are archived.
When a log file has been archived and is inactive, DB2 does not delete the file but renames it as the next log file in sequence when such a file is needed. This results in a performance gain, because creating a new log file (instead of renaming the file) causes all pages to be written out to guarantee the disk space. It is more efficient to reuse than to free up and then reacquire the necessary pages on disk.
DB2 will not invoke the user exit program to retrieve the log file during crash recovery or rollback unless the LOGSECOND database configuration parameter is set to -1 (i.e., infinite active log space is enabled).
A user exit program does not guarantee rollforward recovery to the point of failure but attempts only to make the failure window smaller. As log files fill, they are queued for the user exit program. Should the disk containing the log fail before a log file is filled, the data in that log file is lost (unless it is mirrored). Also, because the files are queued for archiving, the disk can fail before all the files are copied, causing any log files in the queue to be lost.
The configured size of each individual log file has a direct bearing on the user exit program. If each log file is very large, a large amount of data can be lost if a disk fails. A database configured with small log files causes the data to be passed to the user exit program more often.
An archive request to the user exit program occurs each time an active log file is filled (only if USEREXIT is set to YES or ON). An active log file may not be full at the time of the last disconnection from the database; in that case, the user exit program is also called for the final, truncated log file.
A copy of the log should be made to another physical device so that the offline log file can be used by rollforward recovery if the device containing the log file experiences a media failure. This should not be the same device containing database data files.
If you have enabled user exit programs and are using a tape drive as a storage device for logs and backup images, you need to ensure that the destination for the backup images and the archived logs is not the same tape drive. Because some log archiving may take place while a backup operation is in progress, an error may occur when the two processes are trying to write to the same tape drive at the same time.
In some cases, if a database is closed before a positive response has been received from a user exit program for an archive request, the database manager will send another request when the database is opened. Thus, a log file may be archived more than once.
NOTE
To free unused log space, the log file is truncated before it is archived.
If a user exit program receives a request to archive a file that does not exist (because there were multiple requests to archive and the file was deleted after the first successful archiving operation) or to retrieve a file that does not exist (because it is located in another directory or the end of the logs has been reached), it should ignore this request and pass a successful return code.
The user exit program should allow for the existence of different log files with the same name after a point in time recovery; it should be written to preserve both log files and to associate those log files with the correct recovery path.
When a user exit program is enabled for two or more databases that are using the same tape device to archive log files and a rollforward operation is taking place on one of the databases, if the other database(s) is active, it can impact the performance of the rollforward. If another database tries to archive a log file while the rollforward operation is in progress, the logs required for the rollforward operation may not be found or the new log file archived to the tape device might overwrite the log files previously stored on that tape device.
To prevent either situation from occurring, you can ensure that no other databases on the server that calls the user exit program are open during the rollforward operation or write a user exit program to handle this situation.
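The behavior described in the items above can be sketched as a minimal ARCHIVE handler. This is a hypothetical illustration, not IBM's sample db2uext2 program (whose interface differs); the two points it captures are that the user exit copies rather than deletes the log file, and that a request for a file that no longer exists is ignored with a successful return code:

```shell
# Hypothetical sketch of a user exit's ARCHIVE handling (NOT db2uext2).
# Never deletes the source log file: DB2 manages reuse of the active path.
archive_log() {
  logpath=$1; logfile=$2; archdir=$3
  if [ -f "$logpath/$logfile" ]; then
    cp "$logpath/$logfile" "$archdir/" || return 8   # report a media error
  fi
  return 0   # file already archived and removed: still report success
}

# Demonstration with made-up paths and log names:
mkdir -p /tmp/db2_logs /tmp/db2_arch
touch /tmp/db2_logs/S0000001.LOG
archive_log /tmp/db2_logs S0000001.LOG /tmp/db2_arch && echo "first request: success"
archive_log /tmp/db2_logs S0000042.LOG /tmp/db2_arch && echo "missing file: still success"
```

The second call models a duplicate archive request after the file was cleaned up: it copies nothing but still returns 0, exactly as the guidance above requires.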
When a database is restarted, the minimum number of logs in the database log directory will equal the number of primary logs configured with the LOGPRIMARY database configuration parameter. More logs than that might be found in the log directory if, at the time of shutdown, the number of empty logs in the directory was greater than the LOGPRIMARY value in effect at restart. This can happen if the LOGPRIMARY configuration parameter was changed between shutdown and restart, or if secondary logs were allocated and never used.
Each time that a database is restarted, DB2 examines the database log directory. If fewer logs than the number of primary logs are found, the difference will be allocated. If more empty logs than primary logs are available in the database directory, DB2 will allow the database to be restarted with as many available empty logs as are found in the database directory.
As well, after database shutdown, secondary log files that have been created will remain in the active log path at restart time.
If a log file is truncated as a result of an online backup or a force archive command (ARCHIVE LOG), DB2 does not rename the log file. Instead, a truncated log file is deleted when it becomes inactive, and a new log file is created when the application needs one. As a result, it may appear as though there are fewer log files in the log directory than LOGPRIMARY. You can avoid generating truncated logs altogether by setting the DB2_DISABLE_FLUSH_LOG registry variable to ON. However, this may cause online backups to be unusable if the log is not flushed after the backup completes.
DB2 checks the archive status of log files whenever the first active log changes. As a result, information is recorded to disk earlier and more often. The benefit of this change is that, if the system crashes, the information stored on disk about which log files have been successfully archived is more accurate, and DB2 does not have to reissue archive requests for log files that are already archived.
There is no change to what DB2 does after detecting the successful archive of a particular log file. DB2 now detects the completion of log archives earlier and will rename them earlier. Inactive truncated log files are deleted. As a result, the number of log files remaining in the active log path can be fewer than the LOGPRIMARY database configuration value. In this case, DB2 will create new log files when needed.
Before this change, restarting the database reduced the number of logs to equal the value of LOGPRIMARY. Now, when you restart a database, DB2 first examines the database log directory. If the number of empty logs is fewer than the number of primary logs, DB2 will allocate new logs to make up the difference. If more empty logs are available than there are primary logs in the database directory, DB2 will allow the database to be restarted with all the available empty logs in the database directory.
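The restart behavior described above can be sketched as simple arithmetic (the values below are illustrative, not taken from a real configuration):

```shell
# Illustrative values: LOGPRIMARY from the database configuration,
# and the number of empty logs found in the log directory at restart.
logprimary=3
empty_logs=1

# If fewer empty logs than LOGPRIMARY exist, DB2 allocates the
# difference; if more exist, it restarts with all of them.
if [ "$empty_logs" -lt "$logprimary" ]; then
    allocate=$((logprimary - empty_logs))
else
    allocate=0
fi
echo "logs to allocate at restart: $allocate"
```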
After database shutdown, any secondary log files in existence will remain in the active log path at restart time. To clear out the active log path, the ARCHIVE LOG command may be used.
If LOGRETAIN is set to YES or ON, archived logs are kept in the database log path directory, and the database is considered recoverable, meaning that rollforward recovery is enabled.
Log files in the database log directory are never removed if they may be required for crash recovery.
When the USEREXIT database configuration parameter is enabled, a full log file becomes a candidate for removal only after it has been archived and is no longer required for crash recovery.
A log file that is required for crash recovery is called an active log.
A log file that is not required for crash recovery is called an archived log.
db2 get db cfg for sample | grep -i "log retain"
 Log retain for recovery status                          = RECOVERY
 Log retain for recovery enabled             (LOGRETAIN) = RECOVERY
The process of allocating new log files and removing old log files is dependent on the settings of the USEREXIT and LOGRETAIN database configuration parameters.
When both LOGRETAIN and USEREXIT are set to OFF:
Circular logging will be used.
Rollforward recovery is not supported with circular logging.
Crash recovery is always supported.
During circular logging, new log files other than secondary logs are not generated, and old log files are not deleted.
Log files are handled in a circular fashion. That is, when the last log file is full, DB2 begins writing to the first log file.
A log full situation can occur if all of the log files are active and the circular logging process cannot wrap to the first log file.
Secondary log files are created when all the primary log files are active and full. Once a secondary log is created, it is not deleted until the database is restarted.
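The maximum active log space under circular logging follows from the parameters above. A quick check, using the default values shown later in this chapter (LOGPRIMARY 3, LOGSECOND 2, LOGFILSIZ 250):

```shell
# Default values for a newly created database
logprimary=3
logsecond=2
logfilsiz=250   # log file size in 4-KB pages

# Maximum active log space in KB: (primary + secondary) x size x 4 KB
log_space_kb=$(( (logprimary + logsecond) * logfilsiz * 4 ))
echo "maximum active log space: ${log_space_kb} KB"
```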
When LOGRETAIN is set to ON and USEREXIT is set to OFF:
Both rollforward recovery and crash recovery are enabled.
The database is known to be recoverable.
When USEREXIT is set to OFF, DB2 does not move log files from the active log directory.
Each time a log file becomes full, DB2 begins writing records to another log file and creates a new log file.
When USEREXIT is set to ON:
This also turns LOGRETAIN ON.
When both LOGRETAIN and USEREXIT are set to ON, both rollforward recovery and crash recovery are enabled.
When a log file becomes full, it is automatically archived, using the user-supplied user exit program.
Log files are usually not deleted. Instead, when a new log file is required and one is not available, an archived log file is renamed and used again.
An archived log file is not deleted or renamed immediately after it has been closed and copied to the log archive directory.
DB2 waits until a new log file is needed, then renames the oldest archived log.
A log file that has been moved to the database directory during recovery is removed during the recovery process when it is no longer needed. Until DB2 runs out of log space, you will see old log files in the database directory.
If an error is encountered while archiving a log file, archiving of log files will be suspended for 5 minutes before being attempted again. DB2 will then continue archiving log files as they become full. Log files that became full during the 5-minute waiting period will not be archived immediately after the delay; DB2 will spread the archive of these files over time.
The BLK_LOG_DSK_FUL database configuration parameter can be set to prevent disk full errors from causing DB2 to stop processing when it cannot create a new log file in the active log path.
If BLK_LOG_DSK_FUL is set to NO, a transaction that receives a log disk full error will fail and be rolled back. In addition, DB2 will stop processing.
Setting BLK_LOG_DSK_FUL to YES causes applications to wait when DB2 encounters a log disk full error. You are then able to resolve the error, and the transaction can continue.
A disk full situation can be resolved by moving old log files to another file system or by increasing the size of the file system so that hanging applications can complete.
DB2 will attempt to create a log file every 5 minutes until it succeeds and will write a message to the administration notification log.
Frequently monitor the administration notification log to confirm that your application is hanging because of a log disk full condition.
Until the log file is successfully created, no user application that attempts to update table data will be able to commit transactions.
Read-only queries may not be directly affected; however, if a query needs to access data that is locked by an update request or a data page that is fixed in the buffer pool by the updating application, read-only queries will also appear to hang.
db2 get db cfg for sample | grep -i "block log"
 Block log on disk full                (BLK_LOG_DSK_FUL) = NO
db2 update db cfg for sample using blk_log_dsk_ful on
db2 get db cfg for sample | grep -i "block log"
 Block log on disk full                (BLK_LOG_DSK_FUL) = ON
DB2 now supports closing the active log for a recoverable database at any time. This allows you to collect a complete set of log files up to a known point, then to use these log files to update a standby database. You can initiate on demand log archiving by invoking the ARCHIVE LOG command.
db2 archive log for database sample
NOTE
The issuer of this command cannot have a connection to the specified database, although other users may be connected.
Performance may be impacted during execution of the command, due to the activities writing from the log buffer to disk.
If USEREXIT is set to ON, an archive request is issued after the logs are closed and truncated. The completion of the archive command does not guarantee that the logs have been moved to the archive directory.
There are some advantages and disadvantages when a raw device is used for the database logs.
Advantages:
You can attach more than 26 physical drives to a system.
The file I/O path length is shorter. This may improve performance on your system. You should conduct benchmarks to evaluate whether there are measurable benefits for your workload.
Disadvantages:
The device cannot be shared by other applications; i.e., the entire device must be assigned to DB2.
The device cannot be operated on by any operating system utility or third-party tool that would back up or copy from the device.
You can easily wipe out the file system on an existing drive if you specify the wrong physical drive number.
You can configure a raw log with the NEWLOGPATH database configuration parameter. Before doing so, however, consider the advantages and disadvantages listed above and the additional considerations listed below:
Only one device is allowed for the log path. You can define the device over multiple disks at the operating system level. DB2 will make an operating system call to determine the size of the device in 4-KB pages.
If you use multiple disks, this will provide a larger device, and the resulting striping can improve performance and increase I/O throughput.
In this situation, DB2 will attempt to use all pages, up to the supported limit. The size of the device, in 4-KB pages, determines how much space the operating system reports as available to DB2. The amount of disk space that DB2 can actually write to is referred to as the device size available. The first 4-KB page of the device is not used by DB2 (it is generally reserved for the operating system). This means that the total space available to DB2 is: device size available = device size - 1.
When using raw devices for the logs, secondary logs are not used. The size of active log space is the number of 4-KB pages that result from (LOGPRIMARY x LOGFILSIZ).
Log records are still grouped into log extents, each with a size of LOGFILSIZ 4-KB pages. Log extents are placed in the raw device, one after another. Each extent also includes an extra two pages for the extent header. This means that the number of available log extents on the device is: device size available / (LOGFILSIZ + 2).
The device must be large enough to support the active log space. That is, the number of available log extents must be greater than (or equal to) the value specified for the LOGPRIMARY database configuration parameter. If the USEREXIT database configuration parameter is set to ON, ensure that the raw device can contain more logs than the value specified for the LOGPRIMARY database configuration parameter. This will compensate for the delay incurred when the user exit program is archiving a log file.
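The sizing rules above can be checked with a little arithmetic (all values below are illustrative assumptions, not recommendations):

```shell
# Illustrative raw log device sizing
device_size=8192        # raw device size in 4-KB pages
logfilsiz=250           # log extent size in 4-KB pages
logprimary=3

# The first page is reserved for the operating system, and each
# extent carries a 2-page extent header:
device_size_available=$((device_size - 1))
extents=$(( device_size_available / (logfilsiz + 2) ))
echo "available log extents: $extents"

# The device must hold at least LOGPRIMARY extents
if [ "$extents" -ge "$logprimary" ]; then
    echo "device is large enough for the active log space"
fi
```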
If you are using circular logging, the LOGPRIMARY database configuration parameter will determine the number of log extents that are written to the device. This may result in unused space on the device.
If you are using LOGRETAIN without a user exit program, after the number of available log extents are all used up, all operations that result in an update will receive a log full error. At this time, you must shut down the database and take an offline backup of it to ensure recoverability. After the database backup operation, the log records written to the device are lost. This means that you cannot use an earlier database backup image to restore the database, then roll it forward. If you take a database backup before the number of available log extents are all used up, you can restore and roll the database forward.
If you are using LOGRETAIN with a user exit program, the user exit program is called for each log extent as it is filled with log records. The user exit program must be able to read the device and to store the archived log as a file. DB2 will not call a user exit program to retrieve log files to a raw device. Instead, during rollforward recovery, DB2 will read the extent headers to determine when the raw device contains the required log file. If the required log file is not found in the raw device, DB2 will search the overflow log path. If the log file is still not found, DB2 will call the user exit program to retrieve the log file into the overflow log path. If you do not specify an overflow log path for the rollforward operation, DB2 will not call the user exit program to retrieve the log file.
If you have configured a raw device for logging and are using DataPropagator or another application that calls the db2ReadLog API, the OVERFLOWLOGPATH database configuration parameter must be configured. DB2 may call a user exit program to retrieve the log file and return the log data requested by the db2ReadLog API. The retrieved log file will be placed in the path specified by the OVERFLOWLOGPATH database configuration parameter.
When a database is restored and rolled forward to a point in time prior to the end of logs, a new set of logs is generated. Therefore, the currently archived logs will no longer be usable for this new database.
Consider the following scenarios:
If you plan to drop a database prior to a restore operation, you need to save the log files in the active log path before issuing the DROP DATABASE command. After the database has been restored, these log files may be required for rollforward recovery because some of them may not have been archived before the database was dropped. Normally, you are not required to drop a database prior to issuing the RESTORE command.
If you are rolling a database forward to a specific point in time, log data after the timestamp you specify will be overwritten. If, after completing the point-in-time rollforward operation and reconnecting to the database, you determine that you actually needed to roll the database forward to a later point in time, you will not be able to do so, because the logs may already have been overwritten unless they were saved to a different path or with a utility such as TSM.
It is possible that the original set of log files has been archived; however, DB2 may be calling a user exit program to archive the newly generated log files automatically. Depending on how the user exit program is written, this could cause the original set of log files in the archive log directory to be overwritten. Even if both the original and the new set of log files exist in the archive log directory, you may have to determine which set should be used for future recovery operations.
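A point-in-time restore and rollforward of the kind described above might look as follows (the backup timestamp and target time are hypothetical placeholders):

```shell
# Restore from a backup image (timestamp is a hypothetical placeholder)
db2 restore database sample taken at 20040714221500

# Roll forward to a specific point in time, then complete recovery;
# log data after this timestamp will be overwritten by new log records
db2 rollforward database sample to 2004-07-14-22.30.00.000000 and stop
```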
Let's manage log files for the database SAMPLE.
First, create the SAMPLE database using the db2sampl program. After the database has been successfully created, the three default primary log files, each 250 4-KB pages in size, will be created in the directory:
Path to log files = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
By default, all databases use circular logging. We need to enable archive logging by updating the LOGRETAIN and/or USEREXIT database configuration parameters. Then we will need to modify the DB2 user exit program to archive and retrieve logs. The log files are archived in a location that is different from the active log path.
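Enabling archive logging might look like this (a sketch; after LOGRETAIN is turned on, the database is placed in backup pending state, so a full offline backup is required before it can be used, and the backup path shown is a hypothetical placeholder):

```shell
# Enable archive logging with a user exit program
db2 update db cfg for sample using logretain recovery userexit on

# The database is now in backup pending state; take a full offline backup
db2 backup database sample to c:\backup
```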
Create sample database and get the database configuration
db2sampl
db2 connect to sample
db2 get db cfg for sample | grep -i log
 Log retain for recovery status                          = NO
 User exit for logging status                            =
 Catalog cache size (4KB)              (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                        (LOGBUFSZ) = 8
 Log file size (4KB)                         (LOGFILSIZ) = 250
 Number of primary log files                (LOGPRIMARY) = 3
 Number of secondary log files               (LOGSECOND) = 2
 Changed path to log files                  (NEWLOGPATH) =
 Path to log files                                       = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                     (OVERFLOWLOGPATH) =
 Mirror log path                         (MIRRORLOGPATH) =
 First active log file                                   =
 Block log on disk full                (BLK_LOG_DSK_FUL) = NO
 Percent of max active log space by transaction (MAX_LOG) = 0
 Num. of active log files for 1 active UOW (NUM_LOG_SPAN) = 0
 Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
 Log retain for recovery enabled             (LOGRETAIN) = OFF
 User exit for logging enabled                (USEREXIT) = OFF
For this example, we will reduce the log file size so that the logs fill up quickly, setting LOGFILSIZ to 16 4-KB pages.
For recovery reasons, we will also change the active log directory to c:\dblog\SAMPLE. This directory structure must match the definition in the db2uext2.c program.
Reduce log file size
db2 update db cfg for sample using logfilsiz 16 logprimary 2 logsecond 10
db2 get db cfg for sample | grep -i log
 Log retain for recovery status                          = NO
 User exit for logging status                            =
 Catalog cache size (4KB)              (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                        (LOGBUFSZ) = 8
 Log file size (4KB)                         (LOGFILSIZ) = 16
 Number of primary log files                (LOGPRIMARY) = 2
 Number of secondary log files               (LOGSECOND) = 10
 Changed path to log files                  (NEWLOGPATH) =
 Path to log files                                       = C:\DB2\NODE0000\SQL00002\SQLOGDIR\
 Overflow log path                     (OVERFLOWLOGPATH) =
 Mirror log path                         (MIRRORLOGPATH) =
 First active log file                                   =
 Block log on disk full                (BLK_LOG_DSK_FUL) = NO
 Percent of max active log space by transaction (MAX_LOG) = 0
 Num. of active log files for 1 active UOW (NUM_LOG_SPAN) = 0
 Percent log file reclaimed before soft chckpt (SOFTMAX) = 100
 Log retain for recovery enabled             (LOGRETAIN) = OFF
 User exit for logging enabled                (USEREXIT) = OFF
By default, the first database created is SQL00001; here, the active log files reside under C:\DB2\NODE0000\SQL00002\SQLOGDIR\. After the database is updated with the NEWLOGPATH parameter, the active log files will be allocated under the new log path directory once the last user disconnects from the database and the first user reconnects.
ls -altr c:\DB2\NODE0000\SQL00002\SQLOGDIR\
total 6048
drwxrwxrwx   1 Administrators None        0 Jul 14 22:12 ..
drwxrwxrwx   1 Administrators None        0 Jul 14 22:12 .
-rwxrwxrwa   1 Administrators None  1032192 Jul 14 22:15 S0000002.LOG
-rwxrwxrwa   1 Administrators None  1032192 Jul 14 22:15 S0000001.LOG
-rwxrwxrwa   1 Administrators None  1032192 Jul 14 22:16 S0000000.LOG
db2 update db cfg for sample using newlogpath c:\dblog\SAMPLE
db2 connect to sample
db2 get db cfg for sample | grep -i log
 Log retain for recovery status                          = NO
 User exit for logging status                            =
 Catalog cache size (4KB)              (CATALOGCACHE_SZ) = (MAXAPPLS*4)
 Log buffer size (4KB)                        (LOGBUFSZ) = 8
 Log file size (4KB)                         (LOGFILSIZ) = 16
 Number of primary log files