A client of mine is performing a conversion process that takes days to run and of course it would be nice if we could shorten that. The conversion process uses COBOL code to read data, validate it (application logic) and write it into the target tables. Were that not bad enough, it commits every transaction (each transaction could be multiple inserts into numerous tables). Log file sync is one of the main wait events; “enq: TX – row lock contention” is another. Since the code is relatively old, it’s not feasible to change it to perform batch commits. We’ve had that conversation already.
So this sticky situation had me delving into commit strategies; fortunately for me, I was reading “Oracle Core: Essential Internals for DBAs and Developers” by Jonathan Lewis and he has an excellent chapter that describes this very well. Buy the book (links at the end) – you won’t be disappointed.
The proposed solution involves tinkering with the parameters COMMIT_LOGGING and COMMIT_WAIT which can be set at either the system or session level. In order to prove the value of setting these parameters to my client, I came up with a simple test case using Oracle 18.104.22.168 in a VirtualBox VM. Taking the sample SOE data from Swingbench (http://dominicgiles.com/swingbench.html), I used a PL/SQL loop to update all 1.4 million rows in the ORDERS table with a commit after each update.
So those parameters change the behaviour of the COMMIT statement, useful when you can’t modify the code. These COMMIT modifications are nicely summarised in MOS note 857576.1 “How to Reduce Waits on ‘Log File Sync'”.
- COMMIT WRITE WAIT: The commit command is synchronous. It doesn’t return until the relevant redo information is written to the online redo log.
- COMMIT WRITE NOWAIT: The commit command is asynchronous. It can return before the relevant redo information is written to the online redo log.
- COMMIT WRITE BATCH: The relevant redo information is buffered in the log buffer along with that of other concurrently executing transactions, rather than a message being sent to the log writer immediately; the redo is written to the online redo log some time later.
- COMMIT WRITE IMMEDIATE: The commit “prods” the LGWR process by sending a message, so that the redo is written immediately to the redo logs.
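These behaviours can be requested per statement with the COMMIT WRITE clause, or made the default at session or system level with the two parameters. A minimal sketch of the three forms:

-- statement level: ask for batched, asynchronous commit handling for this commit only
COMMIT WRITE BATCH NOWAIT;

-- session level: every plain COMMIT in this session behaves the same way
ALTER SESSION SET COMMIT_LOGGING = BATCH;
ALTER SESSION SET COMMIT_WAIT    = NOWAIT;

-- system level (use with caution: this affects all sessions)
ALTER SYSTEM SET COMMIT_LOGGING = BATCH;
ALTER SYSTEM SET COMMIT_WAIT    = NOWAIT;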
The BATCH NOWAIT combination is pretty much my desired one; bear in mind that these settings should be used with caution. As the documentation goes on to explain…
COMMIT_WAIT is an advanced parameter used to control when the redo for a commit is flushed to the redo logs.
- Be aware that the NOWAIT option can cause a failure that occurs after the database receives the commit message, but before the redo log records are written. This can falsely indicate to a transaction that its changes are persistent. Also, it can violate the durability of ACID (Atomicity, Consistency, Isolation, Durability) transactions if the database shuts down unexpectedly.
- If the parameter is set to FORCE_WAIT, the default behavior (immediate flushing of the redo log buffer with wait) is used. If this is a system setting, the session level and transaction level (COMMIT_WRITE) options will be ignored. If this is a session level setting, the transaction level options will be ignored. If COMMIT_WAIT is altered after it has been set to FORCE_WAIT, then the FORCE_WAIT option is no longer valid.
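To illustrate the precedence described above, a sketch: with FORCE_WAIT set at the system level, requests for NOWAIT at the session or transaction level are silently ignored.

-- system level: force the default synchronous behaviour everywhere
ALTER SYSTEM SET COMMIT_WAIT = FORCE_WAIT;

-- this session level request is now ignored...
ALTER SESSION SET COMMIT_WAIT = NOWAIT;

-- ...as is the transaction level request; this commit still waits for the redo write
COMMIT WRITE NOWAIT;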
So, with that understood, on to the test results.
As you can see, by setting COMMIT_WAIT to NOWAIT, the contrast in performance is staggering. The redo synch time saved alone accounts for most of the reduction in elapsed time, which makes this option well worth considering.
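For reference, the redo synch figures behind those results can be pulled from V$SESSTAT for the current session; a sketch (statistic names are as they appear in V$STATNAME):

-- redo synch time is in centiseconds; redo synch writes counts the commit waits
select n.name, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    s.sid = sys_context('userenv', 'sid')
and    n.name in ('redo synch time', 'redo synch writes');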
So what’s the catch? Well, as I’ve pointed out, we’re effectively breaking Oracle’s promise that a committed transaction is durable. What happens is that a commit makes an asynchronous call to the log writer and proceeds to the next transaction without waiting for an acknowledgement that the redo has been written from the log buffer to the online redo log. Best case, all is well. However, an instance crash would mean committed transactions may not have made it to the redo logs, as the in-memory log buffer is wiped out by the crash. What this means is effectively having to restart the conversion, having first restored the database (or used FLASHBACK DATABASE if you’ve enabled it).
Worth the risk? For this conversion, yes. Acknowledging the risk and the rare chance of an instance crash, definitely worth a shot.
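If you plan to lean on FLASHBACK DATABASE as the fall-back, it is worth confirming it is enabled, and creating a guaranteed restore point, before the conversion starts. A sketch (the restore point name is illustrative):

-- confirm flashback logging is on
select flashback_on from v$database;

-- a guaranteed restore point gives a known point to rewind to
create restore point before_conversion guarantee flashback database;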
set timing on echo on feedback on
set linesize 132 pagesize 99
-- connect as the SOE user that owns the ORDERS table
-- flush the buffer cache between executions
alter system flush buffer_cache;
-- set the commit parameters
alter session set commit_logging=batch;
alter session set commit_wait=nowait;
-- the update PL/SQL
declare
  cursor get_all is select customer_id from orders;
begin
  for l_cust in get_all loop
    update orders set information = l_cust.customer_id
    where  customer_id = l_cust.customer_id;
    commit;
  end loop;
end;
/
-- now get the interesting session statistics
select n.name, s.*
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name like '%edo%'
and    s.value > 0;
Oracle Core: Essential Internals for DBAs and Developers – http://www.apress.com/9781430239543?gtmf=s