Lost Updates: Another Portability Issue

A common use of REPEATABLE READ in databases that employ shared read locks is lost update prevention.

Note Lost update detection and solutions to the lost update problem are discussed in the previous chapter.

If we have REPEATABLE READ enabled in a database that employs shared read locks (and not multiversioning), lost update errors can’t happen.

Lost updates will not happen in those databases because the simple act of selecting the data leaves a shared lock on it; once our transaction has read that data, it cannot be modified by any other transaction.
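As a sketch of how that plays out, consider two sessions working against a hypothetical ACCOUNTS table in a database that uses shared read locks rather than multiversioning (SQL Server in its traditional locking mode, for example). The table name, column names, and key value are assumptions for illustration only:

-- Session 1: read the row under REPEATABLE READ; the shared read lock is retained
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT balance FROM accounts WHERE account_id = 123;
-- the shared lock on this row is now held until Session 1 commits or rolls back

-- Session 2: attempts to modify the row Session 1 has just read
UPDATE accounts SET balance = balance - 100 WHERE account_id = 123;
-- blocks here; the exclusive lock it needs conflicts with Session 1's shared lock

-- Session 1: finishes its work and commits, releasing the shared lock
COMMIT;
-- only now is Session 2's UPDATE allowed to proceed

The reading transaction physically prevents anyone else from changing the data it has read, which is exactly why the lost update cannot occur here.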

Now, if your application assumes that REPEATABLE READ implies “lost updates can’t happen,” you are in for a painful surprise when you move your application to a database that does not use shared read locks as an underlying concurrency control mechanism.
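To see why, replay essentially the same interleaving against a multiversioning database such as Oracle, where a plain SELECT leaves no lock behind. Oracle does not offer a REPEATABLE READ level; the sketch below assumes its default READ COMMITTED isolation and reuses the same hypothetical ACCOUNTS table:

-- Session 1: read the current balance; the query leaves no lock on the row
SELECT balance FROM accounts WHERE account_id = 123;  -- sees, say, 500

-- Session 2: reads the same value, changes it, and commits without waiting
SELECT balance FROM accounts WHERE account_id = 123;  -- also sees 500
UPDATE accounts SET balance = 400 WHERE account_id = 123;
COMMIT;

-- Session 1: updates based on the stale value it read earlier and commits
UPDATE accounts SET balance = 550 WHERE account_id = 123;
COMMIT;
-- Session 2's change has been silently overwritten: a lost update

Because the read blocked no one, nothing in the database itself stopped Session 1 from overwriting Session 2's work.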

Note In a stateless environment, such as a web-based application, lost updates would likely be a cause for concern—even in REPEATABLE READ isolation. This is because a single database session is used by many clients via a connection pool and locks are not held across calls. REPEATABLE READ isolation only prevents lost updates in a stateful environment, such as that observed with a client-server application.

While this sounds good, you must remember that leaving shared read locks behind on all data as it is read will, of course, severely limit concurrent reads and modifications.

So, while this isolation level in those databases provides for lost update prevention, it does so by removing the ability to perform concurrent operations! You can’t always have your cake and eat it too.