When two or more transactions update the same data at the same time, concurrency becomes the biggest issue.
There are four types of concurrency problems commonly seen in everyday database programming.
- Lost Update – This occurs when two transactions read the same row and then both update it, so the later update silently overwrites the earlier one. A classic example is a double booking in a hotel reservation system.
- Dirty Reads – A dirty read occurs when a transaction is allowed to read data from a row that has been modified by another running transaction that has not yet committed.
- Non-Repeatable Reads – Also known as inconsistent analysis, this occurs when a transaction reads the same row several times and gets different data each time because another transaction modified the row in between. For example, an editor reads the same document twice, but between the two readings the writer rewrites the document. When the editor reads the document the second time, it has changed: the original read was not repeatable. This problem could be avoided if the writer could not change the document until the editor had finished reading it for the last time.
- Phantom Reads – These occur when a transaction re-runs a query that returns the set of rows matching a condition and finds that, in the meantime, another transaction has inserted or deleted rows matching that condition, so the two executions of the same query return different sets of rows.
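The lost update above can be made concrete with a short sketch. This is not a real database API; `rooms`, `t1_seen`, and `t2_seen` are illustrative names standing in for a row and for each transaction's private read of it. The point is the interleaving: both transactions read before either writes, so each works from a stale snapshot.

```python
# A minimal sketch of a lost update: two "transactions" do a
# read-modify-write against the same row without any locking.
rooms = {"available": 1}  # one hotel room left

# Both transactions read the row before either one writes.
t1_seen = rooms["available"]  # transaction 1 sees 1 room free
t2_seen = rooms["available"]  # transaction 2 also sees 1 room free

# Each decrements based on its own stale snapshot.
rooms["available"] = t1_seen - 1  # transaction 1 books the room -> 0
rooms["available"] = t2_seen - 1  # transaction 2 overwrites -> 0 again

# Both bookings "succeeded", but only one decrement survived:
# transaction 1's update is lost and the room is double-booked.
print(rooms["available"])  # -> 0, not -1
```

Under any isolation scheme that makes the read-modify-write atomic, transaction 2 would either see 0 rooms and fail, or wait until transaction 1 finished.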
The concurrency mechanisms used to control these effects are:
- Locks at different levels of granularity
- Blocking (including livelocks and deadlocks)
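To show how a lock prevents the lost update, here is a sketch using a coarse in-process lock as a stand-in for an exclusive row lock. The names (`book_room`, `rooms`) are illustrative; a real database would acquire and release such locks for you as part of the transaction.

```python
import threading

rooms = {"available": 100}
lock = threading.Lock()

def book_room():
    # Holding the lock makes the read and the write one atomic
    # step, much like an exclusive lock held for the transaction:
    # no other booking can interleave between them.
    with lock:
        seen = rooms["available"]
        rooms["available"] = seen - 1

threads = [threading.Thread(target=book_room) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 100 decrements survive; none is lost.
print(rooms["available"])  # -> 0
```

The cost of this safety is blocking: every other booking waits while one holds the lock, which is exactly the behavior the next section's causes of blocking make worse.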
The common causes of blocking are:
- Manual concurrency control done poorly
- Poor indexing
- Poor queries