Zero
0.1.0
A RAW-style lock queue to hold granted and waiting lock requests (RawLock). More...
#include <lock_raw.h>
Classes | |
| struct | Compatibility |
| struct | Iterator |
| Safely iterates through lock entries in this queue from the head. More... | |
Public Member Functions | |
| w_error_codes | acquire (RawXct *xct, uint32_t hash, const okvl_mode &mode, int32_t timeout_in_ms, bool check, bool wait, bool acquire, RawLock **out) |
| Adds a new lock in the given mode to this queue, waiting until it is granted. More... | |
| w_error_codes | retry_acquire (RawLock **lock, bool wait, bool acquire, int32_t timeout_in_ms) |
| w_error_codes | complete_acquire (RawLock **lock, bool wait, bool acquire, int32_t timeout_in_ms) |
| void | release (RawLock *lock, const lsn_t &commit_lsn) |
| Releases the given lock from this queue, waking up others if necessary. More... | |
| void | update_xlock_tag (const lsn_t &commit_lsn) |
| void | atomic_lock_insert (RawLock *new_lock) |
| Atomically insert the given lock to this queue. Called from acquire(). More... | |
| Compatibility | check_compatiblity (RawLock *lock) const |
| bool | peek_compatiblity (RawXct *xct, uint32_t hash, const okvl_mode &mode) const |
| Used for the check_only==true case; many things become much simpler and faster. More... | |
| w_error_codes | wait_for (RawLock *new_lock, int32_t timeout_in_ms) |
| RawLock * | find_predecessor (RawLock *lock) const |
| Returns the predecessor of the given lock. This removes marked entries it encounters. More... | |
| RawLock * | tail () const |
| Returns the last entry of this queue. This removes marked entries it encounters, which is why we cannot keep "tail" as a member variable and must instead traverse the list each time. More... | |
| bool | delink (RawLock *predecessor, RawLock *target, RawLock *successor) const |
| Delinks a marked entry from the list and deallocates the entry object. More... | |
| bool | trigger_UNDO (Compatibility &compatibility) |
Public Attributes | |
| RawLock | head |
| lsn_t | x_lock_tag |
Static Public Attributes | |
| static int | loser_count = HAS_LOSER_COUNT |
A RAW-style lock queue to hold granted and waiting lock requests (RawLock).
A queue is a bucket, so one queue can contain locks for resources with different hashes. However, each lock also stores the precise hash of its resource as a member. When lock-compatibility methods traverse the list, they ignore locks whose hashes differ. This gives the same concurrency without introducing a third level of complexity.
| w_error_codes RawLockQueue::acquire | ( | RawXct * | xct, |
| uint32_t | hash, | ||
| const okvl_mode & | mode, | ||
| int32_t | timeout_in_ms, | ||
| bool | check, | ||
| bool | wait, | ||
| bool | acquire, | ||
| RawLock ** | out | ||
| ) |
Adds a new lock in the given mode to this queue, waiting until it is granted.
| [in] | xct | the transaction to own the new lock |
| [in] | hash | precise hash of the resource to lock. |
| [in] | mode | requested lock mode |
| [in] | timeout_in_ms | maximum time to wait in milliseconds; a negative number means wait forever. Ignored for conditional locking. |
| [in] | check | if true, this method doesn't actually create a new lock object but just checks if the requested lock mode can be granted or not. |
| [in] | wait | If false, this method doesn't wait at all, and it also leaves the inserted lock entry even if the lock wasn't granted immediately. |
| [out] | out | pointer to the successfully acquired lock. Returns NULL if the lock could not be acquired, except in the conditional==true case. |
check_only==true can give a false positive in the concurrent-unlock case, but it gives no false negatives, assuming a conflicting lock is not concurrently taken for the key. This assumption holds for our only check_only==true use case, the tentative NX lock check before inserting a new key, because at that point we hold an EX latch! Thus, this is a safe and efficient check for B-tree insertion.
Conditional locking is the standard way to take a lock in a DBMS without holding latches for a long time. The B-tree first requests a lock without releasing the latch (conditional). If that fails, it releases the latch and locks unconditionally, which requires re-checking the page LSN and re-latching after the lock is granted. The purpose of this conditional parameter is to avoid inserting the same lock entry twice when the first conditional attempt fails. When conditional==true, we leave the lock entry in the queue and return it in out even if it wasn't granted. The caller MUST then either call retry_acquire() after the failed acquire (which returns eCONDLOCKTIMEOUT on failure) or release the lock. The entry is released at commit time anyway, but a waiting lock entry should be removed before the transaction does anything else.
| void RawLockQueue::atomic_lock_insert | ( | RawLock * | new_lock | ) |
Atomically insert the given lock to this queue. Called from acquire().
See Figure 4 and Sec 3.1 of [JUNG13].
| RawLockQueue::Compatibility RawLockQueue::check_compatiblity | ( | RawLock * | lock | ) | const |
Checks if the given lock can be granted. Called from acquire() after atomic_lock_insert() and release().
| w_error_codes RawLockQueue::complete_acquire | ( | RawLock ** | lock, |
| bool | wait, | ||
| bool | acquire, | ||
| int32_t | timeout_in_ms | ||
| ) |
Subroutine of acquire(), retry_acquire().
Delinks a marked entry from the list and deallocates the entry object.
target should be marked for death (target->next.is_marked()), but this is not a strict contract because of concurrent delinks. If it isn't (say, target was already delinked), the atomic CAS fails and this method returns false.
Returns the predecessor of the given lock. This removes marked entries it encounters.
Used for the check_only==true case; many things become much simpler and faster.
This is analogous to the wait-free contains() of the lock-free linked list in [HERLIHY]. As noted in acquire(), this method is safe for the check_only case: the EX latch on the page guarantees that no new lock for the key can arrive in the meantime.
Releases the given lock from this queue, waking up others if necessary.
| [in] | lock | the lock to release. |
| [in] | commit_lsn | LSN to update X-lock tag during SX-ELR. |
| w_error_codes RawLockQueue::retry_acquire | ( | RawLock ** | lock, |
| bool | wait, | ||
| bool | acquire, | ||
| int32_t | timeout_in_ms | ||
| ) |
Waits for the already-inserted lock entry. Used after a failed conditional locking.
| RawLock * RawLockQueue::tail | ( | ) | const |
Returns the last entry of this queue. This removes marked entries it encounters, which is why we cannot keep "tail" as a member variable and must instead traverse the list each time.
| bool RawLockQueue::trigger_UNDO | ( | Compatibility & | compatibility | ) |
| void RawLockQueue::update_xlock_tag | ( | const lsn_t & | commit_lsn | ) |
Makes sure x_lock_tag is at least the given LSN.
| w_error_codes RawLockQueue::wait_for | ( | RawLock * | new_lock, |
| int32_t | timeout_in_ms | ||
| ) |
Sleeps until the lock is granted. Called from acquire() after check_compatiblity() if the lock was not immediately granted.
| RawLock RawLockQueue::head |
mutable
The always-existing dummy entry at the head. head is never marked for death. Mutable because even find() physically removes entries (though logically it changes nothing).
| int RawLockQueue::loser_count = HAS_LOSER_COUNT |
static
| lsn_t RawLockQueue::x_lock_tag |
Stores the commit timestamp of the latest transaction that released an X lock on this queue; holds lsn_t::null if no such transaction exists; protected by _requests_latch.