Zero  0.1.0
RawLockQueue Struct Reference

A RAW-style lock queue to hold granted and waiting lock requests (RawLock). More...

#include <lock_raw.h>

Classes

struct  Compatibility
 
struct  Iterator
 Safely iterates through lock entries in this queue from the head. More...
 

Public Member Functions

w_error_codes acquire (RawXct *xct, uint32_t hash, const okvl_mode &mode, int32_t timeout_in_ms, bool check, bool wait, bool acquire, RawLock **out)
 Adds a new lock in the given mode to this queue, waiting until it is granted. More...
 
w_error_codes retry_acquire (RawLock **lock, bool wait, bool acquire, int32_t timeout_in_ms)
 
w_error_codes complete_acquire (RawLock **lock, bool wait, bool acquire, int32_t timeout_in_ms)
 
void release (RawLock *lock, const lsn_t &commit_lsn)
 Releases the given lock from this queue, waking up others if necessary. More...
 
void update_xlock_tag (const lsn_t &commit_lsn)
 
void atomic_lock_insert (RawLock *new_lock)
 Atomically inserts the given lock into this queue. Called from acquire(). More...
 
Compatibility check_compatiblity (RawLock *lock) const
 
bool peek_compatiblity (RawXct *xct, uint32_t hash, const okvl_mode &mode) const
 Used for the check_only=true case, where many things are much simpler and faster. More...
 
w_error_codes wait_for (RawLock *new_lock, int32_t timeout_in_ms)
 
RawLock * find_predecessor (RawLock *lock) const
 Returns the predecessor of the given lock. This removes marked entries it encounters. More...
 
RawLock * tail () const
 Returns the last entry of this queue, removing marked entries it encounters along the way; this is why we cannot keep a "tail" variable in the queue and must instead traverse it each time. More...
 
bool delink (RawLock *predecessor, RawLock *target, RawLock *successor) const
 Delinks a marked entry from the list and deallocates the entry object. More...
 
bool trigger_UNDO (Compatibility &compatibility)
 

Public Attributes

RawLock head
 
lsn_t x_lock_tag
 

Static Public Attributes

static int loser_count = HAS_LOSER_COUNT
 

Detailed Description

A RAW-style lock queue to hold granted and waiting lock requests (RawLock).

Each queue is a bucket, so a queue can contain locks for resources with different hashes. However, each lock also stores the precise hash of its resource as a member. When lock-compatibility methods traverse the list, they ignore locks whose hashes differ. This gives the same concurrency yet avoids 3-level complexity.
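
As an illustration, the hash-filtering traversal described above can be sketched as follows. The Entry struct, the boolean exclusive flag standing in for okvl_mode, and can_grant() are simplified assumptions for this example, not the real lock_raw.h types.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical, simplified model of one lock-queue bucket: each entry stores
// the precise hash of the resource it locks, so one queue can hold locks for
// several distinct resources that merely hash to the same bucket.
struct Entry {
    uint32_t hash;      // precise hash of the locked resource
    bool     exclusive; // crude stand-in for okvl_mode
};

// Compatibility check in the spirit of check_compatiblity(): entries whose
// hash differs from the requested one are skipped entirely.
bool can_grant(const std::vector<Entry>& queue, uint32_t hash, bool exclusive) {
    for (const Entry& e : queue) {
        if (e.hash != hash) continue;               // different resource: ignore
        if (exclusive || e.exclusive) return false; // any X involvement conflicts
    }
    return true; // shared-shared, or no entry for this resource
}
```

Filtering by the stored precise hash is what lets a single flat list serve many resources without a per-resource sub-queue.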

Member Function Documentation

§ acquire()

w_error_codes RawLockQueue::acquire ( RawXct *  xct,
uint32_t  hash,
const okvl_mode &  mode,
int32_t  timeout_in_ms,
bool  check,
bool  wait,
bool  acquire,
RawLock **  out 
)

Adds a new lock in the given mode to this queue, waiting until it is granted.

Parameters
[in]  xct  the transaction that will own the new lock
[in]  hash  precise hash of the resource to lock
[in]  mode  requested lock mode
[in]  timeout_in_ms  maximum time to wait in milliseconds; a negative number means forever. Ignored for a conditional request.
[in]  check  if true, this method does not actually create a new lock object but only checks whether the requested lock mode can be granted
[in]  wait  if false, this method does not wait at all, and it leaves the inserted lock entry in the queue even if the lock was not granted immediately
[out]  out  pointer to the successfully acquired lock; NULL if the lock could not be acquired, except in the conditional==true case

check_only=true can give a false positive in the concurrent-unlock case, but it gives no false negatives, assuming a conflicting lock is not concurrently taken for the key. This assumption holds for our only check_only=true use case, the tentative NX lock check before inserting a new key, because we then hold an EX latch! Thus, this is a safe and efficient check for B-tree insertion.

Conditional locking is the standard way to take a lock in a DBMS without holding latches for a long time. The B-tree first requests a lock without releasing the latch (conditionally). If that fails, it releases the latch and locks unconditionally, which requires re-checking the LSN and re-latching after the lock is granted. The purpose of the conditional parameter is to avoid inserting the same lock entry twice when the first conditional attempt fails. When conditional==true, we leave the lock entry in the queue and return it in out even if it was not granted. The caller is then responsible for calling retry_acquire() after the failed acquire (which returns eCONDLOCKTIMEOUT on failure) or for releasing the lock. The lock is released at commit time in any case, but a waiting lock entry should be removed before the transaction does anything else.

Precondition
out != NULL
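
The caller-side conditional-then-unconditional protocol described above might look like the sketch below. Everything except the eCONDLOCKTIMEOUT code name (which the documentation uses) is hypothetical: the stub bodies, w_error_ok, and lock_with_latch_protocol() are invented for illustration and are not the real lock_raw.h API.

```cpp
#include <cassert>

// Hypothetical return codes standing in for the real error-code type.
enum w_error_codes { w_error_ok, eCONDLOCKTIMEOUT };

struct RawLock {};

// Stub: pretend the conditional attempt fails but, as documented, leaves the
// (ungranted) lock entry in the queue and returns it via out.
w_error_codes acquire_conditional(RawLock** out) {
    static RawLock pending;
    *out = &pending;          // entry stays in the queue even though not granted
    return eCONDLOCKTIMEOUT;  // documented failure code for conditional locking
}

// Stub: the unconditional retry on the already-inserted entry succeeds.
w_error_codes retry_acquire(RawLock** /*lock*/) { return w_error_ok; }

// Caller-side protocol from the acquire() documentation.
w_error_codes lock_with_latch_protocol(RawLock** out) {
    // 1. Try conditionally while still holding the page latch.
    if (acquire_conditional(out) == w_error_ok) return w_error_ok;
    // 2. Conditional attempt failed: release the latch (elided here), then
    //    wait on the already-inserted entry rather than inserting it twice.
    w_error_codes rc = retry_acquire(out);
    // 3. Once granted, the caller must re-latch and re-check the page LSN
    //    before using the page (elided here).
    return rc;
}
```

The key point the sketch shows: after a failed conditional attempt the caller retries on the same entry instead of calling acquire() again.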

§ atomic_lock_insert()

void RawLockQueue::atomic_lock_insert ( RawLock *  new_lock)

Atomically inserts the given lock into this queue. Called from acquire().

See Figure 4 and Sec 3.1 of [JUNG13].

§ check_compatiblity()

RawLockQueue::Compatibility RawLockQueue::check_compatiblity ( RawLock *  lock) const

Checks if the given lock can be granted. Called from acquire() after atomic_lock_insert() and release().

§ complete_acquire()

w_error_codes RawLockQueue::complete_acquire ( RawLock **  lock,
bool  wait,
bool  acquire,
int32_t  timeout_in_ms 
)

Subroutine of acquire(), retry_acquire().

§ delink()

bool RawLockQueue::delink ( RawLock *  predecessor,
RawLock *  target,
RawLock *  successor 
) const

Delinks a marked entry from the list and deallocates the entry object.

Returns
whether the delink was really done by this thread

target should be marked for death (target->next.is_marked()), but this is not a contract because of concurrent delinks. If it isn't (say target is already delinked), then the atomic CAS fails, returning false.

Note
Although it might sound odd, this method is const. It physically delinks an already-marked entry, so logically it does nothing.
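
A minimal sketch of the delink idea, assuming the mark bit is stored in the low bit of the next pointer (the real code uses its own markable-pointer representation): a single compare-and-swap on the predecessor swings the list past the marked target, and a failed CAS means another thread already performed the delink.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Simplified list node: the low bit of next encodes "marked for death".
struct Node {
    std::atomic<uintptr_t> next{0};
};

bool  is_marked(uintptr_t p) { return (p & 1) != 0; }
Node* address(uintptr_t p)   { return reinterpret_cast<Node*>(p & ~uintptr_t(1)); }

// Returns whether THIS call performed the delink. If another thread already
// swung predecessor->next past target, the CAS fails and we return false,
// matching the documented contract.
bool delink(Node* predecessor, Node* target, Node* successor) {
    uintptr_t expected = reinterpret_cast<uintptr_t>(target);   // unmarked link
    uintptr_t desired  = reinterpret_cast<uintptr_t>(successor);
    return predecessor->next.compare_exchange_strong(expected, desired);
}
```

Because the entry was already marked, the physical unlink changes no logical state, which is why the real method can be const.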

§ find_predecessor()

RawLock * RawLockQueue::find_predecessor ( RawLock *  lock) const

Returns the predecessor of the given lock. This removes marked entries it encounters.

Returns
predecessor of the given lock. NULL if not found.

§ peek_compatiblity()

bool RawLockQueue::peek_compatiblity ( RawXct *  xct,
uint32_t  hash,
const okvl_mode &  mode 
) const

Used for the check_only=true case, where many things are much simpler and faster.

This is analogous to the wait-free contains() of the lock-free linked list in [HERLIHY]. As noted in acquire(), this method is safe for the check_only case: the EX latch on the page guarantees that no new lock for the key can arrive concurrently.

Returns
whether the mode is permitted

§ release()

void RawLockQueue::release ( RawLock *  lock,
const lsn_t &  commit_lsn 
)

Releases the given lock from this queue, waking up others if necessary.

Parameters
[in]lockthe lock to release.
[in]commit_lsnLSN to update X-lock tag during SX-ELR.

§ retry_acquire()

w_error_codes RawLockQueue::retry_acquire ( RawLock **  lock,
bool  wait,
bool  acquire,
int32_t  timeout_in_ms 
)

Waits for the already-inserted lock entry. Used after a failed conditional locking.

See also
acquire()
Note
"lock" is a RawLock**, not a RawLock*. It might be cleared after another failure, or become a new lock when the acquisition is automatically retried.

§ tail()

RawLock * RawLockQueue::tail ( ) const

Returns the last entry of this queue, removing marked entries it encounters along the way; this is why we cannot keep a "tail" variable in the queue and must instead traverse it each time.

Returns
the last entry. Never returns NULL, but might be &head (meaning empty).

§ trigger_UNDO()

bool RawLockQueue::trigger_UNDO ( Compatibility &  compatibility)

§ update_xlock_tag()

void RawLockQueue::update_xlock_tag ( const lsn_t &  commit_lsn)

Makes sure x_lock_tag is at least the given LSN.
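The intended behavior is a monotonic "max" update. A minimal sketch, modeling lsn_t as a plain integer and omitting the _requests_latch protection that guards x_lock_tag in the real struct; the QueueTag wrapper is invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

using lsn_t = uint64_t; // simplified stand-in for the real lsn_t

struct QueueTag {
    lsn_t x_lock_tag = 0; // 0 stands in for lsn_t::null

    // Make sure x_lock_tag is at least the given LSN; it never moves backward.
    void update_xlock_tag(lsn_t commit_lsn) {
        x_lock_tag = std::max(x_lock_tag, commit_lsn);
    }
};
```

A smaller commit_lsn leaves the tag untouched, so concurrent releases in any order converge on the latest X-lock commit timestamp.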

§ wait_for()

w_error_codes RawLockQueue::wait_for ( RawLock *  new_lock,
int32_t  timeout_in_ms 
)

Sleeps until the lock is granted. Called from acquire() after check_compatiblity() if the lock was not immediately granted.

Member Data Documentation

§ head

RawLock RawLockQueue::head
mutable

The always-existing dummy entry at the head. head is never marked for death. Mutable because even find() physically removes something (though logically nothing).

§ loser_count

int RawLockQueue::loser_count = HAS_LOSER_COUNT
static

§ x_lock_tag

lsn_t RawLockQueue::x_lock_tag

Stores the commit timestamp of the latest transaction that released an X lock on this queue; holds lsn_t::null if no such transaction exists; protected by _requests_latch.


The documentation for this struct was generated from the following files: