Zero  0.1.0
lock_queue_t Class Reference

A lock queue to hold granted and waiting lock requests (lock_queue_entry_t's) for a given lock. More...

#include <lock_bucket.h>

Classes

struct  check_grant_result
 

Public Member Functions

 lock_queue_t (uint32_t hash)
 
 ~lock_queue_t ()
 
uint32_t hash () const
 
uint32_t hit_count () const
 

Private Member Functions

void increase_hit_count ()
 
lock_queue_t * next ()
 
void set_next (lock_queue_t *new_next)
 
const lsn_t & x_lock_tag () const
 
void update_x_lock_tag (const lsn_t &new_tag)
 
lock_queue_entry_t * find_request (const xct_lock_info_t *li)
 
void append_request (lock_queue_entry_t *myreq)
 
void detach_request (lock_queue_entry_t *myreq)
 
bool grant_request (lock_queue_entry_t *myreq, lsn_t &observed)
 
void check_can_grant (lock_queue_entry_t *myreq, check_grant_result &result)
 
bool _check_compatible (const okvl_mode &granted_mode, const okvl_mode &requested_mode, lock_queue_entry_t *other_request, bool proceeds_me, lsn_t &observed)
 
void wakeup_waiters (const okvl_mode &released_requested)
 

Static Private Member Functions

static lock_queue_t * allocate_lock_queue (uint32_t hash)
 
static void deallocate_lock_queue (lock_queue_t *obj)
 

Private Attributes

const uint32_t _hash
 precise hash for this lock queue. More...
 
uint32_t _hit_counts
 
uint64_t _release_version
 Monotonically increasing counter that is incremented when some xct releases some lock in this queue. More...
 
lock_queue_t * _next
 
srwlock_t _requests_latch
 
lsn_t _x_lock_tag
 
lock_queue_entry_t * _head
 
lock_queue_entry_t * _tail
 

Friends

class bucket_t
 
class lock_core_m
 

Detailed Description

A lock queue to hold granted and waiting lock requests (lock_queue_entry_t's) for a given lock.

NOTE: objects of this class are created and destroyed via their container (bucket_t), which calls allocate_lock_queue / deallocate_lock_queue.

WARNING: lock_queue_t's are currently only deleted at shutdown. There is no mechanism presently to prevent one thread from deleting a lock_queue_t out from under another thread. FIXME?

Constructor & Destructor Documentation

§ lock_queue_t()

lock_queue_t::lock_queue_t ( uint32_t  hash)
inline

§ ~lock_queue_t()

lock_queue_t::~lock_queue_t ( )
inline

Member Function Documentation

§ _check_compatible()

bool lock_queue_t::_check_compatible ( const okvl_mode &  granted_mode,
const okvl_mode &  requested_mode,
lock_queue_entry_t *  other_request,
bool  proceeds_me,
lsn_t &  observed 
)
inline private

§ allocate_lock_queue()

static lock_queue_t* lock_queue_t::allocate_lock_queue ( uint32_t  hash)
static private

Allocate a new queue object. Uses TLS cache in lock_core.cpp.
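
As a rough, self-contained illustration of that pattern (not the actual lock_core.cpp code; queue_stub, tls_free_list, and both function names are hypothetical stand-ins), a per-thread free list might look like this:

    #include <cstdint>

    struct queue_stub {
        queue_stub* next_free = nullptr;  // chains objects in the free list
        uint32_t hash = 0;
    };

    // One free list per thread, so allocation needs no synchronization.
    thread_local queue_stub* tls_free_list = nullptr;

    queue_stub* allocate_queue(uint32_t hash) {
        queue_stub* q = tls_free_list;
        if (q != nullptr) {
            tls_free_list = q->next_free;  // reuse a cached object
        } else {
            q = new queue_stub();          // cache miss: fall back to the heap
        }
        q->next_free = nullptr;
        q->hash = hash;
        return q;
    }

    void deallocate_queue(queue_stub* q) {
        q->next_free = tls_free_list;      // push back onto this thread's cache
        tls_free_list = q;
    }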

§ append_request()

void lock_queue_t::append_request ( lock_queue_entry_t *  myreq)
private

§ check_can_grant()

void lock_queue_t::check_can_grant ( lock_queue_entry_t *  myreq,
check_grant_result &  result 
)
private

Check if my request can be granted, and also check for deadlocks.
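
A minimal sketch of the general shape, assuming the check scans the requests that precede mine and records incompatible holders as input to deadlock detection (all types below are simplified stand-ins, not the real Zero classes):

    #include <vector>

    enum class mode { S, X };

    // Toy compatibility matrix: only shared/shared coexist.
    bool compatible(mode granted, mode requested) {
        return granted == mode::S && requested == mode::S;
    }

    struct entry { mode granted_mode; bool is_mine; };

    struct grant_result {
        bool can_grant = true;
        std::vector<const entry*> blockers;  // feeds deadlock detection
    };

    void check_can_grant_sketch(const std::vector<entry>& queue, mode requested,
                                grant_result& result) {
        for (const entry& e : queue) {
            if (e.is_mine) break;  // only requests preceding mine can block me
            if (!compatible(e.granted_mode, requested)) {
                result.can_grant = false;
                result.blockers.push_back(&e);
            }
        }
    }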

§ deallocate_lock_queue()

static void lock_queue_t::deallocate_lock_queue ( lock_queue_t *  obj)
static private

Deallocate a queue object. Uses the TLS cache in lock_core.cpp. Shutdown time only.

§ detach_request()

void lock_queue_t::detach_request ( lock_queue_entry_t *  myreq)
private

§ find_request()

lock_queue_entry_t* lock_queue_t::find_request ( const xct_lock_info_t *  li)
private

§ grant_request()

bool lock_queue_t::grant_request ( lock_queue_entry_t *  myreq,
lsn_t &  observed 
)
private

Try getting a lock.

Returns
whether it actually succeeded in acquiring the lock. Requires that the current thread is myreq->_thr.
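
One plausible caller-side shape, assuming the requesting thread appends its entry and then blocks until granted (the condition-variable plumbing below is illustrative only, not Zero's actual synchronization):

    #include <condition_variable>
    #include <mutex>

    struct waiter {
        std::mutex m;
        std::condition_variable cv;
        bool granted = false;
    };

    // Runs on the requesting thread (mirroring the myreq->_thr requirement).
    void acquire_sketch(waiter& w) {
        std::unique_lock<std::mutex> lk(w.m);
        while (!w.granted) {  // loop guards against spurious wakeups; the real
            w.cv.wait(lk);    // code would re-run its grant check each time
        }
    }

    // Runs on a releasing thread, e.g. from something like wakeup_waiters().
    void grant_sketch(waiter& w) {
        { std::lock_guard<std::mutex> g(w.m); w.granted = true; }
        w.cv.notify_one();
    }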

§ hash()

uint32_t lock_queue_t::hash ( ) const
inline

§ hit_count()

uint32_t lock_queue_t::hit_count ( ) const
inline

§ increase_hit_count()

void lock_queue_t::increase_hit_count ( )
inline private

§ next()

lock_queue_t* lock_queue_t::next ( )
inline private

Requires read access to the _queue_latch of the bucket containing us, if any.

§ set_next()

void lock_queue_t::set_next ( lock_queue_t *  new_next)
inline private

Requires write access to the _queue_latch of the bucket containing us, if any.

§ update_x_lock_tag()

void lock_queue_t::update_x_lock_tag ( const lsn_t &  new_tag)
inline private

Requires write access to _requests_latch; new_tag may be lsn_t::null.

§ wakeup_waiters()

void lock_queue_t::wakeup_waiters ( const okvl_mode &  released_requested)
private

Opportunistically wake up waiters. Called when some lock is released.
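
A hedged sketch of the idea: only waiters whose requested mode conflicted with the just-released mode can possibly make progress, so only those are poked (toy stand-in types; in the real code the woken waiters would re-run their own grant checks):

    #include <vector>

    enum class mode { S, X };

    bool compatible(mode a, mode b) { return a == mode::S && b == mode::S; }

    struct wait_entry { mode requested; bool waiting; bool signaled = false; };

    void wakeup_waiters_sketch(std::vector<wait_entry>& queue, mode released) {
        for (wait_entry& e : queue) {
            if (e.waiting && !compatible(released, e.requested)) {
                e.signaled = true;  // this waiter may have been blocked by the
                                    // released lock; signal it to retry its grant
            }
        }
    }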

§ x_lock_tag()

const lsn_t& lock_queue_t::x_lock_tag ( ) const
inline private

Requires read access to _requests_latch.

Friends And Related Function Documentation

§ bucket_t

friend class bucket_t
friend

§ lock_core_m

friend class lock_core_m
friend

Member Data Documentation

§ _hash

const uint32_t lock_queue_t::_hash
private

precise hash for this lock queue.

§ _head

lock_queue_entry_t* lock_queue_t::_head
private

The first entry in this queue or NULL if queue empty; protected by _requests_latch.

§ _hit_counts

uint32_t lock_queue_t::_hit_counts
private

How popular this lock is (approximate, so no latch protection). Intended for background cleanup to save memory, but not currently used. Someday, the queue may be reclaimed by background cleanup. FIXME

NOTE this is NOT a pin-count that is precisely incremented/decremented to control when deletion could occur.
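
In other words, a plain unsynchronized increment suffices here, because a lost update merely undercounts a heuristic (a sketch with hypothetical stand-in names):

    #include <cstdint>

    struct queue_stats {
        uint32_t hit_counts = 0;  // deliberately has no latch protection

        // Racy on purpose: concurrent increments may be lost, which is fine
        // for a popularity heuristic (contrast with a pin-count, which must
        // be exact to decide when deletion is safe).
        void increase_hit_count() { ++hit_counts; }
    };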

§ _next

lock_queue_t* lock_queue_t::_next
private

Forms a singly-linked list with the other queues in the same bucket as us; protected by the _queue_latch of the bucket containing us, if any.
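
A self-contained sketch of that arrangement, assuming the bucket walks its chain under a reader/writer latch to find the queue with a matching precise hash (q_node and bucket_stub are hypothetical stand-ins):

    #include <cstdint>
    #include <shared_mutex>

    struct q_node {
        uint32_t hash = 0;       // plays the role of _hash
        q_node* next = nullptr;  // plays the role of _next
    };

    struct bucket_stub {
        std::shared_mutex queue_latch;  // stand-in for the bucket's _queue_latch
        q_node* head = nullptr;

        q_node* find(uint32_t h) {
            std::shared_lock<std::shared_mutex> g(queue_latch);  // read access
            for (q_node* q = head; q != nullptr; q = q->next) {
                if (q->hash == h) return q;
            }
            return nullptr;
        }
    };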

§ _release_version

uint64_t lock_queue_t::_release_version
private

Monotonically increasing counter that is incremented when some xct releases some lock in this queue.

This is NOT protected by a latch because it is used only for a quick check that opportunistically prevents false deadlock detection. This counter was previously just a true/false flag (the _wait_map_obsolete flag) in the wait map, but it turns out that granular checking prevents false deadlock detection more precisely and efficiently.
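
The quick-check pattern might look like the following sketch (find_cycle stands in for the expensive wait-map analysis; the unlatched read is allowed to be approximate because a mismatch only suppresses a possibly-false positive):

    #include <cstdint>
    #include <functional>

    bool detect_deadlock_sketch(const uint64_t& release_version,
                                const std::function<bool()>& find_cycle) {
        const uint64_t before = release_version;  // sample without any latch
        const bool cycle = find_cycle();          // build wait map, search for cycle
        if (cycle && release_version != before) {
            // Some lock in this queue was released during the analysis, so the
            // wait map may be stale: do not declare a (possibly false) deadlock.
            return false;
        }
        return cycle;
    }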

§ _requests_latch

srwlock_t lock_queue_t::_requests_latch
private

R/W latch protecting the remaining fields as well as the fields of our lock_queue_entry_t's.

§ _tail

lock_queue_entry_t* lock_queue_t::_tail
private

The last entry in this queue or NULL if queue empty; protected by _requests_latch.
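
Together with _head, this supports a FIFO append under the write latch, preserving the arrival order that the grant check scans; a minimal sketch with stand-in types:

    #include <shared_mutex>

    struct req { req* next = nullptr; };

    struct queue_stub {
        std::shared_mutex requests_latch;  // stand-in for _requests_latch
        req* head = nullptr;
        req* tail = nullptr;

        void append(req* r) {              // cf. append_request()
            std::unique_lock<std::shared_mutex> g(requests_latch);  // write access
            r->next = nullptr;
            if (tail == nullptr) {
                head = tail = r;           // queue was empty
            } else {
                tail->next = r;            // link after the current last entry
                tail = r;
            }
        }
    };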

§ _x_lock_tag

lsn_t lock_queue_t::_x_lock_tag
private

Stores the commit timestamp of the latest transaction that released an X lock on this queue; holds lsn_t::null if no such transaction exists; protected by _requests_latch.
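
Assuming update_x_lock_tag keeps only the newest timestamp (a guess consistent with "latest transaction" above, not confirmed by this page), the update might reduce to a max under the write latch:

    #include <cstdint>

    struct tag_holder {
        uint64_t x_lock_tag = 0;  // uint64_t stands in for lsn_t; 0 plays lsn_t::null

        // Caller must hold the write latch (cf. update_x_lock_tag's contract).
        void update_x_lock_tag(uint64_t new_tag) {
            if (new_tag > x_lock_tag) {
                x_lock_tag = new_tag;  // remember only the latest commit timestamp
            }
        }
    };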


The documentation for this class was generated from the following file: lock_bucket.h