Zero 0.1.0

bf_tree_cb_t Class Reference

Control block in the new buffer pool class.
#include <bf_tree_cb.h>
Public Member Functions

    bf_tree_cb_t ()
    void init (PageID pid=0, lsn_t page_lsn=lsn_t::null)
    void clear_latch ()
    void pin_for_restore ()
    void unpin_for_restore ()
    bool is_pinned_for_restore ()
    lsn_t get_page_lsn () const
    void set_page_lsn (lsn_t lsn)
    lsn_t get_persisted_lsn () const
    bool is_dirty () const
    lsn_t get_rec_lsn () const
    lsn_t get_next_persisted_lsn () const
    void mark_persisted_lsn ()
    lsn_t get_next_rec_lsn () const
    void notify_write ()
    void notify_write_logbased (lsn_t archived_lsn)
        This is used by the decoupled (a.k.a. log-based) cleaner.
    uint16_t get_log_volume () const
    void increment_log_volume (uint16_t c)
    void set_log_volume (uint16_t c)
    void set_check_recovery (bool chk)
    bool pin ()
    void unpin ()
    bool prepare_for_eviction ()
    bool is_in_use () const
    void inc_ref_count ()
    void inc_ref_count_ex ()
    void reset_ref_count_ex ()
    bf_tree_cb_t (const bf_tree_cb_t &)
    bf_tree_cb_t & operator= (const bf_tree_cb_t &)
    latch_t * latchp () const
    latch_t & latch () const
Public Attributes

    PageID _pid
    std::atomic<int32_t> _pin_cnt
        Count of pins on this block. See the class comments; protected by ??
    uint16_t _ref_count
    uint16_t _ref_count_ex
        Reference count incremented only by X-latching.
    uint8_t _fill13
    std::atomic<bool> _pinned_for_restore
    std::atomic<bool> _used
        True if this block is actually used.
    std::atomic<bool> _swizzled
        Whether this page is swizzled from the parent.
    lsn_t _page_lsn
    lsn_t _persisted_lsn
    lsn_t _rec_lsn
    lsn_t _next_persisted_lsn
    lsn_t _next_rec_lsn
    uint32_t _log_volume
        Log volume generated on this page (for page_img logrec compression; see xct_logger.h).
    uint16_t _swizzled_ptr_cnt_hint
    bool _check_recovery
    int8_t _fill64
    latch_t _latch
Static Public Attributes

    static const uint16_t BP_MAX_REFCOUNT = 1024
Detailed Description

Control block in the new buffer pool class.
The design of the control block has undergone at least two major overhauls. The first happened in the summer of 2011, when we built our first version of Foster B-trees on top of Shore-MT and added a few relatively minor things to the buffer pool. The second came in early 2012, when we overhauled the entire buffer pool code to implement page swizzling and perform related surgeries; at that point, all buffer pool classes were renamed and rewritten from scratch.
The value of _pin_cnt is incremented 1) when the page is non-swizzled and some thread fixes it, 2) when the page is swizzled, and 3) when one of the page's child pages is brought into the buffer pool. It is decremented on the corresponding anti-actions (e.g., unfix). Increments, decrements, and reads must all be done atomically.

The block can be evicted from the buffer pool only when this value is 0. When the block is selected as an eviction victim, the eviction thread must atomically change the value from 0 to -1; in other words, it must use an atomic compare-and-swap (CAS).

Whenever this value is -1, everyone must ignore this block as non-existent, like NULL. This is similar to the "in-transit" state in the old buffer pool, but simpler and more efficient. A thread that increments the value must also check it: the increment must be an atomic CAS (not just an atomic increment) because the original value might be -1 at the time of the action. In some cases, however, a plain atomic increment or decrement suffices.

A decrement is always safe as a plain atomic decrement because (assuming there is no bug) you are always decrementing from a value of 1 or larger. An increment is safe as a plain atomic increment only when you are sure there is at least one other pin on the block, such as when incrementing for case 3) above.
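To make the protocol concrete, here is a minimal sketch of the pin/unpin/eviction logic described above. It is an illustrative reconstruction, not the actual Zero implementation; cb_sketch and its members are hypothetical stand-ins for the real control block.

    #include <atomic>
    #include <cstdint>

    // Hypothetical stand-in for the control block's pin-count protocol.
    struct cb_sketch {
        std::atomic<int32_t> _pin_cnt{0};

        // Pin via CAS: must not succeed if the block is being evicted (-1).
        bool pin() {
            int32_t cur = _pin_cnt.load();
            while (cur >= 0) {
                // On failure, compare_exchange_weak reloads cur; retry
                // unless the value has become -1.
                if (_pin_cnt.compare_exchange_weak(cur, cur + 1)) {
                    return true;   // pinned
                }
            }
            return false;          // block is "in transit": ignore it
        }

        // A plain atomic decrement is safe: the caller holds a pin, so
        // the value is at least 1.
        void unpin() {
            _pin_cnt.fetch_sub(1);
        }

        // Eviction succeeds only when no pins are held: CAS from 0 to -1.
        bool prepare_for_eviction() {
            int32_t expected = 0;
            return _pin_cnt.compare_exchange_strong(expected, -1);
        }
    };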
Member Function Documentation

bf_tree_cb_t::bf_tree_cb_t (const bf_tree_cb_t &)  [inline]

void bf_tree_cb_t::clear_latch ()  [inline]

    Clears the latch.

void bf_tree_cb_t::init (PageID pid=0, lsn_t page_lsn=lsn_t::null)  [inline]

    Initializes all fields; called by fix when fetching a new page.

void bf_tree_cb_t::notify_write_logbased (lsn_t archived_lsn)  [inline]

    This is used by the decoupled (a.k.a. log-based) cleaner.

bf_tree_cb_t & bf_tree_cb_t::operator= (const bf_tree_cb_t &)

The remaining member functions are inline and have no further documentation.
Member Data Documentation

bool bf_tree_cb_t::_check_recovery

    This is used for frames that are prefetched during buffer pool warmup. They might need recovery the next time they are fixed.
uint8_t bf_tree_cb_t::_fill13

int8_t bf_tree_cb_t::_fill64

latch_t bf_tree_cb_t::_latch

uint32_t bf_tree_cb_t::_log_volume

    Log volume generated on this page (for page_img logrec compression; see xct_logger.h).

lsn_t bf_tree_cb_t::_next_persisted_lsn

lsn_t bf_tree_cb_t::_next_rec_lsn

lsn_t bf_tree_cb_t::_page_lsn

lsn_t bf_tree_cb_t::_persisted_lsn

PageID bf_tree_cb_t::_pid

    Short page ID of the page currently pinned on this block (we don't have the stnum in the buffer pool). Protected by ??
std::atomic<int32_t> bf_tree_cb_t::_pin_cnt

    Count of pins on this block. See the class comments; protected by ??

std::atomic<bool> bf_tree_cb_t::_pinned_for_restore

lsn_t bf_tree_cb_t::_rec_lsn

uint16_t bf_tree_cb_t::_ref_count
    Reference count (for the clock algorithm). Approximate, so not protected by latches. We increment it whenever (re-)fixing the page in the buffer pool, and cap it at BP_MAX_REFCOUNT to avoid the scalability bottleneck caused by excessive cache-coherence traffic (cacheline ping-pongs between sockets). The counter still has enough granularity to separate cold from hot pages. The clock algorithm decrements the counter whenever it visits the page.
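    To illustrate, the following sketch shows a capped refcount in the style described above. It is an assumption-laden reconstruction, not the actual Zero code; inc_ref_count_sketch and clock_visit are hypothetical names.

        #include <cstdint>

        static const uint16_t BP_MAX_REFCOUNT = 1024;

        // Called on every (re-)fix. The check-then-increment is
        // deliberately racy: the counter is only an approximate hint,
        // and skipping the write once the cap is reached keeps hot
        // pages from repeatedly dirtying the cacheline.
        inline void inc_ref_count_sketch(uint16_t& ref_count) {
            if (ref_count < BP_MAX_REFCOUNT) {
                ++ref_count;
            }
        }

        // Called when the clock hand visits the frame: decrement the
        // counter and report whether the frame has gone cold.
        inline bool clock_visit(uint16_t& ref_count) {
            if (ref_count > 0) {
                --ref_count;
                return false;   // still warm; keep the page
            }
            return true;        // cold; candidate for eviction
        }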
uint16_t bf_tree_cb_t::_ref_count_ex

    Reference count incremented only by X-latching.

std::atomic<bool> bf_tree_cb_t::_swizzled

    Whether this page is swizzled from the parent.

uint16_t bf_tree_cb_t::_swizzled_ptr_cnt_hint

    Number of swizzled pointers to children; protected by ??

std::atomic<bool> bf_tree_cb_t::_used

    True if this block is actually used.
const uint16_t bf_tree_cb_t::BP_MAX_REFCOUNT = 1024  [static]

    Maximum value of the per-frame refcount (reference counter). We cap the refcount to avoid contention on the cacheline of the frame's control block (due to ping-pongs between sockets) when multiple sockets read-access the same frame. The maximum value should still leave enough granularity to separate cold from hot pages.

    CS TODO: but doesn't the latch itself already incur such cacheline bouncing? If so, we could simply move the refcount inside latch_t (which must have the same size as a cacheline) and be done with it; no cache-coherence overhead beyond the latching itself would be expected. We could reuse the field _total_count in latch_t, or even split it into two 16-bit integers: one for shared and one for exclusive latches. That field is currently only used for tests, but it doesn't make sense to count CB references and latch acquisitions in separate variables.
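    As a rough illustration of that suggestion, the sketch below packs the two counters into a single 32-bit word. This is speculation on the TODO, not existing code: packed_counts, inc_shared, and inc_exclusive are hypothetical names, and the cap reuses BP_MAX_REFCOUNT so the low half never carries into the high half.

        #include <atomic>
        #include <cstdint>

        static const uint16_t BP_MAX_REFCOUNT = 1024;

        // Hypothetical packing of latch_t's 32-bit _total_count into two
        // 16-bit halves: low bits count shared latches, high bits
        // exclusive ones.
        struct packed_counts {
            std::atomic<uint32_t> _total_count{0};

            // Increment the shared count, capped so the low half never
            // carries into the high half.
            void inc_shared() {
                uint32_t cur = _total_count.load();
                while ((cur & 0xFFFFu) < BP_MAX_REFCOUNT) {
                    if (_total_count.compare_exchange_weak(cur, cur + 1)) {
                        break;
                    }
                }
            }

            // Increment the exclusive count in the high half, same cap.
            void inc_exclusive() {
                uint32_t cur = _total_count.load();
                while ((cur >> 16) < BP_MAX_REFCOUNT) {
                    if (_total_count.compare_exchange_weak(cur, cur + (1u << 16))) {
                        break;
                    }
                }
            }

            uint16_t shared_count() const {
                return static_cast<uint16_t>(_total_count.load() & 0xFFFFu);
            }
            uint16_t exclusive_count() const {
                return static_cast<uint16_t>(_total_count.load() >> 16);
            }
        };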