Struct cfxcore::consensus::consensus_inner::ConsensusGraphInner
pub struct ConsensusGraphInner {
pub data_man: Arc<BlockDataManager>,
pub pos_verifier: Arc<PosVerifier>,
pub inner_conf: ConsensusInnerConfig,
pub pow_config: ProofOfWorkConfig,
pub pow: Arc<PowComputer>,
pub arena: Slab<ConsensusGraphNode>,
pub hash_to_arena_indices: HashMap<H256, usize>,
pub current_difficulty: U256,
/* private fields */
}
§Implementation details of Eras, Timer chain and Checkpoints
An era in Conflux is defined based on block height: every era_block_count heights correspond to one era. For example, if era_block_count is 50000, the block at height 0 (the original genesis) is the era genesis of the first era, and the blocks at height 50000 are era genesis blocks of the following era. Note that one era period may have multiple era genesis blocks. Eventually, only one era genesis block and its subtree become dominant, and all other era genesis blocks together with their subtrees are discarded. The definition of eras enables Conflux to form checkpoints at stabilized era genesis blocks.
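The height-to-era mapping above can be sketched as follows; this is an illustrative helper, not a function of the crate:

```rust
// Sketch: map a block height to the height of its era genesis,
// given the configured `era_block_count` (e.g., 50000).
fn era_genesis_height(height: u64, era_block_count: u64) -> u64 {
    // Each era spans `era_block_count` heights; its era genesis blocks
    // sit at the first height of the era.
    (height / era_block_count) * era_block_count
}
```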
§Implementation details of the Timer chain
The timer chain contains special blocks whose PoW qualities are significantly higher than those of normal blocks. The goal of the timer chain is to provide a slowly growing longest chain that indicates the time elapsed between two blocks. The timer chain also provides a force-confirmation rule that enables us to safely form checkpoints.
Any block whose PoW quality is timer_chain_block_difficulty_ratio times higher than its target difficulty is a timer block. The longest chain of timer blocks (counting both parent edges and reference edges) is the timer chain. When timer_chain_beta is large enough, malicious attackers can neither control the timer chain nor stop its growth. We use Timer(G) to denote the number of timer chain blocks in G, and TimerDis(b_1, b_2) to denote Timer(Past(b_1)) - Timer(Past(b_2)). In case b_1 \in Future(b_2), TimerDis(b_1, b_2) is a good indicator of how much time has passed between the generation of the two blocks.
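The timer-block qualification test can be sketched as below; the u64 arguments stand in for the crate's actual U256 values and are purely illustrative:

```rust
// Sketch: a block is a timer block when its PoW quality exceeds its
// target difficulty by at least `timer_chain_block_difficulty_ratio`.
fn is_timer_block(pow_quality: u64, difficulty: u64, ratio: u64) -> bool {
    pow_quality >= ratio * difficulty
}
```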
A block b in G is considered force-confirmed if 1) there are timer_chain_beta consecutive timer chain blocks in the subtree of b and 2) there are at least timer_chain_beta blocks after these blocks (not necessarily in the subtree of b). The force-confirm rule overrides any GHAST weight rule, i.e., new blocks will always be generated under b.
§Implementation details of the GHAST algorithm
Conflux uses the Greedy Heaviest Adaptive SubTree (GHAST) algorithm to select a chain from the genesis block to one of the leaf blocks as the pivot chain. For each block b, the GHAST algorithm computes whether b is adaptive:

B = Past(b)
force_confirm = the force-confirm point of b in the view of Past(b)
a = b.parent
adaptive = False
Let f(x) = 2 * SubTW(B, x) - SubTW(B, x.parent) + x.parent.weight
Let g(x) = adaptive_weight_beta * b.diff
while a != force_confirm do
    if TimerDis(a, b) >= timer_chain_beta and f(a) < g(a) then
        adaptive = True
    a = a.parent
To efficiently compute adaptive, we maintain a link-cut tree called adaptive_weight_tree. The value for x in the link-cut tree is 2 * SubTW(B, x) + x.parent.weight - SubTW(B, x.parent). Note that we need to do a special caterpillar update in the link-cut tree, i.e., given a node x, we need to update the values of all nodes a such that a is a child of one of the nodes on the path from the genesis to x.
For an adaptive block, its weight is calculated in a special way: if its PoW quality is adaptive_heavy_weight_ratio times higher than the normal difficulty, its weight is adaptive_heavy_weight_ratio instead of one; otherwise, its weight is zero. The goal of adaptive weight is to deal with potential liveness attacks that balance two subtrees. Note that when computing adaptive we only consider the nodes after force_confirm.
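The weight rule above can be sketched as follows; the u64 parameters are illustrative stand-ins for the crate's U256 types:

```rust
// Sketch: a non-adaptive block has weight one; an adaptive block has
// weight `adaptive_heavy_weight_ratio` if its PoW quality clears the
// heavy threshold, and zero otherwise.
fn block_weight(pow_quality: u64, difficulty: u64, adaptive: bool,
                adaptive_heavy_weight_ratio: u64) -> u64 {
    if !adaptive {
        1
    } else if pow_quality >= adaptive_heavy_weight_ratio * difficulty {
        adaptive_heavy_weight_ratio
    } else {
        0
    }
}
```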
§Implementation details of partial invalid blocks
A block may become partial invalid because 1) it chooses an incorrect parent or 2) it is generated as an adaptive block when it should not be. In normal situations, we verify every block we receive and determine whether it is partial invalid. A partial invalid block b receives no reward. Normal nodes also refrain from directly or indirectly referencing b until TimerDis(b, new_block) is greater than or equal to timer_dis_delta; essentially, normal nodes ignore partial invalid blocks for a while. We implement this via our inactive_dependency_cnt field. Last but not least, we exclude partial invalid blocks from timer chain consideration: they are not timer blocks!
§Implementation details of checkpoints
Our consensus engine will form a checkpoint pair (a, b) given a DAG state G if:
- b is force confirmed in G
- a is force confirmed in Past(b)
Now we are safe to remove all blocks that are not in Future(a). For those blocks that are in Future(a) but not in Subtree(a), we can redirect their parent to a. We call a the cur_era_genesis_block and b the cur_era_stable_block.
We no longer need to check partial invalid blocks that do not reference b (directly or indirectly), because such blocks would never enter the timer chain. Our assumption is that the timer chain will not reorg over a length greater than timer_chain_beta. Blocks that reference b but are not under the subtree of a are partial invalid by default, so we can ignore them as well. Therefore a can be treated as a new genesis block. We check the possibility of forming checkpoints only at era boundaries.
Note that we assume the force confirmation point always moves along parental edges, i.e., it is not possible for the point to move to a sibling tree. This assumption holds if timer_chain_beta and timer_chain_block_difficulty_ratio are set to large enough values.
§Introduction of blaming mechanism
Blaming is used to provide proofs for the state roots of specific pivot blocks. The rationale behind it is as follows. Verifying the state roots of blocks off the pivot chain is very costly and sometimes impractical, e.g., when the block refers to another block that is not in the current era, so it is preferable to avoid this verification if possible. Normally, Conflux only needs to store the correct state root in the header of a pivot block to provide proofs for light nodes. However, the pivot chain may oscillate close to the ledger tail, which means that a block that is off-pivot at some point may become a pivot block later. If we do not verify the state root in the header of that block, we cannot guarantee its correctness once it becomes a pivot block. Therefore, if we do not verify the state roots of off-pivot blocks, we cannot guarantee the correctness of state roots in pivot blocks. One may argue that we could switch the pivot chain whenever an incorrect state root is observed in a pivot block. However, this makes the check for correct parent selection rely on state root checking. Since Conflux is an inclusive protocol that adopts off-pivot blocks into its final ledger, it would then need to verify the correctness of parent selection of off-pivot blocks, which in turn relies on state verification of all parent candidates of those blocks. This eventually leads to state root verification on all blocks, including off-pivot ones, and violates the original goal of saving the cost of state root verification for off-pivot blocks.
We therefore allow incorrect state roots in pivot block headers and use the blaming mechanism to enable proof generation for the correct state root. A full/archive node verifies the deferred state root and the blaming information stored in the header of each pivot block. It blames the blocks with incorrect information and stores the blaming result in the header of the newly mined block. The blaming result is simply a count representing the distance (in number of blocks) between the last correct block on the pivot chain and the newly mined block. For example, consider the blocks Bi-1, Bi, Bi+1, Bi+2, Bi+3 and assume the blaming count in Bi+3 is 2. This means that when Bi+3 was mined, the node considered Bi's information correct, while the information in Bi+1 and Bi+2 was wrong. Therefore, the node recovers the true deferred state roots (DSR) of Bi+1, Bi+2, and Bi+3 by computing locally, then computes keccak(DSRi+3, keccak(DSRi+2, DSRi+1)) and stores this hash in the header of Bi+3 as its final deferred state root. As a special case, if the blaming count is 0, the final deferred state root of the block is simply its original deferred state root, i.e., DSRi+3 for block Bi+3 in the above case.
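The nested-hash accumulation can be sketched as a left fold. This is a structural sketch only: the real protocol hashes H256 roots with keccak, while here `combine` is a stand-in built on std's DefaultHasher, and the u64 roots are illustrative:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for keccak(a, b); only the fold structure matters here.
fn combine(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    a.hash(&mut h);
    b.hash(&mut h);
    h.finish()
}

// `dsrs` lists the locally recovered roots from the oldest blamed block
// up to the newly mined block (non-empty). With blame count 2 and roots
// [DSRi+1, DSRi+2, DSRi+3], this computes
// combine(DSRi+3, combine(DSRi+2, DSRi+1)); with blame count 0 (a single
// root), it returns that root unchanged.
fn final_deferred_state_root(dsrs: &[u64]) -> u64 {
    let mut acc = dsrs[0];
    for &d in &dsrs[1..] {
        acc = combine(d, acc);
    }
    acc
}
```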
Computing the reward for a block relies on correct blaming behavior of the block. If the block is a pivot block when computing its reward, it is required that:
- the block correctly chooses its parent;
- the block contains the correct deferred state root;
- the block correctly blames all its previous blocks following parent edges.
If the block is an off-pivot block when computing its reward, it is required that:
- the block correctly chooses its parent;
- the block correctly blames the blocks in the intersection of pivot chain blocks and all its previous blocks following parent edges. (This is to encourage the node generating the off-pivot block to keep verifying pivot chain blocks.)
To provide proof of a state root to a light node (or a full node when it tries to recover from a checkpoint), the protocol goes through the following steps. Let's assume the verifier has a subtree of block headers which includes the block whose state root is to be verified.
- The verifier node gets a merkle path whose merkle root corresponds to the state root after executing block Bi. Let's call it the path root, which is to be verified.
- Assume the deferred count is 2; the verifier node gets block header Bi+2 whose deferred state root should be the state root of Bi.
- The verifier node locally searches for the first block whose header information is correct, starting from block Bi+2 along the pivot chain. The correctness of a block's header information is decided based on the ratio of blamers in the subtree of the block: if the ratio is small enough, the information is considered correct. Assume the first such block is Bj.
- The verifier then searches backward along the pivot chain from Bj for the block whose blaming count is larger than or equal to its distance from block Bi+2. Let's call this block Bk.
- The verifier asks the prover (a full or archive node) for the deferred state root of block Bk and its DSR vector, i.e., [..., DSRi+2, ...].
- The verifier verifies that the accumulated keccak hash of [..., DSRi+2, ...] equals the deferred state root of Bk, and then verifies that DSRi+2 equals the path root of Bi.
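The backward search for Bk can be sketched as follows. The `blames` slice and the linear scan are illustrative, not the crate's API: `blames[h]` stands for the blaming count stored in the header at pivot height `h`:

```rust
// Sketch: starting from height `j` (block Bj) and scanning backward,
// find the first height k whose blaming count covers the distance back
// to `target` (the height of Bi+2), i.e., blames[k] >= k - target.
fn find_bk(blames: &[u32], j: usize, target: usize) -> Option<usize> {
    (target..=j).rev().find(|&k| blames[k] as usize >= k - target)
}
```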
In ConsensusGraphInner, every block corresponds to a ConsensusGraphNode, and each node has an internal index. This enables the internal implementation to use integer indices instead of H256 block hashes for speed.
Fields§
§data_man: Arc<BlockDataManager>
data_man is the handle to access raw block data
§pos_verifier: Arc<PosVerifier>
§inner_conf: ConsensusInnerConfig
§pow_config: ProofOfWorkConfig
§pow: Arc<PowComputer>
§arena: Slab<ConsensusGraphNode>
This slab holds the consensus graph node data; the array index is the internal index.
§hash_to_arena_indices: HashMap<H256, usize>
Maps block hashes to internal indices.
§current_difficulty: U256
It maintains the expected difficulty of the next locally mined block.
Implementations§
impl ConsensusGraphInner
pub fn with_era_genesis( pow_config: ProofOfWorkConfig, pow: Arc<PowComputer>, pos_verifier: Arc<PosVerifier>, data_man: Arc<BlockDataManager>, inner_conf: ConsensusInnerConfig, cur_era_genesis_block_hash: &H256, cur_era_stable_block_hash: &H256 ) -> Self
pub fn current_era_genesis_seq_num(&self) -> u64
pub fn get_pivot_block_arena_index(&self, height: u64) -> usize
The caller should ensure that height is within the current self.pivot_chain range. Otherwise the function may panic.
pub fn get_pivot_height(&self) -> u64
pub fn height_to_pivot_index(&self, height: u64) -> usize
pub fn pivot_index_to_height(&self, pivot_index: usize) -> u64
pub fn set_initial_sequence_number(&mut self, initial_sn: u64)
pub fn get_cur_era_genesis_height(&self) -> u64
pub fn get_epoch_block_hashes(&self, epoch_arena_index: usize) -> Vec<H256>
pub fn get_ordered_executable_epoch_blocks(&self, index: usize) -> &Vec<usize>
pub fn get_or_compute_skipped_epoch_blocks( &mut self, index: usize ) -> &Vec<H256>
pub fn get_skipped_epoch_blocks(&self, index: usize) -> Option<&Vec<H256>>
pub fn find_first_index_with_correct_state_of( &self, pivot_index: usize, blame_bound: Option<u32>, min_vote_count: usize ) -> Option<usize>
pub fn find_first_trusted_starting_from( &self, from: usize, blame_bound: Option<u32>, min_vote_count: usize ) -> Option<usize>
pub fn check_mining_adaptive_block( &mut self, parent_arena_index: usize, referee_indices: Vec<usize>, difficulty: U256, pos_reference: Option<PosBlockId> ) -> bool
pub fn insert_out_era_block( &mut self, block_header: &BlockHeader, partial_invalid: bool ) -> (u64, usize)
Try to insert an outside-era block and return its sequence number. If both its parent and referees are empty, we will not insert it into arena.
pub fn get_pivot_reward_index( &self, epoch_arena_index: usize ) -> Option<(usize, usize)>
Return the consensus graph indexes of the pivot block where the rewards of its epoch should be computed.
(The original documentation contains an ASCII diagram of the pivot chain ...[Bi]...[Ba]...[Bj]...[Bt], where [Bi] is the epoch whose reward is to be computed, [Ba] is the block bounding the anticone penalty, [Bj] is the block whose deferred state root holds the reward state for [Bi], and [Bi1], [Bi2] are blocks in [Bi]'s epoch.)
Let i([Bi]) be the arena index of [Bi] and h([Bi]) be the height of [Bi].
Params: epoch_arena_index, the arena index of [Bj]. Returns: Option<(i([Bi]), i([Ba]))>.
The gap between [Bj] and [Bi], i.e., h([Bj]) - h([Bi]), is REWARD_EPOCH_COUNT. Let D be the gap between [Bi] and the parent of the genesis of the next era. The gap between [Ba] and [Bi] is min(ANTICONE_PENALTY_UPPER_EPOCH_COUNT, D).
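The height arithmetic can be sketched as below; the constant values are illustrative, not necessarily the crate's actual configuration:

```rust
// Illustrative constants, not the crate's actual values.
const REWARD_EPOCH_COUNT: u64 = 12;
const ANTICONE_PENALTY_UPPER_EPOCH_COUNT: u64 = 10;

// Sketch: given h([Bj]) and the gap `d` between [Bi] and the parent of
// the next era's genesis, compute (h([Bi]), h([Ba])).
// Assumes h_bj >= REWARD_EPOCH_COUNT.
fn reward_heights(h_bj: u64, d: u64) -> (u64, u64) {
    // h([Bi]) lags h([Bj]) by REWARD_EPOCH_COUNT.
    let h_bi = h_bj - REWARD_EPOCH_COUNT;
    // h([Ba]) follows h([Bi]) by min(ANTICONE_PENALTY_UPPER_EPOCH_COUNT, D).
    let h_ba = h_bi + d.min(ANTICONE_PENALTY_UPPER_EPOCH_COUNT);
    (h_bi, h_ba)
}
```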
pub fn expected_difficulty(&self, parent_hash: &H256) -> U256
Compute the expected difficulty of a new block given its parent. Assume the difficulty adjustment period is p. The period boundaries are [i*p+1, (i+1)*p]. The genesis block does not belong to any period, and the first period is [1, p]. If the parent height is less than p, the current block belongs to the first period and its difficulty should be the initial difficulty. Otherwise, we need to consider two cases:
- The parent height is at a period boundary, i.e., the height is exactly divisible by p. In this case, the new block and its parent do not belong to the same period, and the expected difficulty of the new block should be computed based on the parent's period.
- The parent height is not at a period boundary. In this case, the new block and its parent belong to the same period, and hence its difficulty should be the same as its parent's.
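The period logic above can be sketched as a predicate; this is an illustrative helper, not a function of the crate:

```rust
// Sketch: decide whether a new block's difficulty must be recomputed,
// given the parent height and the adjustment period `p`.
fn needs_difficulty_adjustment(parent_height: u64, p: u64) -> bool {
    // First period [1, p]: keep the initial difficulty.
    if parent_height < p {
        return false;
    }
    // A parent exactly at a period boundary means the new block opens a
    // new period, so its difficulty is recomputed from the parent's
    // period; otherwise the parent's difficulty carries over.
    parent_height % p == 0
}
```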
pub fn best_block_hash(&self) -> H256
pub fn best_block_number(&self) -> u64
pub fn best_state_epoch_number(&self) -> u64
Return the latest epoch number whose state has been enqueued.
The state may not exist, so the caller should wait for the result if its state will be used.
pub fn best_state_block_hash(&self) -> H256
pub fn get_state_block_with_delay( &self, block_hash: &H256, delay: usize ) -> Result<&H256, String>
pub fn best_epoch_number(&self) -> u64
pub fn best_timer_chain_height(&self) -> u64
pub fn get_pivot_hash_from_epoch_number( &self, epoch_number: u64 ) -> Result<EpochId, String>
Get the pivot hash from an epoch number. This function will try to query the data manager if the hash is not available in the ConsensusGraph because it is outside the current era.
pub fn epoch_hash(&self, epoch_number: u64) -> Option<H256>
This function differs from get_pivot_hash_from_epoch_number in that it only returns the hash if it is in the current consensus graph.
pub fn block_hashes_by_epoch( &self, epoch_number: u64 ) -> Result<Vec<H256>, String>
pub fn skipped_block_hashes_by_epoch( &self, epoch_number: u64 ) -> Result<Vec<H256>, String>
pub fn bounded_terminal_block_hashes( &mut self, referee_bound: usize ) -> Vec<H256>
pub fn get_block_epoch_number(&self, hash: &H256) -> Option<u64>
pub fn all_blocks_with_topo_order(&self) -> Vec<H256>
pub fn block_execution_results_by_hash( &self, hash: &H256, update_cache: bool ) -> Option<BlockExecutionResultWithEpoch>
Return the block receipts in the current pivot view and the epoch block hash. If hash is not executed in the current view, return None.
pub fn is_timer_block(&self, block_hash: &H256) -> Option<bool>
pub fn is_adaptive(&self, block_hash: &H256) -> Option<bool>
pub fn is_partial_invalid(&self, block_hash: &H256) -> Option<bool>
pub fn is_pending(&self, block_hash: &H256) -> Option<bool>
pub fn get_transaction_info(&self, tx_hash: &H256) -> Option<TransactionInfo>
pub fn check_block_pivot_assumption( &self, pivot_hash: &H256, epoch: u64 ) -> Result<(), String>
pub fn total_processed_block_count(&self) -> u64
pub fn get_trusted_blame_block( &self, checkpoint_hash: &H256, plus_depth: usize ) -> Option<H256>
pub fn get_trusted_blame_block_for_snapshot( &self, snapshot_epoch_id: &EpochId ) -> Option<H256>
Find a trusted blame block for snapshot full sync
pub fn get_to_sync_epoch_id(&self) -> EpochId
Return the epoch whose state we are going to sync.
pub fn reset_epoch_number_in_epoch(&mut self, pivot_arena_index: usize)
pub fn recover_state_valid(&mut self)
Find the first state-valid block on the pivot chain after state_boundary_height and set state_valid for it and its blamed blocks. This block is found according to blame_ratio.
pub fn block_node(&self, block_hash: &H256) -> Option<&ConsensusGraphNode>
pub fn best_terminals( &mut self, best_index: usize, ref_bound: usize ) -> Vec<H256>
Return the list of best terminals while respecting a bound on referencing edges. We sort the terminals by their LCAs so that excluding the last few terminals in the sorted order does not change the parent selection result.
pub fn finish_block_recovery(&mut self)
pub fn get_pivot_chain_and_weight( &self, height_range: Option<(u64, u64)> ) -> Result<Vec<(H256, U256)>, String>
pub fn get_subtree(&self, root_block: &H256) -> Option<Vec<H256>>
Return None if root_block is not in the consensus graph.
pub fn get_next_pivot_decision( &self, parent_decision_hash: &H256, confirmed_height: u64 ) -> Option<(u64, H256)>
pub fn validate_pivot_decision( &self, ancestor_hash: &H256, me_hash: &H256 ) -> bool
pub fn choose_correct_parent( &mut self, parent_arena_index: usize, referee_indices: Vec<usize>, pos_reference: Option<PosBlockId> ) -> usize
Return the (possibly new) parent.
pub fn pivot_block_processed(&self, pivot_hash: &H256) -> bool
pub fn is_confirmed_by_pos(&self, block_hash: &H256) -> bool
Return whether a block has been confirmed by the pivot decision of the latest committed PoS block.
This function needs persisted BlockExecutionResult to respond correctly for blocks before the checkpoint. If the data are not persisted, it will return false for blocks before the checkpoint even though they have been confirmed.
pub fn latest_epoch_confirmed_by_pos(&self) -> &(H256, u64)
Return the latest PoS pivot decision processed in ConsensusGraph.
Trait Implementations§
impl MallocSizeOf for ConsensusGraphInner
fn size_of(&self, ops: &mut MallocSizeOfOps) -> usize
impl RichTreeGraph for ConsensusGraphInner
impl StateMaintenanceTrait for ConsensusGraphInner
fn get_pivot_hash_from_epoch_number( &self, epoch_number: u64 ) -> Result<EpochId, String>
fn get_epoch_execution_commitment_with_db( &self, block_hash: &EpochId ) -> Option<EpochExecutionCommitment>
fn remove_epoch_execution_commitment_from_db(&self, block_hash: &EpochId)
Auto Trait Implementations§
impl !RefUnwindSafe for ConsensusGraphInner
impl Send for ConsensusGraphInner
impl Sync for ConsensusGraphInner
impl Unpin for ConsensusGraphInner
impl !UnwindSafe for ConsensusGraphInner
Blanket Implementations§
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
impl<T> Conv for T
impl<T> DAG for T where T: TreeGraph
fn predecessor_edges( &self, node_index: <T as Graph>::NodeIndex ) -> Vec<<T as Graph>::NodeIndex>
fn topological_sort_with_order_indicator<OrderIndicator, FOrd, Set>( &self, index_set: Set, order_indicator: FOrd ) -> Vec<Self::NodeIndex>
fn topological_sort<Set>(&self, index_set: Set) -> Vec<Self::NodeIndex>
impl<T> ElementSatisfy<ElementNoConstrain> for T
fn to_constrain_object(&self) -> &ElementNoConstrain
fn to_constrain_object_mut(&mut self) -> &mut ElementNoConstrain
impl<T> FmtForward for T
fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary
fn fmt_display(self) -> FmtDisplay<Self> where Self: Display
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex
fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal
fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex
fn fmt_list(self) -> FmtList<Self> where &'a Self: for<'a> IntoIterator
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> Pipe for T where T: ?Sized
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R
fn pipe_borrow_mut<'a, B, R>( &'a mut self, func: impl FnOnce(&'a mut B) -> R ) -> R
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R
fn pipe_deref<'a, T, R>(&'a self, func: impl FnOnce(&'a T) -> R) -> R
impl<T> Pointable for T
impl<T> RichDAG for T where T: RichTreeGraph + DAG
fn successor_edges( &self, node_index: <T as Graph>::NodeIndex ) -> Vec<<T as Graph>::NodeIndex>
fn get_future_with_stop_condition<FStop, Set, Iter>( &self, index_set: Iter, stop_condition: FStop ) -> Set
fn get_future<Set, Iter>(&self, index_set: Iter) -> Set
impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self
fn tap_deref_dbg<T>(self, func: impl FnOnce(&T)) -> Self