pub struct ConsensusGraphInner {
    pub data_man: Arc<BlockDataManager>,
    pub pos_verifier: Arc<PosVerifier>,
    pub inner_conf: ConsensusInnerConfig,
    pub pow_config: ProofOfWorkConfig,
    pub pow: Arc<PowComputer>,
    pub arena: Slab<ConsensusGraphNode>,
    pub hash_to_arena_indices: HashMap<H256, usize>,
    pub current_difficulty: U256,
    /* private fields */
}

§Implementation details of Eras, Timer chain and Checkpoints

An era in Conflux is defined based on block height: every era_block_count heights correspond to one era. For example, if era_block_count is 50000, then the block at height 0 (the original genesis) is the era genesis of the first era, and the blocks at height 50000 are era genesis blocks of the following era. Note that it is possible to have multiple era genesis blocks for one era period. Eventually, only one era genesis block and its subtree become dominant, and all other era genesis blocks together with their subtrees are discarded. This definition of eras enables Conflux to form checkpoints at the stabilized era genesis blocks.
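For instance, the era genesis height of any block follows directly from the definition above: the block height rounded down to a multiple of era_block_count. The helper below is a hypothetical illustration, not the crate's API:

```rust
// Illustrative helper: the era genesis height of a block is its height
// rounded down to the nearest multiple of `era_block_count`.
fn era_genesis_height(height: u64, era_block_count: u64) -> u64 {
    height / era_block_count * era_block_count
}

fn main() {
    // With era_block_count = 50000, a block at height 123456 belongs to the
    // era whose genesis blocks sit at height 100000.
    println!("{}", era_genesis_height(123456, 50000));
}
```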

§Implementation details of the Timer chain

The timer chain contains special blocks whose PoW qualities are significantly higher than those of normal blocks. The goal of the timer chain is to provide a slowly growing longest chain that indicates the time elapsed between two blocks. The timer chain also provides a force-confirmation rule that enables us to safely form checkpoints.

Any block whose PoW quality is timer_chain_block_difficulty_ratio times higher than its target difficulty is a timer block. The longest chain of timer blocks (counting both parent edges and reference edges) is the timer chain. When timer_chain_beta is large enough, malicious attackers can neither control the timer chain nor stop its growth. We use Timer(G) to denote the number of timer chain blocks in G, and TimerDis(b_1, b_2) to denote Timer(Past(b_1)) - Timer(Past(b_2)). In case b_2 \in Future(b_1), TimerDis(b_1, b_2) is a good indicator of how much time has passed between the generation of the two blocks.
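The timer-block test can be sketched as a single comparison. This is a minimal illustration; the real implementation compares U256 quantities, for which u128 stands in here:

```rust
// A block qualifies as a timer block when its PoW quality reaches
// `ratio` (timer_chain_block_difficulty_ratio) times its target difficulty.
// u128 stands in for the crate's U256 arithmetic.
fn is_timer_block(pow_quality: u128, difficulty: u128, ratio: u128) -> bool {
    pow_quality >= difficulty.saturating_mul(ratio)
}

fn main() {
    // difficulty 10, ratio 240: quality must be at least 2400.
    println!("{}", is_timer_block(2400, 10, 240));
}
```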

A block b in G is considered force-confirmed if 1) there are timer_chain_beta consecutive timer chain blocks in the subtree of b and 2) there are at least timer_chain_beta blocks after these blocks (not necessarily in the subtree of b). The force-confirmation rule overrides any GHAST weight rule, i.e., new blocks will always be generated under b.

§Implementation details of the GHAST algorithm

Conflux uses the Greedy Heaviest Adaptive SubTree (GHAST) algorithm to select a chain from the genesis block to one of the leaf blocks as the pivot chain. For each block b, the GHAST algorithm computes whether b is adaptive:

B = Past(b)
force_confirm = the force-confirm point of b in the view of Past(b)
a = b.parent
adaptive = False
Let f(x) = 2 * SubTW(B, x) - SubTW(B, x.parent) + x.parent.weight
Let g(x) = adaptive_weight_beta * b.diff
while a != force_confirm do
    if TimerDis(a, b) >= timer_chain_beta and f(a) < g(a) then
        adaptive = True
    a = a.parent
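The loop above can be rendered as runnable Rust under simplifying assumptions: blocks live in a flat arena, SubTW(B, ·) and TimerDis(·, b) are precomputed arrays, and all names are illustrative rather than the crate's real API:

```rust
// Illustrative arena node; the real ConsensusGraphNode carries much more.
struct Node {
    parent: usize,
    weight: i128,
}

// Walk from b's parent up to the force-confirm point, checking the GHAST
// condition f(a) < g at every ancestor that is old enough (TimerDis check).
fn is_adaptive(
    arena: &[Node],
    subtw: &[i128],     // SubTW(B, x) for each node x
    timer_dis: &[i128], // TimerDis(x, b) for each node x
    mut a: usize,       // start at b.parent
    force_confirm: usize,
    timer_chain_beta: i128,
    g: i128,            // g = adaptive_weight_beta * b.diff
) -> bool {
    while a != force_confirm {
        let p = arena[a].parent;
        // f(a) = 2 * SubTW(B, a) - SubTW(B, a.parent) + a.parent.weight
        let f = 2 * subtw[a] - subtw[p] + arena[p].weight;
        if timer_dis[a] >= timer_chain_beta && f < g {
            return true;
        }
        a = p;
    }
    false
}

fn main() {
    let arena = vec![
        Node { parent: 0, weight: 10 }, // genesis / force-confirm point
        Node { parent: 0, weight: 5 },
        Node { parent: 1, weight: 3 }, // b.parent
    ];
    let subtw = vec![20, 8, 3];
    let timer_dis = vec![100, 100, 100];
    println!("{}", is_adaptive(&arena, &subtw, &timer_dis, 2, 0, 10, 30));
}
```

The real implementation avoids this linear walk by querying a link-cut tree, as described next, but the condition it evaluates is the same.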

To compute adaptive efficiently, we maintain a link-cut tree called adaptive_weight_tree. The value for x in the link-cut tree is 2 * SubTW(B, x) + x.parent.weight - SubTW(B, x.parent). Note that we need to do a special caterpillar update in the link-cut tree: given a node X, we need to update the values of all nodes A such that A is a child of one of the nodes on the path from the genesis to X.

For an adaptive block, its weight is calculated in a special way. If its PoW quality is adaptive_heavy_weight_ratio times higher than the normal difficulty, its weight will be adaptive_heavy_weight_ratio instead of one; otherwise, its weight will be zero. The goal of adaptive weight is to deal with potential liveness attacks that balance two subtrees. Note that when computing adaptive we only consider the nodes after force_confirm.
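The adaptive-weight rule above can be sketched as follows; u128 stands in for the crate's U256 arithmetic, and the function name is illustrative:

```rust
// Weight of a block under the adaptive-weight rule:
// - a normal (non-adaptive) block always weighs 1;
// - an adaptive block weighs `heavy_ratio` (adaptive_heavy_weight_ratio)
//   if its PoW quality clears the heavy threshold, and 0 otherwise.
fn block_weight(pow_quality: u128, difficulty: u128, adaptive: bool, heavy_ratio: u128) -> u128 {
    if !adaptive {
        1
    } else if pow_quality >= difficulty.saturating_mul(heavy_ratio) {
        heavy_ratio
    } else {
        0
    }
}

fn main() {
    // Adaptive block with quality 5000 against difficulty 10, ratio 250.
    println!("{}", block_weight(5000, 10, true, 250));
}
```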

§Implementation details of partial invalid blocks

A block may become partial invalid because 1) it chooses an incorrect parent or 2) it is marked adaptive when it should not be. In normal situations, we should verify every block we receive and determine whether it is partial invalid or not. A partial invalid block b will not receive any reward. Normal nodes will also refrain from directly or indirectly referencing b until TimerDis(b, new_block) is greater than or equal to timer_dis_delta; in other words, normal nodes ignore partial invalid blocks for a while. We implement this via the inactive_dependency_cnt field. Last but not least, we exclude partial invalid blocks from timer chain consideration: they are not timer blocks!

§Implementation details of checkpoints

Our consensus engine will form a checkpoint pair (a, b) given a DAG state G if:

  1. b is force confirmed in G
  2. a is force confirmed in Past(b)

Now we are safe to remove all blocks that are not in Future(a). For those blocks that are in Future(a) but not in Subtree(a), we can also set a as their parent. We call a the cur_era_genesis_block and b the cur_era_stable_block.

We no longer need to check partial invalid blocks that do not reference b (directly or indirectly), because such blocks can never go into the timer chain. Our assumption is that the timer chain will not reorganize over a length greater than timer_chain_beta. Blocks that reference b but are not in the subtree of a are partial invalid by default, so we can ignore them as well. Therefore a can be treated as a new genesis block. We check the possibility of forming checkpoints only at era boundaries.

Note that we assume the force-confirmation point always moves along parental edges, i.e., it cannot move to a sibling subtree. This assumption holds if timer_chain_beta and timer_chain_difficulty_ratio are set to large enough values.

§Introduction of blaming mechanism

Blaming is used to provide proofs for the state root of a specific pivot block. The rationale is as follows. Verifying the state roots of blocks off the pivot chain is very costly and sometimes impractical, e.g., when the block refers to another block that is not in the current era, so it is preferable to avoid this verification when possible. Normally, Conflux only needs to store the correct state root in the header of each pivot block to provide proofs for light nodes. However, the pivot chain may oscillate near the ledger tail, which means that a block that is off-pivot at some point may become a pivot block later. If we do not verify the state root in the header of that block while it is off-pivot, we cannot guarantee the correctness of that state root once the block becomes pivot. Therefore, if we do not verify the state roots of off-pivot blocks, we cannot guarantee the correctness of state roots in pivot blocks. One may argue that we could simply switch the pivot chain whenever an incorrect state root is observed in a pivot block. However, this makes the check for correct parent selection rely on state root checking. Since Conflux is an inclusive protocol that adopts off-pivot blocks into its final ledger, it would then need to verify the correctness of parent selection for off-pivot blocks, which in turn relies on state verification for all parent candidates of those blocks. This eventually leads to state root verification on all blocks, including off-pivot ones, and violates the original goal of saving the cost of state root verification for off-pivot blocks.

We therefore allow incorrect state roots in pivot block headers and use the blaming mechanism to enable proof generation for the correct state root. A full/archive node verifies the deferred state root and the blaming information stored in the header of each pivot block. It blames the blocks with incorrect information and stores the blaming result in the header of its newly mined block. The blaming result is simply a count representing the distance (in number of blocks) between the last correct block on the pivot chain and the newly mined block. For example, consider the blocks Bi-1, Bi, Bi+1, Bi+2, Bi+3, and assume the blaming count in Bi+3 is 2. This means that when Bi+3 was mined, the node considered Bi’s information correct and the information in Bi+1 and Bi+2 wrong. The node therefore recovers the true deferred state roots (DSR) of Bi+1, Bi+2, and Bi+3 by computing them locally, computes keccak(DSRi+3, keccak(DSRi+2, DSRi+1)), and stores that hash in the header of Bi+3 as its final deferred state root. As a special case, if the blaming count is 0, the final deferred state root of the block is simply its original deferred state root, i.e., DSRi+3 for block Bi+3 in the example above.
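The nested-hash computation above can be sketched as a left fold over the recovered roots, oldest first. This is a simplified illustration: std's DefaultHasher stands in for keccak-256 and u64 values stand in for 32-byte roots:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for keccak(a, b); the real implementation hashes two 32-byte
// state roots with keccak-256.
fn combine(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// `roots` holds the locally recovered deferred state roots, oldest first
// (DSRi+1, DSRi+2, DSRi+3 for a blaming count of 2). With a blaming count
// of 0, `roots` has one entry and the result is that root itself.
fn final_deferred_root(roots: &[u64]) -> u64 {
    let mut acc = roots[0];
    for &r in &roots[1..] {
        // Matches keccak(DSR_newer, keccak(..., ...)) nesting.
        acc = combine(r, acc);
    }
    acc
}

fn main() {
    println!("{}", final_deferred_root(&[7, 8, 9]));
}
```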

Computing the reward for a block relies on correct blaming behavior of the block. If the block is a pivot block when computing its reward, it is required that:

  1. the block correctly chooses its parent;
  2. the block contains the correct deferred state root;
  3. the block correctly blames all its previous blocks following parent edges.

If the block is an off-pivot block when computing its reward, it is required that:

  1. the block correctly chooses its parent;
  2. the block correctly blames the blocks in the intersection of pivot chain blocks and all its previous blocks following parent edges. (This is to encourage the node generating the off-pivot block to keep verifying pivot chain blocks.)

To provide a proof of a state root to a light node (or to a full node when it tries to recover from a checkpoint), the protocol goes through the following steps. Let’s assume the verifier has a subtree of block headers that includes the block whose state root is to be verified.

  1. The verifier node gets a merkle path whose merkle root corresponds to the state root after executing block Bi. Let’s call it the path root which is to be verified.

  2. Assuming the deferred count is 2, the verifier node gets the header of block Bi+2, whose deferred state root should be the state root of Bi.

  3. The verifier node locally searches for the first block whose header information is correct, starting from block Bi+2 along the pivot chain. The correctness of a block’s header information is decided based on the ratio of blamers in the subtree of the block: if the ratio is small enough, the information is considered correct. Assume the first such block is block Bj.

  4. The verifier then searches backward along the pivot chain from Bj for the block whose blaming count is larger than or equal to its distance from block Bi+2. Let’s call this block Bk.

  5. The verifier asks the prover (a full or archive node) for the deferred state root of block Bk and its DSR vector, i.e., […, DSRi+2, …].

  6. The verifier verifies that the accumulated keccak hash of […, DSRi+2, …] equals the deferred state root of Bk, and then verifies that DSRi+2 equals the path root of Bi.

In ConsensusGraphInner, every block corresponds to a ConsensusGraphNode, and each node has an internal index. This enables the internal implementation to use integer indices instead of H256 block hashes for efficiency.

Fields§

§data_man: Arc<BlockDataManager>

data_man is the handle to access raw block data

§pos_verifier: Arc<PosVerifier>

§inner_conf: ConsensusInnerConfig

§pow_config: ProofOfWorkConfig

§pow: Arc<PowComputer>

§arena: Slab<ConsensusGraphNode>

This slab holds the consensus graph node data; the array index is the internal index.

§hash_to_arena_indices: HashMap<H256, usize>

It maps block hashes to internal indices.

§current_difficulty: U256

It maintains the expected difficulty of the next locally mined block.

Implementations§

impl ConsensusGraphInner

pub fn with_era_genesis( pow_config: ProofOfWorkConfig, pow: Arc<PowComputer>, pos_verifier: Arc<PosVerifier>, data_man: Arc<BlockDataManager>, inner_conf: ConsensusInnerConfig, cur_era_genesis_block_hash: &H256, cur_era_stable_block_hash: &H256 ) -> Self

pub fn current_era_genesis_seq_num(&self) -> u64

pub fn get_pivot_block_arena_index(&self, height: u64) -> usize

The caller should ensure that height is within the current self.pivot_chain range. Otherwise the function may panic.

pub fn get_pivot_height(&self) -> u64

pub fn height_to_pivot_index(&self, height: u64) -> usize

pub fn pivot_index_to_height(&self, pivot_index: usize) -> u64

pub fn set_initial_sequence_number(&mut self, initial_sn: u64)

pub fn get_cur_era_genesis_height(&self) -> u64

pub fn get_epoch_block_hashes(&self, epoch_arena_index: usize) -> Vec<H256>

pub fn get_ordered_executable_epoch_blocks(&self, index: usize) -> &Vec<usize>

pub fn get_or_compute_skipped_epoch_blocks( &mut self, index: usize ) -> &Vec<H256>

pub fn get_skipped_epoch_blocks(&self, index: usize) -> Option<&Vec<H256>>

pub fn find_first_index_with_correct_state_of( &self, pivot_index: usize, blame_bound: Option<u32>, min_vote_count: usize ) -> Option<usize>

pub fn find_first_trusted_starting_from( &self, from: usize, blame_bound: Option<u32>, min_vote_count: usize ) -> Option<usize>

pub fn check_mining_adaptive_block( &mut self, parent_arena_index: usize, referee_indices: Vec<usize>, difficulty: U256, pos_reference: Option<PosBlockId> ) -> bool

pub fn insert_out_era_block( &mut self, block_header: &BlockHeader, partial_invalid: bool ) -> (u64, usize)

Try to insert an out-of-era block and return its sequence number. If both its parent and its referees are empty, we will not insert it into the arena.

pub fn get_pivot_reward_index( &self, epoch_arena_index: usize ) -> Option<(usize, usize)>

Return the consensus graph indexes of the pivot block where the rewards of its epoch should be computed.

   epoch to
 compute reward       Block with          Block holding
 (Block epoch)      cared anticone      the reward state
      |                   |          ([Bj]'s state as deferred root)
 |  [Bi1]  |              |                    |
 |     \   |              |                    |
-|---[Bi]--|----------[Ba]---------[Bj]-----[Bt]
 |     /   |
 |  [Bi2]  |

Let i([Bi]) be the arena index of [Bi] and h([Bi]) be the height of [Bi].

Params: epoch_arena_index is the arena index of [Bj]. Return: Option<(i([Bi]), i([Ba]))>

The gap between [Bj] and [Bi], i.e., h([Bj]) - h([Bi]), is REWARD_EPOCH_COUNT. Let D be the gap between the parent of the genesis of the next era and [Bi]. The gap between [Ba] and [Bi] is min(ANTICONE_PENALTY_UPPER_EPOCH_COUNT, D).

pub fn expected_difficulty(&self, parent_hash: &H256) -> U256

Compute the expected difficulty of a new block given its parent. Assume the difficulty adjustment period is p. The period boundaries are [i*p + 1, (i+1)*p]. The genesis block does not belong to any period, and the first period is [1, p]. If the parent height is less than p, the current block belongs to the first period, and its difficulty should be the initial difficulty. Otherwise, we need to consider 2 cases:

  1. The parent height is at a period boundary, i.e., the height is exactly divisible by p. In this case, the new block and its parent do not belong to the same period. The expected difficulty of the new block should be computed based on the situation of the parent’s period.

  2. The parent height is not at a period boundary. In this case, the new block and its parent belong to the same period, and hence, its difficulty should be the same as its parent’s.
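The period bookkeeping described above can be sketched numerically. The helpers below are illustrative, not the crate's API; the actual difficulty adjustment formula is elsewhere:

```rust
// Periods are [i*p + 1, (i+1)*p]; genesis (height 0) belongs to no period,
// so it maps to period 0 here, and the first real period is 1.
fn period_of(height: u64, p: u64) -> u64 {
    (height + p - 1) / p
}

// A child starts a new period exactly when its parent's height is divisible
// by p (the parent sits at a period boundary); genesis is a special case.
fn same_period_as_parent(parent_height: u64, p: u64) -> bool {
    parent_height != 0 && parent_height % p != 0
}

fn main() {
    let p = 5;
    // Heights 1..=5 form period 1; height 6 opens period 2.
    println!("{} {}", period_of(5, p), period_of(6, p));
}
```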

source

pub fn best_block_hash(&self) -> H256

source

pub fn best_block_number(&self) -> u64

source

pub fn best_state_epoch_number(&self) -> u64

Return the latest epoch number whose state has been enqueued.

The state may not exist, so the caller should wait for the result if its state will be used.

source

pub fn best_state_block_hash(&self) -> H256

source

pub fn get_state_block_with_delay( &self, block_hash: &H256, delay: usize ) -> Result<&H256, String>

source

pub fn best_epoch_number(&self) -> u64

source

pub fn best_timer_chain_height(&self) -> u64

source

pub fn get_pivot_hash_from_epoch_number( &self, epoch_number: u64 ) -> Result<EpochId, String>

Get the pivot hash from an epoch number. This function will try to query the data manager if it is not available in the ConsensusGraph due to out of the current era.

source

pub fn epoch_hash(&self, epoch_number: u64) -> Option<H256>

This function differs from get_pivot_hash_from_epoch_number in that it only returns the hash if it is in the current consensus graph.

source

pub fn block_hashes_by_epoch( &self, epoch_number: u64 ) -> Result<Vec<H256>, String>

source

pub fn skipped_block_hashes_by_epoch( &self, epoch_number: u64 ) -> Result<Vec<H256>, String>

source

pub fn bounded_terminal_block_hashes( &mut self, referee_bound: usize ) -> Vec<H256>

source

pub fn get_block_epoch_number(&self, hash: &H256) -> Option<u64>

source

pub fn all_blocks_with_topo_order(&self) -> Vec<H256>

source

pub fn block_execution_results_by_hash( &self, hash: &H256, update_cache: bool ) -> Option<BlockExecutionResultWithEpoch>

Return the block receipts in the current pivot view and the epoch block hash. If hash is not executed in the current view, return None.

source

pub fn is_timer_block(&self, block_hash: &H256) -> Option<bool>

source

pub fn is_adaptive(&self, block_hash: &H256) -> Option<bool>

source

pub fn is_partial_invalid(&self, block_hash: &H256) -> Option<bool>

source

pub fn is_pending(&self, block_hash: &H256) -> Option<bool>

source

pub fn get_transaction_info(&self, tx_hash: &H256) -> Option<TransactionInfo>

source

pub fn check_block_pivot_assumption( &self, pivot_hash: &H256, epoch: u64 ) -> Result<(), String>

source

pub fn total_processed_block_count(&self) -> u64

source

pub fn get_trusted_blame_block( &self, checkpoint_hash: &H256, plus_depth: usize ) -> Option<H256>

source

pub fn get_trusted_blame_block_for_snapshot( &self, snapshot_epoch_id: &EpochId ) -> Option<H256>

Find a trusted blame block for snapshot full sync

source

pub fn get_to_sync_epoch_id(&self) -> EpochId

Return the epoch that we are going to sync the state

source

pub fn reset_epoch_number_in_epoch(&mut self, pivot_arena_index: usize)

source

pub fn recover_state_valid(&mut self)

Find the first state valid block on the pivot chain after state_boundary_height and set state_valid of it and its blamed blocks. This block is found according to blame_ratio.

source

pub fn block_node(&self, block_hash: &H256) -> Option<&ConsensusGraphNode>

source

pub fn best_terminals( &mut self, best_index: usize, ref_bound: usize ) -> Vec<H256>

Return the list of best terminals when respecting a bound (for referencing edges). We sort the terminals based on its lca so that it will not change the parent selection results if we exclude last few terminals in the sorted order.

source

pub fn finish_block_recovery(&mut self)

source

pub fn get_pivot_chain_and_weight( &self, height_range: Option<(u64, u64)> ) -> Result<Vec<(H256, U256)>, String>

source

pub fn get_subtree(&self, root_block: &H256) -> Option<Vec<H256>>

Return None if root_block is not in consensus.

source

pub fn get_next_pivot_decision( &self, parent_decision_hash: &H256, confirmed_height: u64 ) -> Option<(u64, H256)>

source

pub fn validate_pivot_decision( &self, ancestor_hash: &H256, me_hash: &H256 ) -> bool

source

pub fn choose_correct_parent( &mut self, parent_arena_index: usize, referee_indices: Vec<usize>, pos_reference: Option<PosBlockId> ) -> usize

Return possibly new parent.

source

pub fn pivot_block_processed(&self, pivot_hash: &H256) -> bool

source

pub fn is_confirmed_by_pos(&self, block_hash: &H256) -> bool

Return if a block has been confirmed by the pivot decision by the latest committed PoS block.

This function needs persisted BlockExecutionResult to respond correctly for blocks before the checkpoint. If the data are not persisted, it will return false for blocks before the checkpoint even though they have been confirmed.

source

pub fn latest_epoch_confirmed_by_pos(&self) -> &(H256, u64)

Return the latest PoS pivot decision processed in ConsensusGraph.

Trait Implementations§

impl Graph for ConsensusGraphInner

impl MallocSizeOf for ConsensusGraphInner

fn size_of(&self, ops: &mut MallocSizeOfOps) -> usize

Measure the heap usage of all descendant heap-allocated structures, but not the space taken up by the value itself.

impl RichTreeGraph for ConsensusGraphInner

fn children(&self, node_index: Self::NodeIndex) -> Vec<Self::NodeIndex>

fn referrers(&self, node_index: Self::NodeIndex) -> Vec<Self::NodeIndex>

impl StateMaintenanceTrait for ConsensusGraphInner

impl TreeGraph for ConsensusGraphInner

fn parent(&self, node_index: Self::NodeIndex) -> Option<Self::NodeIndex>

fn referees(&self, node_index: Self::NodeIndex) -> Vec<Self::NodeIndex>