feat(dag): implement DAGKnight adaptive consensus protocol
Phase 13 Milestone 1 - DAGKnight Protocol Implementation:

- Add LatencyTracker for network propagation delay measurement
  - Rolling statistics (mean, stddev, P95, P99)
  - Anticone growth rate tracking
  - Configurable sample window (1000 samples)
- Implement DagKnightManager extending GHOSTDAG
  - Adaptive k parameter based on observed network latency
  - Probabilistic confirmation time estimation
  - Confidence levels (Low/Medium/High/VeryHigh)
  - ConfirmationStatus with depth and finality tracking
- Add BlockRateConfig for throughput scaling
  - Standard: 10 BPS (100ms block time) - current
  - Enhanced: 32 BPS (31ms block time) - target
  - Maximum: 100 BPS (10ms block time) - stretch goal
  - Auto-adjusted merge/finality/pruning depths per config
- Utility functions for network analysis
  - calculate_optimal_k() for k parameter optimization
  - estimate_throughput() for TPS projection

Based on DAGKnight paper (2022) and Kaspa 2025 roadmap.
parent e20e5cb11f
commit e2a6a10bee
4 changed files with 1032 additions and 0 deletions
483	crates/synor-dag/src/dagknight.rs	Normal file

@@ -0,0 +1,483 @@
//! DAGKnight adaptive consensus protocol.
//!
//! DAGKnight is an evolution of GHOSTDAG that eliminates fixed network delay
//! assumptions. Instead of using a static k parameter, DAGKnight adapts based
//! on observed network conditions.
//!
//! # Key Improvements Over GHOSTDAG
//!
//! 1. **Adaptive K Parameter**: Adjusts based on measured network latency
//! 2. **Probabilistic Confirmation**: Provides confidence-based finality estimates
//! 3. **No Fixed Delay Assumption**: Learns actual network behavior
//! 4. **Faster Confirmation**: Converges faster under good network conditions
//!
//! # Algorithm Overview
//!
//! DAGKnight maintains the core GHOSTDAG blue set selection but adds:
//! - Network latency tracking via `LatencyTracker`
//! - Dynamic k calculation based on observed anticone growth
//! - Probabilistic confirmation time estimation
//!
//! # References
//!
//! - DAGKnight Paper (2022): "DAGKnight: A Parameterless GHOSTDAG"
//! - Kaspa 2025 Roadmap: Implementation plans for production use

use std::sync::Arc;
use std::time::Duration;

use parking_lot::RwLock;

use crate::{
    dag::BlockDag,
    ghostdag::{GhostdagData, GhostdagError, GhostdagManager},
    latency::{LatencyStats, LatencyTracker},
    reachability::ReachabilityStore,
    BlockId, BlueScore, GHOSTDAG_K,
};

/// Minimum adaptive k value (security lower bound).
const MIN_ADAPTIVE_K: u8 = 8;

/// Maximum adaptive k value (performance upper bound).
const MAX_ADAPTIVE_K: u8 = 64;

/// Default k when insufficient latency data is available.
const DEFAULT_K: u8 = GHOSTDAG_K;

/// Number of samples required before adapting k.
const MIN_SAMPLES_FOR_ADAPTATION: usize = 100;

/// Block rate (blocks per second) - used for k calculation.
/// At 10 BPS with 100ms block time, this is the baseline.
const BLOCK_RATE_BPS: f64 = 10.0;

/// Safety margin multiplier for k calculation.
/// Higher values = more conservative (safer but lower throughput).
const SAFETY_MARGIN: f64 = 1.5;

/// Confirmation confidence levels.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum ConfirmationConfidence {
    /// ~68% confidence (1 sigma).
    Low,
    /// ~95% confidence (2 sigma).
    Medium,
    /// ~99.7% confidence (3 sigma).
    High,
    /// ~99.99% confidence (4 sigma).
    VeryHigh,
}

impl ConfirmationConfidence {
    /// Returns the sigma multiplier for this confidence level.
    fn sigma_multiplier(&self) -> f64 {
        match self {
            ConfirmationConfidence::Low => 1.0,
            ConfirmationConfidence::Medium => 2.0,
            ConfirmationConfidence::High => 3.0,
            ConfirmationConfidence::VeryHigh => 4.0,
        }
    }
}

/// Confirmation status for a block.
#[derive(Clone, Debug)]
pub struct ConfirmationStatus {
    /// Block being queried.
    pub block_id: BlockId,
    /// Current blue score depth from virtual tip.
    pub depth: u64,
    /// Estimated time to reach requested confidence.
    pub estimated_time: Duration,
    /// Current confidence level achieved.
    pub current_confidence: f64,
    /// Whether the block is considered final.
    pub is_final: bool,
}

/// DAGKnight manager extending GHOSTDAG with adaptive consensus.
pub struct DagKnightManager {
    /// Underlying GHOSTDAG manager.
    ghostdag: Arc<GhostdagManager>,
    /// The DAG structure.
    dag: Arc<BlockDag>,
    /// Reachability queries.
    reachability: Arc<ReachabilityStore>,
    /// Network latency tracker.
    latency_tracker: Arc<LatencyTracker>,
    /// Current adaptive k value.
    adaptive_k: RwLock<u8>,
    /// Block rate setting.
    block_rate_bps: f64,
}

impl DagKnightManager {
    /// Creates a new DAGKnight manager.
    pub fn new(
        dag: Arc<BlockDag>,
        reachability: Arc<ReachabilityStore>,
    ) -> Self {
        let ghostdag = Arc::new(GhostdagManager::new(dag.clone(), reachability.clone()));
        let latency_tracker = Arc::new(LatencyTracker::new());

        Self {
            ghostdag,
            dag,
            reachability,
            latency_tracker,
            adaptive_k: RwLock::new(DEFAULT_K),
            block_rate_bps: BLOCK_RATE_BPS,
        }
    }

    /// Creates a DAGKnight manager with custom block rate.
    pub fn with_block_rate(
        dag: Arc<BlockDag>,
        reachability: Arc<ReachabilityStore>,
        block_rate_bps: f64,
    ) -> Self {
        let ghostdag = Arc::new(GhostdagManager::new(dag.clone(), reachability.clone()));
        let latency_tracker = Arc::new(LatencyTracker::new());

        Self {
            ghostdag,
            dag,
            reachability,
            latency_tracker,
            adaptive_k: RwLock::new(DEFAULT_K),
            block_rate_bps,
        }
    }

    /// Creates a DAGKnight manager wrapping an existing GHOSTDAG manager.
    pub fn from_ghostdag(
        ghostdag: Arc<GhostdagManager>,
        dag: Arc<BlockDag>,
        reachability: Arc<ReachabilityStore>,
    ) -> Self {
        Self {
            ghostdag,
            dag,
            reachability,
            latency_tracker: Arc::new(LatencyTracker::new()),
            adaptive_k: RwLock::new(DEFAULT_K),
            block_rate_bps: BLOCK_RATE_BPS,
        }
    }

    /// Processes a new block with latency tracking.
    ///
    /// This method:
    /// 1. Records the block observation in the latency tracker
    /// 2. Delegates to GHOSTDAG for blue set calculation
    /// 3. Updates the adaptive k parameter if needed
    pub fn add_block(
        &self,
        block_id: BlockId,
        parents: &[BlockId],
        block_time_ms: u64,
    ) -> Result<GhostdagData, GhostdagError> {
        // Calculate anticone size for this block
        let anticone_size = self.calculate_anticone_size(&block_id, parents);

        // Record observation in latency tracker
        self.latency_tracker.record_block(block_id, block_time_ms, anticone_size);

        // Process with underlying GHOSTDAG
        let data = self.ghostdag.add_block(block_id, parents)?;

        // Periodically update adaptive k
        if self.latency_tracker.sample_count() % 50 == 0 {
            self.update_adaptive_k();
        }

        Ok(data)
    }

    /// Calculates the anticone size for a new block.
    fn calculate_anticone_size(&self, block_id: &BlockId, parents: &[BlockId]) -> usize {
        // Anticone is the set of blocks that are neither ancestors nor descendants.
        // For a new block, we estimate based on tips that aren't in the parent set.
        let tips = self.dag.tips();
        let mut anticone_count = 0;

        for tip in tips {
            if tip != *block_id && !parents.contains(&tip) {
                // Check if tip is in the past of any parent
                let in_past = parents.iter().any(|p| {
                    self.reachability
                        .is_ancestor(p, &tip)
                        .unwrap_or(false)
                });

                if !in_past {
                    anticone_count += 1;
                }
            }
        }

        anticone_count
    }
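The tip scan above can be condensed into a toy model: a tip counts toward the estimated anticone if it is not a parent of the new block and not an ancestor of any parent. This is a hypothetical sketch using string block IDs and a plain ancestor map standing in for `ReachabilityStore`; it is not the crate's API.

```rust
use std::collections::{HashMap, HashSet};

// Toy version of the tip-based anticone estimate in `calculate_anticone_size`.
// `ancestors` maps each block to the set of its ancestors (hypothetical stand-in
// for reachability queries).
fn estimated_anticone(
    tips: &[&str],
    parents: &[&str],
    ancestors: &HashMap<&str, HashSet<&str>>,
) -> usize {
    tips.iter()
        .filter(|tip| !parents.contains(tip))
        .filter(|tip| {
            // Keep the tip only if it is NOT in the past of any parent
            !parents
                .iter()
                .any(|p| ancestors.get(p).map_or(false, |a| a.contains(*tip)))
        })
        .count()
}

fn main() {
    // DAG: genesis -> a, genesis -> b; a new block pointing only at `a`
    // leaves `b` in its anticone.
    let mut ancestors = HashMap::new();
    ancestors.insert("a", HashSet::from(["genesis"]));
    ancestors.insert("b", HashSet::from(["genesis"]));
    assert_eq!(estimated_anticone(&["a", "b"], &["a"], &ancestors), 1);
}
```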

    /// Updates the adaptive k parameter based on observed latency.
    ///
    /// The adaptive k formula is:
    /// k = ceil(block_rate * network_delay * safety_margin)
    ///
    /// This ensures that even with network delays, honest miners
    /// can create blocks that fit within the k-cluster.
    fn update_adaptive_k(&self) {
        let stats = self.latency_tracker.get_stats();

        // Don't adapt until we have enough samples
        if stats.sample_count < MIN_SAMPLES_FOR_ADAPTATION {
            return;
        }

        // Calculate k based on P95 delay (conservative)
        let delay_secs = stats.p95_delay_ms / 1000.0;
        let calculated_k = (self.block_rate_bps * delay_secs * SAFETY_MARGIN).ceil() as u8;

        // Clamp to valid range
        let new_k = calculated_k.clamp(MIN_ADAPTIVE_K, MAX_ADAPTIVE_K);

        // Update if significantly different (avoid jitter)
        let current_k = *self.adaptive_k.read();
        if (new_k as i16 - current_k as i16).abs() >= 2 {
            *self.adaptive_k.write() = new_k;
        }
    }
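The formula in the doc comment can be checked in isolation. A minimal sketch using the same constants this file defines; the free function `adaptive_k` is illustrative, not part of the crate:

```rust
// Standalone sketch of k = ceil(block_rate * network_delay * safety_margin),
// clamped to the same bounds as the file above.
const MIN_ADAPTIVE_K: u8 = 8;
const MAX_ADAPTIVE_K: u8 = 64;
const SAFETY_MARGIN: f64 = 1.5;

fn adaptive_k(block_rate_bps: f64, p95_delay_ms: f64) -> u8 {
    let k = (block_rate_bps * (p95_delay_ms / 1000.0) * SAFETY_MARGIN).ceil() as u8;
    k.clamp(MIN_ADAPTIVE_K, MAX_ADAPTIVE_K)
}

fn main() {
    // 10 BPS, 100 ms P95 delay: raw k = ceil(1.5) = 2, raised to the floor of 8
    assert_eq!(adaptive_k(10.0, 100.0), 8);
    // 10 BPS, 3 s P95 delay: ceil(10 * 3.0 * 1.5) = 45
    assert_eq!(adaptive_k(10.0, 3000.0), 45);
}
```

Note that float-to-integer `as` casts saturate in Rust, so even an extreme delay cannot wrap `calculated_k` before the clamp.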

    /// Gets the current adaptive k parameter.
    pub fn adaptive_k(&self) -> u8 {
        *self.adaptive_k.read()
    }

    /// Gets the current latency statistics.
    pub fn latency_stats(&self) -> LatencyStats {
        self.latency_tracker.get_stats()
    }

    /// Estimates confirmation time for a block at a given confidence level.
    ///
    /// DAGKnight provides probabilistic confirmation based on:
    /// 1. Current depth (blue score difference from tip)
    /// 2. Observed network latency
    /// 3. Requested confidence level
    pub fn estimate_confirmation_time(
        &self,
        block_id: &BlockId,
        confidence: ConfirmationConfidence,
    ) -> Result<ConfirmationStatus, GhostdagError> {
        let block_data = self.ghostdag.get_data(block_id)?;
        let tip_data = self.get_virtual_tip_data()?;

        // Depth is the blue score difference
        let depth = tip_data.blue_score.saturating_sub(block_data.blue_score);

        // Get latency stats
        let stats = self.latency_tracker.get_stats();

        // Calculate required depth for requested confidence.
        // Based on the paper, confirmation requires depth proportional to
        // network delay variance.
        let sigma = stats.std_dev_ms / 1000.0; // Convert to seconds
        let mean_delay = stats.mean_delay_ms / 1000.0;
        let sigma_multiplier = confidence.sigma_multiplier();

        // Required depth scales with variance and confidence level
        let required_depth =
            (self.block_rate_bps * (mean_delay + sigma * sigma_multiplier)).ceil() as u64;

        // Current confidence based on actual depth
        let current_confidence = if depth >= required_depth {
            self.calculate_confidence(depth, mean_delay, sigma)
        } else {
            // Interpolate confidence based on depth progress
            (depth as f64 / required_depth as f64) * 0.95
        };

        // Time to reach required depth
        let blocks_needed = required_depth.saturating_sub(depth);
        let time_per_block_ms = 1000.0 / self.block_rate_bps;
        let estimated_time =
            Duration::from_millis((blocks_needed as f64 * time_per_block_ms) as u64);

        // Block is final if depth exceeds finality threshold
        let is_final = depth >= crate::FINALITY_DEPTH;

        Ok(ConfirmationStatus {
            block_id: *block_id,
            depth,
            estimated_time,
            current_confidence,
            is_final,
        })
    }
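The required-depth arithmetic above can be sketched standalone with hypothetical network statistics (the `required_depth` helper is illustrative):

```rust
// required_depth = ceil(block_rate * (mean_delay + sigma * sigma_multiplier)),
// mirroring `estimate_confirmation_time` with made-up stats.
fn required_depth(block_rate_bps: f64, mean_delay_s: f64, sigma_s: f64, sigma_mult: f64) -> u64 {
    (block_rate_bps * (mean_delay_s + sigma_s * sigma_mult)).ceil() as u64
}

fn main() {
    // 10 BPS, 100 ms mean delay, 50 ms stddev, 3-sigma (High) confidence:
    // ceil(10 * (0.1 + 0.05 * 3.0)) = ceil(2.5) = 3 blocks
    let depth = required_depth(10.0, 0.1, 0.05, 3.0);
    assert_eq!(depth, 3);

    // At 10 BPS each block takes 100 ms, so 3 blocks ~= 300 ms of extra wait
    let time_ms = depth as f64 * (1000.0 / 10.0);
    assert_eq!(time_ms, 300.0);
}
```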

    /// Calculates confidence level based on depth and network conditions.
    fn calculate_confidence(&self, depth: u64, mean_delay: f64, sigma: f64) -> f64 {
        // Using simplified normal CDF approximation.
        // Confidence increases with depth relative to expected delay variance.
        let depth_secs = depth as f64 / self.block_rate_bps;
        let z_score = (depth_secs - mean_delay) / sigma.max(0.001);

        // Approximate CDF using logistic function
        1.0 / (1.0 + (-1.7 * z_score).exp())
    }
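The logistic approximation to the normal CDF used here, Phi(z) ≈ 1 / (1 + e^(-1.7z)), can be sanity-checked on its own:

```rust
// The same approximation as `calculate_confidence`, isolated for inspection.
fn logistic_cdf(z: f64) -> f64 {
    1.0 / (1.0 + (-1.7_f64 * z).exp())
}

fn main() {
    // At z = 0 the approximation gives exactly 0.5, matching the true CDF
    assert!((logistic_cdf(0.0) - 0.5).abs() < 1e-12);
    // At z = 2 it gives ~0.968 vs the true ~0.977 (within ~0.01)
    let p = logistic_cdf(2.0);
    assert!(p > 0.95 && p < 0.99);
    // Monotonically increasing in z, as a CDF must be
    assert!(logistic_cdf(1.0) < logistic_cdf(2.0));
}
```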

    /// Gets the GHOSTDAG data for the virtual tip (highest blue score block).
    fn get_virtual_tip_data(&self) -> Result<GhostdagData, GhostdagError> {
        let tips = self.dag.tips();

        // Find tip with highest blue score
        let mut best_tip = tips[0];
        let mut best_score = self.ghostdag.get_blue_score(&tips[0]).unwrap_or(0);

        for tip in tips.iter().skip(1) {
            let score = self.ghostdag.get_blue_score(tip).unwrap_or(0);
            if score > best_score {
                best_score = score;
                best_tip = *tip;
            }
        }

        self.ghostdag.get_data(&best_tip)
    }

    /// Gets the underlying GHOSTDAG manager.
    pub fn ghostdag(&self) -> &Arc<GhostdagManager> {
        &self.ghostdag
    }

    /// Gets the latency tracker.
    pub fn latency_tracker(&self) -> &Arc<LatencyTracker> {
        &self.latency_tracker
    }

    /// Gets the blue score for a block (delegates to GHOSTDAG).
    pub fn get_blue_score(&self, block_id: &BlockId) -> Result<BlueScore, GhostdagError> {
        self.ghostdag.get_blue_score(block_id)
    }

    /// Gets the GHOSTDAG data for a block.
    pub fn get_data(&self, block_id: &BlockId) -> Result<GhostdagData, GhostdagError> {
        self.ghostdag.get_data(block_id)
    }

    /// Checks if a block is in the blue set.
    pub fn is_blue(&self, block_id: &BlockId) -> bool {
        self.ghostdag.is_blue(block_id)
    }

    /// Returns the selected chain from a block to genesis.
    pub fn get_selected_chain(&self, from: &BlockId) -> Result<Vec<BlockId>, GhostdagError> {
        self.ghostdag.get_selected_chain(from)
    }

    /// Resets the latency tracker (e.g., after network reconfiguration).
    pub fn reset_latency_tracking(&self) {
        self.latency_tracker.reset();
        *self.adaptive_k.write() = DEFAULT_K;
    }
}

impl std::fmt::Debug for DagKnightManager {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let stats = self.latency_tracker.get_stats();
        f.debug_struct("DagKnightManager")
            .field("adaptive_k", &*self.adaptive_k.read())
            .field("block_rate_bps", &self.block_rate_bps)
            .field("mean_delay_ms", &stats.mean_delay_ms)
            .field("sample_count", &stats.sample_count)
            .finish()
    }
}

/// Calculates the optimal k for a given network delay and block rate.
///
/// This is a utility function for network analysis.
pub fn calculate_optimal_k(network_delay_ms: f64, block_rate_bps: f64) -> u8 {
    let delay_secs = network_delay_ms / 1000.0;
    let k = (block_rate_bps * delay_secs * SAFETY_MARGIN).ceil() as u8;
    k.clamp(MIN_ADAPTIVE_K, MAX_ADAPTIVE_K)
}

/// Estimates throughput (TPS) for given network conditions.
///
/// Throughput depends on block rate and transaction capacity per block.
pub fn estimate_throughput(
    block_rate_bps: f64,
    avg_tx_per_block: u64,
    network_delay_ms: f64,
) -> f64 {
    // Effective block rate accounting for orphan rate
    let orphan_rate = (network_delay_ms / 1000.0 * block_rate_bps).min(0.5);
    let effective_bps = block_rate_bps * (1.0 - orphan_rate);

    effective_bps * avg_tx_per_block as f64
}
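A standalone sketch of the orphan-rate discount in `estimate_throughput`: the effective block rate is reduced by a crude orphan-rate estimate (delay in seconds times block rate, capped at 0.5):

```rust
// Same arithmetic as the file's `estimate_throughput`, reproduced standalone.
fn estimate_throughput(block_rate_bps: f64, avg_tx_per_block: u64, network_delay_ms: f64) -> f64 {
    let orphan_rate = (network_delay_ms / 1000.0 * block_rate_bps).min(0.5);
    block_rate_bps * (1.0 - orphan_rate) * avg_tx_per_block as f64
}

fn main() {
    // 10 BPS, 100 tx/block, 10 ms delay: orphan rate 0.1 -> ~900 TPS
    assert!((estimate_throughput(10.0, 100, 10.0) - 900.0).abs() < 1e-9);
    // 40 ms delay: orphan rate 0.4 -> ~600 TPS
    assert!((estimate_throughput(10.0, 100, 40.0) - 600.0).abs() < 1e-9);
}
```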

#[cfg(test)]
mod tests {
    use super::*;
    use synor_types::Hash256;

    fn make_block_id(n: u8) -> BlockId {
        let mut bytes = [0u8; 32];
        bytes[0] = n;
        Hash256::from_bytes(bytes)
    }

    fn setup_test_dag() -> (Arc<BlockDag>, Arc<ReachabilityStore>, DagKnightManager) {
        let genesis = make_block_id(0);
        let dag = Arc::new(BlockDag::new(genesis, 0));
        let reachability = Arc::new(ReachabilityStore::new(genesis));
        let dagknight = DagKnightManager::new(dag.clone(), reachability.clone());
        (dag, reachability, dagknight)
    }

    #[test]
    fn test_initial_k() {
        let (_, _, dagknight) = setup_test_dag();
        assert_eq!(dagknight.adaptive_k(), DEFAULT_K);
    }

    #[test]
    fn test_calculate_optimal_k() {
        // 100ms delay at 10 BPS: k = ceil(10 * 0.1 * 1.5) = 2, clamped to MIN_ADAPTIVE_K (8)
        let k_low = calculate_optimal_k(100.0, 10.0);
        assert!(k_low >= MIN_ADAPTIVE_K);
        assert!(k_low <= MAX_ADAPTIVE_K);

        // 1000ms delay at 10 BPS: k = ceil(10 * 1.0 * 1.5) = 15, above MIN
        let k_medium = calculate_optimal_k(1000.0, 10.0);
        assert!(k_medium >= MIN_ADAPTIVE_K);

        // 3000ms delay at 10 BPS: k = ceil(10 * 3.0 * 1.5) = 45
        let k_high = calculate_optimal_k(3000.0, 10.0);
        assert!(k_high > k_medium);
        assert!(k_high > k_low);
    }

    #[test]
    fn test_estimate_throughput() {
        // Good network: 10ms delay - orphan_rate = 0.01 * 10 = 0.1
        let tps_good = estimate_throughput(10.0, 100, 10.0);

        // Poor network: 40ms delay - orphan_rate = 0.04 * 10 = 0.4
        let tps_poor = estimate_throughput(10.0, 100, 40.0);

        // Good network should have higher throughput
        assert!(tps_good > tps_poor, "tps_good={} should be > tps_poor={}", tps_good, tps_poor);
    }

    #[test]
    fn test_confidence_levels() {
        assert!(ConfirmationConfidence::VeryHigh.sigma_multiplier()
            > ConfirmationConfidence::High.sigma_multiplier());
        assert!(ConfirmationConfidence::High.sigma_multiplier()
            > ConfirmationConfidence::Medium.sigma_multiplier());
        assert!(ConfirmationConfidence::Medium.sigma_multiplier()
            > ConfirmationConfidence::Low.sigma_multiplier());
    }
}
372	crates/synor-dag/src/latency.rs	Normal file

@@ -0,0 +1,372 @@
//! Network latency tracking for DAGKnight adaptive consensus.
//!
//! This module tracks observed network propagation delays to enable
//! DAGKnight's adaptive k parameter calculation. Unlike GHOSTDAG's
//! fixed k assumption, DAGKnight adjusts based on real-world conditions.
//!
//! # Key Metrics
//!
//! - **Block propagation delay**: Time from block creation to network-wide visibility
//! - **Anticone growth rate**: How quickly anticones grow (indicates network latency)
//! - **Confirmation velocity**: Rate at which blocks achieve probabilistic finality

use parking_lot::RwLock;
use std::collections::VecDeque;
use std::time::{Duration, Instant};

use crate::BlockId;

/// Maximum number of latency samples to keep for moving average.
const MAX_LATENCY_SAMPLES: usize = 1000;

/// Default network delay assumption in milliseconds.
const DEFAULT_DELAY_MS: u64 = 100;

/// Minimum delay to prevent unrealistic values.
const MIN_DELAY_MS: u64 = 10;

/// Maximum delay to cap at reasonable network conditions.
const MAX_DELAY_MS: u64 = 5000;

/// Latency sample from observed block propagation.
#[derive(Clone, Debug)]
pub struct LatencySample {
    /// Block that was observed.
    pub block_id: BlockId,
    /// Timestamp when block was first seen locally.
    pub local_time: Instant,
    /// Timestamp from block header (creation time).
    pub block_time_ms: u64,
    /// Observed propagation delay in milliseconds.
    pub delay_ms: u64,
    /// Anticone size at time of observation.
    pub anticone_size: usize,
}

/// Rolling statistics for latency measurements.
#[derive(Clone, Debug, Default)]
pub struct LatencyStats {
    /// Mean propagation delay (ms).
    pub mean_delay_ms: f64,
    /// Standard deviation of delay (ms).
    pub std_dev_ms: f64,
    /// 95th percentile delay (ms).
    pub p95_delay_ms: f64,
    /// 99th percentile delay (ms).
    pub p99_delay_ms: f64,
    /// Average anticone growth rate (blocks per second).
    pub anticone_growth_rate: f64,
    /// Number of samples in current window.
    pub sample_count: usize,
}

/// Network latency tracker for DAGKnight.
///
/// Collects block propagation samples and computes statistics
/// used for adaptive k calculation and confirmation time estimation.
pub struct LatencyTracker {
    /// Recent latency samples.
    samples: RwLock<VecDeque<LatencySample>>,
    /// Cached statistics (recomputed on demand).
    stats_cache: RwLock<Option<(Instant, LatencyStats)>>,
    /// Cache validity duration.
    cache_ttl: Duration,
}

impl LatencyTracker {
    /// Creates a new latency tracker.
    pub fn new() -> Self {
        Self {
            samples: RwLock::new(VecDeque::with_capacity(MAX_LATENCY_SAMPLES)),
            stats_cache: RwLock::new(None),
            cache_ttl: Duration::from_secs(5),
        }
    }

    /// Creates a latency tracker with custom cache TTL.
    pub fn with_cache_ttl(cache_ttl: Duration) -> Self {
        Self {
            samples: RwLock::new(VecDeque::with_capacity(MAX_LATENCY_SAMPLES)),
            stats_cache: RwLock::new(None),
            cache_ttl,
        }
    }

    /// Records a new block observation.
    ///
    /// # Arguments
    /// * `block_id` - Hash of the observed block
    /// * `block_time_ms` - Timestamp from block header (Unix ms)
    /// * `anticone_size` - Number of blocks in the anticone at observation time
    pub fn record_block(
        &self,
        block_id: BlockId,
        block_time_ms: u64,
        anticone_size: usize,
    ) {
        let local_time = Instant::now();
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .map(|d| d.as_millis() as u64)
            .unwrap_or(0);

        // Calculate observed delay (clamp to valid range)
        let delay_ms = if now_ms > block_time_ms {
            (now_ms - block_time_ms).clamp(MIN_DELAY_MS, MAX_DELAY_MS)
        } else {
            // Clock skew - use default
            DEFAULT_DELAY_MS
        };

        let sample = LatencySample {
            block_id,
            local_time,
            block_time_ms,
            delay_ms,
            anticone_size,
        };

        let mut samples = self.samples.write();
        if samples.len() >= MAX_LATENCY_SAMPLES {
            samples.pop_front();
        }
        samples.push_back(sample);

        // Invalidate stats cache
        *self.stats_cache.write() = None;
    }
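The delay computation above, isolated as a sketch with the same constants: the observed delay is `now - block_time` clamped to `[MIN_DELAY_MS, MAX_DELAY_MS]`, and a negative difference (clock skew) falls back to the 100 ms default.

```rust
// Standalone version of the delay clamping in `record_block`.
const MIN_DELAY_MS: u64 = 10;
const MAX_DELAY_MS: u64 = 5000;
const DEFAULT_DELAY_MS: u64 = 100;

fn observed_delay_ms(now_ms: u64, block_time_ms: u64) -> u64 {
    if now_ms > block_time_ms {
        (now_ms - block_time_ms).clamp(MIN_DELAY_MS, MAX_DELAY_MS)
    } else {
        DEFAULT_DELAY_MS // clock skew fallback
    }
}

fn main() {
    assert_eq!(observed_delay_ms(1_000_150, 1_000_000), 150); // normal case
    assert_eq!(observed_delay_ms(1_000_002, 1_000_000), 10); // clamped up to MIN
    assert_eq!(observed_delay_ms(2_000_000, 1_000_000), 5000); // clamped down to MAX
    assert_eq!(observed_delay_ms(1_000_000, 1_000_050), 100); // skewed clock
}
```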

    /// Records a latency sample directly (for testing or external measurements).
    pub fn record_sample(&self, sample: LatencySample) {
        let mut samples = self.samples.write();
        if samples.len() >= MAX_LATENCY_SAMPLES {
            samples.pop_front();
        }
        samples.push_back(sample);

        // Invalidate stats cache
        *self.stats_cache.write() = None;
    }

    /// Gets current latency statistics.
    ///
    /// Uses cached value if available and fresh, otherwise recomputes.
    pub fn get_stats(&self) -> LatencyStats {
        // Check cache
        {
            let cache = self.stats_cache.read();
            if let Some((cached_at, stats)) = cache.as_ref() {
                if cached_at.elapsed() < self.cache_ttl {
                    return stats.clone();
                }
            }
        }

        // Recompute statistics
        let stats = self.compute_stats();

        // Update cache
        *self.stats_cache.write() = Some((Instant::now(), stats.clone()));

        stats
    }

    /// Computes latency statistics from current samples.
    fn compute_stats(&self) -> LatencyStats {
        let samples = self.samples.read();

        if samples.is_empty() {
            return LatencyStats {
                mean_delay_ms: DEFAULT_DELAY_MS as f64,
                std_dev_ms: 0.0,
                p95_delay_ms: DEFAULT_DELAY_MS as f64,
                p99_delay_ms: DEFAULT_DELAY_MS as f64,
                anticone_growth_rate: 0.0,
                sample_count: 0,
            };
        }

        let n = samples.len();

        // Collect delay values
        let mut delays: Vec<f64> = samples.iter().map(|s| s.delay_ms as f64).collect();
        delays.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));

        // Mean
        let sum: f64 = delays.iter().sum();
        let mean = sum / n as f64;

        // Standard deviation
        let variance: f64 = delays.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / n as f64;
        let std_dev = variance.sqrt();

        // Percentiles
        let p95_idx = ((n as f64 * 0.95) as usize).min(n - 1);
        let p99_idx = ((n as f64 * 0.99) as usize).min(n - 1);

        // Anticone growth rate (blocks per second)
        let anticone_growth_rate = if n > 1 {
            let first = samples.front().unwrap();
            let last = samples.back().unwrap();
            let time_span_secs = last.local_time.duration_since(first.local_time).as_secs_f64();

            if time_span_secs > 0.0 {
                let total_anticone_growth: usize = samples.iter().map(|s| s.anticone_size).sum();
                total_anticone_growth as f64 / time_span_secs / n as f64
            } else {
                0.0
            }
        } else {
            0.0
        };

        LatencyStats {
            mean_delay_ms: mean,
            std_dev_ms: std_dev,
            p95_delay_ms: delays[p95_idx],
            p99_delay_ms: delays[p99_idx],
            anticone_growth_rate,
            sample_count: n,
        }
    }
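The statistics above (population mean and standard deviation, plus the nearest-rank percentile indexing `floor(n * q)` capped at `n - 1`) can be checked on synthetic delays:

```rust
// Standalone check of the mean / stddev / percentile math in `compute_stats`,
// on synthetic delay values.
fn stats(delays: &mut Vec<f64>) -> (f64, f64, f64) {
    delays.sort_by(|a, b| a.partial_cmp(b).unwrap_or(std::cmp::Ordering::Equal));
    let n = delays.len();
    let mean = delays.iter().sum::<f64>() / n as f64;
    // Population variance (divide by n), matching the file above
    let variance = delays.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / n as f64;
    let p95 = delays[((n as f64 * 0.95) as usize).min(n - 1)];
    (mean, variance.sqrt(), p95)
}

fn main() {
    // 100 samples: 90 at 100 ms, 10 at 500 ms
    let mut delays: Vec<f64> = std::iter::repeat(100.0)
        .take(90)
        .chain(std::iter::repeat(500.0).take(10))
        .collect();
    let (mean, std_dev, p95) = stats(&mut delays);
    assert!((mean - 140.0).abs() < 1e-9); // (90*100 + 10*500) / 100
    assert!((std_dev - 120.0).abs() < 1e-9); // sqrt((90*40^2 + 10*360^2) / 100)
    assert_eq!(p95, 500.0); // index 95 lands in the slow tail
}
```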

    /// Estimates the network delay for adaptive k calculation.
    ///
    /// Uses P95 delay as a conservative estimate to ensure security.
    pub fn estimated_network_delay(&self) -> Duration {
        let stats = self.get_stats();
        Duration::from_millis(stats.p95_delay_ms as u64)
    }

    /// Estimates the expected anticone size for a given delay.
    ///
    /// Used by DAGKnight to predict confirmation times.
    pub fn expected_anticone_size(&self, delay: Duration) -> usize {
        let stats = self.get_stats();
        let delay_secs = delay.as_secs_f64();

        // Anticone grows at approximately anticone_growth_rate blocks/second
        (stats.anticone_growth_rate * delay_secs).ceil() as usize
    }

    /// Gets the number of samples currently tracked.
    pub fn sample_count(&self) -> usize {
        self.samples.read().len()
    }

    /// Clears all samples and resets the tracker.
    pub fn reset(&self) {
        self.samples.write().clear();
        *self.stats_cache.write() = None;
    }
}

impl Default for LatencyTracker {
    fn default() -> Self {
        Self::new()
    }
}

impl std::fmt::Debug for LatencyTracker {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        let stats = self.get_stats();
        f.debug_struct("LatencyTracker")
            .field("sample_count", &stats.sample_count)
            .field("mean_delay_ms", &stats.mean_delay_ms)
            .field("p95_delay_ms", &stats.p95_delay_ms)
            .finish()
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use synor_types::Hash256;

    fn make_block_id(n: u8) -> BlockId {
        let mut bytes = [0u8; 32];
        bytes[0] = n;
        Hash256::from_bytes(bytes)
    }

    #[test]
    fn test_empty_tracker() {
        let tracker = LatencyTracker::new();
        let stats = tracker.get_stats();

        assert_eq!(stats.sample_count, 0);
        assert_eq!(stats.mean_delay_ms, DEFAULT_DELAY_MS as f64);
    }

    #[test]
    fn test_record_samples() {
        let tracker = LatencyTracker::new();
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;

        // Record some samples with varying delays
        for i in 0..10 {
            tracker.record_block(
                make_block_id(i),
                now_ms - (50 + i as u64 * 10), // 50-140ms delays
                i as usize,
            );
        }

        let stats = tracker.get_stats();
        assert_eq!(stats.sample_count, 10);
        assert!(stats.mean_delay_ms >= 50.0);
        assert!(stats.mean_delay_ms <= 150.0);
    }

    #[test]
    fn test_sample_limit() {
        let tracker = LatencyTracker::new();
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;

        // Record more than MAX_LATENCY_SAMPLES
        for i in 0..MAX_LATENCY_SAMPLES + 100 {
            tracker.record_block(make_block_id(i as u8), now_ms - 100, 0);
        }

        assert_eq!(tracker.sample_count(), MAX_LATENCY_SAMPLES);
    }

    #[test]
    fn test_estimated_delay() {
        let tracker = LatencyTracker::new();
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap()
            .as_millis() as u64;

        // Record samples with ~100ms delay
        for i in 0..50 {
            tracker.record_block(make_block_id(i), now_ms - 100, 0);
        }

        let delay = tracker.estimated_network_delay();
        assert!(delay.as_millis() >= 90);
        assert!(delay.as_millis() <= 200);
    }

    #[test]
    fn test_reset() {
        let tracker = LatencyTracker::new();
        let now_ms = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
|
||||||
|
.unwrap()
|
||||||
|
.as_millis() as u64;
|
||||||
|
|
||||||
|
tracker.record_block(make_block_id(0), now_ms - 100, 0);
|
||||||
|
assert_eq!(tracker.sample_count(), 1);
|
||||||
|
|
||||||
|
tracker.reset();
|
||||||
|
assert_eq!(tracker.sample_count(), 0);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
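The commit message describes the tracker as keeping rolling statistics (mean, stddev, P95, P99) over a 1000-sample window. A minimal, self-contained sketch of that rolling-window computation, with hypothetical names (`RollingDelays` is not the crate's type, only an illustration of the technique):

```rust
use std::collections::VecDeque;

/// Illustrative rolling-window delay statistics; names are hypothetical
/// and the real LatencyTracker internals may differ.
struct RollingDelays {
    window: usize,
    samples: VecDeque<f64>, // propagation delays in milliseconds
}

impl RollingDelays {
    fn new(window: usize) -> Self {
        Self { window, samples: VecDeque::with_capacity(window) }
    }

    fn push(&mut self, delay_ms: f64) {
        if self.samples.len() == self.window {
            self.samples.pop_front(); // evict the oldest sample
        }
        self.samples.push_back(delay_ms);
    }

    /// Returns (mean, stddev, p95) over the current window.
    fn stats(&self) -> (f64, f64, f64) {
        if self.samples.is_empty() {
            return (0.0, 0.0, 0.0); // no samples yet
        }
        let n = self.samples.len() as f64;
        let mean = self.samples.iter().sum::<f64>() / n;
        let var = self.samples.iter().map(|d| (d - mean).powi(2)).sum::<f64>() / n;
        let mut sorted: Vec<f64> = self.samples.iter().copied().collect();
        sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let idx = ((n * 0.95).ceil() as usize).saturating_sub(1);
        (mean, var.sqrt(), sorted[idx.min(sorted.len() - 1)])
    }
}

fn main() {
    let mut r = RollingDelays::new(1000);
    for d in 1..=100 {
        r.push(d as f64); // synthetic delays: 1..=100 ms
    }
    let (mean, stddev, p95) = r.stats();
    assert!((mean - 50.5).abs() < 1e-9);
    assert_eq!(p95, 95.0);
    println!("mean={mean:.1}ms stddev={stddev:.1}ms p95={p95}ms");
}
```

A bounded `VecDeque` keeps the memory footprint constant while letting old samples age out, which is why the window (1000 samples here) caps both cost and staleness.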
@@ -23,13 +23,20 @@
#![allow(dead_code)]

pub mod dag;
pub mod dagknight;
pub mod ghostdag;
pub mod latency;
pub mod ordering;
pub mod pruning;
pub mod reachability;

pub use dag::{BlockDag, BlockRelations, DagError};
pub use dagknight::{
    calculate_optimal_k, estimate_throughput, ConfirmationConfidence, ConfirmationStatus,
    DagKnightManager,
};
pub use ghostdag::{GhostdagData, GhostdagError, GhostdagManager};
pub use latency::{LatencySample, LatencyStats, LatencyTracker};
pub use ordering::{BlockOrdering, OrderedBlock};
pub use pruning::{PruningConfig, PruningManager};
pub use reachability::{ReachabilityError, ReachabilityStore};
@@ -62,6 +69,78 @@ pub const FINALITY_DEPTH: u64 = 86400; // ~2.4 hours at 10 bps
/// Pruning depth - how many blocks to keep in memory.
pub const PRUNING_DEPTH: u64 = 288_000; // ~8 hours at 10 bps

// ============================================================================
// DAGKnight Block Rate Configurations
// ============================================================================

/// Block rate configuration for different throughput modes.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum BlockRateConfig {
    /// Standard 10 BPS (100ms block time) - default GHOSTDAG
    Standard,
    /// Enhanced 32 BPS (~31ms block time) - Phase 13 upgrade
    Enhanced,
    /// Maximum 100 BPS (10ms block time) - stretch goal
    Maximum,
}

impl BlockRateConfig {
    /// Returns the blocks per second for this configuration.
    pub const fn bps(&self) -> f64 {
        match self {
            BlockRateConfig::Standard => 10.0,
            BlockRateConfig::Enhanced => 32.0,
            BlockRateConfig::Maximum => 100.0,
        }
    }

    /// Returns the target block time in milliseconds.
    pub const fn block_time_ms(&self) -> u64 {
        match self {
            BlockRateConfig::Standard => 100,
            BlockRateConfig::Enhanced => 31,
            BlockRateConfig::Maximum => 10,
        }
    }

    /// Returns the recommended k parameter for this block rate.
    /// Higher block rates need higher k to accommodate network latency.
    pub const fn recommended_k(&self) -> u8 {
        match self {
            BlockRateConfig::Standard => 18,
            BlockRateConfig::Enhanced => 32,
            BlockRateConfig::Maximum => 64,
        }
    }

    /// Returns the merge depth adjusted for block rate.
    pub const fn merge_depth(&self) -> u64 {
        match self {
            BlockRateConfig::Standard => 3600,   // ~6 min at 10 bps
            BlockRateConfig::Enhanced => 11520,  // ~6 min at 32 bps
            BlockRateConfig::Maximum => 36000,   // ~6 min at 100 bps
        }
    }

    /// Returns the finality depth adjusted for block rate.
    pub const fn finality_depth(&self) -> u64 {
        match self {
            BlockRateConfig::Standard => 86400,   // ~2.4 hours at 10 bps
            BlockRateConfig::Enhanced => 276480,  // ~2.4 hours at 32 bps
            BlockRateConfig::Maximum => 864000,   // ~2.4 hours at 100 bps
        }
    }

    /// Returns the pruning depth adjusted for block rate.
    pub const fn pruning_depth(&self) -> u64 {
        match self {
            BlockRateConfig::Standard => 288_000,    // ~8 hours at 10 bps
            BlockRateConfig::Enhanced => 921_600,    // ~8 hours at 32 bps
            BlockRateConfig::Maximum => 2_880_000,   // ~8 hours at 100 bps
        }
    }
}
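The commit also introduces a `calculate_optimal_k()` utility. A minimal sketch of the standard GHOSTDAG-style derivation such a function typically follows: model block arrivals during one round-trip delay window (expected count `2 * delay * bps`) as Poisson, and pick the smallest k whose tail probability falls below a tolerance. The signature, the `delta` tolerance, and the exact model here are assumptions for illustration, not the crate's actual implementation.

```rust
/// Illustrative k derivation: smallest k with Poisson tail P(X > k) < delta,
/// where X ~ Poisson(2 * delay * bps) counts blocks mined in a 2D window.
/// Hypothetical sketch; the real calculate_optimal_k() may differ.
fn optimal_k(delay_secs: f64, bps: f64, delta: f64) -> u8 {
    let lambda = 2.0 * delay_secs * bps; // expected blocks in the 2D window
    let mut term = (-lambda).exp();      // P(X = 0)
    let mut cdf = term;
    let mut k: u32 = 0;
    while 1.0 - cdf > delta && k < 255 {
        k += 1;
        term *= lambda / k as f64;       // P(X = k) from P(X = k - 1)
        cdf += term;
    }
    k as u8
}

fn main() {
    // ~100ms delay at 10 BPS: lambda = 2, a small k suffices.
    let k10 = optimal_k(0.1, 10.0, 0.01);
    // Same delay at 100 BPS: lambda = 20, k grows substantially.
    let k100 = optimal_k(0.1, 100.0, 0.01);
    assert!(k10 < k100);
    println!("k@10bps={k10} k@100bps={k100}");
}
```

This makes the intuition behind `recommended_k()` concrete: holding latency fixed, raising the block rate widens the anticone a miner cannot have seen, so k must scale up with BPS.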
#[cfg(test)]
mod tests {
    use super::*;
98	docs/PLAN/PHASE13-AdvancedEnhancements/README.md	Normal file
@@ -0,0 +1,98 @@
# Phase 13: Advanced Blockchain Enhancements

> Research-driven enhancements to GHOSTDAG, quantum cryptography, Layer 2, and gateway architecture

## Overview

Phase 13 builds upon Synor's already advanced foundation:

- GHOSTDAG consensus (k=18, 10 BPS)
- Hybrid Ed25519 + Dilithium3 quantum-resistant signatures
- Complete L2 stack (Compute, Storage, Database, Hosting)
- Full gateway architecture with CID support

## Milestones

### Milestone 1: DAGKnight Protocol (Weeks 1-5)

**Priority: HIGH**

DAGKnight eliminates fixed network delay assumptions, making consensus adaptive.

| Task | Status | File |
|------|--------|------|
| DAGKnight core algorithm | Pending | `crates/synor-dag/src/dagknight.rs` |
| Network latency tracker | Pending | `crates/synor-dag/src/latency.rs` |
| 32 BPS upgrade | Pending | `crates/synor-dag/src/lib.rs` |
| 100 BPS stretch goal | Pending | `crates/synor-consensus/src/difficulty.rs` |
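The 32 BPS upgrade is ultimately about throughput, and the commit mentions an `estimate_throughput()` utility for TPS projection. A hedged sketch of that projection under a simple model (TPS as block rate times transactions per block, discounted by orphan rate); the function name, parameters, and discount model are illustrative assumptions, not the crate's actual API:

```rust
/// Illustrative TPS projection in the spirit of estimate_throughput():
/// effective TPS = block rate * txs per block * (1 - orphan rate).
/// Hypothetical sketch; the real utility may model this differently.
fn projected_tps(bps: f64, txs_per_block: f64, orphan_rate: f64) -> f64 {
    bps * txs_per_block * (1.0 - orphan_rate)
}

fn main() {
    // Milestone target: 32 BPS with <1% orphan rate,
    // assuming ~300 transactions per block for illustration.
    let tps = projected_tps(32.0, 300.0, 0.01);
    println!("projected TPS at 32 BPS: {tps}");
}
```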
### Milestone 2: Enhanced Quantum Cryptography (Weeks 6-9)

**Priority: MEDIUM-HIGH**

Add NIST-standardized backup algorithms for defense in depth.

| Task | Status | File |
|------|--------|------|
| SPHINCS+ (SLH-DSA) integration | Pending | `crates/synor-crypto/src/sphincs.rs` |
| FALCON (FN-DSA) integration | Pending | `crates/synor-crypto/src/falcon.rs` |
| Algorithm negotiation protocol | Pending | `crates/synor-crypto/src/negotiation.rs` |

### Milestone 3: ZK-Rollup Foundation (Weeks 10-14)

**Priority: HIGH**

Lay the groundwork for ZK-rollup support to enable massive L2 scaling.

| Task | Status | File |
|------|--------|------|
| ZK-SNARK proof system (Halo2/PLONK) | Pending | `crates/synor-zk/src/lib.rs` |
| Circuit definitions | Pending | `crates/synor-zk/src/circuit.rs` |
| Rollup state manager | Pending | `crates/synor-zk/src/rollup/mod.rs` |
| ZK-Rollup bridge contract | Pending | `contracts/zk-rollup/src/lib.rs` |

### Milestone 4: Gateway Enhancements (Weeks 15-18)

**Priority: MEDIUM**

Improve gateway infrastructure following IPFS best practices.

| Task | Status | File |
|------|--------|------|
| Subdomain gateway isolation | Pending | `crates/synor-storage/src/gateway/mod.rs` |
| Trustless verification (CAR files) | Pending | `crates/synor-storage/src/car.rs` |
| Multi-pin redundancy | Pending | `crates/synor-storage/src/pinning.rs` |
| CDN integration | Pending | `crates/synor-storage/src/gateway/cache.rs` |

## Research References

### GHOSTDAG/DAGKnight

- [Kaspa Official](https://kaspa.org/) - GHOSTDAG reference implementation
- [PHANTOM GHOSTDAG Paper](https://eprint.iacr.org/2018/104.pdf) - Academic foundation
- Kaspa 2025 roadmap: DAGKnight, 32 BPS → 100 BPS progression

### Quantum Cryptography (NIST PQC)

- **FIPS 203 (ML-KEM)**: CRYSTALS-Kyber - key encapsulation (already implemented as Kyber768)
- **FIPS 204 (ML-DSA)**: CRYSTALS-Dilithium - signatures (already implemented as Dilithium3)
- **FIPS 205 (SLH-DSA)**: SPHINCS+ - hash-based backup signatures
- **FIPS 206 (FN-DSA)**: FALCON - compact lattice signatures

### Layer 2 Scaling

- ZK-Rollups: validity proofs, 10-20 min finality, 2,000-4,000 TPS
- Optimistic Rollups: 7-day challenge period, 1,000-4,000 TPS
- State Channels: instant micropayments, zero fees

### Gateway Best Practices (IPFS)

- Subdomain gateways for origin isolation
- CAR files for trustless verification
- Multi-pin redundancy for availability
- CDN edge caching for performance
## Fee Distribution Model

| Fee Type | Burn | Stakers | Treasury | Validators/Operators |
|----------|------|---------|----------|----------------------|
| Transaction Fees | 10% | 60% | 20% | 10% |
| L2 Service Fees | 10% | - | 20% | 70% |
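The transaction-fee row of the table above (10% burn, 60% stakers, 20% treasury, 10% validators) can be sketched as integer math; the function and tuple layout are illustrative, and the remainder is assigned to validators here only so the parts always sum to the total:

```rust
/// Minimal sketch of the transaction-fee split from the table above.
/// Hypothetical helper; field order is (burn, stakers, treasury, validators).
fn split_tx_fee(total: u64) -> (u64, u64, u64, u64) {
    let burn = total * 10 / 100;
    let stakers = total * 60 / 100;
    let treasury = total * 20 / 100;
    // Validators take the remainder so no unit is lost to integer rounding.
    let validators = total - burn - stakers - treasury;
    (burn, stakers, treasury, validators)
}

fn main() {
    let (burn, stakers, treasury, validators) = split_tx_fee(1_000);
    assert_eq!((burn, stakers, treasury, validators), (100, 600, 200, 100));
    assert_eq!(burn + stakers + treasury + validators, 1_000);
}
```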
## Success Metrics

- **DAGKnight**: Achieve 32 BPS with <1% orphan rate
- **Quantum**: SPHINCS+ and FALCON signatures verified in <10ms
- **ZK-Rollup**: 1,000 TPS with <20 minute finality
- **Gateway**: <100ms TTFB for cached content globally