feat(storage): add Synor Storage L2 decentralized storage layer

Complete implementation of the Synor Storage Layer (L2) for decentralized
content storage. This enables permanent, censorship-resistant storage of
any file type including Next.js apps, Flutter apps, and arbitrary data.

Core modules:
- cid.rs: Content addressing with Blake3/SHA256 hashing (synor1... format)
- chunker.rs: File chunking for parallel upload/download (1MB chunks)
- erasure.rs: Reed-Solomon erasure coding (10+4 shards) for fault tolerance
- proof.rs: Storage proofs with Merkle trees for verification
- deal.rs: Storage deals and market economics (3 pricing tiers)

Infrastructure:
- node/: Storage node service with P2P networking and local storage
- gateway/: HTTP gateway for browser access with LRU caching
- Docker deployment with nginx load balancer

Architecture:
- Operates as L2 alongside Synor L1 blockchain
- Storage proofs verified on-chain for reward distribution
- Can lose 4 shards per chunk and still recover data
- Gateway URLs: /synor1<cid> for content access

All 28 unit tests passing.
This commit is contained in:
Gulshan Yadav 2026-01-10 11:42:03 +05:30
parent ac3b31d491
commit f5bdef2691
22 changed files with 4880 additions and 137 deletions

View file

@ -1,48 +1,53 @@
[package]
name = "synor-storage"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
description = "RocksDB storage layer for Synor blockchain"
version = "0.1.0"
edition = "2021"
description = "Decentralized storage layer for the Synor blockchain"
license = "MIT"
authors = ["Synor Team"]
repository = "https://github.com/synor/synor"
[dependencies]
# Core
thiserror = "1"
serde = { version = "1", features = ["derive"] }
serde_bytes = "0.11"
serde_json = "1"
tokio = { version = "1", features = ["full"] }
async-trait = "0.1"
bytes = "1"
# Cryptography
blake3 = "1"
sha2 = "0.10"
ed25519-dalek = "2"
# Encoding
bs58 = "0.5"
hex = "0.4"
base64 = "0.22"
# Erasure coding
reed-solomon-erasure = "6"
# Storage
rocksdb = { version = "0.22", optional = true }
# Networking (for storage nodes)
libp2p = { version = "0.54", features = ["tcp", "quic", "noise", "yamux", "kad", "identify", "gossipsub"], optional = true }
# Local workspace crates
synor-types = { path = "../synor-types" }
synor-dag = { path = "../synor-dag" }
synor-consensus = { path = "../synor-consensus" }
# Database
rocksdb = { version = "0.22", default-features = false, features = ["lz4", "zstd"] }
# Serialization
serde = { workspace = true }
borsh = { workspace = true }
# Data structures
hashbrown = "0.14"
lru = { workspace = true }
parking_lot = "0.12"
smallvec = { workspace = true }
# Error handling
thiserror = { workspace = true }
# Logging
tracing = { workspace = true }
# Async
tokio = { workspace = true }
[dev-dependencies]
tempfile = "3.8"
rand = { workspace = true }
criterion = { workspace = true }
borsh = { workspace = true }
[[bench]]
name = "storage_bench"
harness = false
synor-crypto = { path = "../synor-crypto" }
[features]
default = []
test-utils = []
node = ["libp2p", "rocksdb"]
[[bin]]
name = "synor-storage-node"
path = "src/bin/storage-node.rs"
[dev-dependencies]
tempfile = "3"
rand = "0.8"

View file

@ -0,0 +1,89 @@
//! Synor Storage Node Binary
//!
//! Runs a storage node that participates in the Synor Storage network.
//!
//! Usage:
//! synor-storage-node --config /path/to/config.toml
//! synor-storage-node --data-dir ./storage-data
use synor_storage::{NodeConfig, StorageNode, GatewayConfig, Gateway};
use std::path::PathBuf;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize logging
let log_level = std::env::var("RUST_LOG").unwrap_or_else(|_| "info".to_string());
eprintln!("Starting Synor Storage Node (log level: {})", log_level);
// Parse command line arguments
let args: Vec<String> = std::env::args().collect();
let config_path = args.iter()
.position(|a| a == "--config")
.map(|i| args.get(i + 1))
.flatten()
.cloned();
let data_dir = args.iter()
.position(|a| a == "--data-dir")
.map(|i| args.get(i + 1))
.flatten()
.cloned()
.unwrap_or_else(|| "./synor-storage-data".to_string());
// Load configuration
let node_config = if let Some(path) = config_path {
eprintln!("Loading config from: {}", path);
// TODO: Load from TOML file
NodeConfig {
data_dir: data_dir.clone(),
..NodeConfig::default()
}
} else {
NodeConfig {
data_dir: data_dir.clone(),
..NodeConfig::default()
}
};
eprintln!("Data directory: {}", node_config.data_dir);
eprintln!("Capacity: {} bytes", node_config.capacity);
// Create and start the storage node
eprintln!("Initializing storage node...");
let node = StorageNode::new(node_config.clone()).await?;
eprintln!("Starting storage node...");
node.start().await?;
let state = node.state().await;
eprintln!("Node state: {:?}", state);
// Start the gateway if enabled
let gateway_enabled = std::env::var("GATEWAY_ENABLED")
.map(|v| v == "true" || v == "1")
.unwrap_or(true);
if gateway_enabled {
let gateway_config = GatewayConfig {
listen_addr: std::env::var("GATEWAY_ADDR").unwrap_or_else(|_| "0.0.0.0:8080".to_string()),
..GatewayConfig::default()
};
eprintln!("Starting gateway on {}...", gateway_config.listen_addr);
let gateway = Gateway::new(gateway_config);
gateway.start().await?;
}
eprintln!("Storage node is running!");
eprintln!("Press Ctrl+C to stop");
// Wait for shutdown signal
tokio::signal::ctrl_c().await?;
eprintln!("\nShutting down...");
node.stop().await?;
eprintln!("Storage node stopped");
Ok(())
}

View file

@ -0,0 +1,282 @@
//! File chunking for large file storage
//!
//! Files are split into fixed-size chunks for:
//! - Parallel upload/download
//! - Efficient deduplication
//! - Erasure coding application
use crate::cid::ContentId;
use serde::{Deserialize, Serialize};
/// Default chunk size: 1 MB
pub const DEFAULT_CHUNK_SIZE: usize = 1024 * 1024;
/// A chunk of a file
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Chunk {
/// Chunk index within the file
pub index: u32,
/// Content ID of this chunk
pub cid: ContentId,
/// Chunk data
#[serde(with = "serde_bytes")]
pub data: Vec<u8>,
/// Offset in the original file
pub offset: u64,
}
impl Chunk {
/// Create a new chunk
pub fn new(index: u32, data: Vec<u8>, offset: u64) -> Self {
let cid = ContentId::from_content(&data);
Self {
index,
cid,
data,
offset,
}
}
/// Verify chunk integrity
pub fn verify(&self) -> bool {
self.cid.verify(&self.data)
}
/// Get chunk size
pub fn size(&self) -> usize {
self.data.len()
}
}
/// Chunker configuration
#[derive(Debug, Clone)]
pub struct ChunkerConfig {
/// Size of each chunk in bytes
pub chunk_size: usize,
}
impl Default for ChunkerConfig {
fn default() -> Self {
Self {
chunk_size: DEFAULT_CHUNK_SIZE,
}
}
}
/// File chunker - splits files into chunks
pub struct Chunker {
config: ChunkerConfig,
}
impl Chunker {
/// Create a new chunker with default config
pub fn new() -> Self {
Self {
config: ChunkerConfig::default(),
}
}
/// Create a new chunker with custom config
pub fn with_config(config: ChunkerConfig) -> Self {
Self { config }
}
/// Split data into chunks
pub fn chunk(&self, data: &[u8]) -> Vec<Chunk> {
let mut chunks = Vec::new();
let mut offset = 0u64;
let mut index = 0u32;
for chunk_data in data.chunks(self.config.chunk_size) {
chunks.push(Chunk::new(
index,
chunk_data.to_vec(),
offset,
));
offset += chunk_data.len() as u64;
index += 1;
}
chunks
}
/// Reassemble chunks into original data
pub fn reassemble(&self, chunks: &[Chunk]) -> Result<Vec<u8>, ReassembleError> {
if chunks.is_empty() {
return Ok(Vec::new());
}
// Sort by index
let mut sorted: Vec<_> = chunks.iter().collect();
sorted.sort_by_key(|c| c.index);
// Verify indices are contiguous
for (i, chunk) in sorted.iter().enumerate() {
if chunk.index != i as u32 {
return Err(ReassembleError::MissingChunk(i as u32));
}
}
// Verify each chunk
for chunk in &sorted {
if !chunk.verify() {
return Err(ReassembleError::InvalidChunk(chunk.index));
}
}
// Combine data
let total_size: usize = sorted.iter().map(|c| c.data.len()).sum();
let mut result = Vec::with_capacity(total_size);
for chunk in sorted {
result.extend_from_slice(&chunk.data);
}
Ok(result)
}
/// Get the number of chunks for a given file size
pub fn chunk_count(&self, file_size: u64) -> u32 {
let size = file_size as usize;
let full_chunks = size / self.config.chunk_size;
let has_remainder = size % self.config.chunk_size != 0;
(full_chunks + if has_remainder { 1 } else { 0 }) as u32
}
}
impl Default for Chunker {
fn default() -> Self {
Self::new()
}
}
/// Errors during chunk reassembly
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ReassembleError {
/// A chunk is missing from the sequence
MissingChunk(u32),
/// A chunk failed integrity verification
InvalidChunk(u32),
}
impl std::error::Error for ReassembleError {}
impl std::fmt::Display for ReassembleError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::MissingChunk(i) => write!(f, "Missing chunk at index {}", i),
Self::InvalidChunk(i) => write!(f, "Chunk {} failed verification", i),
}
}
}
/// Metadata about a chunked file
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ChunkedFile {
/// CID of the complete file
pub cid: ContentId,
/// Total file size
pub size: u64,
/// Number of chunks
pub chunk_count: u32,
/// Size of each chunk (except possibly last)
pub chunk_size: usize,
/// CIDs of each chunk in order
pub chunk_cids: Vec<ContentId>,
}
impl ChunkedFile {
/// Create metadata from chunks
pub fn from_chunks(chunks: &[Chunk], original_cid: ContentId) -> Self {
Self {
cid: original_cid,
size: chunks.iter().map(|c| c.data.len() as u64).sum(),
chunk_count: chunks.len() as u32,
chunk_size: if chunks.is_empty() { 0 } else { chunks[0].data.len() },
chunk_cids: chunks.iter().map(|c| c.cid.clone()).collect(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_chunk_small_file() {
let chunker = Chunker::new();
let data = b"Small file that fits in one chunk";
let chunks = chunker.chunk(data);
assert_eq!(chunks.len(), 1);
assert_eq!(chunks[0].data, data);
assert!(chunks[0].verify());
}
#[test]
fn test_chunk_large_file() {
let config = ChunkerConfig { chunk_size: 10 };
let chunker = Chunker::with_config(config);
let data = b"This is a longer file that will be split into chunks";
let chunks = chunker.chunk(data);
assert!(chunks.len() > 1);
// Verify all chunks
for chunk in &chunks {
assert!(chunk.verify());
}
}
#[test]
fn test_reassemble() {
let config = ChunkerConfig { chunk_size: 10 };
let chunker = Chunker::with_config(config);
let original = b"This is a test file for chunking and reassembly";
let chunks = chunker.chunk(original);
let reassembled = chunker.reassemble(&chunks).unwrap();
assert_eq!(reassembled, original);
}
#[test]
fn test_reassemble_missing_chunk() {
let config = ChunkerConfig { chunk_size: 10 };
let chunker = Chunker::with_config(config);
let data = b"Test data for missing chunk test case here";
let mut chunks = chunker.chunk(data);
// Remove middle chunk
chunks.remove(1);
let result = chunker.reassemble(&chunks);
assert!(matches!(result, Err(ReassembleError::MissingChunk(_))));
}
#[test]
fn test_chunk_count() {
let config = ChunkerConfig { chunk_size: 100 };
let chunker = Chunker::with_config(config);
assert_eq!(chunker.chunk_count(0), 0);
assert_eq!(chunker.chunk_count(50), 1);
assert_eq!(chunker.chunk_count(100), 1);
assert_eq!(chunker.chunk_count(101), 2);
assert_eq!(chunker.chunk_count(250), 3);
}
#[test]
fn test_empty_file() {
let chunker = Chunker::new();
let chunks = chunker.chunk(&[]);
assert!(chunks.is_empty());
let reassembled = chunker.reassemble(&chunks).unwrap();
assert!(reassembled.is_empty());
}
}

View file

@ -0,0 +1,249 @@
//! Content Identifier (CID) - Hash-based content addressing
//!
//! Every file in Synor Storage is identified by its cryptographic hash,
//! not by location. This enables content verification and deduplication.
use serde::{Deserialize, Serialize};
use std::fmt;
/// Hash algorithm identifiers (multihash compatible)
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Serialize, Deserialize)]
#[repr(u8)]
pub enum HashType {
/// SHA2-256 (0x12)
Sha256 = 0x12,
/// Keccak-256 (0x1B)
Keccak256 = 0x1B,
/// Blake3 (0x1E) - Synor default
Blake3 = 0x1E,
}
impl Default for HashType {
fn default() -> Self {
Self::Blake3
}
}
/// Content Identifier - uniquely identifies content by hash
#[derive(Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct ContentId {
/// Hash algorithm used
pub hash_type: HashType,
/// Hash digest (32 bytes)
pub digest: [u8; 32],
/// Content size in bytes
pub size: u64,
}
impl ContentId {
/// Create a new CID from content bytes using Blake3
pub fn from_content(data: &[u8]) -> Self {
let hash = blake3::hash(data);
Self {
hash_type: HashType::Blake3,
digest: *hash.as_bytes(),
size: data.len() as u64,
}
}
/// Create a new CID from content bytes using SHA256
pub fn from_content_sha256(data: &[u8]) -> Self {
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update(data);
let result = hasher.finalize();
let mut digest = [0u8; 32];
digest.copy_from_slice(&result);
Self {
hash_type: HashType::Sha256,
digest,
size: data.len() as u64,
}
}
/// Verify that content matches this CID
pub fn verify(&self, data: &[u8]) -> bool {
if data.len() as u64 != self.size {
return false;
}
match self.hash_type {
HashType::Blake3 => {
let hash = blake3::hash(data);
hash.as_bytes() == &self.digest
}
HashType::Sha256 => {
use sha2::{Sha256, Digest};
let mut hasher = Sha256::new();
hasher.update(data);
let result = hasher.finalize();
result.as_slice() == &self.digest
}
HashType::Keccak256 => {
// TODO: Implement Keccak256 verification
false
}
}
}
/// Encode CID as a string (synor1...)
pub fn to_string_repr(&self) -> String {
let mut bytes = Vec::with_capacity(34);
bytes.push(self.hash_type as u8);
bytes.push(32); // digest length
bytes.extend_from_slice(&self.digest);
format!("synor1{}", bs58::encode(&bytes).into_string())
}
/// Parse CID from string representation
pub fn from_string(s: &str) -> Result<Self, CidParseError> {
if !s.starts_with("synor1") {
return Err(CidParseError::InvalidPrefix);
}
let encoded = &s[6..]; // Skip "synor1"
let bytes = bs58::decode(encoded)
.into_vec()
.map_err(|_| CidParseError::InvalidBase58)?;
if bytes.len() < 34 {
return Err(CidParseError::InvalidLength);
}
let hash_type = match bytes[0] {
0x12 => HashType::Sha256,
0x1B => HashType::Keccak256,
0x1E => HashType::Blake3,
_ => return Err(CidParseError::UnknownHashType),
};
let digest_len = bytes[1] as usize;
if digest_len != 32 || bytes.len() < 2 + digest_len {
return Err(CidParseError::InvalidLength);
}
let mut digest = [0u8; 32];
digest.copy_from_slice(&bytes[2..34]);
Ok(Self {
hash_type,
digest,
size: 0, // Size not encoded in string
})
}
/// Get the digest as hex string
pub fn digest_hex(&self) -> String {
hex::encode(self.digest)
}
/// Create CID for an empty file
pub fn empty() -> Self {
Self::from_content(&[])
}
}
impl fmt::Debug for ContentId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("ContentId")
.field("hash_type", &self.hash_type)
.field("digest", &self.digest_hex())
.field("size", &self.size)
.finish()
}
}
impl fmt::Display for ContentId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self.to_string_repr())
}
}
/// Errors when parsing CID from string
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum CidParseError {
/// Missing "synor1" prefix
InvalidPrefix,
/// Invalid base58 encoding
InvalidBase58,
/// Invalid length
InvalidLength,
/// Unknown hash type
UnknownHashType,
}
impl std::error::Error for CidParseError {}
impl fmt::Display for CidParseError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::InvalidPrefix => write!(f, "CID must start with 'synor1'"),
Self::InvalidBase58 => write!(f, "Invalid base58 encoding"),
Self::InvalidLength => write!(f, "Invalid CID length"),
Self::UnknownHashType => write!(f, "Unknown hash type"),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_cid_from_content() {
let data = b"Hello, Synor Storage!";
let cid = ContentId::from_content(data);
assert_eq!(cid.hash_type, HashType::Blake3);
assert_eq!(cid.size, data.len() as u64);
assert!(cid.verify(data));
}
#[test]
fn test_cid_verification_fails_wrong_data() {
let data = b"Hello, Synor Storage!";
let cid = ContentId::from_content(data);
assert!(!cid.verify(b"Wrong data"));
}
#[test]
fn test_cid_string_roundtrip() {
let data = b"Test content for CID";
let cid = ContentId::from_content(data);
let s = cid.to_string_repr();
assert!(s.starts_with("synor1"));
let parsed = ContentId::from_string(&s).unwrap();
assert_eq!(cid.hash_type, parsed.hash_type);
assert_eq!(cid.digest, parsed.digest);
}
#[test]
fn test_cid_display() {
let data = b"Display test";
let cid = ContentId::from_content(data);
let display = format!("{}", cid);
assert!(display.starts_with("synor1"));
}
#[test]
fn test_cid_sha256() {
let data = b"SHA256 test";
let cid = ContentId::from_content_sha256(data);
assert_eq!(cid.hash_type, HashType::Sha256);
assert!(cid.verify(data));
}
#[test]
fn test_empty_cid() {
let cid = ContentId::empty();
assert_eq!(cid.size, 0);
assert!(cid.verify(&[]));
}
}

View file

@ -0,0 +1,321 @@
//! Storage Deals - Economic layer for storage
//!
//! Users pay storage providers to store their data.
//! Deals are registered on L1 for enforcement.
use crate::cid::ContentId;
use serde::{Deserialize, Serialize};
/// Storage deal status
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum DealStatus {
/// Awaiting storage node acceptance
Pending,
/// Deal active, data being stored
Active,
/// Deal completed successfully
Completed,
/// Storage node failed proofs
Failed,
/// Deal cancelled by user
Cancelled,
}
/// Storage deal between user and storage provider
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StorageDeal {
/// Unique deal ID
pub deal_id: [u8; 32],
/// Content being stored
pub cid: ContentId,
/// Client address (paying for storage)
pub client: [u8; 32],
/// Storage provider node ID
pub provider: Option<[u8; 32]>,
/// Content size in bytes
pub size: u64,
/// Storage duration (in L1 blocks)
pub duration: u64,
/// Price per byte per epoch (in atomic SYNOR units)
pub price_per_byte_epoch: u64,
/// Total collateral locked by provider
pub provider_collateral: u64,
/// Client deposit
pub client_deposit: u64,
/// Deal start block (on L1)
pub start_block: u64,
/// Deal end block (on L1)
pub end_block: u64,
/// Current status
pub status: DealStatus,
/// Replication factor (how many nodes store the data)
pub replication: u8,
/// Failed proof count
pub failed_proofs: u32,
/// Successful proof count
pub successful_proofs: u32,
}
impl StorageDeal {
/// Calculate total cost for the deal
pub fn total_cost(&self) -> u64 {
let epochs = self.duration;
self.size
.saturating_mul(self.price_per_byte_epoch)
.saturating_mul(epochs)
.saturating_mul(self.replication as u64)
}
/// Check if deal is active
pub fn is_active(&self) -> bool {
self.status == DealStatus::Active
}
/// Check if deal has expired
pub fn is_expired(&self, current_block: u64) -> bool {
current_block >= self.end_block
}
/// Calculate provider reward for successful storage
pub fn provider_reward(&self) -> u64 {
// Provider gets paid based on successful proofs
let total_possible_proofs = self.duration / 100; // Assume proof every 100 blocks
if total_possible_proofs == 0 {
return 0;
}
let success_rate = self.successful_proofs as u64 * 100 / total_possible_proofs;
self.total_cost() * success_rate / 100
}
}
/// Request to create a new storage deal
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CreateDealRequest {
/// Content to store
pub cid: ContentId,
/// Content size
pub size: u64,
/// Desired duration (blocks)
pub duration: u64,
/// Maximum price willing to pay
pub max_price_per_byte_epoch: u64,
/// Desired replication factor
pub replication: u8,
/// Preferred regions (optional)
pub preferred_regions: Vec<String>,
}
/// Storage provider offer
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StorageOffer {
/// Provider node ID
pub provider: [u8; 32],
/// Available capacity (bytes)
pub available_capacity: u64,
/// Price per byte per epoch
pub price_per_byte_epoch: u64,
/// Minimum deal duration
pub min_duration: u64,
/// Maximum deal duration
pub max_duration: u64,
/// Regions served
pub regions: Vec<String>,
/// Provider stake amount
pub stake: u64,
/// Historical uptime percentage
pub uptime: u8,
/// Success rate (proofs passed / total)
pub success_rate: u8,
}
/// Storage market for matching clients with providers
#[derive(Debug, Default)]
pub struct StorageMarket {
/// Active deals
pub deals: Vec<StorageDeal>,
/// Available offers from providers
pub offers: Vec<StorageOffer>,
}
impl StorageMarket {
/// Create a new storage market
pub fn new() -> Self {
Self::default()
}
/// Find best providers for a deal request
pub fn find_providers(&self, request: &CreateDealRequest) -> Vec<&StorageOffer> {
self.offers
.iter()
.filter(|offer| {
offer.available_capacity >= request.size
&& offer.price_per_byte_epoch <= request.max_price_per_byte_epoch
&& offer.min_duration <= request.duration
&& offer.max_duration >= request.duration
})
.collect()
}
/// Add a storage offer
pub fn add_offer(&mut self, offer: StorageOffer) {
self.offers.push(offer);
}
/// Create a deal from request and accepted offer
pub fn create_deal(
&mut self,
request: CreateDealRequest,
provider: [u8; 32],
client: [u8; 32],
current_block: u64,
) -> StorageDeal {
let deal_id = self.generate_deal_id(&request.cid, &client, current_block);
let offer = self.offers.iter().find(|o| o.provider == provider);
let price = offer.map(|o| o.price_per_byte_epoch).unwrap_or(0);
let deal = StorageDeal {
deal_id,
cid: request.cid,
client,
provider: Some(provider),
size: request.size,
duration: request.duration,
price_per_byte_epoch: price,
provider_collateral: 0, // Set during acceptance
client_deposit: 0, // Set during creation
start_block: current_block,
end_block: current_block + request.duration,
status: DealStatus::Pending,
replication: request.replication,
failed_proofs: 0,
successful_proofs: 0,
};
self.deals.push(deal.clone());
deal
}
fn generate_deal_id(&self, cid: &ContentId, client: &[u8; 32], block: u64) -> [u8; 32] {
let mut input = Vec::new();
input.extend_from_slice(&cid.digest);
input.extend_from_slice(client);
input.extend_from_slice(&block.to_le_bytes());
*blake3::hash(&input).as_bytes()
}
}
/// Pricing tiers for storage
#[derive(Debug, Clone)]
pub struct PricingTier {
/// Tier name
pub name: String,
/// Price per GB per month (in atomic SYNOR)
pub price_per_gb_month: u64,
/// Minimum replication
pub min_replication: u8,
/// SLA uptime guarantee in basis points (10000 = 100%)
pub uptime_sla_bps: u16,
}
impl PricingTier {
/// Standard tier - basic storage
pub fn standard() -> Self {
Self {
name: "Standard".to_string(),
price_per_gb_month: 100_000_000, // 0.1 SYNOR per GB/month
min_replication: 3,
uptime_sla_bps: 9900, // 99.00%
}
}
/// Premium tier - higher redundancy
pub fn premium() -> Self {
Self {
name: "Premium".to_string(),
price_per_gb_month: 250_000_000, // 0.25 SYNOR per GB/month
min_replication: 5,
uptime_sla_bps: 9990, // 99.90%
}
}
/// Permanent tier - one-time payment for ~20 years
pub fn permanent() -> Self {
Self {
name: "Permanent".to_string(),
price_per_gb_month: 2_400_000_000, // 2.4 SYNOR per GB (one-time, ~20 years equiv)
min_replication: 10,
uptime_sla_bps: 9999, // 99.99%
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_deal_cost() {
let deal = StorageDeal {
deal_id: [0u8; 32],
cid: ContentId::from_content(b"test"),
client: [1u8; 32],
provider: Some([2u8; 32]),
size: 1_000_000_000, // 1 GB
duration: 100,
price_per_byte_epoch: 1,
provider_collateral: 0,
client_deposit: 0,
start_block: 0,
end_block: 100,
status: DealStatus::Active,
replication: 3,
failed_proofs: 0,
successful_proofs: 0,
};
let cost = deal.total_cost();
assert_eq!(cost, 1_000_000_000 * 100 * 3);
}
#[test]
fn test_market_find_providers() {
let mut market = StorageMarket::new();
market.add_offer(StorageOffer {
provider: [1u8; 32],
available_capacity: 100_000_000_000, // 100 GB
price_per_byte_epoch: 1,
min_duration: 10,
max_duration: 10000,
regions: vec!["US".to_string()],
stake: 1000,
uptime: 99,
success_rate: 99,
});
let request = CreateDealRequest {
cid: ContentId::from_content(b"test"),
size: 1_000_000,
duration: 100,
max_price_per_byte_epoch: 2,
replication: 3,
preferred_regions: vec![],
};
let providers = market.find_providers(&request);
assert_eq!(providers.len(), 1);
}
#[test]
fn test_pricing_tiers() {
let standard = PricingTier::standard();
let premium = PricingTier::premium();
let permanent = PricingTier::permanent();
assert!(standard.price_per_gb_month < premium.price_per_gb_month);
assert!(premium.price_per_gb_month < permanent.price_per_gb_month);
assert!(standard.min_replication < premium.min_replication);
}
}

View file

@ -0,0 +1,286 @@
//! Erasure Coding for fault-tolerant storage
//!
//! Uses Reed-Solomon coding to add redundancy to chunks.
//! Allows recovery of data even if some shards are lost.
use reed_solomon_erasure::galois_8::ReedSolomon;
use serde::{Deserialize, Serialize};
use crate::cid::ContentId;
use crate::error::{Error, Result};
/// Default number of data shards
pub const DEFAULT_DATA_SHARDS: usize = 10;
/// Default number of parity shards
pub const DEFAULT_PARITY_SHARDS: usize = 4;
/// Erasure coding configuration
#[derive(Debug, Clone)]
pub struct ErasureConfig {
/// Number of data shards (original data pieces)
pub data_shards: usize,
/// Number of parity shards (redundancy pieces)
pub parity_shards: usize,
}
impl Default for ErasureConfig {
fn default() -> Self {
Self {
data_shards: DEFAULT_DATA_SHARDS,
parity_shards: DEFAULT_PARITY_SHARDS,
}
}
}
impl ErasureConfig {
/// Total number of shards
pub fn total_shards(&self) -> usize {
self.data_shards + self.parity_shards
}
/// Maximum shards that can be lost while still recovering
pub fn fault_tolerance(&self) -> usize {
self.parity_shards
}
}
/// A single shard of encoded data
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Shard {
/// Shard index (0..total_shards)
pub index: usize,
/// Whether this is a data shard (vs parity)
pub is_data: bool,
/// Shard content
#[serde(with = "serde_bytes")]
pub data: Vec<u8>,
/// Hash of shard data
pub hash: [u8; 32],
}
impl Shard {
/// Create a new shard
pub fn new(index: usize, is_data: bool, data: Vec<u8>) -> Self {
let hash = *blake3::hash(&data).as_bytes();
Self {
index,
is_data,
data,
hash,
}
}
/// Verify shard integrity
pub fn verify(&self) -> bool {
let computed = *blake3::hash(&self.data).as_bytes();
computed == self.hash
}
}
/// Erasure-coded chunk
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EncodedChunk {
/// Original chunk CID
pub chunk_cid: ContentId,
/// Original data size (before padding)
pub original_size: usize,
/// Shard size (padded to be divisible by data_shards)
pub shard_size: usize,
/// Configuration used for encoding
pub data_shards: usize,
pub parity_shards: usize,
/// All shards
pub shards: Vec<Shard>,
}
/// Erasure encoder/decoder
pub struct ErasureCoder {
config: ErasureConfig,
rs: ReedSolomon,
}
impl ErasureCoder {
/// Create a new erasure coder with default config
pub fn new() -> Result<Self> {
Self::with_config(ErasureConfig::default())
}
/// Create a new erasure coder with custom config
pub fn with_config(config: ErasureConfig) -> Result<Self> {
let rs = ReedSolomon::new(config.data_shards, config.parity_shards)
.map_err(|e| Error::ErasureCoding(format!("Failed to create RS coder: {}", e)))?;
Ok(Self { config, rs })
}
/// Encode data into shards with parity
pub fn encode(&self, data: &[u8], chunk_cid: ContentId) -> Result<EncodedChunk> {
let original_size = data.len();
// Pad data to be divisible by data_shards
let shard_size = (data.len() + self.config.data_shards - 1) / self.config.data_shards;
let padded_size = shard_size * self.config.data_shards;
let mut padded_data = data.to_vec();
padded_data.resize(padded_size, 0);
// Split into data shards
let mut shards: Vec<Vec<u8>> = padded_data
.chunks(shard_size)
.map(|c| c.to_vec())
.collect();
// Add parity shards (initially empty)
for _ in 0..self.config.parity_shards {
shards.push(vec![0u8; shard_size]);
}
// Encode parity
self.rs.encode(&mut shards)
.map_err(|e| Error::ErasureCoding(format!("Encoding failed: {}", e)))?;
// Create shard structs
let shard_structs: Vec<Shard> = shards
.into_iter()
.enumerate()
.map(|(i, data)| {
Shard::new(i, i < self.config.data_shards, data)
})
.collect();
Ok(EncodedChunk {
chunk_cid,
original_size,
shard_size,
data_shards: self.config.data_shards,
parity_shards: self.config.parity_shards,
shards: shard_structs,
})
}
/// Decode shards back to original data
/// Some shards can be None (missing) as long as enough remain
pub fn decode(&self, encoded: &EncodedChunk) -> Result<Vec<u8>> {
let total = encoded.data_shards + encoded.parity_shards;
// Prepare shards (Some for present, None for missing)
let mut shards: Vec<Option<Vec<u8>>> = vec![None; total];
let mut present_count = 0;
for shard in &encoded.shards {
if shard.index < total && shard.verify() {
shards[shard.index] = Some(shard.data.clone());
present_count += 1;
}
}
// Check if we have enough shards
if present_count < encoded.data_shards {
return Err(Error::ErasureCoding(format!(
"Not enough shards: have {}, need {}",
present_count, encoded.data_shards
)));
}
// Reconstruct missing shards
self.rs.reconstruct(&mut shards)
.map_err(|e| Error::ErasureCoding(format!("Reconstruction failed: {}", e)))?;
// Combine data shards
let mut result = Vec::with_capacity(encoded.original_size);
for i in 0..encoded.data_shards {
if let Some(ref shard_data) = shards[i] {
result.extend_from_slice(shard_data);
} else {
return Err(Error::ErasureCoding("Reconstruction incomplete".into()));
}
}
// Trim padding
result.truncate(encoded.original_size);
Ok(result)
}
}
impl Default for ErasureCoder {
fn default() -> Self {
Self::new().expect("Default erasure config should work")
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_encode_decode() {
let coder = ErasureCoder::new().unwrap();
let data = b"Hello, erasure coding!";
let cid = ContentId::from_content(data);
let encoded = coder.encode(data, cid).unwrap();
assert_eq!(encoded.shards.len(), 14); // 10 data + 4 parity
let decoded = coder.decode(&encoded).unwrap();
assert_eq!(decoded, data);
}
#[test]
fn test_recovery_with_missing_shards() {
let coder = ErasureCoder::new().unwrap();
let data = b"Test data for recovery with some missing shards";
let cid = ContentId::from_content(data);
let mut encoded = coder.encode(data, cid).unwrap();
// Remove 4 shards (max we can lose with 4 parity)
encoded.shards.remove(0);
encoded.shards.remove(2);
encoded.shards.remove(4);
encoded.shards.remove(6);
let decoded = coder.decode(&encoded).unwrap();
assert_eq!(decoded, data);
}
#[test]
fn test_too_many_missing_shards() {
let coder = ErasureCoder::new().unwrap();
let data = b"Test data";
let cid = ContentId::from_content(data);
let mut encoded = coder.encode(data, cid).unwrap();
// Remove 5 shards (more than 4 parity can handle)
for _ in 0..5 {
encoded.shards.remove(0);
}
let result = coder.decode(&encoded);
assert!(result.is_err());
}
#[test]
fn test_shard_verification() {
let shard = Shard::new(0, true, b"test data".to_vec());
assert!(shard.verify());
}
#[test]
fn test_small_config() {
let config = ErasureConfig {
data_shards: 2,
parity_shards: 1,
};
let coder = ErasureCoder::with_config(config).unwrap();
let data = b"Small test";
let cid = ContentId::from_content(data);
let encoded = coder.encode(data, cid).unwrap();
assert_eq!(encoded.shards.len(), 3);
let decoded = coder.decode(&encoded).unwrap();
assert_eq!(decoded, data);
}
}

View file

@ -0,0 +1,65 @@
//! Error types for Synor Storage
use std::fmt;
/// Result type for storage operations
pub type Result<T> = std::result::Result<T, Error>;
/// Storage layer errors
#[derive(Debug)]
pub enum Error {
/// Content not found
NotFound(String),
/// Invalid CID format
InvalidCid(String),
/// Chunk reassembly failed
ReassemblyFailed(String),
/// Erasure coding error
ErasureCoding(String),
/// Storage proof verification failed
ProofVerificationFailed(String),
/// Deal not found or invalid
InvalidDeal(String),
/// Insufficient storage capacity
InsufficientCapacity,
/// Network error
Network(String),
/// I/O error
Io(std::io::Error),
/// Serialization error
Serialization(String),
/// Internal error
Internal(String),
}
impl std::error::Error for Error {}
impl fmt::Display for Error {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::NotFound(msg) => write!(f, "Content not found: {}", msg),
Self::InvalidCid(msg) => write!(f, "Invalid CID: {}", msg),
Self::ReassemblyFailed(msg) => write!(f, "Chunk reassembly failed: {}", msg),
Self::ErasureCoding(msg) => write!(f, "Erasure coding error: {}", msg),
Self::ProofVerificationFailed(msg) => write!(f, "Proof verification failed: {}", msg),
Self::InvalidDeal(msg) => write!(f, "Invalid deal: {}", msg),
Self::InsufficientCapacity => write!(f, "Insufficient storage capacity"),
Self::Network(msg) => write!(f, "Network error: {}", msg),
Self::Io(e) => write!(f, "I/O error: {}", e),
Self::Serialization(msg) => write!(f, "Serialization error: {}", msg),
Self::Internal(msg) => write!(f, "Internal error: {}", msg),
}
}
}
impl From<std::io::Error> for Error {
fn from(e: std::io::Error) -> Self {
Self::Io(e)
}
}
impl From<serde_json::Error> for Error {
fn from(e: serde_json::Error) -> Self {
Self::Serialization(e.to_string())
}
}

View file

@ -0,0 +1,295 @@
//! Gateway Cache - LRU cache for frequently accessed content
//!
//! Caches resolved content to reduce load on storage nodes
//! and improve response times for popular content.
use crate::cid::ContentId;
use super::GatewayResponse;
use std::collections::HashMap;
/// LRU cache entry
#[derive(Debug, Clone)]
struct CacheEntry {
/// Cached response
response: GatewayResponse,
/// Access count
access_count: u64,
/// Last access timestamp
last_access: u64,
/// Size in bytes
size: u64,
}
/// LRU cache for gateway responses
pub struct GatewayCache {
/// Maximum cache size in bytes
max_size: u64,
/// Current size in bytes
current_size: u64,
/// Cached entries by CID
entries: HashMap<ContentId, CacheEntry>,
/// Access order for LRU eviction (CID, last_access)
access_order: Vec<(ContentId, u64)>,
/// Cache statistics
stats: CacheStats,
}
/// Cache statistics
#[derive(Debug, Clone, Default)]
pub struct CacheStats {
/// Total hits
pub hits: u64,
/// Total misses
pub misses: u64,
/// Total evictions
pub evictions: u64,
/// Total bytes cached
pub bytes_cached: u64,
/// Total bytes evicted
pub bytes_evicted: u64,
}
impl GatewayCache {
/// Create a new cache with maximum size in bytes
pub fn new(max_size: u64) -> Self {
Self {
max_size,
current_size: 0,
entries: HashMap::new(),
access_order: Vec::new(),
stats: CacheStats::default(),
}
}
/// Get an entry from the cache
pub fn get(&self, cid: &ContentId) -> Option<GatewayResponse> {
self.entries.get(cid).map(|entry| entry.response.clone())
}
/// Get an entry and update access stats
pub fn get_mut(&mut self, cid: &ContentId) -> Option<GatewayResponse> {
let now = current_timestamp();
if let Some(entry) = self.entries.get_mut(cid) {
entry.access_count += 1;
entry.last_access = now;
self.stats.hits += 1;
// Update access order
if let Some(pos) = self.access_order.iter().position(|(c, _)| c == cid) {
self.access_order.remove(pos);
}
self.access_order.push((cid.clone(), now));
Some(entry.response.clone())
} else {
self.stats.misses += 1;
None
}
}
/// Put an entry in the cache
pub fn put(&mut self, cid: ContentId, response: GatewayResponse) {
let size = response.content.len() as u64;
// Don't cache if larger than max size
if size > self.max_size {
return;
}
// Remove existing entry if present
if self.entries.contains_key(&cid) {
self.remove(&cid);
}
// Evict entries until we have space
while self.current_size + size > self.max_size && !self.entries.is_empty() {
self.evict_lru();
}
let now = current_timestamp();
let entry = CacheEntry {
response,
access_count: 1,
last_access: now,
size,
};
self.entries.insert(cid.clone(), entry);
self.access_order.push((cid, now));
self.current_size += size;
self.stats.bytes_cached += size;
}
/// Remove an entry from the cache
pub fn remove(&mut self, cid: &ContentId) -> Option<GatewayResponse> {
if let Some(entry) = self.entries.remove(cid) {
self.current_size -= entry.size;
self.access_order.retain(|(c, _)| c != cid);
Some(entry.response)
} else {
None
}
}
/// Evict the least recently used entry
fn evict_lru(&mut self) {
if let Some((cid, _)) = self.access_order.first().cloned() {
if let Some(entry) = self.entries.remove(&cid) {
self.current_size -= entry.size;
self.stats.evictions += 1;
self.stats.bytes_evicted += entry.size;
}
self.access_order.remove(0);
}
}
/// Clear the entire cache
pub fn clear(&mut self) {
self.entries.clear();
self.access_order.clear();
self.current_size = 0;
}
/// Get cache statistics
pub fn stats(&self) -> &CacheStats {
&self.stats
}
/// Get current cache size in bytes
pub fn size(&self) -> u64 {
self.current_size
}
/// Get maximum cache size in bytes
pub fn max_size(&self) -> u64 {
self.max_size
}
/// Get number of entries in cache
pub fn len(&self) -> usize {
self.entries.len()
}
/// Check if cache is empty
pub fn is_empty(&self) -> bool {
self.entries.is_empty()
}
/// Check if CID is in cache
pub fn contains(&self, cid: &ContentId) -> bool {
self.entries.contains_key(cid)
}
/// Get hit rate
pub fn hit_rate(&self) -> f64 {
let total = self.stats.hits + self.stats.misses;
if total == 0 {
0.0
} else {
self.stats.hits as f64 / total as f64
}
}
/// Prune entries not accessed since cutoff timestamp
pub fn prune_stale(&mut self, cutoff_timestamp: u64) {
let stale: Vec<ContentId> = self
.entries
.iter()
.filter(|(_, entry)| entry.last_access < cutoff_timestamp)
.map(|(cid, _)| cid.clone())
.collect();
for cid in stale {
self.remove(&cid);
}
}
}
/// Get current timestamp in seconds
fn current_timestamp() -> u64 {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs()
}
#[cfg(test)]
mod tests {
use super::*;
use crate::cid::ContentId;
fn make_response(data: &[u8]) -> GatewayResponse {
let cid = ContentId::from_content(data);
GatewayResponse {
cid: cid.clone(),
content: data.to_vec(),
mime_type: "application/octet-stream".to_string(),
size: data.len() as u64,
}
}
#[test]
fn test_cache_put_get() {
let mut cache = GatewayCache::new(1024);
let data = b"hello world";
let cid = ContentId::from_content(data);
let response = make_response(data);
cache.put(cid.clone(), response);
let retrieved = cache.get(&cid).unwrap();
assert_eq!(retrieved.content, data);
}
#[test]
fn test_cache_eviction() {
let mut cache = GatewayCache::new(100);
// Add entries until we exceed limit
for i in 0..10 {
let data = vec![i; 20];
let cid = ContentId::from_content(&data);
let response = make_response(&data);
cache.put(cid, response);
}
// Should have evicted some entries
assert!(cache.size() <= 100);
assert!(cache.len() < 10);
}
#[test]
fn test_cache_lru_order() {
let mut cache = GatewayCache::new(100);
// Add 3 entries (each ~10 bytes)
let entries: Vec<_> = (0..3)
.map(|i| {
let data = vec![i; 10];
let cid = ContentId::from_content(&data);
let response = make_response(&data);
(cid, response)
})
.collect();
for (cid, response) in &entries {
cache.put(cid.clone(), response.clone());
}
// Access first entry to make it recently used
cache.get_mut(&entries[0].0);
// Add more entries to trigger eviction
for i in 3..10 {
let data = vec![i; 10];
let cid = ContentId::from_content(&data);
let response = make_response(&data);
cache.put(cid, response);
}
// First entry should still be present (was accessed recently)
assert!(cache.contains(&entries[0].0));
}
}

View file

@ -0,0 +1,262 @@
//! Gateway HTTP Handler - Request processing
//!
//! Handles incoming HTTP requests and routes them to the appropriate
//! content resolution logic.
use crate::cid::ContentId;
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::net::IpAddr;
use std::sync::Arc;
use tokio::sync::RwLock;
/// HTTP request abstraction
#[derive(Debug, Clone)]
pub struct GatewayRequest {
/// Request method
pub method: Method,
/// Request path (CID or CID/path)
pub path: String,
/// Query parameters
pub query: HashMap<String, String>,
/// Request headers
pub headers: HashMap<String, String>,
/// Client IP address
pub client_ip: Option<IpAddr>,
/// Request body (for POST)
pub body: Option<Vec<u8>>,
}
/// HTTP methods
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Method {
Get,
Head,
Post,
Options,
}
/// HTTP response
#[derive(Debug, Clone)]
pub struct HttpResponse {
/// Status code
pub status: u16,
/// Response headers
pub headers: HashMap<String, String>,
/// Response body
pub body: Vec<u8>,
}
impl HttpResponse {
/// Create a 200 OK response
pub fn ok(body: Vec<u8>, content_type: &str) -> Self {
let mut headers = HashMap::new();
headers.insert("Content-Type".to_string(), content_type.to_string());
headers.insert("Content-Length".to_string(), body.len().to_string());
headers.insert("X-Content-Type-Options".to_string(), "nosniff".to_string());
Self {
status: 200,
headers,
body,
}
}
/// Create a 404 Not Found response
pub fn not_found(message: &str) -> Self {
Self {
status: 404,
headers: HashMap::new(),
body: message.as_bytes().to_vec(),
}
}
/// Create a 400 Bad Request response
pub fn bad_request(message: &str) -> Self {
Self {
status: 400,
headers: HashMap::new(),
body: message.as_bytes().to_vec(),
}
}
/// Create a 500 Internal Server Error response
pub fn internal_error(message: &str) -> Self {
Self {
status: 500,
headers: HashMap::new(),
body: message.as_bytes().to_vec(),
}
}
/// Create a 429 Too Many Requests response
pub fn rate_limited() -> Self {
Self {
status: 429,
headers: HashMap::new(),
body: b"Rate limit exceeded".to_vec(),
}
}
/// Create a 413 Content Too Large response
pub fn content_too_large(max_size: u64) -> Self {
Self {
status: 413,
headers: HashMap::new(),
body: format!("Content exceeds maximum size of {} bytes", max_size).into_bytes(),
}
}
}
/// Rate limiter for requests
pub struct RateLimiter {
/// Requests per window per IP
limit: u32,
/// Window duration in seconds
window_secs: u64,
/// Request counts per IP
counts: Arc<RwLock<HashMap<IpAddr, (u64, u32)>>>, // (window_start, count)
}
impl RateLimiter {
/// Create a new rate limiter
pub fn new(limit: u32, window_secs: u64) -> Self {
Self {
limit,
window_secs,
counts: Arc::new(RwLock::new(HashMap::new())),
}
}
/// Check if request is allowed
pub async fn check(&self, ip: IpAddr) -> bool {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs();
let mut counts = self.counts.write().await;
let entry = counts.entry(ip).or_insert((now, 0));
// Reset window if expired
if now - entry.0 >= self.window_secs {
entry.0 = now;
entry.1 = 0;
}
if entry.1 >= self.limit {
false
} else {
entry.1 += 1;
true
}
}
/// Clean up old entries
pub async fn cleanup(&self) {
let now = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs();
let mut counts = self.counts.write().await;
counts.retain(|_, (window_start, _)| now - *window_start < self.window_secs * 2);
}
}
/// Gateway request handler
pub struct GatewayHandler {
/// Rate limiter
rate_limiter: RateLimiter,
/// Maximum content size
max_content_size: u64,
/// Upload enabled
enable_upload: bool,
/// CORS origins
cors_origins: Vec<String>,
}
impl GatewayHandler {
/// Create a new handler
pub fn new(rate_limit: u32, max_content_size: u64, enable_upload: bool, cors_origins: Vec<String>) -> Self {
Self {
rate_limiter: RateLimiter::new(rate_limit, 60),
max_content_size,
enable_upload,
cors_origins,
}
}
/// Parse CID from request path
pub fn parse_path(&self, path: &str) -> Result<(ContentId, Option<String>)> {
let path = path.trim_start_matches('/');
// Check for subpath
let (cid_part, subpath) = if let Some(idx) = path.find('/') {
(&path[..idx], Some(path[idx + 1..].to_string()))
} else {
(path, None)
};
let cid = ContentId::from_string(cid_part)
.map_err(|e| Error::InvalidCid(e.to_string()))?;
Ok((cid, subpath))
}
/// Check rate limit
pub async fn check_rate_limit(&self, ip: Option<IpAddr>) -> bool {
if let Some(ip) = ip {
self.rate_limiter.check(ip).await
} else {
true // Allow if no IP (internal requests)
}
}
/// Build CORS headers
pub fn cors_headers(&self, origin: Option<&str>) -> HashMap<String, String> {
let mut headers = HashMap::new();
if self.cors_origins.contains(&"*".to_string()) {
headers.insert("Access-Control-Allow-Origin".to_string(), "*".to_string());
} else if let Some(origin) = origin {
if self.cors_origins.iter().any(|o| o == origin) {
headers.insert("Access-Control-Allow-Origin".to_string(), origin.to_string());
}
}
headers.insert(
"Access-Control-Allow-Methods".to_string(),
"GET, HEAD, OPTIONS".to_string(),
);
headers.insert(
"Access-Control-Allow-Headers".to_string(),
"Content-Type".to_string(),
);
headers
}
/// Check if upload is allowed
pub fn upload_allowed(&self) -> bool {
self.enable_upload
}
/// Check content size limit
pub fn check_size(&self, size: u64) -> bool {
size <= self.max_content_size
}
}
/// Upload response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct UploadResponse {
/// Content ID of uploaded content
pub cid: String,
/// Size in bytes
pub size: u64,
/// Gateway URL
pub url: String,
}

View file

@ -0,0 +1,247 @@
//! HTTP Gateway - Web access to Synor Storage
//!
//! Provides HTTP endpoints for accessing content stored on Synor.
//! Makes decentralized storage accessible from any web browser.
//!
//! # URL Patterns
//!
//! - `GET /synor1abc...xyz` - Fetch content by CID
//! - `GET /synor1abc...xyz/path/to/file` - Fetch file from directory CID
//! - `HEAD /synor1abc...xyz` - Check if content exists
//! - `POST /upload` - Upload content (returns CID)
mod handler;
mod resolver;
mod cache;
pub use handler::GatewayHandler;
pub use resolver::ContentResolver;
pub use cache::GatewayCache;
use crate::cid::ContentId;
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
/// Gateway configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GatewayConfig {
/// HTTP listen address
pub listen_addr: String,
/// Maximum content size to serve (bytes)
pub max_content_size: u64,
/// Cache size in bytes
pub cache_size: u64,
/// Request timeout in seconds
pub timeout_secs: u64,
/// Enable upload endpoint
pub enable_upload: bool,
/// CORS allowed origins
pub cors_origins: Vec<String>,
/// Storage node endpoints for content retrieval
pub storage_nodes: Vec<String>,
/// Rate limit (requests per minute per IP)
pub rate_limit: u32,
}
impl Default for GatewayConfig {
fn default() -> Self {
Self {
listen_addr: "0.0.0.0:8080".to_string(),
max_content_size: 100 * 1024 * 1024, // 100 MB
cache_size: 1024 * 1024 * 1024, // 1 GB
timeout_secs: 60,
enable_upload: false, // Disabled by default for public gateways
cors_origins: vec!["*".to_string()],
storage_nodes: vec![],
rate_limit: 100,
}
}
}
/// HTTP Gateway service
pub struct Gateway {
/// Configuration
config: GatewayConfig,
/// Content resolver
resolver: Arc<ContentResolver>,
/// Response cache
cache: Arc<RwLock<GatewayCache>>,
/// Gateway statistics
stats: Arc<RwLock<GatewayStats>>,
/// Running state
running: Arc<std::sync::atomic::AtomicBool>,
}
/// Gateway statistics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct GatewayStats {
/// Total requests handled
pub total_requests: u64,
/// Successful requests
pub successful_requests: u64,
/// Failed requests
pub failed_requests: u64,
/// Cache hits
pub cache_hits: u64,
/// Cache misses
pub cache_misses: u64,
/// Bytes served
pub bytes_served: u64,
/// Upload requests (if enabled)
pub upload_requests: u64,
/// Unique CIDs requested
pub unique_cids: u64,
}
impl Gateway {
/// Create a new gateway
pub fn new(config: GatewayConfig) -> Self {
let resolver = Arc::new(ContentResolver::new(config.storage_nodes.clone()));
let cache = Arc::new(RwLock::new(GatewayCache::new(config.cache_size)));
Self {
config,
resolver,
cache,
stats: Arc::new(RwLock::new(GatewayStats::default())),
running: Arc::new(std::sync::atomic::AtomicBool::new(false)),
}
}
/// Start the gateway server
pub async fn start(&self) -> Result<()> {
self.running
.store(true, std::sync::atomic::Ordering::SeqCst);
// TODO: Start HTTP server using axum/actix-web/warp
// Routes:
// - GET /{cid} -> fetch content
// - HEAD /{cid} -> check existence
// - POST /upload -> upload content (if enabled)
// - GET /stats -> gateway statistics
Ok(())
}
/// Stop the gateway
pub async fn stop(&self) {
self.running
.store(false, std::sync::atomic::Ordering::SeqCst);
}
/// Check if gateway is running
pub fn is_running(&self) -> bool {
self.running.load(std::sync::atomic::Ordering::SeqCst)
}
/// Get gateway statistics
pub async fn stats(&self) -> GatewayStats {
self.stats.read().await.clone()
}
/// Fetch content by CID
pub async fn fetch(&self, cid: &ContentId) -> Result<GatewayResponse> {
// Check cache first
{
let cache = self.cache.read().await;
if let Some(cached) = cache.get(cid) {
let mut stats = self.stats.write().await;
stats.cache_hits += 1;
return Ok(cached);
}
}
// Resolve from storage network
let content = self.resolver.resolve(cid).await?;
// Determine MIME type
let mime_type = detect_mime_type(&content);
let response = GatewayResponse {
cid: cid.clone(),
content,
mime_type,
size: cid.size,
};
// Cache the response
{
let mut cache = self.cache.write().await;
cache.put(cid.clone(), response.clone());
}
// Update stats
{
let mut stats = self.stats.write().await;
stats.cache_misses += 1;
stats.bytes_served += response.size;
}
Ok(response)
}
}
/// Gateway response
#[derive(Debug, Clone)]
pub struct GatewayResponse {
/// Content ID
pub cid: ContentId,
/// Content bytes
pub content: Vec<u8>,
/// MIME type
pub mime_type: String,
/// Content size
pub size: u64,
}
/// Detect MIME type from content
fn detect_mime_type(content: &[u8]) -> String {
// Magic byte detection for common types
if content.starts_with(b"\x89PNG\r\n\x1a\n") {
"image/png".to_string()
} else if content.starts_with(b"\xff\xd8\xff") {
"image/jpeg".to_string()
} else if content.starts_with(b"GIF87a") || content.starts_with(b"GIF89a") {
"image/gif".to_string()
} else if content.starts_with(b"RIFF") && content.len() > 12 && &content[8..12] == b"WEBP" {
"image/webp".to_string()
} else if content.starts_with(b"PK\x03\x04") {
// ZIP-based formats
"application/zip".to_string()
} else if content.starts_with(b"%PDF") {
"application/pdf".to_string()
} else if content.starts_with(b"<!DOCTYPE html") || content.starts_with(b"<html") {
"text/html".to_string()
} else if content.starts_with(b"<?xml") {
"application/xml".to_string()
} else if content.starts_with(b"{") || content.starts_with(b"[") {
"application/json".to_string()
} else if content.iter().take(1024).all(|&b| b.is_ascii()) {
"text/plain".to_string()
} else {
"application/octet-stream".to_string()
}
}
/// Content type for directories (DAG structures)
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DirectoryListing {
/// Directory entries
pub entries: Vec<DirectoryEntry>,
}
/// Single directory entry
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DirectoryEntry {
/// Entry name
pub name: String,
/// CID of the entry
pub cid: ContentId,
/// Size in bytes
pub size: u64,
/// True if this is a directory
pub is_directory: bool,
}

View file

@ -0,0 +1,284 @@
//! Content Resolver - Fetches content from storage network
//!
//! Resolves CIDs by contacting storage nodes, fetching chunks,
//! and reassembling content.
use crate::cid::ContentId;
use crate::chunker::Chunk;
use crate::error::{Error, Result};
use std::collections::HashMap;
use std::sync::Arc;
use tokio::sync::RwLock;
/// Content resolver configuration
#[derive(Debug, Clone)]
pub struct ResolverConfig {
/// Storage node endpoints
pub nodes: Vec<String>,
/// Request timeout in seconds
pub timeout_secs: u64,
/// Maximum concurrent requests
pub max_concurrent: usize,
/// Retry count for failed requests
pub retries: u32,
}
impl Default for ResolverConfig {
fn default() -> Self {
Self {
nodes: vec![],
timeout_secs: 30,
max_concurrent: 10,
retries: 3,
}
}
}
/// Resolves content from the storage network
pub struct ContentResolver {
/// Storage node endpoints
nodes: Vec<String>,
/// Node health status
node_health: Arc<RwLock<HashMap<String, NodeHealth>>>,
}
/// Node health tracking
#[derive(Debug, Clone)]
struct NodeHealth {
/// Successful requests
successes: u64,
/// Failed requests
failures: u64,
/// Last response time (ms)
latency_ms: u32,
/// Last check timestamp
last_check: u64,
}
impl Default for NodeHealth {
fn default() -> Self {
Self {
successes: 0,
failures: 0,
latency_ms: 0,
last_check: 0,
}
}
}
impl ContentResolver {
/// Create a new resolver
pub fn new(nodes: Vec<String>) -> Self {
Self {
nodes,
node_health: Arc::new(RwLock::new(HashMap::new())),
}
}
/// Resolve content by CID
pub async fn resolve(&self, cid: &ContentId) -> Result<Vec<u8>> {
if self.nodes.is_empty() {
return Err(Error::Network("No storage nodes configured".to_string()));
}
// Find providers for this CID
let providers = self.find_providers(cid).await?;
if providers.is_empty() {
return Err(Error::NotFound(format!(
"No providers found for CID {}",
cid.to_string_repr()
)));
}
// For small content, try direct fetch
if cid.size <= 1024 * 1024 {
return self.fetch_direct(cid, &providers).await;
}
// Fetch metadata to determine chunk count
let metadata = self.fetch_metadata(cid, &providers).await?;
// Fetch all chunks and reassemble
let mut all_data = Vec::with_capacity(cid.size as usize);
for chunk_index in 0..metadata.chunk_count {
let chunk_data = self.fetch_chunk(cid, chunk_index, &providers).await?;
all_data.extend_from_slice(&chunk_data);
}
// Trim to exact size (last chunk might have padding)
all_data.truncate(cid.size as usize);
// Verify CID matches
if !cid.verify(&all_data) {
return Err(Error::ReassemblyFailed(
"Content hash does not match CID".to_string(),
));
}
Ok(all_data)
}
/// Fetch small content directly
async fn fetch_direct(&self, cid: &ContentId, providers: &[String]) -> Result<Vec<u8>> {
for provider in providers {
match self.fetch_content_from_provider(provider, cid).await {
Ok(data) => {
// Verify before returning
if cid.verify(&data) {
return Ok(data);
}
}
Err(_) => continue,
}
}
Err(Error::NotFound(format!(
"Could not fetch content for CID {}",
cid.to_string_repr()
)))
}
/// Fetch content from a single provider
async fn fetch_content_from_provider(
&self,
_provider: &str,
_cid: &ContentId,
) -> Result<Vec<u8>> {
// TODO: HTTP request to provider
// GET {provider}/content/{cid}
Err(Error::Network("Not implemented".to_string()))
}
/// Find providers for a CID
async fn find_providers(&self, _cid: &ContentId) -> Result<Vec<String>> {
// TODO: Query DHT for providers
// For now, return all configured nodes
Ok(self.nodes.clone())
}
/// Fetch content metadata
async fn fetch_metadata(&self, cid: &ContentId, providers: &[String]) -> Result<ContentMetadata> {
for provider in providers {
match self.query_metadata(provider, cid).await {
Ok(metadata) => return Ok(metadata),
Err(_) => continue, // Try next provider
}
}
Err(Error::NotFound("Could not fetch metadata".to_string()))
}
/// Query metadata from a single provider
async fn query_metadata(&self, _provider: &str, cid: &ContentId) -> Result<ContentMetadata> {
// TODO: HTTP request to provider
// For now, calculate from size
let chunk_size = 1024 * 1024; // 1 MB
let chunk_count = ((cid.size as usize) + chunk_size - 1) / chunk_size;
Ok(ContentMetadata {
size: cid.size,
chunk_count,
chunk_size,
})
}
/// Fetch a single chunk
async fn fetch_chunk(
&self,
cid: &ContentId,
chunk_index: usize,
providers: &[String],
) -> Result<Vec<u8>> {
// Try to fetch from multiple providers
for provider in providers {
match self.fetch_chunk_from_provider(provider, cid, chunk_index).await {
Ok(data) => return Ok(data),
Err(_) => continue,
}
}
Err(Error::NotFound(format!(
"Could not fetch chunk {} for CID {}",
chunk_index,
cid.to_string_repr()
)))
}
/// Fetch chunk from a single provider
async fn fetch_chunk_from_provider(
&self,
_provider: &str,
_cid: &ContentId,
_chunk_index: usize,
) -> Result<Vec<u8>> {
// TODO: HTTP request to provider
// GET {provider}/chunks/{cid}/{chunk_index}
Err(Error::Network("Not implemented".to_string()))
}
/// Update node health stats
async fn update_health(&self, node: &str, success: bool, latency_ms: u32) {
let mut health = self.node_health.write().await;
let entry = health.entry(node.to_string()).or_default();
if success {
entry.successes += 1;
} else {
entry.failures += 1;
}
entry.latency_ms = latency_ms;
entry.last_check = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs();
}
/// Get nodes sorted by health
pub async fn sorted_nodes(&self) -> Vec<String> {
let health = self.node_health.read().await;
let mut nodes: Vec<_> = self.nodes.iter().cloned().collect();
nodes.sort_by(|a, b| {
let health_a = health.get(a);
let health_b = health.get(b);
match (health_a, health_b) {
(Some(ha), Some(hb)) => {
// Sort by success rate, then latency
let rate_a = if ha.successes + ha.failures > 0 {
ha.successes as f64 / (ha.successes + ha.failures) as f64
} else {
0.5
};
let rate_b = if hb.successes + hb.failures > 0 {
hb.successes as f64 / (hb.successes + hb.failures) as f64
} else {
0.5
};
rate_b
.partial_cmp(&rate_a)
.unwrap_or(std::cmp::Ordering::Equal)
.then_with(|| ha.latency_ms.cmp(&hb.latency_ms))
}
(Some(_), None) => std::cmp::Ordering::Less,
(None, Some(_)) => std::cmp::Ordering::Greater,
(None, None) => std::cmp::Ordering::Equal,
}
});
nodes
}
}
/// Content metadata from storage network
#[derive(Debug, Clone)]
struct ContentMetadata {
/// Total size in bytes
size: u64,
/// Number of chunks
chunk_count: usize,
/// Size of each chunk
chunk_size: usize,
}

View file

@ -1,109 +1,69 @@
//! RocksDB storage layer for Synor blockchain.
//! Synor Storage Layer
//!
//! This crate provides persistent storage for:
//! - Blocks and block headers
//! - Transactions
//! - UTXO set
//! - DAG relations and GHOSTDAG data
//! - State snapshots
//! - Smart contract bytecode and state
//! Decentralized storage network for the Synor blockchain ecosystem.
//! Enables permanent, censorship-resistant storage of any file type.
//!
//! # Column Families
//! # Architecture
//!
//! The database is organized into column families:
//! - `headers`: Block headers indexed by hash
//! - `blocks`: Full block bodies indexed by hash
//! - `transactions`: Transactions indexed by txid
//! - `utxos`: UTXO entries indexed by outpoint
//! - `relations`: DAG parent-child relations
//! - `ghostdag`: GHOSTDAG data (blue scores, merge sets)
//! - `reachability`: Reachability intervals
//! - `metadata`: Chain metadata (tips, pruning point, etc.)
//! - `contracts`: Contract bytecode indexed by contract ID
//! - `contract_state`: Contract storage (key-value pairs per contract)
//! The storage layer operates as L2 alongside the Synor blockchain (L1):
//! - **Content Addressing**: Files identified by cryptographic hash (CID)
//! - **Erasure Coding**: Data split with redundancy for fault tolerance
//! - **Storage Proofs**: Nodes prove they're storing data to earn rewards
//! - **Gateways**: HTTP access for web browsers
//!
//! # Example
//!
//! ```rust,ignore
//! use synor_storage::{ContentId, Chunker, StorageClient};
//!
//! // Upload a file
//! let data = std::fs::read("myfile.zip")?;
//! let cid = ContentId::from_content(&data);
//! client.store(cid, &data).await?;
//!
//! // Retrieve by CID
//! let retrieved = client.retrieve(&cid).await?;
//! ```
#![allow(dead_code)]
pub mod cid;
pub mod chunker;
pub mod erasure;
pub mod proof;
pub mod deal;
pub mod error;
pub mod cache;
pub mod db;
pub mod stores;
// Node module - storage node implementation
pub mod node;
pub use cache::{CacheConfig, CacheSizeInfo, CacheStats, StorageCache};
pub use db::{Database, DatabaseConfig, DbError};
pub use stores::{
BatchStore, BlockBody, BlockStore, ChainState, ContractStateStore, ContractStore,
GhostdagStore, HeaderStore, MetadataStore, RelationsStore, StoredContract, StoredGhostdagData,
StoredRelations, StoredUtxo, TransactionStore, UtxoStore,
};
// Gateway module - HTTP access to stored content
pub mod gateway;
/// Column family names.
pub mod cf {
pub const HEADERS: &str = "headers";
pub const BLOCKS: &str = "blocks";
pub const TRANSACTIONS: &str = "transactions";
pub const UTXOS: &str = "utxos";
pub const RELATIONS: &str = "relations";
pub const GHOSTDAG: &str = "ghostdag";
pub const REACHABILITY: &str = "reachability";
pub const METADATA: &str = "metadata";
/// Contract bytecode storage.
pub const CONTRACTS: &str = "contracts";
/// Contract state storage (key-value per contract).
pub const CONTRACT_STATE: &str = "contract_state";
pub use cid::ContentId;
pub use chunker::{Chunk, Chunker};
pub use error::{Error, Result};
pub use node::{NodeConfig, StorageNode, NodeState, NodeStats};
pub use gateway::{Gateway, GatewayConfig, GatewayStats};
/// All column family names.
pub fn all() -> Vec<&'static str> {
vec![
HEADERS,
BLOCKS,
TRANSACTIONS,
UTXOS,
RELATIONS,
GHOSTDAG,
REACHABILITY,
METADATA,
CONTRACTS,
CONTRACT_STATE,
]
}
/// Storage layer configuration
#[derive(Debug, Clone)]
pub struct StorageConfig {
/// Chunk size in bytes (default: 1MB)
pub chunk_size: usize,
/// Number of data shards for erasure coding
pub data_shards: usize,
/// Number of parity shards for erasure coding
pub parity_shards: usize,
/// Replication factor (copies per shard)
pub replication_factor: usize,
}
/// Key prefixes for different data types within column families.
pub mod keys {
/// Prefix for block height -> hash index.
pub const HEIGHT_TO_HASH: u8 = 0x01;
/// Prefix for tip blocks.
pub const TIPS: u8 = 0x02;
/// Prefix for pruning point.
pub const PRUNING_POINT: u8 = 0x03;
/// Prefix for chain state.
pub const CHAIN_STATE: u8 = 0x04;
/// Creates a prefixed key.
pub fn prefixed_key(prefix: u8, data: &[u8]) -> Vec<u8> {
let mut key = Vec::with_capacity(1 + data.len());
key.push(prefix);
key.extend_from_slice(data);
key
}
/// Encodes a u64 as big-endian bytes for lexicographic ordering.
pub fn encode_u64(value: u64) -> [u8; 8] {
value.to_be_bytes()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_column_families() {
let cfs = cf::all();
assert_eq!(cfs.len(), 10);
assert!(cfs.contains(&cf::HEADERS));
assert!(cfs.contains(&cf::UTXOS));
assert!(cfs.contains(&cf::CONTRACTS));
assert!(cfs.contains(&cf::CONTRACT_STATE));
impl Default for StorageConfig {
fn default() -> Self {
Self {
chunk_size: 1024 * 1024, // 1 MB
data_shards: 10,
parity_shards: 4,
replication_factor: 3,
}
}
}

View file

@ -0,0 +1,279 @@
//! Storage Node - Stores and serves data on the Synor network
//!
//! Storage nodes are the backbone of decentralized storage:
//! - Store erasure-coded chunks from deals
//! - Respond to proof challenges from L1
//! - Participate in P2P network for data distribution
//! - Earn rewards for successful storage
mod store;
mod network;
mod prover;
pub use store::ChunkStore;
pub use network::StorageNetwork;
pub use prover::ProofSubmitter;
use crate::cid::ContentId;
use crate::deal::{StorageDeal, StorageOffer};
use crate::erasure::Shard;
use crate::proof::{Challenge, StorageProof};
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tokio::sync::RwLock;
/// Storage node configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NodeConfig {
/// Node identity (public key)
pub node_id: [u8; 32],
/// Storage capacity in bytes
pub capacity: u64,
/// Minimum price per byte per epoch
pub min_price: u64,
/// Maximum price per byte per epoch
pub max_price: u64,
/// Data directory for chunk storage
pub data_dir: String,
/// Regions this node serves
pub regions: Vec<String>,
/// P2P listen addresses
pub listen_addrs: Vec<String>,
/// Bootstrap nodes for P2P discovery
pub bootstrap_nodes: Vec<String>,
/// L1 RPC endpoint for proof submission
pub l1_rpc: String,
/// Node's stake on L1 (for slashing)
pub stake: u64,
}
impl Default for NodeConfig {
fn default() -> Self {
Self {
node_id: [0u8; 32],
capacity: 100 * 1024 * 1024 * 1024, // 100 GB default
min_price: 1,
max_price: 10,
data_dir: "./synor-storage-data".to_string(),
regions: vec!["global".to_string()],
listen_addrs: vec!["/ip4/0.0.0.0/tcp/4001".to_string()],
bootstrap_nodes: vec![],
l1_rpc: "http://localhost:8545".to_string(),
stake: 0,
}
}
}
/// Storage node state
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum NodeState {
/// Starting up
Starting,
/// Syncing with network
Syncing,
/// Fully operational
Ready,
/// Temporarily offline
Offline,
/// Shutting down
ShuttingDown,
}
/// Storage node statistics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct NodeStats {
/// Total bytes stored
pub bytes_stored: u64,
/// Number of active deals
pub active_deals: u64,
/// Successful proofs submitted
pub proofs_submitted: u64,
/// Failed proofs
pub proofs_failed: u64,
/// Total earnings (atomic SYNOR)
pub total_earnings: u64,
/// Uptime in seconds
pub uptime_seconds: u64,
/// Connected peers
pub connected_peers: u32,
}
/// A running storage node
pub struct StorageNode {
/// Node configuration
config: NodeConfig,
/// Current state
state: Arc<RwLock<NodeState>>,
/// Node statistics
stats: Arc<RwLock<NodeStats>>,
/// Local chunk storage
chunk_store: Arc<ChunkStore>,
/// Active deals this node is serving
deals: Arc<RwLock<Vec<StorageDeal>>>,
/// Our storage offer
offer: Arc<RwLock<StorageOffer>>,
}
impl StorageNode {
/// Create a new storage node
pub async fn new(config: NodeConfig) -> Result<Self> {
// Initialize chunk store
let chunk_store = Arc::new(ChunkStore::new(&config.data_dir)?);
// Create initial offer
let used_capacity = chunk_store.used_capacity();
let available = config.capacity.saturating_sub(used_capacity);
let offer = StorageOffer {
provider: config.node_id,
available_capacity: available,
price_per_byte_epoch: config.min_price,
min_duration: 100, // ~100 blocks minimum
max_duration: 1000000, // ~1M blocks max (~1 year at 6s blocks)
regions: config.regions.clone(),
stake: config.stake,
uptime: 100,
success_rate: 100,
};
Ok(Self {
config,
state: Arc::new(RwLock::new(NodeState::Starting)),
stats: Arc::new(RwLock::new(NodeStats::default())),
chunk_store,
deals: Arc::new(RwLock::new(Vec::new())),
offer: Arc::new(RwLock::new(offer)),
})
}
/// Start the storage node
pub async fn start(&self) -> Result<()> {
*self.state.write().await = NodeState::Syncing;
// TODO: Start P2P network
// TODO: Connect to L1 for challenge monitoring
// TODO: Start proof submission loop
*self.state.write().await = NodeState::Ready;
Ok(())
}
/// Stop the storage node gracefully
pub async fn stop(&self) -> Result<()> {
*self.state.write().await = NodeState::ShuttingDown;
// TODO: Stop accepting new deals
// TODO: Finish pending proof submissions
// TODO: Disconnect from P2P network
*self.state.write().await = NodeState::Offline;
Ok(())
}
/// Get current node state
pub async fn state(&self) -> NodeState {
*self.state.read().await
}
/// Get node statistics
pub async fn stats(&self) -> NodeStats {
self.stats.read().await.clone()
}
/// Store a shard from an accepted deal
pub async fn store_shard(&self, deal_id: [u8; 32], shard: Shard) -> Result<()> {
// Verify we have accepted this deal
let deals = self.deals.read().await;
if !deals.iter().any(|d| d.deal_id == deal_id) {
return Err(Error::InvalidDeal("Deal not found".to_string()));
}
drop(deals);
// Store the shard
self.chunk_store.store_shard(&deal_id, &shard).await?;
// Update stats
let mut stats = self.stats.write().await;
stats.bytes_stored += shard.data.len() as u64;
// Update available capacity in offer
let mut offer = self.offer.write().await;
offer.available_capacity = offer.available_capacity.saturating_sub(shard.data.len() as u64);
Ok(())
}
/// Retrieve a shard
pub async fn get_shard(&self, deal_id: &[u8; 32], shard_index: u8) -> Result<Shard> {
self.chunk_store.get_shard(deal_id, shard_index).await
}
/// Accept a storage deal
pub async fn accept_deal(&self, deal: StorageDeal) -> Result<()> {
// Verify we have capacity
let offer = self.offer.read().await;
if offer.available_capacity < deal.size {
return Err(Error::InsufficientCapacity);
}
drop(offer);
// Add deal to active deals
let mut deals = self.deals.write().await;
deals.push(deal);
// Update stats
let mut stats = self.stats.write().await;
stats.active_deals += 1;
Ok(())
}
/// Generate proof for a challenge
pub async fn generate_proof(&self, challenge: Challenge) -> Result<StorageProof> {
// Find the deal for this CID
let deals = self.deals.read().await;
let deal = deals
.iter()
.find(|d| d.cid == challenge.cid)
.ok_or_else(|| Error::NotFound("CID not stored".to_string()))?;
// Get the challenged shard/chunk
let shard = self.chunk_store.get_shard(&deal.deal_id, challenge.chunk_index as u8).await?;
// Build Merkle proof for the requested byte range
let proof_data = shard.data
.get(challenge.byte_offset as usize..(challenge.byte_offset + challenge.byte_length) as usize)
.ok_or_else(|| Error::ProofVerificationFailed("Byte range out of bounds".to_string()))?
.to_vec();
// Build Merkle tree for proof
let tree = crate::proof::build_merkle_tree(&shard.data, 1024);
let leaf_index = challenge.byte_offset as usize / 1024;
let merkle_proof = tree.proof(leaf_index);
Ok(StorageProof {
node_id: self.config.node_id,
challenge: challenge.clone(),
chunk_merkle_root: tree.root(),
proof_data,
merkle_proof,
timestamp: std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_secs(),
signature: vec![], // TODO: Sign with node key
})
}
/// Get current storage offer
pub async fn offer(&self) -> StorageOffer {
self.offer.read().await.clone()
}
/// Update storage offer pricing
pub async fn update_pricing(&self, min_price: u64, max_price: u64) {
let mut offer = self.offer.write().await;
offer.price_per_byte_epoch = min_price;
}
}

View file

@ -0,0 +1,238 @@
//! Storage Network - P2P layer for storage nodes
//!
//! Handles peer discovery, data transfer, and network coordination.
//! Built on libp2p when the 'node' feature is enabled.
use crate::cid::ContentId;
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Peer information
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PeerInfo {
/// Peer ID (derived from public key)
pub peer_id: [u8; 32],
/// Multiaddresses for this peer
pub addresses: Vec<String>,
/// Regions served
pub regions: Vec<String>,
/// Available capacity
pub available_capacity: u64,
/// Last seen timestamp
pub last_seen: u64,
/// Reputation score (0-100)
pub reputation: u8,
}
/// Network message types
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum NetworkMessage {
/// Request to retrieve a shard
GetShard {
deal_id: [u8; 32],
shard_index: u8,
},
/// Response with shard data
ShardResponse {
deal_id: [u8; 32],
shard_index: u8,
#[serde(with = "serde_bytes")]
data: Vec<u8>,
},
/// Announce storage availability
AnnounceCapacity {
available: u64,
price_per_byte: u64,
regions: Vec<String>,
},
/// Request to find providers for a CID
FindProviders {
cid: ContentId,
},
/// Response with provider list
ProvidersFound {
cid: ContentId,
providers: Vec<[u8; 32]>,
},
/// Gossip about a new deal
NewDeal {
deal_id: [u8; 32],
cid: ContentId,
size: u64,
},
}
/// Storage network abstraction
pub struct StorageNetwork {
/// Our peer ID
local_peer_id: [u8; 32],
/// Known peers
peers: HashMap<[u8; 32], PeerInfo>,
/// CID to providers mapping (content routing)
providers: HashMap<ContentId, Vec<[u8; 32]>>,
/// Network state
state: NetworkState,
}
/// Network state
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum NetworkState {
/// Not connected
Disconnected,
/// Connecting to bootstrap nodes
Bootstrapping,
/// Fully connected
Connected,
}
impl StorageNetwork {
/// Create a new storage network
pub fn new(local_peer_id: [u8; 32]) -> Self {
Self {
local_peer_id,
peers: HashMap::new(),
providers: HashMap::new(),
state: NetworkState::Disconnected,
}
}
/// Start the network
#[cfg(feature = "node")]
pub async fn start(&mut self, listen_addrs: &[String], bootstrap: &[String]) -> Result<()> {
self.state = NetworkState::Bootstrapping;
// TODO: Initialize libp2p swarm
// - Create noise transport for encryption
// - Create yamux multiplexer
// - Set up Kademlia DHT for peer discovery
// - Set up gossipsub for deal announcements
// - Connect to bootstrap nodes
self.state = NetworkState::Connected;
Ok(())
}
/// Start the network (stub for non-node builds)
#[cfg(not(feature = "node"))]
pub async fn start(&mut self, _listen_addrs: &[String], _bootstrap: &[String]) -> Result<()> {
self.state = NetworkState::Connected;
Ok(())
}
/// Stop the network
pub async fn stop(&mut self) {
self.state = NetworkState::Disconnected;
}
/// Get network state
pub fn state(&self) -> NetworkState {
self.state
}
/// Add a known peer
pub fn add_peer(&mut self, info: PeerInfo) {
self.peers.insert(info.peer_id, info);
}
/// Remove a peer
pub fn remove_peer(&mut self, peer_id: &[u8; 32]) {
self.peers.remove(peer_id);
}
/// Get peer info
pub fn get_peer(&self, peer_id: &[u8; 32]) -> Option<&PeerInfo> {
self.peers.get(peer_id)
}
/// Get all connected peers
pub fn connected_peers(&self) -> Vec<&PeerInfo> {
self.peers.values().collect()
}
/// Find providers for a CID
pub async fn find_providers(&self, cid: &ContentId) -> Vec<[u8; 32]> {
self.providers.get(cid).cloned().unwrap_or_default()
}
/// Announce that we provide a CID
pub async fn announce_provider(&mut self, cid: ContentId) {
let providers = self.providers.entry(cid).or_default();
if !providers.contains(&self.local_peer_id) {
providers.push(self.local_peer_id);
}
}
/// Request a shard from a peer
pub async fn request_shard(
&self,
peer_id: &[u8; 32],
deal_id: [u8; 32],
shard_index: u8,
) -> Result<Vec<u8>> {
let _peer = self.peers.get(peer_id).ok_or_else(|| {
Error::Network(format!("Peer not found: {:?}", hex::encode(peer_id)))
})?;
// TODO: Send GetShard message via libp2p
// For now, return error as network not fully implemented
Err(Error::Network("Network request not implemented".to_string()))
}
/// Broadcast a deal announcement
pub async fn announce_deal(&self, deal_id: [u8; 32], cid: ContentId, size: u64) -> Result<()> {
let _msg = NetworkMessage::NewDeal { deal_id, cid, size };
// TODO: Broadcast via gossipsub
Ok(())
}
/// Update reputation for a peer
pub fn update_reputation(&mut self, peer_id: &[u8; 32], delta: i8) {
if let Some(peer) = self.peers.get_mut(peer_id) {
peer.reputation = peer.reputation.saturating_add_signed(delta);
}
}
}
/// Content routing - find where data is stored
pub struct ContentRouter {
/// Local provider records
records: HashMap<ContentId, Vec<[u8; 32]>>,
}
impl ContentRouter {
/// Create a new content router
pub fn new() -> Self {
Self {
records: HashMap::new(),
}
}
/// Add a provider record
pub fn add_provider(&mut self, cid: ContentId, provider: [u8; 32]) {
let providers = self.records.entry(cid).or_default();
if !providers.contains(&provider) {
providers.push(provider);
}
}
/// Remove a provider record
pub fn remove_provider(&mut self, cid: &ContentId, provider: &[u8; 32]) {
if let Some(providers) = self.records.get_mut(cid) {
providers.retain(|p| p != provider);
}
}
/// Find providers for a CID
pub fn find_providers(&self, cid: &ContentId) -> &[[u8; 32]] {
self.records.get(cid).map(|v| v.as_slice()).unwrap_or(&[])
}
}
impl Default for ContentRouter {
fn default() -> Self {
Self::new()
}
}

View file

@ -0,0 +1,268 @@
//! Proof Submitter - Submits storage proofs to L1
//!
//! Monitors L1 for challenges and submits proofs.
//! Failed proofs result in slashing of provider stake.
use crate::proof::{Challenge, StorageProof};
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
/// Challenge status
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum ChallengeStatus {
/// Challenge received, generating proof
Pending,
/// Proof generated, awaiting submission
ProofReady,
/// Proof submitted to L1
Submitted,
/// Proof verified on L1
Verified,
/// Proof failed verification
Failed,
/// Challenge expired (missed deadline)
Expired,
}
/// A tracked challenge
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TrackedChallenge {
/// The challenge
pub challenge: Challenge,
/// Current status
pub status: ChallengeStatus,
/// Generated proof (if ready)
pub proof: Option<StorageProof>,
/// L1 block when challenge was received
pub received_block: u64,
/// L1 block deadline for submission
pub deadline_block: u64,
/// Transaction hash if submitted
pub tx_hash: Option<[u8; 32]>,
/// Error message if failed
pub error: Option<String>,
}
/// Proof submission configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ProverConfig {
/// L1 RPC endpoint
pub l1_rpc: String,
/// Node private key for signing
pub node_key: [u8; 32],
/// Maximum gas price (in wei)
pub max_gas_price: u64,
/// Proof submission deadline buffer (blocks before deadline)
pub deadline_buffer: u64,
}
impl Default for ProverConfig {
fn default() -> Self {
Self {
l1_rpc: "http://localhost:8545".to_string(),
node_key: [0u8; 32],
max_gas_price: 100_000_000_000, // 100 gwei
deadline_buffer: 10,
}
}
}
/// Proof submitter service
pub struct ProofSubmitter {
/// Configuration
config: ProverConfig,
/// Active challenges being processed
challenges: HashMap<[u8; 32], TrackedChallenge>,
/// Proof submission statistics
stats: ProverStats,
}
/// Prover statistics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ProverStats {
/// Total challenges received
pub challenges_received: u64,
/// Proofs submitted
pub proofs_submitted: u64,
/// Proofs verified
pub proofs_verified: u64,
/// Proofs failed
pub proofs_failed: u64,
/// Challenges expired (missed deadline)
pub challenges_expired: u64,
/// Total gas spent
pub total_gas_spent: u64,
}
impl ProofSubmitter {
/// Create a new proof submitter
pub fn new(config: ProverConfig) -> Self {
Self {
config,
challenges: HashMap::new(),
stats: ProverStats::default(),
}
}
/// Start monitoring for challenges
pub async fn start(&mut self) -> Result<()> {
// TODO: Subscribe to L1 events for storage challenges
// - Listen for ChallengeIssued events
// - Track deadline for each challenge
// - Generate and submit proofs before deadline
Ok(())
}
/// Stop the prover
pub async fn stop(&mut self) -> Result<()> {
// TODO: Complete pending proof submissions
Ok(())
}
/// Add a challenge to track
pub fn add_challenge(&mut self, challenge: Challenge, current_block: u64, deadline_block: u64) {
let challenge_id = self.challenge_id(&challenge);
let tracked = TrackedChallenge {
challenge,
status: ChallengeStatus::Pending,
proof: None,
received_block: current_block,
deadline_block,
tx_hash: None,
error: None,
};
self.challenges.insert(challenge_id, tracked);
self.stats.challenges_received += 1;
}
/// Set proof for a challenge
pub fn set_proof(&mut self, challenge: &Challenge, proof: StorageProof) -> Result<()> {
let challenge_id = self.challenge_id(challenge);
let tracked = self.challenges.get_mut(&challenge_id).ok_or_else(|| {
Error::Internal("Challenge not found".to_string())
})?;
tracked.proof = Some(proof);
tracked.status = ChallengeStatus::ProofReady;
Ok(())
}
/// Submit a proof to L1
pub async fn submit_proof(&mut self, challenge: &Challenge) -> Result<[u8; 32]> {
let challenge_id = self.challenge_id(challenge);
let tracked = self.challenges.get_mut(&challenge_id).ok_or_else(|| {
Error::Internal("Challenge not found".to_string())
})?;
if tracked.proof.is_none() {
return Err(Error::Internal("Proof not ready".to_string()));
}
// TODO: Submit proof transaction to L1
// 1. Encode proof as L1 transaction data
// 2. Sign with node key
// 3. Submit to L1 RPC
// 4. Wait for transaction receipt
// Placeholder tx hash
let tx_hash = *blake3::hash(challenge_id.as_slice()).as_bytes();
tracked.tx_hash = Some(tx_hash);
tracked.status = ChallengeStatus::Submitted;
self.stats.proofs_submitted += 1;
Ok(tx_hash)
}
/// Mark a challenge as verified
pub fn mark_verified(&mut self, challenge: &Challenge) {
let challenge_id = self.challenge_id(challenge);
if let Some(tracked) = self.challenges.get_mut(&challenge_id) {
tracked.status = ChallengeStatus::Verified;
self.stats.proofs_verified += 1;
}
}
/// Mark a challenge as failed
pub fn mark_failed(&mut self, challenge: &Challenge, error: String) {
let challenge_id = self.challenge_id(challenge);
if let Some(tracked) = self.challenges.get_mut(&challenge_id) {
tracked.status = ChallengeStatus::Failed;
tracked.error = Some(error);
self.stats.proofs_failed += 1;
}
}
/// Check for expired challenges
pub fn check_expired(&mut self, current_block: u64) -> Vec<[u8; 32]> {
let mut expired = Vec::new();
for (id, tracked) in &mut self.challenges {
if tracked.status == ChallengeStatus::Pending
|| tracked.status == ChallengeStatus::ProofReady
{
if current_block >= tracked.deadline_block {
tracked.status = ChallengeStatus::Expired;
self.stats.challenges_expired += 1;
expired.push(*id);
}
}
}
expired
}
/// Get pending challenges that need proofs
pub fn pending_challenges(&self) -> Vec<&Challenge> {
self.challenges
.values()
.filter(|t| t.status == ChallengeStatus::Pending)
.map(|t| &t.challenge)
.collect()
}
/// Get challenges ready for submission
pub fn ready_for_submission(&self) -> Vec<&Challenge> {
self.challenges
.values()
.filter(|t| t.status == ChallengeStatus::ProofReady)
.map(|t| &t.challenge)
.collect()
}
/// Get prover statistics
pub fn stats(&self) -> &ProverStats {
&self.stats
}
/// Generate unique ID for a challenge
fn challenge_id(&self, challenge: &Challenge) -> [u8; 32] {
let mut data = Vec::new();
data.extend_from_slice(&challenge.block_hash);
data.extend_from_slice(&challenge.epoch.to_le_bytes());
data.extend_from_slice(&challenge.cid.digest);
data.extend_from_slice(&challenge.chunk_index.to_le_bytes());
*blake3::hash(&data).as_bytes()
}
/// Clean up old completed/failed challenges
pub fn cleanup(&mut self, max_age_blocks: u64, current_block: u64) {
self.challenges.retain(|_, tracked| {
match tracked.status {
ChallengeStatus::Verified | ChallengeStatus::Failed | ChallengeStatus::Expired => {
current_block - tracked.received_block < max_age_blocks
}
_ => true,
}
});
}
}

View file

@ -0,0 +1,238 @@
//! Chunk Store - Persistent storage for erasure-coded shards
//!
//! Uses the filesystem (or RocksDB when 'node' feature is enabled)
//! to store shards organized by deal ID.
use crate::erasure::Shard;
use crate::error::{Error, Result};
use std::path::{Path, PathBuf};
use tokio::fs;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
/// Persistent storage for shards
pub struct ChunkStore {
/// Base directory for storage
base_dir: PathBuf,
/// Bytes currently used
used_bytes: std::sync::atomic::AtomicU64,
}
impl ChunkStore {
/// Create a new chunk store
pub fn new(base_dir: &str) -> Result<Self> {
let path = PathBuf::from(base_dir);
// Create directory synchronously for initialization
std::fs::create_dir_all(&path)?;
// Calculate used capacity from existing files
let used = Self::calculate_used_capacity(&path);
Ok(Self {
base_dir: path,
used_bytes: std::sync::atomic::AtomicU64::new(used),
})
}
fn calculate_used_capacity(path: &Path) -> u64 {
let mut total = 0u64;
if let Ok(entries) = std::fs::read_dir(path) {
for entry in entries.flatten() {
if let Ok(metadata) = entry.metadata() {
if metadata.is_file() {
total += metadata.len();
} else if metadata.is_dir() {
total += Self::calculate_used_capacity(&entry.path());
}
}
}
}
total
}
/// Get bytes currently used
pub fn used_capacity(&self) -> u64 {
self.used_bytes.load(std::sync::atomic::Ordering::Relaxed)
}
/// Store a shard
pub async fn store_shard(&self, deal_id: &[u8; 32], shard: &Shard) -> Result<()> {
// Create deal directory
let deal_dir = self.deal_dir(deal_id);
fs::create_dir_all(&deal_dir).await?;
// Shard file path
let shard_path = deal_dir.join(format!("shard_{:03}.bin", shard.index));
// Serialize shard
let encoded = serde_json::to_vec(shard)?;
// Write atomically (write to temp, then rename)
let temp_path = deal_dir.join(format!("shard_{:03}.tmp", shard.index));
let mut file = fs::File::create(&temp_path).await?;
file.write_all(&encoded).await?;
file.sync_all().await?;
fs::rename(&temp_path, &shard_path).await?;
// Update used capacity
self.used_bytes.fetch_add(
encoded.len() as u64,
std::sync::atomic::Ordering::Relaxed,
);
Ok(())
}
/// Retrieve a shard
pub async fn get_shard(&self, deal_id: &[u8; 32], shard_index: u8) -> Result<Shard> {
let shard_path = self.deal_dir(deal_id).join(format!("shard_{:03}.bin", shard_index));
if !shard_path.exists() {
return Err(Error::NotFound(format!(
"Shard {} not found for deal",
shard_index
)));
}
let mut file = fs::File::open(&shard_path).await?;
let mut data = Vec::new();
file.read_to_end(&mut data).await?;
let shard: Shard = serde_json::from_slice(&data)?;
Ok(shard)
}
/// Delete all shards for a deal
pub async fn delete_deal(&self, deal_id: &[u8; 32]) -> Result<()> {
let deal_dir = self.deal_dir(deal_id);
if deal_dir.exists() {
// Calculate size before deletion
let size = Self::calculate_used_capacity(&deal_dir);
// Remove directory
fs::remove_dir_all(&deal_dir).await?;
// Update used capacity
self.used_bytes.fetch_sub(size, std::sync::atomic::Ordering::Relaxed);
}
Ok(())
}
/// List all shard indices for a deal
pub async fn list_shards(&self, deal_id: &[u8; 32]) -> Result<Vec<u8>> {
let deal_dir = self.deal_dir(deal_id);
if !deal_dir.exists() {
return Ok(Vec::new());
}
let mut indices = Vec::new();
let mut entries = fs::read_dir(&deal_dir).await?;
while let Some(entry) = entries.next_entry().await? {
let name = entry.file_name();
let name_str = name.to_string_lossy();
if name_str.starts_with("shard_") && name_str.ends_with(".bin") {
// Parse index from "shard_XXX.bin"
if let Some(idx_str) = name_str.strip_prefix("shard_").and_then(|s| s.strip_suffix(".bin")) {
if let Ok(idx) = idx_str.parse::<u8>() {
indices.push(idx);
}
}
}
}
indices.sort();
Ok(indices)
}
/// Check if a deal exists
pub async fn has_deal(&self, deal_id: &[u8; 32]) -> bool {
self.deal_dir(deal_id).exists()
}
/// Get the directory for a deal
fn deal_dir(&self, deal_id: &[u8; 32]) -> PathBuf {
// Use first 4 bytes as subdirectory for sharding
let subdir = hex::encode(&deal_id[..2]);
let deal_hex = hex::encode(deal_id);
self.base_dir.join(subdir).join(deal_hex)
}
/// Verify shard integrity
pub async fn verify_shard(&self, deal_id: &[u8; 32], shard_index: u8) -> Result<bool> {
let shard = self.get_shard(deal_id, shard_index).await?;
// Verify hash
let computed_hash = *blake3::hash(&shard.data).as_bytes();
Ok(computed_hash == shard.hash)
}
/// Get storage statistics
pub fn stats(&self) -> StoreStats {
StoreStats {
used_bytes: self.used_capacity(),
base_dir: self.base_dir.to_string_lossy().to_string(),
}
}
}
/// Storage statistics
#[derive(Debug, Clone)]
pub struct StoreStats {
/// Bytes used
pub used_bytes: u64,
/// Base directory path
pub base_dir: String,
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_store_and_retrieve_shard() {
let temp_dir = tempfile::tempdir().unwrap();
let store = ChunkStore::new(temp_dir.path().to_str().unwrap()).unwrap();
let deal_id = [1u8; 32];
let shard = Shard {
index: 0,
is_data: true,
data: vec![1, 2, 3, 4, 5],
hash: *blake3::hash(&[1, 2, 3, 4, 5]).as_bytes(),
};
// Store
store.store_shard(&deal_id, &shard).await.unwrap();
// Retrieve
let retrieved = store.get_shard(&deal_id, 0).await.unwrap();
assert_eq!(retrieved.data, shard.data);
assert_eq!(retrieved.hash, shard.hash);
}
#[tokio::test]
async fn test_list_shards() {
let temp_dir = tempfile::tempdir().unwrap();
let store = ChunkStore::new(temp_dir.path().to_str().unwrap()).unwrap();
let deal_id = [2u8; 32];
for i in 0u8..5 {
let shard = Shard {
index: i as usize,
is_data: i < 3,
data: vec![i; 10],
hash: *blake3::hash(&vec![i; 10]).as_bytes(),
};
store.store_shard(&deal_id, &shard).await.unwrap();
}
let indices = store.list_shards(&deal_id).await.unwrap();
assert_eq!(indices, vec![0, 1, 2, 3, 4]);
}
}

View file

@ -0,0 +1,265 @@
//! Storage Proofs - Proof of Spacetime
//!
//! Storage nodes must prove they're actually storing data.
//! Proofs are verified on-chain (L1) for reward distribution.
use crate::cid::ContentId;
use serde::{Deserialize, Serialize};
/// Node identifier (public key)
pub type NodeId = [u8; 32];
/// Storage proof challenge from L1
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Challenge {
/// Block hash used as randomness source
pub block_hash: [u8; 32],
/// Challenge epoch number
pub epoch: u64,
/// CID being challenged
pub cid: ContentId,
/// Random chunk index to prove
pub chunk_index: u32,
/// Random byte range within chunk to return
pub byte_offset: u64,
pub byte_length: u64,
}
impl Challenge {
/// Generate a challenge from block hash and CID
pub fn generate(
block_hash: [u8; 32],
epoch: u64,
cid: ContentId,
total_chunks: u32,
chunk_size: usize,
) -> Self {
// Use block hash to deterministically select chunk and range
let chunk_seed = u64::from_le_bytes(block_hash[0..8].try_into().unwrap());
let chunk_index = (chunk_seed % total_chunks as u64) as u32;
let offset_seed = u64::from_le_bytes(block_hash[8..16].try_into().unwrap());
let byte_offset = offset_seed % (chunk_size as u64 / 2);
let byte_length = 1024.min(chunk_size as u64 - byte_offset); // Max 1KB proof
Self {
block_hash,
epoch,
cid,
chunk_index,
byte_offset,
byte_length,
}
}
}
/// Storage proof submitted by node
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct StorageProof {
/// Node submitting the proof
pub node_id: NodeId,
/// Challenge being answered
pub challenge: Challenge,
/// Merkle root of the full chunk
pub chunk_merkle_root: [u8; 32],
/// Proof data (requested byte range)
#[serde(with = "serde_bytes")]
pub proof_data: Vec<u8>,
/// Merkle proof showing proof_data is part of chunk
pub merkle_proof: Vec<[u8; 32]>,
/// Timestamp of proof generation
pub timestamp: u64,
/// Signature over the proof (64 bytes, stored as Vec for serde compatibility)
#[serde(with = "serde_bytes")]
pub signature: Vec<u8>,
}
impl StorageProof {
/// Verify the proof is valid
pub fn verify(&self) -> bool {
// 1. Verify proof data length matches challenge
if self.proof_data.len() != self.challenge.byte_length as usize {
return false;
}
// 2. Verify Merkle proof
if !self.verify_merkle_proof() {
return false;
}
// 3. Verify signature (TODO: implement with actual crypto)
// self.verify_signature()
true
}
fn verify_merkle_proof(&self) -> bool {
if self.merkle_proof.is_empty() {
return false;
}
// Compute hash of proof data
let mut current = *blake3::hash(&self.proof_data).as_bytes();
// Walk up the Merkle tree
for sibling in &self.merkle_proof {
let mut combined = [0u8; 64];
combined[..32].copy_from_slice(&current);
combined[32..].copy_from_slice(sibling);
current = *blake3::hash(&combined).as_bytes();
}
// Should match chunk Merkle root
current == self.chunk_merkle_root
}
}
/// Build a Merkle tree from chunk data
pub fn build_merkle_tree(data: &[u8], leaf_size: usize) -> MerkleTree {
let leaves: Vec<[u8; 32]> = data
.chunks(leaf_size)
.map(|chunk| *blake3::hash(chunk).as_bytes())
.collect();
MerkleTree::from_leaves(leaves)
}
/// Simple Merkle tree for chunk proofs
#[derive(Debug, Clone)]
pub struct MerkleTree {
/// All nodes in the tree (leaves first, then internal nodes)
nodes: Vec<[u8; 32]>,
/// Number of leaves
leaf_count: usize,
}
impl MerkleTree {
/// Build tree from leaf hashes
pub fn from_leaves(leaves: Vec<[u8; 32]>) -> Self {
let leaf_count = leaves.len();
let mut nodes = leaves;
// Build tree bottom-up
let mut level_start = 0;
let mut level_size = leaf_count;
while level_size > 1 {
let next_level_size = (level_size + 1) / 2;
for i in 0..next_level_size {
let left_idx = level_start + i * 2;
let right_idx = left_idx + 1;
let left = nodes[left_idx];
let right = if right_idx < level_start + level_size {
nodes[right_idx]
} else {
left // Duplicate for odd number
};
let mut combined = [0u8; 64];
combined[..32].copy_from_slice(&left);
combined[32..].copy_from_slice(&right);
nodes.push(*blake3::hash(&combined).as_bytes());
}
level_start += level_size;
level_size = next_level_size;
}
Self { nodes, leaf_count }
}
/// Get the root hash
pub fn root(&self) -> [u8; 32] {
*self.nodes.last().unwrap_or(&[0u8; 32])
}
/// Generate proof for a leaf
pub fn proof(&self, leaf_index: usize) -> Vec<[u8; 32]> {
if leaf_index >= self.leaf_count {
return Vec::new();
}
let mut proof = Vec::new();
let mut idx = leaf_index;
let mut level_start = 0;
let mut level_size = self.leaf_count;
while level_size > 1 {
let sibling_idx = if idx % 2 == 0 {
idx + 1
} else {
idx - 1
};
if sibling_idx < level_size {
proof.push(self.nodes[level_start + sibling_idx]);
} else {
proof.push(self.nodes[level_start + idx]);
}
idx /= 2;
level_start += level_size;
level_size = (level_size + 1) / 2;
}
proof
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_merkle_tree() {
let data = b"AAAABBBBCCCCDDDD";
let tree = build_merkle_tree(data, 4);
assert!(!tree.nodes.is_empty());
assert_eq!(tree.leaf_count, 4);
}
#[test]
fn test_merkle_proof() {
let leaves: Vec<[u8; 32]> = (0..4)
.map(|i| *blake3::hash(&[i]).as_bytes())
.collect();
let tree = MerkleTree::from_leaves(leaves.clone());
let root = tree.root();
// Verify proof for each leaf
for (i, leaf) in leaves.iter().enumerate() {
let proof = tree.proof(i);
// Verify by walking up
let mut current = *leaf;
for (j, sibling) in proof.iter().enumerate() {
let mut combined = [0u8; 64];
if (i >> j) % 2 == 0 {
combined[..32].copy_from_slice(&current);
combined[32..].copy_from_slice(sibling);
} else {
combined[..32].copy_from_slice(sibling);
combined[32..].copy_from_slice(&current);
}
current = *blake3::hash(&combined).as_bytes();
}
assert_eq!(current, root);
}
}
#[test]
fn test_challenge_generation() {
let block_hash = [1u8; 32];
let cid = ContentId::from_content(b"test");
let challenge = Challenge::generate(block_hash, 1, cid, 100, 1024);
assert!(challenge.chunk_index < 100);
assert!(challenge.byte_length <= 1024);
}
}

140
docker-compose.storage.yml Normal file
View file

@ -0,0 +1,140 @@
# Synor Storage Layer - Docker Compose
# Decentralized storage network components
version: '3.9'
services:
# Storage Node 1
storage-node-1:
build:
context: .
dockerfile: docker/storage-node/Dockerfile
container_name: synor-storage-node-1
hostname: storage-node-1
restart: unless-stopped
environment:
- RUST_LOG=info
- NODE_ID=storage-node-1
- L1_RPC=http://synor-node-1:8545
volumes:
- storage-node-1-data:/data/storage
- ./docker/storage-node/config.toml:/config/config.toml:ro
ports:
- "4101:4001" # P2P
- "5101:5001" # API
- "8101:8080" # Gateway
networks:
- synor-storage-net
- synor-testnet
depends_on:
- synor-node-1
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Storage Node 2
storage-node-2:
build:
context: .
dockerfile: docker/storage-node/Dockerfile
container_name: synor-storage-node-2
hostname: storage-node-2
restart: unless-stopped
environment:
- RUST_LOG=info
- NODE_ID=storage-node-2
- L1_RPC=http://synor-node-1:8545
- BOOTSTRAP_NODES=/dns4/storage-node-1/tcp/4001
volumes:
- storage-node-2-data:/data/storage
- ./docker/storage-node/config.toml:/config/config.toml:ro
ports:
- "4102:4001"
- "5102:5001"
- "8102:8080"
networks:
- synor-storage-net
- synor-testnet
depends_on:
- storage-node-1
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Storage Node 3
storage-node-3:
build:
context: .
dockerfile: docker/storage-node/Dockerfile
container_name: synor-storage-node-3
hostname: storage-node-3
restart: unless-stopped
environment:
- RUST_LOG=info
- NODE_ID=storage-node-3
- L1_RPC=http://synor-node-1:8545
- BOOTSTRAP_NODES=/dns4/storage-node-1/tcp/4001
volumes:
- storage-node-3-data:/data/storage
- ./docker/storage-node/config.toml:/config/config.toml:ro
ports:
- "4103:4001"
- "5103:5001"
- "8103:8080"
networks:
- synor-storage-net
- synor-testnet
depends_on:
- storage-node-1
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
# Public Gateway (load-balanced entry point)
storage-gateway:
image: nginx:alpine
container_name: synor-storage-gateway
hostname: storage-gateway
restart: unless-stopped
volumes:
- ./docker/storage-gateway/nginx.conf:/etc/nginx/nginx.conf:ro
ports:
- "8180:80" # Public gateway
- "8181:443" # HTTPS (if configured)
networks:
- synor-storage-net
depends_on:
- storage-node-1
- storage-node-2
- storage-node-3
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost/health"]
interval: 15s
timeout: 5s
retries: 3
volumes:
storage-node-1-data:
driver: local
storage-node-2-data:
driver: local
storage-node-3-data:
driver: local
networks:
synor-storage-net:
driver: bridge
ipam:
config:
- subnet: 172.21.0.0/16
synor-testnet:
external: true

View file

@ -0,0 +1,195 @@
# Synor Storage Gateway - nginx configuration
# Load balances requests across storage nodes
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
# Logging
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
# Performance
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml application/json application/javascript
application/xml application/xml+rss text/javascript application/wasm;
# Rate limiting
limit_req_zone $binary_remote_addr zone=gateway_limit:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
# Storage node upstream (round-robin with health checks)
upstream storage_nodes {
least_conn;
server storage-node-1:8080 weight=1 max_fails=3 fail_timeout=30s;
server storage-node-2:8080 weight=1 max_fails=3 fail_timeout=30s;
server storage-node-3:8080 weight=1 max_fails=3 fail_timeout=30s;
keepalive 32;
}
# Cache configuration
proxy_cache_path /var/cache/nginx/storage levels=1:2
keys_zone=storage_cache:100m max_size=10g
inactive=60m use_temp_path=off;
server {
listen 80;
server_name _;
# Security headers
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "DENY" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# CORS headers
add_header Access-Control-Allow-Origin "*" always;
add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS" always;
add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
# Handle preflight requests
if ($request_method = 'OPTIONS') {
add_header Access-Control-Allow-Origin "*";
add_header Access-Control-Allow-Methods "GET, HEAD, OPTIONS";
add_header Access-Control-Allow-Headers "Content-Type, Authorization";
add_header Access-Control-Max-Age 86400;
add_header Content-Length 0;
add_header Content-Type text/plain;
return 204;
}
# Health check endpoint
location /health {
access_log off;
return 200 'OK';
add_header Content-Type text/plain;
}
# Metrics endpoint (internal only)
location /metrics {
allow 172.21.0.0/16;
deny all;
stub_status;
}
# Gateway info
location /info {
default_type application/json;
return 200 '{"gateway":"synor-storage","version":"0.1.0","nodes":3}';
}
# Content requests by CID (synor1...)
location ~ ^/synor1[A-Za-z0-9]+$ {
limit_req zone=gateway_limit burst=20 nodelay;
limit_conn conn_limit 10;
proxy_pass http://storage_nodes;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
# Caching
proxy_cache storage_cache;
proxy_cache_valid 200 1h;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_lock on;
add_header X-Cache-Status $upstream_cache_status;
# Timeouts
proxy_connect_timeout 10s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 16k;
proxy_busy_buffers_size 32k;
# Max content size (100MB)
client_max_body_size 100m;
}
# Content with subpath (synor1.../path/to/file)
location ~ ^/synor1[A-Za-z0-9]+/.+ {
limit_req zone=gateway_limit burst=20 nodelay;
limit_conn conn_limit 10;
proxy_pass http://storage_nodes;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_cache storage_cache;
proxy_cache_valid 200 1h;
add_header X-Cache-Status $upstream_cache_status;
proxy_connect_timeout 10s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# API endpoints (pass through to nodes)
location /api/ {
limit_req zone=gateway_limit burst=10 nodelay;
proxy_pass http://storage_nodes;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Connection "";
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# Default - 404
location / {
return 404 '{"error":"Not found","hint":"Use /synor1<cid> to fetch content"}';
add_header Content-Type application/json;
}
# Error pages
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
internal;
}
}
}

View file

@ -0,0 +1,74 @@
# Synor Storage Node Dockerfile
# Multi-stage build for optimized production image
# Stage 1: Build
FROM rust:1.83-bookworm AS builder
WORKDIR /build
# Install dependencies for libp2p and rocksdb
RUN apt-get update && apt-get install -y \
cmake \
libclang-dev \
libssl-dev \
pkg-config \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy workspace files
COPY Cargo.toml Cargo.lock ./
COPY crates/ ./crates/
# Build the storage node binary
RUN cargo build --release -p synor-storage --bin synor-storage-node
# Stage 2: Runtime
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y \
ca-certificates \
libssl3 \
curl \
&& rm -rf /var/lib/apt/lists/*
# Create non-root user
RUN useradd -m -u 1000 synor
# Create data directories
RUN mkdir -p /data/storage /config && chown -R synor:synor /data /config
WORKDIR /app
# Copy the built binary
COPY --from=builder /build/target/release/synor-storage-node /app/synor-storage-node
# Copy configuration template
COPY docker/storage-node/config.toml /config/config.toml
# Make binary executable
RUN chmod +x /app/synor-storage-node && chown synor:synor /app/synor-storage-node
USER synor
# Storage node ports
# 4001: P2P (libp2p)
# 5001: API (internal)
# 8080: Gateway (HTTP)
EXPOSE 4001 5001 8080
# Environment defaults
ENV RUST_LOG=info
ENV DATA_DIR=/data/storage
ENV GATEWAY_ENABLED=true
ENV GATEWAY_ADDR=0.0.0.0:8080
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
# Data volume
VOLUME ["/data/storage"]
# Entry point
ENTRYPOINT ["/app/synor-storage-node"]
CMD ["--config", "/config/config.toml", "--data-dir", "/data/storage"]

View file

@ -0,0 +1,88 @@
# Synor Storage Node Configuration
[node]
# Node identity (leave empty to auto-generate)
node_id = ""
# Storage capacity in bytes (default: 100 GB)
capacity = 107374182400
# Pricing (atomic SYNOR units per byte per epoch)
min_price = 1
max_price = 10
# Data directory
data_dir = "/data/storage"
# Regions served (for geographic routing)
regions = ["global"]
# Stake amount on L1 (for slashing)
stake = 0
[network]
# P2P listen addresses
listen_addrs = [
"/ip4/0.0.0.0/tcp/4001",
"/ip4/0.0.0.0/udp/4001/quic-v1"
]
# Bootstrap nodes for peer discovery
bootstrap_nodes = [
# Add mainnet bootstrap nodes here
# "/ip4/1.2.3.4/tcp/4001/p2p/12D3KooW..."
]
# Maximum connections
max_connections = 100
[l1]
# L1 RPC endpoint for proof submission
rpc = "http://synor-node:8545"
# Proof submission settings
proof_gas_limit = 500000
max_gas_price = 100000000000 # 100 gwei
[gateway]
# Enable HTTP gateway
enabled = true
# Gateway listen address
listen_addr = "0.0.0.0:8080"
# Maximum content size to serve (100 MB)
max_content_size = 104857600
# Cache size (1 GB)
cache_size = 1073741824
# Request timeout (seconds)
timeout_secs = 60
# Enable upload endpoint (false for public gateways)
enable_upload = false
# CORS origins
cors_origins = ["*"]
# Rate limit (requests per minute per IP)
rate_limit = 100
[storage]
# Chunk size (1 MB)
chunk_size = 1048576
# Erasure coding
data_shards = 10
parity_shards = 4
# Replication factor
replication_factor = 3
[logging]
# Log level: trace, debug, info, warn, error
level = "info"
# Log format: json, pretty
format = "json"

View file

@ -0,0 +1,613 @@
# Synor Storage Layer Architecture
> Decentralized storage layer for the Synor ecosystem
## Overview
Synor Storage is a **Layer 2 decentralized storage network** that enables permanent, censorship-resistant storage of any file type. It operates alongside the Synor blockchain (L1) for payments and proofs.
```
┌─────────────────────────────────────────────────────────────────────────┐
│ USER APPLICATIONS │
│ Next.js │ React │ Flutter │ Mobile Apps │ OS Images │ Any │
└─────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR GATEWAY LAYER │
│ HTTP Gateway │ IPFS Bridge │ S3-Compatible API │ CLI/SDK │
└─────────────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR STORAGE (L2) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Content │ │ Erasure │ │ Storage │ │ Retrieval │ │
│ │ Addressing │ │ Coding │ │ Proofs │ │ Market │ │
│ │ (CID/Hash) │ │ (Reed-Sol) │ │ (PoST) │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ STORAGE NODE NETWORK │ │
│ │ Node 1 │ Node 2 │ Node 3 │ Node 4 │ ... │ Node N │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
▼ (Proofs & Payments)
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR BLOCKCHAIN (L1) │
│ Transactions │ Smart Contracts │ Storage Registry │ Payments │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Core Components
### 1. Content Addressing (CID)
Every file is identified by its cryptographic hash, not location.
```rust
// Content Identifier structure
pub struct ContentId {
/// Hash algorithm (0x12 = SHA2-256, 0x1B = Keccak-256, 0x27 = Blake3)
pub hash_type: u8,
/// Hash digest
pub digest: [u8; 32],
/// Content size in bytes
pub size: u64,
}
impl ContentId {
/// Create CID from file content
pub fn from_content(data: &[u8]) -> Self {
let digest = blake3::hash(data);
Self {
hash_type: 0x27, // Blake3
digest: *digest.as_bytes(),
size: data.len() as u64,
}
}
/// Encode as string (base58)
pub fn to_string(&self) -> String {
// Format: synor1<base58(hash_type + digest)>
let mut bytes = vec![self.hash_type];
bytes.extend_from_slice(&self.digest);
format!("synor1{}", bs58::encode(&bytes).into_string())
}
}
```
**Example CIDs:**
```
synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG (32KB file)
synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF (4GB file)
```
### 2. File Chunking & Erasure Coding
Large files are split into chunks with redundancy:
```rust
/// Chunk configuration
pub const CHUNK_SIZE: usize = 1024 * 1024; // 1MB chunks
pub const DATA_SHARDS: usize = 10; // Original data pieces
pub const PARITY_SHARDS: usize = 4; // Redundancy pieces
pub const TOTAL_SHARDS: usize = 14; // Total pieces
pub const REPLICATION_FACTOR: usize = 3; // Copies per shard
/// A file is broken into chunks, each chunk is erasure-coded
pub struct StoredFile {
pub cid: ContentId,
pub chunks: Vec<Chunk>,
pub metadata: FileMetadata,
}
pub struct Chunk {
pub index: u32,
pub shards: Vec<Shard>, // 14 shards per chunk
}
pub struct Shard {
pub index: u8,
pub hash: [u8; 32],
pub locations: Vec<NodeId>, // Which nodes store this shard
}
```
**Fault Tolerance:**
- File survives loss of 4 out of 14 shards per chunk
- With 3x replication, can lose 12 of 42 total copies
- Network can lose ~30% of nodes and still recover all data
### 3. Storage Proofs (Proof of Spacetime)
Storage nodes must prove they're actually storing data:
```rust
/// Proof of Spacetime - proves storage over time
pub struct StorageProof {
/// Node proving storage
pub node_id: NodeId,
/// Content being proven
pub cid: ContentId,
/// Proof challenge (random seed from L1 block)
pub challenge: [u8; 32],
/// Merkle proof of random chunk
pub merkle_proof: Vec<[u8; 32]>,
/// Timestamp
pub timestamp: u64,
/// Signature
pub signature: Signature,
}
/// Verification challenge from L1
pub struct Challenge {
/// Block hash used as randomness source
pub block_hash: [u8; 32],
/// Selected chunk index
pub chunk_index: u32,
/// Selected byte range within chunk
pub byte_range: (u64, u64),
}
```
**Proof Flow:**
1. L1 contract emits challenge every epoch (e.g., every 30 minutes)
2. Storage nodes submit proofs for their stored data
3. Failed proofs result in slashing (loss of staked SYNOR)
4. Successful proofs earn storage rewards
### 4. Storage Deals & Economics
```rust
/// Storage deal between user and network
pub struct StorageDeal {
/// Unique deal ID
pub deal_id: [u8; 32],
/// Content being stored
pub cid: ContentId,
/// Client paying for storage
pub client: Address,
/// Storage duration (blocks or seconds)
pub duration: u64,
/// Price per byte per epoch (in SYNOR tokens)
pub price_per_byte: u64,
/// Total collateral locked
pub collateral: u64,
/// Start time
pub start_block: u64,
/// Deal status
pub status: DealStatus,
}
pub enum DealStatus {
Pending, // Awaiting storage node acceptance
Active, // Being stored
Completed, // Duration finished
Failed, // Node failed proofs
}
```
**Pricing Model:**
```
Base Price: 0.0001 SYNOR per MB per month
Example Costs:
- 1 GB for 1 year: ~1.2 SYNOR
- 100 GB for 1 year: ~120 SYNOR
- 1 TB for 1 year: ~1,200 SYNOR
Permanent Storage (one-time fee):
- ~20 years equivalent: 20x monthly cost
- 1 GB permanent: ~24 SYNOR
```
---
## Storage Node Architecture
```rust
/// Storage node configuration
pub struct StorageNodeConfig {
/// Node identity
pub node_id: NodeId,
/// Storage capacity offered (bytes)
pub capacity: u64,
/// Stake amount (SYNOR tokens)
pub stake: u64,
/// Price per byte per epoch
pub price: u64,
/// Supported regions
pub regions: Vec<Region>,
/// L1 RPC endpoint for proofs
pub l1_rpc: String,
/// P2P listen address
pub p2p_addr: String,
/// HTTP gateway address
pub gateway_addr: Option<String>,
}
/// Storage node state
pub struct StorageNode {
/// Configuration
pub config: StorageNodeConfig,
/// Local storage backend
pub storage: Box<dyn StorageBackend>,
/// Active deals
pub deals: HashMap<DealId, StorageDeal>,
/// Peer connections
pub peers: HashMap<NodeId, PeerConnection>,
/// L1 client for submitting proofs
pub l1_client: L1Client,
}
/// Pluggable storage backends
pub trait StorageBackend: Send + Sync {
fn store(&mut self, key: &[u8; 32], data: &[u8]) -> Result<()>;
fn retrieve(&self, key: &[u8; 32]) -> Result<Vec<u8>>;
fn delete(&mut self, key: &[u8; 32]) -> Result<()>;
fn capacity(&self) -> u64;
fn used(&self) -> u64;
}
/// Backend implementations
pub struct FileSystemBackend { root: PathBuf }
pub struct RocksDbBackend { db: rocksdb::DB }
pub struct S3Backend { client: aws_sdk_s3::Client, bucket: String }
```
---
## Gateway Layer
### HTTP Gateway
Serves content over HTTP for web browsers:
```rust
/// Gateway routes
/// GET /ipfs/{cid} - Retrieve by CID (IPFS compatibility)
/// GET /synor/{cid} - Retrieve by Synor CID
/// GET /{name}.synor - Resolve Synor name to CID
/// POST /upload - Upload file, returns CID
/// GET /pins - List pinned content
/// POST /pin/{cid} - Pin content (prevent garbage collection)
pub struct Gateway {
/// HTTP server
pub server: HttpServer,
/// Connection to storage nodes
pub storage_client: StorageClient,
/// Name resolution
pub name_resolver: NameResolver,
/// Cache for hot content
pub cache: LruCache<ContentId, Bytes>,
}
/// Name resolution (synor names → CID)
pub struct NameResolver {
/// L1 client for on-chain names
pub l1_client: L1Client,
/// Local cache
pub cache: HashMap<String, ContentId>,
}
```
### S3-Compatible API
For existing tools and workflows:
```
Endpoint: https://s3.synor.cc
# AWS CLI compatible
aws s3 --endpoint-url https://s3.synor.cc cp ./myapp s3://mybucket/
aws s3 --endpoint-url https://s3.synor.cc ls s3://mybucket/
# Returns CID as ETag
# Access via: https://gateway.synor.cc/synor/{cid}
```
### IPFS Bridge
Bidirectional bridge with IPFS:
```rust
/// Import content from IPFS
pub async fn import_from_ipfs(ipfs_cid: &str) -> Result<ContentId> {
let data = ipfs_client.get(ipfs_cid).await?;
let synor_cid = storage_client.store(&data).await?;
Ok(synor_cid)
}
/// Export content to IPFS
pub async fn export_to_ipfs(synor_cid: &ContentId) -> Result<String> {
let data = storage_client.retrieve(synor_cid).await?;
let ipfs_cid = ipfs_client.add(&data).await?;
Ok(ipfs_cid)
}
```
---
## CLI Commands
```bash
# Upload a file
synor storage upload ./myapp.zip
# Output: Uploaded! CID: synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Upload a directory (Next.js build)
synor storage upload ./out --recursive
# Output: Uploaded! CID: synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
# Download content
synor storage download synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG -o ./download.zip
# Pin content (ensure it stays available)
synor storage pin synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG --duration 1y --pay 10
# Register a name
synor names register myapp.synor synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
# Check storage status
synor storage status synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Output:
# CID: synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Size: 45.2 MB
# Replicas: 8/10 nodes
# Health: 100%
# Expires: Never (permanent)
# Regions: US-East, EU-West, Asia-Pacific
# Run a storage node
synor storage node --capacity 100GB --stake 1000 --port 4001
```
---
## Deployment Types
### 1. Static Web Apps (Next.js, React, Vue, etc.)
```bash
# Build and deploy
npm run build
synor storage upload ./dist --recursive
synor names register myapp.synor $CID
# Access via: https://myapp.synor.cc
```
### 2. Mobile Apps (Flutter, React Native)
```bash
# Build APK/IPA
flutter build apk
synor storage upload ./build/app/outputs/flutter-apk/app-release.apk
# Users download from: https://gateway.synor.cc/synor/{cid}
# Or via app store with blockchain verification
```
### 3. Desktop Apps (Tauri, Electron)
```bash
# Build for all platforms
npm run tauri build
# Upload each platform
synor storage upload ./target/release/bundle/macos/MyApp.app.tar.gz
synor storage upload ./target/release/bundle/windows/MyApp.exe
synor storage upload ./target/release/bundle/linux/myapp.deb
# Register versioned names
synor names register myapp-macos.synor $CID_MACOS
synor names register myapp-windows.synor $CID_WINDOWS
synor names register myapp-linux.synor $CID_LINUX
```
### 4. Docker Images
```bash
# Export and upload
docker save myapp:latest | gzip > myapp-latest.tar.gz
synor storage upload ./myapp-latest.tar.gz
# Pull from Synor (requires synor-docker plugin)
docker pull synor://synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
```
### 5. Operating System Images
```bash
# Upload ISO
synor storage upload ./ubuntu-24.04-desktop-amd64.iso
# Output:
# CID: synor1QmVeryLongCIDForLargeFile...
# Size: 4.7 GB
# Cost: ~5.6 SYNOR (permanent storage)
#
# Download URL: https://gateway.synor.cc/synor/synor1QmVery...
# Direct torrent: synor://synor1QmVery...
```
**Note on OS Execution:**
Storage layer STORES the OS image. To RUN it, you need:
- Download and boot on your hardware, OR
- Use future Synor Compute layer for cloud VMs
---
## Smart Contracts for Storage
### StorageRegistry Contract
```rust
/// L1 contract for storage deals
contract StorageRegistry {
/// Create a new storage deal
fn create_deal(
cid: ContentId,
duration: u64,
replication: u8,
) -> DealId;
/// Storage node accepts a deal
fn accept_deal(deal_id: DealId);
/// Submit storage proof
fn submit_proof(proof: StorageProof) -> bool;
/// Claim rewards for successful storage
fn claim_rewards(deal_id: DealId, proofs: Vec<ProofId>);
/// Slash node for failed proof
fn slash(node_id: NodeId, deal_id: DealId);
/// Extend deal duration
fn extend_deal(deal_id: DealId, additional_duration: u64);
}
```
### NameRegistry Contract
```rust
/// L1 contract for Synor names
contract NameRegistry {
/// Register a name (e.g., myapp.synor)
fn register(name: String, cid: ContentId) -> bool;
/// Update name to point to new CID
fn update(name: String, new_cid: ContentId);
/// Transfer name ownership
fn transfer(name: String, new_owner: Address);
/// Resolve name to CID
fn resolve(name: String) -> Option<ContentId>;
/// Reverse lookup: CID to names
fn reverse(cid: ContentId) -> Vec<String>;
}
```
---
## Network Topology
```
┌─────────────────┐
│ L1 Blockchain │
│ (Proofs/Payments)│
└────────┬────────┘
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Gateway │ │ Gateway │ │ Gateway │
│ US-East │ │ EU-West │ │ Asia-Pac │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │
┌────────┴────────┬─────┴─────┬────────┴────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Storage │◄───►│ Storage │◄│ Storage │◄────►│ Storage │
│ Node 1 │ │ Node 2 │ │ Node 3 │ │ Node N │
│ (1 TB) │ │ (500GB) │ │ (2 TB) │ │ (10 TB) │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
│ │ │ │
└───────────────┴───────────┴────────────────┘
P2P Network (libp2p / QUIC)
```
---
## Implementation Phases
### Phase 1: Core Storage (4-6 weeks)
- [ ] Content addressing (CID generation)
- [ ] File chunking & reassembly
- [ ] Basic storage node
- [ ] P2P network (libp2p)
- [ ] CLI upload/download
### Phase 2: Proofs & Economics (4-6 weeks)
- [ ] Storage proof generation
- [ ] L1 StorageRegistry contract
- [ ] Deal creation & management
- [ ] Reward distribution
- [ ] Slashing mechanism
### Phase 3: Gateway & Access (2-3 weeks)
- [ ] HTTP gateway
- [ ] Name resolution (*.synor)
- [ ] S3-compatible API
- [ ] IPFS bridge
### Phase 4: Production Hardening (2-3 weeks)
- [ ] Erasure coding
- [ ] Geographic distribution
- [ ] Cache layer
- [ ] Monitoring & metrics
- [ ] Auto-scaling
---
## Directory Structure
```
crates/
├── synor-storage/ # Core storage library
│ ├── src/
│ │ ├── lib.rs
│ │ ├── cid.rs # Content addressing
│ │ ├── chunker.rs # File chunking
│ │ ├── erasure.rs # Reed-Solomon coding
│ │ ├── proof.rs # Storage proofs
│ │ └── deal.rs # Storage deals
│ └── Cargo.toml
├── synor-storage-node/ # Storage node binary
│ ├── src/
│ │ ├── main.rs
│ │ ├── server.rs # P2P server
│ │ ├── backend.rs # Storage backends
│ │ └── prover.rs # Proof generation
│ └── Cargo.toml
├── synor-gateway/ # HTTP gateway
│ ├── src/
│ │ ├── main.rs
│ │ ├── routes.rs # HTTP endpoints
│ │ ├── resolver.rs # Name resolution
│ │ └── cache.rs # Content cache
│ └── Cargo.toml
└── synor-storage-contracts/ # L1 smart contracts
├── storage_registry.rs
└── name_registry.rs
```
---
## Comparison with Other Storage Networks
| Feature | Synor Storage | IPFS | Filecoin | Arweave |
|---------|---------------|------|----------|---------|
| Persistence | Paid deals | No guarantee | Paid deals | Permanent |
| Consensus | PoST + L1 | None | PoST | SPoRA |
| Native Token | SYNOR | None | FIL | AR |
| Retrieval Speed | Fast (cached) | Variable | Slow | Moderate |
| Smart Contracts | Yes (L1) | No | Limited | SmartWeave |
| L1 Integration | Native | No | Separate | Separate |
| Cost Model | Pay per time | Free* | Market | One-time |
---
*Last Updated: January 10, 2026*