Complete implementation of the Synor Storage Layer (L2) for decentralized content storage. This enables permanent, censorship-resistant storage of any file type, including Next.js apps, Flutter apps, and arbitrary data.

Core modules:
- cid.rs: Content addressing with Blake3/SHA256 hashing (synor1... format)
- chunker.rs: File chunking for parallel upload/download (1MB chunks)
- erasure.rs: Reed-Solomon erasure coding (10+4 shards) for fault tolerance
- proof.rs: Storage proofs with Merkle trees for verification
- deal.rs: Storage deals and market economics (3 pricing tiers)

Infrastructure:
- node/: Storage node service with P2P networking and local storage
- gateway/: HTTP gateway for browser access with LRU caching
- Docker deployment with nginx load balancer

Architecture:
- Operates as L2 alongside the Synor L1 blockchain
- Storage proofs verified on-chain for reward distribution
- Can lose 4 shards per chunk and still recover data
- Gateway URLs: /synor1<cid> for content access

All 28 unit tests passing.
Synor Storage Layer Architecture
Decentralized storage layer for the Synor ecosystem
Overview
Synor Storage is a Layer 2 decentralized storage network that enables permanent, censorship-resistant storage of any file type. It operates alongside the Synor blockchain (L1) for payments and proofs.
┌─────────────────────────────────────────────────────────────────────────┐
│ USER APPLICATIONS │
│ Next.js │ React │ Flutter │ Mobile Apps │ OS Images │ Any │
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR GATEWAY LAYER │
│ HTTP Gateway │ IPFS Bridge │ S3-Compatible API │ CLI/SDK │
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR STORAGE (L2) │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ Content │ │ Erasure │ │ Storage │ │ Retrieval │ │
│ │ Addressing │ │ Coding │ │ Proofs │ │ Market │ │
│ │ (CID/Hash) │ │ (Reed-Sol) │ │ (PoST) │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ STORAGE NODE NETWORK │ │
│ │ Node 1 │ Node 2 │ Node 3 │ Node 4 │ ... │ Node N │ │
│ └─────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
│
▼ (Proofs & Payments)
┌─────────────────────────────────────────────────────────────────────────┐
│ SYNOR BLOCKCHAIN (L1) │
│ Transactions │ Smart Contracts │ Storage Registry │ Payments │
└─────────────────────────────────────────────────────────────────────────┘
Core Components
1. Content Addressing (CID)
Every file is identified by its cryptographic hash, not location.
// Content Identifier structure
pub struct ContentId {
/// Hash algorithm (0x12 = SHA2-256, 0x1B = Keccak-256, 0x27 = Blake3)
pub hash_type: u8,
/// Hash digest
pub digest: [u8; 32],
/// Content size in bytes
pub size: u64,
}
impl ContentId {
/// Create CID from file content
pub fn from_content(data: &[u8]) -> Self {
let digest = blake3::hash(data);
Self {
hash_type: 0x27, // Blake3
digest: *digest.as_bytes(),
size: data.len() as u64,
}
}
/// Encode as string (base58)
pub fn to_string(&self) -> String {
// Format: synor1<base58(hash_type + digest)>
let mut bytes = vec![self.hash_type];
bytes.extend_from_slice(&self.digest);
format!("synor1{}", bs58::encode(&bytes).into_string())
}
}
Example CIDs:
synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG (32KB file)
synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF (4GB file)
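The `bs58` crate handles the encoding in `to_string` above. For illustration, here is a std-only sketch of the same base58 scheme (Bitcoin alphabet, leading zero bytes mapped to `'1'`); the `synor1` prefix is then prepended to the encoded bytes:

```rust
/// Bitcoin-style base58 alphabet (no 0, O, I, l), as used by bs58.
const ALPHABET: &[u8] = b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

/// Encode bytes as base58 (std-only sketch of what bs58::encode does).
fn base58_encode(input: &[u8]) -> String {
    // Leading zero bytes each map to the '1' character.
    let zeros = input.iter().take_while(|&&b| b == 0).count();
    // Big-number base conversion: digits stored least-significant first.
    let mut digits: Vec<u8> = Vec::new();
    for &byte in &input[zeros..] {
        let mut carry = byte as u32;
        for d in digits.iter_mut() {
            carry += (*d as u32) << 8;
            *d = (carry % 58) as u8;
            carry /= 58;
        }
        while carry > 0 {
            digits.push((carry % 58) as u8);
            carry /= 58;
        }
    }
    let mut out = String::with_capacity(zeros + digits.len());
    out.extend(std::iter::repeat('1').take(zeros));
    out.extend(digits.iter().rev().map(|&d| ALPHABET[d as usize] as char));
    out
}

fn main() {
    assert_eq!(base58_encode(b"hello"), "Cn8eVZg");
    // A full CID string would be: format!("synor1{}", base58_encode(&bytes))
    println!("{}", base58_encode(b"hello"));
}
```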
2. File Chunking & Erasure Coding
Large files are split into chunks with redundancy:
/// Chunk configuration
pub const CHUNK_SIZE: usize = 1024 * 1024; // 1MB chunks
pub const DATA_SHARDS: usize = 10; // Original data pieces
pub const PARITY_SHARDS: usize = 4; // Redundancy pieces
pub const TOTAL_SHARDS: usize = 14; // Total pieces
pub const REPLICATION_FACTOR: usize = 3; // Copies per shard
/// A file is broken into chunks, each chunk is erasure-coded
pub struct StoredFile {
pub cid: ContentId,
pub chunks: Vec<Chunk>,
pub metadata: FileMetadata,
}
pub struct Chunk {
pub index: u32,
pub shards: Vec<Shard>, // 14 shards per chunk
}
pub struct Shard {
pub index: u8,
pub hash: [u8; 32],
pub locations: Vec<NodeId>, // Which nodes store this shard
}
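The splitting step in chunker.rs can be sketched with std alone (the real chunker then erasure-codes each chunk into 14 shards):

```rust
const CHUNK_SIZE: usize = 1024 * 1024; // 1 MB, matching the constant above

/// Split file bytes into fixed-size chunks; the last chunk may be short.
fn split_into_chunks(data: &[u8]) -> Vec<Vec<u8>> {
    data.chunks(CHUNK_SIZE).map(|c| c.to_vec()).collect()
}

fn main() {
    // A file slightly over 2 MB yields three chunks, the last one partial.
    let file = vec![0u8; 2 * CHUNK_SIZE + 512];
    let chunks = split_into_chunks(&file);
    assert_eq!(chunks.len(), 3);
    assert_eq!(chunks[2].len(), 512);
}
```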
Fault Tolerance:
- A file survives the loss of any 4 of the 14 shards per chunk
- With 3x replication (42 copies per chunk), any 14 copies can be lost: destroying 5 distinct shards outright would require losing at least 15 copies
- The network can lose ~30% of its nodes and still recover all data
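The recovery rule above can be checked mechanically: a chunk is reconstructible iff at least DATA_SHARDS of its 14 shards still have at least one live replica. A minimal sketch:

```rust
const DATA_SHARDS: usize = 10;
const TOTAL_SHARDS: usize = 14;

/// surviving_copies[i] = number of live replicas of shard i (0..=3).
/// Reed-Solomon can reconstruct from any DATA_SHARDS distinct shards.
fn chunk_recoverable(surviving_copies: &[u8; TOTAL_SHARDS]) -> bool {
    let live_shards = surviving_copies.iter().filter(|&&n| n > 0).count();
    live_shards >= DATA_SHARDS
}

fn main() {
    // Lose all 3 copies of 4 shards: still recoverable (10 shards remain).
    let mut copies = [3u8; TOTAL_SHARDS];
    for i in 0..4 {
        copies[i] = 0;
    }
    assert!(chunk_recoverable(&copies));
    // Lose a 5th shard entirely: reconstruction fails.
    copies[4] = 0;
    assert!(!chunk_recoverable(&copies));
}
```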
3. Storage Proofs (Proof of Spacetime)
Storage nodes must prove they're actually storing data:
/// Proof of Spacetime - proves storage over time
pub struct StorageProof {
/// Node proving storage
pub node_id: NodeId,
/// Content being proven
pub cid: ContentId,
/// Proof challenge (random seed from L1 block)
pub challenge: [u8; 32],
/// Merkle proof of random chunk
pub merkle_proof: Vec<[u8; 32]>,
/// Timestamp
pub timestamp: u64,
/// Signature
pub signature: Signature,
}
/// Verification challenge from L1
pub struct Challenge {
/// Block hash used as randomness source
pub block_hash: [u8; 32],
/// Selected chunk index
pub chunk_index: u32,
/// Selected byte range within chunk
pub byte_range: (u64, u64),
}
Proof Flow:
- L1 contract emits challenge every epoch (e.g., every 30 minutes)
- Storage nodes submit proofs for their stored data
- Failed proofs result in slashing (loss of staked SYNOR)
- Successful proofs earn storage rewards
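The challenge/verify steps above can be sketched as follows. This is a simplified illustration, not the proof.rs implementation: std's `DefaultHasher` stands in for Blake3/SHA-256, and the challenge derivation (low bytes of the block hash mod chunk count) is an assumed scheme:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Stand-in hash; the real proof.rs would use Blake3 or SHA-256.
fn h(data: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    hasher.finish()
}

/// Derive the challenged chunk index from the L1 block hash (assumed scheme).
fn challenged_chunk(block_hash: &[u8; 32], num_chunks: u32) -> u32 {
    let n = u32::from_le_bytes([block_hash[0], block_hash[1], block_hash[2], block_hash[3]]);
    n % num_chunks
}

/// Verify a Merkle inclusion proof: fold the leaf hash with each sibling,
/// using the index's low bit to pick left/right at each level.
fn verify_merkle(leaf: &[u8], mut index: u32, siblings: &[u64], root: u64) -> bool {
    let mut acc = h(leaf);
    for &sib in siblings {
        let (l, r) = if index & 1 == 0 { (acc, sib) } else { (sib, acc) };
        let mut combined = Vec::with_capacity(16);
        combined.extend_from_slice(&l.to_le_bytes());
        combined.extend_from_slice(&r.to_le_bytes());
        acc = h(&combined);
        index >>= 1;
    }
    acc == root
}

fn main() {
    assert!(challenged_chunk(&[7u8; 32], 100) < 100);
    // Build a tiny 2-leaf tree and verify leaf 0 against its root.
    let (h0, h1) = (h(b"shard-0"), h(b"shard-1"));
    let mut combined = Vec::new();
    combined.extend_from_slice(&h0.to_le_bytes());
    combined.extend_from_slice(&h1.to_le_bytes());
    let root = h(&combined);
    assert!(verify_merkle(b"shard-0", 0, &[h1], root));
    assert!(!verify_merkle(b"tampered", 0, &[h1], root));
}
```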
4. Storage Deals & Economics
/// Storage deal between user and network
pub struct StorageDeal {
/// Unique deal ID
pub deal_id: [u8; 32],
/// Content being stored
pub cid: ContentId,
/// Client paying for storage
pub client: Address,
/// Storage duration (blocks or seconds)
pub duration: u64,
/// Price per byte per epoch (in SYNOR tokens)
pub price_per_byte: u64,
/// Total collateral locked
pub collateral: u64,
/// Start time
pub start_block: u64,
/// Deal status
pub status: DealStatus,
}
pub enum DealStatus {
Pending, // Awaiting storage node acceptance
Active, // Being stored
Completed, // Duration finished
Failed, // Node failed proofs
}
Pricing Model:
Base Price: 0.0001 SYNOR per MB per month
Example Costs:
- 1 GB for 1 year: ~1.2 SYNOR
- 100 GB for 1 year: ~120 SYNOR
- 1 TB for 1 year: ~1,200 SYNOR
Permanent Storage (one-time fee):
- Priced as 20 years of storage paid up front: 240x the monthly cost
- 1 GB permanent: ~24 SYNOR
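The tier arithmetic above can be checked in integer math. A sketch working in micro-SYNOR (1e-6 SYNOR) to avoid floats, assuming 1 GB = 1024 MB and the 0.0001 SYNOR/MB/month base price:

```rust
/// Base price in micro-SYNOR per MB per month: 0.0001 SYNOR = 100 micro.
const PRICE_MICRO_PER_MB_MONTH: u64 = 100;

fn monthly_cost_micro(size_mb: u64) -> u64 {
    size_mb * PRICE_MICRO_PER_MB_MONTH
}

/// Permanent storage = 20 years (240 months) of fees paid once.
fn permanent_cost_micro(size_mb: u64) -> u64 {
    monthly_cost_micro(size_mb) * 240
}

fn main() {
    let gb = 1024; // MB
    // 1 GB for 1 year: 1024 * 100 * 12 = 1_228_800 micro, i.e. ~1.2 SYNOR.
    assert_eq!(monthly_cost_micro(gb) * 12, 1_228_800);
    // 1 GB permanent: 1024 * 100 * 240 = 24_576_000 micro, i.e. ~24.6 SYNOR.
    assert_eq!(permanent_cost_micro(gb), 24_576_000);
}
```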
Storage Node Architecture
/// Storage node configuration
pub struct StorageNodeConfig {
/// Node identity
pub node_id: NodeId,
/// Storage capacity offered (bytes)
pub capacity: u64,
/// Stake amount (SYNOR tokens)
pub stake: u64,
/// Price per byte per epoch
pub price: u64,
/// Supported regions
pub regions: Vec<Region>,
/// L1 RPC endpoint for proofs
pub l1_rpc: String,
/// P2P listen address
pub p2p_addr: String,
/// HTTP gateway address
pub gateway_addr: Option<String>,
}
/// Storage node state
pub struct StorageNode {
/// Configuration
pub config: StorageNodeConfig,
/// Local storage backend
pub storage: Box<dyn StorageBackend>,
/// Active deals
pub deals: HashMap<DealId, StorageDeal>,
/// Peer connections
pub peers: HashMap<NodeId, PeerConnection>,
/// L1 client for submitting proofs
pub l1_client: L1Client,
}
/// Pluggable storage backends
pub trait StorageBackend: Send + Sync {
fn store(&mut self, key: &[u8; 32], data: &[u8]) -> Result<()>;
fn retrieve(&self, key: &[u8; 32]) -> Result<Vec<u8>>;
fn delete(&mut self, key: &[u8; 32]) -> Result<()>;
fn capacity(&self) -> u64;
fn used(&self) -> u64;
}
/// Backend implementations
pub struct FileSystemBackend { root: PathBuf }
pub struct RocksDbBackend { db: rocksdb::DB }
pub struct S3Backend { client: aws_sdk_s3::Client, bucket: String }
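Alongside those backends, an in-memory implementation of the trait is handy for unit-testing node logic. A self-contained sketch (the `Result` alias is simplified to a String error for illustration):

```rust
use std::collections::HashMap;

type Result<T> = std::result::Result<T, String>;

pub trait StorageBackend: Send + Sync {
    fn store(&mut self, key: &[u8; 32], data: &[u8]) -> Result<()>;
    fn retrieve(&self, key: &[u8; 32]) -> Result<Vec<u8>>;
    fn delete(&mut self, key: &[u8; 32]) -> Result<()>;
    fn capacity(&self) -> u64;
    fn used(&self) -> u64;
}

/// In-memory backend: shard bytes keyed by their 32-byte hash.
pub struct MemoryBackend {
    capacity: u64,
    blobs: HashMap<[u8; 32], Vec<u8>>,
}

impl MemoryBackend {
    pub fn new(capacity: u64) -> Self {
        Self { capacity, blobs: HashMap::new() }
    }
}

impl StorageBackend for MemoryBackend {
    fn store(&mut self, key: &[u8; 32], data: &[u8]) -> Result<()> {
        if self.used() + data.len() as u64 > self.capacity {
            return Err("capacity exceeded".into());
        }
        self.blobs.insert(*key, data.to_vec());
        Ok(())
    }
    fn retrieve(&self, key: &[u8; 32]) -> Result<Vec<u8>> {
        self.blobs.get(key).cloned().ok_or_else(|| "not found".into())
    }
    fn delete(&mut self, key: &[u8; 32]) -> Result<()> {
        self.blobs.remove(key).map(|_| ()).ok_or_else(|| "not found".into())
    }
    fn capacity(&self) -> u64 { self.capacity }
    fn used(&self) -> u64 { self.blobs.values().map(|v| v.len() as u64).sum() }
}

fn main() {
    let mut b = MemoryBackend::new(1024);
    let key = [0u8; 32];
    b.store(&key, b"shard bytes").unwrap();
    assert_eq!(b.retrieve(&key).unwrap(), b"shard bytes");
    assert_eq!(b.used(), 11);
    b.delete(&key).unwrap();
    assert!(b.retrieve(&key).is_err());
}
```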
Gateway Layer
HTTP Gateway
Serves content over HTTP for web browsers:
/// Gateway routes
/// GET /ipfs/{cid} - Retrieve by CID (IPFS compatibility)
/// GET /synor/{cid} - Retrieve by Synor CID
/// GET /{name}.synor - Resolve Synor name to CID
/// POST /upload - Upload file, returns CID
/// GET /pins - List pinned content
/// POST /pin/{cid} - Pin content (prevent garbage collection)
pub struct Gateway {
/// HTTP server
pub server: HttpServer,
/// Connection to storage nodes
pub storage_client: StorageClient,
/// Name resolution
pub name_resolver: NameResolver,
/// Cache for hot content
pub cache: LruCache<ContentId, Bytes>,
}
/// Name resolution (synor names → CID)
pub struct NameResolver {
/// L1 client for on-chain names
pub l1_client: L1Client,
/// Local cache
pub cache: HashMap<String, ContentId>,
}
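The NameResolver's cache-then-L1 path can be sketched with a closure standing in for the on-chain lookup (the name and CID values here are hypothetical, and the real resolver would query L1 via `l1_client`):

```rust
use std::collections::HashMap;

/// Resolve a name via the cache, falling back to `lookup` (stand-in for the
/// NameRegistry query) and caching the answer on a hit.
fn resolve_cached(
    cache: &mut HashMap<String, String>,
    name: &str,
    lookup: impl Fn(&str) -> Option<String>,
) -> Option<String> {
    if let Some(cid) = cache.get(name) {
        return Some(cid.clone());
    }
    let cid = lookup(name)?;
    cache.insert(name.to_string(), cid.clone());
    Some(cid)
}

fn main() {
    let mut cache = HashMap::new();
    // Hypothetical on-chain table with a single registered name.
    let lookup = |name: &str| {
        (name == "myapp.synor").then(|| "synor1QmExampleCid".to_string())
    };
    let cid = resolve_cached(&mut cache, "myapp.synor", lookup);
    assert_eq!(cid.as_deref(), Some("synor1QmExampleCid"));
    // The answer is now cached; unknown names resolve to None.
    assert!(cache.contains_key("myapp.synor"));
    assert_eq!(resolve_cached(&mut cache, "missing.synor", lookup), None);
}
```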
S3-Compatible API
For existing tools and workflows:
Endpoint: https://s3.synor.cc
# AWS CLI compatible
aws s3 --endpoint-url https://s3.synor.cc cp ./myapp s3://mybucket/
aws s3 --endpoint-url https://s3.synor.cc ls s3://mybucket/
# Returns CID as ETag
# Access via: https://gateway.synor.cc/synor/{cid}
IPFS Bridge
Bidirectional bridge with IPFS:
/// Import content from IPFS; the bridge holds handles to an IPFS node
/// and to the Synor storage network
pub async fn import_from_ipfs(
    ipfs_client: &IpfsClient,
    storage_client: &StorageClient,
    ipfs_cid: &str,
) -> Result<ContentId> {
    let data = ipfs_client.get(ipfs_cid).await?;
    let synor_cid = storage_client.store(&data).await?;
    Ok(synor_cid)
}
/// Export content to IPFS
pub async fn export_to_ipfs(
    ipfs_client: &IpfsClient,
    storage_client: &StorageClient,
    synor_cid: &ContentId,
) -> Result<String> {
    let data = storage_client.retrieve(synor_cid).await?;
    let ipfs_cid = ipfs_client.add(&data).await?;
    Ok(ipfs_cid)
}
CLI Commands
# Upload a file
synor storage upload ./myapp.zip
# Output: Uploaded! CID: synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Upload a directory (Next.js build)
synor storage upload ./out --recursive
# Output: Uploaded! CID: synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
# Download content
synor storage download synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG -o ./download.zip
# Pin content (ensure it stays available)
synor storage pin synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG --duration 1y --pay 10
# Register a name
synor names register myapp.synor synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
# Check storage status
synor storage status synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Output:
# CID: synor1QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
# Size: 45.2 MB
# Replicas: 8/10 nodes
# Health: 100%
# Expires: Never (permanent)
# Regions: US-East, EU-West, Asia-Pacific
# Run a storage node
synor storage node --capacity 100GB --stake 1000 --port 4001
Deployment Types
1. Static Web Apps (Next.js, React, Vue, etc.)
# Build and deploy
npm run build
synor storage upload ./dist --recursive
synor names register myapp.synor $CID
# Access via: https://myapp.synor.cc
2. Mobile Apps (Flutter, React Native)
# Build APK/IPA
flutter build apk
synor storage upload ./build/app/outputs/flutter-apk/app-release.apk
# Users download from: https://gateway.synor.cc/synor/{cid}
# Or via app store with blockchain verification
3. Desktop Apps (Tauri, Electron)
# Build for all platforms
npm run tauri build
# Upload each platform
synor storage upload ./target/release/bundle/macos/MyApp.app.tar.gz
synor storage upload ./target/release/bundle/windows/MyApp.exe
synor storage upload ./target/release/bundle/linux/myapp.deb
# Register versioned names
synor names register myapp-macos.synor $CID_MACOS
synor names register myapp-windows.synor $CID_WINDOWS
synor names register myapp-linux.synor $CID_LINUX
4. Docker Images
# Export and upload
docker save myapp:latest | gzip > myapp-latest.tar.gz
synor storage upload ./myapp-latest.tar.gz
# Pull from Synor (requires synor-docker plugin)
docker pull synor://synor1QmXgm5QVTy8pRtKrTPmoA8gQ3rFvasewbZMCAdudAiDuF
5. Operating System Images
# Upload ISO
synor storage upload ./ubuntu-24.04-desktop-amd64.iso
# Output:
# CID: synor1QmVeryLongCIDForLargeFile...
# Size: 4.7 GB
# Cost: ~5.6 SYNOR/year (~115 SYNOR for permanent storage)
#
# Download URL: https://gateway.synor.cc/synor/synor1QmVery...
# Direct torrent: synor://synor1QmVery...
Note on OS Execution: Storage layer STORES the OS image. To RUN it, you need:
- Download and boot on your hardware, OR
- Use future Synor Compute layer for cloud VMs
Smart Contracts for Storage
StorageRegistry Contract
/// L1 contract for storage deals
contract StorageRegistry {
/// Create a new storage deal
fn create_deal(
cid: ContentId,
duration: u64,
replication: u8,
) -> DealId;
/// Storage node accepts a deal
fn accept_deal(deal_id: DealId);
/// Submit storage proof
fn submit_proof(proof: StorageProof) -> bool;
/// Claim rewards for successful storage
fn claim_rewards(deal_id: DealId, proofs: Vec<ProofId>);
/// Slash node for failed proof
fn slash(node_id: NodeId, deal_id: DealId);
/// Extend deal duration
fn extend_deal(deal_id: DealId, additional_duration: u64);
}
NameRegistry Contract
/// L1 contract for Synor names
contract NameRegistry {
/// Register a name (e.g., myapp.synor)
fn register(name: String, cid: ContentId) -> bool;
/// Update name to point to new CID
fn update(name: String, new_cid: ContentId);
/// Transfer name ownership
fn transfer(name: String, new_owner: Address);
/// Resolve name to CID
fn resolve(name: String) -> Option<ContentId>;
/// Reverse lookup: CID to names
fn reverse(cid: ContentId) -> Vec<String>;
}
Network Topology
┌─────────────────┐
│ L1 Blockchain │
│ (Proofs/Payments)│
└────────┬────────┘
│
┌──────────────┼──────────────┐
│ │ │
▼ ▼ ▼
┌───────────┐ ┌───────────┐ ┌───────────┐
│ Gateway │ │ Gateway │ │ Gateway │
│ US-East │ │ EU-West │ │ Asia-Pac │
└─────┬─────┘ └─────┬─────┘ └─────┬─────┘
│ │ │
┌────────┴────────┬─────┴─────┬────────┴────────┐
│ │ │ │
▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Storage │◄───►│ Storage │◄───►│ Storage │◄───►│ Storage │
│ Node 1 │ │ Node 2 │ │ Node 3 │ │ Node N │
│ (1 TB) │ │ (500GB) │ │ (2 TB) │ │ (10 TB) │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
│ │ │ │
└───────────────┴───────────┴────────────────┘
P2P Network (libp2p / QUIC)
Implementation Phases
Phase 1: Core Storage (4-6 weeks)
- Content addressing (CID generation)
- File chunking & reassembly
- Basic storage node
- P2P network (libp2p)
- CLI upload/download
Phase 2: Proofs & Economics (4-6 weeks)
- Storage proof generation
- L1 StorageRegistry contract
- Deal creation & management
- Reward distribution
- Slashing mechanism
Phase 3: Gateway & Access (2-3 weeks)
- HTTP gateway
- Name resolution (*.synor)
- S3-compatible API
- IPFS bridge
Phase 4: Production Hardening (2-3 weeks)
- Erasure coding
- Geographic distribution
- Cache layer
- Monitoring & metrics
- Auto-scaling
Directory Structure
crates/
├── synor-storage/ # Core storage library
│ ├── src/
│ │ ├── lib.rs
│ │ ├── cid.rs # Content addressing
│ │ ├── chunker.rs # File chunking
│ │ ├── erasure.rs # Reed-Solomon coding
│ │ ├── proof.rs # Storage proofs
│ │ └── deal.rs # Storage deals
│ └── Cargo.toml
│
├── synor-storage-node/ # Storage node binary
│ ├── src/
│ │ ├── main.rs
│ │ ├── server.rs # P2P server
│ │ ├── backend.rs # Storage backends
│ │ └── prover.rs # Proof generation
│ └── Cargo.toml
│
├── synor-gateway/ # HTTP gateway
│ ├── src/
│ │ ├── main.rs
│ │ ├── routes.rs # HTTP endpoints
│ │ ├── resolver.rs # Name resolution
│ │ └── cache.rs # Content cache
│ └── Cargo.toml
│
└── synor-storage-contracts/ # L1 smart contracts
├── storage_registry.rs
└── name_registry.rs
Comparison with Other Storage Networks
| Feature | Synor Storage | IPFS | Filecoin | Arweave |
|---|---|---|---|---|
| Persistence | Paid deals | No guarantee | Paid deals | Permanent |
| Consensus | PoST + L1 | None | PoST | SPoRA |
| Native Token | SYNOR | None | FIL | AR |
| Retrieval Speed | Fast (cached) | Variable | Slow | Moderate |
| Smart Contracts | Yes (L1) | No | Limited | SmartWeave |
| L1 Integration | Native | No | Separate | Separate |
| Cost Model | Pay per time | Free* | Market | One-time |

*IPFS itself is free, but content persists only while at least one node pins it.
Last Updated: January 10, 2026