Adds unit tests covering tensor operations, type enums, client functionality, and serialization for all 12 SDK implementations:

- JavaScript (Vitest): tensor, types, client tests
- Python (pytest): tensor, types, client tests
- Go: standard library tests with httptest
- Flutter (flutter_test): tensor, types tests
- Java (JUnit 5): tensor, types tests
- Rust: embedded module tests
- Ruby (minitest): tensor, types tests
- C# (xUnit): tensor, types tests

Tests cover:

- Tensor creation (zeros, ones, random, randn, eye, arange, linspace)
- Tensor operations (reshape, transpose, indexing)
- Reductions (sum, mean, std, min, max)
- Activations (relu, sigmoid, softmax)
- Serialization/deserialization
- Type enums and configuration
- Client request building
- Error handling
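To illustrate the kind of checks the reduction tests above perform, here is a minimal self-contained sketch in plain Rust. It uses a bare `Vec<f64>` as a stand-in for the SDK's `Tensor` type (an assumption for illustration; the actual tests exercise the SDK's own API):

```rust
// Sketch of the sum/mean/std reduction checks, using Vec<f64>
// in place of the SDK Tensor type (hypothetical stand-in).
fn sum(data: &[f64]) -> f64 {
    data.iter().sum()
}

fn mean(data: &[f64]) -> f64 {
    sum(data) / data.len() as f64
}

// Population standard deviation: sqrt of mean squared deviation.
fn std_dev(data: &[f64]) -> f64 {
    let m = mean(data);
    (data.iter().map(|x| (x - m).powi(2)).sum::<f64>() / data.len() as f64).sqrt()
}

fn main() {
    let data = vec![1.0, 2.0, 3.0, 4.0];
    assert_eq!(sum(&data), 10.0);
    assert_eq!(mean(&data), 2.5);
    // variance = 1.25, so std = sqrt(1.25)
    assert!((std_dev(&data) - 1.25_f64.sqrt()).abs() < 1e-12);
    println!("reduction checks passed");
}
```

Whether a given SDK uses population or sample standard deviation is one of the behaviors such tests pin down.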
60 lines
1.6 KiB
Rust
//! Synor Compute SDK for Rust
//!
//! Access distributed heterogeneous compute resources (CPU, GPU, TPU, NPU, LPU, FPGA, DSP)
//! for AI/ML workloads at 90% cost reduction compared to traditional cloud.
//!
//! # Quick Start
//!
//! ```rust,no_run
//! use synor_compute::{SynorCompute, Tensor, ProcessorType, Precision};
//!
//! #[tokio::main]
//! async fn main() -> Result<(), Box<dyn std::error::Error>> {
//!     // Create client
//!     let client = SynorCompute::new("your-api-key");
//!
//!     // Matrix multiplication on GPU
//!     let a = Tensor::rand(&[512, 512]);
//!     let b = Tensor::rand(&[512, 512]);
//!     let result = client.matmul(&a, &b)
//!         .precision(Precision::FP16)
//!         .processor(ProcessorType::GPU)
//!         .send()
//!         .await?;
//!
//!     if result.is_success() {
//!         println!("Time: {}ms", result.execution_time_ms.unwrap_or(0));
//!     }
//!
//!     // LLM inference
//!     let response = client.inference("llama-3-70b", "Explain quantum computing")
//!         .send()
//!         .await?;
//!     println!("{}", response.result.unwrap_or_default());
//!
//!     // Streaming inference
//!     use futures::StreamExt;
//!     let mut stream = client.inference_stream("llama-3-70b", "Write a poem").await?;
//!     while let Some(token) = stream.next().await {
//!         print!("{}", token?);
//!     }
//!
//!     Ok(())
//! }
//! ```

mod types;
mod tensor;
mod client;
mod error;

#[cfg(test)]
mod tests;

pub use types::*;
pub use tensor::Tensor;
pub use client::SynorCompute;
pub use error::{Error, Result};

/// SDK version
pub const VERSION: &str = "0.1.0";