docs(sdk): add comprehensive documentation for all 12 SDKs

Add README.md documentation for:
- Main SDK overview with quick start guides
- JavaScript/TypeScript SDK
- Python SDK
- Go SDK
- Rust SDK
- Java SDK
- Kotlin SDK
- Swift SDK
- Flutter/Dart SDK
- C SDK
- C++ SDK
- C#/.NET SDK
- Ruby SDK

Each README includes:
- Installation instructions
- Quick start examples
- Tensor operations
- Matrix operations (matmul, conv2d, attention)
- LLM inference (single and streaming)
- Configuration options
- Error handling
- Type definitions
Author: Gulshan Yadav
Date: 2026-01-11 18:05:03 +05:30
Parent: e2a3b66123
Commit: 162227dc71
13 changed files, 2820 insertions(+), 0 deletions(-)

`sdk/README.md` (new file, 166 lines)

# Synor Compute SDKs
Access distributed heterogeneous compute resources (CPU, GPU, TPU, NPU, LPU, FPGA, DSP, WebGPU, WASM) at 90% cost reduction compared to traditional cloud.
## Available SDKs
| Language | Package | Status |
|----------|---------|--------|
| [JavaScript/TypeScript](./js) | `synor-compute` | Production |
| [Python](./python) | `synor-compute` | Production |
| [Go](./go) | `github.com/synor/compute-sdk-go` | Production |
| [Flutter/Dart](./flutter) | `synor_compute` | Production |
| [Java](./java) | `io.synor:compute-sdk` | Production |
| [Kotlin](./kotlin) | `io.synor:compute-sdk-kotlin` | Production |
| [Swift](./swift) | `SynorCompute` | Production |
| [Rust](./rust) | `synor-compute` | Production |
| [C](./c) | `libsynor-compute` | Production |
| [C++](./cpp) | `synor-compute` | Production |
| [C#/.NET](./csharp) | `SynorCompute` | Production |
| [Ruby](./ruby) | `synor_compute` | Production |
## Features
- **Matrix Operations**: MatMul, Conv2D, Pooling, BatchNorm
- **AI/ML**: Flash Attention, FFT, Inference (LLMs, Vision, Embeddings)
- **Multi-Precision**: FP64, FP32, FP16, BF16, INT8, INT4
- **Automatic Routing**: Cost, Speed, Energy, or Balanced optimization
- **Streaming**: SSE-based streaming for LLM inference
- **Job Management**: Async job submission with status polling
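
The streaming bullet above uses standard Server-Sent Events framing. As an illustration only (this is generic SSE handling, not SDK code), extracting payloads from an SSE buffer comes down to filtering `data:` lines as the SSE spec defines them:

```typescript
// Sketch: pull payloads out of a Server-Sent Events buffer.
// Assumes the standard `data: <payload>` framing; the exact event
// shape the Synor API emits is not documented here.
function parseSseData(buffer: string): string[] {
  return buffer
    .split('\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => line.slice('data: '.length));
}

// parseSseData('data: Hello\n\ndata: world\n\n') → ['Hello', 'world']
```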
## Quick Start
### JavaScript/TypeScript
```typescript
import { SynorCompute, Tensor } from 'synor-compute';

const client = new SynorCompute('your-api-key');

// Matrix multiplication
const a = Tensor.random([512, 512]);
const b = Tensor.random([512, 512]);
const result = await client.matmul(a, b, {
  precision: 'fp16',
  processor: 'gpu'
});

// LLM inference with streaming
for await (const chunk of client.inferenceStream('llama-3-70b', 'Explain quantum computing')) {
  process.stdout.write(chunk);
}
```
### Python
```python
import asyncio
from synor_compute import SynorCompute, Tensor

async def main():
    client = SynorCompute('your-api-key')

    # Matrix multiplication
    a = Tensor.random((512, 512))
    b = Tensor.random((512, 512))
    result = await client.matmul(a, b, precision='fp16', processor='gpu')

    # LLM inference with streaming
    async for chunk in client.inference_stream('llama-3-70b', 'Explain quantum computing'):
        print(chunk, end='')

asyncio.run(main())
```
### Go
```go
import synor "github.com/synor/compute-sdk-go"

client := synor.NewClient("your-api-key")
ctx := context.Background()

// Matrix multiplication
result, err := client.MatMul(ctx, a, b, synor.WithPrecision(synor.FP16))

// LLM inference
response, err := client.Inference(ctx, "llama-3-70b", prompt)
```
### Rust
```rust
use synor_compute::{SynorCompute, Tensor, Precision, ProcessorType};
use futures_util::StreamExt; // brings `.next()` into scope for the stream

let client = SynorCompute::new("your-api-key");

// Matrix multiplication
let result = client.matmul(&a, &b)
    .precision(Precision::FP16)
    .processor(ProcessorType::GPU)
    .send()
    .await?;

// LLM inference with streaming
let mut stream = client.inference_stream("llama-3-70b", prompt).await?;
while let Some(token) = stream.next().await {
    print!("{}", token?);
}
```
## API Endpoints
All SDKs connect to the Synor Compute API:
- **Production**: `https://api.synor.io/compute/v1`
- **Local (Docker)**: `http://localhost:17250`
## Processor Types
| Type | Description |
|------|-------------|
| `cpu` | General-purpose CPU computation |
| `gpu` | NVIDIA/AMD GPU acceleration |
| `tpu` | Google TPU for ML workloads |
| `npu` | Neural Processing Units |
| `lpu` | Language Processing Units (Groq) |
| `fpga` | Field-Programmable Gate Arrays |
| `dsp` | Digital Signal Processors |
| `webgpu` | Browser-based GPU |
| `wasm` | WebAssembly runtime |
| `auto` | Automatic selection (default) |
## Precision Levels
| Level | Bits | Use Case |
|-------|------|----------|
| `fp64` | 64 | Scientific computing |
| `fp32` | 32 | General purpose (default) |
| `fp16` | 16 | AI/ML training |
| `bf16` | 16 | Large language models |
| `int8` | 8 | Quantized inference |
| `int4` | 4 | Extreme quantization |
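
Each step down the table halves the memory per element, which is often the deciding factor when choosing a level. A quick footprint sketch (plain arithmetic, not an SDK call):

```typescript
// Bytes needed to store a tensor at a given precision (bits / 8 per element).
const PRECISION_BITS: Record<string, number> = {
  fp64: 64, fp32: 32, fp16: 16, bf16: 16, int8: 8, int4: 4,
};

function tensorBytes(shape: number[], precision: string): number {
  const elements = shape.reduce((n, dim) => n * dim, 1);
  return (elements * PRECISION_BITS[precision]) / 8;
}

// A 512x512 matrix: 2 MiB at fp64, 512 KiB at fp16, 128 KiB at int4.
```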
## Balancing Strategies
| Strategy | Priority |
|----------|----------|
| `speed` | Minimize latency |
| `cost` | Minimize cost |
| `energy` | Minimize carbon footprint |
| `latency` | Real-time requirements |
| `balanced` | Optimal tradeoff (default) |
## Local Development with Docker
Deploy the compute infrastructure locally:
```bash
cd /path/to/Blockchain.cc
docker-compose -f docker-compose.compute.yml up -d
```
Services available:
- **Compute API**: `http://localhost:17250`
- **CPU Workers**: `http://localhost:17260-17261`
- **WASM Worker**: `http://localhost:17262`
- **Spot Market**: `http://localhost:17270`
- **Redis**: `localhost:17280`
- **Prometheus**: `http://localhost:17290`
## License
MIT License - see individual SDK packages for details.

`sdk/c/README.md` (new file, 258 lines)

# Synor Compute SDK for C
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### Using CMake
```cmake
find_package(SynorCompute REQUIRED)
target_link_libraries(your_app PRIVATE synor_compute)
```
### Manual Installation
```bash
git clone https://github.com/synor/compute-sdk-c
cd compute-sdk-c
mkdir build && cd build
cmake ..
make
sudo make install
```
## Quick Start
```c
#include <synor_compute.h>
#include <stdio.h>
int main() {
// Initialize client
synor_client_t* client = synor_client_create("your-api-key");
if (!client) {
fprintf(stderr, "Failed to create client\n");
return 1;
}
// Create tensors
size_t shape[] = {512, 512};
synor_tensor_t* a = synor_tensor_random(shape, 2, SYNOR_FP32);
synor_tensor_t* b = synor_tensor_random(shape, 2, SYNOR_FP32);
// Matrix multiplication on GPU
synor_matmul_options_t opts = {
.precision = SYNOR_FP16,
.processor = SYNOR_PROCESSOR_GPU,
.priority = SYNOR_PRIORITY_NORMAL
};
synor_result_t* result = synor_matmul(client, a, b, &opts);
if (result && result->status == SYNOR_STATUS_COMPLETED) {
printf("Time: %ldms\n", result->execution_time_ms);
printf("Cost: $%.6f\n", result->cost);
}
// Cleanup
synor_result_free(result);
synor_tensor_free(a);
synor_tensor_free(b);
synor_client_free(client);
return 0;
}
```
## Tensor Operations
```c
// Create tensors
size_t shape[] = {3, 3};
synor_tensor_t* zeros = synor_tensor_zeros(shape, 2, SYNOR_FP32);
synor_tensor_t* ones = synor_tensor_ones(shape, 2, SYNOR_FP32);
synor_tensor_t* random = synor_tensor_random(shape, 2, SYNOR_FP32);
// From data
float data[] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
size_t data_shape[] = {2, 3};
synor_tensor_t* tensor = synor_tensor_create(data, 6, data_shape, 2, SYNOR_FP32);
// Get tensor info
size_t ndim = synor_tensor_ndim(tensor);
size_t size = synor_tensor_size(tensor);
const size_t* tensor_shape = synor_tensor_shape(tensor);
// Get data pointer
const float* tensor_data = synor_tensor_data(tensor);
```
## Matrix Operations
```c
// Matrix multiplication
synor_matmul_options_t matmul_opts = {
.precision = SYNOR_FP16,
.processor = SYNOR_PROCESSOR_GPU
};
synor_result_t* result = synor_matmul(client, a, b, &matmul_opts);
// 2D Convolution
synor_conv2d_options_t conv_opts = {
.stride_h = 1, .stride_w = 1,
.padding_h = 1, .padding_w = 1,
.precision = SYNOR_FP32
};
synor_result_t* conv = synor_conv2d(client, input, kernel, &conv_opts);
// Attention
synor_attention_options_t attn_opts = {
.num_heads = 8,
.flash = true,
.precision = SYNOR_FP16
};
synor_result_t* attn = synor_attention(client, query, key, value, &attn_opts);
```
## LLM Inference
```c
// Single response
synor_inference_options_t opts = {
.max_tokens = 512,
.temperature = 0.7f,
.top_p = 0.9f
};
synor_inference_result_t* response = synor_inference(
client, "llama-3-70b", "Explain quantum computing", &opts
);
if (response) {
printf("%s\n", response->result);
synor_inference_result_free(response);
}
// Streaming with callback
void on_chunk(const char* chunk, void* user_data) {
printf("%s", chunk);
fflush(stdout);
}
synor_inference_stream(client, "llama-3-70b", "Write a poem",
&opts, on_chunk, NULL);
```
## Configuration
```c
synor_config_t config = {
.api_key = "your-api-key",
.base_url = "https://api.synor.io/compute/v1",
.default_processor = SYNOR_PROCESSOR_GPU,
.default_precision = SYNOR_FP16,
.timeout_secs = 30,
.debug = false
};
synor_client_t* client = synor_client_create_with_config(&config);
```
## Error Handling
```c
synor_result_t* result = synor_matmul(client, a, b, &opts);
if (!result) {
const char* error = synor_get_last_error();
fprintf(stderr, "Error: %s\n", error);
} else if (result->status == SYNOR_STATUS_FAILED) {
fprintf(stderr, "Job failed: %s\n", result->error_message);
}
// Check for specific errors
synor_error_t err = synor_get_error_code();
switch (err) {
case SYNOR_ERROR_NETWORK:
fprintf(stderr, "Network error\n");
break;
case SYNOR_ERROR_AUTH:
fprintf(stderr, "Authentication failed\n");
break;
case SYNOR_ERROR_INVALID_ARG:
fprintf(stderr, "Invalid argument\n");
break;
}
```
## Types
```c
// Processor types
typedef enum {
SYNOR_PROCESSOR_CPU,
SYNOR_PROCESSOR_GPU,
SYNOR_PROCESSOR_TPU,
SYNOR_PROCESSOR_NPU,
SYNOR_PROCESSOR_LPU,
SYNOR_PROCESSOR_FPGA,
SYNOR_PROCESSOR_AUTO
} synor_processor_t;
// Precision
typedef enum {
SYNOR_FP64,
SYNOR_FP32,
SYNOR_FP16,
SYNOR_BF16,
SYNOR_INT8,
SYNOR_INT4
} synor_precision_t;
// Job status
typedef enum {
SYNOR_STATUS_PENDING,
SYNOR_STATUS_RUNNING,
SYNOR_STATUS_COMPLETED,
SYNOR_STATUS_FAILED,
SYNOR_STATUS_CANCELLED
} synor_status_t;
```
## Memory Management
All Synor objects must be freed:
```c
synor_tensor_free(tensor);
synor_result_free(result);
synor_inference_result_free(response);
synor_client_free(client);
```
## Thread Safety
The client is thread-safe. Each thread can share the same client instance.
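Since a single instance can be shared, the usual pattern is to create the client once and pass the same pointer to every worker thread, joining them all before freeing it. A minimal pthread sketch of that pattern, using a stand-in struct in place of `synor_client_t` so it compiles standalone:

```c
#include <pthread.h>
#include <stddef.h>

/* Stand-in for synor_client_t so this sketch is self-contained. */
typedef struct {
    const char *api_key;
    int jobs_done;
    pthread_mutex_t lock;
} client_t;

static void *worker(void *arg) {
    client_t *client = (client_t *)arg;   /* shared instance, not a copy */
    /* a real worker would call e.g. synor_matmul(client, ...) here */
    pthread_mutex_lock(&client->lock);
    client->jobs_done++;
    pthread_mutex_unlock(&client->lock);
    return NULL;
}

/* Create once, share across threads, join before freeing. */
int run_workers(client_t *client) {
    pthread_t t1, t2;
    pthread_mutex_init(&client->lock, NULL);
    pthread_create(&t1, NULL, worker, client);
    pthread_create(&t2, NULL, worker, client);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&client->lock);
    return client->jobs_done;
}
```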
## Requirements
- C99 or later
- libcurl for HTTP
- OpenSSL for TLS
## Building
```bash
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make
```
## Testing
```bash
cd build
ctest
```
## License
MIT

`sdk/cpp/README.md` (new file, 274 lines)

# Synor Compute SDK for C++
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### Using CMake
```cmake
find_package(SynorCompute REQUIRED)
target_link_libraries(your_app PRIVATE synor::compute)
```
### vcpkg
```bash
vcpkg install synor-compute
```
### Conan
```
[requires]
synor-compute/0.1.0
```
## Quick Start
```cpp
#include <synor/compute.hpp>
#include <iostream>
int main() {
synor::Client client("your-api-key");
// Matrix multiplication on GPU
auto a = synor::Tensor::random({512, 512});
auto b = synor::Tensor::random({512, 512});
auto result = client.matmul(a, b)
.precision(synor::Precision::FP16)
.processor(synor::ProcessorType::GPU)
.execute();
if (result.isSuccess()) {
std::cout << "Time: " << result.executionTimeMs() << "ms\n";
std::cout << "Cost: $" << result.cost() << "\n";
}
return 0;
}
```
## Modern C++ Features
### Auto Type Deduction
```cpp
auto tensor = synor::Tensor::random({10, 10});
auto result = client.matmul(a, b).execute();
```
### Structured Bindings (C++17)
```cpp
auto [success, data, error] = client.matmul(a, b).execute();
if (success) {
std::cout << "Result shape: " << data.shape() << "\n";
}
```
### std::optional Results
```cpp
if (auto time = result.executionTimeMs()) {
std::cout << "Execution time: " << *time << "ms\n";
}
```
## Tensor Operations
```cpp
// Create tensors
auto zeros = synor::Tensor::zeros({3, 3});
auto ones = synor::Tensor::ones({2, 2});
auto random = synor::Tensor::random({10, 10});
auto randn = synor::Tensor::randn({100});
auto eye = synor::Tensor::eye(3);
// From std::vector
std::vector<float> data = {1, 2, 3, 4, 5, 6};
auto tensor = synor::Tensor(data, {2, 3});
// From initializer list
auto tensor2 = synor::Tensor({1.0f, 2.0f, 3.0f}, {3});
// Operations
auto reshaped = tensor.reshape({3, 2});
auto transposed = tensor.transpose();
// Math
float mean = tensor.mean();
float sum = tensor.sum();
float std_dev = tensor.std();
```
## Builder Pattern API
```cpp
// Matrix multiplication
auto result = client.matmul(a, b)
.precision(synor::Precision::FP16)
.processor(synor::ProcessorType::GPU)
.priority(synor::Priority::High)
.strategy(synor::Strategy::Speed)
.execute();
// 2D Convolution
auto conv = client.conv2d(input, kernel)
.stride(1, 1)
.padding(1, 1)
.execute();
// Attention
auto attention = client.attention(query, key, value)
.numHeads(8)
.flash(true)
.execute();
```
## Async API with std::future
```cpp
#include <future>
auto future = client.matmul(a, b)
.precision(synor::Precision::FP16)
.executeAsync();
// Do other work...
auto result = future.get();
```
## LLM Inference
```cpp
// Single response
auto response = client.inference("llama-3-70b", "Explain quantum computing")
.maxTokens(512)
.temperature(0.7)
.execute();
std::cout << response.result().value_or("") << "\n";
// Streaming with callback
client.inferenceStream("llama-3-70b", "Write a poem",
[](std::string_view chunk) {
std::cout << chunk << std::flush;
});
```
## Configuration
```cpp
synor::Config config;
config.apiKey = "your-api-key";
config.baseUrl = "https://api.synor.io/compute/v1";
config.defaultProcessor = synor::ProcessorType::GPU;
config.defaultPrecision = synor::Precision::FP16;
config.timeout = std::chrono::seconds(30);
config.debug = true;
synor::Client client(config);
```
## Error Handling
```cpp
try {
auto result = client.matmul(a, b).execute();
} catch (const synor::ApiError& e) {
std::cerr << "API Error " << e.statusCode() << ": " << e.what() << "\n";
} catch (const synor::NetworkError& e) {
std::cerr << "Network error: " << e.what() << "\n";
} catch (const synor::InvalidArgumentError& e) {
std::cerr << "Invalid argument: " << e.what() << "\n";
}
// Or with std::expected (C++23)
auto result = client.matmul(a, b).tryExecute();
if (result) {
std::cout << "Success!\n";
} else {
std::cerr << "Error: " << result.error().message() << "\n";
}
```
## Types
```cpp
// Processor types
enum class ProcessorType {
CPU, GPU, TPU, NPU, LPU, FPGA, Auto
};
// Precision
enum class Precision {
FP64, FP32, FP16, BF16, INT8, INT4
};
// Job status
enum class JobStatus {
Pending, Running, Completed, Failed, Cancelled
};
```
## RAII Memory Management
All Synor objects use RAII:
```cpp
{
auto tensor = synor::Tensor::random({100, 100});
auto result = client.matmul(tensor, tensor).execute();
} // Automatic cleanup
```
## Move Semantics
Efficient moves for large tensors:
```cpp
auto tensor = synor::Tensor::random({1000, 1000});
auto moved = std::move(tensor); // No copy
```
## Thread Safety
The client is thread-safe. Use shared_ptr for multi-threaded access:
```cpp
auto client = std::make_shared<synor::Client>("your-api-key");
// Give each thread its own copy of the shared_ptr; the underlying client is shared
std::thread t1([client] { client->matmul(a, b).execute(); });
std::thread t2([client] { client->matmul(c, d).execute(); });
t1.join();
t2.join();
```
## Requirements
- C++17 or later
- CMake 3.16+
- libcurl
- nlohmann/json
## Building
```bash
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
cmake --build .
```
## Testing
```bash
cd build
ctest --output-on-failure
```
## License
MIT

`sdk/csharp/README.md` (new file, 250 lines)

# Synor Compute SDK for C#/.NET
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### NuGet
```bash
dotnet add package SynorCompute
```
### Package Manager Console
```powershell
Install-Package SynorCompute
```
## Quick Start
```csharp
using SynorCompute;
var client = new SynorClient("your-api-key");
// Matrix multiplication on GPU
var a = Tensor.Random(512, 512);
var b = Tensor.Random(512, 512);
var result = await client.MatMulAsync(a, b, new MatMulOptions
{
Precision = Precision.FP16,
Processor = ProcessorType.GPU
});
if (result.IsSuccess)
{
Console.WriteLine($"Time: {result.ExecutionTimeMs}ms");
Console.WriteLine($"Cost: ${result.Cost}");
}
```
## Tensor Operations
```csharp
// Create tensors
var zeros = Tensor.Zeros(3, 3);
var ones = Tensor.Ones(2, 2);
var random = Tensor.Random(10, 10);
var randn = Tensor.Randn(100);
var eye = Tensor.Eye(3);
// From array
float[,] data = { { 1, 2, 3 }, { 4, 5, 6 } };
var tensor = Tensor.FromArray(data);
// From 1D with shape
var data1d = new float[] { 1, 2, 3, 4, 5, 6 };
var tensor1d = new Tensor(data1d, new[] { 2, 3 });
// Operations
var reshaped = tensor.Reshape(3, 2);
var transposed = tensor.Transpose();
// Math
var mean = tensor.Mean();
var sum = tensor.Sum();
var std = tensor.Std();
```
## Async/Await API
```csharp
// Matrix multiplication
var result = await client.MatMulAsync(a, b, new MatMulOptions
{
Precision = Precision.FP16,
Processor = ProcessorType.GPU,
Strategy = BalancingStrategy.Speed
});
// 2D Convolution
var conv = await client.Conv2DAsync(input, kernel, new Conv2DOptions
{
Stride = (1, 1),
Padding = (1, 1)
});
// Attention
var attention = await client.AttentionAsync(query, key, value, new AttentionOptions
{
NumHeads = 8,
Flash = true
});
```
## LLM Inference
```csharp
// Single response
var response = await client.InferenceAsync("llama-3-70b", "Explain quantum computing",
new InferenceOptions
{
MaxTokens = 512,
Temperature = 0.7f
});
Console.WriteLine(response.Result);
// Streaming with IAsyncEnumerable
await foreach (var chunk in client.InferenceStreamAsync("llama-3-70b", "Write a poem"))
{
Console.Write(chunk);
}
```
## Configuration
```csharp
var config = new SynorConfig
{
ApiKey = "your-api-key",
BaseUrl = "https://api.synor.io/compute/v1",
DefaultProcessor = ProcessorType.GPU,
DefaultPrecision = Precision.FP16,
Timeout = TimeSpan.FromSeconds(30),
Debug = true
};
var client = new SynorClient(config);
```
## Dependency Injection
```csharp
// In Startup.cs or Program.cs
services.AddSynorCompute(options =>
{
options.ApiKey = Configuration["Synor:ApiKey"];
options.DefaultProcessor = ProcessorType.GPU;
});
// In your service
public class ComputeService
{
private readonly ISynorClient _client;
public ComputeService(ISynorClient client)
{
_client = client;
}
public async Task<Tensor> ComputeAsync(Tensor a, Tensor b)
{
var result = await _client.MatMulAsync(a, b);
return result.Data;
}
}
```
## Error Handling
```csharp
try
{
var result = await client.MatMulAsync(a, b);
}
catch (SynorApiException ex)
{
Console.WriteLine($"API Error {ex.StatusCode}: {ex.Message}");
}
catch (SynorNetworkException ex)
{
Console.WriteLine($"Network error: {ex.Message}");
}
catch (SynorException ex)
{
Console.WriteLine($"Error: {ex.Message}");
}
```
## LINQ Integration
```csharp
// Process multiple tensors
var tensors = new[] { a, b, c, d };
var results = await Task.WhenAll(
tensors.Select(t => client.MatMulAsync(t, identity))
);
```
## Types
```csharp
// Processor types
public enum ProcessorType
{
CPU, GPU, TPU, NPU, LPU, FPGA, Auto
}
// Precision
public enum Precision
{
FP64, FP32, FP16, BF16, INT8, INT4
}
// Job status
public enum JobStatus
{
Pending, Running, Completed, Failed, Cancelled
}
// Balancing strategy
public enum BalancingStrategy
{
Speed, Cost, Energy, Latency, Balanced
}
```
## Cancellation Support
```csharp
var cts = new CancellationTokenSource();
// Cancel after 10 seconds
cts.CancelAfter(TimeSpan.FromSeconds(10));
try
{
var result = await client.MatMulAsync(a, b, cancellationToken: cts.Token);
}
catch (OperationCanceledException)
{
Console.WriteLine("Operation was cancelled");
}
```
## Requirements
- .NET 6.0 or later
- System.Text.Json
## Testing
```bash
dotnet test
```
## License
MIT

`sdk/flutter/README.md` (new file, 249 lines)

# Synor Compute SDK for Flutter/Dart
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
Add to `pubspec.yaml`:
```yaml
dependencies:
synor_compute: ^0.1.0
```
Then run:
```bash
flutter pub get
```
## Quick Start
```dart
import 'package:synor_compute/synor_compute.dart';
void main() async {
final client = SynorCompute('your-api-key');
// Matrix multiplication on GPU
final a = Tensor.random([512, 512]);
final b = Tensor.random([512, 512]);
final result = await client.matmul(a, b,
precision: Precision.fp16,
processor: ProcessorType.gpu,
);
if (result.isSuccess) {
print('Time: ${result.executionTimeMs}ms');
print('Cost: \$${result.cost}');
}
}
```
## Tensor Operations
```dart
// Create tensors
final zeros = Tensor.zeros([3, 3]);
final ones = Tensor.ones([2, 2]);
final random = Tensor.random([10, 10]);
final randn = Tensor.randn([100]);
final eye = Tensor.eye(3);
// From list
final data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
final tensor = Tensor(data, shape: [2, 3]);
// From typed data (efficient)
final float32List = Float32List.fromList(data);
final tensor2 = Tensor.fromTypedData(float32List, shape: [2, 3]);
// Operations
final reshaped = tensor.reshape([3, 2]);
final transposed = tensor.transpose();
// Math
final mean = tensor.mean();
final sum = tensor.sum();
final std = tensor.std();
```
## Matrix Operations
```dart
// Matrix multiplication
final result = await client.matmul(a, b,
precision: Precision.fp16,
processor: ProcessorType.gpu,
strategy: BalancingStrategy.speed,
);
// 2D Convolution
final conv = await client.conv2d(input, kernel,
stride: (1, 1),
padding: (1, 1),
);
// Attention
final attention = await client.attention(query, key, value,
numHeads: 8,
flash: true,
);
```
## LLM Inference
```dart
// Single response
final response = await client.inference(
'llama-3-70b',
'Explain quantum computing',
maxTokens: 512,
temperature: 0.7,
);
print(response.result);
// Streaming
await for (final chunk in client.inferenceStream(
'llama-3-70b',
'Write a poem',
)) {
stdout.write(chunk);
}
```
## Configuration
```dart
final config = SynorConfig(
apiKey: 'your-api-key',
baseUrl: 'https://api.synor.io/compute/v1',
defaultProcessor: ProcessorType.gpu,
defaultPrecision: Precision.fp16,
timeout: Duration(seconds: 30),
debug: true,
);
final client = SynorCompute.withConfig(config);
```
## Flutter Widget Integration
```dart
import 'package:flutter/material.dart';
import 'package:synor_compute/synor_compute.dart';
class ComputeWidget extends StatefulWidget {
@override
State<ComputeWidget> createState() => _ComputeWidgetState();
}
class _ComputeWidgetState extends State<ComputeWidget> {
final client = SynorCompute('your-api-key');
String? result;
bool isLoading = false;
Future<void> compute() async {
setState(() => isLoading = true);
try {
final response = await client.inference(
'llama-3-70b',
'Hello',
);
setState(() => result = response.result);
} catch (e) {
setState(() => result = 'Error: $e');
} finally {
setState(() => isLoading = false);
}
}
@override
Widget build(BuildContext context) {
return Column(
children: [
if (isLoading)
CircularProgressIndicator()
else if (result != null)
Text(result!),
ElevatedButton(
onPressed: compute,
child: Text('Compute'),
),
],
);
}
}
```
## Riverpod Integration
```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
final synorProvider = Provider((ref) => SynorCompute('your-api-key'));
final inferenceProvider = FutureProvider.family<String, String>((ref, prompt) async {
final client = ref.watch(synorProvider);
final result = await client.inference('llama-3-70b', prompt);
return result.result ?? '';
});
```
## Error Handling
```dart
try {
final result = await client.matmul(a, b);
} on SynorException catch (e) {
print('API Error: ${e.message} (${e.statusCode})');
} catch (e) {
print('Unexpected error: $e');
}
```
## Types
```dart
// Processor types
enum ProcessorType {
cpu, gpu, tpu, npu, lpu, fpga, auto
}
// Precision
enum Precision {
fp64, fp32, fp16, bf16, int8, int4
}
// Job status
enum JobStatus {
pending, running, completed, failed, cancelled
}
// Balancing strategy
enum BalancingStrategy {
speed, cost, energy, latency, balanced
}
```
## Platform Support
| Platform | Status |
|----------|--------|
| Android | Supported |
| iOS | Supported |
| Web | Supported |
| macOS | Supported |
| Windows | Supported |
| Linux | Supported |
## Testing
```bash
flutter test
```
## License
MIT

`sdk/go/README.md` (new file, 174 lines)

# Synor Compute SDK for Go
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
```bash
go get github.com/synor/compute-sdk-go
```
## Quick Start
```go
package main
import (
"context"
"fmt"
"log"
synor "github.com/synor/compute-sdk-go"
)
func main() {
client := synor.NewClient("your-api-key")
// Create tensors
    a := synor.Ones([]int{512, 512}, synor.FP32)
b := synor.Zeros([]int{512, 512}, synor.FP32)
// Matrix multiplication
ctx := context.Background()
result, err := client.MatMul(ctx, a, b,
synor.WithPrecision(synor.FP16),
synor.WithProcessor(synor.GPU),
)
if err != nil {
log.Fatal(err)
}
fmt.Printf("Execution time: %.2fms\n", result.Metrics.ExecutionTimeMs)
}
```
## Configuration
```go
config := synor.Config{
APIKey: "your-api-key",
Endpoint: "https://api.synor.io/compute/v1",
Strategy: synor.Balanced,
Precision: synor.FP16,
Timeout: 30 * time.Second,
}
client := synor.NewClientWithConfig(config)
```
## Tensor Operations
```go
// Create tensors
zeros := synor.Zeros([]int{3, 3}, synor.FP32)
ones := synor.Ones([]int{2, 2}, synor.FP32)
// From slice
data := []float32{1, 2, 3, 4, 5, 6}
tensor := synor.NewTensor(data, []int{2, 3}, synor.FP32)
// Serialize for API
serialized := tensor.Serialize()
```
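
`Serialize()` ships the element data as a flat slice. Assuming the usual row-major layout (an assumption for illustration, not something the SDK documents above), the flat position of a multi-dimensional index is computed like this:

```go
// flatIndex maps an n-dimensional index into a flat, row-major slice
// (last dimension varies fastest).
func flatIndex(shape, idx []int) int {
	pos := 0
	for i := range shape {
		pos = pos*shape[i] + idx[i]
	}
	return pos
}

// For shape {2, 3}, index {1, 2} lands at position 1*3 + 2 = 5.
```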
## Matrix Operations
```go
// Matrix multiplication
result, err := client.MatMul(ctx, a, b,
synor.WithPrecision(synor.FP16),
synor.WithProcessor(synor.GPU),
synor.WithStrategy(synor.Speed),
)
// 2D Convolution
conv, err := client.Conv2D(ctx, input, kernel,
synor.WithStride(1, 1),
synor.WithPadding(1, 1),
)
// Attention
attention, err := client.Attention(ctx, query, key, value,
synor.WithNumHeads(8),
synor.WithFlash(true),
)
```
## LLM Inference
```go
// Single response
response, err := client.Inference(ctx, "llama-3-70b", "Explain quantum computing",
synor.WithMaxTokens(512),
synor.WithTemperature(0.7),
)
fmt.Println(response.Result)
// Streaming (using channel)
stream, err := client.InferenceStream(ctx, "llama-3-70b", "Write a poem")
if err != nil {
    log.Fatal(err)
}
for chunk := range stream {
    fmt.Print(chunk)
}
```
## Job Management
```go
// Submit async job
job, err := client.SubmitJob(ctx, "matmul", map[string]interface{}{
"a": a.Serialize(),
"b": b.Serialize(),
})
// Get status
status, err := client.GetJobStatus(ctx, job.ID)
// Cancel
err = client.CancelJob(ctx, job.ID)
```
## Error Handling
```go
result, err := client.MatMul(ctx, a, b)
if err != nil {
if synorErr, ok := err.(*synor.SynorError); ok {
fmt.Printf("API Error: %s (status: %d)\n",
synorErr.Message, synorErr.StatusCode)
}
}
```
## Processor Types
```go
synor.CPU // General-purpose CPU
synor.GPU // NVIDIA/AMD GPU
synor.TPU // Google TPU
synor.NPU // Neural Processing Unit
synor.LPU // Language Processing Unit
synor.FPGA // Field-Programmable Gate Array
synor.WASM // WebAssembly runtime
synor.WebGPU // Browser GPU
```
## Precision Levels
```go
synor.FP64 // 64-bit float
synor.FP32 // 32-bit float (default)
synor.FP16 // 16-bit float
synor.BF16 // Brain float 16
synor.INT8 // 8-bit integer
synor.INT4 // 4-bit integer
```
## Testing
```bash
go test ./...
```
## License
MIT

`sdk/java/README.md` (new file, 195 lines)

# Synor Compute SDK for Java
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### Maven
```xml
<dependency>
<groupId>io.synor</groupId>
<artifactId>compute-sdk</artifactId>
<version>0.1.0</version>
</dependency>
```
### Gradle
```groovy
implementation 'io.synor:compute-sdk:0.1.0'
```
## Quick Start
```java
import io.synor.compute.*;
public class Example {
public static void main(String[] args) {
SynorCompute client = new SynorCompute("your-api-key");
// Matrix multiplication on GPU
Tensor a = Tensor.random(512, 512);
Tensor b = Tensor.random(512, 512);
JobResult<Tensor> result = client.matmul(a, b)
.precision(Precision.FP16)
.processor(ProcessorType.GPU)
.execute();
if (result.isSuccess()) {
System.out.println("Time: " + result.getExecutionTimeMs() + "ms");
System.out.println("Cost: $" + result.getCost());
}
}
}
```
## Tensor Operations
```java
// Create tensors
Tensor zeros = Tensor.zeros(3, 3);
Tensor ones = Tensor.ones(2, 2);
Tensor random = Tensor.random(10, 10);
Tensor randn = Tensor.randn(100);
Tensor eye = Tensor.eye(3);
// From array
double[][] data = {{1, 2, 3}, {4, 5, 6}};
Tensor tensor = Tensor.fromArray(data);
// Operations
Tensor reshaped = tensor.reshape(3, 2);
Tensor transposed = tensor.transpose();
// Math
double mean = tensor.mean();
double sum = tensor.sum();
double std = tensor.std();
```
## Builder Pattern API
```java
// Matrix multiplication
JobResult<Tensor> result = client.matmul(a, b)
.precision(Precision.FP16)
.processor(ProcessorType.GPU)
.priority(Priority.HIGH)
.execute();
// 2D Convolution
JobResult<Tensor> conv = client.conv2d(input, kernel)
.stride(1, 1)
.padding(1, 1)
.execute();
// Attention
JobResult<Tensor> attention = client.attention(query, key, value)
.numHeads(8)
.flash(true)
.execute();
```
## Async API with CompletableFuture
```java
import java.util.concurrent.CompletableFuture;
CompletableFuture<JobResult<Tensor>> future = client.matmul(a, b)
.precision(Precision.FP16)
.executeAsync();
future.thenAccept(result -> {
System.out.println("Completed: " + result.isSuccess());
});
```
## LLM Inference
```java
// Single response
InferenceResult response = client.inference("llama-3-70b", "Explain quantum computing")
.maxTokens(512)
.temperature(0.7)
.execute();
System.out.println(response.getResult());
// Streaming with callback
client.inferenceStream("llama-3-70b", "Write a poem", chunk -> {
System.out.print(chunk);
});
```
## Configuration
```java
SynorConfig config = SynorConfig.builder()
.apiKey("your-api-key")
.baseUrl("https://api.synor.io/compute/v1")
.defaultProcessor(ProcessorType.GPU)
.defaultPrecision(Precision.FP16)
.timeout(Duration.ofSeconds(30))
.debug(true)
.build();
SynorCompute client = new SynorCompute(config);
```
## Error Handling
```java
try {
JobResult<Tensor> result = client.matmul(a, b).execute();
} catch (SynorException e) {
System.err.println("API Error: " + e.getMessage());
System.err.println("Status: " + e.getStatusCode());
}
```
## Enums
```java
// Processor types
ProcessorType.CPU
ProcessorType.GPU
ProcessorType.TPU
ProcessorType.NPU
ProcessorType.LPU
ProcessorType.FPGA
ProcessorType.AUTO
// Precision
Precision.FP64
Precision.FP32
Precision.FP16
Precision.BF16
Precision.INT8
Precision.INT4
// Job status
JobStatus.PENDING
JobStatus.RUNNING
JobStatus.COMPLETED
JobStatus.FAILED
```
## Requirements
- Java 11 or higher
- Gson for JSON serialization
- OkHttp for HTTP client
## Testing
```bash
mvn test
# or
./gradlew test
```
## License
MIT

`sdk/js/README.md` (new file, 159 lines)

# Synor Compute SDK for JavaScript/TypeScript
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
```bash
npm install synor-compute
# or
pnpm add synor-compute
# or
yarn add synor-compute
```
## Quick Start
```typescript
import { SynorCompute, Tensor } from 'synor-compute';
const client = new SynorCompute('your-api-key');
// Matrix multiplication on GPU
const a = Tensor.random([512, 512]);
const b = Tensor.random([512, 512]);
const result = await client.matmul(a, b, {
precision: 'fp16',
processor: 'gpu'
});
console.log(`Execution time: ${result.executionTimeMs}ms`);
console.log(`Cost: $${result.cost}`);
```
## Tensor Operations
```typescript
// Create tensors
const zeros = Tensor.zeros([3, 3]);
const ones = Tensor.ones([2, 2]);
const random = Tensor.random([10, 10]);
const randn = Tensor.randn([100]); // Normal distribution
// From array
const data = Tensor.from([1, 2, 3, 4, 5, 6], [2, 3]);
// Operations
const reshaped = data.reshape([3, 2]);
const transposed = data.transpose();
```
## Matrix Operations
```typescript
// Matrix multiplication
const result = await client.matmul(a, b, {
  precision: 'fp16',
  processor: 'gpu',
  strategy: 'speed'
});
// 2D Convolution
const conv = await client.conv2d(input, kernel, {
  stride: [1, 1],
  padding: [1, 1]
});
// Flash Attention
const attention = await client.attention(query, key, value, {
  numHeads: 8,
  flash: true
});
```
## LLM Inference
```typescript
// Single response
const response = await client.inference('llama-3-70b', 'Explain quantum computing', {
  maxTokens: 512,
  temperature: 0.7
});
console.log(response.result);
// Streaming response
for await (const chunk of client.inferenceStream('llama-3-70b', 'Write a poem')) {
  process.stdout.write(chunk);
}
```
## Configuration
```typescript
const client = new SynorCompute({
  apiKey: 'your-api-key',
  baseUrl: 'https://api.synor.io/compute/v1', // or localhost:17250 for local
  defaultProcessor: 'gpu',
  defaultPrecision: 'fp16',
  defaultStrategy: 'balanced',
  timeout: 30000,
  debug: false
});
```
## Job Management
```typescript
// Submit async job
const job = await client.submitJob('matmul', { a, b });
// Poll for status
const status = await client.getJobStatus(job.jobId);
// Cancel job
await client.cancelJob(job.jobId);
```
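Polling `getJobStatus` until the job reaches a terminal state is a common pattern. A hedged sketch — `waitForJob` below is illustrative, not part of the SDK, and `getStatus` stands in for a bound call such as `() => client.getJobStatus(job.jobId)`:

```typescript
// Illustrative polling helper; not part of the SDK.
type Status = 'pending' | 'running' | 'completed' | 'failed' | 'cancelled';

async function waitForJob(
  getStatus: () => Promise<Status>,
  { intervalMs = 500, timeoutMs = 60_000 } = {}
): Promise<Status> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const status = await getStatus();
    // Stop on any terminal state; keep polling otherwise.
    if (status !== 'pending' && status !== 'running') return status;
    if (Date.now() >= deadline) throw new Error('timed out waiting for job');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The interval and timeout defaults are assumptions; tune them to your job sizes.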
## Error Handling
```typescript
import { SynorError } from 'synor-compute';
try {
  const result = await client.matmul(a, b);
} catch (error) {
  if (error instanceof SynorError) {
    console.error(`API Error: ${error.message} (${error.statusCode})`);
  }
}
```
## TypeScript Support
Full TypeScript support with exported types:
```typescript
import type {
  Tensor,
  ProcessorType,
  Precision,
  BalancingStrategy,
  JobStatus,
  SynorConfig,
  MatMulOptions,
  InferenceOptions,
  JobResult
} from 'synor-compute';
```
## Testing
```bash
npm test
# or
pnpm test
```
## License
MIT

---
`sdk/kotlin/README.md`
# Synor Compute SDK for Kotlin
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### Gradle (Kotlin DSL)
```kotlin
implementation("io.synor:compute-sdk-kotlin:0.1.0")
```
### Gradle (Groovy)
```groovy
implementation 'io.synor:compute-sdk-kotlin:0.1.0'
```
## Quick Start
```kotlin
import io.synor.compute.*
import kotlinx.coroutines.runBlocking
fun main() = runBlocking {
    val client = SynorCompute("your-api-key")
    // Matrix multiplication on GPU
    val a = Tensor.random(512, 512)
    val b = Tensor.random(512, 512)
    val result = client.matmul(a, b) {
        precision = Precision.FP16
        processor = ProcessorType.GPU
    }
    if (result.isSuccess) {
        println("Time: ${result.executionTimeMs}ms")
        println("Cost: $${result.cost}")
    }
}
```
## Kotlin Coroutines Support
```kotlin
// Suspend functions
suspend fun compute() {
    val result = client.matmul(a, b)
    println(result.result)
}
// Flows for streaming (collect from inside a coroutine)
client.inferenceStream("llama-3-70b", "Write a poem")
    .collect { chunk ->
        print(chunk)
    }
```
## Tensor Operations
```kotlin
// Create tensors
val zeros = Tensor.zeros(3, 3)
val ones = Tensor.ones(2, 2)
val random = Tensor.random(10, 10)
val randn = Tensor.randn(100)
val eye = Tensor.eye(3)
// From array
val data = arrayOf(
    floatArrayOf(1f, 2f, 3f),
    floatArrayOf(4f, 5f, 6f)
)
val tensor = Tensor.from(data)
// Operations
val reshaped = tensor.reshape(3, 2)
val transposed = tensor.transpose()
// Math (extension properties)
val mean = tensor.mean
val sum = tensor.sum
val std = tensor.std
```
## DSL-Style API
```kotlin
// Matrix multiplication with DSL
val result = client.matmul(a, b) {
    precision = Precision.FP16
    processor = ProcessorType.GPU
    priority = Priority.HIGH
    strategy = BalancingStrategy.SPEED
}
// Convolution
val conv = client.conv2d(input, kernel) {
    stride = 1 to 1
    padding = 1 to 1
}
// Attention
val attention = client.attention(query, key, value) {
    numHeads = 8
    flash = true
}
```
## LLM Inference
```kotlin
// Single response
val response = client.inference("llama-3-70b", "Explain quantum computing") {
    maxTokens = 512
    temperature = 0.7
}
println(response.result)
// Streaming with Flow
client.inferenceStream("llama-3-70b", "Write a poem")
    .collect { chunk ->
        print(chunk)
    }
```
## Configuration
```kotlin
val config = SynorConfig(
    apiKey = "your-api-key",
    baseUrl = "https://api.synor.io/compute/v1",
    defaultProcessor = ProcessorType.GPU,
    defaultPrecision = Precision.FP16,
    timeout = 30.seconds,
    debug = true
)
val client = SynorCompute(config)
```
## Error Handling
```kotlin
try {
    val result = client.matmul(a, b)
} catch (e: SynorException) {
    println("API Error: ${e.message} (${e.statusCode})")
}
// Or with Result type
val result = runCatching {
    client.matmul(a, b)
}
result.onSuccess { println("Success: ${it.result}") }
    .onFailure { println("Failed: ${it.message}") }
```
## Extension Functions
```kotlin
// Operator overloading
val c = a * b // Matrix multiplication
val d = a + b // Element-wise addition
// Infix functions
val result = a matmul b
```
## Types
```kotlin
// Sealed class for type safety (members match the ProcessorType.GPU usage above)
sealed class ProcessorType {
    object CPU : ProcessorType()
    object GPU : ProcessorType()
    object TPU : ProcessorType()
    object NPU : ProcessorType()
    object LPU : ProcessorType()
    object FPGA : ProcessorType()
    object AUTO : ProcessorType()
}
enum class Precision {
    FP64, FP32, FP16, BF16, INT8, INT4
}
enum class JobStatus {
    PENDING, RUNNING, COMPLETED, FAILED, CANCELLED
}
```
## Requirements
- Kotlin 1.9+
- Kotlinx Coroutines
- Kotlinx Serialization
## Testing
```bash
./gradlew test
```
## License
MIT

---
`sdk/python/README.md`
# Synor Compute SDK for Python
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
```bash
pip install synor-compute
# or
poetry add synor-compute
```
## Quick Start
```python
import asyncio
from synor_compute import SynorCompute, Tensor
async def main():
    client = SynorCompute('your-api-key')
    # Matrix multiplication on GPU
    a = Tensor.random((512, 512))
    b = Tensor.random((512, 512))
    result = await client.matmul(a, b, precision='fp16', processor='gpu')
    print(f"Execution time: {result.execution_time_ms}ms")
    print(f"Cost: ${result.cost}")

asyncio.run(main())
```
## NumPy Integration
```python
import numpy as np
from synor_compute import Tensor
# Create from NumPy
arr = np.random.randn(100, 100).astype(np.float32)
tensor = Tensor.from_numpy(arr)
# Convert back to NumPy
result_np = tensor.numpy()
```
## Tensor Operations
```python
# Create tensors
zeros = Tensor.zeros((3, 3))
ones = Tensor.ones((2, 2))
random = Tensor.random((10, 10))
randn = Tensor.randn((100,)) # Normal distribution
# Operations
reshaped = tensor.reshape((50, 200))
transposed = tensor.T
# Math operations
mean = tensor.mean()
std = tensor.std()
```
## Matrix Operations
```python
# Matrix multiplication
result = await client.matmul(a, b,
    precision='fp16',
    processor='gpu',
    strategy='speed'
)
# 2D Convolution
conv = await client.conv2d(input_tensor, kernel,
    stride=(1, 1),
    padding=(1, 1)
)
# Flash Attention
attention = await client.attention(query, key, value,
    num_heads=8,
    flash=True
)
```
## LLM Inference
```python
# Single response
response = await client.inference(
    'llama-3-70b',
    'Explain quantum computing',
    max_tokens=512,
    temperature=0.7
)
print(response.result)
# Streaming response
async for chunk in client.inference_stream('llama-3-70b', 'Write a poem'):
    print(chunk, end='', flush=True)
```
## Configuration
```python
from synor_compute import SynorCompute, Config
config = Config(
    api_key='your-api-key',
    base_url='https://api.synor.io/compute/v1',
    default_processor='gpu',
    default_precision='fp16',
    default_strategy='balanced',
    timeout=30.0,
    debug=False
)
client = SynorCompute(config)
```
## Synchronous API
For non-async contexts:
```python
from synor_compute import SynorComputeSync
client = SynorComputeSync('your-api-key')
result = client.matmul(a, b) # Blocking call
```
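A sync facade like this is typically a thin wrapper that drives each coroutine to completion. A minimal sketch of the pattern — `make_sync` is a hypothetical helper, and the pure-Python `async_matmul` merely stands in for the real client call:

```python
import asyncio

# Illustrative sketch of a sync facade over an async API; not the SDK's
# actual implementation.
def make_sync(async_fn):
    def wrapper(*args, **kwargs):
        # Each call runs the coroutine to completion on a fresh event loop.
        return asyncio.run(async_fn(*args, **kwargs))
    return wrapper

async def async_matmul(a, b):
    # Naive list-of-lists matrix product, standing in for the remote call.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

matmul = make_sync(async_matmul)
print(matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

Note that `asyncio.run` cannot be called from inside an already-running event loop, which is why the SDK ships a separate sync client rather than mixing the two.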
## Job Management
```python
# Submit async job
job = await client.submit_job('matmul', {'a': a, 'b': b})
# Poll for status
status = await client.get_job_status(job.job_id)
# Wait for completion
result = await client.wait_for_job(job.job_id, timeout=60.0)
# Cancel job
await client.cancel_job(job.job_id)
```
## Error Handling
```python
from synor_compute import SynorError
try:
    result = await client.matmul(a, b)
except SynorError as e:
    print(f"API Error: {e.message} (status: {e.status_code})")
```
## Type Hints
Full type hint support:
```python
from synor_compute.types import (
    ProcessorType,
    Precision,
    BalancingStrategy,
    JobStatus,
    MatMulOptions,
    InferenceOptions,
    JobResult
)
```
## Testing
```bash
pytest
# or
python -m pytest tests/
```
## License
MIT

---
`sdk/ruby/README.md`
# Synor Compute SDK for Ruby
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
Add to `Gemfile`:
```ruby
gem 'synor_compute'
```
Then run:
```bash
bundle install
```
Or install directly:
```bash
gem install synor_compute
```
## Quick Start
```ruby
require 'synor_compute'
client = SynorCompute::Client.new('your-api-key')
# Matrix multiplication on GPU
a = SynorCompute::Tensor.random([512, 512])
b = SynorCompute::Tensor.random([512, 512])
result = client.matmul(a, b,
  precision: :fp16,
  processor: :gpu
)
if result.success?
  puts "Time: #{result.execution_time_ms}ms"
  puts "Cost: $#{result.cost}"
end
```
## Tensor Operations
```ruby
# Create tensors
zeros = SynorCompute::Tensor.zeros([3, 3])
ones = SynorCompute::Tensor.ones([2, 2])
random = SynorCompute::Tensor.random([10, 10])
randn = SynorCompute::Tensor.randn([100])
eye = SynorCompute::Tensor.eye(3)
# From array
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
tensor = SynorCompute::Tensor.new(data, shape: [2, 3])
# Operations
reshaped = tensor.reshape([3, 2])
transposed = tensor.transpose
# Math
mean = tensor.mean
sum = tensor.sum
std = tensor.std
```
## Matrix Operations
```ruby
# Matrix multiplication
result = client.matmul(a, b,
  precision: :fp16,
  processor: :gpu,
  strategy: :speed
)
# 2D Convolution
conv = client.conv2d(input, kernel,
  stride: [1, 1],
  padding: [1, 1]
)
# Attention
attention = client.attention(query, key, value,
  num_heads: 8,
  flash: true
)
```
## LLM Inference
```ruby
# Single response
response = client.inference('llama-3-70b', 'Explain quantum computing',
  max_tokens: 512,
  temperature: 0.7
)
puts response.result
# Streaming with block
client.inference_stream('llama-3-70b', 'Write a poem') do |chunk|
  print chunk
end
# Streaming with Enumerator
client.inference_stream('llama-3-70b', 'Write a poem').each do |chunk|
  print chunk
end
```
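The block and Enumerator styles above are the standard Ruby dual API. A minimal sketch of how a method can offer both — `stream_tokens` is illustrative, not the gem's implementation:

```ruby
# Illustrative only: returns an Enumerator when no block is given,
# otherwise yields each token to the block.
def stream_tokens(tokens, &block)
  return enum_for(:stream_tokens, tokens) unless block_given?
  tokens.each(&block)
end

# Block style
out = +''
stream_tokens(%w[Hello , world]) { |t| out << t }

# Enumerator style: lazy access to the same stream
first_two = stream_tokens(%w[a b c]).take(2)
```

`enum_for` re-invokes the method with the supplied block on demand, so both call styles share one implementation.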
## Configuration
```ruby
config = SynorCompute::Config.new(
  api_key: 'your-api-key',
  base_url: 'https://api.synor.io/compute/v1',
  default_processor: :gpu,
  default_precision: :fp16,
  timeout: 30,
  debug: true
)
client = SynorCompute::Client.new(config)
# Or with a block
SynorCompute.configure do |config|
  config.api_key = 'your-api-key'
  config.default_processor = :gpu
end
```
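The `SynorCompute.configure` block follows the common module-level configuration idiom. A minimal sketch of how it works — `MiniCompute` is a stand-in, not the gem's internals:

```ruby
# Illustrative module-level configuration, standing in for the gem.
module MiniCompute
  Config = Struct.new(:api_key, :default_processor)

  # Memoized singleton config object
  def self.config
    @config ||= Config.new
  end

  # Yields the singleton so callers can mutate it in a block
  def self.configure
    yield config
  end
end

MiniCompute.configure do |c|
  c.api_key = 'your-api-key'
  c.default_processor = :gpu
end
```

Because the config is memoized on the module, later `Client.new` calls (with no arguments) can fall back to it.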
## Rails Integration
```ruby
# config/initializers/synor_compute.rb
SynorCompute.configure do |config|
  config.api_key = Rails.application.credentials.synor[:api_key]
  config.default_processor = :gpu
end

# In your controller/service
class ComputeService
  def self.client
    @client ||= SynorCompute::Client.new
  end

  def self.compute(a, b)
    client.matmul(a, b)
  end
end
```
## Error Handling
```ruby
begin
  result = client.matmul(a, b)
rescue SynorCompute::ApiError => e
  puts "API Error #{e.status_code}: #{e.message}"
rescue SynorCompute::NetworkError => e
  puts "Network error: #{e.message}"
rescue SynorCompute::Error => e
  puts "Error: #{e.message}"
end
```
## Types
```ruby
# Processor types (symbols)
:cpu, :gpu, :tpu, :npu, :lpu, :fpga, :auto
# Precision (symbols)
:fp64, :fp32, :fp16, :bf16, :int8, :int4
# Job status (symbols)
:pending, :running, :completed, :failed, :cancelled
# Balancing strategy (symbols)
:speed, :cost, :energy, :latency, :balanced
```
## Job Management
```ruby
# Submit async job
job = client.submit_job(:matmul, a: a, b: b)
# Poll for status
status = client.job_status(job.id)
# Wait for completion
result = client.wait_for_job(job.id, timeout: 60)
# Cancel job
client.cancel_job(job.id)
```
## Ruby-Specific Features
### Method Chaining
```ruby
result = client
  .matmul(a, b)
  .tap { |r| puts "Computing..." }
  .then { |r| r.data if r.success? }
```
### Duck Typing
```ruby
# Any object with a #to_tensor method works
class MyData
  def to_tensor
    SynorCompute::Tensor.new(@data, shape: @shape)
  end
end
result = client.matmul(MyData.new, b)
```
### Frozen String Literals
The gem is compatible with frozen string literals:
```ruby
# frozen_string_literal: true
require 'synor_compute'
```
## Requirements
- Ruby 3.0+
- faraday gem for HTTP
## Testing
```bash
bundle exec rake test
# or
bundle exec rspec
```
## License
MIT

---
`sdk/rust/README.md`
# Synor Compute SDK for Rust
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
Add to `Cargo.toml`:
```toml
[dependencies]
synor-compute = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start
```rust
use synor_compute::{SynorCompute, Tensor, Precision, ProcessorType};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = SynorCompute::new("your-api-key");
    // Matrix multiplication on GPU
    let a = Tensor::rand(&[512, 512]);
    let b = Tensor::rand(&[512, 512]);
    let result = client.matmul(&a, &b)
        .precision(Precision::FP16)
        .processor(ProcessorType::Gpu)
        .send()
        .await?;
    if result.is_success() {
        println!("Time: {}ms", result.execution_time_ms.unwrap_or(0));
        println!("Cost: ${}", result.cost.unwrap_or(0.0));
    }
    Ok(())
}
```
## Tensor Operations
```rust
// Create tensors
let zeros = Tensor::zeros(&[3, 3]);
let ones = Tensor::ones(&[2, 2]);
let random = Tensor::rand(&[10, 10]); // Uniform [0, 1)
let randn = Tensor::randn(&[100]); // Normal distribution
let eye = Tensor::eye(3); // Identity matrix
// From data
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::new(&[2, 3], data);
// Ranges
let range = Tensor::arange(0.0, 10.0, 1.0);
let linspace = Tensor::linspace(0.0, 1.0, 100);
// Operations
let reshaped = tensor.reshape(&[3, 2]);
let transposed = tensor.transpose();
// Math
let mean = tensor.mean();
let sum = tensor.sum();
let std = tensor.std();
// Activations
let relu = tensor.relu();
let sigmoid = tensor.sigmoid();
let softmax = tensor.softmax();
```
## Builder Pattern API
```rust
// Matrix multiplication with options
let result = client.matmul(&a, &b)
    .precision(Precision::FP16)
    .processor(ProcessorType::Gpu)
    .priority(Priority::High)
    .send()
    .await?;
// 2D Convolution
let result = client.conv2d(&input, &kernel)
    .stride((1, 1))
    .padding((1, 1))
    .precision(Precision::FP32)
    .send()
    .await?;
// Attention
let result = client.attention(&query, &key, &value)
    .num_heads(8)
    .flash(true)
    .precision(Precision::FP16)
    .send()
    .await?;
```
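The chainable setters above follow the consuming-builder idiom. A minimal, self-contained sketch of the pattern — the types below are illustrative, not the crate's actual API:

```rust
// Illustrative consuming builder; not the crate's real types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Precision { FP32, FP16 }

struct MatMulRequest { precision: Precision }

struct MatMulBuilder { precision: Precision }

impl MatMulBuilder {
    fn new() -> Self {
        // FP32 as an assumed default
        Self { precision: Precision::FP32 }
    }
    // Setters take `self` by value and return it, enabling chaining.
    fn precision(mut self, p: Precision) -> Self {
        self.precision = p;
        self
    }
    fn send(self) -> MatMulRequest {
        // A real builder would submit the job; here we just freeze the options.
        MatMulRequest { precision: self.precision }
    }
}

fn main() {
    let req = MatMulBuilder::new().precision(Precision::FP16).send();
    assert_eq!(req.precision, Precision::FP16);
}
```

Consuming `self` means an unfinished builder cannot be reused after `send()`, which the compiler enforces for free.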
## LLM Inference
```rust
// Single response
let response = client.inference("llama-3-70b", "Explain quantum computing")
    .send()
    .await?;
println!("{}", response.result.unwrap_or_default());
// Streaming with futures
use futures::StreamExt;
let mut stream = client.inference_stream("llama-3-70b", "Write a poem").await?;
while let Some(token) = stream.next().await {
    print!("{}", token?);
}
```
## Configuration
```rust
use synor_compute::Config;
let config = Config::new("your-api-key")
    .base_url("https://api.synor.io/compute/v1")
    .default_processor(ProcessorType::Gpu)
    .default_precision(Precision::FP16)
    .timeout_secs(30)
    .debug(true);
let client = SynorCompute::with_config(config);
```
## Error Handling
```rust
use synor_compute::{Error, Result};
async fn compute() -> Result<()> {
    let result = client.matmul(&a, &b).send().await?;
    match result.status {
        JobStatus::Completed => println!("Success!"),
        JobStatus::Failed => {
            if let Some(err) = result.error {
                eprintln!("Job failed: {}", err);
            }
        }
        _ => {}
    }
    Ok(())
}
// Pattern matching on errors
match client.matmul(&a, &b).send().await {
    Ok(result) => println!("Result: {:?}", result),
    Err(Error::Api { status_code, message }) => {
        eprintln!("API error {}: {}", status_code, message);
    }
    Err(Error::InvalidArgument(msg)) => {
        eprintln!("Invalid argument: {}", msg);
    }
    Err(e) => eprintln!("Other error: {}", e),
}
```
## Types
```rust
// Processor types
ProcessorType::Cpu
ProcessorType::Gpu
ProcessorType::Tpu
ProcessorType::Npu
ProcessorType::Lpu
ProcessorType::Fpga
ProcessorType::Auto // Automatic selection
// Precision levels
Precision::FP64
Precision::FP32
Precision::FP16
Precision::BF16
Precision::INT8
Precision::INT4
// Job status
JobStatus::Pending
JobStatus::Running
JobStatus::Completed
JobStatus::Failed
JobStatus::Cancelled
// Priority
Priority::Low
Priority::Normal
Priority::High
Priority::Critical
```
## Features
Enable optional features in `Cargo.toml`:
```toml
[dependencies]
synor-compute = { version = "0.1", features = ["serde", "rayon"] }
```
- `serde` - Serialization support (enabled by default)
- `rayon` - Parallel tensor operations
## Testing
```bash
cargo test
```
## License
MIT

---
`sdk/swift/README.md`
# Synor Compute SDK for Swift
Access distributed heterogeneous compute at 90% cost reduction.
## Installation
### Swift Package Manager
Add to `Package.swift`:
```swift
dependencies: [
    .package(url: "https://github.com/synor/compute-sdk-swift", from: "0.1.0")
]
```
### Xcode
File > Add Packages > Enter URL:
`https://github.com/synor/compute-sdk-swift`
## Quick Start
```swift
import SynorCompute
let client = SynorCompute(apiKey: "your-api-key")
// Matrix multiplication on GPU
let a = Tensor.random(shape: [512, 512])
let b = Tensor.random(shape: [512, 512])
Task {
    let result = try await client.matmul(a, b,
        precision: .fp16,
        processor: .gpu
    )
    if result.isSuccess {
        print("Time: \(result.executionTimeMs ?? 0)ms")
        print("Cost: $\(result.cost ?? 0)")
    }
}
```
## Tensor Operations
```swift
// Create tensors
let zeros = Tensor.zeros(shape: [3, 3])
let ones = Tensor.ones(shape: [2, 2])
let random = Tensor.random(shape: [10, 10])
let randn = Tensor.randn(shape: [100])
let eye = Tensor.eye(size: 3)
// From array
let data: [Float] = [1, 2, 3, 4, 5, 6]
let tensor = Tensor(data: data, shape: [2, 3])
// Operations
let reshaped = tensor.reshape(to: [3, 2])
let transposed = tensor.transpose()
// Math
let mean = tensor.mean()
let sum = tensor.sum()
let std = tensor.std()
```
## Async/Await API
```swift
// Matrix multiplication
let result = try await client.matmul(a, b,
    precision: .fp16,
    processor: .gpu,
    strategy: .speed
)
// Convolution
let conv = try await client.conv2d(input, kernel,
    stride: (1, 1),
    padding: (1, 1)
)
// Attention
let attention = try await client.attention(query, key, value,
    numHeads: 8,
    flash: true
)
```
## LLM Inference
```swift
// Single response
let response = try await client.inference(
    model: "llama-3-70b",
    prompt: "Explain quantum computing",
    maxTokens: 512,
    temperature: 0.7
)
print(response.result ?? "")
// Streaming with AsyncSequence
for try await chunk in client.inferenceStream(
    model: "llama-3-70b",
    prompt: "Write a poem"
) {
    print(chunk, terminator: "")
}
```
## Configuration
```swift
let config = SynorConfig(
    apiKey: "your-api-key",
    baseUrl: "https://api.synor.io/compute/v1",
    defaultProcessor: .gpu,
    defaultPrecision: .fp16,
    timeout: 30,
    debug: true
)
let client = SynorCompute(config: config)
```
## SwiftUI Integration
```swift
import SwiftUI
import SynorCompute
struct ComputeView: View {
    @StateObject private var vm = ComputeViewModel()

    var body: some View {
        VStack {
            if vm.isLoading {
                ProgressView()
            } else if let result = vm.result {
                Text("Result: \(result)")
            }
            Button("Compute") {
                Task { await vm.compute() }
            }
        }
    }
}

@MainActor
class ComputeViewModel: ObservableObject {
    @Published var result: String?
    @Published var isLoading = false
    private let client = SynorCompute(apiKey: "your-api-key")

    func compute() async {
        isLoading = true
        defer { isLoading = false }
        do {
            let response = try await client.inference(
                model: "llama-3-70b",
                prompt: "Hello"
            )
            result = response.result
        } catch {
            result = "Error: \(error.localizedDescription)"
        }
    }
}
```
## Error Handling
```swift
do {
    let result = try await client.matmul(a, b)
} catch let error as SynorError {
    switch error {
    case .apiError(let statusCode, let message):
        print("API Error \(statusCode): \(message)")
    case .networkError(let underlying):
        print("Network error: \(underlying)")
    case .invalidArgument(let message):
        print("Invalid argument: \(message)")
    }
} catch {
    print("Unexpected error: \(error)")
}
```
## Types
```swift
// Processor types
enum ProcessorType: String, Codable {
case cpu, gpu, tpu, npu, lpu, fpga, auto
}
// Precision
enum Precision: String, Codable {
case fp64, fp32, fp16, bf16, int8, int4
}
// Job status
enum JobStatus: String, Codable {
case pending, running, completed, failed, cancelled
}
```
## Requirements
- iOS 15.0+ / macOS 12.0+ / tvOS 15.0+ / watchOS 8.0+
- Swift 5.9+
## Testing
```bash
swift test
```
## License
MIT