# Synor Compute SDK for Kotlin
Access distributed heterogeneous compute at a 90% cost reduction.
## Installation
### Gradle (Kotlin DSL)
```kotlin
implementation("io.synor:compute-sdk-kotlin:0.1.0")
```
### Gradle (Groovy)
```groovy
implementation 'io.synor:compute-sdk-kotlin:0.1.0'
```
## Quick Start
```kotlin
import io.synor.compute.*
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val client = SynorCompute("your-api-key")

    // Matrix multiplication on GPU
    val a = Tensor.random(512, 512)
    val b = Tensor.random(512, 512)
    val result = client.matmul(a, b) {
        precision = Precision.FP16
        processor = ProcessorType.GPU
    }

    if (result.isSuccess) {
        println("Time: ${result.executionTimeMs}ms")
        println("Cost: $${result.cost}")
    }
}
```
## Kotlin Coroutines Support
```kotlin
// Suspend functions
suspend fun compute() {
    val result = client.matmul(a, b)
    println(result.result)
}

// Flows for streaming
client.inferenceStream("llama-3-70b", "Write a poem")
    .collect { chunk ->
        print(chunk)
    }
```
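Because the client's calls are suspend functions, independent jobs can also run concurrently. A sketch using `async`/`awaitAll`, assuming the same `client`, `Tensor`, and result fields as the examples above:

```kotlin
import kotlinx.coroutines.*

// Launch two independent jobs concurrently; awaitAll suspends
// until both have completed.
suspend fun computeConcurrently(client: SynorCompute, a: Tensor, b: Tensor) = coroutineScope {
    val first = async { client.matmul(a, b) }
    val second = async { client.matmul(b, a) }
    val (r1, r2) = awaitAll(first, second)
    println("First: ${r1.executionTimeMs}ms, second: ${r2.executionTimeMs}ms")
}
```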
## Tensor Operations
```kotlin
// Create tensors
val zeros = Tensor.zeros(3, 3)
val ones = Tensor.ones(2, 2)
val random = Tensor.random(10, 10)
val randn = Tensor.randn(100)
val eye = Tensor.eye(3)
// From array
val data = arrayOf(
    floatArrayOf(1f, 2f, 3f),
    floatArrayOf(4f, 5f, 6f)
)
val tensor = Tensor.from(data)
// Operations
val reshaped = tensor.reshape(3, 2)
val transposed = tensor.transpose()
// Math (extension properties)
val mean = tensor.mean
val sum = tensor.sum
val std = tensor.std
```
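To make the `reshape`/`transpose` semantics above concrete without the SDK, here is a minimal, self-contained stand-in (the `MiniTensor` class is illustrative only, not the SDK's actual implementation): reshape keeps the row-major element order and only changes the shape, while transpose reorders the elements.

```kotlin
// Minimal row-major 2-D tensor stand-in to illustrate reshape/transpose.
class MiniTensor(val rows: Int, val cols: Int, val data: FloatArray) {
    init { require(data.size == rows * cols) { "shape/data mismatch" } }

    // Reshape: same element order, new shape; element count must match.
    fun reshape(r: Int, c: Int): MiniTensor {
        require(r * c == rows * cols) { "Cannot reshape ${rows}x$cols to ${r}x$c" }
        return MiniTensor(r, c, data)
    }

    // Transpose: out[j][i] = in[i][j], so the data is physically reordered.
    fun transpose(): MiniTensor {
        val out = FloatArray(data.size)
        for (i in 0 until rows) for (j in 0 until cols) {
            out[j * rows + i] = data[i * cols + j]
        }
        return MiniTensor(cols, rows, out)
    }

    operator fun get(i: Int, j: Int): Float = data[i * cols + j]
}
```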
## DSL-Style API
```kotlin
// Matrix multiplication with DSL
val result = client.matmul(a, b) {
    precision = Precision.FP16
    processor = ProcessorType.GPU
    priority = Priority.HIGH
    strategy = BalancingStrategy.SPEED
}

// Convolution
val conv = client.conv2d(input, kernel) {
    stride = 1 to 1
    padding = 1 to 1
}

// Attention
val attention = client.attention(query, key, value) {
    numHeads = 8
    flash = true
}
```
## LLM Inference
```kotlin
// Single response
val response = client.inference("llama-3-70b", "Explain quantum computing") {
    maxTokens = 512
    temperature = 0.7
}
println(response.result)

// Streaming with Flow
client.inferenceStream("llama-3-70b", "Write a poem")
    .collect { chunk ->
        print(chunk)
    }
```
## Configuration
```kotlin
import kotlin.time.Duration.Companion.seconds

val config = SynorConfig(
    apiKey = "your-api-key",
    baseUrl = "https://api.synor.io/compute/v1",
    defaultProcessor = ProcessorType.GPU,
    defaultPrecision = Precision.FP16,
    timeout = 30.seconds,
    debug = true
)

val client = SynorCompute(config)
```
## Error Handling
```kotlin
try {
    val result = client.matmul(a, b)
} catch (e: SynorException) {
    println("API Error: ${e.message} (${e.statusCode})")
}

// Or with the Result type
val result = runCatching {
    client.matmul(a, b)
}
result.onSuccess { println("Success: ${it.result}") }
    .onFailure { println("Failed: ${it.message}") }
```
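Transient API errors are often worth retrying. A minimal, self-contained sketch of a retry helper with exponential backoff (the `ApiException` and `withRetry` names are illustrative, not part of the SDK):

```kotlin
// Illustrative stand-in for an SDK exception carrying an HTTP status code.
class ApiException(message: String, val statusCode: Int) : Exception(message)

// Retry a block up to maxAttempts times, doubling the delay after each
// failure (baseDelayMs, 2*baseDelayMs, 4*baseDelayMs, ...).
fun <T> withRetry(maxAttempts: Int = 3, baseDelayMs: Long = 100, block: () -> T): T {
    var lastError: Exception? = null
    repeat(maxAttempts) { attempt ->
        try {
            return block()
        } catch (e: ApiException) {
            lastError = e
            Thread.sleep(baseDelayMs shl attempt)
        }
    }
    throw lastError ?: IllegalStateException("retry failed with no recorded error")
}
```

In real coroutine code the blocking `Thread.sleep` would be replaced by `delay`, but the sleep keeps this sketch dependency-free.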
## Extension Functions
```kotlin
// Operator overloading
val c = a * b // Matrix multiplication
val d = a + b // Element-wise addition
// Infix functions
val result = a matmul b
```
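Operators like `*` and an infix `matmul` resolve to ordinary member or extension functions. A self-contained sketch of the pattern on a toy 2x2 matrix type (the `Mat2` type is illustrative, not the SDK's `Tensor`):

```kotlin
// Toy 2x2 matrix: [a b; c d].
data class Mat2(val a: Float, val b: Float, val c: Float, val d: Float) {
    // `x * y` resolves to `x.times(y)` -- here, matrix multiplication.
    operator fun times(o: Mat2) = Mat2(
        a * o.a + b * o.c, a * o.b + b * o.d,
        c * o.a + d * o.c, c * o.b + d * o.d
    )

    // `x + y` resolves to `x.plus(y)` -- element-wise addition.
    operator fun plus(o: Mat2) = Mat2(a + o.a, b + o.b, c + o.c, d + o.d)
}

// The `infix` modifier enables the `x matmul y` call syntax.
infix fun Mat2.matmul(o: Mat2): Mat2 = this * o
```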
## Types
```kotlin
// Sealed classes for type safety
sealed class ProcessorType {
    object CPU : ProcessorType()
    object GPU : ProcessorType()
    object TPU : ProcessorType()
    object AUTO : ProcessorType()
}

enum class Precision {
    FP64, FP32, FP16, BF16, INT8, INT4
}

enum class JobStatus {
    PENDING, RUNNING, COMPLETED, FAILED, CANCELLED
}
```
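The practical benefit of the sealed hierarchy is exhaustive `when` dispatch: the compiler verifies every subtype is handled, so no `else` branch is needed. A standalone sketch (the `wireName` function and its string labels are illustrative):

```kotlin
sealed class ProcessorType {
    object CPU : ProcessorType()
    object GPU : ProcessorType()
    object TPU : ProcessorType()
    object AUTO : ProcessorType()
}

// Exhaustive `when`: adding a new subtype to ProcessorType turns a
// missing branch here into a compile-time error rather than a runtime bug.
fun wireName(p: ProcessorType): String = when (p) {
    ProcessorType.CPU -> "cpu"
    ProcessorType.GPU -> "gpu"
    ProcessorType.TPU -> "tpu"
    ProcessorType.AUTO -> "auto"
}
```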
## Requirements
- Kotlin 1.9+
- Kotlinx Coroutines
- Kotlinx Serialization
## Testing
```bash
./gradlew test
```
## License
MIT