# Quickstart
klite is a Kafka-compatible broker in a single Go binary. This guide gets you from zero to producing and consuming messages in under a minute.
## Install

### Download a release binary

```sh
# macOS (Apple Silicon)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-darwin-arm64 -o klite

# macOS (Intel)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-darwin-amd64 -o klite

# Linux (amd64)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-linux-amd64 -o klite

# Linux (arm64)
curl -L https://github.com/klaudworks/klite/releases/latest/download/klite-linux-arm64 -o klite

chmod +x klite
```

### Build from source

```sh
go install github.com/klaudworks/klite/cmd/klite@latest
```

### Docker

```sh
docker run -p 9092:9092 ghcr.io/klaudworks/klite
```

See the Docker guide for persistent volumes and custom configuration.
Start the broker
Section titled “Start the broker”./kliteThat’s it. klite starts listening on localhost:9092 with sane defaults:
- Data stored in
./data - Topics auto-created on first produce or metadata request
- 1 partition per topic by default
- Log level:
info
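Each of these defaults can be changed with a startup flag. As a sketch, raising the partition default uses the `--default-partitions` flag mentioned later in this guide; the inline-value syntax shown here is an assumption, so check the Configure klite guide for the authoritative flag list.

```sh
# Sketch: auto-create topics with 3 partitions instead of 1.
# The flag name comes from this guide; the value syntax is assumed.
./klite --default-partitions 3
```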
You’ll see output like:

```
2025/01/15 10:30:00 INFO broker started listen=:9092 cluster_id=abc123 node_id=0
```

## Produce and consume

Use any Kafka client. klite speaks the standard Kafka wire protocol.
### With kcat (recommended for testing)

```sh
# Produce a message
echo "hello klite" | kcat -P -b localhost:9092 -t my-topic

# Consume all messages
kcat -C -b localhost:9092 -t my-topic -e
```

### With Go (franz-go)
```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// Producer
	client, _ := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
	defer client.Close()

	ctx := context.Background()
	client.Produce(ctx, &kgo.Record{
		Topic: "my-topic",
		Value: []byte("hello from Go"),
	}, func(r *kgo.Record, err error) {
		if err != nil {
			panic(err)
		}
		fmt.Printf("produced to partition %d offset %d\n", r.Partition, r.Offset)
	})
	client.Flush(ctx)

	// Consumer
	consumer, _ := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeTopics("my-topic"),
	)
	defer consumer.Close()

	fetches := consumer.PollFetches(ctx)
	fetches.EachRecord(func(r *kgo.Record) {
		fmt.Printf("consumed: %s\n", string(r.Value))
	})
}
```

### With Python (confluent-kafka)
```python
from confluent_kafka import Producer, Consumer

# Produce
p = Producer({'bootstrap.servers': 'localhost:9092'})
p.produce('my-topic', value='hello from Python')
p.flush()

# Consume
c = Consumer({
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'my-group',
    'auto.offset.reset': 'earliest',
})
c.subscribe(['my-topic'])

msg = c.poll(5.0)
if msg:
    print(f"consumed: {msg.value().decode()}")
c.close()
```

### With Node.js (kafkajs)
```js
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ brokers: ['localhost:9092'] });

async function main() {
  // Produce
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: 'my-topic',
    messages: [{ value: 'hello from Node.js' }],
  });
  await producer.disconnect();

  // Consume
  const consumer = kafka.consumer({ groupId: 'my-group' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'my-topic', fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(`consumed: ${message.value.toString()}`);
    },
  });
}

main().catch(console.error);
```

## What just happened?
When you produced to my-topic, klite:

- Auto-created the topic with 1 partition (configurable via `--default-partitions`)
- Assigned offsets to each RecordBatch using the Kafka v2 format
- Stored the data in the write-ahead log under `./data/`
- Flushed to S3 if an S3 bucket was configured (otherwise WAL-only)
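The offset-assignment step is worth unpacking: in the Kafka v2 RecordBatch format, the broker stamps a base offset on the whole batch at append time, and each record inside the batch carries only a small delta, so absolute offsets are reconstructed by addition. A minimal sketch of that arithmetic (the numbers are illustrative, not klite output):

```sh
# v2 RecordBatch: the broker assigns baseOffset once per batch at append time;
# each record stores an offsetDelta relative to it.
base_offset=42               # illustrative baseOffset stamped by the broker
for delta in 0 1 2; do       # offsetDelta of each record in the batch
  echo "absolute offset: $((base_offset + delta))"
done
```

Running this prints absolute offsets 42, 43, and 44, which is exactly the view a consumer sees when it fetches the batch.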
The topic persists across restarts. klite replays the WAL and metadata log on startup to recover state.
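One quick way to see that recovery in action, assuming the broker and kcat from the steps above are available (this is a sketch of a manual check, not a required part of the quickstart):

```sh
# Stop the broker (Ctrl-C in its terminal), then start it again:
./klite &
sleep 1   # give it a moment to replay the WAL and metadata log

# Messages produced before the restart are still readable:
kcat -C -b localhost:9092 -t my-topic -e
```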
## Next steps

- Configure klite — customize listen address, data directory, S3 backend
- Run in Docker — containerized deployment with persistent volumes
- Deploy to Kubernetes — Helm chart with StatefulSet and PVCs
- Consumer groups — coordinate multiple consumers
- Architecture — understand how klite works under the hood