# Kubernetes & Helm
Deploy klite to Kubernetes as a single-replica StatefulSet with persistent storage. The Helm chart handles PVCs, services, and configuration.
## Prerequisites

- Kubernetes 1.24+
- Helm 3.x
- `kubectl` configured for your cluster
- A StorageClass that supports `ReadWriteOnce` (default on most clusters)
## Install with Helm

```shell
# Add the klite Helm repository
helm repo add klite https://charts.klite.io
helm repo update

# Install with default settings
helm install klite klite/klite

# Or install into a specific namespace
helm install klite klite/klite --namespace kafka --create-namespace
```

## Verify the installation
```shell
# Check the pod is running
kubectl get pods -l app.kubernetes.io/name=klite

# Check the service
kubectl get svc klite

# View logs
kubectl logs -l app.kubernetes.io/name=klite -f
```

## Connect to klite
From within the cluster, use the service DNS name:

```
klite.default.svc.cluster.local:9092
```

For local development, port-forward:

```shell
kubectl port-forward svc/klite 9092:9092
```
```shell
# In another terminal
echo "hello k8s" | kcat -P -b localhost:9092 -t my-topic
kcat -C -b localhost:9092 -t my-topic -e
```

## Configuration
### values.yaml reference

```yaml
replicaCount: 1  # klite is a single broker; always 1

image:
  repository: ghcr.io/klaudworks/klite
  tag: latest  # TODO: Pin to a specific version in production
  pullPolicy: IfNotPresent

# klite configuration (passed as flags)
config:
  listenAddr: ":9092"
  dataDir: "/data"
  logLevel: "info"
  defaultPartitions: 3
  autoCreateTopics: true
  # advertisedAddr is auto-derived from the pod's service address

# S3 storage backend (optional)
s3:
  enabled: false
  bucket: ""
  region: "us-east-1"
  endpoint: ""  # Set for MinIO / S3-compatible stores
  # Credentials via existing secret
  existingSecret: ""
  # Or inline (not recommended for production)
  accessKeyId: ""
  secretAccessKey: ""

# Persistence
persistence:
  enabled: true
  storageClass: ""  # Use cluster default
  size: 10Gi
  accessMode: ReadWriteOnce

# Service
service:
  type: ClusterIP
  port: 9092

# Resources
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: "2"
    memory: 1Gi

# Probes
# TODO: Switch to HTTP health endpoint when available
livenessProbe:
  tcpSocket:
    port: kafka
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
readinessProbe:
  tcpSocket:
    port: kafka
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 3

# Pod-level settings
nodeSelector: {}
tolerations: []
affinity: {}
podAnnotations: {}
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false

# ServiceMonitor for Prometheus (optional)
# TODO: Add metrics endpoint to klite
serviceMonitor:
  enabled: false
  interval: 30s
```

### Custom values
```shell
# Production-like deployment
helm install klite klite/klite -f my-values.yaml
```
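A hypothetical `my-values.yaml` for the command above might look like this; every value here is illustrative, including the pinned tag:

```yaml
image:
  tag: "0.4.2"  # hypothetical version -- pin whichever release you actually run

config:
  defaultPartitions: 6
  logLevel: "warn"

persistence:
  size: 50Gi

resources:
  limits:
    memory: 2Gi
```

Keeping overrides in a values file (rather than `--set` flags) makes upgrades reproducible, since the same file can be passed to `helm upgrade` later.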
```shell
# Or override individual values
helm install klite klite/klite \
  --set config.defaultPartitions=6 \
  --set persistence.size=50Gi \
  --set resources.limits.memory=2Gi
```

## S3 backend
### With AWS S3

Create a Kubernetes secret with your AWS credentials:

```shell
kubectl create secret generic klite-s3 \
  --from-literal=aws-access-key-id=AKIA... \
  --from-literal=aws-secret-access-key=...
```

Then reference it in your values:
```yaml
s3:
  enabled: true
  bucket: my-klite-data
  region: us-west-2
  existingSecret: klite-s3
```

For EKS, consider using IAM Roles for Service Accounts (IRSA) instead of static credentials:
```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/klite-s3-role

s3:
  enabled: true
  bucket: my-klite-data
  region: us-west-2
  # No credentials needed -- IRSA provides them via the service account
```

### With MinIO (in-cluster)
```yaml
s3:
  enabled: true
  bucket: klite
  endpoint: http://minio.minio.svc.cluster.local:9000
  existingSecret: minio-credentials
```

## StatefulSet details
The Helm chart creates a StatefulSet (not a Deployment) to ensure:

- Stable network identity (`klite-0`)
- A persistent volume claim bound to the pod
- Ordered, graceful shutdown
The WAL data directory is mounted at `/data` from a PVC. klite replays the WAL on startup, so data survives pod restarts and rescheduling.
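Conceptually, the chart's StatefulSet declares that PVC via a `volumeClaimTemplates` entry along these lines. This is a sketch of what the chart would render with default values, not the literal template output, and the claim name `data` is an assumption:

```yaml
volumeClaimTemplates:
  - metadata:
      name: data            # assumed claim name; would yield a PVC like "data-klite-0"
    spec:
      accessModes: ["ReadWriteOnce"]   # persistence.accessMode
      resources:
        requests:
          storage: 10Gi                # persistence.size
```

Because the PVC comes from a template on the StatefulSet, deleting the pod (or even the StatefulSet) does not delete the claim, which is why data survives rescheduling.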
## Upgrading

```shell
helm repo update
helm upgrade klite klite/klite
```

klite replays its WAL on startup, so upgrades are safe. The pod will shut down gracefully (SIGTERM), and the new version will replay the WAL to recover state.
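If an upgrade misbehaves, Helm keeps release history, so you can roll back. These are standard Helm commands, nothing klite-specific; note that downgrading the binary across a WAL format change may not be safe, so check the release notes first:

```shell
# Show the release's revision history
helm history klite

# Roll back to the immediately preceding revision
helm rollback klite

# Or roll back to a specific revision number
helm rollback klite 2
```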
## Uninstalling

```shell
helm uninstall klite
```

This removes the StatefulSet, Service, and ConfigMap but preserves the PVC by default. To delete data too:
```shell
kubectl delete pvc -l app.kubernetes.io/name=klite
```

## Example: klite with a sample application
Deploy klite alongside a simple producer/consumer:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-producer
  template:
    metadata:
      labels:
        app: kafka-producer
    spec:
      containers:
        - name: producer
          image: confluentinc/cp-kafkacat:latest
          command:
            - /bin/sh
            - -c
            - |
              while true; do
                echo "message at $(date)" | kafkacat -P -b klite:9092 -t demo
                sleep 5
              done
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-consumer
  template:
    metadata:
      labels:
        app: kafka-consumer
    spec:
      containers:
        - name: consumer
          image: confluentinc/cp-kafkacat:latest
          # Balanced consumer mode: -G <group> <topic...> (topics are
          # positional arguments here, not passed via -t)
          command:
            - kafkacat
            - -b
            - klite:9092
            - -G
            - demo-group
            - demo
```

## Troubleshooting
### Pod stuck in CrashLoopBackOff

Check logs:

```shell
kubectl logs klite-0
```

Common causes:
- PVC not bound (check `kubectl get pvc`)
- Port 9092 conflict (check `kubectl get svc`)
- Invalid S3 credentials
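Beyond the logs, the pod's events usually name the exact failure (for example, volume binding or image pull errors):

```shell
# Events at the bottom of the output often explain a CrashLoopBackOff
kubectl describe pod klite-0

# Or filter cluster events for just this pod
kubectl get events --field-selector involvedObject.name=klite-0
```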
### Clients can't connect

Ensure the service is reachable:
```shell
kubectl run test --rm -it --image=alpine -- sh -c "apk add netcat-openbsd && nc -zv klite 9092"
```

See Troubleshooting for more common issues.
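If the netcat check fails, confirm that the Service actually has the pod registered as an endpoint and that its selector matches the pod's labels:

```shell
# Should list the klite-0 pod IP; an empty ENDPOINTS column means a selector mismatch
kubectl get endpoints klite

# Compare the Service selector against the pod's labels
kubectl get svc klite -o jsonpath='{.spec.selector}'
kubectl get pod klite-0 --show-labels
```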
## Next steps

- Configuration reference — all flags and environment variables
- Monitoring — observability in Kubernetes
- Architecture — understand klite internals