Run Kubernetes AI Debugging Locally Using k8sgpt + Ollama (No OpenAI, 100% Free)
As Kubernetes clusters grow, debugging issues like ImagePullBackOff, CrashLoopBackOff, or scheduling failures becomes time-consuming.
k8sgpt solves this by analyzing your cluster and explaining issues in plain English using AI.
In this blog, I’ll show how to run k8sgpt locally with Ollama (no OpenAI key required) using Minikube on Windows.
This setup is ideal for:
Kubernetes SREs
DevOps Engineers
Platform teams
Anyone who wants AI-assisted debugging without cloud dependency
🧱 Architecture
Minikube (Kubernetes)
|
k8sgpt (CLI)
|
Ollama (Local LLM - llama3.1)
✔ Fully local
✔ No API key
✔ No cost
✔ Works offline
✅ Prerequisites
Windows 10/11 (64-bit)
Minikube installed and running
kubectl configured
Ollama installed
Verify:
kubectl get nodes
ollama list
Expected:
minikube Ready
llama3.1
🔹 Step 1: Download k8sgpt (Windows)
Go to:
https://github.com/k8sgpt-ai/k8sgpt/releases
Download:
k8sgpt_Windows_x86_64.zip
Extract it and move:
k8sgpt.exe → C:\Program Files\k8sgpt\
Add this directory to your PATH.
Verify:
k8sgpt version
🔹 Step 2: Verify Kubernetes Context
kubectl config current-context
Output:
minikube
🔹 Step 3: Remove OpenAI Backend (Important)
If OpenAI was previously configured:
k8sgpt auth remove --backends openai
This avoids quota and authentication errors.
🔹 Step 4: Configure Ollama as AI Provider
Add Ollama with explicit model name:
k8sgpt auth add --backend ollama --model llama3.1
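If Ollama is listening somewhere other than the default localhost:11434, the base URL can be supplied when adding the backend. The snippet below is a sketch, guarded so it only prints a message when k8sgpt is not on the PATH; the `--baseurl` flag and the endpoint URL are assumptions to verify against `k8sgpt auth add --help`:

```shell
# Point k8sgpt at a specific Ollama endpoint (default port shown; adjust as needed).
OLLAMA_URL="http://localhost:11434"
if command -v k8sgpt >/dev/null 2>&1; then
  k8sgpt auth add --backend ollama --model llama3.1 --baseurl "$OLLAMA_URL"
else
  echo "k8sgpt not found on PATH; would register ollama at $OLLAMA_URL"
fi
```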
Set Ollama as default provider:
k8sgpt auth default --provider ollama
Verify:
k8sgpt auth list
Expected:
Default:
> ollama
Active:
> ollama
🔹 Step 5: Verify Ollama Endpoint
curl http://localhost:11434/api/tags
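The response is JSON listing the installed models. As a rough illustration (the sample body below is hand-written and trimmed, with field names based on Ollama's tags endpoint), a quick grep is enough to pull out the model name without jq:

```shell
# Trimmed, hand-written example of an /api/tags response body.
resp='{"models":[{"name":"llama3.1:latest","size":4920753328}]}'
# Extract the first model name field.
model=$(echo "$resp" | grep -o '"name":"[^"]*"' | head -n 1)
echo "$model"   # prints "name":"llama3.1:latest"
```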
You should see:
llama3.1
🔹 Step 6: Run k8sgpt Analysis
Basic analysis:
k8sgpt analyze
With AI explanation:
k8sgpt analyze --explain
For cleaner output:
k8sgpt analyze --explain --filter=Pod,Node,Deployment
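Because analyze runs against whatever context kubectl currently points at, a small guard keeps experiments confined to Minikube. This is a sketch: it assumes the context is literally named minikube and falls back to a message when kubectl is unavailable:

```shell
# Run the analysis only when the active kubectl context is minikube.
ctx=$(kubectl config current-context 2>/dev/null || echo "none")
if [ "$ctx" = "minikube" ]; then
  k8sgpt analyze --explain --filter=Pod,Node,Deployment
else
  echo "Refusing to run: current context is '$ctx', not minikube"
fi
```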
🧪 Step 7: Test with a Real Failure
Create a broken pod:
apiVersion: v1
kind: Pod
metadata:
  name: broken-pod
spec:
  containers:
  - name: test
    image: nginx:doesnotexist
Apply:
kubectl apply -f broken.yaml
Now run:
k8sgpt analyze --explain --filter=Pod
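Optionally, you can first confirm the pod really is stuck pulling its image. The check below is a sketch: the jsonpath targets the first container's waiting reason, and in a cluster running the broken pod it would typically report ImagePullBackOff or ErrImagePull; it degrades to "unknown" when kubectl or the pod is unavailable:

```shell
# Read the first container's waiting reason, falling back gracefully.
reason=$(kubectl get pod broken-pod \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}' 2>/dev/null \
  || echo "unknown")
echo "waiting reason: ${reason:-unknown}"
```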
✅ Output (Example)
Detects ImagePullBackOff
Explains root cause
Suggests fix
Generated locally using llama3.1
🧠 Why This Setup Is Powerful
| Feature | Benefit |
|---|---|
| Local LLM | No internet required |
| No OpenAI | Zero cost |
| Minikube | Safe learning environment |
| k8sgpt | Fast RCA |
| Ollama | Production-grade local AI |
Production Notes
The same workflow applies to large clusters (50+ nodes)
In production, you can:
Run k8sgpt as CronJob
Integrate with Slack / MCP / ChatOps
Use with EFK / OpenSearch logs
Extend to Robin.io environments
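The CronJob idea above can be sketched as a manifest like the following. The image reference, schedule, and service account name are assumptions to adapt; the pod would also need RBAC permissions to read cluster objects and network access to a reachable Ollama endpoint:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: k8sgpt-analyze
spec:
  schedule: "0 */6 * * *"              # every 6 hours; adjust to taste
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: k8sgpt   # assumed SA with read access to cluster objects
          restartPolicy: Never
          containers:
          - name: k8sgpt
            image: ghcr.io/k8sgpt-ai/k8sgpt:latest   # image path/tag are assumptions
            args: ["analyze", "--explain", "--filter=Pod,Node,Deployment"]
```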
Conclusion
By combining k8sgpt + Ollama, you get an AI-powered Kubernetes debugging assistant that:
Runs locally
Costs nothing
Protects data privacy
Scales from Minikube → Production
This is an excellent way for SREs to adopt AI safely.