GreenKube
Measure, understand, and reduce the carbon footprint of your Kubernetes infrastructure. Make your cloud operations both cost-effective and environmentally responsible.
GreenKube is an open-source tool designed to help DevOps, SRE, and FinOps teams navigate the complexity of sustainability reporting (CSRD) and optimize their cloud costs (FinOps) through better energy efficiency (GreenOps).

🎯 Mission
The EU’s Corporate Sustainability Reporting Directive (CSRD) requires companies to report the carbon footprint of their value chain—including cloud services (Scope 3). GreenKube addresses this urgent need by providing tools to:
- Estimate the energy consumption and CO₂e emissions of each Kubernetes workload.
- Report these metrics in a format aligned with regulatory requirements (ESRS E1).
- Optimize infrastructure to simultaneously reduce cloud bills and environmental impact.
✨ Features (Version 0.2.2)
📊 Dashboard & Visualization
- Modern Web Dashboard: Built-in SvelteKit SPA with real-time charts (ECharts), interactive per-pod metrics table, node inventory, and optimization recommendations — all served from the same container as the API.
- REST API: Full-featured FastAPI backend with comprehensive endpoints for metrics, nodes, namespaces, recommendations, timeseries, and configuration. OpenAPI docs included at /api/v1/docs.
📈 Comprehensive Resource Monitoring
- Multi-Resource Metrics Collection: Beyond CPU, GreenKube now tracks:
  - CPU usage (actual utilization in millicores)
  - Memory usage (bytes consumed)
  - Network I/O (bytes received/transmitted)
  - Disk I/O (bytes read/written)
  - Storage (ephemeral storage requests and usage)
  - Pod restarts (restart count per container)
  - GPU usage (millicores, when available)
- Energy Estimation: Calculates pod-level energy consumption (Joules) using Prometheus metrics and a built-in library of cloud instance power profiles.
- Carbon Footprint Tracking: Converts energy to CO₂e emissions using real-time or default grid carbon intensity data.
🎯 Optimization & Reporting
- Smart Recommendations: Identifies optimization opportunities:
  - Zombie pods (idle but costly workloads)
  - Oversized pods (underutilized CPU/memory)
  - Rightsizing suggestions with potential cost and emission savings
- Pod & Namespace Reporting: Detailed reports of CO₂e emissions, energy usage, and costs per pod and namespace.
- Historical Analysis: Report on any time period (--last 7d, --last 3m) with flexible grouping (--daily, --monthly, --yearly).
- Data Export: Export reports to CSV or JSON for integration with other tools and BI systems.
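To make the period syntax concrete, here is a hypothetical sketch of how `--last` strings such as `7d` or `3m` could be parsed into a time window; the function name and month approximation are assumptions, not GreenKube's actual implementation.

```python
import re
from datetime import timedelta

_UNITS = {"h": "hours", "d": "days", "w": "weeks"}

def parse_last(period: str) -> timedelta:
    """Parse strings like '24h', '7d', or '3m' into a timedelta (illustrative only)."""
    match = re.fullmatch(r"(\d+)([hdwm])", period)
    if not match:
        raise ValueError(f"invalid period: {period!r}")
    value, unit = int(match.group(1)), match.group(2)
    if unit == "m":  # months, approximated here as 30 days
        return timedelta(days=30 * value)
    return timedelta(**{_UNITS[unit]: value})
```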
🔧 Infrastructure & Deployment
- Demo Mode: Deploy a standalone demo pod with kubectl run to explore GreenKube with realistic sample data—no live cluster metrics needed.
- Flexible Data Backends: Supports PostgreSQL (default/recommended), SQLite (local/dev), and Elasticsearch (production scale) for storing metrics and carbon intensity data.
- Service Auto-Discovery: Automatically discovers in-cluster Prometheus and OpenCost services to simplify setup (manually configurable via Helm values).
- Helm Chart Deployment: Production-ready Helm chart with PostgreSQL StatefulSet, configurable persistence, RBAC, and health probes.
- Cloud Provider Support: Built-in profiles for AWS, GCP, Azure, OVH, and Scaleway with automatic region-to-carbon-zone mapping.
📦 Dependencies
The chart requires the following services to be available in the cluster:
- OpenCost – for cost data.
- Prometheus – for metrics collection.
GreenKube uses service auto-discovery to locate these services automatically. If they are deployed in non-standard namespaces or with custom names, auto-discovery may fail. In that case, set the service URLs manually in values.yaml (see the prometheus.url and opencost.url fields).
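For reference, a manual override might look like the following in values.yaml. The Prometheus URL matches the example later in this README; the OpenCost service name, namespace, and port are illustrative placeholders, so adjust them to your cluster:

```yaml
config:
  prometheus:
    url: "http://prometheus-k8s.monitoring.svc.cluster.local:9090"
  opencost:
    url: "http://opencost.opencost.svc.cluster.local:9003"
```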
🚀 Installation & Usage
The recommended way to install GreenKube is via the official Helm chart.
1. Add the GreenKube Helm Repository
First, add the GreenKube chart repository to your local Helm setup:
helm repo add greenkube https://GreenKubeCloud.github.io/GreenKube
helm repo update
2. Configure Your Deployment
Create a file named my-values.yaml to customize your deployment:
secrets:
  # Get your API token from https://www.electricitymaps.com/
  # Optional: without it, GreenKube uses a default carbon intensity
  # value (configurable via config.defaultIntensity) for all zones.
  electricityMapsToken: "YOUR_API_TOKEN_HERE"

# Uncomment to manually set your Prometheus URL
# (If left empty, GreenKube will try to auto-discover it)
# config:
#   prometheus:
#     url: "http://prometheus-k8s.monitoring.svc.cluster.local:9090"
Note: GreenKube works without an Electricity Maps token. When no token is provided, a default carbon intensity value (config.defaultIntensity, default: 500 gCO₂e/kWh) is used for all zones. This gives approximate results. For accurate, zone-specific carbon data, provide a token from Electricity Maps.
3. Install the Chart
Install the Helm chart into a dedicated namespace (e.g., greenkube):
helm install greenkube greenkube/greenkube \
  -f my-values.yaml \
  -n greenkube \
  --create-namespace
This deploys GreenKube with the collector, the API server, and the web dashboard — all in a single image.
🎮 Quick Start with Demo Mode
Want to explore GreenKube with realistic sample data? Deploy the demo mode as a standalone pod:
# 1. Deploy GreenKube demo as a one-time pod
kubectl run greenkube-demo \
  --image=greenkube/greenkube:0.2.2 \
  --restart=Never \
  --command -- greenkube demo --no-browser --port 9000
# 2. Wait for it to start (about 10 seconds)
kubectl wait --for=condition=Ready pod/greenkube-demo --timeout=30s
# 3. Port-forward to access the dashboard
kubectl port-forward pod/greenkube-demo 9000:9000
# 4. Open http://localhost:9000 in your browser
This demo mode:
- Creates a temporary SQLite database with 7 days of realistic Kubernetes metrics
- Generates sample data for 22 pods across 5 namespaces (production, staging, monitoring, data-pipeline, ci-cd)
- Includes carbon emissions, costs, resource usage, and optimization recommendations
- Runs independently from your production GreenKube installation
Demo options:
# Generate 14 days of data instead of 7
kubectl run greenkube-demo --image=greenkube/greenkube:0.2.2 --restart=Never \
  --command -- greenkube demo --no-browser --days 14 --port 9000
# Clean up when done
kubectl delete pod greenkube-demo
Perfect for:
- Evaluating GreenKube before deploying to your production cluster
- Testing the dashboard and API endpoints with realistic data
- Learning how GreenKube calculates carbon footprint and cost
- Creating screenshots or demos for your team
🖥️ Web Dashboard
GreenKube ships with a built-in web dashboard (SvelteKit SPA served by the API). Once deployed, access it via port-forward:
kubectl port-forward svc/greenkube-api 8000:8000 -n greenkube
Then open http://localhost:8000 in your browser.
The dashboard includes:
- Dashboard — KPI cards (CO₂, cost, energy, pods), time-series charts (ECharts), namespace breakdown pie chart, and top pods by emissions/cost
- Metrics — Interactive table with sortable and searchable per-pod metrics including energy, cost, and all resource consumption data (CPU, memory, network, disk, storage)
- Nodes — Cluster node inventory with CPU/memory capacity bars, hardware profiles, cloud provider info, and carbon zones
- Recommendations — Actionable optimization suggestions (zombie pods, rightsizing opportunities) with estimated savings in cost and CO₂e
- Settings — Current configuration, API health status, version info, and database connection details
🎨 Dashboard Features
- Auto-refresh with configurable polling interval
- Responsive design that works on desktop and mobile
- Dark/light theme support
- Export capabilities for charts and data tables
- Advanced filtering by namespace, time range, and resource type
🔌 API Reference
The API is available at /api/v1 and serves both JSON endpoints and the web dashboard.
| Endpoint | Description |
| --- | --- |
| GET /api/v1/health | Health check and version |
| GET /api/v1/version | Application version |
| GET /api/v1/config | Current configuration |
| GET /api/v1/metrics?namespace=&last=24h | Per-pod metrics |
| GET /api/v1/metrics/summary?namespace=&last=24h | Aggregated summary |
| GET /api/v1/metrics/timeseries?granularity=day&last=7d | Time-series data |
| GET /api/v1/namespaces | List of active namespaces |
| GET /api/v1/nodes | Cluster node inventory |
| GET /api/v1/recommendations?namespace= | Optimization recommendations |
Interactive API docs are available at /api/v1/docs (Swagger UI).
API Examples
# Get a health check
curl http://localhost:8000/api/v1/health
# {"status":"ok","version":"0.2.2"}
# Get metrics for the last 24 hours
curl "http://localhost:8000/api/v1/metrics?last=24h"
# Get metrics summary for a specific namespace
curl "http://localhost:8000/api/v1/metrics/summary?namespace=default&last=7d"
# {"total_co2e_grams":142.5,"total_embodied_co2e_grams":12.3,"total_cost":0.87,...}
# Get hourly timeseries data for the last 7 days
curl "http://localhost:8000/api/v1/metrics/timeseries?granularity=hour&last=7d"
# Get optimization recommendations
curl "http://localhost:8000/api/v1/recommendations?namespace=production"
📈 Running Reports & Getting Recommendations
The primary way to interact with GreenKube is by using kubectl exec to run commands inside the running pod.
1. Find your GreenKube pod:
kubectl get pods -n greenkube
(Look for a pod named something like greenkube-7b5…)
2. Open a shell inside the pod:
# Replace <pod-name> with the name from the previous step
kubectl exec -it <pod-name> -n greenkube -- bash
3. Run a report (the flags shown are examples; see the docs or greenkube report --help for all options):
greenkube report --last 7d --daily
4. Get optimization recommendations:
Recommendations are available in the web dashboard and via GET /api/v1/recommendations (see the API Reference above).
🏗️ Architecture Summary
GreenKube follows a clean, hexagonal architecture with strict separation between core business logic and infrastructure adapters.
Core Components
Collectors (Input Adapters):
- PrometheusCollector: Fetches CPU, memory, network I/O, disk I/O, and restart count metrics via PromQL queries
- NodeCollector: Gathers node metadata (zones, instance types, capacity) from Kubernetes API
- PodCollector: Collects resource requests (CPU, memory, ephemeral storage) from pod specs
- OpenCostCollector: Retrieves cost allocation data for financial reporting
- ElectricityMapsCollector: Fetches real-time carbon intensity data by geographic zone
Processing Pipeline:
- Estimator: Converts Prometheus CPU metrics into EnergyMetric objects (Joules) using cloud instance power profiles
- Processor: Orchestrates the entire data collection and processing pipeline:
  - Runs all collectors concurrently via asyncio.gather
  - Reconstructs historical node states from database snapshots
  - Groups metrics by carbon zone for efficient intensity lookups
  - Aggregates estimation flags and reasons for transparency
  - Manages per-pod resource maps (CPU, memory, network, disk, storage, restarts)
- Calculator: Converts energy (Joules → kWh) to carbon emissions (CO₂e) using grid intensity and PUE
  - Maintains a per-run cache of (zone, timestamp) → intensity mappings
  - Supports normalization (hourly/daily/none) for efficient lookups
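The Calculator's core conversion can be sketched in a few lines. This is an illustrative stand-in, not the actual GreenKube code: the example PUE (1.135, cited later for AWS) and the 500 gCO₂e/kWh default intensity come from elsewhere in this README.

```python
JOULES_PER_KWH = 3_600_000

def co2e_grams(energy_joules: float, intensity_g_per_kwh: float, pue: float = 1.135) -> float:
    """Convert pod energy (Joules) to grams of CO2e via kWh, PUE, and grid intensity."""
    kwh = energy_joules / JOULES_PER_KWH
    return kwh * pue * intensity_g_per_kwh

# A pod that consumed 7.2 MJ in a zone at the default 500 gCO2e/kWh:
# 7_200_000 J = 2 kWh; 2 * 1.135 * 500 ≈ 1135 g CO2e
print(co2e_grams(7_200_000, 500))
```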
Business Logic:
- Recommender: Analyzes CombinedMetric data to identify optimization opportunities:
  - Zombie detection (idle pods consuming resources)
  - Rightsizing analysis (over-provisioned CPU/memory)
  - Autoscaling recommendations based on variability
  - Carbon-aware scheduling opportunities
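A zombie-detection heuristic of this kind might look like the sketch below. The thresholds, field names, and sample data are hypothetical, not GreenKube's actual rules:

```python
from dataclasses import dataclass

@dataclass
class PodMetric:
    name: str
    cpu_request_millicores: float
    cpu_usage_millicores: float
    monthly_cost: float

def find_zombies(pods, usage_ratio_threshold=0.02, min_cost=1.0):
    """Flag pods that are nearly idle relative to their request yet still cost money."""
    zombies = []
    for pod in pods:
        if pod.cpu_request_millicores == 0:
            continue  # no request to compare against
        ratio = pod.cpu_usage_millicores / pod.cpu_request_millicores
        if ratio < usage_ratio_threshold and pod.monthly_cost >= min_cost:
            zombies.append(pod.name)
    return zombies

pods = [
    PodMetric("api", 500, 300, 12.0),      # busy, keep
    PodMetric("old-batch", 1000, 5, 8.5),  # idle but costly -> zombie
]
print(find_zombies(pods))  # ['old-batch']
```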
Storage (Output Adapters):
- Repositories: Abstract interfaces implemented for multiple backends:
  - PostgresRepository: Production-grade persistent storage (asyncpg driver)
  - SQLiteRepository: Local development and testing (aiosqlite driver)
  - ElasticsearchRepository: High-scale time-series storage and analytics
- NodeRepository: Historical node state snapshots for accurate time-range reporting
- EmbodiedRepository: Boavizta API integration for hardware embodied emissions
API & Presentation:
- FastAPI Server: REST API with OpenAPI documentation, CORS support, health checks
- SvelteKit Dashboard: Modern SPA with:
  - Server-side rendering (SSR) for fast initial load
  - Client-side navigation for smooth UX
  - ECharts for interactive visualizations
  - Tailwind CSS for responsive design
Data Flow
- Collection Phase (async/concurrent):
  - Prometheus → CPU, memory, network, disk metrics
  - Kubernetes → Node metadata, pod resource requests
  - OpenCost → Cost allocation data
- Processing Phase:
  - Raw metrics → Energy estimation (Joules per pod)
  - Node metadata → Cloud zone mapping
  - Historical data → Node state reconstruction
- Calculation Phase:
  - Energy + Grid intensity + PUE → CO₂e emissions
  - Metrics + Cost data → Combined metrics
- Analysis Phase:
  - Combined metrics → Recommendations engine
  - Time-series data → Trend analysis
- Storage & Presentation:
  - Combined metrics → Database (Postgres/SQLite/ES)
  - Database → API → Web Dashboard
  - API → CLI reports/exports
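The concurrent collection phase can be illustrated with a toy asyncio.gather pipeline. The collector bodies below are stand-ins (simulated delays instead of real PromQL, Kubernetes, or OpenCost calls):

```python
import asyncio

async def collect_prometheus():
    await asyncio.sleep(0.01)  # pretend PromQL round-trip
    return {"cpu_millicores": {"api": 120}}

async def collect_nodes():
    await asyncio.sleep(0.01)  # pretend Kubernetes API call
    return {"node-1": {"instance_type": "m5.large"}}

async def collect_costs():
    await asyncio.sleep(0.01)  # pretend OpenCost query
    return {"api": 0.02}

async def run_collection():
    # All collectors run concurrently; total wall time is ~one round-trip, not three.
    prom, nodes, costs = await asyncio.gather(
        collect_prometheus(), collect_nodes(), collect_costs()
    )
    return {"prometheus": prom, "nodes": nodes, "costs": costs}

result = asyncio.run(run_collection())
print(sorted(result))  # ['costs', 'nodes', 'prometheus']
```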
Key Design Principles
- Async-First: Fully leverages Python asyncio for non-blocking I/O operations
- Database Agnostic: Repository pattern abstracts storage implementation
- Cloud Agnostic: Supports AWS, GCP, Azure, OVH, Scaleway with extensible mapping
- Resilient: Graceful degradation when data sources are unavailable
- Transparent: Clear flagging of estimated vs. measured values with reasoning
- Modular: Each component is independently testable and replaceable
- Observable: Comprehensive logging at all pipeline stages
🔬 How Energy & CO₂e Estimation Works
GreenKube’s estimation pipeline converts raw Kubernetes metrics into actionable carbon data in four steps:
- Collect CPU usage — Prometheus provides per-pod CPU utilisation in millicores over each collection interval.
- Map to power — Each node’s instance type is matched to a power profile (min/max watts per vCPU) derived from SPECpower benchmarks and the Cloud Carbon Footprint coefficient database. The power draw is linearly interpolated between min watts (idle) and max watts (100 % utilisation).
- Apply PUE — The estimated power is multiplied by the Power Usage Effectiveness factor for the cloud provider’s data centre (e.g. 1.135 for AWS, 1.10 for GCP).
- Convert to CO₂e — Energy (kWh) is multiplied by the grid carbon intensity of the node’s geographic zone. When available, real-time intensity is fetched from the Electricity Maps API; otherwise a configurable default is used.
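The four steps above can be condensed into one worked sketch. The min/max watt coefficients below are illustrative placeholders rather than real SPECpower-derived values; the PUE (1.135 for AWS) and the 500 gCO₂e/kWh default intensity are taken from this README:

```python
def pod_co2e_grams(
    cpu_utilisation: float,              # 0.0-1.0 share of one vCPU over the interval
    interval_seconds: float,
    min_watts: float = 0.74,             # illustrative idle watts per vCPU
    max_watts: float = 3.5,              # illustrative watts per vCPU at 100 %
    pue: float = 1.135,                  # e.g. AWS
    intensity_g_per_kwh: float = 500.0,  # default grid intensity
) -> float:
    # Step 2: linear interpolation between idle and full-load power
    watts = min_watts + cpu_utilisation * (max_watts - min_watts)
    # Step 3: apply data-centre overhead (PUE)
    watts *= pue
    # Step 4: energy in kWh times grid carbon intensity
    kwh = watts * interval_seconds / 3_600_000
    return kwh * intensity_g_per_kwh

# A pod averaging 50 % of one vCPU for one hour emits roughly 1.2 g CO2e
# under these placeholder coefficients.
print(round(pod_co2e_grams(0.5, 3600), 2))
```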
Embodied emissions are estimated separately via the Boavizta API, which models the manufacturing footprint of cloud instances amortised over their expected lifespan.
For provider-specific coefficients and the full derivation, see docs/power_estimation_methodology.md.
📋 Changelog
See CHANGELOG.md for a full version history and the GitHub Releases page for published releases.
🤝 Contributing
GreenKube is a community-driven project, and we welcome all contributions! Check out our CONTRIBUTING.md file to learn how to get involved.
Development Setup
# Clone and install
git clone https://github.com/GreenKubeCloud/GreenKube.git
cd GreenKube
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,test]"
pre-commit install
# Run the tests
pytest
# Start the API locally (uses SQLite by default)
DB_TYPE=sqlite greenkube-api
# Run the frontend
cd frontend && npm install && npm run dev
- Report Bugs: Open an Issue with a detailed description.
- Suggest Features: Let’s discuss them in the GitHub Discussions.
- Submit Code: Make a Pull Request!
📄 License
This project is licensed under the Apache 2.0 License. See the LICENSE file for more details.