Automate Grafana Dashboards with Terraform and Helm
- Author: Ruan Bekker (@ruanbekker)
In my personal homelab I use Terraform quite heavily, and while I'm transitioning my deployments to ArgoCD, I still do a lot of Helm-based deployments with Terraform. I have published a post on Managing Helm Releases with Terraform if you want more background on that.
Goal
The goal of this post is to share an interesting way I found to define my Grafana dashboards as code. We'll walk through the workflow: first, you create your dashboard in Grafana and export the JSON configuration. This JSON file is then stored directly in Git, allowing for easy tracking and version control of your dashboards. Updates are straightforward: simply modify the JSON files directly in your repository.
This approach enables you to define multiple dashboards in their raw JSON format within a configuration directory. Terraform's kubernetes_manifest resource then takes over, reading the JSON files and parsing them into a ConfigMap manifest using the templatefile function. This generated manifest deploys a ConfigMap, which Grafana's sidecar automatically imports, populating your Grafana instance with your defined dashboards.
About
This Terraform kubernetes_manifest resource renders a ConfigMap manifest from templates/dashboards-configmap.yaml.tpl and populates its data section with the content it reads from the JSON files defined at templates/dashboards/applications-*.json.
Each JSON filename becomes an item key under the data section, with the JSON content as its value, so the end result is a single ConfigMap containing all of your dashboards.
If that sounds like something that interests you, let's get it deployed.
Prerequisites
If you want to follow along, you will need the following:
- Kubernetes Cluster - You can follow this post if you want to deploy a local cluster.
- kube-prometheus-stack.
- Terraform
I have the following values for my kube-prometheus-stack release:
## kube-prometheus-stack default values:
## https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml
fullnameOverride: "kube-prometheus-stack"

grafana:
  ## Using default values from
  ## https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
  enabled: true
  sidecar:
    alerts:
      enabled: true
      label: grafana_alert
      labelValue: "1"
    dashboards:
      enabled: true
      label: grafana_dashboard
      labelValue: "1"
The important part is that we have grafana.sidecar.alerts and grafana.sidecar.dashboards enabled, and that we define the label key and value that need to be present on the ConfigMap, so that the sidecar knows which ConfigMaps to import.
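If you still need to install or update the release with these values, a typical invocation looks like this (a sketch assuming the release is named kube-prometheus-stack, lives in the monitoring namespace, and the values above are saved as values.yaml):
# Add the chart repository (once), then install or upgrade the release
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --values values.yaml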
Defining the directory structure
Since I am using Terraform, we need to create the directory structure for our module and for the environment that sources the module:
├── environments
│   └── test
│       ├── main.tf
│       └── providers.tf
└── modules
    └── monitoring-stack
        ├── main.tf
        ├── outputs.tf
        ├── templates
        │   ├── dashboards-configmap.yaml.tpl
        │   └── dashboards
        │       ├── applications-health.json
        │       └── applications-overview.json
        └── variables.tf
Let's create the directories:
mkdir -p environments/test
mkdir -p modules/monitoring-stack/templates/dashboards
Then we can create the files:
touch environments/test/{main,providers}.tf
touch modules/monitoring-stack/{main,outputs,variables}.tf
touch modules/monitoring-stack/templates/dashboards-configmap.yaml.tpl
touch modules/monitoring-stack/templates/dashboards/applications-{overview,health}.json
Terraform Module
In this section we will only use the kubernetes_manifest resource within our module, but you can extend the module to build whatever you would like.
Inside our modules/monitoring-stack/main.tf:
resource "kubernetes_manifest" "selfmanaged_dashboards" {
count = var.import_selfmanaged_dashboards ? 1 : 0
manifest = yamldecode(templatefile("${path.module}/templates/dashboards/dashboards-configmap.yaml.tpl", {
name = "grafana-selfmanaged-dashboards"
namespace = var.namespace
data = var.import_selfmanaged_dashboards ? {
"applications-overview.json" = file("${path.module}/templates/dashboards/applications-overview.json"),
"applications-health.json" = file("${path.module}/templates/dashboards/applications-health.json")
} : {}
}))
}
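As a side note, if you'd rather not list every file by hand, Terraform's fileset function can glob the dashboards directory. A sketch of that variant (not part of the original module):
resource "kubernetes_manifest" "selfmanaged_dashboards" {
  count = var.import_selfmanaged_dashboards ? 1 : 0

  manifest = yamldecode(templatefile("${path.module}/templates/dashboards-configmap.yaml.tpl", {
    name      = "grafana-selfmanaged-dashboards"
    namespace = var.namespace
    # Build the data map from every *.json file in templates/dashboards
    data = {
      for filename in fileset("${path.module}/templates/dashboards", "*.json") :
      filename => file("${path.module}/templates/dashboards/${filename}")
    }
  }))
}
With this variant, dropping a new JSON file into templates/dashboards is all it takes to ship another dashboard.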
Then our modules/monitoring-stack/variables.tf:
variable "namespace" {
  type    = string
  default = "monitoring"
}

variable "import_selfmanaged_dashboards" {
  type    = bool
  default = true
}
Our ConfigMap template at modules/monitoring-stack/templates/dashboards-configmap.yaml.tpl:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${name}
  namespace: ${namespace}
  labels:
    grafana_dashboard: "1"
data:
%{ for filename, content in data ~}
  ${filename}: |
    ${indent(4, content)}
%{ endfor ~}
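To make the rendering concrete, here is roughly what the final ConfigMap looks like once templatefile has merged in the two dashboard files (dashboard JSON truncated for readability):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-selfmanaged-dashboards
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  applications-overview.json: |
    { ...dashboard JSON... }
  applications-health.json: |
    { ...dashboard JSON... }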
Then, if you are following along, I have defined two JSON files in the following locations:
modules/monitoring-stack/templates/dashboards/applications-overview.json
modules/monitoring-stack/templates/dashboards/applications-health.json
You can head over to Grafana and create your dashboard (but don't save it, as we are defining it in code), then select the dashboard settings icon at the top and choose "JSON Model", which shows the JSON content for your dashboard. Copy that and paste it into the files mentioned above.
For example, I have defined one panel with the query up{job="coredns"} and exported the JSON, which will look like this:
{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": {
          "type": "grafana",
          "uid": "-- Grafana --"
        },
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "fiscalYearStartMonth": 0,
  "graphTooltip": 0,
  "id": 500,
  "links": [],
  "liveNow": false,
  "panels": [
    {
      "datasource": {
        "type": "prometheus",
        "uid": "prometheus"
      },
      "fieldConfig": {
        "defaults": {
          "color": {
            "mode": "palette-classic"
          },
          "custom": {
            "axisBorderShow": false,
            "axisCenteredZero": false,
            "axisColorMode": "text",
            "axisLabel": "",
            "axisPlacement": "auto",
            "barAlignment": 0,
            "drawStyle": "line",
            "fillOpacity": 0,
            "gradientMode": "none",
            "hideFrom": {
              "legend": false,
              "tooltip": false,
              "viz": false
            },
            "insertNulls": false,
            "lineInterpolation": "linear",
            "lineWidth": 1,
            "pointSize": 5,
            "scaleDistribution": {
              "type": "linear"
            },
            "showPoints": "auto",
            "spanNulls": false,
            "stacking": {
              "group": "A",
              "mode": "none"
            },
            "thresholdsStyle": {
              "mode": "off"
            }
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          },
          "unitScale": true
        },
        "overrides": []
      },
      "gridPos": {
        "h": 7,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 1,
      "options": {
        "legend": {
          "calcs": [],
          "displayMode": "list",
          "placement": "bottom",
          "showLegend": true
        },
        "tooltip": {
          "mode": "single",
          "sort": "none"
        }
      },
      "targets": [
        {
          "datasource": {
            "type": "prometheus",
            "uid": "prometheus"
          },
          "editorMode": "code",
          "expr": "up{job=\"coredns\"}",
          "instant": false,
          "legendFormat": "__auto",
          "range": true,
          "refId": "A"
        }
      ],
      "title": "coredns prometheus job",
      "type": "timeseries"
    }
  ],
  "refresh": "",
  "schemaVersion": 39,
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-6h",
    "to": "now"
  },
  "timepicker": {},
  "timezone": "",
  "title": "Application Health",
  "uid": "d4e438ae-ece1-4ee7-bf30-9f2b1b8f762d",
  "version": 2,
  "weekStart": ""
}
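Before wiring a pasted export into Terraform, it's worth checking that the JSON actually parses. jq does this in one line (assuming jq is installed):
# jq exits non-zero and prints the parse error if the file is malformed
jq empty modules/monitoring-stack/templates/dashboards/applications-health.json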
Source your module
Now we want to configure our environment, where we will source our module:
module "monitoring" {
source = "../../modules/monitoring-stack"
namespace = "monitoring"
import_selfmanaged_dashboards = true
}
As you can see, namespace and import_selfmanaged_dashboards are set to their default values, so we don't need to define them, but I'm showing them so that you can change them if you want to.
Then we need to define our providers. Since we are using a kubernetes_manifest resource, the provider configuration requires our Kubernetes cluster endpoint details. Yours might differ, but I'm going to use the following:
terraform {
  required_version = ">= 1.0"

  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.29.0"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "kind-cluster"
}
Deploy
From the environments/test directory, first we need to initialize Terraform to download the providers:
terraform init
Then deploy:
terraform apply
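Once the apply completes, you can verify that the ConfigMap carries the label the sidecar watches for, and tail the sidecar logs to see the files being picked up (the deployment and container names below are what kube-prometheus-stack typically produces; yours may differ):
# The ConfigMap should show up with the grafana_dashboard=1 label
kubectl get configmaps -n monitoring -l grafana_dashboard=1

# Watch the dashboard sidecar writing the files into Grafana
kubectl logs -n monitoring deploy/kube-prometheus-stack-grafana -c grafana-sc-dashboard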
When you head over to Grafana and search for "Application Health", you will find the dashboard we defined, with the CoreDNS panel on it.
What about alerts?
The same way we defined dashboards using kubernetes_manifest can be used to define alerts. The only difference is that the ConfigMap needs the label grafana_alert: "1" instead, and the alert definitions are what you provide in the JSON files.
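A minimal sketch of the alert variant of the ConfigMap template, identical to the dashboards one apart from the label (the file name is my own suggestion):
# templates/alerts-configmap.yaml.tpl (hypothetical file name)
apiVersion: v1
kind: ConfigMap
metadata:
  name: ${name}
  namespace: ${namespace}
  labels:
    grafana_alert: "1"
data:
%{ for filename, content in data ~}
  ${filename}: |
    ${indent(4, content)}
%{ endfor ~}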
Grafana allows you to export your alerts from the Alerting section: when you select export, Grafana offers a couple of export formats, and you can select JSON. Then you should be able to define your alerts in code as well.
Taking it further
This is a basic example, but from here you can extend the module as much as you want.
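For example, the module skeleton includes an outputs.tf that this post leaves empty; one small extension is to expose the generated ConfigMap's name so other modules can reference it (a sketch; the output name is my own):
# outputs.tf: one() returns null when the resource is disabled (count = 0)
output "dashboards_configmap_name" {
  description = "Name of the self-managed dashboards ConfigMap"
  value       = one(kubernetes_manifest.selfmanaged_dashboards[*].manifest.metadata.name)
}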
Thank You
Thanks for reading! If you like my content, feel free to check out my website, subscribe to my newsletter, or follow me at @ruanbekker on Twitter.
- Linktree: https://go.ruan.dev/links
- Patreon: https://go.ruan.dev/patreon