InfluxDB is an open source time series database designed to handle high write and query loads. It is built for use cases involving large volumes of timestamped data, such as monitoring, IoT sensor data, and real-time analytics. A common use case for InfluxDB with Kubernetes is centralized aggregation of Prometheus metrics from multiple clusters for long-term storage.
You will create a workload using InfluxDB's official Helm chart
You will then deploy InfluxDB to a managed cluster (for example, one hosting shared infrastructure services used by applications deployed in the same or different clusters)
Important
This recipe describes the steps to create and use an InfluxDB workload using the Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.
Although deploying a simple Helm chart can be trivial for a quick sniff test, there are a number of considerations that have to be factored in for a stable deployment. Some of them are described below.
The InfluxDB service deployed on the cluster needs to be exposed externally for it to be practical. In this recipe, we will use the managed nginx Ingress Controller in the default blueprint to expose the InfluxDB service externally.
InfluxDB's Ingress needs to be secured using TLS. It is impractical to manually handle certificates and private keys. In this recipe, we will use a cert-manager addon in our cluster blueprint to manage the lifecycle of certificates for the InfluxDB Server's Ingress.
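For reference, the "letsencrypt-http" cluster issuer referenced later in the values file could be defined along these lines. This is a minimal sketch assuming cert-manager v1 with HTTP-01 validation through the nginx Ingress Controller; the email address is a placeholder you should replace:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-http
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    # Placeholder: replace with a real contact email for expiry notices
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-http-account-key
    solvers:
      - http01:
          ingress:
            class: nginx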
It is imperative to secure InfluxDB's "admin user" and "admin password" and not have users manually handle these secrets. In this recipe, we will use the controller's integration with HashiCorp Vault to secure InfluxDB's credentials.
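As a sketch, assuming a KV v2 secrets engine mounted at "infra-apps" (matching the secret paths used in the values file below), the admin credentials could be seeded into Vault like this; both values are placeholders:

vault kv put infra-apps/influxdb admin_username=admin admin_password='<strong-password>'

Note that with KV v2, a secret written to infra-apps/influxdb is read back at infra-apps/data/influxdb with its keys nested under data, which is why the references below take the form infra-apps/data/influxdb#data.admin_username.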
In this step, we will create a custom "values.yaml" file with overrides for our InfluxDB deployment.
Copy the following YAML document into the "influxdb-custom-values.yaml" file
## influxdb custom values

## Specify a service type
## Change to NodePort or LoadBalancer if you do not want to use ingress
##
service:
  type: ClusterIP

## Persist data to a persistent volume
##
persistence:
  enabled: true
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi

## Configure resource requests and limits
resources:
  requests:
    memory: 256Mi
    cpu: 0.1
  limits:
    memory: 2Gi
    cpu: 2

## Configure ingress for influxdb if you would like to expose influxdb using ingress
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-http"
  hostname: influxdb.eks.gorafay.net
  path: /
  tls: true
  secretName: influxdb-ingress-tls

## Add pod annotations to use the vault integration
podAnnotations:
  rafay.dev/secretstore: vault
  ## replace "infra" with your configured vault role
  vault.secretstore.rafay.dev/role: "infra"

## Add env vars for getting the influxdb admin username and password from the vault secret store
env:
  ## replace infra-apps/data/influxdb#data.admin_username with the vault secret path to your influxdb admin username
  - name: INFLUXDB_ADMIN_USER
    value: secretstore:vault:infra-apps/data/influxdb#data.admin_username
  ## replace infra-apps/data/influxdb#data.admin_password with the vault secret path to your influxdb admin password
  - name: INFLUXDB_ADMIN_PASSWORD
    value: secretstore:vault:infra-apps/data/influxdb#data.admin_password

# Configure init script to create database
#
initScripts:
  enabled: true
  scripts:
    init.iql: |+
      CREATE DATABASE "prometheus" WITH DURATION 30d REPLICATION 1 NAME "rp_30d"

# Configure backup for influxdb if you do not yet have a backup solution at the cluster level
backup:
  enabled: true
  ## By default emptyDir is used as a transitory volume before uploading to object store.
  ## As such, ensure that a sufficient ephemeral storage request is set to prevent node disk filling completely.
  resources:
    requests:
      # memory: 512Mi
      # cpu: 2
      ephemeral-storage: "8Gi"
    # limits:
      # memory: 1Gi
      # cpu: 4
      # ephemeral-storage: "16Gi"
  ## If the backup destination is a PVC, or to use an intermediate PVC before uploading to the object store.
  persistence:
    enabled: true
    # storageClass: "-"
    accessMode: ReadWriteOnce
    size: 8Gi
  ## Backup cronjob schedule
  schedule: "0 0 * * *"
  ## Amazon S3 or compatible
  ## Secret is expected to have AWS (or compatible) credentials stored in the `credentials` field.
  ## The bucket should already exist.
  s3:
    destination: s3://influxdb-bk/demo
    ## Optional. Specify if you're using an alternate S3 endpoint.
    # endpointUrl: ""
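If you want to sanity-check these overrides before uploading the workload, you can render the chart locally. A minimal sketch, assuming Helm 3 and InfluxData's public chart repository:

helm repo add influxdata https://helm.influxdata.com/
helm template influxdb influxdata/influxdb -f influxdb-custom-values.yaml

This renders the manifests without installing anything, so indentation mistakes or invalid values in the file surface immediately.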
You can optionally verify whether the correct resources have been created on the cluster.
Once the workload is published, click on Debug
Click on Kubectl to open a virtual terminal with kubectl access scoped to the "influxdb" namespace of the cluster
First, we will verify the status of the pods
kubectl get pod
Second, we will verify the InfluxDB persistent volume claim status
kubectl get pvc
Next, we will verify the Ingress for InfluxDB service
kubectl get ingress
Finally, we will verify the InfluxDB service
kubectl get svc
Shown below is an example of what you should see on a cluster where InfluxDB has been deployed as a Helm workload through the controller.
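You can also check the externally exposed endpoint from your own machine. InfluxDB 1.x serves a /ping health endpoint that returns HTTP 204 when the server is up; substitute your own ingress hostname for the one used in this recipe:

curl -i https://influxdb.eks.gorafay.net/ping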
Alternatively, users with the Infrastructure Admin or Organization Admin role can view the status of all Kubernetes resources created by this InfluxDB workload by navigating to Infrastructure > Clusters > cluster_name > Resources and filtering by the "influxdb" workload, as shown below:
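As a final, hedged illustration of the Prometheus aggregation use case mentioned at the start: InfluxDB 1.8 exposes a Prometheus remote write endpoint at /api/v1/prom/write, so each cluster's Prometheus could ship metrics into the "prometheus" database created by the init script with a remote_write stanza along these lines (the admin credentials are used here only for simplicity; a dedicated, less-privileged InfluxDB user would be preferable):

remote_write:
  - url: https://influxdb.eks.gorafay.net/api/v1/prom/write?db=prometheus
    basic_auth:
      username: admin
      # Placeholder: the password stored in Vault earlier in this recipe
      password: <influxdb-admin-password>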