Enable Monitoring

There are two options for installing the KongClusterPlugin.

Install using Workloads

Enable the Prometheus plugin in Kong at the global level. Prometheus then tracks each request that passes through Kong in the Kubernetes cluster.

To create a workload using the Kubernetes YAML approach, follow the Create Workloads process and use the KongClusterPlugin.yaml example below.

KongClusterPlugin.yaml

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: prometheus
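
If you prefer to apply the manifest directly with kubectl instead of through a Workload, a minimal sketch (assuming kubectl access to the cluster):

kubectl apply -f KongClusterPlugin.yaml
kubectl get kongclusterplugins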

Install using Helm charts

Create a Kong umbrella Helm chart that deploys the KongClusterPlugin along with the Kong installation.

  1. Run the following command to create the Kong umbrella chart.

    helm create kong-umbrella-chart

    Note

    Warning messages about group-readable and world-readable file permissions might display.

  2. To check the installation, run cd kong-umbrella-chart. There should be a charts folder, a templates folder, and some YAML files.
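
    A minimal sketch of confirming the generated layout (output abridged; these are the files helm create produces by default):

    cd kong-umbrella-chart
    ls
    charts  Chart.yaml  templates  values.yaml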

  3. Install the Kong Helm chart as a dependency of the umbrella chart, with the KongClusterPlugin included as part of the chart.
  4. Amend the Chart.yaml file to declare Kong as a dependency, as shown below.

    apiVersion: v2
    name: kong-umbrella
    description: A Helm chart for Kubernetes
    type: application
    version: 0.1.0
    # This is the version number of the application being deployed. This
    # version number should be incremented each time you make changes to the
    # application. Versions are not expected to follow Semantic Versioning.
    # They should reflect the version the application is using.
    appVersion: "2.8"
    dependencies:
      - name: kong
        version: 2.9.1
        repository: https://charts.konghq.com
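
    After amending Chart.yaml, fetch the declared dependency into the charts/ folder before installing. A minimal sketch, run from the directory that contains kong-umbrella-chart:

    helm dependency update kong-umbrella-chart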
    
  5. Remove the content of the values.yaml file and leave it blank. The values.yaml file is not used for any customization, but the file is still required.

  6. Place the KongClusterPlugin.yaml file (shown earlier) in the /kong-umbrella-chart/templates/ folder, then remove most of the default files from that folder. Do not delete the KongClusterPlugin.yaml file.

    ls -ltrh templates/
    total 32K
    drwxr-xr-x 2 infracloud infracloud 4.0K Jun 21 19:47 tests
    -rw-r--r-- 1 infracloud infracloud 397 Jun 21 19:47 service.yaml
    -rw-r--r-- 1 infracloud infracloud 344 Jun 21 19:47 serviceaccount.yaml
    -rw-r--r-- 1 infracloud infracloud 1.8K Jun 21 19:47 NOTES.txt
    -rw-r--r-- 1 infracloud infracloud 2.1K Jun 21 19:47 ingress.yaml
    -rw-r--r-- 1 infracloud infracloud 952 Jun 21 19:47 hpa.yaml
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 _helpers.tpl
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 deployment.yaml
    
  7. Follow the Helm Charts instructions to install the Helm chart.
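
If you are installing with the Helm CLI instead of a Workload, a minimal sketch (the release name kong-umbrella and the namespace kong are illustrative assumptions):

helm install kong-umbrella ./kong-umbrella-chart --namespace kong --create-namespace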


Install Grafana Helm chart

Installing the Grafana Helm chart is similar to installing the Kong Helm chart as a Workload.

Install Summary

  • Create a monitoring namespace.
  • Use the grafana-custom-values.yaml file (see below).
  • Integrate the Grafana Helm chart repository.
  • Install the Grafana Helm chart using a Workload.
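
For reference, a minimal CLI sketch of the same summary (the release name grafana-helm is an assumption chosen to match the secret name used later in this section; the repository URL is the public Grafana chart repository):

kubectl create namespace monitoring
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana-helm grafana/grafana --namespace monitoring -f grafana-custom-values.yaml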

Grafana YAML file

The grafana-custom-values.yaml file does the following:

  • Uses the Managed Prometheus service as a data source.
  • Provides the Kong Grafana dashboard for visualization.

grafana-custom-values.yaml

## Custom values for Grafana
## Test framework configuration
testFramework:
  enabled: false

## Pod Annotations
podAnnotations: {}

## Deployment annotations
annotations: {}

## Service - set to type: LoadBalancer to expose service via load balancing instead of using ingress
service:
  enabled: true
  type: ClusterIP
  annotations: {}
  labels: {}

## Ingress configuration to expose Grafana externally using ingress
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: kong

## Resource Limits and Requests settings
resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 100m
#    memory: 128Mi

## Node labels for pod assignment
nodeSelector: {}

## Tolerations for pod assignment
tolerations: []

## Affinity for pod assignment
affinity: {}

## Enable persistence using Persistent Volume Claims
persistence:
  type: pvc
  enabled: true
#  storageClassName: default
  accessModes:
  - ReadWriteOnce
  size: 10Gi
#  annotations: {}
#  existingClaim:

#  Administrator credentials when not using an existing secret (see below)
adminUser: admin
#  adminPassword: strongpassword

# Use an existing secret for the admin user.
admin:
  existingSecret: ""
  userKey: admin-user
  passwordKey: admin-password

## Extra environment variables
env: {}
envValueFrom: {}
envFromSecret: ""

## Configure Grafana datasources to point to Rafay Prometheus Service
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Rafay-Prometheus
      type: prometheus
      url: http://rafay-prometheus-server.rafay-infra.svc.cluster.local:9090
      access: proxy
      isDefault: true

## Configure Grafana dashboard providers for importing dashboards by default
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default

## Configure Grafana dashboards to import by default. gnetId is the dashboard ID from https://grafana.com/grafana/dashboards
dashboards:
  default:
    k8sClusterDashboard:
      gnetId: 7249
      datasource: Rafay-Prometheus
    k8sClusterResource:
      gnetId: 12114
      datasource: Rafay-Prometheus
    k8sNamespaceResource:
      gnetId: 12117
      datasource: Rafay-Prometheus
    k8sPodResource:
      gnetId: 12120
      datasource: Rafay-Prometheus
    k8sNodeResource:
      gnetId: 12119
      datasource: Rafay-Prometheus
    k8sNodeExporter:
      gnetId: 11074
      datasource: Rafay-Prometheus
    k8sDeployStsDs:
      gnetId: 8588
      datasource: Rafay-Prometheus
    k8sAppMetrics:
      gnetId: 1471
      datasource: Rafay-Prometheus
    k8sNetworkingCluster:
      gnetId: 12124
      datasource: Rafay-Prometheus
    k8sNetworkingNamespace:
      gnetId: 12125
      datasource: Rafay-Prometheus
    k8sNetworkingPod:
      gnetId: 12661
      datasource: Rafay-Prometheus

    # New Grafana dashboard for Kong monitoring,
    # from https://grafana.com/dashboards/7424
    kong-dash:
      gnetId: 7424
      revision: 5
      datasource: Rafay-Prometheus
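
Before publishing the workload, you can optionally confirm that the datasource URL above is reachable from inside the cluster. A hypothetical check using a throwaway curl pod:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://rafay-prometheus-server.rafay-infra.svc.cluster.local:9090/-/ready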

After publishing the Grafana Helm workload, verify the installation by running the following command.

kubectl get all -n monitoring
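
To wait until the Grafana deployment has fully rolled out, a sketch assuming the release name grafana-helm used earlier:

kubectl rollout status deployment/grafana-helm -n monitoring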

Set up Port Forwards

For the purposes of this exercise, port-forwarding is used to access Grafana, Managed Prometheus, and the Kong proxy. This is not advisable in production. In a production environment, use a Kubernetes Service with an external IP address or a load balancer.

  1. Open a new terminal and run the following command to allow access to Prometheus using localhost:9090.

    POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/instance=promstack-kube-prometheus-prometheus" -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace monitoring port-forward $POD_NAME 9090 &
    
  2. Run the following command to allow access to Grafana using localhost:3000.

    POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace monitoring port-forward $POD_NAME 3000 &
    
  3. Run the following command to allow access to the Kong proxy using localhost:8000. For this exercise, a plain-text HTTP proxy is used. Use the IP address of a LoadBalancer if running this in a cloud environment.

    POD_NAME=$(kubectl get pods --namespace kong -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace kong port-forward $POD_NAME 8000 &
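
With the port-forwards in place, you can generate traffic for the Prometheus plugin to record. A hypothetical request through the Kong proxy (the path depends on the routes configured in your cluster; without a matching route, Kong returns a 404 response):

curl -i http://localhost:8000/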
    

Access the Grafana Dashboard

Accessing Grafana requires the Admin user password.

  1. Run the following command to read the Admin user password.

    kubectl get secret --namespace monitoring grafana-helm -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    
  2. Using a web browser, go to http://localhost:3000.

  3. Log in with the username admin and the password obtained in step 1.