Overview
The default cluster blueprint automatically deploys Prometheus, Kube State Metrics, Metrics Server, and other related components on managed clusters (i.e. both imported and provisioned clusters).
Prometheus is a pull-based system: it sends HTTP requests (scrapes) to its targets based on the scrape configuration defined in the deployment. Each scrape response is parsed and stored, along with metrics about the scrape itself, in a custom time-series database on the Prometheus server that is designed to handle a large influx of data.
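To make the pull model concrete, here is a minimal sketch of a scrape configuration. The job name, interval, and target address are illustrative assumptions, not values shipped with the managed deployment.

# Hypothetical prometheus.yml fragment illustrating the pull model.
# Job name, interval, and target below are placeholders.
global:
  scrape_interval: 30s              # how often Prometheus pulls metrics

scrape_configs:
  - job_name: example-app           # label attached to the scraped series
    metrics_path: /metrics          # endpoint exposed by the target
    static_configs:
      - targets: ["example-app.default.svc:8080"]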
Managed Prometheus
Every cluster managed by the Controller is automatically bootstrapped with cluster and workload monitoring components. These are provisioned in the "rafay-infra" namespace and are actively monitored for availability and performance by the Controller and the Ops/Support organization. As part of the default cluster blueprint, these components are kept up to date.
These cluster and workload monitoring components power the cluster dashboards in the Console. Customers can also use them to drive the Horizontal Pod Autoscaler (HPA) for their workloads (see the sketch after the pod listing below).
Here is an example of what cluster admins should see in this namespace using kubectl:
kubectl get po -n rafay-infra
NAME                                                   READY   STATUS    RESTARTS   AGE
log-aggregator-6847784f79-88vl8                        1/1     Running   2          32d
log-router-k7hdj                                       2/2     Running   5          32d
rafay-metrics-server-79879c65cb-6rg4v                  1/1     Running   5          32d
rafay-prometheus-adapter-7cc76d654c-sq7f8              1/1     Running   4          32d
rafay-prometheus-alertmanager-0                        2/2     Running   0          9d
rafay-prometheus-kube-state-metrics-567cff6b85-5thr2   1/1     Running   3          32d
rafay-prometheus-node-exporter-btkvp                   1/1     Running   0          9d
rafay-prometheus-server-0                              2/2     Running   0          9d
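The rafay-prometheus-adapter pod listed above is the kind of component that typically exposes Prometheus metrics through the Kubernetes custom metrics API, which is how an HPA can scale a workload on them. Below is a minimal sketch of such an HPA; the Deployment name, metric name, and target value are illustrative assumptions.

# Hypothetical HPA scaling a Deployment named "web" on a custom metric
# served through the custom metrics API. Names and values are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # metric assumed to be exposed via the adapter
        target:
          type: AverageValue
          averageValue: "100"              # desired average per pod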
BYO Prometheus
The Managed Prometheus and associated components are configured and deployed so that they do not interfere with a customer's own Prometheus deployment. As a result, customers can deploy Prometheus either before or after the clusters are managed by the Controller.
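For example, a customer could install a self-managed Prometheus into a dedicated namespace with the community Helm chart. The release name and namespace below are illustrative assumptions; any standard installation method works equally well.

# Hypothetical installation of a self-managed Prometheus, kept in its own
# namespace and separate from the rafay-infra components.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install my-prometheus prometheus-community/prometheus \
  --namespace monitoring --create-namespace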