Kubernetes Monitoring with OCI Observability & Management Platform
OCI Kubernetes Monitoring Solution is a turn-key Kubernetes monitoring and management package based on OCI Logging Analytics cloud service, OCI Monitoring, OCI Management Agent and Fluentd.
It enables DevOps engineers, Cloud Admins, Developers, and Sysadmins to monitor and troubleshoot Kubernetes across their entire environment - using Logs, Metrics, and Object metadata.
It performs extensive enrichment of logs, metrics, and object information to enable cross-correlation across entities from different tiers in OCI Logging Analytics. A collection of dashboards is provided to get users started quickly.
:stop_sign: Upgrading to a major version (like 2.x to 3.x)? See the upgrade section below for details. :warning:
ALL {resource.type='managementagent', resource.compartment.id='OCI Management Agent Compartment OCID'}
ALL {instance.compartment.id='OCI Management Agent Compartment OCID'}
Allow dynamic-group <OCI Management Agent Dynamic Group> to use metrics in compartment <Compartment Name> WHERE target.metrics.namespace = 'mgmtagent_kubernetes_metrics'
Allow dynamic-group <OKE Instances Dynamic Group> to {LOG_ANALYTICS_LOG_GROUP_UPLOAD_LOGS} in compartment <Compartment Name>
OR
Allow group <User Group> to {LOG_ANALYTICS_LOG_GROUP_UPLOAD_LOGS} in compartment <Compartment Name>
Deployment Method | Supported Environments | Collection Automation | Dashboards | Customizations |
---|---|---|---|---|
Helm | All* | :heavy_check_mark: | Manual | Full Control (Recommended) |
OCI Resource Manager | OKE | :heavy_check_mark: | :heavy_check_mark: | Partial Control |
Terraform | OKE | :heavy_check_mark: | :heavy_check_mark: | Partial Control |
kubectl | All* | Manual | Manual | Full Control (Not recommended) |
* For some environments, modification of the configuration may be required.
global:
# -- OCID for OKE cluster or a unique ID for other Kubernetes clusters.
kubernetesClusterID:
# -- Provide a unique name for the cluster. This would help in uniquely identifying the logs and metrics data at OCI Logging Analytics and OCI Monitoring respectively.
kubernetesClusterName:
oci-onm-logan:
# Go to OCI Logging Analytics Administration, click Service Details, and note the namespace value.
ociLANamespace:
# OCI Logging Analytics Log Group OCID
ociLALogGroupID:
oci-onm-mgmt-agent:
mgmtagent:
# Provide the base64 encoded content of the Management Agent Install Key file
installKeyFileContent:
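For illustration, a filled-in override_values.yaml might look like the following. All OCIDs, names, and the install key content below are placeholder values, not real ones:

```yaml
global:
  # Placeholder OKE cluster OCID
  kubernetesClusterID: ocid1.cluster.oc1.phx.aaaaaaaaa.......
  # Placeholder cluster name
  kubernetesClusterName: my-oke-cluster
oci-onm-logan:
  # Placeholder OCI Logging Analytics namespace
  ociLANamespace: mynamespace
  # Placeholder Log Group OCID
  ociLALogGroupID: ocid1.loganalyticsloggroup.oc1.phx.amaaaaaa......
oci-onm-mgmt-agent:
  mgmtagent:
    # Placeholder base64-encoded Management Agent install key file content
    installKeyFileContent: PHBsYWNlaG9sZGVyPg==
```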
Use the following helm install command to install the chart. Provide a desired release name, the path to override_values.yaml, and the path to the helm chart.
helm install <release-name> --values <path-to-override-values.yaml> <path-to-helm-chart>
Refer to the helm install documentation for further details.
Use the following helm upgrade command if any further changes to override_values.yaml need to be applied or a new chart version needs to be deployed.
helm upgrade <release-name> --values <path-to-override-values.yaml> <path-to-helm-chart>
Refer to the helm upgrade documentation for further details.
Dashboards need to be imported manually. Below is an example of importing dashboards using the OCI CLI.
Download and configure the OCI CLI, or open Cloud Shell, where the OCI CLI is pre-installed. Alternative methods such as the REST API, SDKs, or Terraform can also be used.
Find the OCID of the compartment where the dashboards need to be imported.
Download the dashboard JSONs from here.
Replace all instances of the keyword "${compartment_ocid}" in the JSONs with the compartment OCID identified in the previous step.
The following command can be used for quick reference in a Linux/Cloud Shell environment:
sed -i "s/\${compartment_ocid}/<Replace-with-Compartment-OCID>/g" *.json
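To sanity-check the substitution, the sketch below creates a minimal JSON containing the placeholder, runs the same sed substitution with a hypothetical compartment OCID, and verifies that no placeholder remains. The file name sample.json and the OCID are illustrative only:

```shell
# Create a minimal JSON containing the placeholder (illustrative file name).
echo '{"compartmentId": "${compartment_ocid}"}' > sample.json

# Run the same substitution with a hypothetical compartment OCID.
sed -i 's/\${compartment_ocid}/ocid1.compartment.oc1..exampleuniqueID/g' sample.json

# Verify that the placeholder is gone.
if ! grep -q 'compartment_ocid' sample.json; then
  echo "placeholder fully replaced"
fi
```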
Run the following commands to import the dashboards.
oci management-dashboard dashboard import --from-json file://cluster.json
oci management-dashboard dashboard import --from-json file://node.json
oci management-dashboard dashboard import --from-json file://workload.json
oci management-dashboard dashboard import --from-json file://pod.json
oci management-dashboard dashboard import --from-json file://service-type-lb.json
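Equivalently, the five imports above can be driven by a loop. The sketch below just echoes each generated command so the list can be reviewed first; remove the echo to actually run the imports against a configured OCI CLI:

```shell
# Print the import command for each dashboard JSON listed above.
for dashboard in cluster node workload pod service-type-lb; do
  echo "oci management-dashboard dashboard import --from-json file://${dashboard}.json"
done
```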
Use the following helm uninstall command to uninstall the chart. Provide the release name used when installing the chart.
helm uninstall <release-name>
Refer to the helm uninstall documentation for further details.
Launch the OCI Resource Manager stack in the OCI tenancy and region of the OKE cluster that you want to monitor.
Refer here.
Refer here.
Use the following helm template command to generate the resource yaml files. Provide the path to override_values.yaml, the path to the helm chart, and the path to a directory where the yaml files are to be generated.
helm template --values <path-to-override-values.yaml> <path-to-helm-chart> --output-dir <path-to-dir-to-store-the-yamls>
Refer to the helm template documentation for further details.
Use the kubectl tool to apply the yaml files generated in the previous step, in the following order.
kubectl apply -f namespace.yaml
kubectl apply -f clusterrole.yaml
kubectl apply -f clusterrolebinding.yaml
kubectl apply -f serviceAccount.yaml
kubectl apply -f logs-configmap.yaml
kubectl apply -f objects-configmap.yaml
kubectl apply -f fluentd-daemonset.yaml
kubectl apply -f fluentd-deployment.yaml
For non-OKE environments, or when you choose to use config file based AuthZ for monitoring the logs, you may need to apply oci-config-secret.yaml before applying fluentd-daemonset.yaml and fluentd-deployment.yaml. Refer here for how to configure config file based AuthZ.
kubectl apply -f mgmt-agent-secrets.yaml
kubectl apply -f metrics-configmap.yaml
kubectl apply -f mgmt-agent-statefulset.yaml
kubectl apply -f mgmt-agent-headless-service.yaml
kubectl apply -f metric_server.yaml
Refer here.
One of the major changes introduced in 3.0.0 is a refactoring of the helm chart, in which the major features of the solution were split into separate sub-charts. 2.x supports only logs and objects collection using Fluentd and OCI Logging Analytics; this functionality has now moved into a separate chart, oci-onm-logan, which is included as a sub-chart of the main chart, oci-onm. This is a breaking change with respect to values.yaml and any customisations that you might have made on top of it. There is no breaking change with respect to the functionality offered in 2.x. For the full list of changes in 3.x, refer to the changelog.
You may fall into one of the categories below and may need to take action accordingly.
We recommend that you uninstall the release created using the 2.x chart and follow the installation instructions mentioned here for installing the release using the 3.x chart.
image:
url: <Container Image URL>
imagePullPolicy: Always
ociLANamespace: <OCI LA Namespace>
ociLALogGroupID: ocid1.loganalyticsloggroup.oc1.phx.amaaaaaa......
kubernetesClusterID: ocid1.cluster.oc1.phx.aaaaaaaaa.......
kubernetesClusterName: <Cluster Name>
global:
# -- OCID for OKE cluster or a unique ID for other Kubernetes clusters.
kubernetesClusterID: ocid1.cluster.oc1.phx.aaaaaaaaa.......
# -- Provide a unique name for the cluster. This would help in uniquely identifying the logs and metrics data at OCI Logging Analytics and OCI Monitoring respectively.
kubernetesClusterName: <Cluster Name>
oci-onm-logan:
# Go to OCI Logging Analytics Administration, click Service Details, and note the namespace value.
ociLANamespace: <OCI LA Namespace>
# OCI Logging Analytics Log Group OCID
ociLALogGroupID: ocid1.loganalyticsloggroup.oc1.phx.amaaaaaa......
If you have modified the values.yaml provided in the helm chart directly, we recommend that you identify all the changes, move them to override_values.yaml, and follow the instructions provided in the install or upgrade sections above. We recommend using override_values.yaml to update values for any variables or to incorporate any customisations on top of the existing values.yaml.
If you are already using a separate values.yaml for your customisations, you still need to compare the 2.x vs 3.x variable hierarchy and make the necessary changes accordingly.
2.x
runtime: docker
image:
url: <Container Image URL>
imagePullPolicy: Always
ociLANamespace: <OCI LA Namespace>
ociLALogGroupID: ocid1.loganalyticsloggroup.oc1.phx.amaaaaaa......
kubernetesClusterID: ocid1.cluster.oc1.phx.aaaaaaaaa.......
kubernetesClusterName: <Cluster Name>
3.x
global:
# -- OCID for OKE cluster or a unique ID for other Kubernetes clusters.
kubernetesClusterID: ocid1.cluster.oc1.phx.aaaaaaaaa.......
# -- Provide a unique name for the cluster. This would help in uniquely identifying the logs and metrics data at OCI Logging Analytics and OCI Monitoring respectively.
kubernetesClusterName: <Cluster Name>
oci-onm-logan:
runtime: docker
# Go to OCI Logging Analytics Administration, click Service Details, and note the namespace value.
ociLANamespace: <OCI LA Namespace>
# OCI Logging Analytics Log Group OCID
ociLALogGroupID: ocid1.loganalyticsloggroup.oc1.phx.amaaaaaa......
2.x
...
...
custom-log1:
path: /var/log/containers/custom-1.log
ociLALogSourceName: "Custom1 Logs"
#multilineStartRegExp:
isContainerLog: true
...
...
3.x
...
...
oci-onm-logan:
...
...
custom-log1:
path: /var/log/containers/custom-1.log
ociLALogSourceName: "Custom1 Logs"
#multilineStartRegExp:
isContainerLog: true
...
...
...
...
The only difference is that the required configuration (variable definitions) moves under the oci-onm-logan section.
Copyright (c) 2023, Oracle and/or its affiliates. Licensed under the Universal Permissive License v1.0 as shown at https://oss.oracle.com/licenses/upl.