Creates a complete GitOps-based operational stack on your Kubernetes clusters:
The gitops-playground is derived from our experiences in consulting, operating the myCloudogu platform and is used in our GitOps trainings for both Flux and ArgoCD.
You can try the GitOps Playground on a local Kubernetes cluster by running a single command:
```shell
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) --bind-ingress-port=80 \
  && docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost
# If you want to try all features, you might want to add these params: --mail --monitoring --vault=dev
```
Note that some Linux distros, such as Debian, do not support subdomains of localhost.
There you might have to use `--base-url=http://local.gd`
(see local ingresses).
See the list of applications to get started.
We recommend running this command as an unprivileged user that is a member of the docker group.
The GitOps Playground provides a reproducible environment for setting up a GitOps-Stack. It provides an image for automatically setting up a Kubernetes Cluster including CI-server (Jenkins), source code management (SCM-Manager), Monitoring and Alerting (Prometheus, Grafana, MailHog), Secrets Management (Hashicorp Vault and External Secrets Operator) and of course Argo CD as GitOps operator.
The playground also deploys a number of example applications.
The GitOps Playground lowers the barriers for operating your application on Kubernetes using GitOps.
It creates a complete GitOps-based operational stack on your Kubernetes clusters.
No need to read lots of books and operator docs, get familiar with CLIs, ponder GitOps repository folder structures and promotion to different environments, etc.
The GitOps Playground is a pre-configured environment to see GitOps in motion, including more advanced use cases like
notifications, monitoring and secret management.
In addition to creating an operational stack in production, you can run the playground locally, for learning and developing new features.
We aim to be compatible with various environments, e.g. OpenShift and air-gapped networks. Support for these is work in progress.
There are several options for running the GitOps playground:
The diagrams below show an overview of the playground's architecture and three scenarios for running the playground. For a simpler overview including all optional features such as monitoring and secrets management, see the intro at the very top.
Note that running Jenkins inside the cluster is meant for demo purposes only. The third graphic shows our production scenario with the Cloudogu EcoSystem (CES). Here, better security and build performance are achieved using ephemeral Jenkins build agents spawned in the cloud.
| Playground on local machine | Production environment with Cloudogu EcoSystem |
|---|---|
You can create a local k3d cluster using our init script:

```shell
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh)
```

You can then apply the playground to your cluster using our container image `ghcr.io/cloudogu/gitops-playground`.
On success, the container prints a little intro on how to get started with the GitOps playground.
There are several options for running the container:

- `docker`
- `kubectl`

All options offer the same parameters, see below.
When connecting to k3d it is easiest to apply the playground via a local container in the host network and pass k3d's kubeconfig.
```shell
CLUSTER_NAME=gitops-playground
docker pull ghcr.io/cloudogu/gitops-playground
docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-${CLUSTER_NAME}.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground # additional parameters go here
```
Note:

- Running `docker pull` in advance makes sure you have the newest image, even if you ran this command before.
- Using the host network makes it possible to use k3d's kubeconfig without altering it, as it accesses the API server via a port bound to localhost.
- k3d writes its kubeconfig to `~/.config/k3d/kubeconfig-${CLUSTER_NAME}.yaml`, which is mounted into the container.
You can follow the progress of the installation like so:

```shell
docker exec -it \
  $(docker ps -q --filter ancestor=ghcr.io/cloudogu/gitops-playground) \
  bash -c -- 'tail -f -n +1 /tmp/playground-log-*'
```
For remote clusters it is easiest to apply the playground via kubectl. You can find info on how to install kubectl here.
```shell
# Create a temporary ServiceAccount and authorize via RBAC.
# This is needed to install CRDs, etc.
kubectl create serviceaccount gitops-playground-job-executer -n default
kubectl create clusterrolebinding gitops-playground-job-executer \
  --clusterrole=cluster-admin \
  --serviceaccount=default:gitops-playground-job-executer

# Then apply the playground with the following command.
# To access services on remote clusters, add either --remote or --ingress-nginx --base-url=$yourdomain
kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --argocd # additional parameters go here.

# If everything succeeded, remove the objects
kubectl delete clusterrolebinding/gitops-playground-job-executer \
  sa/gitops-playground-job-executer pods/gitops-playground -n default
```
In general, `docker run` should work here as well. But GKE, for example, uses gcloud and python in its kubeconfig. Running inside the cluster avoids these kinds of issues.
The following describes more parameters and use cases.
You can get a full list of all options like so:
```shell
docker run -t --rm ghcr.io/cloudogu/gitops-playground --help
```
You can also use a configuration file to specify the parameters (`--config-file` or `--config-map`).
That file must be a YAML file. You can find the schema here.
Note that currently, only part of the configuration parameters are supported.
See here how to configure IntelliJ IDEA to use the schema and offer autocompletion and validation.
You can use `--output-config-file` to output the current config as a YAML file.
Note that only the currently supported parameters will be used.
The config file is not yet a complete replacement for CLI parameters.
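For illustration, a minimal config file could look like this (using the `features.monitoring.active` key that also appears in the ConfigMap example further below; the full schema offers more options):

```yaml
features:
  monitoring:
    active: true
```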
```shell
docker run --rm -t --pull=always -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  -v $(pwd)/gitops-playground.yaml:/config/gitops-playground.yaml \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd --config-file=/config/gitops-playground.yaml
```
Create the serviceaccount and clusterrolebinding as described above, then create a ConfigMap from your config file and pass it to the playground:

```shell
$ cat config.yaml # for example
features:
  monitoring:
    active: true

# Convention: the ConfigMap is looked up inside the current namespace,
# and the key "config.yaml" is picked from it
kubectl create configmap gitops-config --from-file=config.yaml

kubectl run gitops-playground -i --tty --restart=Never \
  --overrides='{ "spec": { "serviceAccount": "gitops-playground-job-executer" } }' \
  --image ghcr.io/cloudogu/gitops-playground \
  -- --yes --argocd --config-map=gitops-config
```
Afterwards, you might want to clean up as described above. In addition, you might want to delete the ConfigMap as well:

```shell
kubectl delete cm gitops-config
```
In the default installation, the GitOps playground comes without an ingress controller.
We use Nginx as the default ingress controller. It can be enabled via the config file or the parameter `--ingress-nginx`.
In order to make use of the ingress controller, it is recommended to use it in conjunction with `--base-url`, which will create `Ingress` objects for all components of the GitOps playground.
It is possible to deploy `Ingress` objects for all components. You can either

- use a common base URL (`--base-url=https://example.com`) or
- set individual URLs per component:
  - `--argocd-url https://argocd.example.com`
  - `--grafana-url https://grafana.example.com`
  - `--vault-url https://vault.example.com`
  - `--mailhog-url https://mailhog.example.com`
  - `--petclinic-base-domain petclinic.example.com`
  - `--nginx-base-domain nginx.example.com`
Note:

- `jenkins-url` and `scmm-url` are for external services and do not lead to ingresses, but you can set them via `--base-url` for now.
- For an `Ingress` to work, you need an ingress controller. If your cluster does not provide one, the playground can deploy one for you, via the `--ingress-nginx` parameter.

The ingresses can also be used when running the playground on your local machine:
To use them locally,

- initialize the cluster (`init-cluster.sh`) with `--bind-ingress-port`, e.g. `80` or `8080`.
- if you use a port other than 80, the base URL needs to contain `:port`, e.g. `localhost:8080`.
- either use `--base-url=http://localhost`
  - this might require adding `*.localhost` entries to your `hosts` file (use `kubectl get ingress -A` to get a full list)
  - then access `http://argocd.localhost`, for example
- or use `--base-url=http://local.gd` (or `127.0.0.1.nip.io`, `127.0.0.1.sslip.io`, or others)
  - then access `http://argocd.local.gd`, for example
- if binding to `localhost:80` or even `127.0.0.1:80` fails (`NoRouteToHostException`), try `--bind-ingress-port=127.0.0.1:80`.
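To check which ingresses exist and whether they respond, you could do something like the following sketch (the host name depends on your `--base-url`):

```shell
# List all ingress host names created by the playground
kubectl get ingress -A
# Send a request with an explicit Host header, which works even without DNS resolution
curl -H 'Host: argocd.localhost' http://localhost:80
```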
`--argocd` - deploy Argo CD GitOps operator

⚠️ Note that switching between operators is not supported.
That is, expect errors (for example with cluster-resources) if you apply the playground once with Argo CD and the next time without it. We recommend resetting the cluster with `init-cluster.sh` beforehand.
See our Quickstart Guide on how to set up the instance.
Then set the following parameters.
```shell
# Note:
# * In this case --password only sets the Argo CD admin password (Jenkins and
#   SCMM are external)
# * Insecure is needed, because the local instance will not have a valid cert
--jenkins-url=https://192.168.56.2/jenkins \
--scmm-url=https://192.168.56.2/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--insecure
```
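Putting this together with the generic `docker run` invocation from above, a complete call could look like this sketch (IP, kubeconfig path and passwords are placeholders):

```shell
docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd \
  --jenkins-url=https://192.168.56.2/jenkins \
  --jenkins-username=admin --jenkins-password=yourpassword \
  --scmm-url=https://192.168.56.2/scm \
  --scmm-username=admin --scmm-password=yourpassword \
  --password=yourpassword --insecure
```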
Using Google Container Registry (GCR) fits well with our cluster creation example via Terraform on Google Kubernetes Engine (GKE), see our docs.
Note that you can get a free CES demo instance set up with a Kubernetes Cluster as GitOps Playground here.
```shell
# Note: In this case --password only sets the Argo CD admin password (Jenkins
# and SCMM are external)
--jenkins-url=https://your-ecosystem.cloudogu.net/jenkins \
--scmm-url=https://your-ecosystem.cloudogu.net/scm \
--jenkins-username=admin \
--jenkins-password=yourpassword \
--scmm-username=admin \
--scmm-password=yourpassword \
--password=yourpassword \
--registry-url=eu.gcr.io \
--registry-path=yourproject \
--registry-username=_json_key \
--registry-password="$( cat account.json | sed 's/"/\\"/g' )"
```
Images used by the gitops-build-lib are set in the `gitopsConfig` in each `Jenkinsfile` of an application like that:

```groovy
def gitopsConfig = [
    ...
    buildImages: [
        helm       : 'ghcr.io/cloudogu/helm:3.10.3-1',
        kubectl    : 'bitnami/kubectl:1.29',
        kubeval    : 'ghcr.io/cloudogu/helm:3.10.3-1',
        helmKubeval: 'ghcr.io/cloudogu/helm:3.10.3-1',
        yamllint   : 'cytopia/yamllint:1.25-0.7'
    ],
    ...
]
```
To override each image in all the applications you can use the following parameters:

- `--kubectl-image someRegistry/someImage:1.0.0`
- `--helm-image someRegistry/someImage:1.0.0`
- `--kubeval-image someRegistry/someImage:1.0.0`
- `--helmkubeval-image someRegistry/someImage:1.0.0`
- `--yamllint-image someRegistry/someImage:1.0.0`
Images used by various tools and exercises can be configured using the following parameters:

- `--grafana-image someRegistry/someImage:1.0.0`
- `--external-secrets-image someRegistry/someImage:1.0.0`
- `--external-secrets-certcontroller-image someRegistry/someImage:1.0.0`
- `--external-secrets-webhook-image someRegistry/someImage:1.0.0`
- `--vault-image someRegistry/someImage:1.0.0`
- `--nginx-image someRegistry/someImage:1.0.0`

Note that specifying a tag is mandatory.
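For example, when mirroring images to a private registry, the overrides might look like this sketch (registry host and tags are placeholders):

```shell
--kubectl-image=registry.example.com/bitnami/kubectl:1.29 \
--helm-image=registry.example.com/cloudogu/helm:3.10.3-1 \
--grafana-image=registry.example.com/grafana/grafana:9.5.2 \
--vault-image=registry.example.com/hashicorp/vault:1.14.0
```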
If you are using a remote cluster, you can set the `--argocd-url` parameter so that Argo CD notification messages contain a link to the corresponding application.
You can specify email addresses for notifications (note that by default, MailHog will not actually send emails):

- `--argocd-email-from`: sender e-mail address (default: `[email protected]`)
- `--argocd-email-to-admin`: alerts sent to the admin (default: `[email protected]`)
- `--argocd-email-to-user`: alerts sent to the user (default: `[email protected]`)
Set the parameter `--monitoring` to enable deployment of monitoring and alerting tools like Prometheus, Grafana and MailHog.
See Monitoring tools for details.
You can specify email addresses for notifications (note that by default, MailHog will not actually send emails):

- `--grafana-email-from`: sender e-mail address (default: `[email protected]`)
- `--grafana-email-to`: recipient e-mail address (default: `[email protected]`)
The gitops-playground uses MailHog to showcase notifications.
Alternatively, you can configure an external mail server.
Note that you can't use both at the same time.

- If you set either the `--mailhog` or the `--mail` parameter, MailHog will be installed.
- If you set `--smtp-*` parameters, an external mail server will be used and MailHog will not be deployed.
Set the parameter `--mailhog` to enable MailHog.
This will deploy MailHog and configure Argo CD and Grafana to send mails to MailHog.
Sender and recipient email addresses can be set via parameters in some applications, e.g. `--grafana-email-from` or `--argocd-email-to-user`.

Parameters:

- `--mailhog`: activate MailHog as internal mail server
- `--mailhog-url`: specify the domain name (ingress) under which MailHog will be served

If you want to use an external mail server, you can set it with these parameters:
- `--smtp-address`: external mail server SMTP address or IP
- `--smtp-port`: external mail server SMTP port
- `--smtp-user`: external mail server login username
- `--smtp-password`: external mail server login password; make sure to put your password in single quotes

This will configure Argo CD and Grafana to send mails using your external mail server.
In addition, you should set matching sender and recipient email addresses, e.g. `--grafana-email-from` or `--argocd-email-to-user`.
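For illustration, a call that uses an external mail server might combine the parameters like this sketch (host, credentials and addresses are placeholders):

```shell
docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd --monitoring \
  --smtp-address=smtp.example.com \
  --smtp-port=587 \
  --smtp-user=mailuser \
  --smtp-password='myPassword!' \
  --grafana-email-from=alerts@example.com \
  --grafana-email-to=team@example.com \
  --argocd-email-to-user=team@example.com
```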
Set the parameter `--vault=[dev|prod]` to enable deployment of the secrets management tools Hashicorp Vault and External Secrets Operator.
See Secrets management tools for details.
For k3d, you can just run `k3d cluster delete gitops-playground`. This will delete the whole cluster.
If you want to delete k3d itself, use `rm .local/bin/k3d`.
To remove the playground without deleting the cluster, use the option `--destroy`.
You need to pass the same parameters as when deploying the playground, to ensure that the destroy script can authenticate with all tools.
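For example, if you applied the playground locally with `--yes --argocd`, the corresponding destroy call would look like this sketch (parameters must match your installation):

```shell
docker run --rm -t -u $(id -u) \
  -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
  --net=host \
  ghcr.io/cloudogu/gitops-playground --yes --argocd --destroy
```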
Note that this option has limitations: it does not remove CRDs, namespaces, the locally deployed SCM-Manager, Jenkins and registry, nor the plugins for SCM-Manager and Jenkins.

On Windows and Mac, Docker does not support the `host` network, so it's easiest to access the applications via ingress controller and ingresses. `--base-url=http://localhost --ingress-nginx` should work on both Windows and Mac. If your browser fails to resolve subdomains such as `jenkins.localhost`, you could try using `--base-url=http://local.gd` or similar, as described in local ingresses.

On macOS and when using the Windows Subsystem for Linux (WSL) on Windows, you can just run our TL;DR command after installing Docker.
For Windows, we recommend using Windows Subsystem for Linux version 2 (WSL2) with a native installation of Docker Engine, because it's easier to set up and less prone to errors.
For macOS, please increase the Memory limit in Docker Desktop (for your DockerVM) to be > 10 GB. Recommendation: 16GB.
```shell
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh) --bind-ingress-port=80 \
  && docker run --rm -t --pull=always -u $(id -u) \
    -v ~/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config \
    --net=host \
    ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost
# If you want to try all features, you might want to add these params: --mail --monitoring --vault=dev
```
When you encounter errors with port 80, you might want to use e.g. `--bind-ingress-port=8080` and `--base-url=http://localhost:8080` instead.

Too little memory can lead to failures, e.g. Jenkins builds hanging with errors like these:

```
$ docker run -t -d -u 0:133 -v ... -e ******** bitnami/kubectl:1.25.4 cat
docker top e69b92070acf3c1d242f4341eb1fa225cc40b98733b0335f7237a01b4425aff3 -eo pid,comm
process apparently never started in /tmp/gitops-playground-jenkins-agent/workspace/xample-apps_petclinic-plain_main/.configRepoTempDir@tmp/durable-7f109066
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
Cannot contact default-1bg7f: java.nio.file.NoSuchFileException: /tmp/gitops-playground-jenkins-agent/workspace/xample-apps_petclinic-plain_main/.configRepoTempDir@tmp/durable-7f109066/output.txt
```

as well as `CrashLoopBackOff`s of running pods due to liveness probe timeouts.

Here is how you can start the playground from a Windows-native PowerShell console:
- Install k3d in the version matching `K3D_VERSION` in init-cluster.sh, e.g. using `winget`: `winget install k3d --version x.y.z`
- Look up `K3S_VERSION` in init-cluster.sh for `$image`, then execute:
, then execute$ingress_port = "80"
$registry_port = "30000"
$image = "rancher/k3s:v1.25.5-k3s2"
# Note that ou can query the image version used by playground like so:
# (Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/init-cluster.sh').Content -split "`r?`n" | Select-String -Pattern 'K8S_VERSION=|K3S_VERSION='
k3d cluster create gitops-playground `
--k3s-arg=--kube-apiserver-arg=service-node-port-range=8010-65535@server:0 `
-p ${ingress_port}:80@server:0:direct `
-v /var/run/docker.sock:/var/run/docker.sock@server:0 `
--image=${image} `
-p ${registry_port}:30000@server:0:direct
# Write $HOME/.config/k3d/kubeconfig-gitops-playground.yaml
k3d kubeconfig write gitops-playground
Notes:

- Optionally, add `-v gitops-playground-build-cache:/tmp@server:0` to persist the cache of the Jenkins agent between restarts of k3d containers.
- If you pick a `$registry_port` other than `30000`, append `--internal-registry-port=$registry_port` to the command below.

```powershell
docker run --rm -t --pull=always `
  -v $HOME/.config/k3d/kubeconfig-gitops-playground.yaml:/home/.kube/config `
  --net=host `
  ghcr.io/cloudogu/gitops-playground --yes --argocd --ingress-nginx --base-url=http://localhost:$ingress_port # more params go here
```
As described above, the GitOps playground comes with a number of applications. Some of them can be accessed via web.
The URLs of the applications depend on the environment the playground is deployed to. The following lists all applications and how to find out their respective URLs for a GitOps playground being deployed to local or remote cluster.
For remote clusters you need the external IP, no need to specify the port (everything running on port 80). Basically, you can get the IP address as follows:
kubectl -n "${namespace}" get svc "${serviceName}" \
--template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}"
There is also a convenience script `scripts/get-remote-url`. The script waits if the external IP is not present yet.
You could use this conveniently like so:
```shell
bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
  jenkins default
```
You can open the application in the browser right away, like so for example:
```shell
xdg-open $(bash <(curl -s \
  https://raw.githubusercontent.com/cloudogu/gitops-playground/main/scripts/get-remote-url) \
  jenkins default)
```
If deployed within the cluster, all applications can be accessed with the credentials `admin/admin`.
Note that you can change the password with the `--password` argument (and you should, for a remote cluster!).
There is also a `--username` parameter, which is ignored for Argo CD. That is, for now, Argo CD's username is always `admin`.
Argo CD's web UI is available at

- `scripts/get-remote-url argocd-server argocd` (remote k8s)
- `--argocd-url` to specify the domain name
to specify domain nameArgo CD is installed in a production-ready way, that allows for operating Argo CD with Argo CD, using GitOps and providing a repo per team pattern.
When installing the GitOps playground, the following steps are performed to bootstrap Argo CD:

- The following repos are created:
  - `argocd` (management and config of Argo CD itself),
  - `example-apps` (example for a developer/application team's GitOps repo) and
  - `cluster-resources` (example for a cluster admin or infra/platform team's repo; see below for details).
- An `AppProject` called `argocd` and an `Application` called `bootstrap` (sketched below) are applied to the cluster. These are also contained within the `argocd` repository.
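For illustration, such a bootstrap `Application` could roughly take the following shape (a sketch using the Argo CD `Application` API; the repo URL is a placeholder and the playground's actual manifest may differ):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: argocd
  source:
    repoURL: https://scm.example.com/repo/argocd/argocd  # placeholder for the argocd repo
    targetRevision: main
    path: applications   # the folder managed by bootstrap, which contains bootstrap itself
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated: {}
```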
From there, everything is managed via GitOps. This diagram shows how it works:
- The `bootstrap` application manages the folder `applications`, which also contains `bootstrap` itself. Through this, changes to the `bootstrap` application can be done via GitOps. The `bootstrap` application also deploys other apps (App Of Apps pattern).
- The `argocd` application manages the folder `argocd`, which contains Argo CD's resources as an umbrella helm chart. The umbrella chart allows for maintaining the actual values in `values.yaml` and deploying additional resources (such as secrets and ingresses) via the `templates` folder. The actual Argo CD chart is declared in the `Chart.yaml`. The `Chart.yaml` contains the Argo CD helm chart as `dependency`. It points to a deterministic version of the chart (pinned via `Chart.lock`) that is pulled from the chart repository on the internet.
- The `projects` application manages the `projects` folder, which contains the following `AppProjects`:
  - the `argocd` project, used for bootstrapping,
  - the `default` project (which is restricted to eliminate threats to security),
  - `cluster-resources` (for platform admins, who need more access to the cluster) and
  - `example-apps` (for developers, who need less access to the cluster).
- The `cluster-resources` application points to the `cluster-resources` git repository (`argocd` folder), which has the typical folder structure of a GitOps repository (explained in the next step). This way, the platform admins use GitOps in the same way as their "customers" (the developers) and can provide better support.
- The `example-apps` application points to the `example-apps` git repository (`argocd` folder again). Like the `cluster-resources`, it also has the typical folder structure of a GitOps repository (see the sketch after this list):
  - `apps` - contains the kubernetes resources of all applications (the actual YAML),
  - `argocd` - contains Argo CD `Application`s that point to subfolders of `apps` (App Of Apps pattern, again),
  - `misc` - contains kubernetes resources that do not belong to specific applications (namespaces, RBAC, resources used by multiple apps, etc.).
- The `misc` application points to the `misc` folder.
- The `my-app-staging` application points to the `apps/my-app/staging` folder within the same repo. This provides a folder structure for release promotion. The `my-app-*` applications implement the Environment per App pattern, which allows each application to have its own environments, e.g. production and staging or none at all. Note that the actual YAML here could either be pushed manually or using the CI server. The applications contain examples that push config changes from the app repo to the GitOps repo using the CI server. This implementation mixes the Repo per Team and Repo per App patterns.
- The same applies to the `my-app-production` application, which points to the `apps/my-app/production` folder within the same repo.
- You might want to protect the `production` folders from manual access, if supported by the SCM of your choice.
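Put together, the folder structure of such a GitOps repository roughly looks like this (a sketch assembled from the description above):

```
argocd/             # Argo CD Applications pointing to the folders below (App Of Apps)
apps/
  my-app/
    staging/        # pointed to by the my-app-staging Application
    production/     # pointed to by the my-app-production Application
misc/               # namespaces, RBAC, resources used by multiple apps, etc.
```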
Alternatives to this folder structure:

- several `Application`s could be defined in one YAML file (like `petclinic-plain.yaml`),
- applications could be deployed to multiple namespaces (`application.namespaces` setting),
- applications could be generated from the folder structure via an `ApplicationSet`, using the git generator for directories (not used in GitOps playground, yet).

To keep things simpler, the GitOps playground only uses one kubernetes cluster, effectively implementing the Standalone pattern. However, the repo structure could also be used to serve multiple clusters, in a Hub and Spoke pattern: additional clusters could either be defined in the `values.yaml` or as secrets via the `templates` folder.
We're also working on an optional implementation of the namespaced pattern, using the Argo CD operator.
An advanced question: why does the GitOps playground not use the argocd-autopilot?
The short answer is: as of 2023-05, version 0.4.15, it looks far from ready for production.
Here is a diagram that shows what the repo structure created by autopilot looks like:
Here are some thoughts why we deem it not a good fit for production:
- The `kustomization.yaml` (3️ in the diagram) points to a `base` within the autopilot repo, which in turn points to the `stable` branch of the Argo CD repo.
- Why does the `autopilot-bootstrap` application (1️ in the diagram) not live within the GitOps repo, but only in the cluster?
- The `ApplicationSet` within the `AppProject`'s YAML pointing to a `config.json` (more difficult to write than YAML) is difficult to grasp (4️ and 6️ in the diagram).
- The `cluster-resources` `ApplicationSet` is a good approach to multi-cluster, but again requires writing JSON (4️ in the diagram).

The playground installs cluster-resources (like Prometheus, Grafana, Vault, External Secrets Operator, etc.) via the repo `argocd/cluster-resources`. See ADR for more details.

When installing without Argo CD, we fall back to installing these tools imperatively via helm, as a kind of neutral ground.
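Conceptually, this imperative fallback corresponds to plain helm installs like the following sketch (illustrative only; the chart versions and values used by the playground differ):

```shell
# Illustrative only: install one of the cluster-resources charts imperatively
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm upgrade --install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```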
Jenkins is available at

- `scripts/get-remote-url jenkins default` (remote k8s)

You can enable browser notifications about build results via a button in the lower right corner of the Jenkins web UI.
Note that this only works when using `localhost` or `https://`.
You can set an external Jenkins server via the following parameters when applying the playground (see Parameters for examples): `--jenkins-url`, `--jenkins-username`, `--jenkins-password`.
Note that the example applications' pipelines will only run on a Jenkins that uses agents that provide a docker host. That is, Jenkins must be able to run e.g. `docker ps` successfully on the agent.
The user has to have the following privileges:
SCM-Manager is available at

- `scripts/get-remote-url scmm-scm-manager default` (remote k8s)

You can set an external SCM-Manager via the following parameters when applying the playground (see Parameters for examples): `--scmm-url`, `--scmm-username`, `--scmm-password`.
The user on the SCM has to have privileges to:
Set the parameter `--monitoring` so that the `kube-prometheus-stack` is deployed via its helm chart, including Argo CD dashboards.
This leads to the following tools being exposed:
- MailHog: `scripts/get-remote-url mailhog monitoring` (remote k8s); `--mailhog-url` to specify the domain name
- Grafana: `scripts/get-remote-url kube-prometheus-stack-grafana monitoring` (remote k8s); `--grafana-url` to specify the domain name

Grafana can be used to query and visualize metrics via Prometheus. Prometheus is not exposed by default.
In addition, argocd-notifications is set up. Applications deployed with Argo CD will then alert via email to MailHog when their sync status fails, for example.
Note that this only works with Argo CD so far.
Via the `vault` parameter, you can deploy Hashicorp Vault and the External Secrets Operator into your GitOps playground.
With this, the whole flow from a secret value in Vault to a kubernetes `Secret` via External Secrets Operator can be seen in action:
For this to work, the GitOps playground configures the whole chain in Kubernetes and Vault (when dev mode is used):

- in the namespaces `argocd-staging` and `argocd-production`:
  - a `SecretStore` and a `ServiceAccount` (used to authenticate with vault),
  - `ExternalSecrets`,
- in Vault: the corresponding `secrets`.
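To illustrate this chain, an `ExternalSecret` connecting the `vault-backend` `SecretStore` with a kubernetes `Secret` roughly takes this shape (a sketch based on the External Secrets Operator API; key names are placeholders and the playground's concrete manifests may differ):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: nginx-secret
  namespace: argocd-staging
spec:
  refreshInterval: 1m        # how often the operator polls Vault
  secretStoreRef:
    name: vault-backend      # the SecretStore that connects to Vault
    kind: SecretStore
  target:
    name: nginx-secret       # name of the kubernetes Secret to create
  data:
    - secretKey: password
      remoteRef:
        key: secret/staging/nginx-secret  # path of the secret in Vault
        property: password
```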
For testing you can set the parameter `--vault=dev` to deploy vault in development mode. This will lead to

- preconfigured `SecretStore`s for the external secrets operator and
- an `admin/admin` account (can be overridden with `--username` and `--password`).

The secrets are then picked up by the `vault-backend` `SecretStore`s (which connect the External Secrets Operator with Vault) in the `argocd-staging` and `argocd-production` namespaces.
You can reach the Vault UI on

- `scripts/get-remote-url vault-ui secrets` (remote k8s)
- `--vault-url` to specify the domain name

To log into the Vault UI in dev mode, find the root token like so: `kubectl logs -n secrets vault-0 | grep 'Root Token'`
When using `vault=prod` you'll have to initialize Vault manually, but on the other hand it will persist changes.

If you want the example app to work, you'll have to manually set up authentication for the `vault` service accounts in the `argocd-production` and `argocd-staging` namespaces. See the `SecretStore`s and dev-post-start.sh for an example.

With vault in `dev` mode and Argo CD enabled, the example app `applications/nginx/argocd/helm-jenkins` will be deployed in a way that exposes the vault secrets `secret/<environment>/nginx-secret` via HTTP on the URL `http://<host>/secret`, for example `http://localhost:30024/secret`.
While exposing secrets on the web is a very bad practice, it's very good for demoing auto reload of a secret changed in vault.
To demo this, you could watch the secret endpoint while changing the secret in Vault:

```shell
while true; do echo -n "$(date '+%Y-%m-%d %H:%M:%S'): " ; \
  curl http://localhost:30024/secret/ ; echo; sleep 1; done
```
This usually takes between a couple of seconds and 1-2 minutes.
This time consists of the `ExternalSecret`'s `refreshInterval`, as well as the kubelet sync period (defaults to 1 minute).
The following video shows this demo in time-lapse:
The playground comes with example applications that allow for experimenting with different GitOps features.
All applications are deployed via separate application and GitOps repos: an app repo (e.g. `petclinic-plain`) and a GitOps repo (e.g. `argocd/example-app`).

The applications implement a simple staging mechanism (one exception is `argocd/nginx-helm-umbrella`).

Note that the GitOps-related logic is implemented in the gitops-build-lib for Jenkins. See the README there for more options.
Please note that it might take about a minute after the pull request has been accepted for the GitOps operator to start deploying. Alternatively, you can trigger the deployment via ArgoCD's UI or CLI.
Jenkinsfile for `plain` deployment

- Staging: `scripts/get-remote-url spring-petclinic-plain argocd-staging`; with `--petclinic-base-domain` set, use `staging.petclinic-plain.$base-domain`
- Production: `scripts/get-remote-url spring-petclinic-plain argocd-production`; with `--petclinic-base-domain` set, use `production.petclinic-plain.$base-domain`
Jenkinsfile for `helm` deployment

- Staging: `scripts/get-remote-url spring-petclinic-helm argocd-staging`; with `--petclinic-base-domain` set, use `staging.petclinic-helm.$base-domain`
- Production: `scripts/get-remote-url spring-petclinic-helm argocd-production`; with `--petclinic-base-domain` set, use `production.petclinic-helm.$base-domain`
- Staging: `scripts/get-remote-url nginx argocd-staging`; with `--nginx-base-domain` set, use `staging.nginx.$base-domain`
- Production: `scripts/get-remote-url nginx argocd-production`; with `--nginx-base-domain` set, use `production.nginx.$base-domain`
`nginx-helm-umbrella`

- Production: `scripts/get-remote-url nginx-helm-umbrella argocd-production`; with `--nginx-base-domain` set, use `production.nginx-helm-umbrella.$base-domain`