Microservices demo application on cloud-hosted Kubernetes cluster
The purpose of this repository is to provide a fully automated setup of a nice-looking (see screenshots) showcase / testbed for a cloud-native application (as precisely defined by Microsoft) on a cloud-hosted Kubernetes cluster (here GKE by Google Cloud), built around an interesting service mesh. Additionally, the setup installs tooling (much of it drawn from the CNCF Cloud Native Trail Map) to make the application and its service mesh observable and manageable.
This application, licensed under Apache terms (the same terms apply to all components used in this workflow, allowing free reuse), is the "Online Boutique" (formerly known as Hipster Shop; developed by a Google team but not an official Google product). It is composed of 10 polyglot microservices behind a nice-looking web frontend that calls them to serve client requests. A load generator, part of the package, generates traffic while the application is running to make the use of tools (Prometheus, OpenTelemetry, etc.) more attractive.
Another goal of this repository is to help people explore the cloud-native architecture: when you fork it, you rapidly get a working cluster with a somewhat "real-life" application and decent tooling to experiment with, without the long trial-and-error process of setting up the infrastructure from scratch. It makes it much faster to grasp the philosophy of the distributed architecture proposed by Kubernetes.
So, happy forking for your own use! (see Setup section for all technical details) And come back regularly or get notified by following this repository: we will add additional tools in subsequent updates.
We implement here a GitHub workflow (microservices-on-gke.yml plus the shell scripts in the sh directory; see our other repository for other workflows) which automatically deploys a fresh cluster on GKE and deploys the application on it whenever needed, via a single click. On our side, this same workflow is also started automatically on a recurring basis (at least weekly) via GitHub's cron facility (included in the workflow yaml) to make sure that the deployment remains fully operational as the underlying GKE infrastructure and the implemented components evolve. You can access the logs of previous runs in the Actions tab.
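Besides the launch button in the Actions tab, the workflow can also be triggered from the command line. A minimal sketch with the GitHub gh CLI, assuming it is installed and authenticated against your fork:

```shell
# Trigger the deployment workflow manually (same effect as the launch button)
gh workflow run microservices-on-gke.yml

# List recent runs of this workflow, then inspect the logs of a selected run
gh run list --workflow=microservices-on-gke.yml --limit 5
gh run view --log
```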
On successful completion of the workflow, the Online Boutique is accessible from anywhere on the Internet at the public IP address (dynamically allocated and published by GKE) displayed in the final lines of the workflow execution step "Deploy application on GKE". This is the IP address of the K8s service 'frontend-external' defined by the deployment. Hence, you can also retrieve it at any time via 'kubectl get service frontend-external', provided that you went through the proper setup described below.
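For instance, the public IP can be captured in a variable with a jsonpath query (a sketch assuming the default namespace and that GKE has finished provisioning the LoadBalancer):

```shell
# Extract the external IP of the frontend-external LoadBalancer service
IP=$(kubectl get service frontend-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Online Boutique is reachable at http://${IP}/"
```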
To check the activity of the load generator, you can run 'kubectl logs -l app=loadgenerator -c main' at any time. You should get something like the following, describing how many requests have already been triggered:
kubectl logs -l app=loadgenerator -c main
GET /product/66VCHSJNUP 600 0(0.00%) 77 34 1048 | 41 0.10 0.00
GET /product/6E92ZMYYFZ 563 0(0.00%) 77 34 1763 | 41 0.00 0.00
GET /product/9SIQT8TOJO 593 0(0.00%) 73 34 1013 | 41 0.30 0.00
GET /product/L9ECAV7KIM 631 0(0.00%) 82 34 1349 | 42 0.20 0.00
GET /product/LS4PSXUNUM 608 0(0.00%) 83 34 896 | 42 0.20 0.00
GET /product/OLJCESPC7Z 623 0(0.00%) 69 34 1079 | 41 0.10 0.00
POST /setCurrency 808 0(0.00%) 82 44 1089 | 51 0.20 0.00
--------------------------------------------------------------------------------------------------------------------------------------------
Aggregated 9517 0(0.00%) 1.80 0.00
If you want to easily inject more traffic, you can additionally use the hey or fortio utilities, as we did in our Knative project: see the corresponding workflow script.
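As an illustration, hey and fortio could be pointed at the public frontend address like this (a sketch assuming the variable IP holds the frontend-external IP; adjust concurrency, rates and durations to your needs):

```shell
# 10 concurrent workers for 60 seconds with hey
hey -z 60s -c 10 "http://${IP}/"

# 50 queries per second for 60 seconds with fortio
fortio load -qps 50 -t 60s "http://${IP}/"
```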
You first have to complete the requirements of the Setup section before trying to access the dashboards.
To keep things simple, we access all tools and dashboards via the proxy functions available in Kubernetes: either directly via 'kubectl proxy' or indirectly via 'istioctl dashboard xxx'. Only limited additional definitions are then required: this is just fine for a demo and initial tests. Of course, the laptop running the proxies must be authenticated to gcloud via the SDK, with credentials granting cluster administration rights.
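A minimal sketch of both access modes (the available dashboard names depend on the Istio add-ons actually installed in the cluster):

```shell
# Generic access: proxies the Kubernetes API (and service proxy URLs)
# on localhost:8001
kubectl proxy &

# Tool-specific dashboards opened by istioctl (one terminal per dashboard)
istioctl dashboard kiali
istioctl dashboard grafana
istioctl dashboard prometheus
istioctl dashboard jaeger
```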
Available dashboards:
(click on the pictures to enlarge them; also use the hyperlinks provided with each dashboard description to get a good overview of each tool's features from its official documentation)
The workflow has the following steps:
The application can now be accessed as described above.
Application service mesh
This demo application contains an interesting service mesh to give some substance to demos and tests: its schema is given above. This mesh is thoroughly exercised by a traffic generator, also part of the demo package, which produces constant, substantial traffic to make the implementation of monitoring tools worthwhile.
Interesting points of Online Boutique:
Multi-language: the microservices corresponding to the various application features were written on purpose by the authors in numerous languages (Go, NodeJS, Java, C#, Python) to demonstrate a key strength of container-based applications: many microservices collaborate in a "polyglot" environment where each team (or individual) can program in its language of choice, while ad hoc frameworks for each language ensure that all Kubernetes standards (probes, etc.) and architecture conventions are respected with minimal effort, yielding a coherent and compliant global system manageable by the standard palette of Kubernetes tools. This polyglot aspect is reinforced by the mixed use of HTTP and gRPC, which are both understood by the monitoring tools.
Service Mesh: the application graph shows the relationships between the various services and the frontend. Indeed, the application is made of 13 pods. This high level of granularity is the accepted Kubernetes pattern for application architecture: it brings numerous advantages like continuous delivery, exhaustive unit testing, higher resilience, optimal scalability, etc. But it also requires a thorough set of tools to maximize the observability of the system. If "divide and conquer" is the motto of cloud-native architectures, the motto of their operations is probably "observe to sustain": when working with a Kubernetes application, one very quickly feels the need for (very) solid tools automatically monitoring the myriad of objects (services, pods, ingresses, volumes, etc.) composing the system.
GCP Tooling: the application is instrumented for Stackdriver (profiling, logging, debugging). So, the source code of this application provides the right guidance on how to code in order to leverage the tools directly available from the GCP service portfolio.
To start with, you need a Google Cloud account with a project in which the GKE APIs have been enabled. Obtain the id of your project from the GCP dashboard. Additionally, you need to create a service account in this project and give it the proper GKE credentials: the rights to create, administer and delete a cluster. Save its private key in json format.
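The service-account preparation can be sketched as follows (the project id and service-account name are placeholders; roles/container.admin is one way to grant the cluster rights mentioned above, and you may prefer finer-grained roles):

```shell
PROJECT_ID=my-gcp-project     # your own project id from the GCP dashboard
SA_NAME=boutique-deployer     # hypothetical service-account name

gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT_ID}"

# Allow the service account to create, administer and delete clusters
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/container.admin"

# Save the private key in json format (used as a GitHub Secret below)
gcloud iam service-accounts keys create key.json \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```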
Then, fork our repository and define the required Github Secrets in your forked repository:
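If you prefer the command line over the web UI, the secrets can also be set with the gh CLI. The secret names below are illustrative; use the exact names expected by the workflow yaml:

```shell
# Run inside your forked repository
gh secret set GKE_PROJECT --body "my-gcp-project"
gh secret set GKE_SA_KEY < key.json
```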
You can easily launch the workflow with the manual dispatch feature of GitHub, which appears as a launch button in the Actions tab of your project for the "Deploy Online Boutique" workflow. Similarly, you can stop it via the corresponding button of the "Terminate Online Boutique" workflow.
When the deployment workflow completes successfully, you should be able to access the Online Boutique from anywhere on the Internet at the public IP address displayed in the final lines of step "Deploy application on GKE" (or via 'kubectl get service frontend-external' as described above).
To get access to the cluster via kubectl and to the dashboards via istioctl, you need to install the gcloud SDK on your machine and connect to GCP with your userid (which must have at least the same credentials as the service account above). Then, run 'gcloud container clusters get-credentials <CLUSTER-NAME> --zone <GCP-ZONE> --project=<PROJECT-ID>' with your own values. It will prepare and install on your machine the proper config and credentials files (usually located in ~/.kube) giving you access to your cluster via kubectl and istioctl.
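The whole sequence, with a quick sanity check at the end (cluster name, zone and project id are placeholder values to replace with your own):

```shell
gcloud auth login                  # authenticate with your own userid
gcloud container clusters get-credentials my-cluster \
  --zone europe-west1-b --project my-gcp-project

# kubectl now uses the config written to ~/.kube
kubectl get nodes
kubectl get service frontend-external
```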
Finally, install kubectl and istioctl if they are not present on your laptop yet.