This code demonstrates the deployment of Game On!, a microservices-based application, onto a Kubernetes cluster. Game On! is a throwback text-based adventure built to help you explore microservice architectures and related concepts.
Read this in other languages: 한국어, 中文.
The Game On! deployment has two sets of microservices: core and platform. The core microservices are written in Java and coexist with other polyglot microservices. In addition, the platform services provide service discovery, registration, and routing for the different microservices. All of them run in Docker containers managed by the Kubernetes cluster.
There are five core Java microservices, which use JAX-RS, CDI, and other parts of the MicroProfile spec. In addition, the Proxy and WebApp services complete the core set.
To deploy the game locally, follow the docker-compose instructions in the GameOn repository here.
To follow the steps here, create a Kubernetes cluster with Minikube for local testing, with IBM Cloud Private, or with the IBM Bluemix Container Service to deploy in the cloud. The code here is regularly tested with Travis against a Kubernetes cluster from the Bluemix Container Service.
Change these values in the gameon-configmap.yaml file. Replace PLACEHOLDER_IP with the public IP of your cluster, e.g. 192.168.99.100. For the Bluemix Container Service, you can get the IP from bx cs workers <your-cluster-name>.
For minikube, you can get the IP using
minikube ip
FRONT_END_PLAYER_URL: https://PLACEHOLDER_IP:30443/players/v1/accounts
FRONT_END_SUCCESS_CALLBACK: https://PLACEHOLDER_IP:30443/#/login/callback
FRONT_END_FAIL_CALLBACK: https://PLACEHOLDER_IP:30443/#/game
FRONT_END_AUTH_URL: https://PLACEHOLDER_IP:30443/auth
...
PROXY_DOCKER_HOST: 'PLACEHOLDER_IP'
An easy way to change these values is:
sed -i s#PLACEHOLDER_IP#<Public-IP-of-your-cluster>#g gameon-configmap.yaml
or, on macOS (BSD sed):
sed -i '' s#PLACEHOLDER_IP#<Public-IP-of-your-cluster>#g gameon-configmap.yaml
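The substitution can be sketched end to end as below. To keep the sketch self-contained it runs against a two-line sample file and an example IP; in practice, point sed at the real gameon-configmap.yaml and use the IP from minikube ip or bx cs workers.

```shell
# Example IP; replace with your cluster's public IP (minikube ip / bx cs workers).
IP="192.168.99.100"

# Two-line sample standing in for the real gameon-configmap.yaml.
cat > /tmp/gameon-configmap-sample.yaml <<'EOF'
FRONT_END_AUTH_URL: https://PLACEHOLDER_IP:30443/auth
PROXY_DOCKER_HOST: 'PLACEHOLDER_IP'
EOF

# -i.bak works on both GNU and BSD sed and keeps a backup copy.
sed -i.bak "s#PLACEHOLDER_IP#${IP}#g" /tmp/gameon-configmap-sample.yaml

# Confirm no placeholders remain.
grep PLACEHOLDER_IP /tmp/gameon-configmap-sample.yaml || echo "all placeholders replaced"
```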
Then, apply the config map on your cluster:
$ kubectl create -f gameon-configmap.yaml
configmap "gameon-env" created
You will need to create a volume for your cluster; you can use the provided yaml file. The required keystores will be stored in this volume, which is also used by the core services.
$ kubectl create -f local-volume.yaml
persistentvolume "local-volume-1" created
persistentvolumeclaim "keystore-claim" created
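Before moving on, it is worth confirming that the claim is bound to the volume. The here-doc below is sample `kubectl get pvc keystore-claim` output (the capacity and access-mode values are illustrative); in practice, run the real command and check the STATUS column.

```shell
# Sample output standing in for: kubectl get pvc keystore-claim
cat <<'EOF' > /tmp/pvc-sample.txt
NAME             STATUS    VOLUME           CAPACITY   ACCESS MODES   AGE
keystore-claim   Bound     local-volume-1   1Gi        RWO            1m
EOF

# Pull the STATUS column of the first data row; it should read "Bound".
awk 'NR==2 {print $2}' /tmp/pvc-sample.txt
```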
You can now create the required keystores using the setup.yaml file. This creates a Pod that generates the keystores.
$ kubectl create -f setup.yaml
You can find the Dockerfile and the keystore-generation script in the containers/setup/ folder. You can build your own image using the provided Dockerfile.
Once it has finished, the Pod will not run again. You can optionally delete it using kubectl delete pod setup.
If you want to confirm that the Pod has successfully imported the keystores, you can view the Pod's logs.
$ kubectl logs setup
Checking for keytool...
Checking for openssl...
Generating key stores using <Public-IP-of-your-cluster>:30443
Certificate stored in file <keystore/gameonca.crt>
Certificate was added to keystore
Certificate reply was installed in keystore
Certificate stored in file <keystore/app.pem>
MAC verified OK
Certificate was added to keystore
Entry for alias <*> successfully imported.
...
Entry for alias <**> successfully imported.
Import command completed: 104 entries successfully imported, 0 entries failed or cancelled
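A quick way to check the log programmatically is to extract the failure count from the "Import command completed" summary line. The variable below holds that sample line; in practice, pipe `kubectl logs setup` through the same sed expression.

```shell
# Sample summary line standing in for the last line of: kubectl logs setup
LOG='Import command completed: 104 entries successfully imported, 0 entries failed or cancelled'

# Extract the number of failed entries; anything other than 0 needs investigation.
failed=$(printf '%s\n' "$LOG" | sed -n 's/.*, \([0-9]*\) entries failed.*/\1/p')
echo "failed imports: ${failed}"
```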
You can now create the Platform services and deployments of the app.
$ kubectl create -f platform
or, alternatively:
$ kubectl create -f platform/controller.yaml
$ kubectl create -f platform/<file-name>.yaml
...
$ kubectl create -f platform/registry.yaml
To check if the control plane (controller and registry) is up:
$ curl -sw "%{http_code}" "<Public IP of your cluster>:31200/health" -o /dev/null
$ curl -sw "%{http_code}" "<Public IP of your cluster>:31300/uptime" -o /dev/null
If both of them output 200, you can proceed to the next step.
Note: It can take around 1-2 minutes for the Pods to set up completely.
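Because the Pods need a minute or two, the two curl checks above can be wrapped in a small polling helper. This is a sketch, not part of the repository: the function name, retry count, and sleep interval are arbitrary choices, and the commented-out invocations use the same placeholder host and ports as the checks above.

```shell
# Poll a URL until it returns HTTP 200, or give up after N attempts.
# Usage: wait_for_200 <url> [max-attempts]
wait_for_200() {
  url="$1"; tries="${2:-30}"
  while [ "$tries" -gt 0 ]; do
    # -w '%{http_code}' makes curl print only the status code.
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || echo 000)
    [ "$code" = "200" ] && echo "UP: $url" && return 0
    tries=$((tries - 1)); sleep 5
  done
  echo "gave up waiting for $url"; return 1
}

# wait_for_200 "http://<Public-IP-of-your-cluster>:31200/health"
# wait_for_200 "http://<Public-IP-of-your-cluster>:31300/uptime"
```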
Finally, you can create the Core services and deployments of the app. (If you want to have social logins, please follow the steps here before deploying the core services)
$ kubectl create -f core
or, alternatively:
$ kubectl create -f core/auth.yaml
$ kubectl create -f core/<file-name>.yaml
...
$ kubectl create -f core/webapp.yaml
To verify that the core services have finished setting up, check the logs of the proxy Pod. You can get its name using kubectl get pods.
kubectl logs proxy-***-**
You should look for the map, auth, mediator, player, and room servers, and confirm that they are UP.
[WARNING] 094/205214 (11) : Server room/room1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205445 (11) : Server auth/auth1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server map/map1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server mediator/mediator1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server player/player1 is UP, reason: Layer7 check passed ...
It can take around 5-10 minutes for these services to set up completely.
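A simple way to check for the five expected backends is to count the "is UP" lines in the proxy log. The here-doc below reuses the sample log lines above as stand-in data; in practice, pipe `kubectl logs proxy-***-**` into the same grep.

```shell
# Sample HAProxy log lines standing in for: kubectl logs proxy-***-**
cat <<'EOF' > /tmp/proxy-sample.log
[WARNING] 094/205214 (11) : Server room/room1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205445 (11) : Server auth/auth1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server map/map1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server mediator/mediator1 is UP, reason: Layer7 check passed ...
[WARNING] 094/205531 (11) : Server player/player1 is UP, reason: Layer7 check passed ...
EOF

# Count the servers reported UP; with room, auth, map, mediator and player, expect 5.
grep -c 'is UP' /tmp/proxy-sample.log
```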
Now that you have successfully deployed your own app on the Bluemix Kubernetes Container Service, you can access it via its IP address and assigned port:
https://169.xxx.xxx.xxx:30443/ Note that you must use https on port 30443.
/help - lists all available commands
/sos - go back to the first room
/exits - lists all available exits
/go <N,S,E,W> - go to the room in that direction
You may want to add social logins so you and your friends can explore the rooms together. To add social logins, you will need developer accounts on the social apps you want to use.
You will need to redeploy your core services with your own modified yaml files. The next step shows where to add your API keys.
You can register your application at this link: New OAuth Application. For the Homepage URL, use the IP address of your cluster with port 30443. For the Authorization callback URL, use the same IP address and port, pointing to the app's auth service. You can edit these in GitHub later if you create a new cluster. Now, take note of the app's Client ID and Client Secret; you will need to add them to the environment variables in the yaml files of your core services.
...
- name: GITHUB_APP_ID
value : '<yourGitHubClientId>'
- name: GITHUB_APP_SECRET
value : '<yourGitHubClientSecret>'
...
The application uses the key names GITHUB_APP_ID and GITHUB_APP_SECRET; they must match exactly in the yaml files.
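The same sed approach used for the config map works here too. This sketch runs against a four-line sample standing in for the relevant fragment of the core yaml file, and abc123 / shh-secret are fake illustrative credentials; substitute the real file and your real Client ID and Client Secret.

```shell
# Sample fragment standing in for the GitHub env vars in a core yaml file.
cat > /tmp/auth-sample.yaml <<'EOF'
        - name: GITHUB_APP_ID
          value : '<yourGitHubClientId>'
        - name: GITHUB_APP_SECRET
          value : '<yourGitHubClientSecret>'
EOF

# Swap in the credentials (fake values here); -i.bak keeps a backup copy.
sed -i.bak \
  -e "s#<yourGitHubClientId>#abc123#" \
  -e "s#<yourGitHubClientSecret>#shh-secret#" /tmp/auth-sample.yaml

grep 'value' /tmp/auth-sample.yaml
```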
You can register your application with your Twitter account at this link: Create new app.
For the Name field, choose any name for your app. For the Homepage URL, use the IP address of your cluster with port 30443. For the Authorization callback URL, use the same IP address and port, pointing to the app's auth service.
Go to the Keys and Access Tokens section of the Twitter application you just registered and take note of the app's Consumer Key and Consumer Secret; you will need to add them to the environment variables in the yaml files of your core services.
...
- name: TWITTER_CONSUMER_KEY
value : '<yourTwitterConsumerKey>'
- name: TWITTER_CONSUMER_SECRET
value : '<yourTwitterConsumerSecret>'
...
The application uses the key names TWITTER_CONSUMER_KEY and TWITTER_CONSUMER_SECRET; they must match exactly in the core yaml files.
You can build your own rooms by following this guide by the GameOn team. They have some sample rooms written in Java, Swift, Go, and more.
In this journey, you will deploy the sample room written in Java. You will deploy it in the same cluster as your GameOn App.
You can create these rooms by executing
$ kubectl create -f sample-room
To register the deployed rooms in the cluster, you will need to use the UI of your app.
Click on the Registered Rooms button at the top right.
Enter the necessary information for the room. (Leave the GitHub Repo and Health Endpoint fields blank.) Then click Register.
Note: In the samples, the Java Room uses port 9080, while the Swift room uses port 8080.
You now have successfully registered your room in your Map. You can go to it directly by typing /listmyrooms in the UI, then using the room's id with /teleport <id-of-the-room>. Explore the game.
You can learn more about the details of registering a room here.
You can build your own room by following GameOn's guide.
The app is served over https on port 30443. To view the logs of a service, use kubectl logs <pod-name-of-the-service>, or kubectl logs <pod-name-of-the-service> -f to follow the logs.
To clean up, first delete the persistent volume claim and the volume; this ensures the keystores stored on the volume are deleted:
kubectl delete pvc -l app=gameon
kubectl delete pv local-volume-1
Then delete the platform and core services and the remaining resources:
kubectl delete -f platform
kubectl delete -f core
kubectl delete svc,deploy,pvc -l app=gameon
kubectl delete pod setup
kubectl delete pv local-volume-1
kubectl delete -f gameon-configmap.yaml
This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.