Docker applied in development, DevOps, testing, product management, and more
Every industry is undergoing significant, fast-paced change driven by technologies that were once only in our imagination. Software and hardware democratization, open source, the internet as an information equalizer, crowd collaboration, MOOCs, IoT: the list is unending. As a technologist with deep respect for the effort engineers put in, I feel an obligation to contribute to that democratization.
Contrary to the belief of folks who live in fear that "machines will take over the world!", I believe that machines and automation will actually make the world better, more efficient, and more exciting to live in. Even if that were NOT the case, the solution would be not to fear, but to step up your game and get ahead of the machines (do you have another option?). Regardless, I feel we should embrace technology with optimism and view machines as complementing human skills.
Software delivered over the web is redefining business models that stood for years. Airbnb, Uber, and cloud technology have shown us how software can be world-changing. Companies across all industries now rely on software as a competitive edge, so they either have to hire more engineers to design and develop software, or hand that edge to a vendor and be at their mercy. The former seems to be the trend.
Developing software with high quality, security, and reliability is not as easy as people in non-software functions assume. Software engineering is NOT the same as "IT support": software engineers do not necessarily fix your computer!
Software development involves quite a few trade-off decisions, and modern software is composed of millions of components talking to each other. In other words, there are dependency chains of communication, and software engineers build layer upon layer until the result becomes software, either available over the internet or installable on one's own machine.
Docker helps minimize the complexity of managing those dependency chains across layers.
Each link in the table below is a video that demonstrates the value add. If a video demonstrates value in what you do, step into the corresponding folder for granular details.
It is highly recommended to complete the fundamental-concepts videos before moving further.
Fundamental Concepts |
---|
Install Docker - on a *nix server (28:21) |
Install Docker ToolBox - Development environment for docker-engine, docker-compose, hypervisors etc. (5:25) |
Install Docker natively on mac - As a developer, use docker during development (3:24) |
Docker Machine - VirtualBox - docker-machine on the VirtualBox hypervisor (9:02) |
Docker Machine - AWS - docker-machine on Amazon EC2 (Private) (14:59) |
Docker Machine - Azure - docker-machine on Azure (Private) (24:58) |
Docker images - Docker for creating images (13:51) |
Docker Containers - Instantiate docker images as containers (31:59) |
Docker Containers - More commands - A deeper dive into containers (14:06) |
Docker build - Understanding docker build (37:33) |
Best Practices |
---|
Docker Build Labels - Build labels, microbadger etc. (13:53) |
Docker bench security for docker hosts - Security best practices for deploying containers in production (5:47) |
Docker Bash completion tips - Use Homebrew bash completion instead of --help to save time |
Some of the labels below apply at build time, and some at deploy time (i.e. they are a function of the environment), for example:
"release" : "stable", "release" : "canary"
"environment" : "dev", "environment" : "qa", "environment" : "production"
"tier" : "frontend", "tier" : "backend", "tier" : "cache"
"partition" : "customerA", "partition" : "customerB"
"track" : "daily", "track" : "weekly"
Labels like these make it possible to do things such as:
$ kubectl get pods -l environment=production,tier=frontend
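The same selector idea works on the Docker side. A minimal sketch (the image name `myapp` and both helper functions are hypothetical, not part of the course material): attach labels at run time, then filter with `docker ps`:

```shell
# Hypothetical helpers: label a container at run time, then select on labels.
run_labeled() {
  # "myapp" is a placeholder image name
  docker run -d --label environment=production --label tier=frontend myapp
}

# Analogous to the kubectl selector above: only production frontend containers.
list_frontend() {
  docker ps --filter label=environment=production --filter label=tier=frontend -q
}
```

Build-time labels work the same way via `docker build --label` or a `LABEL` instruction in a Dockerfile.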
This code helps developers, DevOps engineers, and testers (OK, I will be honest: test "automation" engineers) understand how Docker containers ease dependency nightmares in building and deploying application environments, configuration management, collaboration, and isolated, contained environments. Specifically:
docker-machine create --driver virtualbox default
# docker-machine create -d virtualbox --virtualbox-memory "6000" default
Running pre-create checks...
Creating machine...
(default) Creating VirtualBox VM...
(default) Creating SSH key...
(default) Starting VM...
Waiting for machine to be running, this may take a few minutes...
Machine is running, waiting for SSH to be available...
Detecting operating system of created instance...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect Docker to this machine, run: docker-machine env default
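At this point a typical next step (a sketch, assuming the machine is named `default` as above) is to point the local docker CLI at the new VM:

```shell
# Evaluate the env vars (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY, ...)
# that docker-machine prints, then verify the CLI can reach the daemon.
use_machine() {
  eval "$(docker-machine env default)"
  docker ps
}
```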
pradeep@seleniumframework>/bin/bash copy_certs_default.sh
ca.pem 100% 1042 1.0KB/s 00:00
cert.pem 100% 1082 1.1KB/s 00:00
key.pem 100% 1675 1.6KB/s 00:00
docker-compose up
- This will start Jenkins, Nexus, SonarQube and Selenium Grid (see the port mappings below); the videos in the table above demonstrate this.

Service | Link | Credentials |
---|---|---|
Jenkins | http://$(docker-machine ip default):18080/ | Initial set up required |
SonarQube | http://$(docker-machine ip default):19000/ | admin/admin |
Nexus | http://$(docker-machine ip default):18081/nexus | admin/admin123 |
Nexus 3 | http://$(docker-machine ip default):18082 | admin/admin123 |
Selenium Grid | http://$(docker-machine ip default):4444/grid/console | no login required |
Selenium Chrome node | VNC on $(docker-machine ip default):15900 | no login required |
Selenium Firefox node | VNC on $(docker-machine ip default):15901 | no login required |
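These services can take a little while to come up after `docker-compose up`. A small helper (my own sketch, not part of the course material) waits for a mapped port to accept connections before you open the URL; it uses bash's built-in /dev/tcp, so no extra tools are needed:

```shell
# Poll host:port once per second until it accepts a TCP connection,
# or give up after $3 attempts (default 60).
wait_for_port() {
  host=$1; port=$2; tries=${3:-60}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i+1))
    sleep 1
  done
  return 1
}

# Example: wait for Jenkins before opening its UI
# wait_for_port "$(docker-machine ip default)" 18080 && echo "Jenkins is up"
```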
docker-compose ps
should yield all the containers and their port mappings:

pradeep@seleniumframework>docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------------------------------------------------------
cicdctstack_jenkins_1 /bin/tini -- /usr/local/bi Up 0.0.0.0:50000->50000/tcp,
... 0.0.0.0:18080->8080/tcp
cicdctstack_nexus_1 /bin/sh -c ${JAVA_HOME}/bi Up 0.0.0.0:18081->8081/tcp
...
cicdctstack_nodech_1 /opt/bin/entry_point.sh Up 0.0.0.0:15900->5900/tcp
cicdctstack_nodeff_1 /opt/bin/entry_point.sh Up 0.0.0.0:15901->5900/tcp
cicdctstack_selhub_1 /opt/bin/entry_point.sh Up 0.0.0.0:14444->4444/tcp
cicdctstack_sonar_1 ./bin/run.sh Up 0.0.0.0:15432->5432/tcp,
0.0.0.0:19000->9000/tcp,
9092/tcp
cicdctstack_sonardb_1 /docker-entrypoint.sh Up 5432/tcp
postgres
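Once the containers check out, the same compose file drives the rest of the stack's lifecycle. A sketch (run from the directory holding docker-compose.yml; the wrapper names are my own):

```shell
# Thin wrappers over standard docker-compose lifecycle commands.
stack_stop()  { docker-compose stop; }   # stop containers, keep state
stack_start() { docker-compose start; }  # resume with state intact
stack_down()  { docker-compose down; }   # remove containers and network
```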
docker build -t "myrvm" -f rvm_test_image.dockerfile .
- This will build a Docker image myrvm:latest. It might take a while if your internet connection is slow, but remember that this creates an image, so we don't do this step often.

docker images
- Check that the myrvm image appears.

docker logs <jenkins_container_id>
Continue with the default plugin selection; plugin installation then progresses on its own. The rest of the explanation is inside the course videos.
Access Nexus 2 on port 18081 on the docker host. Alternatively, if you have the port mapping defined, you can access it at localhost:18081.
Access Nexus 3 on port 18082 on the docker host. We use Nexus 3 so that we can set up a Docker repository; Nexus 2 doesn't have that support.
cAdvisor from Google gives information in a web UI at host- and container-level granularity. It provides basic container and host monitoring visualization; Weave Scope, however, takes it to a new level.
docker-machine ssh default
sudo curl -L git.io/scope -o /usr/local/bin/scope
sudo chmod a+x /usr/local/bin/scope
scope launch
Access on http://$(docker-machine ip default):4040
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
OR docker images -q --filter "dangling=true" | xargs -l docker rmi
- Remove all images that are not tagged

docker rmi $(docker images | awk '$1 ~ /blah/ { print $3 }')
- Remove all images that have "blah" in their name (REPOSITORY column)

docker stop $(docker ps -aq)
- Stop all running containers

docker rm $(docker ps -aq)
- Remove all non-running containers

docker exec cicdctstack_jenkins_1 cat /var/jenkins_home/secrets/initialAdminPassword
- Retrieves the initial admin password required for Jenkins' initial configuration

docker ps -a -q --filter ancestor=<image-name>
- Return all containers that were spun off from <image-name>

docker rm $(docker stop $(docker ps -a -q --filter ancestor=<image-name> --format="{{.ID}}"))
OR docker ps --filter ancestor=blah -q | xargs -l docker rm
- Stop and remove all containers spun off from a matching image name

cleanup(){ docker rm -v $(docker ps --filter status=exited -q 2>/dev/null) 2>/dev/null; docker rmi $(docker images --filter dangling=true -q 2>/dev/null) 2>/dev/null; }
- Add this shell function to your ~/.bash_profile. That way you can clean up containers first and then remove images.

docker system prune
- Deletes ALL unused data (in order: stopped containers, volumes without containers, and images with no containers). Available in Docker 1.13 (PR 26108)
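The one-line cleanup() function above is easier to read split across lines. Same behaviour, with comments:

```shell
# Remove exited containers (with their anonymous volumes), then dangling
# images; errors such as "nothing to remove" are silenced.
cleanup() {
  docker rm -v $(docker ps --filter status=exited -q 2>/dev/null) 2>/dev/null
  docker rmi $(docker images --filter dangling=true -q 2>/dev/null) 2>/dev/null
  return 0
}
```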
brew install bash-completion
- Add the snippet that the installer asks you to add to ~/.bash_profile, then source it. This will greatly help auto-complete sub-commands of docker, docker-compose and docker-machine.

Instructor-led training classes are available on request. Please email