Docker is an exciting technology for developers designing their applications in a cloud-native way. One of the key characteristics of a cloud-native application is that it is containerized. Designing applications this way saves you from hearing complaints like these during development:

  • “The application is not working in my local machine!”
  • “I’m facing version conflicts.”
  • “Libraries are missing.”

Every time we onboard new developers to our team, we have to fix several issues before the applications build and run successfully on a new machine. This makes the onboarding period longer and forces another expert to step in while they are focused on their own deliverables.

Hence, I decided to containerize the applications I am working on. Throughout this tutorial, I will share the commands I used to make that happen.

What Is a Docker Image?

A Docker image is a snapshot of the current state of your application as it runs on your local machine. The Docker Engine uses the image to run the application exactly as it runs on your machine, without any further setup. The same image can be used to ship the application and run it in every deployment environment, which means you don’t need to worry about the environment or dependencies to keep the application running in production.

Create an Image

List all the build instructions in a text file and name it Dockerfile.
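As an example, a minimal Dockerfile for a Java service like userservice might look like the following sketch. The base image, JAR file name, and working directory are assumptions — adjust them to your application and build tool:

```dockerfile
# Minimal Java runtime base image (assumed; pick one matching your app)
FROM eclipse-temurin:17-jre

# Copy the application artifact into the image (JAR path is hypothetical)
COPY target/userservice.jar /usr/src/userservice/userservice.jar

WORKDIR /usr/src/userservice

# The application is assumed to listen on port 8080
EXPOSE 8080

CMD ["java", "-jar", "userservice.jar"]
```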

Navigate the terminal to the directory that contains the Dockerfile and build the image:

docker build -t userservice .

List the Created Images

docker images 

docker images -a Show all images (default hides intermediate images)

Delete Image

docker image rm userservice 

docker image rm $(docker images  -aq)  Delete all images by id 

What is a Docker Container? 

Docker containers are isolated Linux environments created by the Docker Engine, where each application runs in its own container alongside other containers.

Create a Container

 docker run --rm -it --name userservice --publish 8081:8080 userservice

  •  --rm  deletes the container when it shuts down.
  •  --name  gives the container a name.
  •  --publish  binds host port 8081 to container port 8080.

List the Created Containers

 docker ps 

 docker ps -a 

Inspect low-level information of the container: 

 docker inspect userservice 

View the Logs of a Container

 docker logs userservice 

 docker logs -f userservice 

(-f is to follow the log output)

Persist Application Logs Into the Host Directory

Create a directory where you want to save the application log files, then mount it into the container by adding the  -v  (volume) flag to the  docker run  command:

 docker run --rm -it --name userservice -v $HOME/docker/logs/userservice:/var/log --publish 8081:8080 userservice

Persistent Database

By default, Docker containers are transient, which means you will lose the data once a container is deleted. To keep the data for future use, you need to persist the database state in a host directory:

 docker run -d --name postgres --publish 5433:5432 -e POSTGRES_DB=local_userservice -e POSTGRES_USER=local_userservice -e POSTGRES_PASSWORD=userservice_pass --hostname postgres -v ${HOME}/docker/postgres-volume:/var/lib/postgresql/data mdillon/postgis

Separate Container Configuration Data

Pass configuration as environment variables:

docker run -d --name postgres --publish 5433:5432 -e POSTGRES_DB=local_userservice -e POSTGRES_USER=local_userservice -e POSTGRES_PASSWORD=userservice_pass --hostname postgres mdillon/postgis

Find the Log Files of the Docker Engine

(Figure: Docker log directory)

Run the container in interactive mode to see the console logs. The  -it  flags in the container run command run the container in interactive mode with a pseudo-TTY.

Attach the Container to A Network

View all available networks:

 docker network ls 

Let’s attach the running postgres container to the projects_default network.

 docker network connect projects_default postgres  

Inspect the projects_default network to see the connected containers.

 docker network inspect projects_default  

Hide Secrets from the Container

Don’t bake database passwords or AWS keys into the container. There are a couple of ways to keep secrets out of the container: environment variables, mounting a directory that holds a secrets file, or using a secrets service such as an AWS IAM role.

Mount the directory that holds the secrets file and make it available at the userservice/config directory inside the container:

 mkdir -p /Users/amjad.hossain/docker/userservice/config  

 docker run --rm -it --name userservice --mount type=bind,source=/Users/amjad.hossain/docker/userservice/config,target=/usr/src/userservice/config --publish 8081:8080 userservice

Open a Shell Inside Containers

 docker exec -it userservice /bin/sh  

 docker exec -it postgres bash  

Stop a Container

  docker stop postgres  

Delete a Container

If you ran the container with  --rm , it is removed automatically when you stop it.

Delete a stopped container:

 docker rm postgres 

Forcefully delete a running container:

 docker rm -f postgres 

Delete all containers by id:

 docker rm -f $(docker ps -aq)  

Point Application Container to Host Database

The user service is running inside a container, but it is configured to connect to a database running on the Docker host machine.

(Figure: Pointing to the host database)

(Figure: The database configuration of userservice)

The localhost in the datasource URL tells userservice to look for the database inside the container’s own network, where it does not exist. Let’s configure the datasource to connect to the Postgres instance running outside Docker. For Mac users, this is a simple change.
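On Docker Desktop for Mac, the special DNS name host.docker.internal resolves to the host machine from inside a container. Assuming userservice is a Spring-style application (the property names below are hypothetical — they depend on your framework), the datasource configuration could look like this, using the host port 5433 published earlier:

```properties
# application.properties (hypothetical Spring-style configuration)
# host.docker.internal resolves to the Docker host on Docker Desktop
spring.datasource.url=jdbc:postgresql://host.docker.internal:5433/local_userservice
spring.datasource.username=local_userservice
spring.datasource.password=userservice_pass
```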

Point Application Container to Container Database

(Figure: Pointing the application container to the container database)

You need to inspect the Postgres container to get its IP address and add it to the datasource URL:

 docker inspect postgres 

Note that the container IP can change each time the container is restarted.

Docker Compose to Create Multiple Containers Together

Define multiple containerized applications in a docker-compose.yml file to configure and run them together. In this case, you can use the postgres service name in the datasource URL to point the application to the container database.
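A minimal docker-compose.yml for this setup might look like the following sketch, built from the images, ports, environment variables, and volume path used in the commands above (the service names are assumptions):

```yaml
version: "3.8"
services:
  userservice:
    image: userservice
    ports:
      - "8081:8080"
    depends_on:
      - postgres
  postgres:
    image: mdillon/postgis
    ports:
      - "5433:5432"
    environment:
      POSTGRES_DB: local_userservice
      POSTGRES_USER: local_userservice
      POSTGRES_PASSWORD: userservice_pass
    volumes:
      - ${HOME}/docker/postgres-volume:/var/lib/postgresql/data
```

Because Compose places both services on a shared network, the application can reach the database at the service name, e.g. jdbc:postgresql://postgres:5432/local_userservice.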


Navigate the terminal to the directory containing the docker-compose.yml file and run the following command:

docker-compose up -d 

View the created containers:

 docker ps 

The important things to remember when containerizing applications are isolation, networking, security, and monitoring.
