
Docker In Production: Deployment, Advantages & Best Practices

Published
05th Sep, 2023
Read it in
14 Mins

Docker is a containerization platform that allows you to build, ship, and run distributed applications. Docker containers encapsulate all the files that make up an application, including code, the language runtime, and system libraries. Containers run natively on Linux and Windows systems on top of Docker Engine, which makes it easy to implement a single solution for managing your applications across multiple platforms.

Docker runs an application such as MySQL in a single container. A container is a lightweight, virtual-machine-like package containing the application files, all of its dependencies, and the OS-level libraries it needs. A web application will probably require several containers, e.g., the codebase (and language runtime), a database, a web server, etc.

A container is launched from an image. An image is, in essence, a container template that defines the OS, installation steps, settings, etc., via a Dockerfile configuration. Any number of containers can be started from the same image. Containers start in a clean (image) state and data is not stored permanently; you can mount Docker volumes or bind host folders to retain state between restarts.
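For example, a MySQL container can be given a named volume so that its data survives container restarts. This is only a sketch; the volume name, container name, and password below are placeholders:

docker volume create mysql-data
docker run -d --name db \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=example \
  mysql:8.0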

    Containers are isolated from the host and other containers. You can define a network and open TCP/IP ports to permit communication. Each container is started with a single Docker command. Docker Compose is a utility that can launch multiple containers in one step using a docker-compose.yml configuration file. Optionally, orchestration tools such as Docker Swarm and Kubernetes can be used for container management and replication on production systems.
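As a rough sketch, a docker-compose.yml for a small web application might look like the following (service names, images, and ports are illustrative); running docker compose up -d then starts both containers on a shared default network:

version: "3.8"
services:
  web:
    image: nginx:1.25            # web server
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: mysql:8.0             # database
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql   # persist database files between restarts
volumes:
  db-data: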

    Docker in Production

Before the birth of Docker, the software industry faced some major issues. One of them was the unpredictable behavior of an application after a server migration. With Docker in production, migrating applications between servers has become painless. Let us take a look at some of the issues the industry faced before Docker became widely used:

Dependency Matrix - Whenever software was built on top of a runtime or platform with a specific language version, e.g., Java 8 or Python 3.5, installing that exact version on the target system was a challenge, as system applications might require a different version.

Time-consuming Migration - As soon as software was migrated to a new environment, managers, developers, and system administrators would start hunting down bugs caused by the new environment. One question that was asked again and again was, “What is different between this environment and the last one, where everything worked fine?”

“It Works on my Machine!” - This was the biggest problem whenever a new developer joined the team and needed the project set up on their system. The stock troubleshooting answer was, “It works on my machine. I don’t know why it is not working on yours.”

In the modern-day world, Docker is something we hear about day in and day out. Docker containerizes an application and thereby solves the classic problem of “it runs on my machine but not in prod”. To learn more about Docker, check out learning opportunities like Docker Training online.

From a DevOps perspective, Docker has become an integral part of the lifecycle: managing deployments, setting up infrastructure, ensuring code development is not blocked by infrastructure issues, and much more. With Docker in the DevOps lifecycle, teams can develop features rapidly. DevOps is a pivotal part of the code deployment lifecycle; to learn and get certified in Docker, check out the DevOps Certification training course.

    Deploying Containers Across Environments with Docker in Production 

When it comes to Docker images, big is bad! Big means slow and hard to work with, and it brings in more potential vulnerabilities and possibly a bigger attack surface. For these reasons, Docker images should be small. The aim of the game is to ship production images containing only the stuff needed to run your app in production.

The problem is that keeping images small is hard work. For example, the way you write your Dockerfiles has a huge impact on the size of your images. A common example is that every RUN instruction adds a new layer. As a result, it’s usually considered a best practice to include multiple commands as part of a single RUN instruction, all glued together with double ampersands (&&) and backslash (\) line breaks. While this isn’t rocket science, it requires time and discipline.

Another issue is that we don’t clean up after ourselves. We RUN a command against an image that pulls in build-time tools, and then leave all those tools in the image when we ship it to production. This is not ideal.
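As an illustration, assuming a Debian/Ubuntu base image, the second snippet below installs the same packages as the first but collapses the steps into a single RUN instruction and cleans the apt cache in the same layer, so the temporary files never land in the final image:

# Naive: three layers, apt cache left behind
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl

# Better: one layer, cleaned up within the same instruction
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential curl && \
    rm -rf /var/lib/apt/lists/*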

Here, multi-stage builds come to the rescue. Multi-stage builds are all about optimizing builds without adding complexity. A multi-stage build uses a single Dockerfile containing multiple FROM instructions; each FROM instruction starts a new build stage that can easily COPY artifacts from previous stages.

    Example 

Let us create a sample Dockerfile to understand this. It builds a Linux-based application, so it will only work on a Linux Docker host. The application is also quite old, so don’t deploy it to an important system, and be sure to delete it as soon as you are finished.

    The Dockerfile is shown below: 

# Stage 0: build the React front end
FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install
RUN npm run build

# Stage 1: build the Java application server
FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
RUN mvn -B -f pom.xml -s /usr/share/maven/ref/settings-docker.xml dependency:resolve
COPY . .
RUN mvn -B -s /usr/share/maven/ref/settings-docker.xml package -DskipTests

# Stage 2: assemble the slim production image
FROM java:8-jdk-alpine AS production
RUN adduser -Dh /home/gordon gordon
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/AtSea-0.0.1-SNAPSHOT.jar .
ENTRYPOINT ["java", "-jar", "/app/AtSea-0.0.1-SNAPSHOT.jar"]
CMD ["--spring.profiles.active=postgres"]

The first thing to note is that the Dockerfile has three FROM instructions. Each of these constitutes a distinct build stage. Internally, they’re numbered from the top starting at 0. However, we have also given each stage a friendly name.

    • Stage 0 is called storefront. 
    • Stage 1 is called appserver. 
    • Stage 2 is called production. 

Stage 0: storefront

    The storefront stage pulls the node:latest image, which is over 900MB in size. It sets the working directory, copies in some app code, and uses two RUN instructions to perform some npm magic. This adds three layers and considerable size. The resulting image is even bigger than the base node:latest image, as it contains lots of build stuff and not a lot of app code. 

Stage 1: appserver

The appserver stage pulls the maven:latest image, which is over 500MB in size. It adds four layers of content via two COPY instructions and two RUN instructions. This produces another very large image with lots of build tools and very little actual production code.

Stage 2: production

    The production stage starts by pulling the java:8-jdk-alpine image. This image is approximately 150MB. That is considerably smaller than the node and maven images used by the previous build stages. It adds a user, sets the working directory, and copies in some app code from the image produced by the storefront stage.  

    After that, it sets a different working directory and copies in the application code from the image produced by the appserver stage. Finally, it sets the main application for the image to run when it starts as a container. 

    An important thing to note is that COPY --from instructions are used to only copy production-related application code from the images built by the previous stages. They do not copy build artifacts that are not needed for production. 

Multi-stage builds were introduced in Docker 17.05 and are an excellent feature for building small, production-worthy images.
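Building the multi-stage Dockerfile is no different from a normal build; only the final stage ends up in the tagged image. As a rough sketch (the atsea tag below is just an example name):

docker image build -t atsea .
docker image ls atsea                                        # lists only the small final image
docker image build --target appserver -t atsea-builder .    # optionally stop at an intermediate stage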

    Adopting Docker in a Production Environment: Enterprise Considerations  

The first thing you need to consider when adopting Docker is the size of your infrastructure. You need a good-sized infrastructure with plenty of RAM, CPU, and disk space. In addition, you need to ensure that heavily loaded containers run on separate hosts (or physical machines) and do not share resources, because packing many containers onto one host can lead to performance issues and increase the complexity of maintaining your infrastructure.

Another important thing to consider when adopting Docker is that it needs a lot of secure storage space. The storage should be backed up regularly and must be encrypted so that no one can access it without permission from you or the team members who hold its keys. You must also make sure there are enough servers available for running your application; if there aren’t, your application might fail under high load on a shared server.

    Constantly Changing Docker Ecosystem  

    The Docker ecosystem is constantly changing. The most popular Docker images are constantly being improved and updated, which makes them more reliable, faster and more secure. 

If you want to use the latest version of your favorite image, you have to use a recent version of Docker Engine. All the images in the official repositories are available for free, and publishing your own public images on Docker Hub is also free; paid plans such as Docker Pro or Docker Business are needed mainly for extras like additional private repositories.

    Enforcing Policy and Controls for Docker in Production  

    Docker can be configured to be extremely secure. It supports all of the major Linux security technologies, including kernel namespaces, cgroups, capabilities, MAC, and seccomp. It ships with sensible defaults for all of these, but you can customize or even disable them. 
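As a small sketch of what tightening those defaults can look like at run time (my-app:1.0 is a placeholder image, and the capabilities an application actually needs will vary):

# Drop all Linux capabilities except binding to low ports, and block
# privilege escalation via setuid binaries
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges my-app:1.0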

    Over and above the general Linux security technologies, Docker includes an extensive set of its own security technologies. Swarm Mode is built on TLS and is extremely simple to configure and customize. Image scanning can perform binary-level scans of images and provide detailed reports of known vulnerabilities. Docker Content Trust lets you sign and verify content, and Docker Secrets allow you to securely share sensitive data with containers and Swarm services. 

    The net result is that your Docker environment can be configured to be as secure or insecure as you desire; it all depends on how you configure it.

    Best Practices for Running Docker in Production  

    1. Use Specific Docker Image Versions 

When using Docker in production, if we don’t pin a specific image version in the build, Docker picks up the latest tag of the image by default.

Issues with this approach:

• You might get a different image version than in the previous build. 
• The new image version may break things. 
• The latest tag is unpredictable, causing unexpected behaviour.

So instead of the floating latest tag, we should pin the image version. The rule here is: the more specific, the better.
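For example, in a Dockerfile (the node versions below are only illustrative, and the digest is a placeholder):

# Unpredictable: resolves to whatever "latest" points at on the day you build
FROM node:latest

# Better: pin a specific tag
FROM node:17.0.1

# Most specific: pin the immutable content digest
FROM node@sha256:<digest-of-the-exact-image>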

    2. Docker Monitoring and Logging   

To securely manage your Docker deployment, you need visibility into the entire ecosystem. You can achieve this with a monitoring solution that tracks container instances across the environment and lets you automate responses to fault conditions, testing, and deployment.
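Even before a full monitoring solution is in place, the built-in Docker commands give a quick view of what containers are doing (the container name web below is a placeholder):

docker stats                      # live CPU, memory, network, and block I/O per container
docker logs -f --tail 100 web     # follow the last 100 log lines of the "web" container
docker inspect --format '{{.State.Health.Status}}' web   # health status, if the image defines a HEALTHCHECK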

    3. Cautionary Use of Docker Socket   

Some services, such as the Swarm visualizer used in many tutorials, bind mount the Docker socket /var/run/docker.sock into the container. Bind mounting the Docker daemon socket gives a container a lot of power, as it can control the daemon. It must be used with caution, and only with containers we trust. Many third-party tools demand that this socket be mounted when using their service.

    You should verify such services with Docker Content Trust and vulnerability management processes before using them. 
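For reference, a socket bind mount looks like the following; any process inside this container can drive the Docker daemon, so the image must be fully trusted (dockersamples/visualizer is shown only as the usual tutorial example):

# The bind mount hands the container full control of the local Docker daemon
docker run -d -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  dockersamples/visualizer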

    Docker Security Best Practices  

Docker has provided numerous benefits over its competitors. However, containers share the host’s kernel, so if proper security measures are not taken, the host system can be at risk of being compromised, letting an attacker take control of it.

    1. Host Machine Compromise 

Since containers use the host’s kernel as a shared kernel for running processes, a compromised container can be used to exploit or attack the entire host system.

    2. Container Mismanagement 

If a user somehow manages to escape the container namespace, they will be able to interact with other processes on the host and can stop or kill them.

    3. Maxing out Utilization 

If it is not restricted, a single container can consume all the resources of the host machine, forcing other services to halt and stop executing.
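Resource limits can be set per container when it is started; the values below are arbitrary examples:

# Cap the container at 512 MB of memory and one CPU core
docker run -d --memory=512m --cpus=1 nginx:1.25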

    4. Issue with Untrusted Images 

Docker allows us to run any image present on Docker Hub as well as locally built images. So, when an image from an untrusted source is run on the machine, the attacker’s malicious program may gain access to the kernel or steal the data present in the container and its mounted volumes.

    5. Best Practices to Mitigate Risks 

No matter what measures you take, security can be breached at any level, and no one can remove the risks entirely. However, we can mitigate them by following some best practices to ensure we close every gate an attacker could use to reach our host machine.

    6. Container Lifecycle Management 

Through the container lifecycle management process, we establish a strong foundation for reviewing the creation, updating, and deletion of containers. This covers the security measures needed when a container is first created. When an image is updated, we should review all of its layers again rather than only the updated layer.

    7. Information Management 

Never push sensitive data like passwords, SSH keys, tokens, or certificates into an image. Such data should be encrypted and kept in a secret manager, and access to these secrets should be granted explicitly to services, and only while they are running.
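As a sketch using Docker Swarm secrets (the secret name, service name, and password are placeholders; the official mysql image reads the _FILE variant of its password variable):

# Store the password in the cluster's encrypted store
echo "s3cret-password" | docker secret create db_password -

# Attach it to a service; it is exposed inside the container at /run/secrets/db_password
docker service create --name db \
  --secret db_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_password \
  mysql:8.0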

    8. No Root-level Access 

A container should never run with root-level access. A role-based access control system reduces the possibility of accidental access to other processes running in the same namespace. Many organizations restrict access using Active Directory and grant appropriate access based on user roles.

In general, we can use Linux’s built-in commands to create a temporary non-root user on the fly.

FROM python:3.9
RUN groupadd -r myuser && useradd -r -g myuser myuser
# <here, do whatever you have to do as a root user, like installing packages, etc.>
USER myuser

    or while running a container from the image use, 

docker run -u 4000 python:3.9

    This will run the container as a non-root user. 

    9. Trusted Image Source 

Check the authenticity of every image pulled from Docker Hub by using Docker Content Trust. Docker Content Trust was introduced in Docker 1.8. It is disabled by default, but once enabled it allows you to verify the integrity, authenticity, and publication date of all Docker images pulled from the Docker Hub registry.

Enable Docker Content Trust using export DOCKER_CONTENT_TRUST=1, then try to pull this unsigned image with docker pull dganapathi/test123.

# docker pull dganapathi/test123
Using default tag: latest
Error: remote trust data does not exist for docker.io/dganapathi/test123: notary.docker.io does not have trust data for docker.io/dganapathi/test123

Docker checks whether the image is signed and throws an error if it is not.

    10. Mounting Volumes as Read-only 

One of the best practices is to mount the host filesystem or volume as read-only if the container does not need to save data. We can do that by simply adding the ro flag to a -v mount, or readonly to the --mount argument.

    Example: 

    $ docker run -v volume-name:/path/in/container:ro python:3.9 

    or 

    $ docker run --mount source=volume-name,destination=/path/in/container,readonly python:3.9

    Advantages of Using Docker   

    1. Industry Demand  

    As Docker promises an equivalent environment in both development and production, companies don’t have to test applications twice in different environments. As a result, Docker adoption by companies increases daily. 

    2. Isolation from the Main System  

As a developer, you will always experiment with libraries and different versions of programming languages. For example, if you are testing asyncio support for one application that needs Python 3.7, and you decide not to use it, you might need to uninstall Python 3.7 and reinstall the previous version. With Docker, you simply remove the container, with zero added complexity.

    3. Configurations  

Every project requires a lot of different configurations, and maintaining them by hand is difficult. Docker provides the capability to build images with different configurations and tag them accordingly.

    4. Docker Hub 

Ever imagined sharing your machine the way you share code on GitHub? Docker Hub provides access to thousands of images that come pre-configured with their environments, so once your code works on your machine, you can build an image and share it over the internet.

    Conclusion 

Docker is very flexible and can be used in multiple ways, and we have only scratched the surface here. You can use Docker to manage your own data and files or to act as a communication layer between servers. We hope this article made you familiar with the basics of using Docker in production and how it can help you. To get certified in Docker and Kubernetes, check out KnowledgeHut’s Kubernetes Docker certification.

    Frequently Asked Questions (FAQs)

1. Can Docker be used in production?

Yes, it can be used in production, provided best security practices are followed.

2. Is Docker good for a production database?

Docker is only as safe as the security measures implemented around it. Technically, it can be used for a production database, but it is generally not considered a best practice.

3. What are the most common misconceptions about using Docker in production?

The main misconception about Docker in production concerns security, but the risks can be mitigated by following Docker security best practices.

4. Can I use Docker to create GUI applications?

Docker does not prioritize GUI applications; it is mainly intended for console-based apps and is not well suited to rich UI applications.


    DhineshSunder Ganapathi

    Author

    DhineshSunder Ganapathi is an experienced Software Engineer in Data-Platform, Data Integrations, and Backend Technologies with a demonstrated history of working in the information technology and services industry. He has a prolific knowledge of Python, Flask, FASTAPI, Mysql, Airflow, AWS, Docker, REST APIs, Shell-scripting, and Distributed Systems. In addition, Dhinesh is a budding author, a tech blogger, a chess evangelist, and a candid toastmaster.
