A revolution of sorts is playing out in Linux-based software development. Application containers are tools that allow software to run in an isolated environment, using only the resources the application requires, as if it were running on its own dedicated server. A single operating system can run multiple application containers at once because they are lightweight, launch quickly, and have a consistent runtime model.
They are also portable. Application containers use cutting-edge technology to package and run a single service in a variety of environments. That is important for developers because software rarely operates under identical conditions from one user to the next, or from one system to the next. Even seemingly subtle differences in hardware or supporting software between the development and testing environments, or between testing and production, can cause headaches.
“Application containers have the potential to simplify a lot of what we do in the software development process. The biggest advantage is consistency across multiple environments,” said TDK Chief Technology Officer Mark Henman. “Getting that consistency from developer to developer, through testing and all the way to production ensures everybody is looking at the application the same way.”
Application containers help level the playing field by creating a complete environment with everything needed to launch an application – processes, networking, libraries, dependencies, file systems – isolated in one package. This makes many infrastructure differences moot.
"Everything we do is custom and based on what our customers need. Even if we do two projects that are based on the same technologies, those clients might use different tools and different testing mechanisms, so we have to adapt for that. Application containers can help us do that quickly and efficiently," Henman said.
How Application Containers Work
The concept is based on tool sets that have been around a long time but have been refined in recent years. Docker, which was released as open source in 2013, is the most prominent player. Other tools that can aid in development and deployment, but approach the problem from different angles, include Vagrant, Chef and Puppet.
The key technology in Docker is a union file system. This layered approach permits transparent overlaying of independent file systems, merges read-only and read-write file systems, and sets write priorities. Several application containers can share the same operating system kernel, yet each container runs in isolation: whatever runs inside the container cannot see resources or processes outside it.
[Image: Docker's layered container architecture, via https://www.docker.com/]
The isolation comes from Linux kernel features: namespaces, which give each container its own view of processes and networking, and control groups ("cgroups"), which limit the resources a container can consume. Docker container images can be created by launching a base image, making any required changes and committing a snapshot of the new image. Another option is to write a Dockerfile, a repeatable set of build instructions.
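To make the Dockerfile route concrete, here is a minimal sketch. The base image, package, file paths and application name are hypothetical; a real Dockerfile would reflect the project's actual stack. Each instruction produces a new read-only layer, and unchanged layers are reused from Docker's build cache on subsequent builds:

```dockerfile
# Hypothetical Dockerfile sketch. Each instruction below adds a layer
# on top of the previous one; layers that have not changed are pulled
# from the build cache instead of being rebuilt.

# Base image layer.
FROM ubuntu:14.04

# Dependency layer: installed once, then cached.
RUN apt-get update && apt-get install -y python

# Application layer: only this layer is rebuilt when the code changes.
COPY app.py /opt/app/app.py

# Command executed when a container is started from this image.
CMD ["python", "/opt/app/app.py"]
```

Running `docker build -t myapp .` in the directory containing this file produces the image. The snapshot approach works the other way around: start a container from a base image, make changes interactively, then save them with `docker commit`.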
“Docker is very lightweight. Let’s say I’m working on two different projects and they share some base technologies. The way Docker builds those images to actually run them is by trying to use things it has used in the past. Even if I’m creating an entirely new virtual machine, because it can layer those things in, Docker doesn’t have to build it from scratch each time,” Henman said.
Fast Learning Curve
Another attribute of application containers is that they are easy to configure. Henman recently used Docker for the first time to create a container to run the TDK employee time card application. He estimated the process took just 25 minutes.
“If I was to create another container based on similar technologies, I could probably put that container together in 10 minutes and roll it out to all the developers in our testing environment. They would do some fine tuning and roll that into a production environment,” Henman said. “Application containers allow us to move quickly from project to project. Once the baseline is set, everyone can share the container to make sure they are running the code the same way.”
Henman said there are some nuances involved. For example, when working from the Docker command line, each run command launches a new container by default. That can frustrate users who expect an application to resume where it left off after exiting. Also, running too many containers simultaneously can strain system memory.
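The run-versus-resume distinction can be seen at the command line. In this sketch, the image name `myapp` and container name `job1` are hypothetical:

```shell
# Each 'docker run' creates a brand-new container from the image,
# with a fresh writable layer; changes made inside an earlier
# container are not visible here.
docker run --name job1 myapp

# To pick up where that container left off, with its writable layer
# intact, restart it by name instead of running the image again.
docker start -a job1

# List all containers, including stopped ones that still occupy disk.
docker ps -a
```

Forgetting this default is how stopped containers quietly accumulate, which is also why periodically checking `docker ps -a` is a good habit.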
“Once you get into a normal routine of using it, the process accelerates. If I need to change one piece in the middle, I can do that without having to go through a complete re-build. That increases the speed to use the application. Because Docker tries to be very lightweight and give you only what you need to run, it streamlines the whole process,” Henman said.
Potential for the Future
Docker technology is being embraced in cloud environments by Microsoft, Amazon and Google. That acceptance allows applications to be deployed widely without modification, because the technology runs the application in any given environment according to the Dockerfile instructions.
Currently, application containers are Linux-centric. But can you imagine a day when software developers are able to run the very same application, regardless of whether they use Windows, Mac or Linux? That day may be coming: Docker is working with Microsoft and Apple so application containers can run natively on all three major platforms.
"If they do that it will be nice because developers can switch from one environment to another. If someone is running a Windows machine and another is running a Mac, they then could all use the same tool. And that’s where we’ll really be able to streamline what’s going on because there will be a consistent environment across all platforms," Henman said. "One of the goals of Docker would be to build an image and run it in all environments as quickly as possible."
Contact the Solutions team at TDK Technologies to learn more about how application containers can help your company.