Software container technology broke into the mainstream in the early 2010s, and it’s been making waves across the IT industry ever since. Over the past few years, Google, Microsoft, and Amazon have all begun offering their own solutions, further cementing software containers as a fixture of modern IT.
Much like its predecessor, virtualization, software container tech promised to meet organizations’ needs for faster app development and to change IT operations forever. And given its sky-high adoption rates, the tech seems to be performing as hoped. In a 2019 survey of more than 500 IT pros, over 87 percent of respondents said they were actively running container tech (up from 55 percent in 2017), according to a report by Portworx and Aqua Security.
Here’s what you need to know about software containers and the risks you need to consider if your organization is looking to implement this tech.
What are software containers, and why do we like them?
Google, a top authority on the subject, describes a container as a “logical packaging mechanism in which applications can be abstracted from the environment in which they actually run.” This separation between software and environment means that apps can be deployed quickly and consistently across many devices and locations at once.
One of the largest hurdles to quick and consistent application deployment is the fact that transferring code from one environment to another is a process full of variables. When a software environment isn’t identical to the one for which the app was designed, issues often arise.
“You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen,” says Solomon Hykes, founder of container pioneer Docker, in an article for CIO. “Or you’ll rely on the behavior of a certain version of an SSL library, and another one will be installed. You’ll run your tests on Debian, and production is on Red Hat, and all sorts of weird things happen.”
Software containers solve these headaches by creating independent runtime environments. Developers put their code and any dependencies (like specific libraries) into containers that can run anywhere — no virtual machine needed.
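To make that concrete, here’s a minimal sketch of a Dockerfile (the app name, base image version, and file names are hypothetical) that packages a small Python app together with its dependencies:

```dockerfile
# Hypothetical example: package a small Python app with its dependencies.
# Pinning the base image means dev, test, and production all run the same Python.
FROM python:3.11-slim

WORKDIR /app

# Install the exact library versions the app was tested against.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY app.py .

CMD ["python", "app.py"]
```

Building the image once (`docker build -t my-app .`) and running it anywhere Docker is available (`docker run my-app`) yields the same runtime environment on a laptop, a test server, or a production host, which is exactly the consistency problem Hykes describes.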
Containers also create what’s called an isolation boundary. That way, if anything goes wrong within a container, it won’t affect the whole server. Developers also don’t have to install a full operating system into a container, just whatever basics the software needs to perform the tasks it was created to do. Everything is lighter, faster, and generally smoother.
What are the risks associated with containers?
As an IT pro, one of your first thoughts as you look to adopt any cutting-edge tech is likely, “What are the security risks?” After all, sometimes an organization’s desire to ship products more quickly can outweigh its attention to due diligence—much to the chagrin of IT teams everywhere.
In the case of software containers, one issue is the technology’s complexity. When IT security teams don’t fully understand the tech, it’s more challenging for them to determine how to best identify vulnerabilities.
As DevOps engineer Jonathan Bethune observes, “Greater abstraction away from hardware also brings with it the risk of less transparency and control. When something breaks in a system running hundreds of containers, we have to hope that the failure bubbles up somewhere we can detect.”
For example, “images,” the file systems and configurations that create the independent runtime environments, act as the recipe for containers. Images can be built in-house or pulled from public registries. If these public images aren’t properly vetted and validated, they could be full of invisible vulnerabilities.
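One common safeguard, shown here as a hedged sketch rather than a complete vetting policy, is to pin a reviewed base image by its content digest so the build fails if the image is ever swapped out from under you:

```dockerfile
# Hypothetical sketch: after vetting an image, record its digest
# (visible via `docker images --digests`) and reference it explicitly.
# A mutable tag like "python:3.11-slim" can change over time; a digest cannot.
FROM python@sha256:<digest-of-the-image-you-vetted>
```

Digest pinning doesn’t replace scanning or validation, but it guarantees that the image you run is byte-for-byte the one you inspected.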
Another issue is that relying on third-party services for your containers means that if something goes wrong on their side, you could be left in the dark. Bethune also notes how you need to trust that a provider isn’t maintaining any back doors into your data that could be exploited. The big names in the container business do have decent track records in this regard, but you should consider the risks and carefully vet your provider if you’re working with highly sensitive data or you have extremely stringent uptime and accessibility requirements.
How to protect an environment that uses containers
Containers offer many benefits for developing and deploying applications, and adoption of the tech will likely continue to grow. As an IT leader, it’s your responsibility to protect your organization from any threats this growth might usher into your environment.
Here are three ways to protect your network:
Educate your team on container technology
You can’t protect what you don’t understand, so take time to brief all IT personnel on software containers, how they work, and how security risks may develop around them. Your team may grumble about another presentation, but they’ll thank you in the long term for information on this increasingly ubiquitous tech.
Monitor and audit your container ecosystem
Make sure you map the flow of data to the best of your ability and regularly monitor container activity so you can quickly identify and react to possible breaches. This may require an investment in a third-party tool that specializes in container security.
Focus on endpoint security
Endpoint security is critical to protecting your company, whether you’re using container tech or not. However, because containers can introduce new vulnerabilities, it’s a good idea to double down. Identify the weak points in your environment (e.g., aging equipment or unsecured devices) and address those issues immediately. For example, replace older printers with new, highly secure printing systems that were built with modern threats in mind.
Enterprise technology is becoming more complex every day, making the business of protecting your environment much more challenging. By staying up to date on emerging tech and securing your endpoints, you can make sure you’re prepared for whatever’s next.