As a boutique development studio, we are naturally drawn to innovation and the adoption of the latest and greatest technology. Innovation lets us deliver great value to our customers at an affordable cost, and it grants us enough spare time to volunteer for various good causes.
Docker, the flagship of innovation in modern-day DevOps, certainly landed on our tech radar back in 2013, but we were hesitant to adopt it extensively, partly because of the steep learning curve, and also for a variety of other reasons.
While discussing security topics at this year’s OpenFest 2015, we were repeatedly asked whether we use Docker, and for what. That is why we decided to summarize our decision-making process in this short blog post.
Our team has always had enough talent to deploy code by hand, even on some of our most advanced projects. While CI servers are a must-have in today’s BDD world, automated deployment is not. The reason? People make fewer mistakes when they sweat through a deployment than when they rely on an automated “black-box” solution. A “black box” runs in a different environment than the developer’s PC, and it introduces a bottleneck, because DevOps teams are usually small and quite busy.
Docker, on the other hand, arrived promising to take all these woes away, minimizing overhead and, to an extent, making a dedicated DevOps team unnecessary.
Yet we pride ourselves on keeping customer data secure at all costs. No promise of optimization or miraculous woe-healing could make us break that commitment.
Take a look at the video below, in which Daniel Walsh dissects Docker’s security mechanisms:
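Many of the concerns raised in talks like this can be partially mitigated at container start time by shrinking the container’s privilege surface. As an illustrative sketch only (the image name and UID are placeholders, and the right flag set depends entirely on the application), a hardened `docker run` invocation might look like this:

```shell
# Illustrative hardening flags; "example/app" and the UID are placeholders.
# --read-only            mount the container's root filesystem read-only
# --cap-drop / --cap-add drop all Linux capabilities, re-add only what is needed
# --security-opt         forbid privilege escalation via setuid binaries
# --user                 run as an unprivileged UID:GID instead of root
docker run --read-only \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --security-opt no-new-privileges \
  --user 1000:1000 \
  example/app:latest
```

Flags like these reduce what a compromised process inside the container can do, but they do not change the fundamental point of the talk: the containers all share the host kernel, so kernel-level isolation remains the weak spot.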