Running Docker containers isn't just for folks with mild OCD who like everything to be in its proper place. Containerization has become a valuable technique for professional software development teams looking to avoid a trip down into the depths of dependency hell. By keeping everything an application needs in a single package, isolated from the rest of the system, consistent application performance and higher levels of security are much easier to achieve. But everything, including Docker, comes at a cost, right?
Conventional wisdom says that the primary cost is a small performance hit. After all, any additional software layers must have some computational cost, so this makes intuitive sense. For most applications, especially in the enterprise world, we have an overabundance of hardware resources these days. As such, a tiny performance hit is generally an acceptable trade-off for the myriad benefits of containerization. But in the world of real-time applications and robotics, any latency is too much, so Docker is largely avoided.
Still, conventional wisdom does sometimes fail us. Shouldn't we ask whether or not our intuitions are actually true before making an important decision? The team over at robocore believes that to be the case, so they took a deep dive into Docker to get some hard data and determine if it really does slow things down. The results might surprise you and make you rethink your development strategy.
The team focused on robotics workloads with strict real-time requirements like control loops, high-rate sensor streams, and perception pipelines. Using a Jetson Orin Nano, they ran benchmarks comparing Dockerized ROS 2 setups to native execution. Tests measured latency, throughput, and jitter under both idle and heavily loaded CPU conditions.
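To get an intuition for what a jitter benchmark like this measures, here is a minimal, self-contained sketch (not robocore's actual benchmark code) of a periodic loop that records how far each wakeup drifts from its deadline. The 10 ms period and iteration count are arbitrary assumptions for illustration:

```python
import time
import statistics

def measure_jitter(period_s=0.01, iterations=200):
    """Run a sleep-based periodic loop and record, in microseconds,
    how late each iteration wakes relative to its target deadline."""
    deviations_us = []
    next_deadline = time.perf_counter()
    for _ in range(iterations):
        next_deadline += period_s
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)
        # Positive values mean the loop woke up late (missed its deadline)
        deviations_us.append((time.perf_counter() - next_deadline) * 1e6)
    return statistics.median(deviations_us), max(deviations_us)

median_us, worst_us = measure_jitter()
print(f"median jitter: {median_us:.1f} us, worst-case: {worst_us:.1f} us")
```

Running this both natively and inside a container, with and without a CPU stressor in the background, gives exactly the kind of median-versus-worst-case comparison the robocore team reported.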
What they found is that at idle, the differences between native and containerized execution were negligible. More interestingly, under heavy load, Docker often matched, or even outperformed, native in terms of worst-case latency. This may seem counterintuitive, but it was discovered that the reason for the unexpected boost came from Linux's Completely Fair Scheduler (CFS). CFS can sometimes allocate CPU time more evenly to a container's process group than to equivalent processes running directly on the host, smoothing out performance spikes.
Throughput tests also showed little performance penalty under Docker. In fact, containerized setups often held target message rates more consistently under CPU stress. Jitter benchmarks, which are important for understanding the stability of control loops, showed that median performance was very close to native. Careful configuration, such as increasing shared memory, using host IPC, and explicitly pinning CPU cores, could further improve container performance.
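As a rough sketch of what that configuration might look like, the tuning options mentioned above map onto standard `docker run` flags. The image name, shared-memory size, and core list below are placeholders, not values from robocore's benchmarks:

```shell
# Hypothetical example, not robocore's exact setup.
#   --ipc=host        share the host's IPC namespace (helps DDS shared memory)
#   --shm-size=512m   raise /dev/shm above Docker's 64 MB default
#   --cpuset-cpus     pin the container to dedicated CPU cores
docker run -it --rm \
  --ipc=host \
  --shm-size=512m \
  --cpuset-cpus=2,3 \
  ros:humble \
  ros2 run demo_nodes_cpp talker
```

Pinning cores in particular reduces scheduler-induced jitter by keeping the real-time workload away from competing processes.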
The main takeaway of this work is that Docker doesn't kill real-time performance, at least not when configured properly. So, the next time someone dismisses Docker for robotics as too slow, you can simply ask if they have actually measured it. The data suggests that with the right setup you can have both the convenience of containers and the performance your robots require.
A comparison of latency over time (📷: robocore)
Throughput test results (📷: robocore)