Recently I was working to wrap calls to a binary with a web-service. The binary reads input from one file and writes output to another. It's a producer-consumer model where the web-service is the producer and the binary is the consumer. I developed a web-service that coordinates reading from and writing to those files, and due to the nature of web-services it has to handle multiple simultaneous requests. It's not the smoothest implementation, but it works well enough. The binary is designed for batch operation: give it a large input file and it will spit out results in the output file at roughly 50,000 rows/second.
In order to collate the input with the output, the web-service needs to do some coordination. That coordination limited performance to approximately 20 rows/second per thread. My initial test of 10 threads achieved 200 rows/second, and a follow-up test of 20 threads achieved 400 rows/second. Subsequent testing showed it scaled linearly, all within one process.
I wanted to see if running multiple instances of the web-service/binary pair would increase performance. My plan was to put the binary and web-service into a docker container, launch 2-5 containers, and load balance between the instances with something like haproxy/nginx. Dockerizing the process was straightforward enough. The challenge came when I attempted to stream the input file to the binary, which I did with tail. When I launched tail in my container I got a message stating that the file system was unrecognized and that it would fall back to polling:
root@22c1514435a3:/# tail -F /input
tail: unrecognized file system type 0x794c7630 for ‘/input’. please report this to email@example.com. reverting to polling
Performance tests confirmed this behavior: throughput fell from a disheartening 20 rows/second to a dismal 1 row/second. It appeared that tail was polling the file system for changes once per second. After reading this bug report I set out to find an updated base image with a newer version of tail.
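For intuition, the polling fallback amounts to something like this simplified Python sketch. The real tail is C and considerably more careful, so treat this purely as an illustration of the mechanism:

```python
import os
import time

def poll_tail(path, interval=1.0):
    """Simplified sketch of tail's polling fallback: instead of being
    woken by inotify when the file grows, check its size every
    `interval` seconds."""
    pos = 0
    while True:
        size = os.path.getsize(path)
        if size > pos:
            with open(path) as f:
                f.seek(pos)
                yield f.read()   # hand back any newly appended rows
                pos = f.tell()
        time.sleep(interval)     # new rows can sit unseen for up to a full interval
```

With a one-second interval, a row written just after a poll waits nearly a full second before the consumer sees it, which lines up with the drop to roughly one delivery per second.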
I tried all of the images listed in debian's docker repository. The good news is that I was able to get rid of the error message. The bad news is that none of them improved performance. Newer versions printed no error message, but they were apparently still polling under the covers.
I thought there were a few ways to attack the problem.
1) replace tail with another Linux process
2) avoid writing to the file system
3) wrap the binary process in something that could interact directly via standard in / standard out
I decided to focus on #1 and #2. My search for a replacement for tail didn't get me very far; the best I could find was a grep setup that still relied on polling. For #2 I came across named pipes, also known as FIFOs in Linux. They look and act like regular files but never write to disk; the data stays in RAM. This looked promising: in my initial testing without docker it improved performance by 10-20%.
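A minimal sketch of the named-pipe idea in Python, with a made-up path and a single made-up row standing in for the real input stream:

```python
import os
import threading

FIFO = "/tmp/input_pipe"   # hypothetical path standing in for /input
if os.path.exists(FIFO):
    os.unlink(FIFO)
os.mkfifo(FIFO)            # creates the pipe; no data ever hits the disk

results = []

def consumer():
    # open() on a FIFO blocks until the producer opens the other end.
    with open(FIFO) as f:
        for line in f:
            results.append(line.strip())

t = threading.Thread(target=consumer)
t.start()

# Producer side: rows pass through kernel buffers, never the disk.
with open(FIFO, "w") as f:
    f.write("row1,data\n")

t.join()
os.unlink(FIFO)
print(results)  # → ['row1,data']
```

The key property is that reads and writes go through kernel buffers rather than the file system, which is why there is no disk I/O to poll for in the first place.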
Sadly, inside docker there was no change in results: performance remained at 1 row/second with a named pipe replacing the input file. I have yet to try approach #3, but so far it looks like I will have to abandon docker for a non-docker solution to get a workable level of performance.
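For reference, approach #3 would look roughly like this. Since the real binary only speaks files, `tr` stands in here as a hypothetical consumer that reads standard in and writes standard out:

```python
import subprocess

# `tr` plays the role of a stdin/stdout-capable consumer: rows go in
# on standard in and results come back on standard out, with no file
# (and no tail) in between.
proc = subprocess.Popen(
    ["tr", "a-z", "A-Z"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
out, _ = proc.communicate("row1,data\n")
print(out)  # → ROW1,DATA
```

Because the wrapper talks to the process over pipes, there is nothing for the file-system watcher to watch, so the whole inotify-versus-polling question disappears.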
Wrapping things up: tail was unable to recognize the docker file system, so it fell back on polling instead of using inotify. My setup relied on tail to continuously stream data to a binary; without inotify, tail delivered data once per second, tanking performance and any chance of using docker for this. The silver lining is the discovery of named pipes, which increased performance of the non-docker setup by 20%.