Re: [sympa-users] Docker for Sympa

  • From: Peter Schober <address@concealed>
  • To: address@concealed
  • Subject: Re: [sympa-users] Docker for Sympa
  • Date: Tue, 20 Mar 2018 18:16:33 +0100

* Marc Chantreux <address@concealed> [2018-03-20 17:06]:
> I have some questions on "how to handle multiple processes in the
> same container": do you install openrc or something and use it as
> the entrypoint (AFAIK that's what we started to try)?
>
> If not: what do you do when one process inside the container
> crashes? How do you catch that event and handle it?

Whether you need something init-like depends on the details of the
service, especially with regard to signal handlers and whether the
application leaves zombie processes:
https://github.com/docker-library/official-images#init

Here's a more elaborate write-up on zombie processes and the special
role of something init-like in cleaning them up:
https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/
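
For what it's worth, Docker ships a shortcut for exactly this:
"docker run --init" (or "init: true" in a Compose file) runs a tiny
init (tini) as PID 1 that reaps zombies and forwards signals to the
service process. A minimal sketch, with "sympa" and the image name as
placeholders rather than anything from this thread:

    # docker-compose.yml (sketch; service and image names are placeholders)
    services:
      sympa:
        image: sympa:latest   # placeholder image
        init: true            # run tini as PID 1: reaps zombies, forwards signals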

And this seems like a solid article on the whole "multiple processes"
and "containers vs. VMs" topic:
https://runnable.com/docker/rails/run-multiple-processes-in-a-container

So the "Docker way" of things would indeed be splitting things up into
several small containers doing one thing (concern, not process) each,
e.g. one with a web server exposing the FCGI socket (unix domain or
TCP) to other containers (i.e., no Perl on that machine, may be
changed from Apache httpd to Nginx without affecting the rest of the
system, etc.), another container running the FCGI workers and talking
to the socket from the webserver container, same thing again with the
RDBMS, then add data volumes to persist the data from the RDBMS, a
syslog container to forward logs elsewhere (or persist them to the
data volume), etc.
You're basically rebuilding, from pieces running in isolated
containers, the whole integration a GNU/Linux distribution gives you
(with isolation only as good as Linux can provide, which,
security-wise, is not much).
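
To make that split concrete, here's a rough Docker Compose sketch.
The layout follows the description above, but every name and image
(nginx, a hypothetical wwsympa-fcgi image, mariadb, a syslog
forwarder) is a placeholder for illustration, not an official Sympa
setup:

    # docker-compose.yml (illustrative sketch only)
    services:
      web:                              # web server only, no Perl here
        image: nginx:stable             # or Apache httpd; FCGI/vhost config omitted
        ports:
          - "80:80"
      fcgi:                             # wwsympa FCGI workers, talking to "web" over TCP
        image: example/wwsympa-fcgi     # hypothetical image
        depends_on:
          - db
      db:                               # the RDBMS, with its own data volume
        image: mariadb:10.11
        environment:
          MARIADB_ROOT_PASSWORD: example  # placeholder credential
        volumes:
          - dbdata:/var/lib/mysql
      syslog:                           # forward logs elsewhere (or persist them)
        image: example/syslog-forwarder # hypothetical image
    volumes:
      dbdata:

With a layout like that the "web" container can be switched from
Apache httpd to Nginx (or updated on its own) without touching the
worker or database containers, which is exactly the point above.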

The (claimed) upside of all this complexity is that you can easily
update or replace individual containers (i.e., components), package
them up and run them elsewhere, and gain better horizontal
scalability (e.g. add more FCGI workers).
The obvious price is the all-important management and orchestration
of all those moving, often stateless, parts.
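
For instance, sticking with the placeholder names from the sketch
above, the FCGI worker service could be scaled out either with
"docker compose up -d --scale fcgi=4" or, with newer Compose
versions, declaratively:

    # sketch; "fcgi" is the placeholder worker service from above
    services:
      fcgi:
        image: example/wwsympa-fcgi   # hypothetical image
        deploy:
          replicas: 4                 # run four worker containers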

Whether there's a /need/ for Sympa deployers to be running Sympa that
way I couldn't say. Whether there's a need for the Sympa project to
be providing such a setup I don't know, either (but I'm sceptical).


The *alternative* approach (as presented by Matthew C.) is treating
containers as VMs and stuffing everything into one image. That's
certainly easier to do from the traditional starting point, but it
may not let you reap all the (claimed) benefits of containerisation.
I.e., here the question is "why bother with containers at all?"
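
In Compose terms that approach collapses to a single service (again
only a sketch with placeholder names), typically with an init or
process supervisor inside the image so the MTA, the Sympa daemons and
the web server can run side by side:

    # docker-compose.yml ("VM-style" single container; placeholders)
    services:
      sympa-aio:
        image: example/sympa-all-in-one   # hypothetical all-in-one image
        init: true                        # still worth having a proper PID 1
        ports:
          - "25:25"
          - "80:80"
        volumes:
          - sympadata:/var/lib/sympa      # persist lists, archives, etc.
    volumes:
      sympadata: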

The technology doesn't limit you to either model. What is the goal?
Making things scale better? Getting something published that others
can easily play with? The right approach will differ significantly
depending on those aims.

-peter


