
It may be a matter of opinion, but advocating running cron, sshd, and so on in your containers, let alone in every single one by providing a base image to do so, seems plain wrong.

Let's take an example. You have Nginx, a web app, and a database. You can put everything in the same container or not. If you choose to put everything in different containers, you will be able to use tools at the Docker level to manage them (e.g. replace one of those processes).
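
As a sketch of what those Docker-level tools look like (image and container names here are hypothetical), the three-container split and a Docker-level replacement of one piece might be:

```shell
# Run each piece in its own container (names and images are illustrative)
docker run -d --name db postgres
docker run -d --name app --link db:db myorg/webapp
docker run -d --name web --link app:app -p 80:80 nginx

# Replace just the web app, without touching nginx or the database
docker stop app && docker rm app
docker run -d --name app --link db:db myorg/webapp:v2
```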

And the fundamental idea is that we expect to have plenty of Docker images around that you can pick and play with, and those Docker-level tools will be able to manage all those things.

Now if you put everything in the same container, you're back to square one, reinventing the tools to manage those individual processes. You can say that you don't need to reinvent anything, because you're used to full-fledged operating systems. Still, if you have a good story for deploying containers on multiple hosts, shipping logs across those hosts, and so on, the road will be more straightforward when you decide to use multiple hosts.

This is about uniformity. I want processes (and the containers around them) and hosts, that's it. I don't want additional levels; I don't want processes arbitrarily grouped inside some VMs (or containers), plus hosts. Two levels instead of three.



Right, cron and sshd are open for debate, but at the very least you have to make your PID 1 behave correctly by reaping adopted child processes. That is a major part of baseimage-docker.
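
A minimal sketch of the reaping a correct PID 1 has to do (plain Python for illustration, not baseimage-docker's actual implementation): orphaned processes get re-parented to PID 1, and unless it wait()s on them when they exit, they linger as zombies.

```python
import os

def reap_children():
    """Reap every child that has already exited, without blocking.

    A process running as PID 1 inherits orphans from the whole
    container; calling waitpid() on them is what prevents zombies.
    """
    reaped = []
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            break          # no children left at all
        if pid == 0:
            break          # children exist, but none has exited yet
        reaped.append(pid)
    return reaped
```

A real init would call something like this from a SIGCHLD handler or its main loop, and also forward signals to the main application process.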

Baseimage-docker is not advocating putting everything in the same container. It's advocating putting all the necessary, low-level services in the same container. What if your app happens to use a library that needs to schedule things to run periodically using cron? To me it doesn't make sense to split that cron job to another container. The app might physically consist of multiple processes and components, but I think it should logically behave as a single unit.

For stuff like Nginx and the database, it's not so clear what the right thing to do is. It depends on your use case. I don't think that putting those major services in the same container is always correct (though it might be), but I also don't think that splitting them out into separate containers is always the right thing to do.

You say that putting stuff in the same container puts us back to square one. I think splitting them up puts us back to square one. Your base OS already runs all your processes as single units. You have to worry about each one of them separately, resulting in lots of moving parts that all increase deployment complexity. The beauty of Docker should be that you can group things. If you don't group things, then why would you be using Docker? You might as well apt-get install your app and have it run as a normal daemon.

One use case where it really, really makes sense to put everything in the same container: when distributing an app to end users who have little to no system administration knowledge. For example, what if you want to distribute the Discourse forum software? It depends on Rails, Nginx and PostgreSQL. Users already have a lot of trouble installing Ruby, running 'bundle install', setting up Nginx and setting up PostgreSQL. Imagine if they could just 'docker run discourse' and it immediately listened on port 80, or whatever port they prefer, with the database and everything else already taken care of for them.
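
Assuming such an all-in-one image were published under the name `discourse` (hypothetical), the whole end-user story would reduce to a single command:

```shell
# Listen on port 80
docker run -d -p 80:80 discourse

# ...or map whatever host port they prefer to the container's port 80
docker run -d -p 8080:80 discourse
```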


I guess we both understand things well enough to know that the lines to draw are not rigid. That being said, here is my take on what you say.

An app should logically behave as a single unit. I would say that's true, and that unit is a cluster of containers. Docker is not yet ready as a tool to manage clusters of containers, but I believe it will be. In the meantime, tools like Fig or Gaudi are exploring the design space.
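
For instance, Fig describes such a cluster in a single file; a hypothetical Nginx + web app + database setup might be sketched like this (service names and images are illustrative):

```yaml
web:
  image: nginx
  ports:
    - "80:80"
  links:
    - app
app:
  build: .
  links:
    - db
db:
  image: postgres
```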

You say that having everything separate is back to square one, because you have to manage things separately. My opinion is that we should develop tools to manage clusters of containers, not cram things into a single one to make them fit (I'm not trying to be harsh, sorry if it sounds that way). If you use Docker to group things at the container level, instead of at the cluster-of-containers level, what should we do if I want to share something with you (a program)? I can be nice and provide a Dockerfile, but you would still have to fold it into your existing "logical single unit", thus losing the benefits of, e.g., dependency isolation.

The end-user distribution case is a good one, where the limits will depend on what you really want. For instance, if you don't expect people to extend your app by adding additional processes, why not. But I think it is still a workaround for the cluster-level tool I keep talking about.

I am using Docker to create a cluster of containers (for https://reesd.com). Since the infamous cluster-level tool of my dreams doesn't exist yet, I'm still relying on Bash scripts (because I feel like exploring my options and don't want to start writing a solidified tool). The script is pretty simple: a bunch of `docker run -d`, saving container IDs and IPs around (this could be replaced by `docker inspect` and such).
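
The "saving container IDs and IPs" part can be done by parsing `docker inspect` output, which is a JSON array with one object per container. A small helper, assuming the usual `Name` / `NetworkSettings.IPAddress` fields (this is a sketch, not the actual Reesd script):

```python
import json

def container_ips(inspect_output):
    """Map container name -> IP address from `docker inspect` JSON.

    `docker inspect <name>...` prints a JSON array; each entry carries
    the container name (with a leading slash) and its network settings.
    """
    ips = {}
    for container in json.loads(inspect_output):
        name = container["Name"].lstrip("/")
        ips[name] = container["NetworkSettings"]["IPAddress"]
    return ips
```

You would feed it the output of something like `docker inspect web app db` from the wrapper script.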

That script is written so it can be run next to itself multiple times, so I can have multiple instances of the whole Reesd service on my laptop. To deploy, I run the same script, possibly next to the live one. I have additional scripts to e.g. replace one specific container (say, the web app). So really, when talking about uniformity, I want to be able to run Reesd on my laptop, or on multiple machines, and possibly side by side, using the same Docker features.

A possibility that I haven't tried regarding your last paragraph is the docker-in-docker feature.


Very interesting viewpoint. Yes, if Docker performs cluster management right then that would change a lot of things. I see that the CoreOS guys released Fleet today, possibly in response to this article. I'll have a look at this later. https://news.ycombinator.com/item?id=7260596



