
Why not just use

CMD ["/sbin/init"]

And start your app through an init.d script?

The article says "upstart" is designed to be run on real hardware and not a virtualised system. If that is true, then perhaps there is value in baseimage-docker, but details are lacking.



So why don't you try it and see whether it works?

One of the things /sbin/init does is check and mount your filesystems. But you can't do that in an unprivileged Docker container, because you don't have direct hardware access. That's only one example of where things go wrong: the entire init process is full of code that assumes direct hardware access.

Even when your container is started with --privileged, you still can't do that. The host OS is already controlling the hardware.

Also, /sbin/init usually does not like having SIGTERM sent to it, which is what 'docker stop' does. Depending on the implementation, /sbin/init either terminates uncleanly (causing the entire container to be killed uncleanly) or ignores the signal outright (causing the 'docker stop' timeout to kick in, also causing the container to be killed uncleanly).


It depends on the init system, however.

systemd makes an effort to ensure that running /sbin/init inside of a container works and can be detected by the software and services underneath it[1]. In general this means that if you take a copy of Arch or Fedora and try to run it inside of a container it works properly without any hacks.

For your own services you can also start to do the right things by using the virtualization detection code[2] that is built in. The most immediately useful directives are ConditionVirtualization=container and ConditionVirtualization=!container, which tell your services to run or not run depending on whether you are in a container or on real hardware.

[1]: http://www.freedesktop.org/wiki/Software/systemd/ContainerIn... [2]: http://www.freedesktop.org/software/systemd/man/systemd.unit...
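As a sketch of what the parent describes (the unit name, description, and binary path here are made up), a service that should only run on real hardware could carry:

```ini
# Hypothetical .service fragment: systemd skips this unit when its
# virtualization detection reports that we are inside a container.
[Unit]
Description=Example hardware-only service
ConditionVirtualization=!container

[Service]
ExecStart=/usr/local/bin/hw-daemon
```

Conversely, ConditionVirtualization=container restricts a unit to container environments.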


ConditionVirtualization=container seems like a "with great power comes great ability to screw up in subtle and horrible ways" sort of feature, wherein when you need it, you really need it, but most of the time, a different approach will be vastly preferable.


Absolutely. This was in the context of the parent talking about doing things without certain privileges or skipping unnecessary steps.


Same with Gentoo's homebrew init system OpenRC, which has had support for running in various kinds of containers for a while now.


Or just read the source code. http://svn.savannah.nongnu.org/viewvc/*checkout*/sysvinit/tr...

Init doesn't care if the OS it's running on is physical or virtual at all. That's why stock init works on both virtual and physical machines. Init doesn't have to mount anything to work. In fact, it's common in embedded environments that init does basically nothing but reap children and start a single bash script.
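A minimal init of the embedded kind described above can be sketched in a few lines of shell (a hypothetical illustration; RC_TASK stands in for the single startup script such an init would run):

```shell
#!/bin/sh
# Hypothetical minimal embedded-style init: start one task, then sit in
# wait. Nothing here probes hardware or mounts filesystems.
RC_TASK="${RC_TASK:-sleep 0.1}"   # stand-in for the single startup script

$RC_TASK &       # launch the single startup task in the background
CHILD=$!

wait "$CHILD"    # as PID 1, wait is also where reparented orphans get reaped
STATUS=$?
```

A real init would loop on wait forever rather than exit when its one child does.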

Sysvinit is designed never to fail, because it's supposed to keep the entire operating system functioning. But it can and does exit; how do you think your system shuts down? Sysvinit responds to several different signals, but not SIGTERM. If you want it to stop, you can use 'telinit 0' or 'telinit a/b/c' (which probably only work for privileged users), send it SIGPWR or SIGINT, or use the /dev/initctl control channel.


Maybe that is how the init you linked works, but the Docker instructions take you through the basics of setting up an Ubuntu image, where upstart is used (not sysvinit). If you stray off the path, or try to upgrade the base image to a newer release, you find very quickly that upstart does not tolerate running in a container very well, and that a lot of things expect it to be running and have to be worked around. The OP is targeting Ubuntu users (or users of the 'base' image distributed in the Docker registry).

Not to mention, Docker places its own hooks in /sbin/init (still true?), so if you have an init you want to run, you had better put it somewhere else inside your image, because the file will be overridden when your app container starts.

This is a good discussion that starts in about the same place: https://github.com/dotcloud/docker/issues/223


Oh jesus that's a huge mess. So I guess it turns out Docker was designed to do a bunch of wacky things under the hood because they never expected users to use their tool in different ways. It looks like as a fix some people are running full-blown copies of Ubuntu under Docker (because somehow that's better than OpenVZ??)


I'm not sure how you infer this from what I said?

Even Ubuntu has plans to ditch upstart long-term, following Debian's lead in adopting systemd. Nobody is running "full-blown copies of Ubuntu" under docker, because full-blown would imply "with upstart": using your distribution's built-in /sbin/init is precluded both by the fact that docker overrides it and by the fact that upstart doesn't work well in a container.

So people are running parts of Ubuntu without understanding that those parts sometimes expect to be child processes of upstart, and expect upstart to always be running; in docker-land that's not a reasonable expectation (and the OP addresses this by adding a more reasonable init, which probably handles most of these concerns).

I just built baseimage-docker and am trying to get into it with ssh now. I can't imagine giving anyone a container and not letting them ssh into it, and I've added ssh to containers before in ways I knew were unsupported or broken, so it's good to have this example. It looks like they are indeed doing it right.


I think that some time since the issue I linked you from ~11 months ago, they stopped overriding /sbin/init and started putting their hooks in /.dockerinit instead, since many folks tend to put something they care about in /sbin/init.

Here's the reference: https://github.com/dotcloud/docker/pull/898

So yes, there is some mess associated with changing your way of business over to doing containers, but a lot of these problems were solved, one way or another, over six months ago.

We're left arguing about the people who didn't catch the solution, don't know the expected way of doing things, haven't RTFM'ed or just aren't interested in making permanent solutions out of their docker containers. That is one of the strengths; docker lowers the cost of deployment (in test) when you can take an image that solves only the problem you care about and deploy it into a disposable container.

You will always get the occasional "I want to put this in docker" from someone who maybe isn't understanding, and of course some times that person is your boss. Then sometimes what they're asking for is perfectly reasonable like "let's use it for an SSH forwarding endpoint". Docker (and CoreOS) certainly put some hurdles in front of generally easy and perfectly normal ideas, for better or for worse.

Your first challenge in this case will be to get your container listening on port 22 of some public interface somewhere, since containers are designed not to be exposed like that until you RTFM. You know whose fault it will be when something is badly configured after you skipped straight ahead to the section on forwarding ports without spending any time on the rest of the instructions.
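For reference, publishing a container's sshd on the host looks roughly like this (a sketch; the image and container names are placeholders, and the host port choice is arbitrary):

```shell
# Hypothetical: map host port 2222 to the container's port 22
docker run -d -p 2222:22 --name ssh-endpoint my-baseimage

# then, from the host:
ssh -p 2222 root@localhost
```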

They are definitely curating the GitHub Issues database, you have to give them that.


And as long as we're linking to the source code of init apps:

https://github.com/dotcloud/docker/blob/master/dockerinit/do...


It isn't the core init system, the /sbin/init executable itself, that's causing problems. It's all the scripts that are part of the init system as a whole: scripts that try to run fsck, mount volumes, populate /dev, and so on.

As for signals: OK, it's great that it responds to signals, but that's not the point. `docker stop` sends SIGTERM, so your PID 1 must respond to SIGTERM and forward the signal to all other processes, period.
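The forwarding requirement can be sketched as a tiny PID-1-style entrypoint (hypothetical; `sleep 30` stands in for the real workload, and the self-kill below simulates `docker stop` delivering SIGTERM):

```shell
#!/bin/sh
# Hypothetical minimal entrypoint sketch: relay the SIGTERM that
# `docker stop` sends on to the workload instead of ignoring it or
# dying uncleanly.
sleep 30 &        # stand-in for the real app process
CHILD=$!

# On SIGTERM, pass the signal along to the workload.
trap 'kill -TERM "$CHILD" 2>/dev/null' TERM

# Simulate `docker stop`: deliver SIGTERM to ourselves shortly.
( sleep 0.2; kill -TERM $$ ) &
HELPER=$!

# Block until the workload is gone; wait both reaps the child and is
# where the trap above gets a chance to run.
while kill -0 "$CHILD" 2>/dev/null; do
    wait "$CHILD" 2>/dev/null || true
done
wait "$HELPER" 2>/dev/null || true
```

In a real image the workload would be the app daemon, and the script would exit with the child's status rather than looping a demonstration.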


That's an easy fix. Provide an empty inittab and a false mtab (or link it to /proc/mounts). Almost every stock install image for every distro does this.
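In Dockerfile form, that workaround might look like this (a sketch; whether your image actually needs it depends on the init scripts involved):

```dockerfile
# Hypothetical fragment: give sysvinit an empty inittab and point mtab
# at the kernel's view of mounts, as described above.
RUN : > /etc/inittab \
 && ln -sf /proc/mounts /etc/mtab
```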

Sounds like Docker has a design flaw :) You could alternately just write a shell script that sends SIGPWR or uses /dev/initctl and then use `docker wait`. It would be better if Docker included support for running a custom executable to stop your container. Maybe they'll add support for it once enough people run into problems like this.
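Such a stop script might look like this with today's Docker (hedged: `docker kill --signal` did not exist when this thread was written, and the container name is whatever you assigned):

```shell
#!/bin/sh
# Hypothetical graceful-stop wrapper: ask the container's sysvinit to
# shut down via SIGPWR, then block until the container actually exits.
CONTAINER="$1"
docker kill --signal=SIGPWR "$CONTAINER"
docker wait "$CONTAINER"    # prints the container's exit status
```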


I take back my comment. I am not familiar with Docker; I was confusing it with a raw lxc container. When I set up an Ubuntu lxc image, I didn't have to do anything special for the init; that's probably because the Ubuntu image had worked out the kinks already.


No need to take it back, it was a fine question and good feedback.


Old System V init would work for this purpose, but not upstart.



