http://blog.dscpl.com.au/2014/12/hosting-python-wsgi-applications-using.html is a good starting point.
It would be nice to support a development config (supplanting our Vagrant image) as well as a production-ready config (for which we don't have any good docs/images currently).
We need to determine how we want to run the different services (mongo, solr, taskd, mail listener, webapp), since docker prefers one process per container. E.g. the default init scripts for mongo won't run inside docker. We could have a different container for each service, which would illustrate how they can be separate (good for production, complex for local). We might then need a tool like `fig` or `decking` to help orchestrate all those containers together. Or we could make it all run on a single host with one of the phusion base images or other workarounds.
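For illustration, a per-service setup in a fig/compose-style YAML file could look roughly like the sketch below; the build paths, port, and the solr/taskd details are placeholder assumptions, not an actual config we have.

```yaml
# Hypothetical fig.yml / docker-compose.yml: one container per service.
# Build paths, ports, and the solr/taskd entries are placeholders.
mongo:
  image: mongo
  volumes:
    - /allura-data/mongo:/data/db   # keep data outside the container
solr:
  build: solr_config                # assumed directory with a solr Dockerfile
web:
  build: .                          # image with the Allura code and virtualenv
  links:
    - mongo
    - solr
  ports:
    - "8080:8080"                   # assumed webapp port
taskd:
  build: .                          # taskd could run from the same image as web
  links:
    - mongo
```

Something like this keeps each service separate (the production-friendly layout) while a single `fig up` / `docker-compose up` still brings the whole stack up for local development.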
Fig seems like a strong contender.
I like the fact that it's python/yaml based and can aggregate logs.
I also like that it's now an official docker project.
panamax has also been on my radar.
At PyCon I learned about supervisord as a way to run multiple services from a single command. This could be used if we want to have a single docker container run everything. https://docs.docker.com/articles/using_supervisord/

However, it sounds like the better way to use Docker is as intended: one container per service. This would be needed anyway for us to have a realistic production deployment option.
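As a sketch of that single-container approach (following the linked article), the supervisord config could look roughly like this; the paths and program entries are placeholders, not something we currently ship:

```ini
; Hypothetical supervisord.conf for running everything in one container.
; Paths are placeholders for wherever the image installs mongo and the Allura virtualenv.
[supervisord]
nodaemon=true

[program:mongod]
command=/usr/bin/mongod

[program:taskd]
command=/allura/env/bin/paster taskd /allura/Allura/development.ini

[program:web]
command=/allura/env/bin/paster serve /allura/Allura/development.ini
```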
Fig has been superseded by Docker Compose, which is probably the way we should go.
I've pushed some earlier work to branch db/7806. This is pretty much a conversion of the INSTALL file and tries to do everything on one box, which is not how we want to do it. Notably, `service mongod start` doesn't work, since the docker base ubuntu images have their services system disabled intentionally.

Closed #773, #778, #779, #780.
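The usual workaround for that is to skip the init scripts entirely and run the daemon itself as the container's foreground command. A minimal sketch (the service/image names are just for illustration):

```yaml
# Instead of `service mongod start`, run mongod directly as the container's main process.
mongo:
  image: mongo
  command: mongod   # foreground daemon, no init system involved
```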
ib/7806: Created configuration suitable for development. See INSTALL-docker.markdown for details.

I had a few interruptions that caused me to rebuild the 'web' image. (Two were the VirtualBox VM getting into an "aborted" state; one was `pip install` locking up when my laptop hibernated.) Running the 'build' step more than once seemed to take a while. Is there a way to save that image? Perhaps related: in docker-compose.yml there is an "allura_web" image referenced a few times. I assume that comes from the "web" declaration in the same file?

I got this error after 'web' was built but the 'pip' command was incomplete. The container was stopped and I couldn't start it, forcing me to rebuild. This might be a corner case, not sure, but it happened to me.

And then I got stuck on this error. I didn't try too hard to fix it, but this is where I left off my testing.
In theory you don't need to run the 'build' stage if it has successfully completed at least once. And docker saves intermediate images and reuses them on rebuild if the command in the Dockerfile has not changed, so a second rebuild should be faster. This is not true for some commands, e.g. the solr image always `wget`s solr.

I don't know what we can do regarding your first interruption, but the second one (pip install) implies that your image was already built, so you should be able to skip the "build" stage in that case and just re-run `pip install`.

Regarding "paster not found": maybe deleting the existing containers and then trying to run `pip install` again would help (`$ docker-compose rm`)? It should not require rebuilding the image; you should already have the image in your cache.

That's right, "allura_web" comes from the "web" image. Compose uses the current directory name plus the service name to name the image, so if your allura code is not in an "allura" directory you should use `-p allura` for every `docker-compose` command. Unfortunately they don't provide a way to refer to the current project name from the config file, so "allura_web" is just hardcoded there.

One possibility is to upload a prebuilt image to Docker Hub and use that in our docker-compose.yml, so that you fetch it once and reuse it from cache whenever needed, but that's not really different from "build it once and reuse from cache whenever needed".

Regarding the mongo error: do you have enough disk space available on your VirtualBox VM? It seems like mongo can't allocate the files it needs.
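To make the image naming concrete, the pattern in docker-compose.yml is presumably along these lines (the taskd entry here is an assumption; only the "web"/"allura_web" pairing is taken from the discussion above):

```yaml
web:
  build: .            # Compose builds this and tags it <project>_web, i.e. allura_web
taskd:
  image: allura_web   # other services reuse the image built for "web", hence the hardcoded name
```

And if the checkout isn't in a directory named "allura", something like `docker-compose -p allura up -d` keeps the project name, and therefore the `allura_web` image name, consistent.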
I'm past the rebuild & "paster not found" issues, so I won't try to reproduce them again now. Docker Hub will be good to use at some point I think, but I agree it's not really needed now.

For mongo: I've got a 20G boot2docker image. `docker-compose run mongo df -h` says:

And mongo wants >3G on /data/db, and /data comes from the /allura-data volume, right? Where is that set up?

This helped: `command: mongod --smallfiles`

I'm still curious to learn where /allura-data is set up.
In docker-compose.yml, under the "mongo" section. It means /allura-data/mongo from the host system (in your case the boot2docker VM) is mounted to /data/db in the container. If the host system path is not available, it is created by Docker Compose automatically.

Thanks. Everything seems to be working well now.
Should we put `command: mongod --smallfiles` in the "mongo" section of docker-compose.yml? It was necessary for me.

The `"paster": executable file not found in $PATH` error I reproduced now. It happens if you try to run `docker-compose up -d` before running the 3 ad hoc 'web' commands. Perhaps that could be handled a bit more gracefully, but it was an error on my part, due to things aborting and not being exactly sure how to get started again.

I think we should put the mongo command in docker-compose.yml. I guess other boot2docker deployments will hit this issue too, so it is worth preventing by default.

The "paster" error is just due to the python requirements not being installed inside docker. I'm not sure what we can do about that; maybe create a helper script that runs all three commands and checks their return status?
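A sketch of such a helper script, assuming the three steps are `docker-compose run` invocations against the 'web' service (the exact commands are placeholders; the real ones are in INSTALL-docker.markdown):

```sh
#!/bin/sh
# Hypothetical helper: run the ad hoc 'web' setup commands in order and stop
# at the first failure. The commands below are placeholders for the real steps
# documented in INSTALL-docker.markdown.
set -e

docker-compose run web pip install -r requirements.txt
docker-compose run web paster setup-app Allura/development.ini
docker-compose run web paster ensure_index Allura/development.ini

echo "All setup steps completed; you can now run: docker-compose up -d"
```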
Merged, including `--smallfiles`.
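For reference, the merged "mongo" section presumably ends up looking roughly like this (a sketch combining the volume mapping and command discussed above, not a copy of the actual file):

```yaml
mongo:
  image: mongo
  volumes:
    - /allura-data/mongo:/data/db    # host path (the boot2docker VM) mounted into the container
  command: mongod --smallfiles       # avoids preallocating multi-GB data/journal files
```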