Dillon Walls wants to merge 2 commits from /u/dill0wn/allura/ to master, 2025-08-29
Our Dockerfile does not really follow Docker conventions. Primarily, many extra steps must be run against the image before it is usable; the convention is that you should more or less be able to `docker run <image>` and things will just happen.
Here are a few ideas, and some WIP, for making small iterative improvements here.
- Move the setup steps from `scripts/init-docker-dev.sh` directly into the Dockerfile

Other Improvements
- To complete data initialization, the documentation says you should run `docker compose run --rm taskd paster setup-app docker-dev.ini`
  - It's unclear whether this is required in order for Allura to run at all. Should we also add `-e ALLURA_TEST_DATA=False` to the invocation? And is this something that could also be moved directly into the Dockerfile?
- The Dockerfile bind-mounts the source locally, when it's more customary to `COPY` or `ADD` the code into the image itself
- Many of the orchestrated containers share configuration items that could be simplified if we took a different approach to volumes and bind mounts
- Update the documentation to reflect docker changes
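A minimal sketch of what the first two improvements above might look like. Everything here is illustrative: the base image, paths, and requirements file are hypothetical, and `docker-dev.ini` is taken from the setup command quoted above.

```dockerfile
# Hypothetical sketch: COPY the source into the image instead of
# bind-mounting it, and fold the one-time setup into the build so
# that `docker run` yields a usable container with no extra steps.
FROM python:3.11-slim

WORKDIR /allura

# Bake the code into the image (replaces the local bind mount)
COPY . /allura
RUN pip install -r requirements.txt

# Skip loading sample data, per the -e ALLURA_TEST_DATA=False idea above
ENV ALLURA_TEST_DATA=False
RUN paster setup-app docker-dev.ini
```

One caveat: if `setup-app` needs MongoDB reachable, it can't run at image build time and would instead belong in an entrypoint script that runs on first container start.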
Other things we could do
- Build images and deploy them to a registry (does Apache have something on Docker Hub?)
- Leverage Multi-Stage builds
  - Build the venv in a separate stage and copy the resulting `virtual_env` dir into the final image. This has the benefit of never contaminating the main image with all of the dev dependencies required to build some of the wheels. It would also be worth slimming down the set of installed packages.
- `add-apt-repository` installs python3.12 (or whatever the current Ubuntu system Python version is) regardless of which deadsnakes package you end up using, requiring you to delete the system python3.12 packages afterwards.
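The multi-stage idea above could look roughly like this; the stage name, paths, and package lists are hypothetical, not the project's actual values:

```dockerfile
# Stage 1: build the virtualenv with the heavyweight build deps
# (compilers, headers) needed to compile some wheels
FROM ubuntu:24.04 AS builder
RUN apt-get update && apt-get install -y python3-venv build-essential libffi-dev
COPY requirements.txt .
RUN python3 -m venv /opt/venv && /opt/venv/bin/pip install -r requirements.txt

# Stage 2: copy only the finished venv; build-essential and friends
# never land in the final image
FROM ubuntu:24.04
RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /opt/venv /opt/venv
ENV PATH=/opt/venv/bin:$PATH
```

Because each stage starts from a fresh base, only what is explicitly `COPY --from`-ed survives into the final image, which addresses the dev-dependency contamination noted above.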
Commit | Date
---|---
[51ebef] (dw/docker-improved) fixup! remove necessity of init-docker-dev.sh, all setup in Dockerfile | 2025-08-29 21:48:46
 | 2025-08-29 18:10:27