Doing a one-click merge of a very large repo can potentially cause problems if the working dir is bigger than the tmp dir mount supports.
`def merge` can use a try/finally so that `shutil.rmtree` always runs even if the clone fails (e.g. out of disk space). It could also use a shared clone (`_shared_clone`) so the object files don't have to get copied, reducing disk usage and execution time.

It'd be nice if we could set a max size limit and check that in `can_merge` or `merge_allowed`, but there doesn't seem to be a way to check the size of a working copy ahead of time. There is `git count-objects -vH`, but that's just the repo size, not the working copy.
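The cleanup idea could look something like this — a minimal sketch, where `clone_repo` and `do_merge` are hypothetical stand-ins for whatever `merge` actually does:

```python
import shutil
import tempfile


def merge_with_cleanup(clone_repo, do_merge):
    """Clone into a temp dir and always clean it up, even on failure.

    `clone_repo` and `do_merge` are hypothetical callables standing in
    for the real clone and merge steps.
    """
    tmp = tempfile.mkdtemp(prefix='merge-')
    try:
        clone_repo(tmp)       # may raise, e.g. out of disk space
        return do_merge(tmp)
    finally:
        # Runs on success *and* failure, so a failed clone never
        # leaves a huge working copy sitting in the tmp mount.
        shutil.rmtree(tmp, ignore_errors=True)
```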
The git documentation says `git clone --local` and `git clone --reference` can reduce the size of a clone, but in our testing neither produced a repository smaller than the one created by `_shared_clone`.
There's no normal way to get the repo size before cloning; it only seems possible by adding a post-receive hook: http://stackoverflow.com/questions/28678223/estimate-the-size-of-a-git-repository-before-cloning-it
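On the server side, where a local copy of the bare repo already exists, the plain-text output of `git count-objects -vH` could be parsed like this — a sketch only, and as noted above it measures the object store, not the checked-out working copy:

```python
def parse_count_objects(output):
    """Parse `git count-objects -v` / `-vH` output into a dict.

    Values are kept as strings, since with -H fields like
    `size-pack` are human-readable (e.g. "4.50 MiB").
    """
    stats = {}
    for line in output.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            stats[key.strip()] = value.strip()
    return stats


# Sample -vH output for illustration:
sample = """count: 12
size: 48.00 KiB
in-pack: 3456
size-pack: 4.50 MiB
"""
print(parse_count_objects(sample)['size-pack'])  # "4.50 MiB"
```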
https://forge-allura.apache.org/p/allura/git/merge-requests/80/
What about using a shared clone? Sounds like that should work, right?
I've merged the try/finally commit since that part is good.
https://forge-allura.apache.org/p/allura/git/merge-requests/84/