When a more or less big tracker import (~100 tickets) is run against a sandbox (with an nginx frontend), it fails with:
    $ time python allura_import_test.py data/sf100.json -u https://sf-psokolovsky-3014.sb.sf.net -a tckd18490839ab2c834b16b -s 7c43a517b310ab7477f251922282456fa184c6440f2769c76fecfece04b1a58125063030b166d4e8
    Importing 100 tickets
    Traceback (most recent call last):
      File "allura_import_test.py", line 93, in <module>
        res = cli.call(url, doc=doc_txt, options=json.dumps(import_options))
      File "allura_import_test.py", line 64, in call
        raise e
    urllib2.HTTPError: HTTP Error 504: Gateway Time-out

    real    0m31.727s
    user    0m0.300s
    sys     0m0.048s
That is, nginx or some other intermediate component times out after 30 seconds. So the current approach of providing the import data in ForgePlucker format as one big JSON document and then executing the import synchronously (so that status can be returned to the client) is not viable and needs to be replaced or augmented. The following choices can be proposed:
Choice 1 is by far the easiest to implement: it only requires what #1767 already queued up, with the chunks split down to a size of one ticket per request (see the sketch below).
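A minimal sketch of what choice 1 could look like on the client side, assuming the `cli.call` interface visible in the traceback above and a ForgePlucker-style `trackers`/`artifacts` layout for the export; both the function name and the document layout are assumptions for illustration, not the actual import script:

    import json

    def import_one_by_one(cli, url, export_path, import_options):
        """Submit a ForgePlucker export one ticket per request, so each
        request finishes well before the ~30s proxy timeout."""
        with open(export_path) as f:
            doc = json.load(f)

        # Assumed export layout: doc['trackers']['default']['artifacts']
        # holds the list of tickets.
        tracker = doc['trackers']['default']
        artifacts = tracker.pop('artifacts')
        options = json.dumps(import_options)

        for ticket in artifacts:
            # Rebuild a one-ticket document (chunk size 1) for each call.
            chunk = dict(doc)
            chunk['trackers'] = {'default': dict(tracker, artifacts=[ticket])}
            cli.call(url, doc=json.dumps(chunk), options=options)

As a side benefit, one ticket per request means a failed request pinpoints exactly which ticket broke, at the cost of ~100 round trips instead of one.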