The map variant established multiple database connections in each worker,
exceeding the maximum number of connections configured in PostgreSQL.
apply_async doesn't have this problem with our wrapper in DakProcessPool.
However, as a regression, we no longer have a timeout and always have to wait
for the job to finish. This could be worked around by using a timeout on the
individual results (see the sketch after the diff below).
upload_ids = [ u.id for u in init(session) ]
session.close()
- p = pool.map_async(do_pkg, upload_ids)
+ for upload_id in upload_ids:
+ pool.apply_async(do_pkg, [upload_id])
pool.close()
- p.wait(timeout=600)
+ #p.wait(timeout=600)
+ pool.join()
for htmlfile in htmlfiles_to_process:
with open(htmlfile, "w") as fd:
fd.write(timeout_str)
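
A possible shape for the workaround mentioned above: keep the AsyncResult
returned by each apply_async() call and bound the wait for each one with
AsyncResult.get(timeout=...). This is only a minimal sketch against the plain
multiprocessing.Pool API, not DakProcessPool; upload_ids and do_pkg here are
placeholders, and 600 seconds mirrors the old map_async timeout.

    from multiprocessing import Pool, TimeoutError

    def do_pkg(upload_id):
        # placeholder for the real worker function
        return upload_id

    if __name__ == "__main__":
        upload_ids = [1, 2, 3]  # placeholder data
        pool = Pool()
        # keep one AsyncResult per job instead of a single map result
        results = [pool.apply_async(do_pkg, [upload_id]) for upload_id in upload_ids]
        pool.close()
        for result in results:
            try:
                # bound the wait for each job individually
                result.get(timeout=600)
            except TimeoutError:
                # the job exceeded the timeout; note the worker itself keeps
                # running and pool.join() below will still wait for it
                pass
        pool.join()

Note that this only bounds how long we block on each result; a job that hangs
still keeps its worker busy until pool.join() returns, so it is a partial
mitigation rather than a full replacement for the old wait(timeout=600).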