from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
from urllib.request import urlopen
from time import perf_counter

def work(n):
    # Fetch the first 32 bytes of the page; the fragment makes each URL distinct
    with urlopen(f"https://www.google.com/#{n}") as f:
        contents = f.read(32)
    return contents

def run_pool(pool_type):
    with pool_type() as pool:
        start = perf_counter()
        # Consume the iterator so the timing covers the actual work
        results = list(pool.map(work, numbers))
        print("Time:", perf_counter() - start)
        print(results)

if __name__ == '__main__':
    numbers = [x for x in range(1, 16)]
    # Run the task using a thread pool
    run_pool(ThreadPoolExecutor)
    # Run the task using a process pool
    run_pool(ProcessPoolExecutor)
How Python multiprocessing works
In the above example, the concurrent.futures module provides high-level pool objects for running work in threads (ThreadPoolExecutor) and processes (ProcessPoolExecutor). Both pool types have the same API, so you can create functions that work interchangeably with either one, as the example shows.
We use run_pool to submit instances of the work function to the different types of pools. By default, each pool instance uses one thread or process per available CPU core. There's a certain amount of overhead associated with creating pools, so don't overdo it. If you're going to be processing lots of jobs over a long period of time, create the pool first and don't dispose of it until you're done. With the Executor objects, you can use a context manager to create and dispose of pools (with/as).
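For instance, here is a minimal sketch of that pattern (the worker function, batches, and worker count are illustrative, not part of the example above): one long-lived pool is created with a context manager and reused for several rounds of work before being shut down.

from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

if __name__ == "__main__":
    # Create the pool once and reuse it for every batch; the
    # context manager shuts it down when the block exits.
    with ThreadPoolExecutor(max_workers=4) as pool:
        first_batch = list(pool.map(square, range(10)))
        second_batch = list(pool.map(square, range(10, 20)))
    print(first_batch)
    print(second_batch)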
pool.map() is the function we use to subdivide the work. The pool.map() function takes a function along with a list of arguments to apply to each invocation of that function, splits the work into chunks (you can specify the chunk size, but the default is generally fine), and feeds each chunk to a worker thread or process.
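As a rough sketch of what specifying a chunk size looks like (the chunksize value here is arbitrary), you can pass it explicitly to a process pool. ThreadPoolExecutor accepts the same argument but ignores it, since chunking only matters when arguments have to be pickled and shipped to another process.

from concurrent.futures import ProcessPoolExecutor

def double(n):
    return n * 2

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Ship the 1,000 inputs to the workers in batches of 50
        # instead of one at a time, cutting inter-process overhead.
        results = list(pool.map(double, range(1000), chunksize=50))
    print(results[:5])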