I am trying to understand threading in Python. I've looked at the documentation and examples, but quite frankly, many examples are overly sophisticated and I'm having trouble understanding them.
How do you clearly show the tasks being divided for multi-threading?
Answers:
Since this question was asked in 2010, there has been a real simplification in how to do simple multithreading with Python, with map and pool.
The code below comes from an article/blog post that you should definitely check out (no affiliation): Parallelism in one line: A better model for day-to-day threading tasks. I'll summarize below; it ends up being just a few lines of code:
from multiprocessing.dummy import Pool as ThreadPool
pool = ThreadPool(4)
results = pool.map(my_function, my_array)
Which is the multithreaded version of:
results = []
for item in my_array:
results.append(my_function(item))
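To make the comparison concrete, here is a minimal runnable sketch of the same pattern (my_function and my_array are placeholder names from the snippets above; the sleeping function below is an invented stand-in for I/O-style work):

from multiprocessing.dummy import Pool as ThreadPool
import time

def my_function(item):
    time.sleep(0.1)  # stand-in for I/O work (network call, disk read, ...)
    return item * 2

my_array = list(range(20))

pool = ThreadPool(4)
results = pool.map(my_function, my_array)  # results come back in input order
pool.close()
pool.join()
print(results)

Because the work here is I/O-flavored sleeping rather than CPU work, the threaded version finishes in roughly a quarter of the sequential time with 4 workers.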
Description
Map is a cool little function, and the key to easily injecting parallelism into your Python code. For those unfamiliar, map is something lifted from functional languages like Lisp. It is a function which maps another function over a sequence.
Map handles the iteration over the sequence for us, applies the function, and stores all of the results in a handy list at the end.
Implementation
Parallel versions of the map function are provided by two libraries: multiprocessing, and also its little-known but equally fantastic stepchild: multiprocessing.dummy.
multiprocessing.dummy
Exactly identical to the multiprocessing module, but it uses threads instead (an important distinction: use multiple processes for CPU-intensive tasks; use threads for, and during, I/O).
multiprocessing.dummy replicates the API of multiprocessing, but it is no more than a wrapper around the threading module.
import urllib2
from multiprocessing.dummy import Pool as ThreadPool
urls = [
'http://www.python.org',
'http://www.python.org/about/',
'http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html',
'http://www.python.org/doc/',
'http://www.python.org/download/',
'http://www.python.org/getit/',
'http://www.python.org/community/',
'https://wiki.python.org/moin/',
]
# Make the Pool of workers
pool = ThreadPool(4)
# Open the URLs in their own threads
# and return the results
results = pool.map(urllib2.urlopen, urls)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
And the timing results:
Single thread: 14.4 seconds
4 Pool: 3.1 seconds
8 Pool: 1.4 seconds
13 Pool: 1.3 seconds
Passing multiple arguments (works like this only in Python 3.3 and later):
To pass multiple arrays:
results = pool.starmap(function, zip(list_a, list_b))
Or to pass a constant and an array:
results = pool.starmap(function, zip(itertools.repeat(constant), list_a))
If you are using an earlier version of Python, you can pass multiple arguments via this workaround.
(Thanks to user136036 for the helpful comment.)
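For instance, a self-contained sketch of both starmap calls (the add function, list_a, list_b, and constant are illustrative names, not from the original answer; requires Python 3.3+ as noted above):

import itertools
from multiprocessing.dummy import Pool as ThreadPool

def add(a, b):
    return a + b

list_a = [1, 2, 3]
list_b = [10, 20, 30]
constant = 100

pool = ThreadPool(2)
print(pool.starmap(add, zip(list_a, list_b)))                      # [11, 22, 33]
print(pool.starmap(add, zip(itertools.repeat(constant), list_a)))  # [101, 102, 103]
pool.close()
pool.join()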
Here's a simple example: you need to try a few alternative URLs and return the contents of the first one to respond.
import Queue
import threading
import urllib2
# Called by each thread
def get_url(q, url):
q.put(urllib2.urlopen(url).read())
theurls = ["http://google.com", "http://yahoo.com"]
q = Queue.Queue()
for u in theurls:
t = threading.Thread(target=get_url, args = (q,u))
t.daemon = True
t.start()
s = q.get()
print s
This is a case where threading is used as a simple optimization: each subthread is waiting for a URL to resolve and respond, in order to put its contents on the queue; each thread is a daemon (it won't keep the process up if the main thread ends; that's more common than not); the main thread starts all subthreads, does a get on the queue to wait until one of them has done a put, then emits the result and terminates (which takes down any subthreads that might still be running, since they are daemon threads).
Proper use of threads in Python is invariably connected to I/O operations (since CPython doesn't use multiple cores to run CPU-bound tasks anyway, the only reason for threading is not blocking the process while waiting for some I/O). Queues are almost invariably the best way to farm out work to threads and/or collect the work's results, by the way, and they're intrinsically threadsafe, so they save you from worrying about locks, conditions, events, semaphores, and other inter-thread coordination/communication concepts.
Comment: you may also want the join() method, since it makes the main thread wait until the threads are done without constantly consuming processor time checking a value. @Alex: thanks, this is exactly what I needed to understand how to use threading.
Comment: in Python 3, the Queue module name is replaced by queue; the method names are the same (the final lines become s = q.get() and print(s)).
Comment: @krs013 you don't need the join, because Queue.get() is blocking.
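Putting those comments together, a Python 3 port of the example above might look like this (a sketch, not part of the original answer):

import queue
import threading
from urllib.request import urlopen

# Called by each thread
def get_url(q, url):
    q.put(urlopen(url).read())

theurls = ["http://google.com", "http://yahoo.com"]
q = queue.Queue()
for u in theurls:
    t = threading.Thread(target=get_url, args=(q, u))
    t.daemon = True
    t.start()

s = q.get()  # blocks until the first thread performs a put
print(s)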
NOTE: For actual parallelization in Python, you should use the multiprocessing module to fork multiple processes that execute in parallel (due to the global interpreter lock, Python threads provide interleaving, but they are in fact executed serially, not in parallel, and are only useful when interleaving I/O operations).
However, if you are merely looking for interleaving (or are doing I/O operations that can be parallelized despite the global interpreter lock), then the threading module is the place to start. As a really simple example, let's consider the problem of summing a big range by summing subranges in parallel:
import threading
class SummingThread(threading.Thread):
def __init__(self,low,high):
super(SummingThread, self).__init__()
self.low=low
self.high=high
self.total=0
def run(self):
for i in range(self.low,self.high):
self.total+=i
thread1 = SummingThread(0,500000)
thread2 = SummingThread(500000,1000000)
thread1.start() # This actually causes the thread to run
thread2.start()
thread1.join() # This waits until the thread has completed
thread2.join()
# At this point, both threads have completed
result = thread1.total + thread2.total
print(result)
Note that the above is a very silly example, since it does absolutely no I/O and, due to the global interpreter lock, will be executed serially (albeit with interleaving, and the extra overhead of context switching) in CPython anyway.
Comment: to be precise, thread1 runs until the main thread blocks on thread1.join(); the same then happens with thread2; finally the main thread resumes and prints out the values the two threads accumulated.
Comment: why super(SummingThread, self).__init__()? As in stackoverflow.com/a/2197625/806988.
As others mentioned, due to the GIL, CPython can use threads only for I/O waits.
If you want to benefit from multiple cores for CPU-bound tasks, use multiprocessing:
from multiprocessing import Process
def f(name):
    print('hello', name)
if __name__ == '__main__':
p = Process(target=f, args=('bob',))
p.start()
p.join()
This starts a process that runs the function f. The main program, in parallel, now just waits with join until the process exits, and then continues. If the main part just exited, the subprocess might or might not run to completion, so doing a join is always recommended.
Comment: an expanded answer on the map function is here: stackoverflow.com/a/28463266/2327328
Just a note: a queue is not required for threading.
This is the simplest example I could imagine that shows multiple threads running concurrently.
import threading
from random import randint
from time import sleep
def print_number(number):
# Sleeps a random 1 to 10 seconds
rand_int_var = randint(1, 10)
sleep(rand_int_var)
print "Thread " + str(number) + " slept for " + str(rand_int_var) + " seconds"
thread_list = []
for i in range(1, 10):
# Instantiates the thread
# (i) does not make a sequence, so (i,)
t = threading.Thread(target=print_number, args=(i,))
# Sticks the thread in a list so that it remains accessible
thread_list.append(t)
# Starts threads
for thread in thread_list:
thread.start()
# This blocks the calling thread until the thread whose join() method is called is terminated.
# From http://docs.python.org/2/library/threading.html#thread-objects
for thread in thread_list:
thread.join()
# Demonstrates that the main process waited for threads to complete
print "Done"
Comment: instead of the second for loop, you can call thread.start() in the first loop.
Alex Martelli's answer helped me. However, here is a modified version that I thought was more useful (at least to me).
Update: works in both Python 2 and Python 3
try:
# For Python 3
import queue
from urllib.request import urlopen
except ImportError:  # The Python 3 imports failed; fall back to the Python 2 names
# For Python 2
import Queue as queue
from urllib2 import urlopen
import threading
worker_data = ['http://google.com', 'http://yahoo.com', 'http://bing.com']
# Load up a queue with your data. This will handle locking
q = queue.Queue()
for url in worker_data:
q.put(url)
# Define a worker function
def worker(url_queue):
queue_full = True
while queue_full:
try:
# Get your data off the queue, and do some work
url = url_queue.get(False)
data = urlopen(url).read()
print(len(data))
except queue.Empty:
queue_full = False
# Create as many threads as you want
thread_count = 5
for i in range(thread_count):
t = threading.Thread(target=worker, args = (q,))
t.start()
Comment: I get import Queue ModuleNotFoundError: No module named 'Queue'. I am running Python 3.6.5, and some posts mention that in Python 3.6.5 it is queue, but even after I changed it, it still doesn't work.
Given a function f, thread it like this:
import threading
threading.Thread(target=f).start()
To pass arguments to f:
threading.Thread(target=f, args=(a,b,c)).start()
Comment: there is an is_alive method, but I couldn't figure out how to apply it to the thread. I tried assigning thread1 = threading.Thread(target=f).start() and then checking it with thread1.is_alive(), but thread1 is populated with None, so no luck. Do you know if there is any other way to access the thread?
Comment: you need to assign the thread object to a variable and then start it using that variable: thread1 = threading.Thread(target=f) followed by thread1.start(). Then you can do thread1.is_alive().
Comment: once the thread has finished, the thread1.is_alive() test returns False.
I found this very useful: create as many threads as you have cores and let them execute a (large) number of tasks (in this case, calling a shell program):
import Queue
import threading
import multiprocessing
import subprocess
q = Queue.Queue()
for i in range(30): # Put 30 tasks in the queue
q.put(i)
def worker():
while True:
item = q.get()
# Execute a task: call a shell program and wait until it completes
subprocess.call("echo " + str(item), shell=True)
q.task_done()
cpus = multiprocessing.cpu_count() # Detect number of cores
print("Creating %d threads" % cpus)
for i in range(cpus):
t = threading.Thread(target=worker)
t.daemon = True
t.start()
q.join() # Block until all tasks are done
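Since the workers are daemon threads, they are simply killed when the main thread returns from q.join() and the process exits. If you prefer an explicit shutdown instead, a common alternative (a sketch in Python 3 syntax, not from the original answer) is to send each worker a sentinel value:

import queue
import threading

q = queue.Queue()
for i in range(30):
    q.put(i)

def worker():
    while True:
        item = q.get()
        if item is None:  # sentinel: no more work for this thread
            q.task_done()
            break
        print(item)       # do the real task here
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for _ in threads:
    q.put(None)           # one sentinel per worker
for t in threads:
    t.join()              # clean, explicit shutdown instead of daemon threads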
Python 3 has the facility of launching parallel tasks. This makes our work easier.
The following gives an insight:
ThreadPoolExecutor example (source)
import concurrent.futures
import urllib.request
URLS = ['http://www.foxnews.com/',
'http://www.cnn.com/',
'http://europe.wsj.com/',
'http://www.bbc.co.uk/',
'http://some-made-up-domain.com/']
# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
with urllib.request.urlopen(url, timeout=timeout) as conn:
return conn.read()
# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
# Start the load operations and mark each future with its URL
future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
for future in concurrent.futures.as_completed(future_to_url):
url = future_to_url[future]
try:
data = future.result()
except Exception as exc:
print('%r generated an exception: %s' % (url, exc))
else:
print('%r page is %d bytes' % (url, len(data)))
ProcessPoolExecutor(源)
import concurrent.futures
import math
PRIMES = [
112272535095293,
112582705942171,
112272535095293,
115280095190773,
115797848077099,
1099726899285419]
def is_prime(n):
    if n < 2:       # 0 and 1 are not prime
        return False
    if n == 2:      # 2 is the only even prime
        return True
    if n % 2 == 0:
        return False
sqrt_n = int(math.floor(math.sqrt(n)))
for i in range(3, sqrt_n + 1, 2):
if n % i == 0:
return False
return True
def main():
with concurrent.futures.ProcessPoolExecutor() as executor:
for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
print('%d is prime: %s' % (number, prime))
if __name__ == '__main__':
main()
Using the new concurrent.futures module
def sqr(val):
import time
time.sleep(0.1)
return val * val
def process_result(result):
print(result)
def process_these_asap(tasks):
import concurrent.futures
with concurrent.futures.ProcessPoolExecutor() as executor:
futures = []
for task in tasks:
futures.append(executor.submit(sqr, task))
for future in concurrent.futures.as_completed(futures):
process_result(future.result())
# Or instead of all this just do:
# results = executor.map(sqr, tasks)
# list(map(process_result, results))
def main():
tasks = list(range(10))
print('Processing {} tasks'.format(len(tasks)))
process_these_asap(tasks)
print('Done')
return 0
if __name__ == '__main__':
import sys
sys.exit(main())
The executor approach may seem familiar to all those who have gotten their hands dirty with Java before.
Also on a side note: to keep the universe sane, don't forget to close your pools/executors if you don't use a with context (which is awesome enough that it does it for you).
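For instance, without the with block the equivalent cleanup might look like this (a minimal sketch; shutdown(wait=True) is what the with block calls for you on exit):

from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=5)
future = executor.submit(pow, 2, 10)
print(future.result())         # 1024
executor.shutdown(wait=True)   # done explicitly here instead of by the with block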
For me, the perfect example of threading is monitoring asynchronous events. Look at this code.
# thread_test.py
import threading
import time
class Monitor(threading.Thread):
def __init__(self, mon):
threading.Thread.__init__(self)
self.mon = mon
def run(self):
while True:
if self.mon[0] == 2:
print "Mon = 2"
self.mon[0] = 3;
You can play with this code by opening an IPython session and doing something like:
>>> from thread_test import Monitor
>>> a = [0]
>>> mon = Monitor(a)
>>> mon.start()
>>> a[0] = 2
Mon = 2
>>> a[0] = 2
Mon = 2
Wait a while:
>>> a[0] = 2
Mon = 2
Most documentation and tutorials use Python's Threading and Queue modules, and they can seem overwhelming for beginners.
Perhaps consider the concurrent.futures.ThreadPoolExecutor module of Python 3 instead.
Combined with the with clause and list comprehensions, it can be a real charm.
from concurrent.futures import ThreadPoolExecutor, as_completed
def get_url(url):
# Your actual program here. Using threading.Lock() if necessary
return ""
# List of URLs to fetch
urls = ["url1", "url2"]
with ThreadPoolExecutor(max_workers = 5) as executor:
# Create threads
futures = {executor.submit(get_url, url) for url in urls}
# as_completed() gives you the threads once finished
for f in as_completed(futures):
# Get the results
rs = f.result()
I saw a lot of examples here where no real work was being performed, and they were mostly CPU-bound. Here is an example of a CPU-bound task that computes all prime numbers between 10 million and 10.05 million. I have used all four methods here (plus a sequential baseline):
import math
import timeit
import threading
import multiprocessing
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
def time_stuff(fn):
"""
Measure time of execution of a function
"""
def wrapper(*args, **kwargs):
t0 = timeit.default_timer()
fn(*args, **kwargs)
t1 = timeit.default_timer()
print("{} seconds".format(t1 - t0))
return wrapper
def find_primes_in(nmin, nmax):
"""
Compute a list of prime numbers between the given minimum and maximum arguments
"""
primes = []
# Loop from minimum to maximum
for current in range(nmin, nmax + 1):
# Take the square root of the current number
sqrt_n = int(math.sqrt(current))
found = False
        # Check if any number from 2 to the square root + 1 divides the current number under consideration
        for number in range(2, sqrt_n + 1):
            # If divisible, we have found a factor, so this is not a prime number; move on to the next one
if current % number == 0:
found = True
break
# If not divisible, add this number to the list of primes that we have found so far
if not found:
primes.append(current)
# I am merely printing the length of the array containing all the primes, but feel free to do what you want
print(len(primes))
@time_stuff
def sequential_prime_finder(nmin, nmax):
"""
Use the main process and main thread to compute everything in this case
"""
find_primes_in(nmin, nmax)
@time_stuff
def threading_prime_finder(nmin, nmax):
"""
If the minimum is 1000 and the maximum is 2000 and we have four workers,
1000 - 1250 to worker 1
1250 - 1500 to worker 2
1500 - 1750 to worker 3
1750 - 2000 to worker 4
so let’s split the minimum and maximum values according to the number of workers
"""
nrange = nmax - nmin
threads = []
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
# Start the thread with the minimum and maximum split up to compute
# Parallel computation will not work here due to the GIL since this is a CPU-bound task
t = threading.Thread(target = find_primes_in, args = (start, end))
threads.append(t)
t.start()
# Don’t forget to wait for the threads to finish
for t in threads:
t.join()
@time_stuff
def processing_prime_finder(nmin, nmax):
"""
Split the minimum, maximum interval similar to the threading method above, but use processes this time
"""
nrange = nmax - nmin
processes = []
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
p = multiprocessing.Process(target = find_primes_in, args = (start, end))
processes.append(p)
p.start()
for p in processes:
p.join()
@time_stuff
def thread_executor_prime_finder(nmin, nmax):
"""
Split the min max interval similar to the threading method, but use a thread pool executor this time.
This method is slightly faster than using pure threading as the pools manage threads more efficiently.
This method is still slow due to the GIL limitations since we are doing a CPU-bound task.
"""
nrange = nmax - nmin
with ThreadPoolExecutor(max_workers = 8) as e:
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
e.submit(find_primes_in, start, end)
@time_stuff
def process_executor_prime_finder(nmin, nmax):
"""
Split the min max interval similar to the threading method, but use the process pool executor.
    This is the fastest method recorded so far, as it manages processes efficiently and overcomes GIL limitations.
RECOMMENDED METHOD FOR CPU-BOUND TASKS
"""
nrange = nmax - nmin
with ProcessPoolExecutor(max_workers = 8) as e:
for i in range(8):
start = int(nmin + i * nrange/8)
end = int(nmin + (i + 1) * nrange/8)
e.submit(find_primes_in, start, end)
def main():
nmin = int(1e7)
nmax = int(1.05e7)
print("Sequential Prime Finder Starting")
sequential_prime_finder(nmin, nmax)
print("Threading Prime Finder Starting")
threading_prime_finder(nmin, nmax)
print("Processing Prime Finder Starting")
processing_prime_finder(nmin, nmax)
print("Thread Executor Prime Finder Starting")
thread_executor_prime_finder(nmin, nmax)
print("Process Executor Finder Starting")
process_executor_prime_finder(nmin, nmax)
main()
Here are the results on my Mac OS X four-core machine:
Sequential Prime Finder Starting
9.708213827005238 seconds
Threading Prime Finder Starting
9.81836523200036 seconds
Processing Prime Finder Starting
3.2467174359990167 seconds
Thread Executor Prime Finder Starting
10.228896902000997 seconds
Process Executor Finder Starting
2.656402041000547 seconds
Comment: you need an if __name__ == '__main__': guard before the call to main(), otherwise the measurement processes spawn themselves and print An attempt has been made to start a new process before ...
Here is a very simple example of CSV import using threading. (Library inclusion may differ for different purposes.)
Helper functions:
from threading import Thread
from project import app
import csv
def import_handler(csv_file_name):
thr = Thread(target=dump_async_csv_data, args=[csv_file_name])
thr.start()
def dump_async_csv_data(csv_file_name):
with app.app_context():
with open(csv_file_name) as File:
reader = csv.DictReader(File)
for row in reader:
# DB operation/query
Driver function:
import_handler(csv_file_name)
I would like to contribute with a simple example and the explanations I found useful when I had to tackle this problem myself.
In this answer you will find some information about Python's GIL (global interpreter lock) and a simple day-to-day example written using multiprocessing.dummy, plus some simple benchmarks.
Global Interpreter Lock (GIL)
Python doesn't allow multi-threading in the truest sense of the word. It has a multi-threading package, but if you want to multi-thread to speed your code up, it's usually not a good idea to use it.
Python has a construct called the global interpreter lock (GIL). The GIL makes sure that only one of your 'threads' can execute at any one time. A thread acquires the GIL, does a little work, then passes the GIL on to the next thread.
This happens very quickly, so to the human eye it may seem like your threads are executing in parallel, but they are really just taking turns using the same CPU core.
All this GIL passing adds overhead to execution. This means that if you want to make your code run faster, then using the threading package often isn't a good idea.
There are reasons to use Python's threading package. If you want to run some things simultaneously, and efficiency is not a concern, then it's totally fine and convenient. Or if you are running code that needs to wait for something (like some I/O), then it can make a lot of sense. But the threading library won't let you use extra CPU cores.
Multi-threading can be outsourced to the operating system (by doing multi-processing), to some external application that calls your Python code (for example, Spark or Hadoop), or to some code that your Python code calls (for example: you could have your Python code call a C function that does the expensive multi-threaded stuff).
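As a minimal illustration of the first option, handing the work to the operating system via multiprocessing (a sketch; cpu_heavy is an invented stand-in for real CPU-bound work):

from multiprocessing import Pool

def cpu_heavy(n):
    # Invented stand-in for real CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == '__main__':
    with Pool(4) as p:  # four OS processes, each with its own GIL
        print(p.map(cpu_heavy, [10**6] * 4))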
Why this matters
Because lots of folks spend a lot of time trying to find bottlenecks in their fancy Python multi-threaded code before they learn what the GIL is.
Once this information is clear, here's my code:
#!/bin/python
from multiprocessing.dummy import Pool
from subprocess import PIPE,Popen
import time
import os
# In the variable pool_size we define the "parallelness".
# For CPU-bound tasks, it doesn't make sense to create more Pool processes
# than you have cores to run them on.
#
# On the other hand, if you are using I/O-bound tasks, it may make sense
# to create quite a few more Pool processes than cores, since the processes
# will probably spend most their time blocked (waiting for I/O to complete).
pool_size = 8
def do_ping(ip):
if os.name == 'nt':
print ("Using Windows Ping to " + ip)
proc = Popen(['ping', ip], stdout=PIPE)
return proc.communicate()[0]
else:
print ("Using Linux / Unix Ping to " + ip)
proc = Popen(['ping', ip, '-c', '4'], stdout=PIPE)
return proc.communicate()[0]
os.system('cls' if os.name=='nt' else 'clear')
print ("Running using threads\n")
start_time = time.time()
pool = Pool(pool_size)
website_names = ["www.google.com","www.facebook.com","www.pinterest.com","www.microsoft.com"]
result = {}
for website_name in website_names:
result[website_name] = pool.apply_async(do_ping, args=(website_name,))
pool.close()
pool.join()
print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
# Now we do the same without threading, just to compare time
print ("\nRunning NOT using threads\n")
start_time = time.time()
for website_name in website_names:
do_ping(website_name)
print ("\n--- Execution took {} seconds ---".format((time.time() - start_time)))
# Here's one way to print the final output from the threads
output = {}
for key, value in result.items():
output[key] = value.get()
print ("\nOutput aggregated in a Dictionary:")
print (output)
print ("\n")
print ("\nPretty printed output: ")
for key, value in output.items():
print (key + "\n")
print (value)
Here is multithreading with a simple example which will be helpful. You can run it and understand easily how multithreading works in Python. I used a lock to prevent access by other threads until the previous threads finished their work. By the use of this line of code,
tLock = threading.BoundedSemaphore(value=4)
you can allow a number of processes at a time and keep hold of the rest of the threads, which will run later or after the previous processes finish.
import threading
import time
#tLock = threading.Lock()
tLock = threading.BoundedSemaphore(value=4)
def timer(name, delay, repeat):
    print("\r\nTimer: ", name, " Started")
    tLock.acquire()
    print("\r\n", name, " has acquired the lock")
    while repeat > 0:
        time.sleep(delay)
        print("\r\n", name, ": ", str(time.ctime(time.time())))
        repeat -= 1
    print("\r\n", name, " is releasing the lock")
    tLock.release()
    print("\r\nTimer: ", name, " Completed")
def Main():
t1 = threading.Thread(target=timer, args=("Timer1", 2, 5))
t2 = threading.Thread(target=timer, args=("Timer2", 3, 5))
t3 = threading.Thread(target=timer, args=("Timer3", 4, 5))
t4 = threading.Thread(target=timer, args=("Timer4", 5, 5))
t5 = threading.Thread(target=timer, args=("Timer5", 0.1, 5))
t1.start()
t2.start()
t3.start()
t4.start()
t5.start()
print "\r\nMain Complete"
if __name__ == "__main__":
Main()
Borrowing from this post, we know about choosing between multithreading, multiprocessing, and async/asyncio, and their usage.
Python 3 has a new built-in library for concurrency and parallelism: concurrent.futures.
So I'll demonstrate, through an experiment, how to run four tasks (i.e. the .sleep() method) with a thread pool:
from concurrent.futures import ThreadPoolExecutor, as_completed
from time import sleep, time
def concurrent(max_worker=1):
futures = []
tick = time()
with ThreadPoolExecutor(max_workers=max_worker) as executor:
futures.append(executor.submit(sleep, 2)) # Two seconds sleep
futures.append(executor.submit(sleep, 1))
futures.append(executor.submit(sleep, 7))
futures.append(executor.submit(sleep, 3))
for future in as_completed(futures):
if future.result() is not None:
print(future.result())
print('Total elapsed time by {} workers:'.format(max_worker), time()-tick)
concurrent(5)
concurrent(4)
concurrent(3)
concurrent(2)
concurrent(1)
Output:
Total elapsed time by 5 workers: 7.007831811904907
Total elapsed time by 4 workers: 7.007944107055664
Total elapsed time by 3 workers: 7.003149509429932
Total elapsed time by 2 workers: 8.004627466201782
Total elapsed time by 1 workers: 13.013478994369507
[NOTE]: If you have CPU-bound tasks (multiprocessing rather than threading), you can change ThreadPoolExecutor to ProcessPoolExecutor.

import threading
import requests
def send():
r = requests.get('https://www.stackoverlow.com')
thread = []
t = threading.Thread(target=send)  # pass the function itself; target=send() would run it immediately in the main thread
thread.append(t)
t.start()
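As written, the snippet starts a single thread. A sketch of the presumably intended pattern, starting several threads and waiting for them all (the URL list is illustrative; it still assumes the third-party requests package):

import threading
import requests

def send(url):
    r = requests.get(url)
    print(url, r.status_code)

urls = ['https://www.stackoverflow.com'] * 3  # example targets
threads = [threading.Thread(target=send, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for every request to finish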