【Question Title】: Multiprocessing in Python crashes when code reaches start()
【Posted】: 2016-08-26 11:07:27
【Question】:

I am new to Python. I tried to use multiprocessing to make my work faster. First I tried an example, and everything worked fine. Here is the code:

from multiprocessing import Process
import time

def f(name, n, m):
    if name == 'bob':
        time.sleep(2)
    print 'hello', name, ' ', n, m

def h():
    g(1, 2, 3)

def g(a, s, d):
    p = Process(target=f, args=('bob', a, s,))
    t = Process(target=f, args=('helen', s, d,))
    p.start()
    t.start()
    t.join()
    p.join()
    print("END")

if __name__ == '__main__':
    print("Start")
    h()

After that I used the same technique on my own code, but got errors. Here is the relevant part of the problematic code:

if __name__ == "__main__":
    night_crawler_steam()

def night_crawler_steam():
    .
    .
    .
    multi_processing(max_pages, url, dirname)
    .
    .
    .

def multi_processing(max_pages, url, dirname):
    page = 1
    while page <= max_pages:
        my_url = str(url) + str(page)
        soup = my_soup(my_url)
        fgt = Process(target=find_game_titles, args=(soup, page, dirname,))
        fl = Process(target=find_links, args=(soup, page, dirname,))
        fgt.start() #<-----------Here is the problem
        fl.start()
        fgt.join()
        fl.join()
        page += 1

def find_links(soup, page, dirname):
.
.
.

def find_game_titles(soup, page, dirname):
.
.
.

When the interpreter reaches fgt.start(), these errors appear:

Traceback (most recent call last):
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 120, in <module>
    night_crawler_steam()
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 23, in night_crawler_steam
Traceback (most recent call last):
  File "<string>", line 1, in <module>
    multi_processing(max_pages, url, dirname)
  File "C:/Users/��������/Desktop/MY PyWORK/NightCrawler/NightCrawler.py", line 47, in multi_processing
    fgt.start()
  File "C:\Python27\lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\lib\multiprocessing\forking.py", line 277, in __init__
  File "C:\Python27\lib\multiprocessing\forking.py", line 381, in main
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "C:\Python27\lib\multiprocessing\forking.py", line 199, in dump
    self = load(from_parent)
  File "C:\Python27\lib\pickle.py", line 1384, in load
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Python27\lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\lib\pickle.py", line 425, in save_reduce
    return Unpickler(file).load()
  File "C:\Python27\lib\pickle.py", line 864, in load
    save(state)
  File "C:\Python27\lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\lib\pickle.py", line 655, in save_dict
    dispatch[key](self)
  File "C:\Python27\lib\pickle.py", line 886, in load_eof
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\lib\pickle.py", line 687, in _batch_setitems
    raise EOFError
    save(v)
EOFError

This goes on until RuntimeError: maximum recursion depth exceeded.

Any ideas would help!

【Question Comments】:

  • Is the error still the same if you pass None instead of soup? It looks like it could be a pickling problem.
  • It is not the same error. With None I only get problems in the functions that use the soup, so I can see the multiprocessing itself is right. Any idea how to fix this pickling problem?
  • Could you paste the bodies of the methods you use in the Process instances? find_links etc. — I suspect the problem is there.
  • I found the problem. Multiprocessing does not accept soup as an arg to the functions it runs. So I used a function soup_to_string to pass it as an arg, and after passing it I used string_to_soup. Thanks for the help @janbrohl, you opened my eyes!
  • In most cases, using multiprocessing.Pool is simpler and faster than using processes directly.
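
The pickling problem the comments converge on can be checked up front: on Windows, multiprocessing must pickle every argument to send it to the child process, so trying pickle.dumps() on each argument tells you which one will blow up. A minimal sketch of that check (Python 3 syntax; the idea is the same on the question's 2.7, and a lambda stands in here for the unpicklable BeautifulSoup tree):

```python
import pickle

def is_picklable(obj):
    """Return True if obj survives the serialization multiprocessing would do."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False

# Plain data passes; objects holding unpicklable state do not
# (a lambda here, the soup object in the question).
ok = is_picklable({"page": 1, "url": "http://example.com/?page="})
bad = is_picklable(lambda: None)
```

Running each Process argument through a check like this before the start() call points straight at the offending one.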

Tags: python python-2.7 multiprocessing python-multiprocessing


【Solution 1】:

Pickling the soup seems to be the problem (see the Programming Guidelines in the multiprocessing docs), so a simple solution is to move the my_soup(my_url) call into the target functions, like this:

from multiprocessing import Pool

def multi_processing(max_pages, url, dirname):
    p = Pool()  # using a pool is not necessary to fix your problem
    for page in xrange(1, max_pages + 1):
        my_url = str(url) + str(page)
        p.apply_async(find_game_titles, (my_url, page, dirname))
        p.apply_async(find_links, (my_url, page, dirname))
    p.close()
    p.join()

def find_links(url, page, dirname):
    soup = my_soup(url)
    # function body from before


def find_game_titles(url, page, dirname):
    soup = my_soup(url)
    # function body from before

(Of course you could also pass the soup in a picklable form instead, but depending on what exactly my_soup does, that may or may not be worth it.)
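
The "picklable form" the question's author arrived at in the comments (soup_to_string / string_to_soup) boils down to sending the raw markup, which is a plain string and therefore always picklable, and re-parsing it in the worker. A sketch of the idea, using the stdlib html.parser as a stand-in for BeautifulSoup so the example is self-contained (Python 3 imports; the worker signature mirrors the question's, with page and dirname unused here):

```python
import pickle
from html.parser import HTMLParser

class LinkGrabber(HTMLParser):
    """Tiny stand-in for BeautifulSoup: collects href attributes of <a> tags."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

def find_links(html, page, dirname):
    # Parse inside the worker: the argument crossing the process
    # boundary is a plain string, which pickles without trouble.
    parser = LinkGrabber()
    parser.feed(html)
    return parser.links

html = '<a href="/game/1">One</a><a href="/game/2">Two</a>'
pickle.dumps(html)                  # strings always pickle
links = find_links(html, 1, "out")  # ["/game/1", "/game/2"]
```

With BeautifulSoup the equivalent would be passing str(soup) and calling BeautifulSoup(...) again in the worker, at the cost of parsing twice.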

Though not strictly necessary, it is conventional to put the if __name__ == "__main__": part at the end of the file.

You may also want to look at the other methods of multiprocessing.Pool, as one of them may fit better depending on your functions.
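
For completeness, here is the apply_async pattern from the answer as a runnable sketch. The worker and page range are illustrative stand-ins for my_soup and the crawl; the pool comes from multiprocessing.dummy (the thread-backed twin with the identical API) purely so the sketch runs anywhere without a __main__ guard — for real processes, import Pool from multiprocessing itself and keep the guard:

```python
# multiprocessing.dummy exposes the same Pool API backed by threads.
# For real processes: `from multiprocessing import Pool`, and call crawl()
# only under `if __name__ == "__main__":`.
from multiprocessing.dummy import Pool

def fetch_page(page):
    # Hypothetical stand-in for my_soup(url + str(page));
    # real workers should likewise return picklable data.
    return "page-%d" % page

def crawl(max_pages):
    pool = Pool()
    # Submit all pages without blocking, then collect the results.
    jobs = [pool.apply_async(fetch_page, (page,)) for page in range(1, max_pages + 1)]
    pool.close()
    pool.join()
    return [job.get() for job in jobs]

print(crawl(3))  # page-1 .. page-3
```

apply_async returns immediately with an AsyncResult, so all pages are in flight before the first .get(); Pool.map or imap_unordered would be an even shorter fit when every call takes the same shape.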

【Comments】:
