【Question Title】: Why is my multi-threading program slow? [duplicate]
【Posted】: 2018-05-27 12:12:51
【Question】:

I am trying to use threads to make my program run faster, but it takes too much time. The code has to compute two kinds of matrices (word_level: I compare every pair of words between the query and a document; sequence_level: I compare the query against different sequences of the document). Here are the main functions:

import threading
from threading import Thread
from os.path import join  # needed by numpy.save calls below

import numpy

def sim_QxD_word(query, document, model, alpha, outOfVocab, lock): #word_level
    sim_w = {}
    for q in set(query.split()):
        sim_w[q] = {}
        qE = []
        if q in model.vocab:
            qE = model[q]
        elif q in outOfVocab:
            qE = outOfVocab[q]
        else:
            qE = numpy.random.rand(model.layer1_size) # random vector
            lock.acquire()
            outOfVocab[q] = qE
            lock.release()

        for d in set(document.split()):
            dE = []
            if d in model.vocab:
                dE = model[d]
            elif d in outOfVocab:
                dE = outOfVocab[d]
            else:
                dE = numpy.random.rand(model.layer1_size) # random vector
                lock.acquire()
                outOfVocab[d] = dE
                lock.release()
            sim_w[q][d] = sim(qE,dE,alpha)
    return (sim_w, outOfVocab)

def sim_QxD_sequences(query, document, model, outOfVocab, alpha, lock): #sequence_level
    # 1. extract document sequences 
    document_sequences = []
    for i in range(len(document.split()) - len(query.split()) + 1):  # +1 so the last window is included
        document_sequences.append(" ".join(document.split()[i:i+len(query.split())]))
    # 2. compute similarities with a query sentence
    lock.acquire()
    query_vec, outOfVocab = avg_sequenceToVec(query, model, outOfVocab, lock)
    lock.release()
    sim_QxD = {}
    for s in document_sequences:
        lock.acquire()
        s_vec, outOfVocab = avg_sequenceToVec(s, model, outOfVocab, lock)
        lock.release()
        sim_QxD[s] = sim(query_vec, s_vec, alpha)
    return (sim_QxD, outOfVocab)

def word_level(q_clean, d_text, model, alpha, outOfVocab, out_w, q, ext_id, lock):
    #print("in word_level")
    sim_w, outOfVocab = sim_QxD_word(q_clean, d_text, model, alpha, outOfVocab, lock)
    numpy.save(join(out_w, str(q)+ext_id+"word_interactions.npy"), sim_w)

def sequence_level(q_clean, d_text, model, outOfVocab, alpha, out_s, q, ext_id, lock):
    #print("in sequence_level")
    sim_s, outOfVocab = sim_QxD_sequences(q_clean, d_text, model, outOfVocab, alpha, lock)
    numpy.save(join(out_s, str(q)+ext_id+"sequence_interactions.npy"), sim_s)

def extract_AllFeatures_parall(q_clean, d_text, model, alpha, outOfVocab, out_w, q, ext_id, out_s, lock):
    #print("in extract_AllFeatures")
    thW=Thread(target = word_level, args=(q_clean, d_text, model, alpha, outOfVocab, out_w, q, ext_id, lock))
    thW.start()
    thS=Thread(target = sequence_level, args=(q_clean, d_text, model, outOfVocab, alpha, out_s, q, ext_id, lock))
    thS.start()
    thW.join()
    thS.join()

def process_documents(documents, index, model, alpha, outOfVocab, out_w, out_s, queries, stemming, stoplist, q):
    #print("in process_documents")
    q_clean = clean(queries[q],stemming, stoplist)
    lock = threading.Lock()
    for d in documents:
        ext_id, d_text = reaDoc(d, index)
        extract_AllFeatures_parall(q_clean, d_text, model, alpha, outOfVocab, out_w, q, ext_id, out_s, lock)

outOfVocab = {} # shared variable across all threads
queries = {"1": "first query", ...} # can contain 200 elements

....

threadsList = []
for q in queries.keys():
    thread = Thread(target = process_documents, args=(documents, index, model, alpha, outOfVocab, out_w, out_s, queries, stemming, stoplist, q))
    thread.start()
    threadsList.append(thread)
for th in threadsList:
    th.join()

How can I optimize the different functions so that they run faster? Thanks in advance for your replies.
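For reference, the sliding-window extraction done at the top of sim_QxD_sequences can be checked in isolation. This is a minimal sketch with made-up strings; extract_windows is a hypothetical helper, not part of the original code:

```python
def extract_windows(document: str, query: str) -> list:
    """Return every contiguous window of document words whose word count
    equals the query's word count (the sequence_level windows)."""
    doc_words = document.split()
    k = len(query.split())
    # There are len(doc_words) - k + 1 windows; range(len(doc_words) - k)
    # would silently drop the last one.
    return [" ".join(doc_words[i:i + k]) for i in range(len(doc_words) - k + 1)]

print(extract_windows("a b c d", "x y"))  # → ['a b', 'b c', 'c d']
```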

【Comments】:

  • Don't use threads, use processes. See the suggested duplicate.
  • Use a lambda to avoid calling the function when passing it, as the suggested answer states.
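Following the first comment's suggestion, CPU-bound pure-Python work can be spread across cores with processes instead of threads. A minimal sketch, where process_one_query is a hypothetical stand-in for the question's per-query work:

```python
from concurrent.futures import ProcessPoolExecutor

def process_one_query(q):
    # Placeholder for the real per-query work: CPU-bound pure Python,
    # which the GIL prevents threads from running in parallel.
    return q, sum(i * i for i in range(10000))

if __name__ == "__main__":
    queries = ["1", "2", "3"]
    # Each query is handled in a separate worker process.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(process_one_query, queries))
    print(sorted(results))  # → ['1', '2', '3']
```

Note that the arguments must be picklable, so large shared objects such as the model are better loaded once per worker than passed to every call.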

Tags: python multithreading python-3.x optimization


【Solution 1】:

In this answer I will focus on these lines of code:

thread = Thread(target = process_documents(documents, index, model, alpha, outOfVocab, out_w, out_s, queries, stemming, stoplist, q))
thread.start()

From the docs, https://docs.python.org/2/library/threading.html:

target is the callable object to be invoked by the run() method. Defaults to None, meaning nothing is called.

target should be a callable. In your code you are passing the result of a call to process_documents. What you want to do is say target=process_documents (i.e. pass in the function itself, which is a callable) and pass the args/kwargs as needed.

At the moment your code runs sequentially: every call to process_documents happens in the same thread. You need to give the thread the work you want it to do, not the result of that work.
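A minimal illustration of the difference, using a hypothetical worker function:

```python
import threading

results = []

def worker(n):
    results.append(n * n)

# Wrong: worker(3) is executed immediately in the main thread; the Thread
# receives its return value (None) as target, so the thread itself does nothing.
t_wrong = threading.Thread(target=worker(3))
t_wrong.start()
t_wrong.join()

# Right: pass the callable itself, and its arguments separately via args.
t_right = threading.Thread(target=worker, args=(4,))
t_right.start()
t_right.join()

print(results)  # → [9, 16]
```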

【Discussion】:

  • You are right, this is a classic mistake. But even then it won't be faster, because of the Python GIL: all the work is pure Python code.
  • OK, I will try to take your comments into account. Sorry if my question is silly, I have just started parallel programming in Python.
  • I have just corrected the use of the Thread() class, and I notice that only one core of my machine is at 100% usage, and the program is slower than before. What could the problem be?