【Question Title】: How to make a Python loop faster to run a pairwise association test
【Posted】: 2017-08-21 00:38:38
【Question】:

I have a list of patient IDs and medication names, and a list of patient IDs and disease names. For each disease, I want to find the most indicative medication.

To find this, I want to run Fisher's exact test to get a p-value for each disease/medication pair. But the loop runs very slowly, taking more than 10 hours. Is there a way to make the loop more efficient, or a better way to approach this association problem?
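For reference, each disease/medication pair reduces to a 2x2 contingency table (disease Yes/No against drug prescribed/not prescribed), and scipy.stats.fisher_exact returns the odds ratio and p-value for one such table. A minimal example with made-up counts:

import numpy as np
from scipy.stats import fisher_exact

# Made-up counts for a single disease/medication pair:
# rows: disease No / Yes; columns: drug not prescribed / prescribed
table = np.array([[900, 50],
                  [ 80, 20]])
odds_ratio, p = fisher_exact(table)
print(p)  # a small p-value suggests the drug is indicative of the disease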

My loop:

import numpy as np
import pandas as pd
from scipy.stats import fisher_exact 

most_indicative_medication = {}
rx_list = list(meps_meds.rxName.unique()) 
disease_list = list(meps_base_data.columns.values)[8:]

for i in disease_list:
    print(i)
    rx_dict = {}
    for j in rx_list: 
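        # 'base': the merged patient/disease/prescription table (defined elsewhere)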
        subset = base[['id', i, 'rxName']].drop_duplicates()
        subset[j] = subset['rxName'] == j
        subset = subset.loc[subset[i].isin(['Yes', 'No'])]
        subset = subset[[i, j]]
        tab = pd.crosstab(subset[i], subset[j]) 
        if len(tab.columns) == 2:
            rx_dict[j] = fisher_exact(tab)[1]
        else: 
            rx_dict[j] = np.nan
    most_indicative_medication[i] = min(rx_dict, key=rx_dict.get)

【Comments】:

Tags: python pandas loops statistics associations


【Solution 1】:

You need multiprocessing/multithreading. I've added it to the code:

import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
from multiprocessing.dummy import Pool as ThreadPool
most_indicative_medication = {}
rx_list = list(meps_meds.rxName.unique()) 
disease_list = list(meps_base_data.columns.values)[8:]

def run_pairwise(i):
    print(i)
    rx_dict = {}
    for j in rx_list: 
        subset = base[['id', i, 'rxName']].drop_duplicates()
        subset[j] = subset['rxName'] == j
        subset = subset.loc[subset[i].isin(['Yes', 'No'])]
        subset = subset[[i, j]]
        tab = pd.crosstab(subset[i], subset[j]) 
        if len(tab.columns) == 2:
            rx_dict[j] = fisher_exact(tab)[1]
        else: 
            rx_dict[j] = np.nan
    most_indicative_medication[i] = min(rx_dict, key=rx_dict.get)

pool = ThreadPool(3)
pairwise_test_results = pool.map(run_pairwise, disease_list)
pool.close()
pool.join()

Reference: http://chriskiehl.com/article/parallelism-in-one-line/
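One caveat on the thread count: multiprocessing.dummy.Pool is a thread pool, and CPU-bound work such as fisher_exact is serialized by Python's GIL, so adding threads beyond a handful stops helping. A rough sizing sketch (the pool size of 3 above is arbitrary; one worker per core is a common default):

import multiprocessing
from multiprocessing.dummy import Pool as ThreadPool

# Cap the pool at the number of CPU cores; more threads than that
# mostly adds GIL contention rather than throughput.
pool = ThreadPool(multiprocessing.cpu_count())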

【Comments】:

  • This is exactly what I needed, thank you so much! By the way, is there a limit on the number of threads in the pool? I assume more is faster?
【Solution 2】:

Faster processing is good, but a better algorithm usually beats it ;-)

First, the setup:

import numpy as np
import pandas as pd
from scipy.stats import fisher_exact

# data files can be download at
# https://github.com/Saynah/platform/tree/d7e9f150ef2ff436387585960ca312a301847a46/data
meps_meds      = pd.read_csv("meps_meds.csv")               #  8 cols * 1,148,347 rows
meps_base_data = pd.read_csv("meps_base_data.csv")          # 18 columns * 61,489 rows

# merge to get disease and drug info in same table
merged = pd.merge(                                          # 25 cols * 1,148,347 rows
    meps_base_data, meps_meds,
    how='inner', left_on='id', right_on='id'
)

rx_list        = meps_meds.rxName.unique().tolist()         # 9218 items
disease_list   = meps_base_data.columns.values[8:].tolist() # 10 items

Note that rx_list has a lot of duplicates (amoxicillin, for example, has 52 entries if you include the misspellings).
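If you want to collapse some of those duplicates before testing, a rough sketch is below. Note this is an assumption about the data: it only merges case and whitespace variants, not genuine misspellings.

# Hypothetical cleanup: case-fold and trim names so that e.g.
# 'AMOXICILLIN ' and 'amoxicillin' count as the same drug.
meps_meds['rxName'] = meps_meds['rxName'].str.lower().str.strip()
rx_list = meps_meds.rxName.unique().tolist()  # noticeably shorter list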

Then

most_indicative = {}

for disease in disease_list:
    # get unique (id, disease, prescription)
    subset = merged[['id', disease, 'rxName']].drop_duplicates()
    # keep only Yes/No entries under disease
    subset = subset[subset[disease].isin(['Yes', 'No'])]
    # summarize (replaces the inner loop)
    tab = pd.crosstab(subset.rxName, subset[disease])

    # column totals for "No" and "Yes"; bound as default args so that
    # p_value() uses this disease's margins in the 2x2 table
    nf, yf = tab.sum().values
    def p_value(x, nf=nf, yf=yf):
        return fisher_exact([[nf - x.No, x.No], [yf - x.Yes, x.Yes]])[1]

    # OPTIONAL:
    # We can probably assume that the most-indicative drugs are among
    #  the most-prescribed; get just the 100 most-prescribed drugs
    # Note: you have to get the nf, yf values before doing this!
    tab = tab.sort_values("Yes", ascending=False)[:100]

    # and apply the function
    tab["P-value"] = tab.apply(p_value, axis=1)

    # find the best match
    best_med = tab.sort_values("P-value").index[0]
    most_indicative[disease] = best_med

This now runs in about 18 minutes on my machine, and you could probably combine it with EM28's answer to speed it up by a factor of 4 or more.
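A minimal sketch of that combination, assuming a Unix-style fork start so worker processes inherit merged, and reusing pd, fisher_exact, merged and disease_list from the setup above; score_disease and the pool size of 4 are illustrative choices, not part of the original answer:

from multiprocessing import Pool

def score_disease(disease):
    # same per-disease work as the loop above, but returned instead of
    # stored, since worker processes don't share the parent's dict
    subset = merged[['id', disease, 'rxName']].drop_duplicates()
    subset = subset[subset[disease].isin(['Yes', 'No'])]
    tab = pd.crosstab(subset.rxName, subset[disease])
    nf, yf = tab.sum().values            # margins before truncating
    tab = tab.sort_values("Yes", ascending=False)[:100]
    tab["P-value"] = tab.apply(
        lambda x: fisher_exact([[nf - x.No, x.No], [yf - x.Yes, x.Yes]])[1],
        axis=1)
    return disease, tab.sort_values("P-value").index[0]

if __name__ == '__main__':
    with Pool(4) as pool:  # roughly one worker per CPU core
        most_indicative = dict(pool.map(score_disease, disease_list))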

【Comments】:
