Posted: 2020-10-14 07:18:16
Question:
I'm using a third-party API to retrieve 10-minute data for a number of different tags over a large range of days. The current data pull can take several minutes depending on the number of days and the number of tags, so I'm trying multithreading, which I understand is useful for heavy IO operations.
The API calls look like this (I've replaced the actual API name):
import numpy as N
import requests as r
import json
import pandas as pd
from datetime import datetime
import concurrent.futures

class pyGeneric:

    def __init__(self, serverName, apiKey, rootApiUrl='/Generic.Services/api'):
        """
        Initialize a connection to the server, and return a pyGeneric server object
        """
        self.baseUrl = serverName + rootApiUrl
        self.apiKey = apiKey
        self.bearer = 'Bearer ' + apiKey
        self.header = {'mediaType': 'application/json', 'Authorization': self.bearer}

    def getRawMeasurementsJson(self, tag, start, end):
        apiQuery = '/measurements/' + tag + '/from/' + start + '/to/' + end + '?format=json'
        dataresponse = r.get(self.baseUrl + apiQuery, headers=self.header)
        data = json.loads(dataresponse.text)
        return data

    def getAggregatesPandas(self, tags, start, end):
        """
        Return tag(s) in a pandas DataFrame
        """
        df = pd.DataFrame()
        if type(tags) == str:
            tags = [tags]
        for tag in tags:
            tempJson = self.getRawMeasurementsJson(tag, start, end)
            tempDf = pd.DataFrame(tempJson['timeSeriesList'][0]['timeSeries'])
            name = tempJson['timeSeriesList'][0]['measurementName']
            df['TimeUtc'] = [datetime.fromtimestamp(i/1000) for i in tempDf['t']]
            df['TimeUtc'] = df['TimeUtc'].dt.round('min')
            df[name] = tempDf['v']
        return df

gener = pyGeneric('https://api.generic.com', 'auth_keymlkj9789878686')
An example call to the API looks like this:
gener_df = gener.getAggregatesPandas('tag1.10m.SQL', '*-10d', '*')
This works well for a single tag, but for a list of tags it takes much longer, which is why I've been trying the following:
tags = ['tag1.10m.SQL',
'tag2.10m.SQL',
'tag3.10m.SQL',
'tag4.10m.SQL',
'tag5.10m.SQL',
'tag6.10m.SQL',
'tag7.10m.SQL',
'tag8.10m.SQL',
'tag9.10m.SQL',
'tag10.10m.SQL']
startdate = "*-150d"
enddate = '*'
final_df = pd.DataFrame()
with concurrent.futures.ThreadPoolExecutor() as executor:
    args = ((i, startdate, enddate) for i in tags)
    executor.map(lambda p: gener.getAggregatesPandas(*p), args)
However, I can't verify whether gener.getAggregatesPandas executed correctly. Ultimately I'd like to get the results into a single DataFrame called final_df, but I'm also not sure how to proceed. I read in a post that appending inside the context manager leads to quadratic copying of the DataFrame, so it would end up slowing things down.
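One approach that addresses both points is to materialize the results of executor.map into a list (consuming the iterator re-raises any exception from a worker thread, so failures surface immediately) and then combine the per-tag frames once at the end instead of appending inside the loop. A minimal sketch, assuming each call returns a frame sharing a TimeUtc column as in the class above; the fetch_all helper name is mine:

```python
import concurrent.futures
import pandas as pd

def fetch_all(gener, tags, start, end):
    # Fetch each tag in its own thread.
    with concurrent.futures.ThreadPoolExecutor() as executor:
        # list(...) consumes the map iterator, so any exception raised
        # inside a worker thread is re-raised here and becomes visible.
        frames = list(executor.map(
            lambda tag: gener.getAggregatesPandas(tag, start, end), tags))
    # Combine once at the end, merging on the shared TimeUtc column,
    # rather than growing a DataFrame inside the loop.
    final_df = frames[0]
    for df in frames[1:]:
        final_df = final_df.merge(df, on='TimeUtc', how='outer')
    return final_df
```

Merging with how='outer' keeps timestamps that appear for some tags but not others; if every tag is guaranteed the same timestamps, an inner merge (or pd.concat along axis=1 on a TimeUtc index) would work as well.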
Tags: python pandas concurrent.futures