Posted: 2020-05-31 08:19:08
Problem description:
I currently have a process that pulls a large amount of data (about 1.5 million rows) from multiple URLs. The process works flawlessly, but it is extremely slow (about 4 minutes) and inefficient, so I am looking for help.
The way I pull the data is by keeping a list of dates and substituting them into a URL template. Below is a sample; the actual list contains 52 elements.
dateList = ['121229','121222','121215','121208','121201','121124','121117','121110']
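The date-to-URL substitution described above can be sketched on its own (using a shortened date list; the URL template is taken from the question):

```python
# Build the per-week URLs from the date list (template from the question)
date_list = ['121229', '121222', '121215']
urls = [f'http://web.mta.info/developers/data/nyct/turnstile/turnstile_{d}.txt'
        for d in date_list]
print(urls[0])
```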
The current code is as follows:
import pandas as pd

def dataPull(year):
    columnsName = ['C/A','UNIT','SCP','DATE1','TIME1','DESC1','ENTRIES1','EXITS1','DATE2','TIME2','ESC2',
                   'ENTRIES2','EXITS2','DATE3','TIME3','DESC3','ENTRIES3','EXITS3','DATE4','TIME4','DESC4',
                   'ENTRIES4','EXITS4','DATE5','TIME5','DESC5','ENTRIES5','EXITS5','DATE6','TIME6','DESC6',
                   'ENTRIES6','EXITS6','DATE7','TIME7','DESC7','ENTRIES7','EXITS7','DATE8','TIME8','DESC8',
                   'ENTRIES8','EXITS8']
    df = pd.DataFrame(columns=columnsName)
    for i in dateList:
        if str(year)[2:5] == str(i)[:2]:
            tempUrl = 'http://web.mta.info/developers/data/nyct/turnstile/turnstile_' + str(i) + '.txt'
            tempDf = pd.read_csv(tempUrl, header=None, engine='python',
                                 error_bad_lines=False, warn_bad_lines=False)
            tempDf.columns = columnsName
            df = pd.concat([df, tempDf])
    return df
The output looks like this:
C/A UNIT SCP DATE1 TIME1 DESC1 ENTRIES1 EXITS1 DATE2 TIME2 ESC2 ENTRIES2 EXITS2 DATE3 TIME3 DESC3 ENTRIES3 EXITS3 DATE4 TIME4 DESC4 ENTRIES4 EXITS4 DATE5 TIME5 DESC5 ENTRIES5 EXITS5 DATE6 TIME6 DESC6 ENTRIES6 EXITS6 DATE7 TIME7 DESC7 ENTRIES7 EXITS7 DATE8 TIME8 DESC8 ENTRIES8 EXITS8
0 A002 R051 02-00-00 04-20-13 00:00:00 REGULAR 4084276 1405308 04-20-13 04:00:00 REGULAR 4084308.0 1405312.0 04-20-13 08:00:00 REGULAR 4084332.0 1405348.0 04-20-13 12:00:00 REGULAR 4084429.0 1405441.0 04-20-13 16:00:00 REGULAR 4084714.0 1405494.0 04-20-13 20:00:00 REGULAR 4085107.0 1405550.0 04-21-13 00:00:00 REGULAR 4085286.0 1405578.0 04-21-13 04:00:00 REGULAR 4085317.0 1405582.0
1 A002 R051 02-00-00 04-21-13 08:00:00 REGULAR 4085336 1405603 04-21-13 12:00:00 REGULAR 4085421.0 1405673.0 04-21-13 16:00:00 REGULAR 4085543.0 1405725.0 04-21-13 20:00:00 REGULAR 4085543.0 1405781.0 04-22-13 00:00:00 REGULAR 4085669.0 1405820.0 04-22-13 04:00:00 REGULAR 4085684.0 1405825.0 04-22-13 08:00:00 REGULAR 4085715.0 1405929.0 04-22-13 12:00:00 REGULAR 4085878.0 1406175.0
2 A002 R051 02-00-00 04-22-13 16:00:00 REGULAR 4086116 1406242 04-22-13 20:00:00 REGULAR 4086986.0 1406310.0 04-23-13 00:00:00 REGULAR 4087164.0 1406335.0 04-23-13 04:00:00 REGULAR 4087172.0 1406339.0 04-23-13 08:00:00 REGULAR 4087214.0 1406441.0 04-23-13 12:00:00 REGULAR 4087390.0 1406685.0 04-23-13 16:00:00 REGULAR 4087738.0 1406741.0 04-23-13 20:00:00 REGULAR 4088682.0 1406813.0
3 A002 R051 02-00-00 04-24-13 00:00:00 REGULAR 4088879 1406839 04-24-13 04:00:00 REGULAR 4088890.0 1406845.0 04-24-13 08:00:00 REGULAR 4088934.0 1406951.0 04-24-13 12:00:00 REGULAR 4089105.0 1407209.0 04-24-13 16:00:00 REGULAR 4089378.0 1407269.0 04-24-13 20:00:00 REGULAR 4090319.0 1407336.0 04-25-13 00:00:00 REGULAR 4090535.0 1407365.0 04-25-13 04:00:00 REGULAR 4090550.0 1407370.0
4 A002 R051 02-00-00 04-25-13 08:00:00 REGULAR 4090589 1407469 04-25-13 08:57:03 DOOR OPEN 4090629.0 1407591.0 04-25-13 08:58:01 LOGON 4090629.0 1407591.0 04-25-13 09:01:08 LGF-MAN 4090629.0 1407591.0 04-25-13 09:01:53 LOGON 4090629.0 1407591.0 04-25-13 09:02:02 DOOR CLOSE 4090629.0 1407591.0 04-25-13 09:02:04 DOOR OPEN 4090629.0 1407591.0 04-25-13 09:02:31 DOOR CLOSE 4090629.0 1407591.0
5 A002 R051 02-00-00 04-25-13 09:02:32 DOOR OPEN 4090629 1407591 04-25-13 09:07:21 LOGON 4090629.0 1407591.0 04-25-13 09:12:12 LGF-MAN 4090642.0 1407592.0 04-25-13 09:12:20 DOOR CLOSE 4090642.0 1407592.0 04-25-13 12:00:00 REGULAR 4090743.0 1407723.0 04-25-13 16:00:00 REGULAR 4091064.0 1407793.0 04-25-13 20:00:00 REGULAR 4092044.0 1407840.0 04-26-13 00:00:00 REGULAR 4092314.0 1407859.0
6 A002 R051 02-00-00 04-26-13 04:00:00 REGULAR 4092325 1407861 04-26-13 08:00:00 REGULAR 4092363.0 1407958.0 04-26-13 12:00:00 REGULAR 4092541.0 1408225.0 04-26-13 16:00:00 REGULAR 4092837.0 1408285.0 04-26-13 20:00:00 REGULAR 4093823.0 1408341.0 None None None NaN NaN None None None NaN NaN None None None NaN NaN
Any help is greatly appreciated!
Comments:
- Try creating a list of all the DataFrames and then using a single pandas.concat(). Could you share what the output should look like? Also, your variable names don't follow PEP 8.
- @AMC PEP 8 is not a required standard, it's purely a preference.
- @th0nk- It's not law, it's a convention. It's entirely reasonable to do things differently.
- One small thing: you can set the column names in pandas.read_csv(), so you don't need to assign to the .columns attribute afterwards.
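The suggestions in the comments (collect the frames in a list, call pandas.concat() once, and pass the column names via read_csv's names= parameter) can be combined with concurrent downloads, since the slow part is network-bound. Below is a minimal sketch: the URL template and column layout come from the question, while the worker count and the pull_data/read_one names are illustrative assumptions. Note that on_bad_lines="skip" replaces the error_bad_lines=False/warn_bad_lines=False flags deprecated in newer pandas; the demo uses in-memory StringIO sources so it runs without network access:

```python
from concurrent.futures import ThreadPoolExecutor
from io import StringIO
import pandas as pd

def pull_data(sources, columns, workers=8):
    """Read each source (a URL or file-like object) concurrently,
    then concatenate all frames with a single pd.concat call."""
    def read_one(src):
        # names= sets the header up front, so no .columns assignment is needed
        return pd.read_csv(src, header=None, names=columns,
                           engine='python', on_bad_lines='skip')
    with ThreadPoolExecutor(max_workers=workers) as pool:
        frames = list(pool.map(read_one, sources))
    return pd.concat(frames, ignore_index=True)

# Demo with in-memory data; real sources would be the turnstile URLs, e.g.
# 'http://web.mta.info/developers/data/nyct/turnstile/turnstile_121229.txt'
cols = ['C/A', 'UNIT', 'SCP']
csv_a = StringIO('A002,R051,02-00-00\n')
csv_b = StringIO('A003,R052,02-00-01\n')
df = pull_data([csv_a, csv_b], cols)
print(len(df))  # prints 2
```

Because each pd.read_csv call spends most of its time waiting on I/O, threads overlap the downloads, and the single concat avoids the quadratic cost of growing the DataFrame inside the loop.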
Tags: python pandas numpy dataframe pyspark