【Posted】: 2014-02-13 00:07:48
【Problem description】:
I have more than 3,400 CSV files ranging from 10 KB to 3 MB in size. Each CSV file follows the naming convention stockticker-Ret.csv, where stockticker is a ticker symbol such as AAPL, GOOG, or YHOO, and each file holds per-minute stock returns for a given date. My SAS code first collects all the ticker names from the stockticker-Ret.csv file names into a SAS dataset. I then loop over each ticker, load the matching .csv file into a SAS dataset named WANT, run a few DATA steps on WANT, and append the final WANT dataset for each ticker to a SAS dataset named GLOBAL. As you can imagine, this process takes a very long time. Is there a way to improve my DO loop code below to make it faster?
/* Record in a SAS dataset all the CSV file names, to extract the stock tickers */
data yfiles;
  keep filename;
  length fref $8 filename $80;
  rc = filename(fref, 'F:\data\');
  if rc = 0 then do;
    did = dopen(fref);
    rc = filename(fref);
  end;
  else do;
    length msg $200.;
    msg = sysmsg();
    put msg=;
    did = .;
  end;
  if did <= 0 then putlog 'ERR' 'OR: Unable to open directory.';
  dnum = dnum(did);
  do i = 1 to dnum;
    filename = dread(did, i);
    /* If this entry is a file, then output. */
    fid = mopen(did, filename);
    if fid > 0 then output;
  end;
  rc = dclose(did);
run;
/* Store in yfiles all the stock tickers */
data yfiles(drop=filename1 rename=(filename1=stock));
  set yfiles;
  filename1 = tranwrd(filename, '-Ret.csv', '');
run;
proc sql noprint;
  select stock into :name separated by '*'
  from work.yfiles;
  %let count2 = &sqlobs;
quit;
* Create the template of the desired GLOBAL SAS dataset;
proc sql;
  create table global
    (stock char(8), time_gap num(5), avg_ret num(5));
quit;
proc sql;
  insert into global
    (stock, time_gap, avg_ret)
  values ('', 0, 0);
quit;
%macro y1;
  %do i = 1 %to &count2;
    %let j = %scan(&name, &i, *);
    proc import out=want datafile="F:\data\&j-Ret.csv"
        dbms=csv replace;
      getnames=yes;
    run;
    data want;
      set want;
      ....
    ....[Here I do 5 DATA steps on the WANT SAS file]
    /* Store the WANT file in a global SAS dataset that will contain all the stock tickers from the WANT files */
    data global;
      set global want;
    run;
  %end;
%mend y1;
%y1()
As you can see, the GLOBAL SAS dataset is rebuilt to absorb every WANT dataset that I store into it.
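One reason the loop slows down is that `data global; set global want;` rereads and rewrites everything already in GLOBAL on each iteration, so the total cost grows quadratically with the number of tickers. A minimal alternative sketch, assuming WANT already carries the STOCK, TIME_GAP, and AVG_RET columns that GLOBAL expects:

```sas
/* PROC APPEND adds WANT's rows to the end of GLOBAL without
   rereading or rewriting the rows already stored in GLOBAL.
   FORCE tolerates minor attribute differences between the two. */
proc append base=global data=want force;
run;
```

With this change the per-iteration cost depends only on the size of WANT, not on everything accumulated so far, and the dummy blank row inserted into GLOBAL at creation time would no longer be needed.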
【Comments】:
-
Do the files have a common layout?
-
Yes, they have a common layout.
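Since the files share a common layout, the 3,400 separate PROC IMPORT calls could in principle be replaced by one DATA step that reads every CSV through a single wildcard INFILE. A sketch, assuming each file has a header row and two hypothetical columns named TIME and RET (substitute the real column list):

```sas
data all_returns;
  length stock $8 fname $200;
  /* One wildcard INFILE reads all *-Ret.csv files in sequence;
     FILENAME= exposes the current file name, EOV= flags each new file. */
  infile "F:\data\*-Ret.csv" dsd truncover filename=fname eov=eov;
  input @;                      /* read the record and hold it */
  if _n_ = 1 or eov then do;    /* first line of each file is its header */
    eov = 0;
    delete;
  end;
  /* Derive the ticker from the current file name */
  stock = tranwrd(scan(fname, -1, '\'), '-Ret.csv', '');
  input time :time8. ret;       /* hypothetical columns */
run;
```

The per-ticker DATA steps could then run once over ALL_RETURNS, for example with BY STOCK processing after a sort, instead of once per file.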
Tags: performance sas do-loops