【Question Title】:python pandas how to read csv file by block
【Posted on】:2022-01-25 02:36:52
【Question Description】:

I am trying to read a CSV file block by block.

The CSV looks like:

No.,time,00:00:00,00:00:01,00:00:02,00:00:03,00:00:04,00:00:05,00:00:06,00:00:07,00:00:08,00:00:09,00:00:0A,...
1,2021/09/12 02:16,235,610,345,997,446,130,129,94,555,274,4,
2,2021/09/12 02:17,364,210,371,341,294,87,179,106,425,262,3,
1434,2021/09/12 02:28,269,135,372,262,307,73,86,93,512,283,4,
1435,2021/09/12 02:29,281,207,688,322,233,75,69,85,663,276,2,
No.,time,00:00:10,00:00:11,00:00:12,00:00:13,00:00:14,00:00:15,00:00:16,00:00:17,00:00:18,00:00:19,00:00:1A,...
1,2021/09/12 02:16,255,619,200,100,453,456,4,19,56,23,4,
2,2021/09/12 02:17,368,21,37,31,24,8,19,1006,4205,2062,30,
1434,2021/09/12 02:28,2689,1835,3782,2682,307,743,256,741,52,23,6,
1435,2021/09/12 02:29,2281,2047,6848,3522,2353,755,659,885,6863,26,36,

Each block starts with a No. header row, followed by data rows.

import pickle
import struct
import time
import zipfile

import pandas as pd

def run(sock, delay, zipobj):
    zf = zipfile.ZipFile(zipobj)
    for f in zf.namelist():
        print(zf.filename)
        print("csv name: ", f)
        df = pd.read_csv(zf.open(f), skiprows=[0, 1, 2, 3, 4, 5])  # nrows=1435? (but what about the next blocks?)
        print(df, '\n')
        date_pattern = '%Y/%m/%d %H:%M'
        # create epoch as a column
        df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time, date_pattern))), axis=1)
        tuples = []  # data will be saved in a list
        formated_str = 'perf.type.serial.object.00.00.00.TOTAL_IOPS'
        for each_column in list(df.columns)[2:-1]:
            for e in zip(list(df['epoch']), list(df[each_column])):
                each_column = each_column.replace("X", '')
                # print(f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS", e)
                tuples.append((f"perf.type.serial.LDEV.{each_column}.TOTAL_IOPS", e))
        package = pickle.dumps(tuples, 1)
        size = struct.pack('!L', len(package))
        sock.sendall(size)
        sock.sendall(package)
        time.sleep(delay)
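As an aside, the per-row epoch conversion above (time.mktime on each row) can also be done in one vectorized call with pd.to_datetime. A minimal sketch; note that this treats the naive timestamps as UTC, whereas time.mktime interprets them in the machine's local timezone:

```python
import pandas as pd

df = pd.DataFrame({"time": ["2021/09/12 02:16", "2021/09/12 02:17"]})
# Parse the whole column at once, then convert nanoseconds to seconds.
# Naive timestamps are treated as UTC here (time.mktime would use local time).
df["epoch"] = pd.to_datetime(df["time"], format="%Y/%m/%d %H:%M").astype("int64") // 10**9
print(df["epoch"].tolist())  # [1631412960, 1631413020]
```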

Thanks a lot for your help,

【Question Comments】:

  • Also, I've seen data in this format on StackOverflow before... what is it called, "carbon" something?
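The commenter is likely thinking of Graphite's carbon daemon: the question's send code (a 4-byte big-endian length header followed by a pickled list of (path, (timestamp, value)) tuples) matches carbon's pickle protocol. A minimal sketch of just the framing, with a made-up tuple:

```python
import pickle
import struct

tuples = [("perf.type.serial.LDEV.00.00.00.TOTAL_IOPS", (1631413020, 235))]
payload = pickle.dumps(tuples, 1)         # protocol 1, as in the question
header = struct.pack('!L', len(payload))  # 4-byte big-endian length prefix
message = header + payload

# The receiver reads 4 bytes, unpacks the length, then reads that many bytes.
(size,) = struct.unpack('!L', message[:4])
print(size == len(payload))  # True
```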

Tags: python pandas csv


【Solution 1】:

Load your file with pd.read_csv, and start a new block each time the first column equals No.. Use groupby to iterate over the blocks and build a new dataframe for each one.

data = pd.read_csv('data.csv', header=None)
dfs = []
for _, df in data.groupby(data[0].eq('No.').cumsum()):
    df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
    dfs.append(df.rename_axis(columns=None))

Output:

# First block
>>> dfs[0]
    No.              time 00:00:00 00:00:01 00:00:02 00:00:03 00:00:04 00:00:05 00:00:06 00:00:07 00:00:08 00:00:09 00:00:0A  ...
0     1  2021/09/12 02:16      235      610      345      997      446      130      129       94      555      274        4  NaN
1     2  2021/09/12 02:17      364      210      371      341      294       87      179      106      425      262        3  NaN
2  1434  2021/09/12 02:28      269      135      372      262      307       73       86       93      512      283        4  NaN
3  1435  2021/09/12 02:29      281      207      688      322      233       75       69       85      663      276        2  NaN


# Second block
>>> dfs[1]
    No.              time 00:00:10 00:00:11 00:00:12 00:00:13 00:00:14 00:00:15 00:00:16 00:00:17 00:00:18 00:00:19 00:00:1A  ...
0     1  2021/09/12 02:16      255      619      200      100      453      456        4       19       56       23        4  NaN
1     2  2021/09/12 02:17      368       21       37       31       24        8       19     1006     4205     2062       30  NaN
2  1434  2021/09/12 02:28     2689     1835     3782     2682      307      743      256      741       52       23        6  NaN
3  1435  2021/09/12 02:29     2281     2047     6848     3522     2353      755      659      885     6863       26       36  NaN

And so on.
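The cumsum trick generalizes to any number of blocks: every row whose first column is No. bumps the group id, so each block becomes its own group. A self-contained sketch with a made-up two-block CSV:

```python
import io

import pandas as pd

csv_text = """No.,time,00:00:00,00:00:01
1,2021/09/12 02:16,235,610
2,2021/09/12 02:17,364,210
No.,time,00:00:10,00:00:11
1,2021/09/12 02:16,255,619
2,2021/09/12 02:17,368,21
"""

data = pd.read_csv(io.StringIO(csv_text), header=None)
# Each 'No.' row starts a new block; cumsum assigns a block id to every row.
dfs = []
for _, df in data.groupby(data[0].eq('No.').cumsum()):
    df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
    dfs.append(df.rename_axis(columns=None))

print(len(dfs))              # 2
print(list(dfs[1].columns))  # ['No.', 'time', '00:00:10', '00:00:11']
```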

【Comments】:

  • Thanks Corralien, but what if the csv has 3 or 4 blocks?
  • If there are 10 blocks, then dfs will contain 10 elements.
【Solution 2】:

Sorry, I haven't managed to get the code right:

import pickle
import re
import struct
import time
import zipfile

import pandas as pd

def run(sock, delay, zipobj):
    zf = zipfile.ZipFile(zipobj)
    for f in zf.namelist():
        print("using zip :", zf.filename)
        name = f  # avoid shadowing the built-in str
        myobject = re.search(r'(^[a-zA-Z]{4})_.*', name)
        Objects = myobject.group(1)
        if Objects == 'LDEV':
            metric = re.search('.*LDEV_(.*)/.*', name).group(1)
        elif Objects == 'Port':
            metric = re.search('.*/(Port_.*).csv', name).group(1)
        else:
            metric = None
            print("None")
        print("using csv : ", f)
        data = pd.read_csv(zf.open(f), header=None, skiprows=[0, 1, 2, 3, 4, 5])
        dfs = []
        tuples = []  # accumulate data from every block before sending
        date_pattern = '%Y/%m/%d %H:%M'
        for _, df in data.groupby(data[0].eq('No.').cumsum()):
            df = pd.DataFrame(df.iloc[1:].values, columns=df.iloc[0])
            dfs.append(df.rename_axis(columns=None))
            # create epoch as a column
            df['epoch'] = df.apply(lambda row: int(time.mktime(time.strptime(row.time, date_pattern))), axis=1)
            for each_column in list(df.columns)[2:-1]:
                for e in zip(list(df['epoch']), list(df[each_column])):
                    each_column = each_column.replace("X", '')
                    tuples.append((f"perf.type.serial.{Objects}.{each_column}.{metric}", e))
        package = pickle.dumps(tuples, 1)
        size = struct.pack('!L', len(package))
        sock.sendall(size)
        sock.sendall(package)
        time.sleep(delay)
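The filename parsing at the top of this answer can be exercised on its own. A sketch reusing the same regexes; the sample filenames are invented and may not match the real archive layout:

```python
import re

def parse_metric(name):
    # The first 4 letters before an underscore decide the object type.
    obj = re.search(r'(^[a-zA-Z]{4})_.*', name).group(1)
    if obj == 'LDEV':
        metric = re.search('.*LDEV_(.*)/.*', name).group(1)
    elif obj == 'Port':
        metric = re.search('.*/(Port_.*).csv', name).group(1)
    else:
        metric = None
    return obj, metric

print(parse_metric('LDEV_TOTAL_IOPS/data.csv'))  # ('LDEV', 'TOTAL_IOPS')
print(parse_metric('Port_dir/Port_IOPS.csv'))    # ('Port', 'Port_IOPS')
```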

Thanks for your help,

【Comments】:

  • Is this meant as a response to @Corralien's answer? Please don't post an answer for something you tried that didn't work. Update the code in your original question and delete this answer.
  • Don't use dfs in your code: df already contains the current block. Replace dfs['epoch'] with df['epoch'].
  • That did it, thanks Corralien. The problem was in my code! :)