【Question Title】: Tail file that does not start with a symbol
【Posted】: 2020-05-31 14:31:46
【Problem Description】:

I have a set of raw CSV files that, in addition to the column names, carry a comment header (lines marked with the # symbol), like this:

# This data is taken from ....
# ...
# ...
# ...
# col1,col2,...,coln
#
[csv data rows starts here]

The number of lines above the line containing the column names is not fixed per file.

How can I "cut" the files (creating new files) so that the output is in standard CSV format?

col1,col2,...,coln
[csv data rows starts here]

I'm doing some data wrangling in a Jupyter notebook, so I'm interested in doing this both with an inline shell script (perhaps using tail) and with Python.
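On the shell side, one possible sketch (file names raw.csv and clean.csv are placeholders) is an awk one-liner that keeps the last non-empty commented line as the column-name row, drops the rest of the comment block, and passes the data rows through unchanged:

```shell
# Keep the last non-empty '#' line as the header row, drop all other
# '#' lines, and print the data rows as-is.
awk '
    /^#/ { line = substr($0, 2); gsub(/^[ \t]+|[ \t]+$/, "", line)
           if (line != "") hdr = line; next }
    hdr != "" { print hdr; hdr = "" }
    { print }
' raw.csv > clean.csv
```

This assumes, as in the sample above, that the last non-empty commented line before the data holds the column names.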

【Discussion】:

  • You say, 'The number of lines above the line containing the column names is not fixed per file'; however, is the structure at the end of that comment block consistent before your CSV data rows begin?
  • Are there any 'comment lines' after the CSV rows begin?

Tags: python-3.x bash csv jupyter-notebook text-processing


【Solution 1】:

Below is a Python version you can use in a Jupyter notebook. You'll need to replace the <file_name>s listed in the file_names = ["<file_name1>","<file_name2>"] line with your own.

import os
import sys
import pandas as pd
try:
    from StringIO import StringIO
except ImportError:
    from io import StringIO

def mine_header(fn):
    '''
    To answer https://stackoverflow.com/q/60249235/8508004

    Takes a file name as input

    Assumes last commented line with contents before the data rows start 
    contains the column names. Could be condensed, to read in all text once and
    then rsplit on last `#` but going line by line at start offers more 
    opportunity for customizing later if not quite matching pattern seen in 
    data files. Also could just assume second last line above the data contains 
    the column names? In that case, could skip 
    `header = [x for x in header if x]` line and use 
    `col_names = header[-2].split(",")` instead.

    Returns list of column names and rest of contents of csv file beyond
    header.
    '''
    # Parse the file line by line, collecting commented header lines until the
    # first uncommented line is hit, then collect everything after that point
    # as data rows.
    beyond_header = False
    header = [] # collect the header lines 
    data_rows = "" # collect the data rows
    # go through the file line by line until beyond commented out header
    with open(fn, 'r') as input:
        for line in input:
            if beyond_header:
                data_rows += line
            elif line.startswith("#"):
                header.append(line[1:].strip()) # leave off comment symbol and 
                # remove any leading and trailing whitespace
            # If line doesn't start with comment symbol, we have hit the end of 
            # the header and want to start collecting the csv data rows
            else:
                data_rows += line
                beyond_header = True
    # Now process the header lines to get the column names.
    header = [x for x in header if x]  # The last row before the data should be
    # empty, so the list comprehension removes it, leaving the last element as
    # the one with the column names.
    col_names = header[-1].split(",")
    return col_names, data_rows


file_names = ["<file_name1>","<file_name2>"]
df_dict = {}
for i,fn in enumerate(file_names):
    col_names, data_rows = mine_header(fn)
    df_dict[i] = pd.read_csv(StringIO(data_rows), header=0, names=col_names)

# display the produced dataframes
from IPython.display import display, HTML
for df in df_dict.values():
    display(df)

Each pandas dataframe can be accessed by the index matching the list of files you created. For example, the dataframe produced from the third csv file would be df_dict[2].
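As a side note (a sketch, not part of the function above): pandas' read_csv also accepts a comment parameter that ignores the commented lines on its own, so only the column names need to be mined beforehand. The string below is a toy stand-in for one of the raw files described in the question:

```python
import io
import pandas as pd

# Toy stand-in for one of the raw files described in the question.
raw = "# This data is taken from ...\n# col1,col2\n#\n1,2\n3,4\n"

# Mine the column names: last non-empty commented line, as in mine_header().
commented = [l[1:].strip() for l in raw.splitlines() if l.startswith("#")]
col_names = [l for l in commented if l][-1].split(",")

# comment="#" makes pandas skip the whole commented header block itself.
df = pd.read_csv(io.StringIO(raw), comment="#", header=None, names=col_names)
```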

I went a bit beyond what you asked because splitting the column names out into a list was easy to design into the mining function, and Pandas is set up to handle everything after that.
If you really want the output as standard CSV, you can take the col_names and data_rows returned by col_names, data_rows = mine_header(fn) and save a CSV file. You can combine them into a single string to save, like this:

col_names_as_string = ",".join(col_names)
string_to_save = col_names_as_string + "\n" + data_rows
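A minimal sketch of writing that string to disk (sample values stand in for what mine_header() returns, and clean.csv is a placeholder file name):

```python
# Sample values standing in for the output of mine_header() above.
col_names = ["col1", "col2"]
data_rows = "1,2\n3,4\n"

col_names_as_string = ",".join(col_names)
string_to_save = col_names_as_string + "\n" + data_rows

# Write the reassembled standard CSV out to a new file.
with open("clean.csv", "w") as out:
    out.write(string_to_save)
```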

【Comments】:
