[Title]: merge multiple csvs to one csv
[Posted]: 2021-07-07 01:49:45
[Question]:

I'm trying to merge roughly 5,000 CSV tables into a single CSV. The individual files all share the same structure, so the code should be straightforward, but I keep getting a "file not found" error.

Here is the code:

import glob
import os

import pandas as pd

csv_paths = set(glob.glob("folder_containing_csvs/*.csv"))
full_csv_path = "folder_containing_csvs/full_df.csv"
csv_paths -= {full_csv_path}
for csv_path in csv_paths:
    print("csv_path", csv_path)
    df = pd.read_csv(csv_path, sep="\t")
    df[sorted(df.columns)].to_csv(
        full_csv_path, mode="a",
        header=not os.path.isfile(full_csv_path),
        sep="\t", index=False,
    )
full_df = pd.read_csv(full_csv_path, sep="\t", encoding='utf-8')
full_df

It produces the following error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-47-11ffadd03e3e> in <module>
----> 1 full_df = pd.read_csv(full_csv_path, sep="\t", encoding='utf-8')
      2 full_df

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in read_csv(filepath_or_buffer,
sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, 
engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, 
nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, 
infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, 
chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, 
escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, 
low_memory, memory_map, float_precision)
    686     )
    687 
--> 688     return _read(filepath_or_buffer, kwds)
    689 
    690 

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
    452 
    453     # Create the parser.
--> 454     parser = TextFileReader(fp_or_buf, **kwds)
    455 
    456     if chunksize or iterator:

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds)
    946             self.options["has_index_names"] = kwds["has_index_names"]
    947 
--> 948         self._make_engine(self.engine)
    949 
    950     def close(self):

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in _make_engine(self, engine)
   1178     def _make_engine(self, engine="c"):
   1179         if engine == "c":
-> 1180             self._engine = CParserWrapper(self.f, **self.options)
   1181         else:
   1182             if engine == "python":

~/.local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds)
   1991         if kwds.get("compression") is None and encoding:
   1992             if isinstance(src, str):
-> 1993                 src = open(src, "rb")
   1994                 self.handles.append(src)
   1995 

FileNotFoundError: [Errno 2] No such file or directory: 'folder_containing_csvs/full_df.csv'

[Comments]:

  • If they are plain CSV files, why not just open('merge.csv','w').write(open('file1.csv').read()+open('file2.csv').read())? If they have headers, strip the headers first.
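The comment's raw-concatenation idea can be sketched with the header handling it mentions, keeping the header row only from the first file (merge_files is a hypothetical helper, not part of the question's code):

```python
# A sketch of concatenating text files while keeping the header
# row only from the first file. merge_files is a hypothetical
# helper name, not from the question.
def merge_files(paths, out_path):
    with open(out_path, "w") as out:
        for i, path in enumerate(paths):
            with open(path) as f:
                lines = f.readlines()
            # Drop the header line on every file after the first.
            out.writelines(lines if i == 0 else lines[1:])
```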

Tags: python pandas csv


[Solution 1]:

The paths glob returns are relative to the directory the script is executed from (the current working directory), not to the script itself.

If your files are laid out like this:

~/code/ |
       | merge.py
       | folder_containing_csvs/  |
                                  | file1.csv
                                  | file2.csv

then the merge.py file must be run from inside the ~/code folder.

e.g.

~/code$ python merge.py

whereas running it like this

~/$ python ./code/merge.py

results in

FileNotFoundError: [Errno 2] No such file or directory: 'folder_containing_csvs/full_df.csv'
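One way to make the script robust to where it is launched from is to build paths relative to the script file itself rather than the current working directory — a minimal sketch (find_csvs is a hypothetical helper; only the folder name comes from the question):

```python
from pathlib import Path

# find_csvs is a hypothetical helper: it looks for the question's
# "folder_containing_csvs" under an explicit base directory.
def find_csvs(base_dir):
    csv_dir = Path(base_dir) / "folder_containing_csvs"
    return sorted(csv_dir.glob("*.csv"))

# Inside a script, anchor on the script's own location so the
# working directory no longer matters:
# csv_paths = find_csvs(Path(__file__).resolve().parent)
```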

[Comments]:

  • After moving the data into the /code folder, the code runs fine. Thanks for explaining the file layout glob expects.
[Solution 2]:

Try this:

import os

import pandas as pd

loc_path = "/path/to/folder/of/csv's/"
files = os.listdir(loc_path)
files = [file for file in files if file.endswith('.csv')]

# now load them into a list
dfs = []
for file in files:
    dfs.append(pd.read_csv(loc_path + file, sep='\t'))

# concat the dfs list:

df = pd.concat(dfs, ignore_index=True)
# Send this df to_csv at a location of your choice.

Just regarding the "5000 csv tables" part — how many rows are you expecting?
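The same idea can be wrapped into a single pass that reads every file, concatenates once, and skips the output file itself (merge_folder and its defaults are illustrative assumptions, not from the answer):

```python
import glob
import os

import pandas as pd

# merge_folder is a hypothetical helper: it concatenates every
# tab-separated CSV in `folder` (except the output file itself)
# and writes the result back into the same folder.
def merge_folder(folder, out_name="full_df.csv"):
    out_path = os.path.join(folder, out_name)
    paths = [p for p in sorted(glob.glob(os.path.join(folder, "*.csv")))
             if os.path.abspath(p) != os.path.abspath(out_path)]
    frames = [pd.read_csv(p, sep="\t") for p in paths]
    pd.concat(frames, ignore_index=True).to_csv(out_path, sep="\t", index=False)
    return out_path
```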

[Comments]:
