[Question title]: NETCDF4 file doesn't grow beyond 2GB
[Posted]: 2021-09-04 19:23:07
[Question]:

I have a NETCDF4 file that never grows beyond 2 GB.

I am working with the following sample data; I am converting 200+ txt files into a single netcdf4 file:

STATIONS_ID;MESS_DATUM;  QN;FF_10;DD_10;eor
       3660;201912150000;    3;   4.6; 170;eor
       3660;201912150010;    3;   4.2; 180;eor
       3660;201912150020;    3;   4.3; 190;eor
       3660;201912150030;    3;   5.2; 190;eor
       3660;201912150040;    3;   5.1; 190;eor
       3660;201912150050;    3;   4.8; 190;eor

The code is as follows:

import os
import numpy as np
import pandas as pd
import netCDF4

files = [f for f in os.listdir('.') if os.path.isfile(f)]
count = 0 
for f in files:

    filecp = open(f, "r", encoding="ISO-8859-1")

    # NC file setup
    mydata = netCDF4.Dataset('v5.nc', 'w', format='NETCDF4')
    
    mydata.description = 'Measurement Data'
    
    mydata.createDimension('STATION_ID',None)
    mydata.createDimension('MESS_DATUM',None)
    mydata.createDimension('QN',None)
    mydata.createDimension('FF_10',None)
    mydata.createDimension('DD_10',None)
    
    STATION_ID = mydata.createVariable('STATION_ID',np.short,('STATION_ID'))
    MESS_DATUM = mydata.createVariable('MESS_DATUM',np.long,('MESS_DATUM'))
    QN = mydata.createVariable('QN',np.byte,('QN'))
    FF_10 = mydata.createVariable('FF_10',np.float64,('FF_10'))
    DD_10 = mydata.createVariable('DD_10',np.short,('DD_10'))
    
    STATION_ID.units = ''
    MESS_DATUM.units = 'Central European Time yyyymmddhhmi'
    QN.units = ''
    FF_10.units = 'meters per second'
    DD_10.units = "degree"
    
    txtdata = pd.read_csv(filecp, delimiter=';').values
    
    #txtdata = np.genfromtxt(filecp, dtype=None, delimiter=';', names=True, encoding=None)
    if len(txtdata) > 0:
        
        df = pd.DataFrame(txtdata)

        sh = txtdata.shape
        print("txtdata shape is ", sh)
    
        mydata['STATION_ID'][:] = df[0]
        mydata['MESS_DATUM'][:] = df[1]
        mydata['QN'][:] = df[2]
        mydata['FF_10'][:] = df[3]
        mydata['DD_10'][:] = df[4]
    
        
    mydata.close()
    filecp.close()
    count +=1

[Comments]:

  • Are you using 32-bit Python?
  • How do I check that? (replying to @talonmies)
  • @talonmies I'm on Mac OS :(
  • python -c "import ctypes; print(32 if ctypes.sizeof(ctypes.c_voidp)==4 else 64, 'bit CPU')" >>> 64 bit CPU

Tags: python-3.x netcdf netcdf4 measurement cdo-climate


[Solution 1]:

Your problem is that you re-create the same output file inside the loop, so v5.nc only ever holds the data of a single input file.

Open the output file once, before the loop, and append each file's data to the end of the netCDF arrays.

If you get 124 values from the first file, you write:

mydata['STATION_ID'][0:124] = df[0]

and if you get 224 from the second file, you write:

mydata['STATION_ID'][124:124+224] = df[0]

So, assuming the data files were downloaded from https://opendata.dwd.de/climate_environment/CDC/observations_germany/climate/10_minutes/wind/recent/ to <text file path>:

import netCDF4
import codecs
import pandas as pd
import os
import numpy as np


mydata = netCDF4.Dataset('v5.nc', 'w', format='NETCDF4')
mydata.description = 'Wind Measurement Data'
mydata.createDimension('STATION_ID',None)
mydata.createDimension('MESS_DATUM',None)
mydata.createDimension('QN',None)
mydata.createDimension('FF_10',None)
mydata.createDimension('DD_10',None)

STATION_ID = mydata.createVariable('STATION_ID',np.short,('STATION_ID'))
MESS_DATUM = mydata.createVariable('MESS_DATUM',np.int64,('MESS_DATUM'))  # np.int64: the np.long alias was removed in NumPy 1.24
QN = mydata.createVariable('QN',np.byte,('QN'))
FF_10 = mydata.createVariable('FF_10',np.float64,('FF_10'))
DD_10 = mydata.createVariable('DD_10',np.short,('DD_10'))

STATION_ID.units = ''
MESS_DATUM.units = 'Central European Time yyyymmddhhmi'
QN.units = ''
FF_10.units = 'meters per second'
DD_10.units = "degree"    
fpath = <text file path>
files = [f for f in os.listdir(fpath)]
count = 0 
mydata_startindex=0
for f in files:
    filecp = open(fpath+f, "r", encoding="ISO-8859-1")
    txtdata = pd.read_csv(filecp, delimiter=';')
    chunksize = len(txtdata)
    if len(txtdata) > 0:          
        mydata['STATION_ID'][mydata_startindex:mydata_startindex+chunksize] = txtdata['STATIONS_ID']
        mydata['MESS_DATUM'][mydata_startindex:mydata_startindex+chunksize] = txtdata['MESS_DATUM']
        mydata['QN'][mydata_startindex:mydata_startindex+chunksize] = txtdata['  QN']
        mydata['FF_10'][mydata_startindex:mydata_startindex+chunksize] = txtdata['FF_10']
        mydata['DD_10'][mydata_startindex:mydata_startindex+chunksize] = txtdata['DD_10']
        mydata_startindex += chunksize
    filecp.close()

mydata.close()
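A note on the odd-looking lookup key `txtdata['  QN']`: the DWD header pads column names with spaces, and pandas keeps them verbatim. Stripping the names once makes the column access robust (a sketch; the inline sample string stands in for a real txt file):

```python
import io
import pandas as pd

# The DWD header line pads some column names ('  QN'), and pandas keeps
# that padding. Stripping the column names once avoids fragile lookups.
sample = ("STATIONS_ID;MESS_DATUM;  QN;FF_10;DD_10;eor\n"
          "       3660;201912150000;    3;   4.6; 170;eor\n")
txtdata = pd.read_csv(io.StringIO(sample), sep=';')
txtdata.columns = txtdata.columns.str.strip()
print(list(txtdata.columns))   # ['STATIONS_ID', 'MESS_DATUM', 'QN', 'FF_10', 'DD_10', 'eor']
```

After the strip, every column can be accessed as `txtdata['QN']` instead of `txtdata['  QN']`.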

[Discussion]:

  • To me it seems easier if the OP concatenates the DataFrames created from the text files and then, as a last step, writes the columns into the nc dataset.
  • @MrFuppes That only works if all the DataFrames fit in memory, though it may indeed be simpler. Since NetCDF4 has good support for chunking and appending, I don't think partial writes are bad practice. For production code I would use dask and xarray anyway.
  • @kakk11 The chunk size causes a problem on the second iteration. The first iteration, with one file, works fine. What is the problem? Can you help? Also, what do we base the chunk size on?
  • @MrFuppes: if you can, please help me with the problem above
  • Update: the chunk-size query now works for any number of iterations. Which parameters does the chunk size depend on? How does it change at run time?