What you want is a list of all datasets in the file. The concept you need here is a recursive function: it collects all datasets from a group, and whenever one of the members turns out to be a group itself, it recursively does the same, until all datasets have been found. For example:
/
|- dataset1
|- group
|  |- dataset2
|  |- dataset3
|- dataset4
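For concreteness, a small snippet like the following would create this example layout (a sketch: the file name 'old.hdf5' matches the code further down, and the stored values are arbitrary):

import h5py
import numpy as np

# build the example layout shown above; h5py creates the intermediate
# group automatically, and the stored values are arbitrary
with h5py.File('old.hdf5', 'w') as f:
    f['dataset1'] = np.arange(3)
    f['group/dataset2'] = np.arange(3)
    f['group/dataset3'] = np.arange(3)
    f['dataset4'] = np.arange(3)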
In pseudocode, your function would look like this:
def getdatasets(key, file):
    out = []
    for name in file[key]:
        path = join(key, name)
        if file[path] is dataset: out += [path]
        else: out += getdatasets(path, file)
    return out
For our example:
- /dataset1 is a dataset: add its path to the output, giving
  out = ['/dataset1']
- /group is not a dataset: call getdatasets('/group', file)
  - /group/dataset2 is a dataset: add its path to the nested output, giving
    nested_out = ['/group/dataset2']
  - /group/dataset3 is a dataset: add its path to the nested output, giving
    nested_out = ['/group/dataset2', '/group/dataset3']
  which is then added to what we already have:
  out = ['/dataset1', '/group/dataset2', '/group/dataset3']
- /dataset4 is a dataset: add its path to the output, giving
  out = ['/dataset1', '/group/dataset2', '/group/dataset3', '/dataset4']
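Running the concrete implementation below on the example file gives the same list, though note that h5py iterates group members in name order by default, so '/dataset4' comes before the group's datasets (expected output, assuming the layout above):

>>> getdatasets('/', h5py.File('old.hdf5', 'r'))
['/dataset1', '/dataset4', '/group/dataset2', '/group/dataset3']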
This list can then be used to copy all the data to another file. To make a simple clone of the file, you could do the following:
import h5py
import numpy as np

# function that returns a list of paths to each dataset, recursing into groups
def getdatasets(key, archive):
    if key[-1] != '/': key += '/'
    out = []
    for name in archive[key]:
        path = key + name
        if isinstance(archive[path], h5py.Dataset):
            out += [path]
        else:
            out += getdatasets(path, archive)
    return out

# open HDF5-files
data     = h5py.File('old.hdf5', 'r')
new_data = h5py.File('new.hdf5', 'w')

# list the paths of all datasets in the old HDF5-file
datasets = getdatasets('/', data)

# get the group names from the list of datasets
# (reversing the string lets split('/', 1) strip everything after the last '/')
groups = list(set([i[::-1].split('/', 1)[1][::-1] for i in datasets]))
groups = [i for i in groups if len(i) > 0]

# sort groups based on depth, so that parents are created before their children
idx    = np.argsort(np.array([len(i.split('/')) for i in groups]))
groups = [groups[i] for i in idx]

# create all groups that contain a dataset that will be copied
for group in groups:
    new_data.create_group(group)

# copy datasets
for path in datasets:

    # - get the group name
    group = path[::-1].split('/', 1)[1][::-1]

    # - minimal group name
    if len(group) == 0: group = '/'

    # - copy data
    data.copy(path, new_data[group])
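As an aside, h5py also has a built-in way to walk the hierarchy, Group.visititems, which can replace the hand-written recursion; a minimal sketch, assuming the same 'old.hdf5' file:

import h5py

# collect dataset paths using h5py's built-in tree walk instead of
# the hand-written recursion above
datasets = []

def collect(name, obj):
    # 'name' is the path relative to the root group, without a leading '/'
    if isinstance(obj, h5py.Dataset):
        datasets.append('/' + name)

with h5py.File('old.hdf5', 'r') as data:
    data.visititems(collect)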
Of course, this can be customized further to your needs. You describe combining several files; in that case you would have to open the output file in append mode,

new_data = h5py.File('new.hdf5','a')

and probably add a prefix to the paths.
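For example, a minimal sketch of such a combination, where each input file is copied under its own prefix group (the input file names 'a.hdf5' and 'b.hdf5' and the prefix names are assumptions for illustration):

import h5py

# hypothetical input files, each copied under its own prefix group
sources = {'file1': 'a.hdf5', 'file2': 'b.hdf5'}

with h5py.File('new.hdf5', 'a') as new_data:
    for prefix, fname in sources.items():
        group = new_data.require_group(prefix)
        with h5py.File(fname, 'r') as data:
            # copy every top-level object; h5py copies groups recursively
            for name in data:
                data.copy(name, group)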