[Posted]: 2016-07-27 16:42:24
[Problem description]:
I have two simple example files with the following data structure:
person.csv
0|John
1|Maria
2|Anne
and
item.csv
0|car|blue
0|bycicle|red
1|phone|gold
2|purse|black
2|book|black
I need to collect all the related lines across the files (lines sharing the same identifier, in this example the integers 0, 1, or 2) and, once collected, perform some operations on them (not relevant to this question). The first group of related lines (a list of strings) should look like this:
0|John
0|car|blue
0|bycicle|red
The second group of related lines:
1|Maria
1|phone|gold
and so on.
The actual files are each about 5 to 10 GB. The files are sorted on the first column, and the file with the smallest id is opened for reading first. Memory is a limiting factor (the whole file cannot be read into memory). With that in mind I wrote the code below, which seems to read most of the lines just fine and group them the way I want... however, the last part (in my code I log every 250,000 groups) takes much longer, and memory usage spikes.
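For comparison, the setup described above (several files sorted by id, too large for memory) is the classic input for a streaming k-way merge. Below is a minimal sketch of that approach using a `PriorityQueue` keyed on the parsed id; the class and method names (`KWayMergeSketch`, `mergeGroups`, `Source`) are my own, and `StringReader` stands in for the real multi-gigabyte files:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class KWayMergeSketch {

    // One open reader plus its current (not yet consumed) line and that line's parsed id.
    private static final class Source {
        final BufferedReader reader;
        String line;
        int id;

        Source(BufferedReader reader) throws IOException {
            this.reader = reader;
            advance();
        }

        // Read the next line; returns false at end of file.
        boolean advance() throws IOException {
            line = reader.readLine();
            if (line == null) {
                return false;
            }
            id = Integer.parseInt(line.substring(0, line.indexOf('|')));
            return true;
        }
    }

    // Merge sorted readers into successive groups of lines sharing one id.
    static List<List<String>> mergeGroups(List<BufferedReader> readers) throws IOException {
        PriorityQueue<Source> queue = new PriorityQueue<>((a, b) -> Integer.compare(a.id, b.id));
        for (BufferedReader r : readers) {
            Source s = new Source(r);
            if (s.line != null) {
                queue.add(s);
            }
        }
        List<List<String>> groups = new ArrayList<>();
        while (!queue.isEmpty()) {
            int currentId = queue.peek().id;
            List<String> group = new ArrayList<>();
            // Drain every source whose current line carries the smallest id.
            while (!queue.isEmpty() && queue.peek().id == currentId) {
                Source s = queue.poll();
                do {
                    group.add(s.line);
                } while (s.advance() && s.id == currentId);
                if (s.line != null) {
                    queue.add(s); // this source still has lines for a later id
                }
            }
            groups.add(group);
        }
        return groups;
    }

    public static void main(String[] args) throws IOException {
        // The sample data from the question instead of the real 5-10 GB files.
        BufferedReader person = new BufferedReader(new StringReader("0|John\n1|Maria\n2|Anne"));
        BufferedReader item = new BufferedReader(new StringReader(
                "0|car|blue\n0|bycicle|red\n1|phone|gold\n2|purse|black\n2|book|black"));
        for (List<String> group : mergeGroups(Arrays.asList(person, item))) {
            System.out.println(group); // process the related lines here
        }
    }
}
```

Only one line per open reader is held in memory at a time, so the footprint stays constant regardless of file size.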
Main
public class Main {

    private static int groupCount = 0;
    private static int totalGroupCount = 0;
    private static long start = 0;
    private static int lineCount;

    public static void main(String[] args) {
        GroupedReader groupedReader = new GroupedReader();
        groupedReader.orderReadersOnSmallestId();

        long fullStart = System.currentTimeMillis();
        start = System.currentTimeMillis();
        lineCount = 0;

        while (groupedReader.hasNext()) {
            groupCount++;
            List<String> relatedLines = groupedReader.readNextGroup();
            for (String line : relatedLines) {
                lineCount++;
            }
            totalGroupCount++;
            if (groupCount == 250_000) {
                System.out.println("Building " + NumberFormat.getNumberInstance(Locale.US).format(groupCount) + " groups took " + (System.currentTimeMillis() - start) / 1e3 + " sec");
                groupCount = 0;
                start = System.currentTimeMillis();
            }
        }
        System.out.println("Building " + NumberFormat.getNumberInstance(Locale.US).format(groupCount) + " groups took " + (System.currentTimeMillis() - start) / 1e3 + " sec");
        System.out.println(String.format("Building [ %s ] groups from [ %s ] lines took %s seconds", NumberFormat.getNumberInstance(Locale.US).format(totalGroupCount), NumberFormat.getNumberInstance(Locale.US).format(lineCount), (System.currentTimeMillis() - fullStart) / 1e3));
        System.out.println("all done!");
    }
}
GroupedReader (some methods omitted)
public class GroupedReader {

    private static final String DELIMITER = "|";
    private static final String INPUT_DIR = "src/main/resources/";

    private boolean EndOfFile = true;
    private List<BufferedReader> sortedReaders;
    private TreeMap<Integer, List<String>> cachedLines;
    private List<String> relatedLines;
    private int previousIdentifier;

    public boolean hasNext() {
        return (sortedReaders.isEmpty()) ? false : true;
    }

    public List<String> readNextGroup() {
        updateCache();
        EndOfFile = true;
        for (int i = 0; i < sortedReaders.size(); i++) {
            List<String> currentLines = new ArrayList<>();
            try {
                BufferedReader br = sortedReaders.get(i);
                for (String line; (line = br.readLine()) != null;) {
                    int firstDelimiterIndex = StringUtils.ordinalIndexOf(line, DELIMITER, 1);
                    int currentIdentifier = Integer.parseInt(line.substring(0, firstDelimiterIndex));
                    if (previousIdentifier == -1) {
                        // first iteration
                        previousIdentifier = currentIdentifier;
                        relatedLines.add(i + DELIMITER + line);
                        continue;
                    } else if (currentIdentifier > previousIdentifier) {
                        // next identifier, so put the lines in the cache
                        currentLines.add(i + DELIMITER + line);
                        if (cachedLines.get(currentIdentifier) != null) {
                            List<String> local = cachedLines.get(currentIdentifier);
                            local.add(i + DELIMITER + line);
                        } else {
                            cachedLines.put(currentIdentifier, currentLines);
                        }
                        EndOfFile = false;
                        break;
                    } else {
                        // same identifier
                        relatedLines.add(i + DELIMITER + line);
                    }
                }
                if (EndOfFile) {
                    // is this close needed?
                    br.close();
                    sortedReaders.remove(br);
                }
            } catch (NumberFormatException | IOException e) {
                e.printStackTrace();
            }
        }
        if (cachedLines.isEmpty()) cachedLines = null;
        return relatedLines;
    }

    private void updateCache() {
        if (cachedLines != null) {
            previousIdentifier = cachedLines.firstKey();
            relatedLines = cachedLines.get(cachedLines.firstKey());
            cachedLines.remove(cachedLines.firstKey());
        } else {
            previousIdentifier = -1;
            relatedLines = new ArrayList<>();
            cachedLines = new TreeMap<>();
            // root of all evil...?
            System.gc();
        }
    }
}
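As context for the `cachedLines` bookkeeping above: it exists because detecting the end of a group means reading one line past it, which consumes the first line of the next group. One common way to make that overshoot explicit is a small "peekable" reader wrapper that keeps the unconsumed line with its own reader instead of in a shared map. A sketch, with a hypothetical class name and API of my choosing:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Wraps a BufferedReader so the next line can be inspected without consuming it.
public class PeekableLineReader {
    private final BufferedReader reader;
    private String buffered; // the overshoot line, if any

    public PeekableLineReader(BufferedReader reader) {
        this.reader = reader;
    }

    // Return the next line without consuming it, or null at end of file.
    public String peek() throws IOException {
        if (buffered == null) {
            buffered = reader.readLine();
        }
        return buffered;
    }

    // Consume and return the next line, or null at end of file.
    public String next() throws IOException {
        String line = peek();
        buffered = null;
        return line;
    }

    public static void main(String[] args) throws IOException {
        PeekableLineReader r = new PeekableLineReader(
                new BufferedReader(new StringReader("0|John\n1|Maria")));
        System.out.println(r.peek()); // 0|John (still not consumed)
        System.out.println(r.next()); // 0|John
        System.out.println(r.next()); // 1|Maria
        System.out.println(r.next()); // null
    }
}
```

With this, a group loop can `peek()` at the next id and stop cleanly at a group boundary, with no line ever parked outside its reader.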
I have tried "playing around" with explicitly closing the readers and calling the garbage collector, but I cannot spot the actual flaw in the code I wrote.
Question:
What causes the slowdown in reading near the end of the files?
Simple sysout log:
Building 250,000 groups took 0.394 sec
Building 250,000 groups took 0.261 sec
Building 250,000 groups took 0.289 sec
...
Building 250,000 groups took 0.281 sec
Building 250,000 groups took 0.314 sec
Building 211,661 groups took 10.829 sec
Building [ 9,961,661 ] groups from [ 31,991,125 ] lines took 21.016 seconds
all done!
Tags: java memory memory-leaks io bufferedreader