【Title】: Getting an IOException when running a sample code in "Mahout in Action" on mahout-0.6
【Posted】: 2012-03-22 21:16:38
【Description】:

I am learning Mahout and reading "Mahout in Action".

When I try to run the sample code SimpleKMeansClustering.java from chapter 7, the following exception is thrown:

    Exception in thread "main" java.io.IOException: wrong value class: 0.0: null is not class org.apache.mahout.clustering.WeightedPropertyVectorWritable
        at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1874)
        at SimpleKMeansClustering.main(SimpleKMeansClustering.java:95)

I ran this code successfully on mahout-0.5, but on mahout-0.6 I get this exception. Even after changing the directory name from clusters-0 to clusters-0-final, I still see it.

    KMeansDriver.run(conf, vectors, new Path(canopyCentroids, "clusters-0-final"), clusterOutput, new TanimotoDistanceMeasure(), 0.01, 20, true, false);//First, I changed this path.

    SequenceFile.Reader reader = new SequenceFile.Reader(fs,  new Path("output/clusters/clusteredPoints/part-m-00000"), conf);//I double checked this folder and filename.

    IntWritable key = new IntWritable();
    WeightedVectorWritable value = new WeightedVectorWritable();
    int i=0;
    while(reader.next(key, value)) {
        System.out.println(value.toString() + " belongs to cluster " + key.toString());
        i++;
    }
    System.out.println(i);
    reader.close();

Does anyone know about this exception? I have been trying to solve it for a long time without any luck, and there are few resources about it on the Internet.

Thanks in advance

【Discussion】:

  • This usually means your input is empty or malformed. Also note that the book works with Mahout 0.5, though in general I would not expect problems running the examples with 0.6. Can't say for sure, though.
  • Thanks, Sean Owen. I will go with Mahout 0.5 then. :)
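
When a SequenceFile read fails with a "wrong value class" error like the one above, it can help to ask the reader which types the file was actually written with before guessing. A minimal diagnostic sketch, using the path from the question (the `getKeyClassName()`/`getValueClassName()` accessors are part of Hadoop's `SequenceFile.Reader` API; treat the rest as illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class InspectSeqFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Open the clustered-points file and print the record types recorded in its
        // header, instead of assuming WeightedVectorWritable.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs,
                new Path("output/clusters/clusteredPoints/part-m-00000"), conf);
        try {
            System.out.println("key class:   " + reader.getKeyClassName());
            System.out.println("value class: " + reader.getValueClassName());
        } finally {
            reader.close();
        }
    }
}
```

If the value class printed is WeightedPropertyVectorWritable, that is the type to instantiate in the read loop.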

Tags: mahout k-means


【Solution 1】:

To make this example work in Mahout 0.6, add the import

    import org.apache.mahout.clustering.WeightedPropertyVectorWritable;

and replace the line

    WeightedVectorWritable value = new WeightedVectorWritable();

with

    WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();

This is because the Mahout 0.6 code writes the clustering output values as the new type WeightedPropertyVectorWritable.
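Put together, the corrected read loop from the question would look like this sketch (paths as in the question; assumes Mahout 0.6 and Hadoop on the classpath):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.mahout.clustering.WeightedPropertyVectorWritable;

public class ReadClusteredPoints {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs,
                new Path("output/clusters/clusteredPoints/part-m-00000"), conf);
        IntWritable key = new IntWritable();
        // Mahout 0.6 writes WeightedPropertyVectorWritable values,
        // so the value buffer must be of that type.
        WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
        int i = 0;
        while (reader.next(key, value)) {
            System.out.println(value.toString() + " belongs to cluster " + key.toString());
            i++;
        }
        System.out.println(i);
        reader.close();
    }
}
```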

【Discussion】:

【Solution 2】:

For anyone it may concern, here is a working MiA example for Mahout 0.9:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;
    import org.apache.mahout.clustering.Cluster;
    import org.apache.mahout.clustering.classify.WeightedPropertyVectorWritable;
    import org.apache.mahout.clustering.kmeans.KMeansDriver;
    import org.apache.mahout.clustering.kmeans.Kluster;
    import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
    import org.apache.mahout.math.RandomAccessSparseVector;
    import org.apache.mahout.math.Vector;
    import org.apache.mahout.math.VectorWritable;
    
    import java.io.File;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    
    public class SimpleKMeansClustering {
    
        public static final double[][] points = {
                {1, 1}, {2, 1}, {1, 2},
                {2, 2}, {3, 3}, {8, 8},
                {9, 8}, {8, 9}, {9, 9}};
    
        public static void writePointsToFile(List<Vector> points,
                                             String fileName,
                                             FileSystem fs,
                                             Configuration conf) throws IOException {
            Path path = new Path(fileName);
            SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf,
                    path, LongWritable.class, VectorWritable.class);
            long recNum = 0;
            VectorWritable vec = new VectorWritable();
            for (Vector point : points) {
                vec.set(point);
                writer.append(new LongWritable(recNum++), vec);
            }
            writer.close();
        }
    
        public static List<Vector> getPoints(double[][] raw) {
            List<Vector> points = new ArrayList<Vector>();
            for (int i = 0; i < raw.length; i++) {
                double[] fr = raw[i];
                Vector vec = new RandomAccessSparseVector(fr.length);
                vec.assign(fr);
                points.add(vec);
            }
            return points;
        }
    
        public static void main(String args[]) throws Exception {
    
            int k = 2;
    
            List<Vector> vectors = getPoints(points);
    
            File testData = new File("clustering/testdata");
            if (!testData.exists()) {
                testData.mkdir();
            }
            testData = new File("clustering/testdata/points");
            if (!testData.exists()) {
                testData.mkdir();
            }
    
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            writePointsToFile(vectors, "clustering/testdata/points/file1", fs, conf);
    
            Path path = new Path("clustering/testdata/clusters/part-00000");
            SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path, Text.class, Kluster.class);
    
            for (int i = 0; i < k; i++) {
                Vector vec = vectors.get(i);
                Kluster cluster = new Kluster(vec, i, new EuclideanDistanceMeasure());
                writer.append(new Text(cluster.getIdentifier()), cluster);
            }
            writer.close();
    
            KMeansDriver.run(conf,
                    new Path("clustering/testdata/points"),
                    new Path("clustering/testdata/clusters"),
                    new Path("clustering/output"),
                    0.001,
                    10,
                    true,
                    0,
                    true);
    
            SequenceFile.Reader reader = new SequenceFile.Reader(fs,
                    new Path("clustering/output/" + Cluster.CLUSTERED_POINTS_DIR + "/part-m-0"), conf);
    
            IntWritable key = new IntWritable();
            WeightedPropertyVectorWritable value = new WeightedPropertyVectorWritable();
            while (reader.next(key, value)) {
                System.out.println(value.toString() + " belongs to cluster " + key.toString());
            }
            reader.close();
        }
    
    }
    

【Discussion】:

• Wow! Thanks a lot, this works! I had been debugging the 0.7 sample code for hours.

【Solution 3】:

The example in the book works with mahout 0.5 with the following small changes:

(1) Set the paths correctly:

    KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"), new Path("testdata/output"), new EuclideanDistanceMeasure(), 0.001, 10, true, false);

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, new Path("testdata/output/clusteredPoints/part-m-0"), conf);
    

(2) If you do not have HADOOP installed, you need to change the last parameter of the KMeansDriver.run() call from "false" to "true":

    KMeansDriver.run(conf, new Path("testdata/points"), new Path("testdata/clusters"), new Path("testdata/output"), new EuclideanDistanceMeasure(), 0.001, 10, true, true);

Then the example works.

【Discussion】:

【Solution 4】:

Replace the import

    import org.apache.mahout.clustering.WeightedVectorWritable;

with

    import org.apache.mahout.clustering.classify.WeightedVectorWritable;

【Discussion】:
