[Title]: how to configure the elasticsearch.yml for repository-hdfs plugin of elasticsearch
[Posted]: 2016-05-06 16:57:54
[Description]:

Elasticsearch 2.3.2

repository-hdfs 2.3.1

I configured the elasticsearch.yml file as shown in the elastic official docs:

repositories
   hdfs:
      uri: "hdfs://<host>:<port>/"    # optional - Hadoop file-system URI
      path: "some/path"               # required - path with the file-system where data is stored/loaded
      load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
      conf_location: "extra-cfg.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
      conf.<key> : "<value>"          # optional - 'inlined' key=value added to the Hadoop configuration
      concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
      compress: "false"               # optional - whether to compress the metadata or not (default)
      chunk_size: "10mb"              # optional - chunk size (disabled by default)

But it throws an exception saying the format is incorrect.

Error message:

Exception in thread "main" SettingsException
 [Failed to load settings from [elasticsearch.yml]]; nested: ScannerException[while scanning a simple key'

 in 'reader', line 99, column 2:
     repositories
     ^
 could not find expected ':'
 in 'reader', line 100, column 10:
     hdfs:
         ^];   
Likely root cause: while scanning a simple key
in 'reader', line 99, column 2:
 repositories
 ^
could not find expected ':'
in 'reader', line 100, column 10:
     hdfs:

I edited it to:

   repositories:
       hdfs:
         uri: "hdfs://191.168.4.220:9600/"

But it doesn't work.

I want to know what the correct format is.
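For what it's worth, the ScannerException in the stack trace above is a plain YAML syntax problem: the snippet pasted from the docs is missing the colon after `repositories`, and the comment that wraps onto its own line breaks parsing. A minimal well-formed sketch of the nested form (host, port, and path are placeholders from the docs, not tested values):

```yaml
# Nested YAML form; equivalent to the flattened "repositories.hdfs." spelling.
repositories:
  hdfs:
    uri: "hdfs://<host>:<port>/"   # Hadoop file-system URI
    path: "some/path"              # where snapshot data is stored/loaded
```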

I found the AWS configuration for elasticsearch.yml:

cloud:
    aws:
        access_key: AKVAIQBF2RECL7FJWGJQ
        secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br

repositories:
    s3:
        bucket: "bucket_name"
        region: "us-west-2"
        private-bucket:
            bucket: <bucket not accessible by default key>
            access_key: <access key>
            secret_key: <secret key>
        remote-bucket:
            bucket: <bucket in other region>
            region: <region>
    external-bucket:
        bucket: <bucket>
        access_key: <access key>
        secret_key: <secret key>
        endpoint: <endpoint>
        protocol: <protocol>

I imitated it, but it still doesn't work.

[Discussion]:

    Tags: elasticsearch elasticsearch-plugin


    [Solution 1]:

    I tried to install repository-hdfs 2.3.1 into Elasticsearch 2.3.2, but it failed:

    ERROR: Plugin [repository-hdfs] is incompatible with Elasticsearch [2.3.2]. Was designed for version [2.3.1]
    

    The plugin can only be installed into Elasticsearch 2.3.1.

    You should specify the uri, path, and conf_location options, and possibly remove the conf.<key> option. Take the following configuration as an example.

    security.manager.enabled: false
    repositories.hdfs:
        uri: "hdfs://master:9000"       # optional - Hadoop file-system URI
        path: "/aaa/bbb"                # required - path with the file-system where data is stored/loaded
        load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
        conf_location: "/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/core-site.xml,/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/hdfs-site.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
        concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
        compress: "false"               # optional - whether to compress the metadata or not (default)
        chunk_size: "10mb"              # optional - chunk size (disabled by default)
    

    I started ES successfully:

    [----@----------- elasticsearch-2.3.1]$ bin/elasticsearch
    [2016-05-06 04:40:58,173][INFO ][node                     ] [Protector]     version[2.3.1], pid[17641], build[bd98092/2016-04-04T12:25:05Z]
    [2016-05-06 04:40:58,174][INFO ][node                     ] [Protector]     initializing ...
    [2016-05-06 04:40:58,830][INFO ][plugins                  ] [Protector] modules [reindex, lang-expression, lang-groovy], plugins [repository-hdfs], sites []
    [2016-05-06 04:40:58,863][INFO ][env                      ] [Protector] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8gb], net total_space [9.9gb], spins? [unknown], types [rootfs]
    [2016-05-06 04:40:58,863][INFO ][env                      ] [Protector] heap size [1007.3mb], compressed ordinary object pointers [true]
    [2016-05-06 04:40:58,863][WARN ][env                      ] [Protector] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
    [2016-05-06 04:40:59,192][INFO ][plugin.hadoop.hdfs       ] Loaded Hadoop     [1.2.1] libraries from file:/home/ec2-user/app/elasticsearch-2.3.1/plugins/repository-hdfs/
    [2016-05-06 04:41:01,598][INFO ][node                     ] [Protector] initialized
    [2016-05-06 04:41:01,598][INFO ][node                     ] [Protector] starting ...
    [2016-05-06 04:41:01,823][INFO ][transport                ] [Protector] publish_address {xxxxxxxxx:9300}, bound_addresses {xxxxxxx:9300}
    [2016-05-06 04:41:01,830][INFO ][discovery                ] [Protector] hdfs/9H8wli0oR3-Zp-M9ZFhNUQ
    [2016-05-06 04:41:04,886][INFO ][cluster.service          ] [Protector] new_master {Protector}{9H8wli0oR3-Zp-M9ZFhNUQ}{xxxxxxx}{xxxxx:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
    [2016-05-06 04:41:04,908][INFO ][http                     ] [Protector] publish_address {xxxxxxxxx:9200}, bound_addresses {xxxxxxx:9200}
    [2016-05-06 04:41:04,908][INFO ][node                     ] [Protector] started
    [2016-05-06 04:41:05,415][INFO ][gateway                  ] [Protector] recovered [1] indices into cluster_state
    [2016-05-06 04:41:06,097][INFO ][cluster.routing.allocation] [Protector] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[website][0], [website][0]] ...]).
    

    However, when I tried to create a snapshot:

    PUT /_snapshot/my_backup
    {
      "type": "hdfs",
      "settings": {
            "path":"/aaa/bbb/"
      }
    }
    

    I got the following error:

    Caused by: java.io.IOException: Mkdirs failed to create file:/aaa/bbb/tests-zTkKRtoZTLu3m3RLascc1w
    
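    The `file:` prefix in the failed path suggests that the repository fell back to the local filesystem instead of HDFS, i.e. the HDFS URI was not applied when the repository was registered. One variant worth trying (assuming the same `hdfs://master:9000` URI as in the config above; whether the 2.3.1 plugin honors `uri` in repository settings as well as in elasticsearch.yml is an assumption here) is to pass the URI explicitly:

    ```
    PUT /_snapshot/my_backup
    {
      "type": "hdfs",
      "settings": {
            "uri": "hdfs://master:9000",
            "path": "/aaa/bbb"
      }
    }
    ```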

    [Comments]:

    • Yes, I did as your answer says, but then I got a "Server IPC version 9 cannot communicate with client version 4" error.
    • That error means the Hadoop client and cluster versions don't match: the repository-hdfs plugin bundles a Hadoop 1.x client by default, while a cluster answering with IPC version 9 is Hadoop 2.x. You can still try it: replace hadoop-core-1.2.1.jar and all of its third-party dependency jar files in the plugins/repository-hdfs/hadoop-libs/ directory.
    • I just tried building Elasticsearch 5.0.0, but I still don't know how to configure it.