[Posted]: 2023-09-15 04:01:01
[Problem Description]:
I have been struggling with this problem for about three months now. The crawler seems to fetch a batch of pages every 10 minutes and do nothing in between, so overall throughput is very slow. I am crawling 300 domains in parallel; with a crawl delay of 10 seconds, that should yield about 30 pages/second, but currently it is about 2 pages/second.
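The arithmetic behind that expected rate, using only the numbers above:

```python
# Expected vs. observed throughput, using the numbers from the question.
domains = 300                 # crawled in parallel
crawl_delay_s = 10.0          # per-domain politeness delay (fetcher.server.delay)

expected_pages_per_s = domains / crawl_delay_s   # one page per domain per delay window
observed_pages_per_s = 2.0

print(expected_pages_per_s)                          # 30.0
print(expected_pages_per_s / observed_pages_per_s)   # the crawl runs ~15x slower than expected
```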
The topology runs on a PC with:
- 8 GB RAM
- a regular HDD
- a Core Duo CPU
- Ubuntu 16.04
Elasticsearch is installed on another machine with the same specs.
You can see the metrics from the Grafana dashboard here.
They are also reflected in the process latency shown in the Storm UI.
My current StormCrawler topology is:
spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.AggregationSpout"
    parallelism: 25

bolts:
  - id: "partitioner"
    className: "com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt"
    parallelism: 1
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 6
  - id: "sitemap"
    className: "com.digitalpebble.stormcrawler.bolt.SiteMapParserBolt"
    parallelism: 1
  - id: "parse"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 1
  - id: "index"
    className: "de.hpi.bpStormcrawler.BPIndexerBolt"
    parallelism: 1
  - id: "status"
    className: "com.digitalpebble.stormcrawler.elasticsearch.persistence.StatusUpdaterBolt"
    parallelism: 4
  - id: "status_metrics"
    className: "com.digitalpebble.stormcrawler.elasticsearch.metrics.StatusMetricsBolt"
    parallelism: 1
together with this configuration (only the most relevant part):

config:
  topology.workers: 1
  topology.message.timeout.secs: 300
  topology.max.spout.pending: 100
  topology.debug: false
  fetcher.threads.number: 50
  worker.heap.memory.mb: 2049
  partition.url.mode: byDomain
  fetcher.server.delay: 10.0
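One thing worth quantifying from this config (my own back-of-the-envelope reasoning, not something I have confirmed): with 25 spout instances and topology.max.spout.pending of 100, at most 2,500 tuples can be in flight at once, and with the 300-second message timeout, tuples that stall hold those slots for up to 5 minutes:

```python
# Upper bound on in-flight tuples implied by the config above (an assumption sketch,
# not measured behaviour): max.spout.pending applies per spout task.
spout_parallelism = 25
max_spout_pending = 100        # topology.max.spout.pending, per spout task
message_timeout_s = 300        # topology.message.timeout.secs

in_flight_limit = spout_parallelism * max_spout_pending
print(in_flight_limit)         # 2500 tuples can be pending at once

# If many tuples only free their slots by timing out, throughput degrades toward:
worst_case_rate = in_flight_limit / message_timeout_s
print(round(worst_case_rate, 1))   # ~8.3 tuples/s
```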
And here is the Storm configuration (again, only the relevant part):
nimbus.childopts: "-Xmx1024m -Djava.net.preferIPv4Stack=true"
ui.childopts: "-Xmx768m -Djava.net.preferIPv4Stack=true"
supervisor.childopts: "-Djava.net.preferIPv4Stack=true"
worker.childopts: "-Xmx1500m -Djava.net.preferIPv4Stack=true"
Do you have any idea what the problem could be? Or is it just the hardware?
What I have already tried:
- Increasing fetcher.server.delay to higher and lower values, which changed nothing
- Decreasing and increasing the number of fetcher threads
- Playing around with the parallelism settings
- Checking whether network bandwidth is the bottleneck: at 400 Mbit/s and an average page size of 0.5 MB, 30 pages/s would need 15 MB/s, i.e. 120 Mbit/s, so that should not be the problem either
- Increasing the number of workers
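The bandwidth point above can be checked with the same numbers:

```python
# Sanity check: is the 400 Mbit/s link the bottleneck? (numbers from the question)
target_pages_per_s = 30
avg_page_mb = 0.5
link_mbit_per_s = 400

needed_mb_per_s = target_pages_per_s * avg_page_mb     # 15 MB/s
needed_mbit_per_s = needed_mb_per_s * 8                # MB/s -> Mbit/s

print(needed_mbit_per_s)                    # 120.0 Mbit/s, well below the link capacity
assert needed_mbit_per_s < link_mbit_per_s
```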
Do you have any other ideas what I should check, or anything that could explain the slow fetching? Maybe it really is just slow hardware, or is the bottleneck Elasticsearch?
Thank you very much in advance.
Edit:
I changed the topology to two workers and now frequently get this error:
2018-07-03 17:18:46.326 c.d.s.e.p.AggregationSpout Thread-33-spout-executor[26 26] [INFO] [spout #12] Populating buffer with nextFetchDate <= 2018-06-21T17:52:42+02:00
2018-07-03 17:18:46.327 c.d.s.e.p.AggregationSpout I/O dispatcher 26 [ERROR] Exception with ES query
java.io.IOException: Unable to parse response body for Response{requestLine=POST /status/status/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&preference=_shards%3A12&search_type=query_then_fetch&batched_reduce_size=512 HTTP/1.1, host=http://ts5565.byod.hpi.de:9200, response=HTTP/1.1 200 OK}
at org.elasticsearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:548) [stormjar.jar:?]
at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:600) [stormjar.jar:?]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:355) [stormjar.jar:?]
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:346) [stormjar.jar:?]
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119) [stormjar.jar:?]
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177) [stormjar.jar:?]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436) [stormjar.jar:?]
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326) [stormjar.jar:?]
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) [stormjar.jar:?]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) [stormjar.jar:?]
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104) [stormjar.jar:?]
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588) [stormjar.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: java.lang.NullPointerException
The crawling process now seems more balanced, but it still does not fetch many links.
Also, after the topology has been running for a few weeks, the latency has gone up a lot.
[Discussion]:
Tags: elasticsearch web-crawler apache-storm stormcrawler