【Posted】: 2012-03-31 22:34:52
【Question】:
I am currently building a practice application with node.js. The application exposes a JSON REST web service that supports two operations:
- Insert a log entry (a PUT request to /log, with the message to record)
- Fetch the last 100 logs (a GET request to /log, returning the 100 most recent log entries)
The current stack consists of a node.js server holding the application logic and a mongodb database responsible for persistence. To expose the JSON REST web service I use the node-restify module.
I am currently running some stress tests with apache bench (5000 requests at a concurrency of 10) and getting the following results:
Execute stress tests
1) Insert log
Requests per second: 754.80 [#/sec] (mean)
2) Last 100 logs
Requests per second: 110.37 [#/sec] (mean)
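For reference, runs like these can be reproduced with apache bench invocations along the following lines (the PUT body file name is an assumption; ab's -u flag sends the given file with a PUT request):

```shell
# Insert log: 5000 PUT requests at concurrency 10; message.json is a
# hypothetical file containing the log message to record.
ab -n 5000 -c 10 -u message.json http://localhost:3010/log

# Last 100 logs: 5000 GET requests at concurrency 10.
ab -n 5000 -c 10 http://localhost:3010/log
```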
I am surprised by the difference in performance, since the query I run uses an index. Curiously, in the deeper tests I have run, the JSON output generation seems to be where most of the time goes.
Is it possible to profile the node application in detail?
Is this behavior normal? Should retrieving data be that much slower than inserting it?
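One way to gauge how much of the per-request time could plausibly go to JSON output generation is a tiny standalone measurement, entirely outside the app (the document shape mirrors the Log schema below; the message text is invented):

```javascript
// Build 100 documents shaped like the Log schema (date + message);
// the message text is made up for the measurement.
const docs = [];
for (let i = 0; i < 100; i++) {
  docs.push({ date: new Date(), message: 'sample log message number ' + i });
}

// Time a single serialization of the 100-document response body.
const start = process.hrtime.bigint();
const body = JSON.stringify(docs);
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log('serialized ' + body.length + ' bytes in ' + elapsedMs + ' ms');
```

Serializing 100 small plain objects is typically very cheap, so if a profiler shows the "JSON generation" phase dominating, the cost may lie in producing the objects (e.g. hydrating full mongoose documents) rather than in JSON.stringify itself.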
Edit:
Full test information
1) Insert log
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 0 bytes
Concurrency Level: 10
Time taken for tests: 6.502 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 2240634 bytes
Total PUT: 935000
HTML transferred: 0 bytes
Requests per second: 768.99 [#/sec] (mean)
Time per request: 13.004 [ms] (mean)
Time per request: 1.300 [ms] (mean, across all concurrent requests)
Transfer rate: 336.53 [Kbytes/sec] received
140.43 kb/s sent
476.96 kb/s total
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 6 13 3.9 12 39
Waiting: 6 12 3.9 11 39
Total: 6 13 3.9 12 39
Percentage of the requests served within a certain time (ms)
50% 12
66% 12
75% 12
80% 13
90% 15
95% 24
98% 26
99% 30
100% 39 (longest request)
2) Last 100 logs
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Server Software: log-server
Server Hostname: localhost
Server Port: 3010
Document Path: /log
Document Length: 4601 bytes
Concurrency Level: 10
Time taken for tests: 46.528 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 25620233 bytes
HTML transferred: 23005000 bytes
Requests per second: 107.46 [#/sec] (mean)
Time per request: 93.057 [ms] (mean)
Time per request: 9.306 [ms] (mean, across all concurrent requests)
Transfer rate: 537.73 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 28 93 16.4 92 166
Waiting: 26 85 18.0 86 161
Total: 29 93 16.4 92 166
Percentage of the requests served within a certain time (ms)
50% 92
66% 97
75% 101
80% 104
90% 113
95% 121
98% 131
99% 137
100% 166 (longest request)
Retrieving the data from the database
To query the database I use the mongoosejs module. The log schema is defined as:
{
  date: { type: Date, 'default': Date.now, index: true },
  message: String
}
The query I run is the following:
Log.find({}, ['message']).sort('date', -1).limit(100)
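For readers unfamiliar with that query chain, its semantics — sort by date descending, keep at most 100 documents, project the message field — can be sketched in plain JavaScript over an in-memory array (the sample data here is made up):

```javascript
// Hypothetical in-memory stand-in for the logs collection.
const logs = [
  { date: new Date('2012-03-31T10:00:00Z'), message: 'first' },
  { date: new Date('2012-03-31T11:00:00Z'), message: 'second' },
  { date: new Date('2012-03-31T12:00:00Z'), message: 'third' },
];

// Equivalent of Log.find({}, ['message']).sort('date', -1).limit(100):
// sort by date descending, keep at most 100, project the message field.
const last100 = logs
  .slice()
  .sort((a, b) => b.date - a.date)
  .slice(0, 100)
  .map((doc) => ({ message: doc.message }));

console.log(last100.map((d) => d.message)); // → [ 'third', 'second', 'first' ]
```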
【Discussion】:
-
How much data does each log contain? The difference in performance may simply come from the amount of data the GET returns. Also, does the retrieval use the index on the log date?
-
@beny23 I have added the full logs from apache bench so you can see how much data I am sending. The index is indeed on the log date.
-
Personally I think this looks normal: after all, you insert 1 object but retrieve 100 objects each time, so even though reading is faster than writing, you are reading far more than you are writing. One thing does puzzle me, though: the document length of the GET is only 301 bytes. Does that mean each log entry is only about 3 bytes?
-
@beny23 You are right: the log message was not being sent correctly and the logs were being inserted empty, so each log entry was just "{}" ^^. I have fixed the bug, rerun the tests and updated the results.