Posted: 2021-11-05 11:03:20
Question:
I am trying to increase the receive buffer size of a UDP socket, but the final size seems unpredictable:
LOG_INFO("UDP echo server default receive buffer size : " << rcv_buf << " bytes");

// increase default buffer sizes
rcv_buf *= 3;
LOG_INFO("trying to increase receive buffer size to : " << rcv_buf << " bytes");
if (!SockWrap::set_recv_buf_size(m_handle, sizeof(m_sockaddr_in), rcv_buf))
    LOG_ERR("unable to set new receive buffer size");

// checking the new size after possible modifications if any
rcv_buf = SockWrap::recv_buf(m_handle, sizeof(m_sockaddr_in));
if (rcv_buf == -1) {
    LOG_ERR("unable to read UDP echo server receive buffer size after modification");
} else {
    LOG_INFO("UDP echo server new receive buffer size : " << rcv_buf << " bytes");
}
The wrapper functions are:
bool SockWrap::set_recv_buf_size(int fd, socklen_t len, int size)
{
    // SO_RCVBUF option is an integer
    int n = setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, len);
    if (n == -1) {
        LOG_ERR("setsockopt : " << strerror(errno));
        return false;
    }
    return true;
}
and
int SockWrap::recv_buf(int fd, socklen_t len)
{
    // SO_RCVBUF option is an integer
    int optval;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &optval, &len) == -1) {
        LOG_ERR("getsockopt : " << strerror(errno));
        return -1;
    } else
        return optval;
}
Output:
UDP echo server default receive buffer size : 212992 bytes
trying to increase receive buffer size to : 638976 bytes
UDP echo server new receive buffer size : 425984 bytes
I have checked my system's limits in /proc/sys/net/ipv4:
cat udp_rmem_min
4096
cat udp_mem
186162 248216 372324
And in /proc/sys/net/core:
cat rmem_max
212992
cat rmem_default
212992
So the first line of output is clear: the default receive buffer is 212992 bytes, as defined by rmem_default.
But then the size did increase, and it is much larger than rmem_max, yet still not what I asked for.
Where does this final value (425984 bytes) come from?
Is it the maximum? Does it depend on how much memory the kernel is currently using?
EDIT:
Based on the answer, I tested other values, and it seems rmem_default can even be set larger than rmem_max:
echo 500000 > /proc/sys/net/core/rmem_default
cat /proc/sys/net/core/rmem_default
500000
Now, before calling setsockopt, getsockopt returns (as always) rmem_default, i.e. 500000, not rmem_default * 2.
But if I set the value to 500000 with setsockopt, getsockopt then returns rmem_max * 2, i.e. 425984.
So it looks like the /proc interface gives finer control over the buffer size than setsockopt does.
If rmem_default can be larger, what is the purpose of rmem_max?
/* from kernel 5.10.63 net/core/sock.c */
case SO_RCVBUF:
    /* Don't error on this BSD doesn't and if you think
     * about it this is right. Otherwise apps have to
     * play 'guess the biggest size' games. RCVBUF/SNDBUF
     * are treated in BSD as hints
     */
    __sock_set_rcvbuf(sk, min_t(u32, val, sysctl_rmem_max));
    break;
and
static void __sock_set_rcvbuf(struct sock *sk, int val)
{
    /* Ensure val * 2 fits into an int, to prevent max_t() from treating it
     * as a negative value.
     */
    val = min_t(int, val, INT_MAX / 2);
    sk->sk_userlocks |= SOCK_RCVBUF_LOCK;

    /* We double it on the way in to account for "struct sk_buff" etc.
     * overhead. Applications assume that the SO_RCVBUF setting they make
     * will allow that much actual data to be received on that socket.
     *
     * Applications are unaware that "struct sk_buff" and other overheads
     * allocate from the receive buffer during socket buffer allocation.
     *
     * And after considering the possible alternatives, returning the value
     * we actually used in getsockopt is the most desirable behavior.
     */
    WRITE_ONCE(sk->sk_rcvbuf, max_t(int, val * 2, SOCK_MIN_RCVBUF));
}
But maybe this edit should be a separate (related) question.
Thanks.
Comments:
- This is not a coincidence: 425984 = 212992 * 2
Tags: c++ linux sockets networking udp