> > To me it sounds like you have (or LuaSocket has) set the socket
> > to a 0-second timeout, but didn't restore the timeout to non-0 before
> > polling.
> You raise an interesting point here. In my case, once a socket is created,
> its timeout is set to 0 seconds for all its lifetime, and never set back to
> non-0. Is that wrong?
Yes. Because when you poll on the socket it will always poll as ready for
reading, regardless of whether there's any data to read. That means every
call to select, poll, or epoll will immediately return; it's not actually
polling.
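To make that concrete, here's a minimal Lua sketch (the peer address and
the loop shape are made up for illustration). Assuming the poll step passes
the socket's 0-second timeout straight through to socket.select, select
returns immediately whether or not anything is readable, so the loop spins
instead of sleeping:

    local socket = require("socket")

    local sock = socket.connect("example.net", 8000)  -- illustrative peer
    sock:settimeout(0)  -- 0-second timeout for the socket's whole lifetime

    while true do
      -- With a 0-second timeout the select step returns at once, data or
      -- no data, so this is a busy loop, not a poll.
      local readable = socket.select({sock}, nil, 0)
      if readable[1] then
        local data, err, partial = sock:receive(1024)
        -- handle data / partial / err == "timeout" here
      end
    end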
> When (that is, at which moment in the life of a socket) is it correct to
> set a 0-timeout?
I've never used LuaSocket, but AFAIU older versions (pre-2.0?) didn't
support O_NONBLOCK. The workaround was to set SO_RCVTIMEO and SO_SNDTIMEO to
0 before attempting the read or write, respectively. But you were supposed
to reset those values to non-0 before polling.
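In LuaSocket terms the pattern would have looked roughly like the sketch
below. Note I'm going from the description above, not from the old sources:
settimeout standing in for SO_RCVTIMEO/SO_SNDTIMEO, the helper name, and
the 10-second value are all just illustrative.

    local socket = require("socket")

    local POLL_TIMEOUT = 10  -- illustrative non-0 timeout, in seconds

    local sock = socket.connect("example.net", 8000)  -- illustrative peer
    sock:settimeout(POLL_TIMEOUT)

    -- Hypothetical helper: 0-timeout only around the read itself,
    -- restored to non-0 before the socket goes back into the poll set.
    local function nonblocking_receive(s, pattern)
      s:settimeout(0)
      local data, err, partial = s:receive(pattern)
      s:settimeout(POLL_TIMEOUT)
      return data, err, partial
    end

    -- The poll loop does the real waiting, with a non-0 timeout:
    local readable = socket.select({sock}, nil, POLL_TIMEOUT)
    if readable[1] then
      local data, err, partial = nonblocking_receive(readable[1], 1024)
    end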
Ultimately it was the Linux kernel and the network that had trouble keeping
up with the amount of data, largely because of so many small writes. It's a
real-time streaming server which tries its best to minimize latency (mostly
for demo purposes, so it sounds more responsive when comparing the
over-the-air broadcast with the transcoded one), so except for the initial
connection it doesn't buffer more than a single compressed frame of data
before writing it out to the socket. That's roughly 1 packet per second per
socket, depending on the codec. At scale that's very taxing on the kernel
TCP stack and the network because of all the ACKs.
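For what it's worth, the write path is roughly the sketch below.
get_next_frame and the peer are hypothetical stand-ins, not the actual
server code, and the tcp-nodelay call is my assumption about what a
latency-minded server would set, not something stated above. Either way,
writing one small frame at a time means one small segment, and one ACK,
per frame on the wire:

    local socket = require("socket")

    local client = socket.connect("example.net", 8000)  -- illustrative peer
    client:setoption("tcp-nodelay", true)  -- assumption: Nagle off for latency

    while true do
      -- Hypothetical frame source; at most one compressed frame is held
      -- before it goes out, roughly 1 frame per second depending on codec.
      local frame = get_next_frame()
      client:send(frame)  -- one small segment on the wire, one ACK back
    end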