[db-wg] NRTM replication inefficiencies
Edward Shryane
eshryane at ripe.net
Fri Dec 8 15:19:24 CET 2017
Hi Agoston,

> On 8 Dec 2017, at 15:05, Edward Shryane via db-wg <db-wg at ripe.net> wrote:
>
> Hi Agoston,
>
>> On 8 Dec 2017, at 13:43, Horváth Ágoston János via db-wg <db-wg at ripe.net> wrote:
>>
>> Or you could use TCP's built-in keepalive feature:
>>
>> http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html
>>
>
> Yes, this is already possible, you can set the SO_KEEPALIVE option on the socket.
>
> However, at least on CentOS7 (Linux), the default is to wait 2 hours before sending a keepalive probe, then 9 probes have to fail (each 75s apart) before declaring the connection broken. Changing this default behaviour applies system-wide.
>
> Adding a protocol-specific keepalive mechanism may still be useful.
>
> Regards
> Ed
>

To clarify my earlier reply - these system-wide defaults can also be overridden on a per-socket basis. A client could use the TCP keepalive mechanism, or we could add a periodic keepalive comment on the server side.

Regards
Ed
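[For illustration, a minimal sketch of the per-socket override discussed above, assuming a Linux client holding an already connected TCP socket descriptor fd. The function name enable_keepalive and the 60/10/5 timing values are placeholders chosen for this example, not values proposed on the list.]

    /* Sketch: enable TCP keepalive on one socket and override the
     * system-wide defaults (2 hours idle, 75s interval, 9 probes on
     * CentOS 7) for this socket only. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd)
    {
        int on = 1;        /* turn on SO_KEEPALIVE for this socket        */
        int idle = 60;     /* seconds of idleness before the first probe  */
        int interval = 10; /* seconds between unanswered probes           */
        int count = 5;     /* failed probes before the peer is considered dead */

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
            return -1;
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0)
            return -1;
        return 0;
    }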
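[The server-side alternative mentioned in the reply could look roughly like the sketch below: when the NRTM stream has been idle for a while, emit a comment line so the client and any middleboxes see traffic. The "% keepalive" text, the 60-second threshold, and the helper maybe_send_keepalive are assumptions for this example, not part of any agreed protocol change.]

    /* Sketch: send a keepalive comment if nothing has been written recently. */
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    void maybe_send_keepalive(int fd, time_t *last_write)
    {
        const char comment[] = "% keepalive\n"; /* whois/NRTM comment lines start with '%' */
        time_t now = time(NULL);

        if (now - *last_write >= 60) {          /* 60s of idleness: illustrative value */
            if (write(fd, comment, strlen(comment)) > 0)
                *last_write = now;
        }
    }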