I've been working through an issue with Connman and NETLINK_ROUTE messages as it
relates to policy routing. Background can be seen here:
I'm using kernel 4.9.27 from the AOSP releases and have a couple of questions about how
NETLINK_ROUTE is intended to work and whether we are seeing a kernel bug.
Connman has a long-running NETLINK_ROUTE socket, which it binds with:
memset(&addr, 0, sizeof(addr));
addr.nl_family = AF_NETLINK;
addr.nl_groups = RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE |
                 RTMGRP_IPV6_IFADDR | RTMGRP_IPV6_ROUTE | ...;
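
For completeness, here is a minimal sketch of how such a monitor socket can be opened and
bound (this is not Connman's actual rtnl code; error handling is trimmed and only the
groups shown above are subscribed):

#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Open a long-running NETLINK_ROUTE socket subscribed to the
 * link/address/route multicast groups listed above. */
static int open_rtnl_monitor(void)
{
	struct sockaddr_nl addr;
	int sk;

	sk = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE);
	if (sk < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.nl_family = AF_NETLINK;
	addr.nl_groups = RTMGRP_LINK | RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE |
			 RTMGRP_IPV6_IFADDR | RTMGRP_IPV6_ROUTE;

	if (bind(sk, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(sk);
		return -1;
	}

	return sk;
}
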
Connman also creates other short-lived NETLINK_ROUTE sockets to perform setters; in
particular, we send RTM_NEWROUTE and RTM_DELROUTE requests against a custom routing table.
Connman uses policy routing to create a session-based routing table. When a new interface
comes online and has a gateway, Connman adds a default route to a new custom routing table.
When this happens, we get an RTM_NEWROUTE message for the main table (254), but we never
receive any RTM_NEWROUTE/RTM_DELROUTE messages for our custom table. Should NETLINK_ROUTE
sockets bound to RTMGRP_IPV4_ROUTE be receiving updates for custom tables, or only for the
main table (254)?
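
To make the scenario concrete, the request the short-lived setter socket sends looks
roughly like the sketch below. The table ID (200), the attribute helper, and the function
names are placeholders rather than Connman's actual code, and a table ID above 255 would
need an RTA_TABLE attribute since rtm_table is only 8 bits wide:

#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

struct route_req {
	struct nlmsghdr nlh;
	struct rtmsg rtm;
	char buf[256];
};

/* Append a route attribute to the request and bump nlmsg_len. */
static void add_attr(struct nlmsghdr *nlh, int type, const void *data, int len)
{
	struct rtattr *rta = (struct rtattr *)((char *)nlh + NLMSG_ALIGN(nlh->nlmsg_len));

	rta->rta_type = type;
	rta->rta_len = RTA_LENGTH(len);
	memcpy(RTA_DATA(rta), data, len);
	nlh->nlmsg_len = NLMSG_ALIGN(nlh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}

/* Add an IPv4 default route via 'gw' on 'ifindex' to the given table. */
static int add_default_route(int sk, int ifindex, struct in_addr gw,
			     unsigned char table)
{
	struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
	struct route_req req;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct rtmsg));
	req.nlh.nlmsg_type = RTM_NEWROUTE;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL | NLM_F_ACK;

	req.rtm.rtm_family = AF_INET;
	req.rtm.rtm_table = table;		/* e.g. 200, the custom table */
	req.rtm.rtm_protocol = RTPROT_BOOT;
	req.rtm.rtm_scope = RT_SCOPE_UNIVERSE;
	req.rtm.rtm_type = RTN_UNICAST;
	/* rtm_dst_len == 0, i.e. 0.0.0.0/0, makes this the default route */

	add_attr(&req.nlh, RTA_GATEWAY, &gw, sizeof(gw));
	add_attr(&req.nlh, RTA_OIF, &ifindex, sizeof(ifindex));

	return sendto(sk, &req, req.nlh.nlmsg_len, 0,
		      (struct sockaddr *)&kernel, sizeof(kernel));
}
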
The other behavior I ran into: originally my kernel did not have CONFIG_IP_MULTIPLE_TABLES
enabled, and when Connman sent RTM_NEWROUTE/RTM_DELROUTE for the custom table, we got
NETLINK_ROUTE messages showing these routes being applied to the main table (254). This
was corrected by enabling CONFIG_IP_MULTIPLE_TABLES in the kernel, but I am still curious
whether falling back to table 254 in that configuration is intended behavior rather than
the setter returning an error.
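
For what it is worth, this is roughly how we check which table a notification refers to
when it arrives on the monitor socket (again just a sketch, and note that rtm_table is a
u8, so a table ID above 255 would have to be read from an RTA_TABLE attribute instead):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Print the table ID carried in each RTM_NEWROUTE/RTM_DELROUTE
 * notification received on the monitor socket 'sk'. */
static void dump_route_tables(int sk)
{
	char buf[8192];
	ssize_t n;

	while ((n = recv(sk, buf, sizeof(buf), 0)) > 0) {
		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
		int len = n;

		for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
			struct rtmsg *rtm;

			if (nlh->nlmsg_type != RTM_NEWROUTE &&
			    nlh->nlmsg_type != RTM_DELROUTE)
				continue;

			rtm = NLMSG_DATA(nlh);
			printf("%s: table %u\n",
			       nlh->nlmsg_type == RTM_NEWROUTE ?
			       "RTM_NEWROUTE" : "RTM_DELROUTE",
			       rtm->rtm_table);
		}
	}
}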