Trying out the go-chromecast CLI, I was able to see mDNS requests coming to my home network over OpenVPN (an L2 tunnel using a TAP device, bridged using br0 on both ends). I could also see responses being generated, but none of them were seen on the other end. The build of dd-wrt I use has no tcpdump, making it hard to observe both ends of the tunnel.
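The dd-wrt end couldn't be observed, but on the end that does have tcpdump, watching mDNS on both the bridge and the TAP interface narrows down where the responses disappear. A minimal sketch; br0 and tap1 match the interface names used in this post, but yours may differ:

```
# Watch mDNS queries/responses on the bridge
tcpdump -i br0 -n udp port 5353

# Watch the OpenVPN TAP interface to see whether the same packets cross the tunnel
tcpdump -i tap1 -n udp port 5353
```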
There are a lot of sources suggesting a bunch of actions (a sketch for inspecting the relevant bridge knobs follows the list):

- enable ip_forward
- disable /sys/devices/virtual/net/$BRIDGE/bridge/multicast_snooping
- disable ageing on the bridge
- turn off multicast_snooping
- turn on multicast_snooping
- turn {off,on} multicast_{router,querier,...}, maybe even use non-{0,1} values (for instance, 2)
- tweak group_fwd_mask flags (which is meant for other multicast traffic)
- check how many TTL hops multicast packets have left
- figure out IGMP proxying
- check IGMP version in use
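Most of these knobs live in sysfs, so before flipping anything it helps to see what the bridge is currently doing. A minimal sketch, assuming the bridge is br0:

```
# Print every multicast-related bridge setting with its current value
grep . /sys/class/net/br0/bridge/multicast_*

# IP forwarding and the group forwarding mask, for completeness
cat /proc/sys/net/ipv4/ip_forward
cat /sys/class/net/br0/bridge/group_fwd_mask
```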
However, that was all tapping in the dark.
But when someone mentioned ebtables, the nat table and PREROUTING, that put me on the right path: what if one of the chains in one of the ebtables tables was dropping outgoing packets?
# ebtables -t nat -L POSTROUTING
Bridge table: nat
Bridge chain: POSTROUTING, entries: 1, policy: ACCEPT
-o tap1 --pkttype-type multicast -j DROP
Voila. All multicast packets were being dropped on L2.
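If you want to confirm that a rule like this is really the one matching your traffic (my suggestion, not something I needed above), ebtables can list the chain with per-rule counters:

```
# The counters on the DROP rule should grow while mDNS responses are being generated
ebtables -t nat -L POSTROUTING --Lc
```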
# ebtables -t nat -D POSTROUTING -o tap1 --pkttype-type multicast -j DROP
Deleting the rule is fine because I happen to control both ends of the tunnel, and since my systems use multicast only for mDNS, I don't expect any multicast traffic that would actually need to be dropped.
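If you'd rather keep dropping most multicast on the tunnel and only let mDNS through, a narrower rule pair should also work. This is a sketch of an alternative, not what I did; it assumes IPv4 mDNS only (IPv6 mDNS goes to ff02::fb and would need a matching IPv6 rule) and the tap1 interface name:

```
# Hypothetical alternative: accept IPv4 mDNS (UDP to 224.0.0.251, port 5353) going out via tap1...
ebtables -t nat -A POSTROUTING -o tap1 -p IPv4 --ip-protocol udp \
    --ip-destination 224.0.0.251 --ip-destination-port 5353 -j ACCEPT
# ...and keep dropping any other multicast on that interface
ebtables -t nat -A POSTROUTING -o tap1 --pkttype-type multicast -j DROP
```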
On the bridge device (br0 on both ends), I currently have multicast_router set to 2 on both ends; multicast_querier set to 1 on both ends; multicast_igmp_version set to 2 on the non-dd-wrt system; and multicast_snooping set to 1 on both ends. I don't claim correctness of any of these, nor do I claim they're optimal. But getting mDNS traffic through is exactly what I wanted, so I'm happy right now.
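For reference, these settings live in the same sysfs directory as the knobs listed earlier, so applying them is just a handful of writes. A sketch, assuming the bridge is br0 again; I make no claim that these values are right for your network either:

```
BR=br0

# Used on both ends
echo 2 > /sys/class/net/$BR/bridge/multicast_router
echo 1 > /sys/class/net/$BR/bridge/multicast_querier
echo 1 > /sys/class/net/$BR/bridge/multicast_snooping

# Only on the non-dd-wrt end (this file needs a reasonably recent kernel)
echo 2 > /sys/class/net/$BR/bridge/multicast_igmp_version
```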
– via blog.vucica.net