We have a very specific requirement that I'd like to solve with Open vSwitch. It mostly works already - can you tell me what I'm missing here?
Requirement: a Docker container attached to a macvlan interface exposes a service on a specific port (it needs to be available on the local network). We need to offer the service on a different port, and the port the service runs on cannot be configured. We have already tried different approaches (reverse proxy, port directives, etc.). These do not work for various reasons, mainly because we have to stick with the macvlan interface's IP.
This basic setup is fairly fixed; my main goal is to get it working that way, and I think it should be possible.
Environment: Arch Linux with kernel core/linux 5.10.9 and packages community/openvswitch 2.14.1-1, community/docker 1:20.10.2-4
Approach: we create the macvlan interface on an OVS bridge and want to use OpenFlow rules to rewrite the port.
# ovs-vsctl show
... output omitted
Bridge br1
Port br1.200
tag: 200
Interface br1.200 <<< our container is connected here
type: internal
Port br1
Interface br1
type: internal
Port patch-br0
Interface patch-br0 <<< uplink to OVS bridge with physical interface
type: patch
options: {peer=patch-br1}
Using nginx for the demo; any container should do...
# docker network create -d macvlan --subnet=172.16.0.0/20 --ip-range=172.16.13.0/29 --gateway=172.16.0.1 -o parent=br1.200 mv.200
# docker run -d --name web --network mv.200 nginx
So far so good: curl http://172.16.13.0 (the container web in this example) returns the "Welcome to nginx!" default page. Now we try the following OpenFlow configuration to make the container's service reachable on port 9080.
# ovs-ofctl dump-flows br1
cookie=0x0, duration=1647.225s, table=0, n_packets=16, n_bytes=1435, priority=50,ct_state=-trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(table=0)
cookie=0x0, duration=1647.223s, table=0, n_packets=3, n_bytes=234, priority=50,ct_state=+new+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(commit,nat(dst=172.16.13.0:80)),NORMAL
cookie=0x0, duration=1647.221s, table=0, n_packets=11, n_bytes=956, priority=50,ct_state=+est+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(nat),NORMAL
cookie=0x0, duration=1647.219s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=-trk,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(table=0)
cookie=0x0, duration=1647.217s, table=0, n_packets=12, n_bytes=2514, priority=50,ct_state=+trk,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(nat),NORMAL
cookie=0x0, duration=84061.461s, table=0, n_packets=309364, n_bytes=36251324, priority=0 actions=NORMAL
Now curl http://172.16.13.0:9080 only works when there is already an active flow; the first attempt breaks (tcpdump -i br1.200 on the server):
Client > Server : 172.16.1.51:46056 > 172.16.13.0:80 SYN
Server > Client : 172.16.13.0:80 > 172.16.1.51:46056 SYN ACK
Client > Server : 172.16.1.51:46056 > 172.16.13.0:9080 ACK (destination port not translated)
Server > Client : 172.16.13.0:9080 > 172.16.1.51:46056 RST (unknown to server)
Server > Client : 172.16.13.0:80 > 172.16.1.51:46056 SYN ACK
Client > Server : 172.16.1.51:46056 > 172.16.13.0:80 RST (already ACK'ed)
Client > Server : 172.16.1.51:46058 > 172.16.13.0:80 SYN (second curl)
Server > Client : 172.16.13.0:80 > 172.16.1.51:46058 SYN ACK
Client > Server : 172.16.1.51:46058 > 172.16.13.0:80 ACK (now with correct port 80)
... (normal TCP connection from here)
Packet #3 should have been matched by flow #3; apparently it does not work the way I thought.
# ovs-appctl dpctl/dump-conntrack | grep 172.16.13.0
tcp,orig=(src=172.16.1.51,dst=172.16.13.0,sport=46056,dport=9080),reply=(src=172.16.13.0,dst=172.16.1.51,sport=80,dport=46056),protoinfo=(state=CLOSING)
tcp,orig=(src=172.16.1.51,dst=172.16.13.0,sport=46058,dport=9080),reply=(src=172.16.13.0,dst=172.16.1.51,sport=80,dport=46058),protoinfo=(state=TIME_WAIT)
Can you help me understand why the ct(nat) action of the +est+trk flow does not apply to the first connection (but does to the second one)?
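For reference, the variant-1 flow table above can be reproduced with ovs-ofctl add-flow commands along these lines (a sketch reconstructed from the dump; bridge name, addresses, and priorities are taken from the output above):

```shell
# Variant 1: DNAT 9080 -> 80 purely via conntrack (reconstructed from the dump)
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=-trk,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=ct(table=0)"
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=+new+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=ct(commit,nat(dst=172.16.13.0:80)),NORMAL"
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=+est+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=ct(nat),NORMAL"
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=-trk,tcp,nw_src=172.16.13.0,tp_src=80,actions=ct(table=0)"
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=+trk,tcp,nw_src=172.16.13.0,tp_src=80,actions=ct(nat),NORMAL"
```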
Variant 2 (mod_tp_dst added to flow #2):
# ovs-ofctl dump-flows br1
cookie=0x0, duration=6182.935s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=-trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(table=0)
cookie=0x0, duration=6182.931s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=+new+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=mod_tp_dst:80,ct(commit,nat(dst=172.16.13.0:80)),NORMAL
cookie=0x0, duration=6182.928s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=+est+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(nat),NORMAL
cookie=0x0, duration=6182.925s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=-trk,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(table=0)
cookie=0x0, duration=6182.923s, table=0, n_packets=0, n_bytes=0, priority=50,ct_state=+trk,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(nat),NORMAL
cookie=0x0, duration=81462.938s, table=0, n_packets=302990, n_bytes=35637543, priority=0 actions=NORMAL
When running curl http://172.16.13.0:9080, this behaves slightly better than variant 1 (tcpdump -i eth0 on the client):
Client > Server : 172.16.1.51:45974 > 172.16.13.0:9080 SYN
Server > Client : 172.16.13.0:80 > 172.16.1.51:45974 SYN ACK (response source port not translated)
Client > Server : 172.16.1.51:45974 > 172.16.13.0:80 RST (unknown to client)
Client > Server : 172.16.1.51:45974 > 172.16.13.0:9080 SYN (retransmission)
Server > Client : 172.16.13.0:9080 > 172.16.1.51:45974 SYN ACK (now with correct port 9080)
Client > Server : 172.16.1.51:45974 > 172.16.13.0:9080 ACK
This way the connection always works, but the SYN retransmission timeout adds to the session setup latency.
# ovs-appctl dpctl/dump-conntrack | grep 172.16.13.0
tcp,orig=(src=172.16.1.51,dst=172.16.13.0,sport=45974,dport=80),reply=(src=172.16.13.0,dst=172.16.1.51,sport=80,dport=45974),protoinfo=(state=SYN_SENT)
tcp,orig=(src=172.16.1.51,dst=172.16.13.0,sport=45974,dport=9080),reply=(src=172.16.13.0,dst=172.16.1.51,sport=80,dport=1355),protoinfo=(state=TIME_WAIT)
Can you help me understand why the first SYN is not translated? Flow #5 with ct_state=+trk and actions=ct(nat) should have covered it.
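Variant 2 differs from variant 1 only in flow #2, which additionally rewrites the destination port before committing to conntrack. Per the dump above, that one flow could be installed as (a sketch reconstructed from the dump):

```shell
# Variant 2, flow #2: rewrite tp_dst in the pipeline as well as in conntrack NAT
ovs-ofctl add-flow br1 "table=0,priority=50,ct_state=+new+trk,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=mod_tp_dst:80,ct(commit,nat(dst=172.16.13.0:80)),NORMAL"
```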
Thanks for reading this long post. I appreciate any hints!
Posted on 2021-01-23 15:18:55
Found a way to make it work, though I still don't know why variant 1 does not.
The key seems to be that flow #3 in variant 1 does not match correctly, or the connection tracker has no NAT information available at that point.
Here is the flow dump that works for me:
# ovs-ofctl dump-flows br1
cookie=0x0, duration=160.870s, table=0, n_packets=14, n_bytes=1156, priority=50,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(table=1)
cookie=0x0, duration=160.867s, table=0, n_packets=11, n_bytes=2430, priority=50,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(table=1)
cookie=0x0, duration=184012.978s, table=0, n_packets=558802, n_bytes=60818179, priority=0 actions=NORMAL
cookie=0x0, duration=160.865s, table=1, n_packets=2, n_bytes=156, priority=50,ct_state=+new,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=ct(commit,nat(dst=172.16.13.0:80)),NORMAL
cookie=0x0, duration=160.862s, table=1, n_packets=12, n_bytes=1000, priority=50,tcp,nw_dst=172.16.13.0,tp_dst=9080 actions=mod_tp_dst:80,NORMAL
cookie=0x0, duration=160.860s, table=1, n_packets=11, n_bytes=2430, priority=50,tcp,nw_src=172.16.13.0,tp_src=80 actions=ct(nat),NORMAL
Source: https://serverfault.com/questions/1050795
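The working dump above corresponds to add-flow commands roughly like the following (a sketch reconstructed from the dump; it assumes the earlier variant-1/2 flows have been removed first). The split is: table 0 sends matching traffic through conntrack, table 1 makes the NAT and rewrite decisions:

```shell
# Working setup: classify and send to conntrack in table 0, NAT in table 1
ovs-ofctl add-flow br1 "table=0,priority=50,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=ct(table=1)"
ovs-ofctl add-flow br1 "table=0,priority=50,tcp,nw_src=172.16.13.0,tp_src=80,actions=ct(table=1)"
ovs-ofctl add-flow br1 "table=0,priority=0,actions=NORMAL"
ovs-ofctl add-flow br1 "table=1,priority=50,ct_state=+new,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=ct(commit,nat(dst=172.16.13.0:80)),NORMAL"
ovs-ofctl add-flow br1 "table=1,priority=50,tcp,nw_dst=172.16.13.0,tp_dst=9080,actions=mod_tp_dst:80,NORMAL"
ovs-ofctl add-flow br1 "table=1,priority=50,tcp,nw_src=172.16.13.0,tp_src=80,actions=ct(nat),NORMAL"
```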