Summary
I am trying to use testpmd as a sink for traffic coming from a physical NIC, via OVS with DPDK.
When I run testpmd, it fails. The error message is very terse, so I don't know what went wrong.
How do I connect testpmd to a virtual port in OVS using DPDK?
Steps
I am mostly following these Mellanox instructions.
# step 5 - "Specify initial Open vSwitch (OVS) database to use"
export PATH=$PATH:/usr/local/share/openvswitch/scripts
export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:$DB_SOCK --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
# step 6 - "Configure OVS to support DPDK ports"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
# step 7 - "Start OVS-DPDK service"
ovs-ctl --no-ovsdb-server --db-sock="$DB_SOCK" start # starts ovs-vswitchd only; ovsdb-server is already running
# step 8 - "Configure the source code analyzer (PMD) to work with 2G hugespages and NUMA node0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="2048,2048" # 2048 = 2GB
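The dpdk-socket-mem setting above assumes 2 MB hugepages have already been reserved on both NUMA nodes; a quick sanity check, as a sketch using the standard Linux /proc and sysfs paths:

```shell
# Show the overall hugepage counters (|| true: don't fail if none are configured)
grep HugePages /proc/meminfo || true
# Per-NUMA-node view; nodes 0 and 1 are assumed from the "2048,2048" split
for node in 0 1; do
    f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    if [ -r "$f" ]; then
        echo "node$node: $(cat "$f") x 2MB hugepages"
    fi
done
```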
# step 9 - "Set core mask to enable several PMDs"
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0xFF0 # cores 4-11, 4 per NUMA node
# core masks are one-hot bitmaps; LSB is core 0
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x8 # core 3
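Since the masks are one-hot bitmaps with the LSB as core 0, they can be derived from a core list instead of being written by hand; a small sketch:

```shell
# Build a one-hot CPU mask from a list of core IDs (cores 4-11 here,
# which should reproduce the pmd-cpu-mask used above)
mask=0
for core in 4 5 6 7 8 9 10 11; do
    mask=$(( mask | (1 << core) ))
done
printf '0x%X\n' "$mask"   # 0xFF0
```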
# step 10 - there is no step 10 in the doc linked above
# step 11 - Create an OVS bridge
BRIDGE="br0"
ovs-vsctl add-br $BRIDGE -- set bridge br0 datapath_type=netdev
Then, for the OVS elements, I am trying to follow these steps:
# add physical NICs to bridge, must be named dpdk(\d+)
sudo ovs-vsctl add-port $BRIDGE dpdk0 \
-- set Interface dpdk0 type=dpdk \
options:dpdk-devargs=0000:5e:00.0 ofport_request=1
sudo ovs-vsctl add-port $BRIDGE dpdk1 \
-- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=0000:5e:00.1 ofport_request=2
# add a virtual port to connect to testpmd/VM
# Not sure if I want dpdkvhostuser or dpdkvhostuserclient
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser0 \
-- \
set Interface dpdkvhostuser0 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:4,1:6" \
ofport_request=3
sudo ovs-vsctl add-port $BRIDGE dpdkvhostuser1 \
-- \
set Interface dpdkvhostuser1 \
type=dpdkvhostuser \
options:n_rxq=2,pmd-rxq-affinity="0:8,1:10" \
ofport_request=4
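On the dpdkvhostuser-vs-dpdkvhostuserclient question: with the client type, OVS connects to a socket that the other end (QEMU or testpmd) creates, so the socket path is set explicitly. A hedged sketch of the client-mode equivalent (the port name and path are illustrative, not from my setup):

```shell
# Sketch: client-mode variant. OVS acts as the vhost-user client and
# reconnects if the peer restarts; the /tmp path below is hypothetical.
sudo ovs-vsctl add-port $BRIDGE vhostclient0 \
    -- set Interface vhostclient0 \
       type=dpdkvhostuserclient \
       options:vhost-server-path=/tmp/vhostclient0.sock
```

In that mode, testpmd would have to create the server side of the socket, e.g. by appending server=1 to the virtio_user --vdev string.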
# add flows to join interfaces (based on ofport_request numbers)
sudo ovs-ofctl add-flow $BRIDGE in_port=1,action=output:3
sudo ovs-ofctl add-flow $BRIDGE in_port=3,action=output:1
sudo ovs-ofctl add-flow $BRIDGE in_port=2,action=output:4
sudo ovs-ofctl add-flow $BRIDGE in_port=4,action=output:2
Then I run testpmd:
sudo -E $DPDK_DIR/x86_64-native-linuxapp-gcc/app/testpmd \
--vdev virtio_user0,path=/usr/local/var/run/openvswitch/dpdkvhostuser0 \
--vdev virtio_user1,path=/usr/local/var/run/openvswitch/dpdkvhostuser1 \
-c 0x00fff000 \
-n 1 \
--socket-mem=2048,2048 \
--file-prefix=testpmd \
--log-level=9 \
--no-pci \
-- \
--port-numa-config=0,0,1,0 \
--ring-numa-config=0,1,0,1,1,0 \
--numa \
--socket-num=0 \
--txd=512 \
--rxd=512 \
--mbcache=512 \
--rxq=1 \
--txq=1 \
--nb-cores=4 \
-i \
--rss-udp \
--auto-start
The output is as follows:
...
EAL: lcore 18 is ready (tid=456c700;cpuset=[18])
EAL: lcore 21 is ready (tid=2d69700;cpuset=[21])
Interactive-mode selected
Auto-start selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=327680, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=327680, size=2176, socket=1
Configuring Port 0 (socket 0)
Fail to configure port 0
EAL: Error - exiting with code: 1
Cause: Start ports failed
The bottom of /usr/local/var/log/openvswitch/ovs-vswitchd.log is:
2018-11-30T02:45:49.115Z|00026|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been added on numa node 0
2018-11-30T02:45:49.115Z|00027|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00028|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 0
2018-11-30T02:45:49.115Z|00029|netdev_dpdk|INFO|State of queue 0 ( tx_qid 0 ) of vhost device '/usr/local/var/run/openvswitch/dpdkvhostuser0' changed to 'enabled'
2018-11-30T02:45:49.115Z|00030|dpdk|INFO|VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
2018-11-30T02:45:49.115Z|00031|dpdk|INFO|VHOST_CONFIG: set queue enable: 1 to qp idx: 1
2018-11-30T02:45:49.278Z|00032|dpdk|ERR|VHOST_CONFIG: recvmsg failed
2018-11-30T02:45:49.279Z|00033|dpdk|INFO|VHOST_CONFIG: vhost peer closed
2018-11-30T02:45:49.280Z|00034|netdev_dpdk|INFO|vHost Device '/usr/local/var/run/openvswitch/dpdkvhostuser0' has been removed
What is the cause of the failure?
Should I be using dpdkvhostuserclient instead of dpdkvhostuser?
What else I have tried
Looking in /var/log/messages for more information -- but it is just a copy of stdout and stderr. I have tried changing my testpmd command arguments. (Docs here.)
- Removing --no-pci. The result is:
Configuring Port 0 (socket 0)
Port 0: 24:8A:07:9E:94:94
Configuring Port 1 (socket 0)
Port 1: 24:8A:07:9E:94:95
Configuring Port 2 (socket 0)
Fail to configure port 2
EAL: Error - exiting with code: 1
Cause: Start ports failed
Those MAC addresses belong to the physical NICs, which I have already attached to OVS.
- Removing --auto-start: same result.
- --nb-cores=1: same result.
- Removing one --vdev: Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained. When I add --port-topology=chained, I eventually end up with the original error.
Other information
When I run ip addr, I see an interface named br0 with the same MAC address as the physical NIC (p3p1, when it is bound to the kernel). When I run sudo ovs-vsctl show, I see:
d3e721eb-6aeb-44c0-9fa8-5fcf023008c5
Bridge "br0"
Port "dpdkvhostuser1"
Interface "dpdkvhostuser1"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:8,1:10"}
Port "dpdk1"
Interface "dpdk1"
type: dpdk
options: {dpdk-devargs="0000:5e:00.1"}
Port "dpdk0"
Interface "dpdk0"
type: dpdk
options: {dpdk-devargs="0000:5e:00.0"}
Port "br0"
Interface "br0"
type: internal
Port "dpdkvhostuser0"
Interface "dpdkvhostuser0"
type: dpdkvhostuser
options: {n_rxq="2,pmd-rxq-affinity=0:4,1:6"}
Edit: added the contents of /usr/local/var/log/openvswitch/ovs-vswitchd.log
Posted on 2018-11-30 07:26:44
Use dpdkvhostuser, not the client variant. The number of queues in options:n_rxq=2 does not match the number of queues in the testpmd command's --txq=1. https://stackoverflow.com/questions/53530589
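Note also that in the ovs-vsctl show output the whole affinity string was swallowed into the n_rxq value. A hedged sketch of corrected settings, splitting the two keys and matching testpmd's queue counts:

```shell
# Sketch: set n_rxq and the rxq affinity as separate keys so OVS
# parses each one (the affinity lives under other_config, not options)
sudo ovs-vsctl set Interface dpdkvhostuser0 \
    options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:4,1:6"
sudo ovs-vsctl set Interface dpdkvhostuser1 \
    options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:10"
# ...and run testpmd with matching queue counts: --rxq=2 --txq=2
```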