
DPDK for general-purpose workloads
Stack Overflow user
Asked on 2021-10-31 04:14:22
1 answer · 374 views · 0 followers · Score: 0

I have deployed OpenStack on my compute nodes and configured OVS-DPDK for high-performance networking. My workload is general-purpose: running haproxy, mysql, apache, XMPP, etc.

When I run load tests, performance is mediocre, and past roughly a 200 kpps packet rate I notice packet drops. I have heard and read that DPDK can handle millions of packets per second, but in my case that is not true. In the guest I am using virtio-net, which processes packets in the kernel, so I believe the bottleneck is my guest VM.

I am not running a DPDK-based application such as testpmd inside the guest. Does that mean OVS+DPDK is of no use for my cloud? How can I take advantage of OVS+DPDK with a general-purpose workload?
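One tuning often suggested for kernel-virtio guests (not something from the original post; a hedged sketch) is enabling virtio-net multiqueue, so that guest RX processing spreads across vCPUs instead of serializing on one. Assuming the guest interface is named eth0 and the VM was defined with multiqueue enabled (in OpenStack this is typically the `hw_vif_multiqueue_enabled='true'` image property):

```shell
# Inside the guest: inspect how many combined channels virtio-net exposes,
# then enable as many as there are vCPUs.
# "eth0" and the channel count of 8 are assumptions for this sketch.
ethtool -l eth0             # show pre-set maximum and current channel counts
ethtool -L eth0 combined 8  # enable 8 combined queues (requires mq=on in the VM definition)
```

Without this, all guest-side packet processing lands on a single vCPU regardless of how many rx queues the vhost-user port offers on the host side.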

Update

We have our own load-testing tool that generates RTP audio traffic: plain UDP with 150-byte packets. Above roughly 200 kpps we noticed degraded, choppy audio quality. In short, on the DPDK host the PMD CPU usage was high during the load test, and audio quality was poor. When I ran the same test against an SR-IOV based VM, performance was really good.

$ ovs-vswitchd -V
ovs-vswitchd (Open vSwitch) 2.13.3
DPDK 19.11.7

Intel NIC: X550T

# ethtool -i ext0
driver: ixgbe
version: 5.1.0-k
firmware-version: 0x80000d63, 18.8.9
expansion-rom-version:
bus-info: 0000:3b:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

In the output below, what are these queue-ids 0 to 8, and why is only the first queue being used while the others always stay at zero? What does this mean?

ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 2:
  isolated : false
  port: vhu1c3bf17a-01    queue-id:  0 (enabled)   pmd usage:  0 %
  port: vhu1c3bf17a-01    queue-id:  1 (enabled)   pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  2 (disabled)  pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  3 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 3:
  isolated : false
pmd thread numa_id 0 core_id 22:
  isolated : false
  port: vhu1c3bf17a-01    queue-id:  3 (enabled)   pmd usage:  0 %
  port: vhu1c3bf17a-01    queue-id:  6 (enabled)   pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  0 (enabled)   pmd usage: 54 %
  port: vhu6b7daba9-1a    queue-id:  5 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 23:
  isolated : false
  port: dpdk1             queue-id:  0 (enabled)   pmd usage:  3 %
pmd thread numa_id 0 core_id 26:
  isolated : false
  port: vhu1c3bf17a-01    queue-id:  2 (enabled)   pmd usage:  0 %
  port: vhu1c3bf17a-01    queue-id:  7 (enabled)   pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  1 (disabled)  pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  4 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 27:
  isolated : false
pmd thread numa_id 0 core_id 46:
  isolated : false
  port: dpdk0             queue-id:  0 (enabled)   pmd usage:  27 %
  port: vhu1c3bf17a-01    queue-id:  4 (enabled)   pmd usage:  0 %
  port: vhu1c3bf17a-01    queue-id:  5 (enabled)   pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  6 (disabled)  pmd usage:  0 %
  port: vhu6b7daba9-1a    queue-id:  7 (disabled)  pmd usage:  0 %
pmd thread numa_id 1 core_id 47:
  isolated : false

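When the default round-robin assignment leaves one PMD overloaded (like the core showing 54% above while others idle), rx queues can be pinned to cores explicitly. A hedged sketch using the standard OVS knobs; the queue-to-core pairs here are illustrative, not values taken from this host:

```shell
# Pin rx queue 0 of dpdk0 to core 2, and rx queue 0 of dpdk1 to core 26
# (format is "rxq:core[,rxq:core...]"; core ids are examples only).
ovs-vsctl set Interface dpdk0 other_config:pmd-rxq-affinity="0:2"
ovs-vsctl set Interface dpdk1 other_config:pmd-rxq-affinity="0:26"

# Ask OVS to redistribute the remaining, non-pinned queues across PMDs.
ovs-appctl dpif-netdev/pmd-rxq-rebalance

# Verify the new queue placement.
ovs-appctl dpif-netdev/pmd-rxq-show
```

Pinned queues mark their PMD thread as isolated, so unpinned queues will avoid that core when rebalanced.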

$ ovs-appctl dpif-netdev/pmd-stats-clear && sleep 10 && ovs-appctl
dpif-netdev/pmd-stats-show | grep "processing cycles:"
  processing cycles: 1697952 (0.01%)
  processing cycles: 12726856558 (74.96%)
  processing cycles: 4259431602 (19.40%)
  processing cycles: 512666 (0.00%)
  processing cycles: 6324848608 (37.81%)

Do these processing cycles mean my PMDs are under pressure? But I am only pushing about a 200 kpps rate?
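For reference, the percentage in parentheses is the share of each PMD's cycles spent actually processing packets rather than idle polling. A small self-contained sketch that pulls the busiest PMD out of output like the run above:

```shell
# Find the highest "processing cycles" percentage in pmd-stats-show output.
# The sample lines below are copied from the run shown above.
sample='processing cycles: 1697952 (0.01%)
processing cycles: 12726856558 (74.96%)
processing cycles: 4259431602 (19.40%)
processing cycles: 512666 (0.00%)
processing cycles: 6324848608 (37.81%)'

# Split each line on "(" and "%" so the percentage lands in field 2,
# then keep the running maximum.
busiest=$(printf '%s\n' "$sample" \
  | awk -F'[(%]' '{ if ($2 + 0 > max) max = $2 + 0 } END { printf "%.2f", max }')
echo "busiest PMD spends ${busiest}% of its cycles processing packets"
```

A PMD near 75% processing share on a modest packet rate suggests expensive per-packet work (e.g. many bridge hops or upcalls), not raw packet volume.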

Here are my dpdk0 and dpdk1 port statistics:

sudo ovs-vsctl get Interface dpdk0 statistics
{flow_director_filter_add_errors=153605,
flow_director_filter_remove_errors=30829, mac_local_errors=0,
mac_remote_errors=0, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0,
ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0,
ovs_tx_qos_drops=0, rx_128_to_255_packets=64338613,
rx_1_to_64_packets=367, rx_256_to_511_packets=116298,
rx_512_to_1023_packets=31264, rx_65_to_127_packets=6990079,
rx_broadcast_packets=0, rx_bytes=12124930385, rx_crc_errors=0,
rx_dropped=0, rx_errors=12, rx_fcoe_crc_errors=0, rx_fcoe_dropped=12,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=367,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=128, rx_management_dropped=35741,
rx_management_packets=31264, rx_mbuf_allocation_errors=0,
rx_missed_errors=0, rx_oversize_errors=0, rx_packets=71512362,
rx_priority0_dropped=0, rx_priority0_mbuf_allocation_errors=1096,
rx_priority1_dropped=0, rx_priority1_mbuf_allocation_errors=0,
rx_priority2_dropped=0, rx_priority2_mbuf_allocation_errors=0,
rx_priority3_dropped=0, rx_priority3_mbuf_allocation_errors=0,
rx_priority4_dropped=0, rx_priority4_mbuf_allocation_errors=0,
rx_priority5_dropped=0, rx_priority5_mbuf_allocation_errors=0,
rx_priority6_dropped=0, rx_priority6_mbuf_allocation_errors=0,
rx_priority7_dropped=0, rx_priority7_mbuf_allocation_errors=0,
rx_undersize_errors=6990079, tx_128_to_255_packets=64273778,
tx_1_to_64_packets=128, tx_256_to_511_packets=43670294,
tx_512_to_1023_packets=153605, tx_65_to_127_packets=881272,
tx_broadcast_packets=10, tx_bytes=25935295292, tx_dropped=0,
tx_errors=0, tx_management_packets=0, tx_multicast_packets=153,
tx_packets=109009906}

Statistics for dpdk1:

sudo ovs-vsctl get Interface dpdk1 statistics
{flow_director_filter_add_errors=126793,
flow_director_filter_remove_errors=37969, mac_local_errors=0,
mac_remote_errors=0, ovs_rx_qos_drops=0, ovs_tx_failure_drops=0,
ovs_tx_invalid_hwol_drops=0, ovs_tx_mtu_exceeded_drops=0,
ovs_tx_qos_drops=0, rx_128_to_255_packets=64435459,
rx_1_to_64_packets=107843, rx_256_to_511_packets=230,
rx_512_to_1023_packets=13, rx_65_to_127_packets=7049788,
rx_broadcast_packets=199058, rx_bytes=12024342488, rx_crc_errors=0,
rx_dropped=0, rx_errors=11, rx_fcoe_crc_errors=0, rx_fcoe_dropped=11,
rx_fcoe_mbuf_allocation_errors=0, rx_fragment_errors=107843,
rx_illegal_byte_errors=0, rx_jabber_errors=0, rx_length_errors=0,
rx_mac_short_packet_dropped=1906, rx_management_dropped=0,
rx_management_packets=13, rx_mbuf_allocation_errors=0,
rx_missed_errors=0, rx_oversize_errors=0, rx_packets=71593333,
rx_priority0_dropped=0, rx_priority0_mbuf_allocation_errors=1131,
rx_priority1_dropped=0, rx_priority1_mbuf_allocation_errors=0,
rx_priority2_dropped=0, rx_priority2_mbuf_allocation_errors=0,
rx_priority3_dropped=0, rx_priority3_mbuf_allocation_errors=0,
rx_priority4_dropped=0, rx_priority4_mbuf_allocation_errors=0,
rx_priority5_dropped=0, rx_priority5_mbuf_allocation_errors=0,
rx_priority6_dropped=0, rx_priority6_mbuf_allocation_errors=0,
rx_priority7_dropped=0, rx_priority7_mbuf_allocation_errors=0,
rx_undersize_errors=7049788, tx_128_to_255_packets=102664472,
tx_1_to_64_packets=1906, tx_256_to_511_packets=68008814,
tx_512_to_1023_packets=126793, tx_65_to_127_packets=1412435,
tx_broadcast_packets=1464, tx_bytes=40693963125, tx_dropped=0,
tx_errors=0, tx_management_packets=199058, tx_multicast_packets=146,
tx_packets=172252389}

Update 2

DPDK interfaces:

  # dpdk-devbind.py -s
    
    Network devices using DPDK-compatible driver
    ============================================
    0000:3b:00.1 'Ethernet Controller 10G X550T 1563' drv=vfio-pci unused=ixgbe
    0000:af:00.1 'Ethernet Controller 10G X550T 1563' drv=vfio-pci unused=ixgbe
    
    Network devices using kernel driver
    ===================================
    0000:04:00.0 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=vfio-pci
    0000:04:00.1 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=vfio-pci
    0000:3b:00.0 'Ethernet Controller 10G X550T 1563' if=int0 drv=ixgbe unused=vfio-pci
    0000:af:00.0 'Ethernet Controller 10G X550T 1563' if=int1 drv=ixgbe unused=vfio-pci

OVS:

# ovs-vsctl show
595103ef-55a1-4f71-b299-a14942965e75
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port vxlan-0a48042b
            Interface vxlan-0a48042b
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.43"}
        Port vxlan-0a480429
            Interface vxlan-0a480429
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.41"}
        Port vxlan-0a48041f
            Interface vxlan-0a48041f
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.31"}
        Port vxlan-0a48042a
            Interface vxlan-0a48042a
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.72.4.44", out_key=flow, remote_ip="10.72.4.42"}
    Bridge br-vlan
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port br-vlan
            Interface br-vlan
                type: internal
        Port dpdkbond
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:af:00.1", n_txq_desc="2048"}
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:3b:00.1", n_txq_desc="2048"}
        Port phy-br-vlan
            Interface phy-br-vlan
                type: patch
                options: {peer=int-br-vlan}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        datapath_type: netdev
        Port vhu87cf49d2-5b
            tag: 7
            Interface vhu87cf49d2-5b
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhu87cf49d2-5b"}
        Port vhub607c1fa-ec
            tag: 7
            Interface vhub607c1fa-ec
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhub607c1fa-ec"}
        Port vhu9a035444-83
            tag: 8
            Interface vhu9a035444-83
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhu9a035444-83"}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-vlan
            Interface int-br-vlan
                type: patch
                options: {peer=phy-br-vlan}
        Port vhue00471df-d8
            tag: 8
            Interface vhue00471df-d8
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhue00471df-d8"}
        Port vhu683fdd35-91
            tag: 7
            Interface vhu683fdd35-91
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhu683fdd35-91"}
        Port vhuf04fb2ec-ec
            tag: 8
            Interface vhuf04fb2ec-ec
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_socket/vhuf04fb2ec-ec"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.13.3"

I created the guest VMs with OpenStack, and I can see they are connected via vhost sockets (e.g. /var/lib/vhost_socket/vhuf04fb2ec-ec).
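For reference, the dpdkvhostuserclient port type above means OVS acts as the vhost-user client, so the VM side must create the socket as the server. A hedged sketch of the QEMU wiring that nova/libvirt generates for such a port (the socket path is taken from the post; all other values are illustrative, and in a real deployment these flags come from the libvirt domain XML, not typed by hand):

```shell
# QEMU acts as the vhost-user *server* for an OVS dpdkvhostuserclient port.
# Hugepage-backed, shared guest memory is required for vhost-user to work.
qemu-system-x86_64 ... \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=char0,path=/var/lib/vhost_socket/vhuf04fb2ec-ec,server=on \
  -netdev type=vhost-user,id=net0,chardev=char0,queues=8 \
  -device virtio-net-pci,netdev=net0,mq=on,vectors=18
```

Note that multiqueue must be declared here (queues=8, mq=on, vectors=2*queues+2) before the guest can enable extra channels with ethtool; otherwise the extra rx queues show as "disabled" in pmd-rxq-show, as seen above.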

1 Answer

Stack Overflow user

Accepted answer

Answered on 2021-11-24 04:30:09

> When I run load tests, performance is mediocre, and past roughly a 200 kpps packet rate I notice packet drops. In short, on the DPDK host the PMD CPU usage was high during the load test and audio quality was poor. When I ran the same test with SR-IOV, performance was really good.

Answer: based on the live debugging done so far, this observation is not correct. The reasons are explained below:

  • The PMD threads were not run on isolated cores.

  • QEMU was not pinned relative to the PMD threads serving its vhost ports, so the vhost-client comparison is not apples-to-apples.

  • With the OpenStack approach, there are at least 3 bridges to traverse before traffic reaches the VM.

  • Because the threads were not pinned, PMD threads could end up running on the same cores at each bridge stage (causing latency and drops).
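The pinning problems described here are usually addressed with the standard OVS-DPDK and libvirt knobs. A hedged sketch; the CPU mask, core ids, and domain name are illustrative, not values from this host:

```shell
# Dedicate cores 2 and 22 to the OVS-DPDK PMD threads
# (bitmask: bit 2 + bit 22 = 0x400004; core ids are examples only).
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x400004

# Keep those cores away from the general scheduler via the kernel
# command line (requires a reboot):
#   isolcpus=2,22 nohz_full=2,22 rcu_nocbs=2,22

# Pin the guest's vCPUs to their own dedicated cores so QEMU never
# competes with the PMD threads (libvirt example; domain name and
# core ids are illustrative).
virsh vcpupin instance-00000001 0 4
virsh vcpupin instance-00000001 1 5
```

With PMDs, QEMU vCPUs, and housekeeping tasks each on disjoint core sets, a drop at one bridge stage no longer cascades into the next.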

For a fair comparison with the SR-IOV approach, the following changes were made to the setup:

  External Port <==> DPDK Port0 (L2fwd) DPDK net_vhost <--> QEMU (virtio-pci)

With iperf3 (bidirectional), the throughput achieved was around 10 Gbps.

Note: it was requested to run trex or pktgen to test Mpps rates; at least 8 Mpps is expected with the current setup.

Hence this is not a DPDK, virtio-client, qemu-kvm, or SR-IOV issue, but a configuration or platform-setup problem.

Score: 1
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/69783902
