Getting the l3fwd sample application to forward packets between two PCs took me a few days, mainly because the official documentation is far too terse. At first the two endpoints could not ping each other at all. After some digging around I found that, when using the l3fwd sample, besides the routing table that is preset in the program, you also have to set up static ARP entries. The reason is that l3fwd only forwards IP packets and does not handle anything else (ARP included), so the ARP requests sent by the Source are never delivered to the Destination.
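The routes themselves are hard-coded in the l3fwd sources and differ by release: older versions such as DPDK 18.11 default to 1.1.1.0/24 -> port 0, 2.1.1.0/24 -> port 1, and so on (you can see these in the VM log at the end of this post), while newer releases default to 198.18.0.0/24 -> port 0, 198.18.1.0/24 -> port 1, etc., which is why the endpoint IPs in the topology below sit in those subnets. If your addressing differs, edit the route array and rebuild; a quick way to find it (file and array names taken from DPDK 18.11, treat them as an assumption for other versions):
cd dpdk-stable-18.11.6/examples/l3fwd
# dump the hard-coded IPv4 LPM routes: entries are {ip, prefix_depth, out_port}
grep -n -A 10 "ipv4_l3fwd_lpm_route_array" l3fwd_lpm.c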
Topology:
Computer A (Windows 10, IP: 198.18.0.1, MAC: 00-E0-4C-68-DD-6C, connected to Port0)
Computer B (Windows 10, IP: 198.18.1.1, MAC: 00-E0-4C-68-FD-E0, connected to Port1)
Computer S (Ubuntu 20.04, Port0: 24-6e-96-58-69-68, Port1: 24-6e-96-58-69-69)
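In other words:
Computer A (198.18.0.1) ---- [Port0] Computer S, running l3fwd [Port1] ---- Computer B (198.18.1.1)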
1. Set up static ARP entries on Windows
See the instructions here.
For the Ubuntu side, see the instructions here.
Since both endpoints in my environment run Windows: on Computer A, add an entry that maps Computer B's IP to Port0's MAC address (the 14 is the interface index of Computer A's NIC; netsh interface ipv4 show interfaces lists the indexes):
netsh -c i i add neighbors 14 "198.18.1.1" "24-6e-96-58-69-68"
(The double quotes are optional; they are purely for readability.)
In the same way, Computer B needs an entry pointing at Port1's MAC address. I won't spell that one out here.
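As a sanity check you can dump the neighbor table on Windows with netsh interface ipv4 show neighbors. And if an endpoint were a Linux box instead, the equivalent static entry can be added with ip neigh; a minimal sketch, assuming the NIC is called eth0 (a placeholder, not from my setup):
# Linux equivalent of the netsh command above: pin Computer B's IP to Port0's MAC
sudo ip neigh replace 198.18.1.1 lladdr 24:6e:96:58:69:68 nud permanent dev eth0
# confirm the entry shows up as PERMANENT
ip neigh show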
2. Specify the MAC addresses on the command line when running the l3fwd sample
The crucial part is the --eth-dest parameter, which tells l3fwd, for each port on Computer S, the MAC address of the Computer A/B NIC attached to that port:
sudo ./l3fwd -c 0x3 -n 2 -- -p 0x3 --config="(0,0,0),(1,0,1)" --eth-dest=0,00:E0:4C:68:DD:6C --eth-dest=1,00:E0:4C:68:FD:E0
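For reference, the options in that command break down as follows (these are the standard EAL and l3fwd options):
-c 0x3                           # EAL core mask: run on lcores 0 and 1
-n 2                             # number of memory channels
-p 0x3                           # l3fwd port mask: enable ports 0 and 1
--config="(0,0,0),(1,0,1)"       # (port,queue,lcore): lcore 0 polls port 0, lcore 1 polls port 1
--eth-dest=0,00:E0:4C:68:DD:6C   # destination MAC stamped on packets leaving port 0 (Computer A's NIC)
--eth-dest=1,00:E0:4C:68:FD:E0   # destination MAC stamped on packets leaving port 1 (Computer B's NIC)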
Once l3fwd is running, ping works normally. Here I use iperf3 on Computer A to push traffic over to Computer B.
Result:
PS D:\iperf-3.1.3-win64> .\iperf3.exe -c 198.18.1.1
Connecting to host 198.18.1.1, port 5201
[ 4] local 198.18.0.1 port 51374 connected to 198.18.1.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-1.00   sec  99.0 MBytes   830 Mbits/sec
[ 4]   1.00-2.00   sec   110 MBytes   919 Mbits/sec
[ 4]   2.00-3.00   sec   110 MBytes   919 Mbits/sec
[ 4]   3.00-4.00   sec   109 MBytes   916 Mbits/sec
[ 4]   4.00-5.00   sec   110 MBytes   924 Mbits/sec
[ 4]   5.00-6.00   sec   105 MBytes   881 Mbits/sec
[ 4]   6.00-7.00   sec   110 MBytes   924 Mbits/sec
[ 4]   7.00-8.00   sec   108 MBytes   904 Mbits/sec
[ 4]   8.00-9.00   sec   110 MBytes   920 Mbits/sec
[ 4]   9.00-10.00  sec  63.5 MBytes   533 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-10.00  sec  1.01 GBytes   867 Mbits/sec        sender
[ 4]   0.00-10.00  sec  1.01 GBytes   867 Mbits/sec        receiver
iperf Done.
PS D:\iperf-3.1.3-win64>
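The run above only pushes traffic from A to B; iperf3 can exercise the reverse direction from the same client, or use UDP instead of TCP, with its standard flags:
.\iperf3.exe -c 198.18.1.1 -R           # reverse mode: the server (B) sends, the client (A) receives
.\iperf3.exe -c 198.18.1.1 -u -b 900M   # UDP at a 900 Mbit/s target rate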
Problems I ran into
port 1 is not present on the board
Most likely this means the NIC has not been bound to DPDK.
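The dpdk-devbind.py script that ships with DPDK shows and fixes the binding; a sketch of the usual steps (the script path depends on where DPDK is installed, the interface has to be down before it can be rebound, and the PCI address below is just the one from my VM log, so substitute your own):
./usertools/dpdk-devbind.py --status              # check which driver each NIC is bound to
sudo modprobe vfio-pci                            # load a DPDK-compatible driver
sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:02:06.0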
Further reading
https://www.twblogs.net/a/5b99e1c22b71773ebacd9d9e
https://www.cnblogs.com/ZCplayground/p/9381961.html
---------------------------------------------------------------------------------------------------------------------
Below, as a little bonus, is the result of running l3fwd inside virtual machines. Since there is no physical NIC, I had no way to generate traffic between endpoints.
My guess is that you would have to set up something like a LAN Segment between the VMs ((if anyone gives it a try, please tell me the result XD
To run l3fwd in a VM you also need to add the --parse-ptype parameter, because the emulated e1000 NIC cannot classify packet types in hardware, so l3fwd parses them in software instead (hence the "soft parse-ptype is enabled" line in the log):
john@ubuntu:~/dpdk-stable-18.11.6/myinstall/share/dpdk/examples/l3fwd/build/app$ sudo ./l3fwd -c 1 -n 2 -- -p 0x3 -P --config="(0,0,0),(1,0,0)" --parse-ptype
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:01.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:02:07.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 8086:100f net_e1000_em
soft parse-ptype is enabled
LPM or EM none selected, default LPM on
Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1... Port 0 modified RSS hash function based on hardware support,requested:0xa38c configured:0
portid = 0, nb_rx_queue = 1
Address:00:0C:29:DC:F9:16, Destination:02:00:00:00:00:00, Allocated mbuf pool on socket 0
LPM: Adding route 0x01010100 / 24 (0)
LPM: Adding route 0x02010100 / 24 (1)
LPM: Adding route IPV6 / 48 (0)
LPM: Adding route IPV6 / 48 (1)
txq=0,0,0
Initializing port 1 ... Creating queues: nb_rxq=1 nb_txq=1... Port 1 modified RSS hash function based on hardware support,requested:0xa38c configured:0
portid = 1, nb_rx_queue = 1
Address:00:0C:29:DC:F9:20, Destination:02:00:00:00:00:01, txq=0,0,0
Initializing rx queues on lcore 0 ... rxq=0,0,0 rxq=1,0,0
Port 0: softly parse packet type info
Port 1: softly parse packet type info
Checking link statusdone
Port0 Link Up. Speed 1000 Mbps -full-duplex
Port1 Link Up. Speed 1000 Mbps -full-duplex
L3FWD: entering main loop on lcore 0
L3FWD: -- lcoreid=0 portid=0 rxqueueid=0
L3FWD: -- lcoreid=0 portid=1 rxqueueid=0