While running a recent project I happened to work with SR-IOV (PCI-SIG Single Root I/O Virtualization) on the OSes below with Intel/Emulex NICs. Its main function is to spawn multiple virtual PCI NICs (each getting its own additional PCIe ID) on top of a physical PCIe NIC without affecting the physical device itself (the two coexist); a VF can then be detached from the Host OS and passed through directly to a Guest OS. Enough talk, let's get straight into it:
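Before diving into each OS, it never hurts to confirm the NIC actually advertises the SR-IOV capability. A minimal sketch on a Linux host, run as root (the PCI address 01:00.0 is just an example, substitute your own from lspci; ESXi's lspci takes different flags):
#lspci -s 01:00.0 -vvv | grep -i -A4 'SR-IOV' -> look for 'Single Root I/O Virtualization' and the Total VFs count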
I.VMware 5.x
1) Check the NIC info with lspci in the CLI console (enable VT, VT-d and SR-IOV in the BIOS first)
#lspci | grep -i intel | grep -i network
2) Set the module parameter to create the VFs (using the Intel 82599 as an example here)
#esxcfg-module ixgbe -s max_vfs=0,10,0,10 -> 10 VFs on port 2 of adapters 1 and 2 (one value per port: 0,10,0,10)
3) Verify the parameter has been applied to the ixgbe module
#esxcfg-module -g ixgbe
4) Check the info through lspci after rebooting the OS
#lspci | grep -i intel | grep -i virtual
5) Detach a VF through the vSphere Client and attach it to the Guest OS, then check it with lspci inside the guest (prepare a Guest OS first)
#lspci | grep -i intel | grep -i 'ethernet\|network'
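As a cross-check on the VMware side, newer ESXi builds (5.5, if memory serves) can also show the SR-IOV state through esxcli; a sketch, assuming vmnic4 is one of the 82599 ports on your host:
#esxcli network sriovnic list -> list the SR-IOV capable uplinks and their VF counts
#esxcli network sriovnic vf list -n vmnic4 -> list the VFs instantiated on vmnic4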
II.RHEL6
1) Add the parameter to the kernel line and reboot (enable VT, VT-d and SR-IOV in the BIOS first)
#vi /boot/grub/grub.conf -> add 'intel_iommu=on' to the kernel line (Legacy BIOS mode)
# grub.conf generated by anaconda
[comments omitted]
default=1
timeout=0
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux (2.6.32-22.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-22.el6.x86_64 ro root=/dev/mapper/vg_vm6b-lv_root rd_LVM_LV=vg_vm6b/lv_root rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet crashkernel=auto intel_iommu=on
initrd /initramfs-2.6.32-22.el6.x86_64.img
#vi /boot/efi/EFI/redhat/grub.conf -> same edit for UEFI Mode
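After the reboot it's worth confirming the IOMMU really came up; a quick sketch (the exact DMAR messages vary by kernel version):
#cat /proc/cmdline -> confirm intel_iommu=on made it onto the kernel command line
#dmesg | grep -i -e DMAR -e IOMMU -> look for lines like 'Intel-IOMMU: enabled'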
III.RHEL7 & SLES 12
2) Add the parameter to the kernel line and reboot (enable VT, VT-d and SR-IOV in the BIOS first)
#vi /etc/default/grub -> add 'intel_iommu=on' to the GRUB_CMDLINE_LINUX line (Legacy BIOS mode; example line after this section)
#grub2-mkconfig -o /boot/grub2/grub.cfg(Legacy Mode)
#init 6
#cat /proc/cmdline -> verify the parameter has been added to the kernel command line
#grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg -> RHEL7's UEFI Mode
#grub2-mkconfig -o /boot/efi/EFI/suse/grub.cfg -> SLES12's UEFI Mode
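For reference, the edited line in /etc/default/grub ends up looking roughly like this (the other options shown are just typical RHEL7 defaults, not requirements):
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet intel_iommu=on"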
IV.SLES11
3) Add the parameter to the kernel line via yast2 and reboot (enable VT, VT-d and SR-IOV in the BIOS first)
#yast2 bootloader
4) Load the module you want to use with the VF parameter (e.g. igb, ixgbe, be2net, etc.; a persistence note follows the commands below)
#modprobe -r igb -> Unload the module
#modprobe igb max_vfs=1
#modprobe be2net num_vfs=1
#lspci | grep -i virtual -> list the PCI IDs of the VFs
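Note that parameters passed to modprobe by hand are gone after a reboot; to make them stick you can drop them into /etc/modprobe.d. A sketch using the igb example above (the file name igb.conf is arbitrary):
#echo 'options igb max_vfs=1' > /etc/modprobe.d/igb.conf -> applied every time the module is loaded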
5) Use the virsh command to query the PCI ID of the VF you want to pass through, then detach it
#virsh nodedev-list | grep -i '01_00_1' -> if your VF's ID is 01:00.1
#virsh nodedev-dumpxml pci_0000_01_00_1 -> review the VF's information
#virsh nodedev-dettach pci_0000_01_00_1
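If you later want the VF back on the host side, the reverse operation is a one-liner:
#virsh nodedev-reattach pci_0000_01_00_1 -> rebind the VF to its host driver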
6) Use virt-manager to pass the detached VF through to the Guest OS, then check it with lspci inside the guest (prepare a KVM Guest OS first)
#lspci | grep -i intel | grep -i 'ethernet\|network' -> you can also add the device to the guest XML directly and start the guest with virsh (sketch below)
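For anyone trying the XML route just mentioned, a minimal hostdev stanza for the 01:00.1 VF from the earlier example would look roughly like this (place it inside the guest's <devices> section, or feed it to virsh attach-device, and adjust the address to match your own VF):
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
</hostdev>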
◎ That wraps up this quick walkthrough of getting the SR-IOV function working on VMware, RHEL6/7, and SLES11/12. You can in fact craft the VM's XML by hand; I'll leave that part for everyone to play with. The Hyper-V part will have to wait for B大 to contribute, and for the remaining details you can consult the Wikipedia article. That's it for now, wrapping up!
VMware SR-IOV has a limit on how many VFs you can enable per port on Intel and Emulex; for Intel it seems to be 42. Have you tried any other OS?
Hi 台啤哥:
Emulex's maximum is 16 -> http://www-dl.emulex.com/support/linux/835444p/sr_iov_guide.pdf
For Intel, igb is 45 and ixgbe is 63 -> http://h20566.www2.hp.com/portal/site/hpsc/template.BINARYPORTLET/public/kb/docDisplay/resource.process/?spf_p.tpst=kbDocDisplay_ws_BI&spf_p.rid_kbDocDisplay=docDisplayResURL&javax.portlet.begCacheTok=com.vignette.cachetoken&spf_p.rst_kbDocDisplay=wsrp-resourceState%3DdocId%253Demr_na-c03701945-5%257CdocLocale%253D&javax.portlet.endCacheTok=com.vignette.cachetoken
Above information for reference.
I still haven't found it on Intel's official site >"<.....
[Addendum]
Intel X540/X520 10G Family goes up to 64 ->
http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x520-server-adapters-brief.html
Intel I350 1G goes up to 8 -> http://www.intel.com/content/www/us/en/ethernet-controllers/ethernet-i350-server-adapter-brief.html
The above is for reference only.
PS. Why is Ctrl+C blocked even when editing a comment??
Because you're not the admin @@"
Due to the limited number of vectors available for passthrough devices, there is a limited number of VFs supported on a vSphere ESXi host. The vSphere 5.1 and 5.5 SR-IOV supports up to 41 VFs on supported Intel NICs, and up to 64 VFs on supported Emulex NICs for passthrough.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038739
Tabi哥, you're too good = =|||||