Thanks for posting to the community.
I have a couple of questions that may help figure out the problem.
What are the 10GbE cards you are using?
Is SR-IOV enabled?
Does it make a difference which slot the last card is in?
You might be running out of IO address space. If you do not need to use network boot, you can disable IO Map Mode. See Doug Boom's blog post, IO Util Tool, for details. The utility is included in the Administrative Tools webpack, available for download at http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=4237.
Patrick has some great questions and I have a few of my own. Is IO mapped resources enabled on each of the cards? You might be running out of IO resources. You can use IOUtil (on the media at APPS\TOOLS\IOUtil) to turn it off if you aren't using it. What would be using it are the Option ROMs, another thing that could be consuming BIOS resources. If you aren't using PXE/iSCSI boot or FCoE boot, you can turn off the Option ROMs using BootUtil (located at APPS\BootUtil). The readmes in each directory provide the usage and install details.
Hopefully between Patrick's advice and mine you can get going. Please mark the question as answered if it helps, or post more data if it doesn't. Good luck!
How do I disable IO Map Mode? Is it in the BIOS (which option)?
SR-IOV is disabled in the BIOS. I am using a Fedora 14 setup. Sometimes I am getting the error "NMI received, system halted" on boot up.
You cannot manually disable resources, but you can disable the unused Boot ROMs like Douglas suggested. That will free up additional PCI resources.
I am on Linux (Fedora 14). I downloaded the IOUtil .exe, which I think is for DOS.
How can I do this on a Linux system?
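While waiting for a Linux build of the utility, you can at least check how crowded the legacy 64 KB I/O port space is by reading /proc/ioports on the Fedora box. Here is a small sketch that totals the top-level ranges; the sample text below is a made-up excerpt, on a real system you would read the file itself:

```python
def ioport_usage(ioports_text):
    """Sum the bytes claimed in /proc/ioports-style text.

    Only top-level ranges (no leading indent) are counted, so nested
    child ranges are not double-counted. The legacy I/O space is 64 KB.
    """
    total = 0
    for line in ioports_text.splitlines():
        if not line or line.startswith(" "):
            continue  # skip blanks and indented child ranges
        span, _, _ = line.partition(":")
        start, _, end = span.strip().partition("-")
        total += int(end, 16) - int(start, 16) + 1
    return total

# Hypothetical excerpt; on a real box use: open("/proc/ioports").read()
sample = (
    "0000-001f : dma1\n"
    "0020-0021 : pic1\n"
    "e000-efff : PCI Bus 0000:01\n"
    "  e000-e01f : 0000:01:00.0\n"
)
used = ioport_usage(sample)
print(f"{used} bytes of 65536 in use")  # 4130 bytes of 65536 in use
```

If the total is close to 65536, that supports the running-out-of-IO-resources theory from the earlier replies.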
Change the BIOS Memory Mapping settings:
Go to BIOS => Advanced => PCI Configuration =>
Maximize Memory below 4GB Disabled
Memory Mapped I/O above 4 GB Enabled
Memory Mapped I/O Size <select the card required amount of memory mapping in GB>
I hope this will solve your issue.
with thanks and regards,
BIOS setup has an option for MMIO over 4 GB = Enabled; I accept this will have an impact on our gigabit Ethernet device, since the gigabit Ethernet PCIe endpoint device's MMIO is configured as prefetchable address space in the configuration space for all the functions.
I believe that if the PCI device is configured as non-prefetchable, then there will be no effect when we set MMIO over 4 GB = Enabled.
I think I am not wrong; somewhere in the PCIe spec I have seen an implementation note on the non-prefetchable address space restriction.
And gigabit Ethernet will require only a small memory region anyway, even with the SR-IOV feature enabled.
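Whether a BAR is prefetchable can be read straight out of the BAR register's low bits (per the PCI spec layout: bit 0 = 0 for a memory BAR, bits 2:1 = address type, bit 3 = prefetchable). A quick sketch for decoding a raw value; the example BAR value is made up for illustration:

```python
def decode_mem_bar(bar):
    """Decode the low bits of a raw 32-bit memory BAR register (PCI spec).

    bit 0    : 0 = memory BAR (1 would be an I/O BAR)
    bits 2:1 : 00 = 32-bit address, 10 = 64-bit address
    bit 3    : 1 = prefetchable
    """
    if bar & 0x1:
        raise ValueError("I/O BAR, not a memory BAR")
    bar_type = (bar >> 1) & 0x3
    return {
        "prefetchable": bool(bar & 0x8),
        "is_64bit": bar_type == 0x2,
    }

# Made-up example: a 64-bit, prefetchable BAR based at 0xF0000000
info = decode_mem_bar(0xF000000C)
print(info)  # {'prefetchable': True, 'is_64bit': True}
```

A 64-bit prefetchable BAR is exactly the kind that the firmware can place above 4 GB when MMIO over 4 GB is enabled; non-prefetchable 32-bit BARs must stay below 4 GB.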
Gigabit Ethernet card memory allocations:
512K + 16K = 528K
512K + 16K = 528K
32K * 64 VFs = 2048K
32K * 64 VFs = 2048K
Per-card total: 528K + 528K + 2048K + 2048K = 5152K; with 5 cards (the maximum), that is 5152K * 5 = 25760K.
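The arithmetic above checks out; a quick sketch confirming it (the 32K-per-VF and 64-VFs-per-function figures are taken from the allocation list in this post):

```python
# Per-function base allocation: one 512K region plus one 16K region
base = 512 + 16          # 528K

# SR-IOV virtual function allocation: 32K per VF, 64 VFs per function
vfs = 32 * 64            # 2048K

# Two functions per card, each with its base region and its VF region
per_card = 2 * base + 2 * vfs

# Worst case: five cards populated
total = per_card * 5

print(per_card, total)   # 5152 25760
```

So even fully populated with SR-IOV enabled, the cards need roughly 25 MB of MMIO space, which is small compared with the window the BIOS can reserve.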
My hunch is that the PnP bridge configuration, or the PCI bridge address range allocation for the endpoint device in the BIOS (by increasing the range), can solve this issue even if the memory space for the PCIe endpoint device is several MB in size.