A back-to-back connection should work; please refer to the URL below for details.
What is the exact QSFP+ cable you used?
Are you able to load the driver from this website? https://downloadcenter.intel.com/download/26092/Intel-Beta-Network-Adapter-Drivers-for-Windows-Server-2016-Technical-Preview-5
Sorry about the delay; I could not find my way back to this forum, but I have bookmarked it this time.
I used the driver from the Intel website for Windows Server 2016 TP5;
the driver version is 18.104.22.168, dated 5/27/2016.
I did not apply any firmware updates because the process was very confusing.
The cable is an Intel XLDACBL3 Ethernet QSFP+ Twinaxial Cable.
The OS recognizes the driver and adapter.
When I first installed the adapter and driver, both systems booted up just fine but would not establish a connection.
Now one system will not boot unless I disconnect the cable;
the other system boots with the cable connected.
Both systems have embedded Intel network adapters: one has an I210 + I217, the other two I210s.
Both get IP addresses from DHCP via a standard small-office internet router.
I thought there was some mention that there had to be DNS/WINS on the converged network.
Thank you for the reply. As you mentioned, this is a back-to-back connection: you directly
connect the cable between the two PCs without passing through a switch. But you mentioned
you got the IP from an internet router? Can you help clarify your setup?
If you connect directly via a back-to-back connection, you can assign static IPs on both PCs without using a DHCP server.
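Once static addresses are assigned, a quick scripted reachability check can confirm the link in each direction. A minimal sketch using the OS ping command; the 10.0.0.x addresses mentioned in the comment are only hypothetical examples, and the loopback check simply demonstrates the helper:

```python
import platform
import subprocess

def can_ping(host: str, timeout_s: float = 2.0) -> bool:
    """Return True if a single ICMP echo request to host succeeds."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, "1", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=timeout_s * 2,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

# In practice you would call can_ping("10.0.0.2") from the PC assigned
# 10.0.0.1, and vice versa (hypothetical static addresses for the two ports).
print(can_ping("127.0.0.1"))
```

Running this from each PC toward the other's static address tests both directions, since ICMP filtering can differ per host.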
Furthermore, the Intel driver for the 2016 server OS is still a beta version,
so it is not yet fully validated and tested.
I have two computers: one is a Xeon E3 v3, the other a Xeon E5 v4.
Both use Supermicro motherboards and run Windows Server 2016 TP5.
Each system has onboard 1GbE, which is on a network with internet connectivity;
IPv4 addresses come from DHCP via a router.
Each system also has an XL710-Q2 adapter,
with an Intel XLDACBL3 Ethernet QSFP+ Twinaxial Cable directly connecting the two adapters;
these IP addresses are manually assigned.
The Windows operating system recognizes the adapter and uses the latest beta driver.
From the Xeon E5 system, I can ping the E3 system,
but not from the E3 to the E5.
On the E5 system, I can use Explorer to map a network drive to the E3 system using the IP address of the XL710 adapter.
If I do a transfer, it runs at about 1Gbps, not the much higher rate I expected
(both systems have HDD and SSD share points).
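To check whether a transfer is limited by the network path rather than the HDD/SSD share points, a memory-to-memory TCP throughput test takes the disks out of the picture. A sketch, run here over loopback for illustration; in practice one side would listen on its XL710 address and the other would connect to it:

```python
import socket
import threading
import time

def sink(server_sock: socket.socket) -> None:
    """Accept one connection and discard everything it sends."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(1 << 16):
            pass

def measure_throughput(host: str, port: int, total_bytes: int) -> float:
    """Send total_bytes of zeros and return the rate in bytes/second."""
    payload = bytes(1 << 16)          # 64 KiB chunks
    sent = 0
    start = time.perf_counter()
    with socket.create_connection((host, port)) as s:
        while sent < total_bytes:
            s.sendall(payload)
            sent += len(payload)
    elapsed = time.perf_counter() - start
    return sent / elapsed

server = socket.socket()
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=sink, args=(server,), daemon=True).start()

rate = measure_throughput("127.0.0.1", port, 64 * (1 << 20))  # 64 MiB
print(f"{rate * 8 / 1e9:.2f} Gbit/s")
```

If this raw socket test also tops out near 1Gbps, the traffic is probably being routed over the 1GbE interfaces rather than the XL710 link.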
Do you mean Windows Server 2012 R2?
The plan is to test this on Windows Server 2016 TP, not to change the network infrastructure on any 2012 R2 or earlier systems.
Also, I am the only person advocating 40GbE (or 100Gbps, if Omni-Path were to support Windows),
so if I cannot prove that it works with a direct connection,
I cannot get funds to buy a switch, and the whole project would be shelved.
One of the reasons I am pursuing this is that I read somewhere that SFP+/QSFP+ has lower latency than 10GBase-T over Cat 6 twisted pair
(if there is documentation on the actual values, I would appreciate it).
After further checking, I am sorry to say that Windows 2016 is a testing environment which we do not officially support yet, as stated in our pre-release software agreement at
https://downloadcenter.intel.com/download/26092/Intel-Beta-Network-Adapter-Drivers-for-Windows-Server-2016-Technical-Preview-5 ; you may refer to the full text:
"the Software you are installing or using under this agreement is pre-commercial release or is labeled or otherwise represented as “alpha-“ or “beta-“ versions of the Software ("pre-release Software"), the following terms apply. To the extent that any provision in this Section conflicts with any other term(s) or condition(s) in this Agreement with respect to pre-release Software, this Section shall supersede the other term(s) or condition(s), but only to the extent necessary to resolve the conflict.
You understand and acknowledge that the Software is pre-release Software, does not represent the final Software from Intel, and may contain errors and other problems that could cause data loss, system failures, or other errors. The pre-release Software is provided to you "as-is" and Intel disclaims any warranty or liability to you for any damages that arise out of the use of the pre-release Software..."
Hoping for your understanding on this matter.
Yes, I understand Windows Server 2016 TPx is a testing environment, and that Intel does not want to incur the obligations of a fully qualified, supported OS used in production.
As it happens, I am doing testing in the Windows Server 2016 TP5 environment.
I would expect Intel to be interested in finding out what issues there might be, so that Windows Server 2016 RTM would be supported when the time comes.
But in case you are interested, it does seem that the XL710 is now working, at least in part.
I have two machines; each has a 1GbE port (Intel I210) connected to a switch with internet access,
and the XL710s are connected back-to-back.
After applying the latest Windows updates, I noticed that SQL Server connections now default to the XL710, which is the correct choice.
However, ping only works from system 2 to system 1, not from system 1 to system 2.
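One-way ping often points at the receiving side dropping ICMP echo requests (a host firewall, for instance) rather than a bad link. A TCP connect test to a port known to be open can separate the two cases. A sketch, demonstrated over loopback; the SMB port mentioned in the comment is only an example:

```python
import socket

def tcp_reachable(host: str, port: int, timeout_s: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        return False

# Loopback demonstration; in practice, run this from system 1 against
# system 2's XL710 address on a port with a listening service
# (e.g. 445 for SMB file sharing).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(8)
open_port = listener.getsockname()[1]
print(tcp_reachable("127.0.0.1", open_port))
```

If TCP connects succeed in the direction where ping fails, the link itself is fine and ICMP is simply being filtered on the target host.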
Given this reply from Intel, I would have to report no confidence that Intel takes support issues seriously.
We would like to clarify that you are still likely to encounter errors with the beta driver,
so it is best to wait for the release of the official driver. Even with the new official driver, some issues may still exist.
We have done some testing in the lab on Windows 2016 TP5 with a back-to-back connection, and we are able to ping the ports in each direction.
To summarize what I said above, with some additional findings:
On Windows Server 2016 TP5 with the 1.3.115 beta driver and Windows hotfixes as of Oct 12, the connection seemed to work to some degree.
On Windows Server 2016 RTM with the 1.5.59 driver and Windows hotfixes as of Oct 13,
most functionality seems to work.
Starting a network connection does seem slow, though: for example, when I transfer a large file, it takes 1-2 seconds for the transfer to start and ramp up. This is visually instantaneous on a 1GbE connection but visibly not on the 40GbE. Of course, the eventual 40GbE transfer rate is very nice.
From a SQL Server .NET client connection, there is also a noticeable slight delay in establishing the connection.
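The setup delay can be quantified separately from throughput by timing just the TCP connect. A small sketch, run over loopback here; in practice, substitute the XL710 address of the other system to compare connect times against the 1GbE path:

```python
import socket
import statistics
import time

def connect_time_ms(host: str, port: int) -> float:
    """Time a single TCP connect in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000.0

listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(16)
port = listener.getsockname()[1]

samples = [connect_time_ms("127.0.0.1", port) for _ in range(5)]
print(f"median connect time: {statistics.median(samples):.3f} ms")
```

A consistently higher connect time on the 40GbE path than on the 1GbE path would confirm the delay sits in connection establishment rather than in the application.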