This is designed to be a quick overview of the most common 82574 questions we get. It is not a replacement for reading the datasheet and spec update. In fact, the datasheet and spec update are required reading, and if they conflict with this page, the datasheet and spec update are right and this page is wrong. If you like this type of FAQ, leave us a comment and maybe we will do more of them. Also, we may not update this page in the future, so I can’t say this enough: check the datasheet and spec update!


Q: Does the 82574 support EEPROM-less designs?

A:  No. The 82574 requires an EEPROM or a Flash for proper operation.


Q. What power supplies does the 82574 require?

A:  3.3V, 1.9V and 1.0V.


Q. Which power rails are sourced from main power?

A:  In order to support manageability and Wake on LAN features, all 82574 power supplies should be derived from AUX power.


Q: I’m working on an 82574L design that is going into an ARM* platform that does not offer a PERST# signal. I had the platform designers pull it high and then de-assert it before the PCIe* setup sequence started, and it’s working, but are there any specific timing requirements for this situation?

A: PCI Express CEM r2.0, section 2.2 (PERST# Signal), defines very specifically what the expectations are for this signal (Tperst is the parameter I believe you were asking about). The Tperst minimum is stated as 100 µs (this applies to S0->S3->S0 transitions). However, for the G3->S0 transition, PERST# should be kept low from power-on until 100 ms after power is known to be stable.
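For platforms that drive PERST# from a GPIO, the sequencing above is straightforward to express in firmware. Here is a minimal sketch, assuming a hypothetical board-support API (gpio_perst_low, gpio_perst_high, power_is_stable, delay_ms, and delay_us are placeholders for your platform’s primitives); the timing constants come directly from the CEM numbers quoted above.

    /* Hypothetical PERST# sequencing sketch; the GPIO and timer calls below
     * are placeholders to be mapped onto your platform's BSP. */
    extern void gpio_perst_low(void);     /* drive PERST# low (asserted)     */
    extern void gpio_perst_high(void);    /* drive PERST# high (de-asserted) */
    extern int  power_is_stable(void);    /* all rails within tolerance?     */
    extern void delay_ms(unsigned ms);
    extern void delay_us(unsigned us);

    /* Cold boot (G3 -> S0): hold PERST# low through power-up, then for
     * 100 ms after power is known to be stable. */
    void perst_cold_boot(void)
    {
        gpio_perst_low();
        while (!power_is_stable())
            ;                     /* wait for power good */
        delay_ms(100);            /* CEM: 100 ms after power is stable */
        gpio_perst_high();        /* release reset; link setup may begin */
    }

    /* Resume (S0 -> S3 -> S0): Tperst minimum of 100 us. */
    void perst_resume(void)
    {
        gpio_perst_low();
        delay_us(100);            /* CEM: Tperst >= 100 us */
        gpio_perst_high();
    }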


Q: What is the purpose of ATEST_P and ATEST_N?  Is there a way to measure the GTX clock (125MHz)?

A: Don't use the ATEST pins. A few IEEE tests normally require a way to determine the clock, but they can still be done, and we document how to perform the standard tests without a clock measurement. The best way to measure the crystal frequency is with a high-impedance probe, a probe amp, and a counter.


Q: Are any LAN Voltage Regulator options problematic?

A: In early 82574 designs we advised allowing for multiple options to reduce risk. Since then we have thoroughly tested all of the options, and customers have used all of them successfully. Therefore, we can now state that there is no additional risk to using any of the recommended power options detailed in the 82574L datasheet.


Q: I want to change the MDI to MDI-X on my board (crossover). Is there anything else I need to do to support this in the EEPROM or software?

A: No. The 82574L’s Auto MDI-X feature automatically detects when MDI-X is needed. As long as you have the pins connected correctly, you don’t need to do anything else.


Q: There is a different recommendation in the datasheet or checklist compared with the reference schematic (resistor value, capacitor value, etc.). Which document do I go by?

A:  When there is a conflict, please follow the reference schematic's recommendation since this was validated with a real board design. Also, let us know so that we can update the documentation.


Q: What if I want to use a 10/100 magnetic for my design?

A: You could, but you should consider the following points:

1.)    Intel chose not to validate 10/100 magnetics because the cost of a gigabit magnetic is often equivalent to, and in a few cases cheaper than, that of a 10/100 magnetic.

2.)    If you want to use only 10/100 speeds but design in gigabit magnetics, you can disable and re-enable gigabit mode with an EEPROM bit. This allows for flexibility and upgrades by changing a single bit; a sketch of what such an edit looks like follows below.
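To make the single-bit point concrete, here is a minimal, hypothetical C sketch that toggles one feature bit in an EEPROM image dump and repairs the checksum word. The word and bit offsets are placeholders, not the real 82574 locations (those are in the datasheet’s EEPROM map), and the 0xBABA checksum convention, common to many Intel Ethernet controllers, should be verified against the 82574 documentation before use.

    /* Hypothetical sketch: toggle one feature bit in a 64-word EEPROM image
     * (e.g., a dump of the part's EEPROM) and fix up the checksum word.
     * GIG_DISABLE_WORD/GIG_DISABLE_BIT are placeholders; consult the 82574
     * datasheet's EEPROM map for the real location.  Assumes a little-endian
     * host and dump. */
    #include <stdio.h>
    #include <stdint.h>

    #define GIG_DISABLE_WORD 0x00u  /* placeholder word offset  */
    #define GIG_DISABLE_BIT  0u     /* placeholder bit position */

    int main(void)
    {
        uint16_t eeprom[0x40];
        FILE *f = fopen("eeprom.bin", "r+b");
        if (!f || fread(eeprom, sizeof(uint16_t), 0x40, f) != 0x40)
            return 1;

        eeprom[GIG_DISABLE_WORD] ^= (uint16_t)(1u << GIG_DISABLE_BIT);

        /* Recompute the checksum word so words 0x00-0x3F sum to 0xBABA
         * (verify this convention for your part). */
        uint16_t sum = 0;
        for (int i = 0; i < 0x3F; i++)
            sum = (uint16_t)(sum + eeprom[i]);
        eeprom[0x3F] = (uint16_t)(0xBABA - sum);

        rewind(f);
        fwrite(eeprom, sizeof(uint16_t), 0x40, f);
        fclose(f);
        return 0;
    }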


Q:  Where can I find the schematic and layout symbols for 82574L?

A: OrCAD and Cadence symbols for the 82574L can be found on CDI (Intel’s Classified Design Information documentation, available to customers with NDAs). We don’t supply layout symbols for our parts since the footprint can be very board-process-specific. Since the 82574L is a 64-pin QFN, the designer probably already has an appropriate symbol in their library. If not, the mechanical specifications in the datasheet can be used to create one.


Q:  Why is Intel using different crystal loading capacitance values (33pF vs 27pF) between the Intel® 82578 GbE Network Connection and 82574L LAN chips?

A: The 82578 uses 33pF capacitors to achieve a Cload of 18pF, which ensures that the pullability (ppm) of the crystal is small. This loading capacitor value was found during crystal validation to be optimal for meeting the 30ppm specification for the crystal. The 82574L’s validation resulted in a value of 27pF. The specific board capacitance of a particular design can impact the results, so customers should test their crystal circuit to verify that the ppm requirements are met with the recommended capacitance values.
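As a quick sanity check on the 33pF figure, the standard load-capacitance formula shows how the two discrete capacitors in series plus board stray capacitance land near the 18pF target (the roughly 1.5pF stray value below is an assumed, typical figure, not a measured one):

    C_{\mathrm{load}} = \frac{C_1 C_2}{C_1 + C_2} + C_{\mathrm{stray}}
                      = \frac{33 \times 33}{33 + 33}\,\mathrm{pF} + C_{\mathrm{stray}}
                      = 16.5\,\mathrm{pF} + C_{\mathrm{stray}} \approx 18\,\mathrm{pF}
                        \quad\text{for}\quad C_{\mathrm{stray}} \approx 1.5\,\mathrm{pF}.

The same arithmetic with 27pF capacitors gives 13.5pF in series plus stray capacitance, which is why the right value ultimately depends on your board and should be confirmed by measurement.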


Q:  The LAN connector on 82574L has a 1.9V bias being applied to the magnetics, but this is not the case for 82578.  Shouldn't we be applying a voltage to the 82578 connector as well?

A: The 1000BASE-T interfaces on the 82578 and the 82574L have different architectures for the PHY drivers. The 82578 has a voltage-mode driver, while the 82574L has a current-mode driver. A current-mode driver requires a voltage source at the center tap of the magnetics, but a voltage-mode driver requires that the center tap be connected to ground. Here is an article on the difference for your reference.


Q:  How long can NC-SI traces be for 82574L?

A: Our simulations show that a two-drop NC-SI interface is functional, though not spec compliant, up to 19-20 inches depending on the number of vias. The interface is spec compliant up to about 7.5 or 8 inches.


Q:  How do I know if I designed in my 82574L correctly?

A: We can check your homework! Just submit your design to Intel via premier.intel.com, or via your parts source, for a one-time schematic review, and we’ll review the design (for FREE) and let you know what we think. We also do layout reviews. Did I mention it’s free?

Steve Worley of the System x Performance and Analysis Benchmarking group at IBM just released a paper that we here at the Intel Ethernet Virtualization team are pretty excited about.

 

This paper, entitled Effect of SR-IOV Support in Red Hat KVM on Network Performance in Virtualized Environments, discusses how Steve used the SPECvirt_sc2010 benchmark to test performance with and without SR-IOV.

 

They ran a variety of workloads; here is a quote from the document detailing the workloads:

 

The benchmark uses several workloads representing applications that are commonly consolidated into a virtualized environment. Scaling is achieved by running additional sets of virtual machines, called “tiles,” until overall throughput reaches a peak. Each tile consists of six different virtual machines:

 

• Infraserver – Serves file downloads directory for web workload, runs web backend simulator

• Webserver – Runs Web workload

• Mailserver – Runs Internet Message Access Protocol (IMAP) workload

• Appserver – Runs application server workload

• Dbserver – Runs database as backend to application server workload

• Idleserver – Runs a poll workload

 

Because each tile represents six VMs, a system running 10 tiles would consist of 60 VMs.

 

The article itself does not indicate which SR-IOV Ethernet controller was used for the test; however, the detailed configuration document that it links to does: the Intel® 82576 Gigabit Ethernet Controller was used for these tests.

 

Results

The results of the tests are very interesting. They show that SR-IOV definitely reduces CPU overhead, and that the reduced CPU utilization can in turn increase QoS.

 

The following chart is a graphical representation of the data published within the paper by IBM:

 

[Chart: SR-IOV Data Results]

Note that there is no data for 13 Tiles in non-SR-IOV mode – this is because the CPU utilization was so high that the test run could not be completed.  The details of each of the tests can be found here.

 

So What Does This Mean

Good question. It does not mean that the SR-IOV-enabled network path is a replacement for the emulated path for all workloads. In fact, if you dig into the details of the test, IBM seems to have picked specific workloads to use SR-IOV while leaving the others on the emulated path.


The current ecosystem for SR-IOV has its limitations; one of the major ones is that live migration is not currently supported (though we are working with the industry on this right now). For this reason alone, SR-IOV may not be suitable for all virtualized workloads.

 

However, as these test results show, there are definitely cases for some customers where the performance benefits gained by using SR-IOV outweigh its current limitations.

 

For more information, I suggest you carefully read the document by Steve Worley entitled Effect of SR-IOV Support in Red Hat KVM on Network Performance in Virtualized Environments, and take a close look at the SPECvirt_sc2010 Result information.

 

BTW… If you take a look at the other test results posted on the SPECvirt_sc2010 site, all of the systems use Intel® Ethernet Controllers for their primary benchmark network:

  • Intel® Ethernet Server Adapter X520, a dual-port 10GbE adapter with SR-IOV support
  • HP* NC365T 4-port Server Ethernet Adapter, which uses the Intel® 82580 Ethernet Controller
  • Intel® Gigabit ET Adapter, which uses the Intel® 82576 Ethernet Controller with SR-IOV

http://www.spec.org/virt_sc2010/results/specvirt_sc2010_perf.html

Curious about what SR-IOV is, and how it works?  We have a blog on that!

My partner-in-crime, Brian Johnson, and I have just finished a new White Paper detailing some best practices for using 10 Gigabit Ethernet with VMware* ESX/ESXi. This is the third paper in a series that began with Brian’s well-received (and many times re-branded) paper entitled Simplify VMware vSphere* 4 Networking with Intel® Ethernet 10 Gigabit Server Adapters, which I blogged about here in July. That paper detailed some best practices for improving efficiency and simplicity by moving to two 10 Gigabit Ethernet connections on a virtualized server rather than multiple 1 Gigabit ports.

 

 

We continued the series with another White Paper entitled Virtual Switches Demand Rethinking Connectivity for Servers, which discusses how the connections from the VMs on your virtualized server to the virtual switch within the VMM can now be thought of as uplinks, with the virtual switch in turn having physical uplinks to the top-of-rack switch; this paper is located here.

 

 

This latest paper updates some of the best practices discussed in the first paper to take advantage of the new Quality of Service capabilities in the Virtual Distributed Switch in VMware vSphere 4.1. We discuss how and why those practices have been updated and provide some additional best practices that can be applied to a virtualized environment.

 

 

We hope you find the latest paper useful and encourage feedback.  The white paper is located here.

 

dougb

Using Intel® Ethernet FCoE?

Posted by dougb Dec 3, 2010

If you are deploying FCoE, drop us a line to let us know and you might be profiled in an Intel® marketing campaign. Let us know early enough in your plans and we might help design your deployment. We are looking for small and medium businesses that are okay with equipment that is not FC-vendor qualified, or not soon to be qualified. Some vendors won’t qualify our product in a timely fashion because of its revolutionary effect on costs in the market.

To qualify, you must have pre-existing FC infrastructure and a non-disclosure agreement with Intel. We reserve the right to pick and choose, and your mileage may vary. Send me a private e-mail or post a comment, and hopefully we’ll be talking about your business here on the blog!
