lanes. Now we have a total of 4 clocks per bonded group of 12 lanes, or 8 total clocks for the 24 10 Gbps lanes.

At this point, we have a total of 16 clock resources needed for the SerDes (8 for the 5 Gbps lanes and 8 for the 10 Gbps lanes). Now we need to place the SerDes lanes.
Since the chip allows bonded groups of lanes to be placed on lanes 0-7, 8-19, and 20-31, we can easily see that our groups of 12 bonded lanes will not fit on lanes 0-7. We must place our two groups of 12 bonded lanes on lanes 8-19 and 20-31. This leaves lanes 0-7 open for the 8 independent 5 Gbps lanes.
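As an illustration only (this helper is ours, not part of the Achronix tools; the region sizes are taken from the placement rules above), a short Python sketch can check which regions can hold a bonded group of a given size:

    # Allowed bonded-group placement regions on the North side and their sizes.
    REGIONS = {"lanes 0-7": 8, "lanes 8-19": 12, "lanes 20-31": 12}

    def fits(group_size):
        """Return the names of the regions large enough for the bonded group."""
        return [name for name, size in REGIONS.items() if group_size <= size]

    print(fits(12))  # ['lanes 8-19', 'lanes 20-31'] -- too big for lanes 0-7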
Now let’s see where we stand with the clock resources. The 8 independent 5 Gbps lanes (using EFIFO) placed on lanes 0-7 bring 8 clocks into the NorthWest clock region. The bonded group of 12 10 Gbps lanes placed on lanes 8-19 brings an additional 4 clocks into the NorthWest region. Let’s say the master clock lane is assigned to lane 15 on the chip. Since lanes 15-19 distribute clocks to both East and West, we now also have 4 clocks entering the NorthEast region. The bonded group of 12 10 Gbps lanes placed on lanes 20-31 brings an additional 4 clocks into the NorthEast region.
So, for all 32 lanes, we have a total of 12 SerDes clocks in the NorthWest region and 8 SerDes clocks in the NorthEast region. This leaves 4 clock resources available in the NorthWest region and 8 clock resources available in the NorthEast region (for system clocks, the SBUS clock, etc.).
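The tally can be reproduced with a short Python sketch (illustrative only; the figure of 16 clock resources per region is inferred from the used-plus-available counts in this example):

    CLOCKS_PER_REGION = 16  # inferred: 12 used + 4 free (NW), 8 used + 8 free (NE)

    serdes_clocks = {
        # 8 independent 5 Gbps lanes + 4 clocks from the bonded group on lanes 8-19
        "NorthWest": 8 + 4,
        # East copies of the lanes 8-19 clocks + bonded group on lanes 20-31
        "NorthEast": 4 + 4,
    }

    for region, used in serdes_clocks.items():
        print(f"{region}: {used} SerDes clocks used, {CLOCKS_PER_REGION - used} left")
    # NorthWest: 12 SerDes clocks used, 4 left
    # NorthEast: 8 SerDes clocks used, 8 left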
Now, if we wanted to add more SerDes lanes on the South side of the chip, we would go through the same type of exercise. Note that using the SerDes lanes on the North side of the chip does not consume clock resources in the South clock regions (those remain available to the South SerDes lanes).

Wide Bus

At the interface between the SerDes and the FPGA fabric, incoming RX data is parallelized onto a bus of user-selected width before being provided to the FPGA fabric. Similarly, parallel data of a user-selected width from the FPGA fabric is serialized in the SerDes before being transmitted on the outgoing TX lanes.
This interface allows for parallelization widths of 8, 10, 16, or 20 bits, as defined by the user. For example, a full duplex link operating at 2.5 Gbps with a data width of 10 would require the FPGA fabric to operate at 2.5*1000/10 = 250 MHz.
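The same arithmetic, as a minimal Python sketch (the function name is ours, for illustration):

    def fabric_mhz(rate_gbps, width):
        """Required FPGA fabric frequency in MHz for a given line rate and bus width."""
        return rate_gbps * 1000 / width

    print(fabric_mhz(2.5, 10))  # 250.0, matching the example above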
As you can imagine, even at the widest data width of 20, high link data rate operation would result in FPGA fabric timing requirements that would be difficult to achieve.
To accommodate this, and to ensure that timing can be closed for the FPGA fabric in a reasonable manner, the “Generic” and “Lanelinx” standards in the SerDes macro automatically introduce a “Wide Bus” interface. This interface is enabled for all data rates beyond 6.25 Gbps and essentially doubles the widths of the parallel transmit/receive data bus (and supporting buses) at the SerDes-FPGA fabric interface, while allowing the FPGA fabric to operate at half of the previously defined frequency. Some additional latency is also introduced.
For example, a full duplex link operating at 8.0 Gbps with a data width of 20 would require the FPGA fabric to operate at 8.0*1000/40 = 200 MHz. The datain and dataout buses would both be 40 bits wide.
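Extending the earlier sketch with the wide bus rule (the 6.25 Gbps threshold comes from the text above; the helper itself is ours):

    WIDE_BUS_THRESHOLD_GBPS = 6.25

    def fabric_mhz_wide(rate_gbps, width):
        """Fabric frequency in MHz; above the threshold the bus width doubles."""
        if rate_gbps > WIDE_BUS_THRESHOLD_GBPS:
            width *= 2  # wide bus doubles the parallel data bus width
        return rate_gbps * 1000 / width

    print(fabric_mhz_wide(8.0, 20))  # 200.0 on a 40-bit-wide bus
    print(fabric_mhz_wide(2.5, 10))  # 250.0, unchanged below the threshold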
The “10G Ethernet”, “Interlaken”, and “PCI-Express” standards also provide support for wide bus interfaces. Please refer to the respective user guides for support details and other relevant information.
