Early Access – Xilinx LogiCORE PLB PCI Full Bridge User Manual

PLB PCI Full Bridge (v1.00a)


www.xilinx.com

DS508 March 21, 2006
Product Specification


Furthermore, it is the responsibility of the PCI initiator to read data from non-prefetchable
PLB slaves correctly. For example, it must perform single-transaction reads of non-prefetchable PLB
slaves to avoid destructive read operations on a PLB slave. However, some protection is provided in
hardware, as described in a later subsection.

As shown in Table 16, memory read commands (i.e., not memory read multiple) are translated to single
PLB transactions. A remote PCI initiator can request more than one data transfer with the memory read
command, but the PLB PCI Bridge handles each data request as a single PLB transaction and signals a
disconnect with data on the PCI bus after the first data phase. This behavior is due to a characteristic of
the v3.0 core, which does not allow throttling data except as a wait before the first data phase
completes. Data throughput is therefore low when memory read commands are used.

Data throughput can be very high with memory read multiple transactions. Memory read multiple
commands are translated to PLB burst read transactions of a length defined by the PLB PCI Bridge. The
bridge attempts to fill the IPIF2PCI FIFO; unless the remote PLB slave terminates the transaction, the
bridge fills the FIFO with one burst prefetch read. The prefetch read does not read beyond the
high address defined by the PCI BAR length parameter. After the remote PCI initiator terminates the
read transaction, the IPIF2PCI_FIFO is flushed of prefetched data that has not been read by the remote
PCI initiator.
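The prefetch boundary can be modeled as a small address calculation. The following Python sketch uses assumed names (`prefetch_end`, `fifo_depth_bytes`); the actual core implements this in hardware, so the model is illustrative only.

```python
def prefetch_end(start_addr, bar_base, bar_length, fifo_depth_bytes):
    """Exclusive end address of a burst prefetch read (illustrative model).

    The bridge tries to fill the IPIF2PCI FIFO but never prefetches
    past the high address implied by the PCI BAR length parameter.
    """
    bar_high = bar_base + bar_length              # first byte past the BAR
    fifo_limit = start_addr + fifo_depth_bytes    # FIFO-full limit
    return min(fifo_limit, bar_high)

# A 4 KB BAR at 0x1000_0000 with an assumed 128-byte FIFO: a read
# starting 64 bytes before the end of the BAR is clamped at the BAR
# boundary rather than prefetching past it.
assert prefetch_end(0x1000_0FC0, 0x1000_0000, 0x1000, 128) == 0x1000_1000
# Far from the boundary, the FIFO depth limits the prefetch instead.
assert prefetch_end(0x1000_0000, 0x1000_0000, 0x1000, 128) == 0x1000_0080
```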

When read data is received from a remote PLB slave, the data is loaded into the IPIF2PCI FIFO and
synchronized across the PLB/PCI clock domain boundary, which takes up to two PCI clock cycles. The
PLB slave can throttle the data being read by the remote PCI initiator. If the FIFO empties
(i.e., the PCI initiator is accepting data faster than the PLB slave is providing it), the PLB PCI Bridge
must disconnect with data, because the v3.0 core does not allow throttling after the first data phase.

Throttling by the PLB slave, combined with the v3.0 core's restriction that data can be throttled only as
a wait before the first data phase completes, can cause low data throughput. The impact on system
performance can be minimized by tuning the parameter that sets the FIFO level at which the first data
is transferred on the PCI bus during a memory read multiple operation. This parameter is
C_TRIG_PCI_XFER_OCC_LEVEL; setting it throttles the first data phase until the FIFO has buffered the
number of words set by the parameter. This ensures that the transfer delivers at least this number of
words even if the remote PLB slave throttles on the PLB bus.
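The gating effect of C_TRIG_PCI_XFER_OCC_LEVEL can be sketched as a simple occupancy check. This is illustrative Python, not RTL, and the function name is an assumption.

```python
def first_data_phase_may_complete(fifo_occupancy_words, trig_level):
    """Model of C_TRIG_PCI_XFER_OCC_LEVEL (sketch, not RTL).

    The bridge inserts wait states on the first PCI data phase until
    the IPIF2PCI FIFO holds at least `trig_level` words, guaranteeing
    at least that many words stream out before any disconnect.
    """
    return fifo_occupancy_words >= trig_level

# With an assumed trigger level of 8 words, the first data phase is
# held in wait states until 8 words have been buffered.
assert not first_data_phase_may_complete(3, 8)
assert first_data_phase_may_complete(8, 8)
```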

Another parameter that can increase data throughput is the FIFO occupancy level that triggers the
bridge to prefetch more data from the remote PLB slave (C_TRIG_IPIF_READ_OCC_LEVEL). Setting
this parameter properly helps ensure that the FIFO does not empty while the remote PCI initiator is
requesting data.
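The trade-off can be illustrated with a toy drain/refill model. All numbers (FIFO depth, PLB burst latency) are assumptions chosen for the example; the real core's timing depends on the system.

```python
def simulate_read_multiple(trig_refill_level, fifo_depth=16,
                           plb_latency=4, cycles=64):
    """Toy model (not RTL) of C_TRIG_IPIF_READ_OCC_LEVEL.

    The PCI initiator drains one word per cycle; when occupancy falls
    to the trigger level, a new PLB burst prefetch is requested and
    arrives `plb_latency` cycles later, refilling the FIFO. Returns
    True if the FIFO never empties (no forced disconnect with data).
    """
    occ, refill_eta = fifo_depth, None
    for _ in range(cycles):
        if occ == 0:
            return False                 # underrun: bridge must disconnect
        occ -= 1                         # PCI initiator takes one word
        if refill_eta is None and occ <= trig_refill_level:
            refill_eta = plb_latency     # start a PLB burst prefetch
        if refill_eta is not None:
            refill_eta -= 1
            if refill_eta == 0:
                occ, refill_eta = fifo_depth, None  # burst refills FIFO
    return True

# A trigger level that covers the assumed PLB latency avoids underrun;
# too low a level lets the FIFO drain before the next burst arrives.
assert simulate_read_multiple(trig_refill_level=4)
assert not simulate_read_multiple(trig_refill_level=2)
```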

In a PCI initiator read multiple command of a PLB slave, the Master IP module attempts to keep the
IPIF2PCI_FIFO full of data read from a PLB slave device for subsequent transfer to the PCI initiator. If
the word address presented on the PCI bus is mid-double-word aligned (i.e., 0x4 or 0xC), a single word
is read from the PLB slave before the burst prefetch read is started to fill the FIFO. Data
remaining in the FIFO when the PCI initiator terminates the memory read multiple command is
discarded. Prefetch is not performed on memory read commands (i.e., not memory read multiple).

The PLB PCI Bridge operates the same whether the PLB clock is faster or slower than the PCI clock.
Single data requests on the PCI bus are translated to the PLB bus in the same way, with the only
difference being the delays due to the differing clock periods. Because the v3.0 core cannot throttle
data flow, the PCI data flow for read multiple commands differs considerably depending on the relative
clock speeds. If the PLB clock is faster, the data flow is limited by the PCI bus and is, in most cases, one
continuous read multiple.
