
8.5.2.1 Throughput Optimization

According to the data sheet, the mono interrupt-driven mode with CSTART starting the conversion can be described as follows. After the conversion is done (INT set low), the DSP:

- selects the converter,
- brings the RD signal low,
- waits until the data are valid,
- reads the data from the ADC, and
- resets RD to a high signal level.

Now CSTART can be pulled low for at least 100 ns and then set high again to start a new conversion.
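
The sequence above can be illustrated with a short C sketch. The register addresses, bit assignments, and delay length below are hypothetical placeholders (the actual I/O mapping depends on the DSP board used with the TLV1562); the sketch only shows the ordering of the signal transitions.

    #include <stdint.h>

    /* Hypothetical memory-mapped addresses -- the real mapping depends
     * on the DSP board driving the TLV1562. */
    #define ADC_DATA   (*(volatile uint16_t *)0x8000u)  /* ADC parallel data port */
    #define CTRL_PORT  (*(volatile uint16_t *)0x8002u)  /* control-signal latch   */

    #define CS_BIT     (1u << 0)   /* chip select, active low      */
    #define RD_BIT     (1u << 1)   /* read strobe, active low      */
    #define CSTART_BIT (1u << 2)   /* conversion start, active low */

    static volatile uint16_t sample;

    /* Crude delay placeholder; on real hardware this would be a few
     * NOPs tuned to the DSP clock. */
    static void short_delay(void)
    {
        volatile int i;
        for (i = 0; i < 2; ++i) { /* >= data access time / 100 ns */ }
    }

    /* ISR for INT going low: the standard data-sheet sequence. */
    void adc_isr(void)
    {
        CTRL_PORT &= ~CS_BIT;      /* select the converter                 */
        CTRL_PORT &= ~RD_BIT;      /* bring RD low                         */
        short_delay();             /* wait until the output data are valid */
        sample = ADC_DATA;         /* read the conversion result           */
        CTRL_PORT |= RD_BIT;       /* reset RD to a high level             */

        CTRL_PORT &= ~CSTART_BIT;  /* CSTART low: sample the input...      */
        short_delay();             /* ...for at least 100 ns...            */
        CTRL_PORT |= CSTART_BIT;   /* ...then high to start a conversion   */
    }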

Tests showed that it does not matter at what point in time the CSTART signal is pulled low to start the sampling.

Changing the signal flow slightly by pulling CSTART low before the ADC output data are read on the data bus saves at least 100 ns of CSTART low time after the read instruction (an additional advantage: the longer the analog input is sampled, the more precisely the sampling capacitor is charged, assuming that the noise caused by RD is negligible). In this algorithm, CSTART can be taken high right after the data has been read by the DSP, without any wait instruction. Therefore, the maximum throughput is gained because the 100-ns sampling time is saved. Test results showed a maximum throughput of more than 1.2 MSPS (a gain in throughput of approximately 20%) with the internal ADC clock when using this strategy (see Figure 8).
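
As a sketch of the reordered flow, the following variant (using the same hypothetical register names and delay helper as the sketch above) pulls CSTART low before RD, so the 100-ns minimum low time overlaps the read access, and raises CSTART immediately after the read:

    /* Optimized ISR: CSTART goes low before the data are read, so its
     * 100-ns minimum low time overlaps the read access; it can be taken
     * high right after the read, with no extra wait instruction. */
    void adc_isr_fast(void)
    {
        CTRL_PORT &= ~CSTART_BIT;  /* begin sampling the analog input early */
        CTRL_PORT &= ~RD_BIT;      /* bring RD low (CS assumed tied low for
                                      a single TLV1562, as noted below)     */
        short_delay();             /* data access time; also satisfies the
                                      100-ns CSTART low requirement         */
        sample = ADC_DATA;         /* read the conversion result            */
        CTRL_PORT |= RD_BIT;       /* reset RD high                         */
        CTRL_PORT |= CSTART_BIT;   /* start the next conversion at once     */
    }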

A concern is that small spikes on the CSTART signal during conversion, occurring while the ADC data is read out onto the data bus, could degrade the analog input signal accuracy and therefore the ADC's quantized output data. Measurements would help verify the applicability of this throughput optimization.

This optimization works only with a single TLV1562 (not with multiple devices) because CS is not used.
