Posted by Ilya Zakharevich on 01/22/08 00:25
[A complimentary Cc of this posting was sent to
<firstname.lastname@example.org>], who wrote in article <email@example.com>:
> >> The USB bus isn't interrupt-driven -- it's polled (as I wrote),
> >Do not see any significant difference. The poller would set up a
> >wake-up call, which would generate an interrupt. [And, BTW, why my
> >PCI hardware view shows that USB controllers use interrupts?]
> When the bus is busy with device 1, device 2 has to wait until device 1
> is done.
Irrelevant. Assume there is no device 1. Why, then, is the maximum
throughput still so limited?
> In addition, device 2 cannot use the bus until polled by the
> host bus controller, which can be delayed for a number of reasons.
> There is no instantaneous interrupt response.
There is if there is no higher-priority interrupt. And why would there
be one?
> >Could you explain how latency is related to polling?
> Latency occurs because the device has to wait to be polled. It cannot
> demand the bus instantly.
The same happens with any communication method: if the other side does
not cooperate, a streaming device will choke - polling or no polling.
> >I do not see how this could be a problem - just do not use more than 1
> >USB device...
> That helps, but there can still be latency due to issues in the host
> controller (which handles multiple ports)
Assume the controller is dedicated to a single device.
> and software stack (which handles all ports).
Again, how is this relevant? The stack expects some data to be ready
on port 7 in 3 msec. It sets a timer and polls port 7. If nothing is
there, and a suitable delay has passed, it can poll the remaining 11
ports.
(I assume that a USB controller can bus-master a multi-packet
transfer; can it?)
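The schedule described above can be sketched in a few lines. This is a
hypothetical illustration only - the port numbers (port 7, 12 ports
total), the 3 msec window, and the `poll` function are taken from or
invented for the discussion, not drawn from any real USB stack:

```python
import time

EXPECTED_PORT = 7
WINDOW_S = 0.003  # the 3 msec expectation window from the text


def poll(port):
    """Stand-in for a host-controller poll; returns data or None."""
    return None  # no device responds in this sketch


def service_ports(ports):
    """Poll the expected port first; spend any leftover slack on the rest."""
    deadline = time.monotonic() + WINDOW_S
    data = poll(EXPECTED_PORT)
    if data is not None:
        return EXPECTED_PORT, data
    # Nothing there yet: use the remaining delay to poll the other ports.
    for port in ports:
        if port == EXPECTED_PORT:
            continue
        if time.monotonic() >= deadline:
            break
        poll(port)
    return None


result = service_ports(range(12))  # 12 ports, as in the text
```

The point of the sketch is only that polling the other ports costs
nothing as long as it fits inside the slack before the expected data
is due.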
IMO, a *real* answer would describe a minimal (repeatable) pattern of
USB activity needed to transfer a 512-byte packet from a USB device,
calculate the required timing, and show why this should take more than
16 usec (to support the observed max of 33 MB/sec). *Somebody* must
have done it already...
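The 16 usec figure follows from simple arithmetic; a quick check
(assuming MB here means 10^6 bytes, which the post does not state):

```python
# At the observed peak of 33 MB/s, one 512-byte packet corresponds to
# roughly 16 microseconds of bus time.
PACKET_BYTES = 512
THROUGHPUT_BPS = 33e6  # 33 MB/s, taking MB = 10^6 bytes

packet_time_us = PACKET_BYTES / THROUGHPUT_BPS * 1e6
print(f"{packet_time_us:.1f} usec per 512-byte packet")  # 15.5 usec
```

So any repeatable per-packet pattern taking longer than about 15-16
usec would explain the observed ceiling.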