Posted by Arny Krueger on 04/13/07 18:40
<mmaker@my-deja.com> wrote in message
news:1176478201.621892.70030@w1g2000hsg.googlegroups.com
> On Apr 13, 4:07 pm, "nappy" <s...@spam.com> wrote:
>> <mma...@my-deja.com> wrote in message
>>> But if the CPU is having to access RAM, it's already
>>> screwed because the latency of memory accesses is large
>>> no matter how fast they run. WHAT?
>
> Is it really that difficult to understand? If your data
> isn't in the cache, you have to wait for the memory
> access to get out onto the bus and talk to the RAM...
> saving a few clock cycles with faster RAM doesn't make
> much difference to an operation that takes tens of clock
> cycles anyway.
>
> That's why AMD got a speedup by putting the memory
> controller on the CPU, and why Intel keep adding larger
> and larger caches.
The benefits of a read cache are largely eliminated when data is accessed
only once in a great while.
Usually, about 2/3 of all references to backing storage are reads.
Basic RAM performance can still be very important, particularly when a very
large working set is being referenced, as with many kinds of bitmap
graphics, video, and database workloads.