The invention provides a method of operating a cache memory so that a cache
miss does not stall the processing of subsequent requests. Instead of fetching
data immediately upon a cache miss, the present invention continues with
subsequent cache accesses; the missed data is fetched into the cache in a
manner decoupled from cache access.
During operation, for each request in a sequence of data requests, it is
determined whether the requested data is present in the cache memory. If the
data is not found in the cache, the next request in the sequence is processed
without first retrieving the data for the earlier, missed request. A miss list
is generated containing each request in the sequence whose data is not found
in the cache. The data associated with the requests in the miss list is
obtained from DRAM and used to satisfy those requests.
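As a rough illustration only, the following C sketch shows one way such a
non-blocking lookup could be organized; the names (process_requests,
cache_lookup, fetch_from_dram, miss_list_t) and the toy direct-mapped cache
are assumptions made for the example, not part of the claimed method. Each
request is probed against the cache, hits are served at once, misses are
appended to a miss list without waiting, and the missed data is later
obtained from DRAM to satisfy the queued requests.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Toy direct-mapped cache used only for illustration: 8 lines,
     * each holding a tag and a valid bit. All names are hypothetical. */
    #define NUM_LINES  8
    #define MAX_MISSES 64

    static struct { unsigned long tag; bool valid; } cache[NUM_LINES];

    typedef struct {
        unsigned long address;    /* memory address requested            */
        bool          satisfied;  /* set once data is returned to caller */
    } request_t;

    typedef struct {
        request_t *entries[MAX_MISSES];  /* requests not found in cache */
        size_t     count;
    } miss_list_t;

    /* Probe the cache: true on a hit, false on a miss. */
    static bool cache_lookup(unsigned long address)
    {
        size_t idx = address % NUM_LINES;
        return cache[idx].valid && cache[idx].tag == address / NUM_LINES;
    }

    /* Stand-in for the DRAM fetch: fill the missed lines and mark the
     * queued requests as satisfied. */
    static void fetch_from_dram(miss_list_t *misses)
    {
        for (size_t i = 0; i < misses->count; i++) {
            unsigned long addr = misses->entries[i]->address;
            size_t idx = addr % NUM_LINES;
            cache[idx].tag   = addr / NUM_LINES;
            cache[idx].valid = true;
            misses->entries[i]->satisfied = true;
        }
    }

    /* Process a sequence of requests without stalling on a miss:
     * hits are served immediately, misses are recorded on the miss
     * list, and the next request is examined right away. */
    static void process_requests(request_t *reqs, size_t n)
    {
        miss_list_t misses = { .count = 0 };

        for (size_t i = 0; i < n; i++) {
            if (cache_lookup(reqs[i].address))
                reqs[i].satisfied = true;                   /* hit: serve now */
            else if (misses.count < MAX_MISSES)
                misses.entries[misses.count++] = &reqs[i];  /* record miss    */
            /* No wait here: the loop continues with the next request. */
        }

        /* Decoupled from the lookups above, the missed data is obtained
         * from DRAM and used to satisfy the queued requests. */
        fetch_from_dram(&misses);
    }

    int main(void)
    {
        request_t reqs[] = { {3, false}, {11, false}, {3, false}, {20, false} };
        process_requests(reqs, sizeof reqs / sizeof reqs[0]);
        for (size_t i = 0; i < 4; i++)
            printf("request %zu (addr %lu): %s\n", i, reqs[i].address,
                   reqs[i].satisfied ? "satisfied" : "pending");
        return 0;
    }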
Some cache lines may have one or more pending hits to data associated with
the cache line; those requests are kept in a pending hits list and processed
in order as required. A cache line may also have one or more pending misses
to data associated with the cache line; those are kept in a pending misses
list. A flag or indicator is set for a cache line when there are misses
associated with that cache line.
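The per-cache-line bookkeeping could be sketched along the following lines;
the structure and field names (cache_line_t, pending_hits, pending_misses,
miss_pending) are again hypothetical choices for illustration rather than the
claimed implementation. Each cache line carries an ordered pending hits list,
a pending misses list, and a flag that remains set while misses are
associated with the line.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MAX_PENDING 16

    /* Hypothetical request handle; in practice this would identify the
     * requester and the offset within the cache line. */
    typedef struct {
        unsigned long address;
    } pending_request_t;

    /* Per-cache-line bookkeeping: an ordered list of pending hits, an
     * ordered list of pending misses, and a flag that is set while
     * misses are outstanding for the line. */
    typedef struct {
        unsigned long     tag;
        bool              valid;
        bool              miss_pending;                /* flag/indicator      */
        pending_request_t pending_hits[MAX_PENDING];   /* processed in order  */
        size_t            num_pending_hits;
        pending_request_t pending_misses[MAX_PENDING]; /* processed in order  */
        size_t            num_pending_misses;
    } cache_line_t;

    /* Record a miss against the line and set its flag. */
    static void add_pending_miss(cache_line_t *line, pending_request_t req)
    {
        if (line->num_pending_misses < MAX_PENDING) {
            line->pending_misses[line->num_pending_misses++] = req;
            line->miss_pending = true;
        }
    }

    /* Record a hit that must wait until the line's data arrives. */
    static void add_pending_hit(cache_line_t *line, pending_request_t req)
    {
        if (line->num_pending_hits < MAX_PENDING)
            line->pending_hits[line->num_pending_hits++] = req;
    }

    /* When the data for the line returns from DRAM, satisfy the pending
     * misses and pending hits in order, then clear the flag. */
    static void line_fill_complete(cache_line_t *line,
                                   void (*satisfy)(pending_request_t))
    {
        for (size_t i = 0; i < line->num_pending_misses; i++)
            satisfy(line->pending_misses[i]);
        for (size_t i = 0; i < line->num_pending_hits; i++)
            satisfy(line->pending_hits[i]);
        line->num_pending_misses = 0;
        line->num_pending_hits   = 0;
        line->miss_pending       = false;
        line->valid              = true;
    }

    static void print_satisfied(pending_request_t req)
    {
        printf("satisfied request for address %lu\n", req.address);
    }

    int main(void)
    {
        cache_line_t line = {0};
        add_pending_miss(&line, (pending_request_t){ .address = 100 });
        add_pending_hit(&line, (pending_request_t){ .address = 104 });
        line_fill_complete(&line, print_satisfied);  /* flag cleared here */
        return 0;
    }

This bookkeeping is similar in spirit to the miss-status tracking used in
non-blocking caches, though the specific lists and flag described above are
what the present method recites.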