The proposed system and associated algorithm, when implemented, reduce
processor cache miss rates and improve overall cache efficiency in
multi-core environments in which multiple CPUs share a single cache
structure (as one example). Cache efficiency is improved by tracking
per-core loading patterns, such as each core's miss rate and a minimum
cache line load threshold. Combining this information with an existing
cache eviction method such as LRU determines which cache line, from
which CPU, is evicted from the shared cache when a capacity conflict
arises.
This methodology dynamically allocates shared cache entries to each
core within the socket according to that core's frequency of shared
cache usage. A minimal software sketch of such a core-aware eviction
decision is given below.
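
For illustration only, the following C sketch models one plausible form
of the eviction decision described above: per-core miss-rate tracking
and a minimum-lines-per-core threshold are combined with LRU ordering to
pick a victim way in a shared cache set. The names (NUM_CORES, SET_WAYS,
MIN_LINES_PER_CORE, choose_victim), the data layout, and the specific
weighting (preferring to evict from the core that uses the shared cache
least, while never dropping a core below its threshold) are assumptions
made for this sketch, not details taken from the description itself.

    /*
     * Sketch of a core-aware victim selection for one shared cache set.
     * All constants and policy details here are illustrative assumptions.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define NUM_CORES          4   /* cores sharing the cache (assumed) */
    #define SET_WAYS           8   /* associativity of one set (assumed) */
    #define MIN_LINES_PER_CORE 1   /* minimum cache line load threshold (assumed) */

    struct line {
        int      owner_core;   /* core that loaded this line */
        uint64_t last_access;  /* timestamp used for LRU ordering */
        int      valid;
    };

    struct core_stats {
        uint64_t misses;       /* observed misses for this core */
        uint64_t accesses;     /* observed accesses for this core */
    };

    /* Count how many valid lines in the set are owned by `core`. */
    static int lines_owned(const struct line set[SET_WAYS], int core)
    {
        int n = 0;
        for (int w = 0; w < SET_WAYS; w++)
            if (set[w].valid && set[w].owner_core == core)
                n++;
        return n;
    }

    /*
     * On a capacity conflict, pick a victim way: among cores holding more
     * than the minimum threshold of lines, prefer the core with the lowest
     * miss rate (the lightest user of the shared cache in this sketch),
     * and break ties between equally rated lines by LRU age.
     */
    static int choose_victim(const struct line set[SET_WAYS],
                             const struct core_stats stats[NUM_CORES])
    {
        int victim = -1;
        double victim_rate = 2.0;          /* real miss rates are <= 1.0 */
        uint64_t victim_age = UINT64_MAX;

        for (int w = 0; w < SET_WAYS; w++) {
            if (!set[w].valid)
                return w;                  /* free way: no eviction needed */

            int core = set[w].owner_core;
            if (lines_owned(set, core) <= MIN_LINES_PER_CORE)
                continue;                  /* protect cores at the threshold */

            double rate = stats[core].accesses
                        ? (double)stats[core].misses / stats[core].accesses
                        : 0.0;

            if (rate < victim_rate ||
                (rate == victim_rate && set[w].last_access < victim_age)) {
                victim = w;
                victim_rate = rate;
                victim_age = set[w].last_access;
            }
        }

        /* Fall back to plain LRU if every core sits at its minimum. */
        if (victim < 0)
            for (int w = 0; w < SET_WAYS; w++)
                if (victim < 0 || set[w].last_access < set[victim].last_access)
                    victim = w;
        return victim;
    }

    int main(void)
    {
        struct line set[SET_WAYS] = {
            {0, 10, 1}, {0, 40, 1}, {1, 35, 1}, {1, 12, 1},
            {2, 50, 1}, {2, 44, 1}, {3, 20, 1}, {3, 48, 1},
        };
        struct core_stats stats[NUM_CORES] = {
            {90, 100}, {10, 100}, {50, 100}, {5, 100},
        };
        printf("victim way: %d\n", choose_victim(set, stats));
        return 0;
    }

In this example, core 3 has the lowest miss rate and holds more than the
minimum number of lines, so its least recently used line (way 6) is
selected for eviction. Whether a low or high miss rate should mark a
core's lines for preferential eviction is a policy choice the text
leaves open; the sketch simply fixes one interpretation to show how the
threshold, miss-rate tracking, and LRU ordering can interact.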