Access latency and the required memory bandwidth can be reduced by caching the data in multiple processors. Caches serve to increase bandwidth and reduce the latency of access, and are useful for both private data and shared data. However, when we cache data in multiple processors, we face the problem of cache coherence and consistency. We shall elaborate on this in detail in this module and the next module.

Multiprocessor Cache Coherence: Symmetric shared-memory machines usually support the caching of both shared and private data. Private data are used by a single processor, while shared data are used by multiple processors, essentially providing communication among the processors through reads and writes of the shared data. When a private data item is cached, its location is migrated to the cache, reducing the average access time as well as the memory bandwidth required. Since no other processor uses the data, the program behavior is identical to that in a uniprocessor. Similarly, when shared data are cached, the shared value may be replicated in multiple caches. In addition to the reduction in access latency and required memory bandwidth, this replication also reduces the contention that may exist for shared data items being read by multiple processors simultaneously.

Caching of shared data, however, introduces the cache coherence problem. This is because the shared data can have different values in different caches, and this has to be handled appropriately. Suppose, for example, that both processors A and B read location X as 1. Later on, when processor A modifies it to the value 0, processor B still has it as the value 1. Thus, two different processors can have two different values for the same location. This difficulty is generally referred to as the cache coherence problem. Informally, we could say that a memory system is coherent if any read of a data item returns the most recently written value of that data item.
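To make the stale-value scenario concrete, here is a minimal Python sketch (all names hypothetical, not from any real simulator) that models two processors, each with a private cache over a shared memory, with no coherence protocol at all. Because a write by one processor never invalidates or updates the copies held by the others, processor B keeps reading the stale value of X after A has written it.

```python
# Minimal model of incoherent caching: a shared backing store plus
# one private cache per processor. No invalidation or update
# protocol is modeled, so a write by one processor leaves stale
# copies in the other caches.

class Processor:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory   # shared backing store (dict: address -> value)
        self.cache = {}        # private cache (dict: address -> value)

    def read(self, addr):
        # Cache hit returns the (possibly stale) cached copy;
        # a miss fills the cache from shared memory.
        if addr not in self.cache:
            self.cache[addr] = self.memory[addr]
        return self.cache[addr]

    def write(self, addr, value):
        # Write-through: update own cache and shared memory, but
        # no other processor's cache is invalidated or updated.
        self.cache[addr] = value
        self.memory[addr] = value

memory = {"X": 1}
A = Processor("A", memory)
B = Processor("B", memory)

# Both processors read X and cache the value 1.
assert A.read("X") == 1
assert B.read("X") == 1

A.write("X", 0)      # A updates X to 0 in its cache and in memory
print(B.read("X"))   # prints 1: B still sees its stale cached copy
```

A coherence protocol (snooping or directory-based, discussed later in these modules) would fix this by invalidating or updating B's cached copy when A writes, so that B's next read returns the most recently written value.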