An Efficient Method of Eliminating Inclusion Overhead in Snoop-Based CC-NUMA Systems
Abstract
In a Cache Coherent Non-Uniform Memory Access (CC-NUMA) system, memory transactions can be classified into two types: inter-node transactions and intra-node transactions. Because the latency of an inter-node transaction is usually hundreds of times larger than that of an intra-node transaction, it is important to reduce inter-node transaction latency. Although the remote cache in a CC-NUMA system improves inter-node transaction latency by caching remote memory lines, the remote and processor caches of snoop-based CC-NUMA systems must maintain the multi-level cache inclusion property to simplify snooping. The inclusion property degrades cache performance for the following reasons. First, every remote memory line held in a processor cache must also be kept in the remote cache of the same node. Second, a line replacement in the remote cache evicts the line with the same address from the processor caches, overriding the processor caches' own replacement policy. In this paper, we propose the Access-list, which makes the inclusion property unnecessary, and evaluate the performance of the proposed system by program-driven simulation. The simulation results show that cache miss rates are reduced while snoop-filtering efficiency remains comparable to that of a system with the inclusion property. The performance of the proposed system improves by up to 1.28 times.
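The inclusion overhead described above can be illustrated with a toy simulation. A processor-cache hit is invisible to the remote cache, so the remote cache's replacement state goes stale; when the remote cache then evicts a line, inclusion forces the same line out of the processor cache even if it is hot there. This is a minimal sketch under assumed LRU policies and illustrative cache sizes; none of the class names or parameters come from the paper.

```python
from collections import OrderedDict

class Node:
    """Toy model of the multi-level inclusion property in one CC-NUMA node.

    The processor cache absorbs hits, so the remote cache never sees them
    and its LRU ordering goes stale. On a remote-cache eviction, inclusion
    requires a back-invalidation of the same address in the processor
    cache. Sizes and the LRU policy are illustrative assumptions only.
    """

    def __init__(self, proc_capacity, remote_capacity):
        self.proc = OrderedDict()     # processor cache, LRU order
        self.remote = OrderedDict()   # remote cache, LRU order
        self.proc_capacity = proc_capacity
        self.remote_capacity = remote_capacity
        self.back_invalidations = 0

    def access(self, line):
        if line in self.proc:             # processor-cache hit:
            self.proc.move_to_end(line)   # invisible to the remote cache
            return
        # Processor-cache miss: the line is fetched through the remote cache.
        self.remote.pop(line, None)
        self.remote[line] = True
        if len(self.remote) > self.remote_capacity:
            victim, _ = self.remote.popitem(last=False)
            # Inclusion: the remote-cache victim must also leave the
            # processor cache, overriding the processor cache's own LRU.
            if self.proc.pop(victim, None) is not None:
                self.back_invalidations += 1
        self.proc[line] = True
        if len(self.proc) > self.proc_capacity:
            self.proc.popitem(last=False)

node = Node(proc_capacity=2, remote_capacity=3)
for addr in [0, 1, 0, 0, 0, 2, 3]:
    node.access(addr)
# Line 0 is the hottest line in the processor cache, yet the remote cache
# (which never saw those hits) evicts it, and inclusion forces it out of
# the processor cache too.
print(node.back_invalidations)  # → 1
```

In the trace above, line 0 is repeatedly hit in the processor cache but looks least-recently-used to the remote cache, so it becomes the remote-cache victim and is back-invalidated; this is exactly the policy mismatch that the proposed Access-list is meant to remove.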
- A paper of the Institute of Electronics, Information and Communication Engineers (IEICE)
- 2000-02-25
Authors
-
Suh Hyo-joong
Department of Computer Engineering, Seoul National University
-
Yoo Seung
Graduate School of Information and Communication Technology, Ajou University
-
John Chu
Department of Computer Engineering, Seoul National University