Search for dissertations about: "Caches"
Showing results 11–15 of 45 Swedish dissertations containing the word Caches.
-
11. Leveraging Existing Microarchitectural Structures to Improve First-Level Caching Efficiency
Abstract : Low-latency data access is essential for performance. To achieve this, processors use fast first-level caches combined with out-of-order execution to decrease and hide memory access latency, respectively.
-
12. Distributed Coded Caching with Application to Content Delivery in Wireless Networks
Abstract : The amount of content downloaded to mobile devices, driven mainly by the demand for video, threatens to completely congest wireless networks, and the trend of ever-increasing video traffic is expected to continue unabated for many years. A promising solution to this problem is to store popular content closer to end users, effectively trading expensive bandwidth resources for affordable memory, a technique known as caching.
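The bandwidth-for-memory trade described above can be illustrated with a toy popularity-based edge cache; the class name, capacity, and eviction policy here are illustrative assumptions, not details from the dissertation:

```python
from collections import Counter

class EdgeCache:
    """Toy popularity-based cache: keep the k most-requested items locally
    so repeat requests avoid the expensive backhaul link.
    (Illustrative sketch only; names and policy are assumptions.)"""

    def __init__(self, capacity):
        self.capacity = capacity
        self.requests = Counter()   # request counts per content id
        self.store = set()          # content ids currently cached

    def request(self, content_id):
        self.requests[content_id] += 1
        hit = content_id in self.store
        if not hit:
            # Fetch over the backhaul, then re-rank: cache the top-k items.
            top = {cid for cid, _ in self.requests.most_common(self.capacity)}
            self.store = top
        return hit
```

Every repeat request for a popular item is then served from local memory instead of the congested wireless backhaul.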
-
13. Advances Towards Data-Race-Free Cache Coherence Through Data Classification
Abstract : Providing a consistent view of shared memory based on precise and well-defined semantics (a memory consistency model) has been an enabling factor in the widespread acceptance and commercial success of shared-memory architectures. Moreover, cache coherence protocols have been employed by the hardware to relieve programmers of the burden of dealing with the memory inconsistency that arises in the presence of private caches.
-
14. Packet Order Matters! : Improving Application Performance by Deliberately Delaying Packets
Abstract : Data centers increasingly deploy commodity servers with high-speed network interfaces to enable low-latency communication. However, achieving low latency at high data rates crucially depends on how the incoming traffic interacts with the system's caches. When packets that need to be processed in the same way are consecutive, i.e. …
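The batching idea in this abstract can be sketched as a reordering step that groups packets needing the same processing so they run back to back, keeping the handler code and per-flow state cache-hot; the packet fields and grouping key below are assumptions for illustration:

```python
from collections import defaultdict

def coalesce_by_flow(packets, key=lambda p: p["flow"]):
    """Reorder a batch so packets needing the same processing are consecutive,
    preserving arrival order within each flow. (A sketch of the batching idea;
    the dict-based packet representation is an assumption.)"""
    groups = defaultdict(list)
    for p in packets:
        groups[key(p)].append(p)   # stable: keeps per-flow arrival order
    # Concatenate the groups: same-flow packets now hit warm caches.
    return [p for flow in groups for p in groups[flow]]
```

The deliberate delay is the cost of holding early packets until their group is processed, traded against better cache locality for the batch as a whole.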
-
15. Efficient methods for application performance analysis
Abstract : To reduce latency and increase bandwidth to memory, modern microprocessors are designed with deep memory hierarchies including several levels of caches. For such microprocessors, the service time for fetching data from off-chip memory is about two orders of magnitude longer than fetching data from the level-one cache.
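The impact of that roughly 100x latency gap can be seen with the standard average-memory-access-time (AMAT) formula; the cycle counts below are illustrative assumptions, only the order-of-magnitude gap comes from the abstract:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time for a single cache level, in cycles."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical latencies: a 4-cycle L1 hit and a 400-cycle trip to off-chip
# memory (the ~100x gap is from the text; the exact numbers are assumptions).
l1_hit, dram_penalty = 4, 400
print(amat(l1_hit, 0.02, dram_penalty))   # even a 2% miss rate triples the average
```

This is why the application performance analysis the dissertation targets centers on cache behavior: small changes in miss rate dominate average access time.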