Dear all members,

Jaemin Jung and Hu-Ung Lee will present at the ESOS Seminar.

Please check the wiki for updates and presentation material.

A brief schedule for the ESOS Seminar is as follows:

*       Time and date: 3 July 2014 13:00

*       Place: Fusion Technology Center, Room 811

*       Speaker: Jaemin Jung and Hu-Ung Lee

 

Jaemin Jung: A Mean Field Model for a Class of Garbage Collection Algorithms in Flash-based Solid State Drives

Benny Van Houdt (University of Antwerp – iMinds, Belgium)

Abstract

Garbage collection (GC) algorithms play a key role in reducing the write amplification in flash-based solid state drives, where the write amplification affects the lifespan and speed of the drive. This paper introduces a mean field model to assess the write amplification and the distribution of the number of valid pages per block for a class C of GC algorithms. Apart from the Random GC algorithm, class C includes two novel GC algorithms: the d-Choices GC algorithm, which selects d blocks uniformly at random and erases the block containing the least number of valid pages among the d selected blocks, and the Random++ GC algorithm, which repeatedly selects another block uniformly at random until it finds a block with a lower-than-average number of valid pages. Using simulation experiments we show that the proposed mean field model is highly accurate in predicting the write amplification (for drives with N = 50000 blocks). We further show that the d-Choices GC algorithm has a write amplification close to that of the Greedy GC algorithm even for small d values, e.g., d = 10, and offers a more attractive trade-off between its simplicity and its performance than the Windowed GC algorithm introduced and analyzed in earlier studies. The Random++ algorithm is shown to be less effective, as it is even inferior to the FIFO algorithm when the number of pages b per block is large (e.g., for b ≥ 64).
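For attendees unfamiliar with the victim-selection policies named in the abstract, below is a minimal illustrative Python sketch of how the d-Choices and Random++ selection steps could be simulated. The function names, the toy drive state, and the parameter choices (N, b, d = 10) are assumptions for illustration only, not code from the paper.

import random

def d_choices_victim(valid_pages, d=10):
    """d-Choices policy: sample d blocks uniformly at random and return
    the index of the block holding the fewest valid pages."""
    candidates = random.sample(range(len(valid_pages)), d)
    return min(candidates, key=lambda blk: valid_pages[blk])

def random_pp_victim(valid_pages):
    """Random++ policy: keep drawing blocks uniformly at random until one
    has fewer valid pages than the current drive-wide average."""
    avg = sum(valid_pages) / len(valid_pages)
    while True:
        blk = random.randrange(len(valid_pages))
        if valid_pages[blk] < avg:
            return blk

if __name__ == "__main__":
    # Toy drive state: N blocks with b pages each, and a random number of
    # valid pages per block (purely illustrative, not the paper's workload).
    N, b = 1000, 64
    valid_pages = [random.randint(0, b) for _ in range(N)]

    v1 = d_choices_victim(valid_pages, d=10)
    v2 = random_pp_victim(valid_pages)
    print("d-Choices victim: block %d with %d valid pages" % (v1, valid_pages[v1]))
    print("Random++ victim:  block %d with %d valid pages" % (v2, valid_pages[v2]))

The sketch only shows the victim-selection step; the paper's contribution is the mean field model that predicts the resulting write amplification for this class of policies.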

 

Hu-Ung Lee: Revisiting Widely Held SSD Expectations and Rethinking System-Level Implications

Myoungsoo Jung and Mahmut Kandemir (The Pennsylvania State University)

Abstract

Storage applications leveraging Solid State Disk (SSD) technology are being widely deployed in diverse computing systems. These applications accelerate system performance by exploiting several SSD-specific characteristics. However, modern SSDs have undergone a dramatic technology and architecture shift in the past few years, which makes widely held assumptions and expectations regarding them highly questionable. The main goal of this paper is to question popular assumptions and expectations regarding SSDs through an extensive experimental analysis using 6 state-of-the-art SSDs from different vendors. Our analysis leads to several conclusions which are either not reported in prior SSD literature or contradict current conceptions. For example, we found that SSDs are not biased toward read-intensive workloads in terms of performance and reliability. Specifically, the random read performance of SSDs is worse than their sequential and random write performance by 40% and 39% on average, and, more importantly, the performance of sequential reads gets significantly worse over time. Further, we found that reads can shorten SSD lifetime more than writes, which is very unfortunate given that many existing systems/platforms already employ SSDs as read caches or in applications that are highly read intensive. We also performed a comprehensive study to understand the worst-case performance characteristics of our SSDs, and investigated the viability of recently proposed enhancements that are geared towards alleviating the worst-case performance challenges, such as TRIM commands and background tasks. Lastly, we uncover the overheads brought by these enhancements and their limits, and discuss system-level implications.
