The in-memory cache system is a performance-critical layer in today's web server architecture, and Memcached is one of the most effective, representative, and prevalent such systems. An important problem is its memory allocation: the default design does not make the best use of memory and cannot adapt when demand changes, a problem known as slab calcification. This paper introduces locality-aware memory allocation (LAMA), which addresses the problem by first analyzing the locality of Memcached's requests and then reassigning slabs to minimize the miss ratio or the average response time. Evaluating LAMA on various industry and academic workloads, the paper shows that LAMA outperforms existing techniques in steady-state performance, speed of convergence, and the ability to adapt to changes in the request pattern and overcome slab calcification. The new solution is close to optimal, achieving over 98% of the theoretical potential. Furthermore, LAMA can also be adopted in resource partitioning to guarantee quality of service (QoS).
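To make the idea concrete, the sketch below illustrates one way a locality-aware allocator could reassign a fixed slab budget across Memcached slab classes: given a per-class miss-ratio curve (miss ratio as a function of slabs owned) and each class's request volume, a dynamic program picks the per-class slab counts that minimize the request-weighted miss ratio. This is only a minimal illustration of the allocation step, not the paper's implementation; the function name `allocate_slabs` and the synthetic curves and request counts are assumptions for demonstration.

```python
# Hypothetical sketch of locality-aware slab reassignment. It assumes each
# slab class i exposes a miss-ratio curve mrc[i][s]: the miss ratio class i
# would see if it owned s slabs. Names and data below are illustrative only.

def allocate_slabs(mrc, requests, total_slabs):
    """Choose a slab count per class so the request-weighted miss ratio
    is minimized, via dynamic programming over classes and the slab budget."""
    n = len(mrc)
    total_req = sum(requests)
    INF = float("inf")

    # cost[i][s] = misses incurred by class i when it owns s slabs
    cost = [[requests[i] * mrc[i][s] for s in range(total_slabs + 1)]
            for i in range(n)]

    # best[i][b] = minimum total misses using the first i classes and b slabs
    best = [[INF] * (total_slabs + 1) for _ in range(n + 1)]
    choice = [[0] * (total_slabs + 1) for _ in range(n + 1)]
    best[0][0] = 0.0

    for i in range(1, n + 1):
        for b in range(total_slabs + 1):
            for s in range(b + 1):          # slabs given to class i-1
                if best[i - 1][b - s] + cost[i - 1][s] < best[i][b]:
                    best[i][b] = best[i - 1][b - s] + cost[i - 1][s]
                    choice[i][b] = s

    # Walk the choice table backwards to recover the per-class allocation.
    alloc, b = [0] * n, total_slabs
    for i in range(n, 0, -1):
        alloc[i - 1] = choice[i][b]
        b -= alloc[i - 1]
    return alloc, best[n][total_slabs] / total_req


if __name__ == "__main__":
    # Two slab classes with synthetic, monotonically decreasing miss-ratio
    # curves and different request volumes (purely made-up numbers).
    mrc = [
        [1.0, 0.50, 0.30, 0.20, 0.15, 0.12],
        [1.0, 0.40, 0.10, 0.05, 0.04, 0.03],
    ]
    requests = [8000, 2000]
    alloc, miss_ratio = allocate_slabs(mrc, requests, total_slabs=5)
    print("slabs per class:", alloc, "overall miss ratio:", round(miss_ratio, 3))
```

Because the miss-ratio curves are non-increasing, giving out the full slab budget is never worse, so the dynamic program simply reports the allocation that spends all slabs; periodically re-running such an optimization as request patterns shift is what lets the allocator avoid slab calcification.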