Memcached is a high-performance, distributed memory object caching system. It is used by a wide range of sites, such as livejournal.com and facebook.com, to alleviate database load and help speed up page delivery. Memcached is usually deployed on a set of servers sized in relation to your application and database infrastructure. It's a great way to cut down your database calls by caching the results of requests.
The servers are not load balanced by least connections or response time; instead, requests are distributed by hashing the key. In other words, the cache is spread across all the memcached servers without redundancy, and the hash of a key tells the application which server holds the item. Expiration is lazy, so CPU is not spent sweeping for expired items: when an item is requested, its expiration time is checked to see whether it is still valid before it is returned to the client. When adding a new item to a full cache, memcached first looks for expired items to replace before evicting the least recently used items. You can read more about it at the memcached site.
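The two ideas above can be sketched in a few lines of Python. This is a minimal illustration, not memcached's actual implementation: the server names are made up, the hash-modulo scheme is the simplest key-distribution strategy (real clients may use consistent hashing instead), and the dict stands in for memcached's internal storage.

```python
import hashlib
import time

# Hypothetical pool of memcached servers.
SERVERS = ["cache1:11211", "cache2:11211", "cache3:11211", "cache4:11211"]

def server_for_key(key):
    # Hash the key and take it modulo the server count; every client
    # computes the same mapping, so no coordination is needed -- and a
    # hot key always lands on the same single server.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

# Lazy expiration: nothing expires on a timer. An item's deadline is
# only checked when the item is actually requested.
cache = {}  # key -> (value, expires_at)

def cache_set(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() > expires_at:
        del cache[key]  # reaped on access, not by a background sweep
        return None
    return value
```

Note that because the mapping is deterministic, a single heavily requested key concentrates all of its traffic on one server, which is exactly the situation described next.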
Recently I ran into a situation where an object (a DB config) was being requested by a web application roughly 100x more often than any other cached item, and it was stored on one of four memcached servers. This item was originally stored as a flat file but had been moved into a database, which is how memcached came into play.
The excitement starts here. The four memcached servers were Dell 1950s, each with dual-core Intel Xeon 2.0 GHz CPUs and 8 GB of RAM, running CentOS with memcached 1.2.1. The object being cached ended up on a single server as designed, but it was being requested by the web application roughly 100x more often than any other cached object and had a 5-minute expiry. The memcached server holding the object ended up taking a whopping 800+ megabit/second in traffic; see the graph below:
*Reads and writes are reversed because of the way Cacti was set up.
The other server vitals at the time were:
CPU = 60%
Memcache Misses = 681 k/s
Memcache Hits = 1.16 M/s
Memcache Requests/sec (get&set) = 1.66 million
Server Load = 1.29
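A quick back-of-envelope check ties the bandwidth and hit numbers together. Treating all 800 Mbit/s as memcached response traffic (an assumption: the graph includes protocol overhead and both directions, so this is only a rough average, and the hot DB config object would have been far larger than this mean):

```python
def avg_payload_bytes(mbit_per_sec, hits_per_sec):
    # Convert megabits/s to bytes/s, then divide by the hit rate to get
    # the implied average bytes moved per hit.
    return (mbit_per_sec * 1_000_000 / 8) / hits_per_sec

# Using the figures above: 800 Mbit/s over ~1.16M hits/s.
print(round(avg_payload_bytes(800, 1_160_000)))  # roughly 86 bytes per hit
```

The point is not the exact number but the shape of the problem: at request rates in the millions per second, even modest per-item sizes saturate a gigabit port.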
I had never seen a single server push past 300 megabit of traffic before, so that alone was a new lesson in capacity and thresholds. Also, when memcached's documentation says it is not CPU intensive, they really mean it. The server started crapping out, meaning it stopped serving requests properly, at 780+ megabit, so amazingly enough memcached can handle quite a lot of traffic and requests. Our switch port was saturated before memcached itself failed.
So, lesson learned: BEWARE what you cache when you are using memcached; you don't want to see the stats I did… In this case we removed that cached object, which was a large DB config, and localized it. Hope this helps by giving the community a case study of memcached out in the wild.
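"Localizing" a rarely-changing object like a DB config can be as simple as holding it in process memory and refreshing it on a timer, so no request ever crosses the network for it. A minimal sketch of that idea, with a made-up loader standing in for however the real config is read:

```python
import time

class LocalConfig:
    """Hold a rarely-changing config in process memory and refresh it
    periodically, instead of fetching it from a cache server on every
    request."""

    def __init__(self, loader, refresh_secs=300):
        self._loader = loader          # e.g. reads the config from the DB
        self._refresh = refresh_secs
        self._value = None
        self._loaded_at = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._loaded_at > self._refresh:
            self._value = self._loader()   # hit the backing store
            self._loaded_at = now
        return self._value

# Hypothetical usage: the lambda stands in for the real DB read.
conf = LocalConfig(lambda: {"host": "db1", "port": 3306})
print(conf.get()["host"])  # prints "db1"
```

The trade-off versus memcached is staleness: each app server may serve a config up to `refresh_secs` old, which is fine for data that changes rarely, and in exchange the hot key generates zero network traffic.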