Memcached
Memcached is a high-performance, distributed in-memory key-value cache designed in 2003 by Brad Fitzpatrick at LiveJournal. It is the spiritual ancestor of Redis — simpler, multi-threaded, and laser-focused on one task: caching opaque byte strings keyed by short string keys. Memcached has no persistence, no replication, no data types beyond byte strings, and no cluster mode — deliberately, because its design goal is the smallest possible cache primitive.
Key Features:
- Multi-Threaded. Unlike Redis's largely single-threaded execution model, Memcached scales across many cores per instance — a single node can saturate a 10 GbE NIC.
- LRU Eviction. When memory fills, the least-recently-used items are evicted automatically (tracked per slab class) — pure cache semantics.
- Slab Memory Allocator. Memory is carved into fixed-size chunks grouped into slab classes, preventing the fragmentation that a general-purpose malloc would cause for variable-size values.
- Client-Side Sharding. No server-side cluster — clients use consistent hashing to spread keys across nodes.
- Tiny Footprint. The server is a compact C codebase with libevent as its main dependency; it is trivially deployable on virtually any platform.
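The LRU eviction described above can be sketched with a toy cache. This is a minimal Python illustration of the policy, not memcached's actual implementation (which runs a segmented LRU per slab class, in C):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least-recently-used item when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self._items:
            return None
        self._items.move_to_end(key)  # touch: mark as most recently used
        return self._items[key]

    def set(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # touching "a" makes "b" the least recently used
cache.set("c", 3)  # capacity exceeded: "b" is evicted
```

The key point is that a get refreshes an item's recency, so a hot item survives indefinitely while cold items age out — exactly the behavior you want from a pure cache.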
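The slab allocator's size classes form a geometric series: each class holds chunks a fixed factor larger than the last, and an item is stored in the smallest class that fits it. A simplified sketch — the minimum chunk size, growth factor, and 8-byte alignment below mirror memcached's -n and -f tuning knobs, but the exact defaults and rounding rules are assumptions of this illustration:

```python
def slab_class_sizes(min_chunk=80, growth_factor=1.25, max_item=1024 * 1024):
    """Generate ascending chunk sizes, one per slab class (simplified sketch)."""
    sizes = []
    size = min_chunk
    while size < max_item:
        sizes.append(size)
        size = int(size * growth_factor)
        size += (8 - size % 8) % 8  # round up to 8-byte alignment
    sizes.append(max_item)  # final class holds the largest allowed item
    return sizes

def class_for(value_size, sizes):
    """Pick the smallest slab class whose chunk fits the value."""
    for s in sizes:
        if value_size <= s:
            return s
    raise ValueError("item too large to cache")

sizes = slab_class_sizes()
chunk = class_for(500, sizes)  # a 500-byte value lands in the first class >= 500
```

Because every chunk in a slab class is the same size, freeing an item leaves a hole that the next item of that class fills exactly — the trade-off being some internal waste when a value is smaller than its chunk.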
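Client-side sharding is usually implemented with a consistent-hash ring: each server contributes many virtual points on a ring, and a key is sent to the first server point clockwise from the key's hash. A minimal sketch, assuming MD5-derived ring points and an arbitrary virtual-node count (real clients such as libketama differ in details):

```python
import hashlib
from bisect import bisect

class HashRing:
    """Consistent-hash ring sketch for spreading keys across cache nodes."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []
        for node in nodes:
            for i in range(vnodes):
                # each node owns many points so load spreads evenly
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._points = [p for p, _ in self._ring]

    @staticmethod
    def _hash(key):
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        # first ring point clockwise from the key's hash, wrapping around
        idx = bisect(self._points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
ring.node_for("user:42")  # deterministically maps to one of the three nodes
```

The payoff over naive modulo hashing is stability: removing a node remaps only the keys that node owned, while every other key keeps its assignment — critical for a cache fleet, where remapping means a cold-cache stampede.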
Memcached vs. Redis:
- Choose Memcached when you need a pure ephemeral cache, want multi-core throughput per instance, and don’t use any of Redis’ data structures, persistence, or pub/sub.
- Choose Redis for everything else — rate limiting (sorted sets), session state with persistence, leaderboards, queues, pub/sub, distributed locks, vector search.
- For most teams in 2026, Redis is the default; Memcached is a niche choice for pure-cache fleets at very high QPS where its simplicity is a feature.
Use Cases:
- Object cache for high-traffic LAMP / Rails / Django applications — the original use case.
- Edge / sidecar cache where simplicity matters more than feature richness.
- Multi-tenant cache fleets where per-instance multi-core throughput dominates total cost of ownership.
- Legacy systems that have run Memcached for 15+ years and haven’t had a reason to migrate.