Cache = Cash! 2.0
Stefan Wintermeyer • Philadelphia, PA • Talk

Date: July 08, 2025
Published: not published
Announced: unknown

Back in 2013, I gave a talk at RailsConf titled "Cache = Cash!", where I explored how caching can dramatically improve Rails performance. More than a decade later, caching has only become more powerful — but also more dangerous. In this updated session, we’ll go beyond the official Rails documentation and explore advanced caching techniques that can significantly boost performance — if used wisely.

Start with the Basics: Rails Caching 101

To make sure everyone can follow along, we’ll begin with a clear introduction to Rails' built-in caching strategies (a short code sketch follows the list), including:

- Fragment Caching – Storing reusable view fragments to speed up rendering.
- Russian Doll Caching – Nesting caches effectively to prevent unnecessary recomputation.
- Low-Level Caching (Rails.cache) – Directly caching arbitrary data for optimized reads.
- SQL Query Caching – Reducing database load by storing query results efficiently.
- Cache Store Options – Choosing between memory store, file store, Memcached, and Redis.
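
As a taste of the basics, here is a minimal Russian doll caching sketch in ERB, assuming hypothetical Product and Review models whose fragment keys derive from updated_at:

```erb
<%# Outer fragment: recomputed only when @product's cache key changes. %>
<% cache @product do %>
  <h1><%= @product.name %></h1>

  <%# Inner fragments: one cache entry per review, reused until that review changes. %>
  <%= render partial: "reviews/review", collection: @product.reviews, cached: true %>
<% end %>
```

For nested updates to bust the outer fragment, the Review model would declare belongs_to :product, touch: true; outside the view layer, Rails.cache.fetch covers the low-level case in the same spirit.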

This will give attendees — even those with no prior caching experience — a solid foundation before we dive into the advanced techniques that aren’t covered in the official Rails guides.

The Closer, the Faster: Understanding Cache Hierarchies 🚀

Not all caches are created equal! The further away your data is stored, the slower your application becomes. If you want truly high-performance caching, you need to understand where to cache data and how access speeds compare.

Here's how different caches stack up in terms of access speed:

- L1 Cache (CPU Internal Cache) → ~1 nanosecond
- L2/L3 Cache → ~3–10 nanoseconds
- RAM (Memory Access) → ~100 nanoseconds
- SSD (Local Disk Cache) → ~100 microseconds (1000× slower than RAM!)
- Network Call (e.g., Redis, Database Query) → ~1–10 milliseconds
- Spinning Disk (HDD Cache Access) → ~10 milliseconds

That’s a 10-million-fold difference between CPU cache and an HDD!
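
To make these numbers concrete, here is a rough benchmark sketch comparing an in-process memory store with a Redis-backed store. It assumes the activesupport and redis gems plus a Redis server on localhost; exact timings will vary by machine, but the relative gap is the point.

```ruby
require "active_support"
require "active_support/cache"
require "benchmark"

# In-process store: reads are answered from local RAM, no network hop.
memory = ActiveSupport::Cache::MemoryStore.new

# Network store: every read pays a round trip to Redis (assumed at localhost:6379).
redis = ActiveSupport::Cache::RedisCacheStore.new(url: "redis://localhost:6379/0")

[memory, redis].each { |store| store.write("greeting", "hello") }

Benchmark.bm(14) do |bm|
  bm.report("memory store") { 10_000.times { memory.read("greeting") } }
  bm.report("redis store")  { 10_000.times { redis.read("greeting") } }
end
```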

Practical Takeaways

✅ Cache as close to the CPU as possible – Learn how to use in-memory caches and CPU-friendly data structures.
✅ Optimize ActiveRecord for caching efficiency – Instead of always caching full ActiveRecord objects, consider caching only essential attributes as JSON, arrays, or hashes (see the sketch after this list). This reduces deserialization overhead and keeps frequently accessed data lightweight.
✅ Minimize unnecessary cache retrievals – Just because Redis is fast doesn’t mean it’s the right cache for every scenario. Consider database-level caching via materialized views or denormalized tables when appropriate.
✅ Leverage cache preloading and warming – Reduce performance bottlenecks by anticipating cache misses before they happen.
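
As a sketch of the attribute-caching takeaway above, a lightweight hash can stand in for the full ActiveRecord object. The Product model, its attribute names, and the TTL are assumptions for illustration:

```ruby
# Cache a small hash of the attributes a page actually needs instead of the
# full ActiveRecord object (hypothetical Product model and attributes).
class ProductSummary
  CACHE_TTL = 10.minutes

  def self.fetch(product_id)
    Rails.cache.fetch(["product_summary", product_id], expires_in: CACHE_TTL) do
      product = Product.find(product_id)
      # Only lightweight, frequently read attributes: no association graph,
      # no ActiveRecord instantiation on the read path.
      { id: product.id, name: product.name, price_cents: product.price_cents }
    end
  end
end
```

A view or API endpoint then reads ProductSummary.fetch(id)[:name] instead of loading and marshaling the whole record, and the same method works inside a warming job that preloads the most popular IDs.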

When Caching Goes Wrong: Debugging and Avoiding Traps

Caching is powerful, but it can turn into a nightmare if you don’t handle it properly. We’ll cover:

- Cache Invalidation Challenges – “There are only two hard things in computer science: naming things, and cache invalidation.” Learn how to keep caches fresh without unnecessary complexity (see the sketch after this list).
- Debugging Stale Data Issues – Identify and resolve issues caused by outdated or inconsistent cache entries.
- Knowing When NOT to Cache – Some things shouldn’t be cached! Learn when AI-driven caching decisions (or even Rails defaults) might cause more harm than good.
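
One invalidation pattern worth sketching here is key-based (recyclable) expiration: the cache key embeds the record's version, so stale entries are simply never read again and no explicit delete is needed. The expensive_payload_for helper below is hypothetical:

```ruby
# Key-based expiration: cache_key_with_version embeds updated_at, so an update
# rolls the key forward and the old entry falls out of use on its own.
def cached_payload(product)
  # e.g. "products/42-20250708101500000000"
  Rails.cache.fetch(product.cache_key_with_version, expires_in: 12.hours) do
    expensive_payload_for(product) # hypothetical slow serialization
  end
end
```

Combined with belongs_to :product, touch: true on child models, parent keys roll over automatically when nested data changes, which is the same mechanism Russian doll caching relies on.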

What You’ll Walk Away With

- A solid understanding of Rails caching fundamentals — perfect for beginners.
- Advanced caching techniques that go beyond the Rails guides.
- Performance insights on CPU, memory, and distributed caches.
- A clear strategy for debugging and maintaining cache integrity.

More than a decade ago, "Cache = Cash!" was all about making Rails apps faster and more efficient. This time, we’re taking it even further — with new techniques, new pitfalls, and even bigger performance gains.

Are you ready to push Rails caching to the next level?

RailsConf 2025
