Design an application cache strategy
Design a comprehensive Redis caching strategy with appropriate patterns, a TTL policy, invalidation, and stampede protection.
Paste into your AI assistant
Paste this prompt into ChatGPT, Claude, or Gemini and customize the variables in brackets.
You are a systems architect specializing in caching strategies and application performance optimization. I need to implement a robust caching strategy for my application.

**Application context:**
- Type: [E.g.: high-traffic REST API, B2C web application, data platform]
- Stack: [E.g.: Node.js + PostgreSQL, Python + MongoDB]
- Caching solution under consideration: [E.g.: Redis 7, Memcached, in-memory cache]
- Current traffic: [E.g.: 1,000 requests/minute, peaks at 5,000]
- Performance problem: [E.g.: slow DB queries, rate-limited third-party API, expensive computations]

**Data candidates for caching:** [LIST_THE_DATA_TYPES: e.g. user profiles, search results, configurations, sessions, revoked JWT tokens]

Design a complete caching strategy:

1. **Data analysis**: classify each data type by access frequency, modification rate, and recomputation cost to prioritize what is worth caching.
2. **Caching patterns**: recommend and implement the appropriate patterns (Cache-Aside, Write-Through, Write-Behind, Read-Through) for each use case.
3. **Expiration policy (TTL)**: define appropriate TTLs for each data type, with justification.
4. **Invalidation strategy**: how to invalidate the cache when data is updated (invalidation by key, by pattern, event-driven).
5. **Cache miss handling**: protection against Cache Stampede (thundering herd) and Cache Penetration.
6. **Distributed cache**: considerations for a multi-instance environment (Redis Cluster, serialization).
7. **Metrics**: metrics to monitor (hit rate, miss rate, memory usage, evictions) and alert thresholds.
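To make the Cache-Aside pattern from step 2 concrete, here is a minimal sketch in Python. It is illustrative only: `TTLCache` is a hypothetical in-memory stand-in for Redis commands like `GET`, `SET key value EX ttl`, and `DEL`, and `get_user_profile` / `db_fetch` are made-up names for your own read path.

```python
import time

class TTLCache:
    """Hypothetical in-memory stand-in for a Redis-style cache with per-key TTL."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # expired: behave like a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def delete(self, key):
        self._store.pop(key, None)  # key-based invalidation on data updates

def get_user_profile(cache, db_fetch, user_id, ttl_seconds=300):
    """Cache-Aside: try the cache first, fall back to the source, then populate."""
    key = f"user:profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached            # fast path: cache hit
    value = db_fetch(user_id)    # slow path: hit the database
    cache.set(key, value, ttl_seconds)
    return value
```

The same shape applies regardless of the backing store; with real Redis you would swap `TTLCache` for a client call and serialize the value (e.g. JSON).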
Why this prompt works
<p>This prompt is structured to address caching as a complete systems engineering problem. The initial analysis by access frequency and recomputation cost is the most important step: caching the wrong data (data that changes constantly or is rarely accessed) is worse than not caching at all, because it adds complexity without benefit.</p><p>Protection against Cache Stampede (thundering herd) is often ignored until the first production incident: when a cached entry expires, all simultaneous requests hit the database at the same time, causing exactly the overload the cache was meant to prevent. This prompt forces you to anticipate that scenario.</p><p>The request for specific metrics with alert thresholds turns the design into an observable system: a hit rate below 80% generally indicates a poor TTL policy or overly aggressive invalidation, and these metrics enable continuous cache tuning in production.</p>
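The stampede protection described above can be sketched with a "single flight" guard: when a key misses, only one caller recomputes while the others wait, then re-read. This is a hedged, single-process illustration with a plain dict standing in for Redis; `SingleFlightCache` is a made-up name, it uses one process-wide lock for simplicity (a production version would lock per key, or use a distributed lock such as Redis `SET key token NX EX ttl` across instances).

```python
import threading

class SingleFlightCache:
    """Collapse concurrent misses for a key into a single recomputation
    (Cache Stampede / thundering-herd protection, single-process sketch)."""

    def __init__(self):
        self._store = {}                   # stand-in for the real cache
        self._lock = threading.Lock()      # a real version would lock per key
        self.recomputes = 0                # exposed so the effect is observable

    def get_or_compute(self, key, compute):
        value = self._store.get(key)
        if value is not None:
            return value                   # fast path: cache hit, no lock taken
        with self._lock:
            value = self._store.get(key)   # re-check after acquiring the lock:
            if value is None:              # another caller may have filled it
                self.recomputes += 1
                value = compute()          # exactly one caller pays this cost
                self._store[key] = value
            return value
```

The double-check after acquiring the lock is the essential detail: without it, every waiting thread would still recompute once the lock was released.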
Use Cases
Expected Output
A comprehensive caching strategy covering data analysis, recommended patterns, TTL policy, invalidation strategy, and monitoring metrics.
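The monitoring step can be reduced to a simple health check like the sketch below. The function name `cache_health` and the 80% default are assumptions for illustration (the threshold echoes the rule of thumb above); with real Redis you would feed it the `keyspace_hits` / `keyspace_misses` counters from `INFO stats`.

```python
def cache_health(hits, misses, hit_rate_alert=0.80):
    """Compute the cache hit rate and flag it when it drops below the
    alert threshold (hypothetical helper; 0.80 is an illustrative default)."""
    total = hits + misses
    hit_rate = hits / total if total else 1.0  # no traffic: treat as healthy
    return {"hit_rate": round(hit_rate, 3), "alert": hit_rate < hit_rate_alert}
```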
Learn more
Check the full skill on Prompt Guide to master this technique from A to Z.
Similar Prompts
Learn the basics of Git for beginners
Learn Git from scratch with illustrated explanations, concrete examples, a practical workflow and a cheatsheet of essential commands.
Define a Git strategy for a team
Define a comprehensive Git strategy adapted to your team: branching model, conventions, code review and release management.
Configure a CI/CD pipeline with GitHub Actions
Configure a professional CI/CD pipeline with GitHub Actions covering testing, security, Docker build and multi-environment deployment.
Create an accessible React component
Create fully accessible React components complying with WCAG 2.1 with keyboard navigation, ARIA and screen reader support.