How to solve the Memcached cache breakdown and cache avalanche problems
Memcached is a fast, high-performance distributed memory object caching system that is widely used to cache data in web applications. However, Memcached deployments face some common problems, such as cache breakdown and cache avalanche. This article explains what cache breakdown and cache avalanche are and presents several ways to solve them, with example code and configuration notes.
1. Cache breakdown problem
Cache breakdown occurs when a hot key expires or is deleted under high concurrency, so a flood of requests bypasses the cache and hits the database directly, overloading it and potentially making the service unavailable. The following are some ways to mitigate cache breakdown.
1.1 Add a mutex lock
A mutex lock ensures that only one thread queries the database after a cache entry expires, while the other threads wait for that thread's result. This can be implemented as follows:
```python
import time

def get_data(key):
    key_lock = key + ":lock"
    # Get data from the cache
    data = memcached.get(key)
    if data is None:
        # Try to acquire the mutex; add() succeeds for only one caller
        if memcached.add(key_lock, 1, lock_timeout):
            try:
                # Query data from the database
                data = query_from_database()
                # Save the data into the cache
                memcached.set(key, data, cache_timeout)
            finally:
                # Release the lock
                memcached.delete(key_lock)
        else:
            # Another thread is querying the database; wait, then retry
            time.sleep(retry_interval)
            # Get data from the cache again
            data = memcached.get(key)
    return data
```
1.2 Set hot data to never expire
For particularly hot data, you can set no expiration time, so the data stays cached even when ordinary entries expire. A background or scheduled task then refreshes these hot entries periodically so they never go stale.
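A minimal sketch of such a periodic refresh task, using a plain dict as a stand-in for the Memcached client (swap in a real client such as pymemcache in production); the hot-key list, the refresh interval, and `load_from_database` are all illustrative assumptions:

```python
import threading

cache = {}  # dict stand-in for the Memcached client (assumed)
HOT_KEYS = ["home:banner", "top:products"]  # assumed hot keys
REFRESH_INTERVAL = 300  # seconds between refreshes (assumed)

def load_from_database(key):
    # Placeholder for the real database query
    return f"value-for-{key}"

def refresh_hot_keys():
    # Rewrite every hot key (stored with no expiry), then reschedule
    for key in HOT_KEYS:
        cache[key] = load_from_database(key)
    timer = threading.Timer(REFRESH_INTERVAL, refresh_hot_keys)
    timer.daemon = True
    timer.start()

# Kick off the periodic refresh once at startup
refresh_hot_keys()
```

Because the task rewrites the hot keys before anything can expire, readers never observe a miss on these entries.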
1.3 Cache empty objects
When a database query returns no result, cache that empty result under the key using a special sentinel value. Subsequent identical queries then get the empty result straight from the cache instead of hitting the database, which avoids repeated futile queries and reduces database load.
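A sketch of this pattern, again with a dict standing in for the Memcached client; the sentinel name `NULL_SENTINEL` and the helper `get_with_null_caching` are assumptions for illustration (in a real deployment the sentinel would be stored with a short TTL, which the dict cannot express):

```python
NULL_SENTINEL = "__NULL__"  # assumed marker meaning "database has no such key"

cache = {}  # dict stand-in for the Memcached client

def query_from_database(key):
    # Simulate a key that does not exist in the database
    return None

def get_with_null_caching(key):
    value = cache.get(key)
    if value == NULL_SENTINEL:
        return None  # known-missing key: skip the database entirely
    if value is None:
        value = query_from_database(key)
        if value is None:
            # Remember the miss so repeated lookups stay off the database
            cache[key] = NULL_SENTINEL
            return None
        cache[key] = value
    return value
```

The first lookup of a missing key still hits the database; every later lookup is answered by the cached sentinel.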
2. Cache avalanche problem
Cache avalanche occurs when a large amount of cached data is given the same expiration time, so at one moment most entries in the cache expire together and every request goes straight to the database, putting it under enormous pressure. The following are some ways to mitigate cache avalanche.
2.1 Set random expiration times
Give cached entries randomized expiration times spread evenly over an interval, so that entries do not all expire at the same moment and the load on the database is smoothed out.
```python
import random

# Generate a random expiration time (between min_timeout and max_timeout)
cache_timeout = random.randint(min_timeout, max_timeout)
memcached.set(key, data, cache_timeout)
```
2.2 Use a multi-level cache architecture
Split the cache into multiple levels, such as a first-level cache (local, in-process) and a second-level cache (Memcached). Requests that hit the first-level cache return immediately without touching the second-level cache or the database. Choose the number of levels and the caching strategy according to your business scenario.
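A minimal two-level lookup sketch. Both tiers are plain dicts here (the second would be a real Memcached client in practice), and the function name `get_two_level`, the short local TTL, and `query_from_database` are assumptions:

```python
import time

local_cache = {}   # level 1: per-process cache with its own short TTL
remote_cache = {}  # level 2: dict stand-in for the shared Memcached tier
LOCAL_TTL = 5      # seconds; keep level 1 short so it does not go stale

def query_from_database(key):
    # Placeholder for the real database query
    return f"db-value-for-{key}"

def get_two_level(key):
    # 1. Check the local cache first
    entry = local_cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]
    # 2. Fall back to the shared cache
    value = remote_cache.get(key)
    if value is None:
        # 3. Finally hit the database and populate the shared tier
        value = query_from_database(key)
        remote_cache[key] = value
    # Refill the local tier with a fresh expiry timestamp
    local_cache[key] = (value, time.time() + LOCAL_TTL)
    return value
```

The short level-1 TTL means an avalanche in the shared tier is absorbed locally for a few seconds, flattening the spike that reaches the database.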
2.3 Refresh the cache proactively
Refresh cached data asynchronously before it expires, rather than waiting until a request finds it missing; this shrinks the window during which the data is unavailable.
```python
import threading

# Update the cache asynchronously, before the cached entry expires
def update_cache_async(key):
    def refresh():
        # Query data from the database
        data = query_from_database()
        # Save the data into the cache
        memcached.set(key, data, cache_timeout)
    # Run the refresh in a background thread so the caller is not blocked
    threading.Thread(target=refresh, daemon=True).start()

# Call the asynchronous update wherever the cache needs refreshing
update_cache_async(key)
```
2.4 Warm up the cache
When the system starts, load hot data into the cache in advance, so that a freshly started system is not immediately flooded with requests that go straight to the database.
```python
# Warm up the cache when the system starts
def cache_preheat():
    # Load hot data from the database into the cache in advance
    hot_data = query_hot_data_from_database()
    for data in hot_data:
        memcached.set(data.key, data.value, cache_timeout)

# Run the warm-up routine at startup
cache_preheat()
```
The above are some common methods and example code for solving cache breakdown and cache avalanche. Choose the approach that fits your business scenario and system requirements; with sensible configuration and implementation, Memcached can remain both highly reliable and fast.