This makes maxsize deterministic, so the memory cost of using this
function can actually be estimated.
E.g. if I want to cache site config and would prefer to keep the 16
most recent site configs in memory, there's no way to do that.
Site-specific maxsize means that if I have 1000 sites on a bench, I'll
have 1000 keys in the cache.
This change makes the behaviour similar to lru_cache, which is how I
thought it worked TBH.
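For illustration, an lru_cache-style bound where maxsize caps the total number of entries (rather than keeping one bound per site) could look like this minimal sketch. `bounded_cache` and its internals are hypothetical, not Frappe APIs:

```python
from collections import OrderedDict


def bounded_cache(maxsize=16):
    """Hypothetical sketch: one bound shared across ALL keys,
    like functools.lru_cache, instead of one bound per site."""

    def decorator(func):
        cache = OrderedDict()

        def wrapper(*args):
            key = (func.__name__, args)
            if key in cache:
                cache.move_to_end(key)  # mark as most recently used
                return cache[key]
            result = func(*args)
            cache[key] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result

        wrapper.cache = cache  # exposed for inspection / clearing
        return wrapper

    return decorator
```

With maxsize=16 this keeps the 16 most recently used entries total, no matter how many sites exist on the bench.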
Identified two cases where the site cache can break:
1. Another thread clears the cache using clear_cache because of TTL or
manual eviction.
2. Another thread pops the element we are about to read because of the
`maxsize` limit.
This change should fix both and even make it a little faster.
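Both races come from checking `key in cache` and then reading `cache[key]` as two separate steps; another thread can remove the key in between. A single atomic `dict.get` avoids that. A minimal sketch of the pattern (the helper name and signature are assumptions, not the actual code):

```python
_SENTINEL = object()  # distinguishes "missing" from a cached None


def cached_call(cache, key, func, *args):
    """Read the cache with one atomic dict.get instead of a
    `key in cache` check followed by `cache[key]` -- another
    thread could pop the key between those two operations."""
    value = cache.get(key, _SENTINEL)
    if value is _SENTINEL:
        value = func(*args)
        cache[key] = value
    return value
```

As a bonus, one `dict.get` is also slightly cheaper than a `__contains__` check plus a `__getitem__`.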
- Eagerly initialize request_cache; all requests use it, so there is no
point in doing it lazily.
- Reduce accesses to the `frappe.local` namespace: get the cache once
and reuse it for the rest of the execution.
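The second point is a generic pattern: every dotted lookup on a werkzeug `Local` proxy costs an extra attribute-resolution round-trip, so resolving it once into a local variable pays off on hot paths. A sketch with a stand-in for `frappe.local` (names here are illustrative):

```python
class Local:
    """Stand-in for frappe.local; the real one is a werkzeug Local
    proxy, which makes each attribute access even more expensive."""

    request_cache = {}


local = Local()


def compute(key):
    return key * 2


def slow(key):
    # three separate lookups of local.request_cache
    if key not in local.request_cache:
        local.request_cache[key] = compute(key)
    return local.request_cache[key]


def fast(key):
    cache = local.request_cache  # fetch once, reuse
    if key not in cache:
        cache[key] = compute(key)
    return cache[key]
```

Both behave identically; `fast` simply touches the namespace once per call instead of three times.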
Before: 1250ns +/- 1%
After: 645ns +/- 1%
Source: Trust me bro.
(no really, for now just trust me or look at the diff)
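Numbers like these can be reproduced with a micro-benchmark along the following lines. This is a sketch of a plausible harness using `timeit`, not the actual script used for the figures above:

```python
import timeit


def bench(func, number=100_000):
    """Return best-of-5 mean nanoseconds per call for func()."""
    runs = timeit.repeat(func, number=number, repeat=5)
    return min(runs) / number * 1e9


cache = {"key": 42}


def cached_lookup():
    # stand-in for a request_cache hit
    return cache.get("key")


print(f"{bench(cached_lookup):.0f}ns per call")
```

Taking the minimum over several repeats filters out scheduler noise, which is why the +/- 1% spread is achievable.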
Just like lru_cache, there's no need to support unhashable types in site_cache.
Current usage in the codebase also shows that it's not required, and json.dumps is quite slow.
refactor: clean up code to py39+ supported syntax
- f-strings instead of format
- modern typing syntax instead of pre-3.9 TitleCase generics
- remove UTF-8 declarations.
- many more changes
Powered by https://github.com/asottile/pyupgrade/ + manual cleanups
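The kinds of rewrites pyupgrade performs, illustrated with a representative before/after pair (these snippets are examples of the rule classes, not lines from the actual diff):

```python
# Before (pre-3.9 style), shown as comments:
#   from typing import Dict, List
#   def describe(doc):
#       return "Doc: {0}".format(doc)
#   names = []  # type: List[str]


def describe(doc) -> str:
    return f"Doc: {doc}"  # f-string instead of str.format


names: list[str] = []        # built-in generics instead of typing.List
lookup: dict[str, int] = {}  # ...and instead of typing.Dict
```

Built-in generics (`list[str]`, `dict[str, int]`) work at runtime from Python 3.9 onward, which is why the cleanup targets py39+.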
Sped up request_cache access times multi-fold with the help of a
benchmarking script. Access times for this generic cache are comparable
to purpose-built caches (e.g. get_meta's local cache), with an
additional overhead of ~15% compared to implementing the cache in each
function separately.
* Got rid of logging
* Optimized hot bits with the help of a benchmarking script, measuring
against frappe.get_meta's performance
* Use hash instead of json.dumps
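A hash-based cache key can be built roughly like this; the actual key function in the codebase may differ, so treat `make_key` as a sketch:

```python
def make_key(func, args, kwargs):
    """Build a cache key from the function identity and call arguments.

    Requires args/kwargs values to be hashable -- same constraint as
    functools.lru_cache, and much cheaper than serializing everything
    through json.dumps just to get a string key."""
    return hash((func.__module__, func.__name__, args, frozenset(kwargs.items())))
```

`frozenset(kwargs.items())` makes the key insensitive to keyword-argument order while remaining hashable.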
Decorator to cache method calls across requests. The cache is stored in
frappe.utils.caching._SITE_CACHE. The cache persists on the parent process.
It offers a light-weight cache for the current process without the additional
overhead of serializing / deserializing Python objects.
Note: This cache isn't shared among workers. If you need to share data across
workers, use redis (frappe.cache API) instead.
Usage:
    from frappe.utils.caching import site_cache

    @site_cache
    def calculate_pi(num_terms=0):
        import math

        precision = get_precision("Math Constant", "Pi")  # depends on site data
        return round(math.pi, precision)

    calculate_pi(10)  # will calculate value
    calculate_pi(10)  # will return value from cache
    calculate_pi.clear_cache()  # clear this function's cache for all sites
    calculate_pi(10)  # will calculate value
Utility method for a boundless cache. This method maintains its cache
using Werkzeug locals; therefore, it's invalidated at the end of the
lifecycle of a request in our WSGI app.
Cache keys are a function of func.__name__, func.__module__, and the
passed args & kwargs. Key generation will succeed only if args and
kwargs are hashable.
Usage:
    from frappe.utils.caching import request_cache

    @request_cache
    def calculate_pi(num_terms=0):
        import math, time

        print(f"{num_terms = }")
        time.sleep(10)
        return math.pi

    calculate_pi(10)  # will calculate value
    calculate_pi(10)  # will return value from cache