ERPNext updates defaults every time settings are changed, but there's no
need to do it when the value itself hasn't changed.
I don't know if there are things that depend on this weird behaviour; there
shouldn't be any, **ideally**.
Feel free to revert if it breaks something.
I don't like test fixtures at all, but breaking this one is so pointless;
I can't even re-run the ERPNext tests!
Over time, people should stop relying on hardcoded fixtures and write
utils to generate them at runtime in tests. I've migrated tons of tests
this way during my time on the ERPNext team, and those tests are far more
reliable than hardcoded ones.
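As a rough sketch of what such a runtime fixture util could look like (the `make_customer` helper, the `Customer` shape, and the field names are all hypothetical, not actual ERPNext code):

```python
import itertools
from dataclasses import dataclass

# Hypothetical sketch: a runtime fixture factory that creates a fresh,
# uniquely named record per test, instead of sharing a hardcoded fixture.
_seq = itertools.count(1)

@dataclass
class Customer:
    name: str
    territory: str = "All Territories"

def make_customer(**overrides):
    """Return a unique Customer with sensible defaults; tests override only
    the fields they actually care about."""
    fields = {"name": f"_Test Customer {next(_seq)}", **overrides}
    return Customer(**fields)

c1 = make_customer()
c2 = make_customer(territory="Rest of the World")
print(c1.name)  # unique per call, so tests never collide on shared state
```

Because each test gets its own record, tests can run in any order and be re-run without tripping over leftover fixture state.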
For doctype/user-specific cache eviction, there's no need to remove entries from site_cache.
Rationale:
- The site cache is worker-specific, so this eviction doesn't help much anyway.
- Anything that might need to be evicted from the site cache should be cleared manually or use a TTL.
Maybe we can just replace all site_cache usage with
https://github.com/frappe/frappe/pull/28992 once it's stable.
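A TTL on worker-local cache entries could look roughly like this (a generic sketch with made-up names, not the actual `site_cache` implementation):

```python
import time

# Sketch: a worker-local cache with a per-entry TTL. Stale entries expire
# on their own, so no cross-worker eviction is ever needed.
class TTLCache:
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:admin", {"roles": ["System Manager"]})
print(cache.get("user:admin"))  # fresh: returns the cached value
time.sleep(0.1)
print(cache.get("user:admin"))  # expired: returns None
```

With a short enough TTL, the worst case is briefly serving stale data in one worker, which is usually acceptable for config-like values.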
* perf: No need to set expiry for the key every time
* fix: Set expiry on the first request and never again
This prevents the problem of rate limiter keys growing constantly.
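The "set expiry only once" pattern can be sketched like this (assumed semantics in plain Python, not the actual frappe rate limiter; with Redis the equivalent would be setting EXPIRE only when INCR returns 1):

```python
import time

# Sketch: the counter's expiry is set only when the key is first created,
# so the rate-limit window has a fixed end instead of being pushed forward
# (and the counter reset deferred) by every subsequent request.
class RateLimitWindow:
    def __init__(self):
        self._counts = {}  # key -> request count in current window
        self._expiry = {}  # key -> window end (monotonic seconds)

    def hit(self, key, window_seconds=60):
        now = time.monotonic()
        if key in self._expiry and now >= self._expiry[key]:
            # window ended: drop the stale counter
            self._counts.pop(key, None)
            self._expiry.pop(key, None)
        count = self._counts.get(key, 0) + 1
        self._counts[key] = count
        if count == 1:
            # first request in this window: set expiry once, never again
            self._expiry[key] = now + window_seconds
        return count

w = RateLimitWindow()
print(w.hit("user-1"))  # 1
print(w.hit("user-1"))  # 2, and the window end is unchanged
```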
Call it a new year's resolution.
`frappe/__init__.py` has grown crazy big over time, and left unattended it will continue to grow.
This new ~test~ tax will require removing 3 lines per day (so ~1000 in a year) from 1st Jan 2025 onwards. I am offering a head start of 50 days in this PR by moving ~150 lines: #28869
This is a middle ground between caching it completely (and requiring a
restart/signal to reload) and always reloading it.
I don't know of any use case that can break from this; nowhere in the code
should configs be expected to reload instantly.
This change is only applied to requests for now.
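The middle ground can be sketched as a TTL-based loader (a generic illustration with assumed details, not the actual frappe implementation):

```python
import json
import os
import tempfile
import time

# Sketch: reload config from disk at most once per TTL, instead of on every
# access (slow) or never (requires a restart/signal to pick up changes).
_cache = {"config": None, "loaded_at": 0.0}
TTL = 0.05  # seconds; real code would use something much larger, e.g. 60s

def get_site_config(path):
    now = time.monotonic()
    if _cache["config"] is None or now - _cache["loaded_at"] > TTL:
        with open(path) as f:
            _cache["config"] = json.load(f)
        _cache["loaded_at"] = now
    return _cache["config"]

# demo: write a config, read it, change it, observe stale-then-fresh reads
path = os.path.join(tempfile.mkdtemp(), "site_config.json")
with open(path, "w") as f:
    json.dump({"maintenance_mode": 0}, f)
print(get_site_config(path)["maintenance_mode"])  # 0
with open(path, "w") as f:
    json.dump({"maintenance_mode": 1}, f)
print(get_site_config(path)["maintenance_mode"])  # still 0: within TTL
time.sleep(0.1)
print(get_site_config(path)["maintenance_mode"])  # 1: TTL expired, reloaded
```

The trade-off is that config edits take up to one TTL to become visible, which is fine as long as nothing expects instant reloads.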
This makes maxsize deterministic when estimating the memory cost of
using this function.
E.g. if I want to cache site config and I'd prefer to keep the 16 most
recent site configs in memory, there's currently no way to do that. A
site-specific maxsize means that if I have 1000 sites on a bench, I'll
have 1000 keys in the cache.
This change makes the behaviour similar to lru_cache, which is how I
thought it worked, TBH.
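In `functools.lru_cache` terms, the desired behaviour looks like this (a generic sketch that assumes the site name is part of the cache key; `load_site_config` is a stand-in, not a real frappe function):

```python
from functools import lru_cache

# Sketch: with the site as the cache key and a single global maxsize, at
# most 16 site configs stay in memory no matter how many sites exist on
# the bench; the least recently used entries get evicted.
@lru_cache(maxsize=16)
def load_site_config(site: str) -> dict:
    # stand-in for reading sites/{site}/site_config.json from disk
    return {"site": site}

for i in range(1000):
    load_site_config(f"site-{i}.local")

info = load_site_config.cache_info()
print(info.currsize)  # 16: deterministic memory cost, not 1000
```

A per-site maxsize, by contrast, would bound entries *per key* but leave the total number of keys (and hence memory) proportional to the number of sites.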