* perf: No need to set expiry for key every time
* fix: Set expiry on first request and never again
This prevents the problem of rate limiter keys growing indefinitely.
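The expiry-on-first-request idea can be sketched in memory (a hypothetical stand-in for Redis `INCR`/`EXPIRE`; class and key names are illustrative, not the actual rate limiter code):

```python
import time

class ExpiringCounter:
    """In-memory sketch of the pattern described above: increment the key on
    every request, but set its expiry only when the key is first created, so
    the window doesn't keep sliding forward on every hit."""

    def __init__(self):
        self._store = {}  # key -> [count, expires_at]

    def hit(self, key, window_seconds):
        now = time.time()
        entry = self._store.get(key)
        if entry is None or entry[1] <= now:
            # First request in this window: create the key AND set expiry, once.
            self._store[key] = [1, now + window_seconds]
            return 1
        # Later requests only increment; the expiry is never touched again,
        # so the key is still deleted at the end of the original window.
        entry[0] += 1
        return entry[0]

limiter = ExpiringCounter()
print(limiter.hit("rl:1.2.3.4", 60))  # -> 1
print(limiter.hit("rl:1.2.3.4", 60))  # -> 2
```

With Redis, the equivalent is to call `EXPIRE` only when `INCR` returns 1.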
Call it a new year's resolution.
`frappe/__init__.py` has grown crazy big over time, and left unattended it will continue to grow.
This new ~test~ tax will require removing 3 lines per day (so ~1000 in a year) from 1st Jan 2025 onwards. I am offering a head start of 50 days in this PR by moving ~150 lines: #28869
This is a middle ground between caching it completely (requiring a
restart/signal to reload) and always reloading it.
I don't know of any use case this can break; nowhere in the code
should configs be expected to reload instantly.
This change only applies to requests for now.
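A minimal sketch of that middle ground, assuming a request-local holder that is discarded at the end of each request (names are illustrative, not Frappe's actual API):

```python
# Hypothetical request-local holder; in a real server this would be a
# thread/greenlet local that is reset at the end of every request.
class RequestLocal:
    pass

local = RequestLocal()

def load_config_from_disk():
    # Stand-in for actually reading and parsing the config file.
    return {"db_name": "demo"}

def get_config():
    # Reload at most once per request: later calls within the same request
    # reuse the cached copy, and the next request reloads from disk.
    if not hasattr(local, "config"):
        local.config = load_config_from_disk()
    return local.config

print(get_config() is get_config())  # -> True: cached within the "request"
del local.config                     # simulates the request ending
```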
This makes `maxsize` deterministic, so the memory cost of using this
function can be estimated.
E.g. if I want to cache site config and would prefer to keep the 16 most
recent site configs in memory, there's currently no way to do that. A
site-specific maxsize means that with 1000 sites on a bench I'll have
1000 keys in the cache.
This change makes the behaviour similar to `lru_cache`, which is how I
thought it worked TBH.
There is no guarantee that setup_cache is called only once. This PR adds
a mutex lock to ensure that only one thread gets to create the connection.
If two threads arrive at the same time, one of them is blocked until the
connection is set up.
So far this hasn't been an issue because the "orphan" connection would
just get garbage collected, but if you set up any kind of listener on it
or keep a reference to it, it will keep running forever, hurting performance.
This has only a small performance impact on the first request, which sets
up the connection; in the absence of contention the lock has almost no
overhead. I make up for it by eliminating one function call :pinch:
RESP3 has PUSH support, which is useful for implementing client-side
caching. I'm enabling this before I work on that, to test whether
anything breaks with it.
No need to do this for the background jobs instance just yet; accesses
are infrequent and performance doesn't matter as much there.
* perf: reuse current time
now_datetime is site-timezone-aware; we don't need that here.
* perf: don't need redis transactions
* perf: use `time.time()` instead of datetime
Using `datetime.timestamp()` is a roundabout way to get `time.time()`,
with the extra cost of dealing with datetime objects and timezones.
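The equivalence is easy to see; both calls read the same clock, but the datetime route builds an aware datetime object and converts it back first:

```python
import time
from datetime import datetime, timezone

# Both yield a POSIX timestamp, but the datetime route allocates an aware
# datetime and performs timezone conversion that time.time() skips entirely.
t_fast = time.time()
t_slow = datetime.now(timezone.utc).timestamp()
print(abs(t_fast - t_slow) < 1.0)  # -> True: same clock, cheaper call
```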
* perf: define slots for rate_limiter
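A sketch of what declaring slots buys (attribute names are illustrative, not necessarily the real `RateLimiter` fields):

```python
class RateLimiter:
    # __slots__ removes the per-instance __dict__, shrinking every instance
    # and making attribute access slightly faster.
    __slots__ = ("limit", "window", "counter")

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.counter = 0

rl = RateLimiter(limit=100, window=3600)
print(hasattr(rl, "__dict__"))  # -> False: no dict allocated per instance
```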
* fix!: Remove `Used` rate limit header
This header just shares how much was consumed by the current request;
people can simply time their requests to get an approximation of it, and
I'm not sure why it is useful.
* upstream/develop: (1373 commits)
perf: cache dynamic links map in Redis (#28878)
fix: Never query `flag_print_sql` in `developer_mode=0` (#28884)
fix(restore): remove MariaDB view security definers
fix: sanitize user input during setup wizard
feat(sanitize_column): improve check
refactor: make optimizations.py private entirely (#28872)
fix(site_cache): site cache thread safety (#28870)
chore(printview): change error message
perf: speedup `frappe.call` by ~8x (#28866)
test: reduce noise in test output (#28862)
chore: spelling_invalid_values (#28858)
fix: Remove misleading os.O_NONBLOCK flag (#28859)
fix: string replacement in error logger
perf(gthread): Pin web workers to a single core (#28854)
fix: MariaDBDatabase.get_tables() should not query the entire database schema (#28846)
fix: add strings and fields to translation
fix: typo in test controller boilerplate
perf: faster add_to_date (#28843)
perf(version): Make get_versions fast for autoincrement doctypes (#28847)
refactor: log in monitor as well
...
Note about correctness: once a site has seen enough usage, this map will
rarely change, so the problem of "cache inconsistency" is very rare.
Still, care is taken to avoid possible cache inconsistencies.
Unnecessary overhead, and I need to disable this every time I want to get
realistic performance numbers.
All the performance-affecting toggles should be directly controlled by
`developer_mode` alone.
Identified two cases where the site cache can break:
1. Another thread clears the cache using clear_cache, because of TTL or
manual eviction.
2. Another thread pops the element we are about to read, because of the
`maxsize` limit.
This change should fix both and even makes it a little faster.
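Both races come from doing a membership check and a separate read; a single `dict.get` with a sentinel avoids the window between them (a sketch, not the actual site-cache code):

```python
_MISSING = object()  # sentinel: distinguishes "absent" from a cached None
cache = {}

def get_cached(key, generate):
    # One atomic read instead of `if key in cache: return cache[key]`,
    # which can raise KeyError if another thread evicts the key in between
    # the check and the read.
    value = cache.get(key, _MISSING)
    if value is _MISSING:
        value = cache[key] = generate()
    return value

print(get_cached("a", lambda: 1))  # -> 1 (computed and stored)
print(get_cached("a", lambda: 2))  # -> 1 (served from cache)
```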