- Somewhat confuses the query planner (not sure why it isn't smart enough to handle this itself, but there are probably edge cases where it can't be done).
- `null != null` and `'' != null` both evaluate to `null`, which is falsy, so those rows won't be shown in results (see the sketch below).
Alternate fix to https://github.com/frappe/frappe/pull/21817
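A minimal sketch of the pitfall, using a hypothetical `status` column on `tabToDo`; the actual queries the query builder generates may differ:

```python
import frappe

# Naive translation of a `!=` filter: rows where `status` IS NULL disappear,
# because `NULL != ''` evaluates to NULL, not TRUE.
naive = frappe.db.sql("select name from `tabToDo` where status != ''")

# Coalescing NULL to '' first keeps those rows in the comparison
# (ifnull on MariaDB; coalesce would be the portable equivalent).
fixed = frappe.db.sql("select name from `tabToDo` where ifnull(status, '') != ''")
```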
* fix: Remove re cache internals manipulation
* fix: Purge re cache after module loads
An empty cache works better here because we already have our pre-compiled
patterns at the top level of every module. This leaves the cache open
for dynamically generated patterns, which are in better need of it. Over
time, workers would converge to this anyway; this change only reduces
the cache hit and eviction effort.
I'd improve this by executing `re.purge` on every module import, but the
complexity tradeoff isn't worth it. I'd prefer if `re` didn't cache patterns
generated by `re.compile`, but I don't see this behaviour or any escape
hatches, so this will have to do for now.
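For illustration, a minimal sketch of the pattern, assuming a hook point (`purge_re_cache_after_boot` is a hypothetical name) that runs once after app modules are imported:

```python
import re

# Patterns compiled at module import time hold their own compiled objects,
# so they do not depend on re's internal cache.
WORD_RE = re.compile(r"\w+")  # hypothetical module-level pattern

def purge_re_cache_after_boot():
    # re.purge() empties the internal cache used when string patterns are
    # passed to re.match/re.search/re.compile; pre-compiled objects like
    # WORD_RE are unaffected. Calling this once after boot leaves the cache
    # free for dynamically generated patterns.
    re.purge()
```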
* build(deps): update redis client to v4 in legacy mode
* fix: node17+ - prefer ipv4
* chore: use redis client v4 api (async) and adapt error handling
* fix: timeout by exiting if not in watch mode
* fix: parse message before republishing
---------
Co-authored-by: Ankush Menat <ankush@frappe.io>
* fix: procure db config from single authority
Ensures that configuration is uniformly procured from local.conf
instead of relying on hard-to-audit multilevel fallback logic.
Implementation Note:
- `get_db(host, port, user, password)` was stripped of all optional
arguments and the resulting errors were fixed.
- All occurrences matching `grep 'frappe.db.db_'` were changed to
`frappe.conf.db_`, as sketched below.
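A minimal sketch of the idea, assuming the usual `db_*` keys exposed via `frappe.conf` (exact key names may differ by setup):

```python
import frappe

def get_db_connection_args():
    # Read connection settings only from site config (frappe.conf),
    # not from attributes hung off the live `frappe.db` object.
    return {
        "host": frappe.conf.db_host,
        "port": frappe.conf.db_port,
        "user": frappe.conf.db_name,      # assumption: db user defaults to db name
        "password": frappe.conf.db_password,
    }
```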
* fix: revert unnecessary breaking changes
We eagerly fetch shared documents for ANY `get_list` query, even when
the user has full read access to the doctype, where it's moot to consider
shared documents separately.
This eliminates one entire DB call from `get_list`; in most cases
`get_list` will then translate to a single DB call, so it's probably worth
the additional complexity.
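Roughly, the idea looks like this (a sketch only, assuming `frappe.share.get_shared` and a simplified permission check; the real check in the framework is more involved):

```python
import frappe
from frappe.share import get_shared

def get_list_names(doctype, filters=None, user=None):
    user = user or frappe.session.user
    names = frappe.get_all(doctype, filters=filters, pluck="name")

    # Only issue the second "shared with me" query when the user does NOT
    # already have unrestricted read access to the doctype; otherwise the
    # shared records are already part of the main result.
    if not frappe.has_permission(doctype, "read", user=user):
        names += get_shared(doctype, user)
    return names
```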
Users rarely scroll past 20 results.
Most users end up typing more characters to narrow down the results, so
on large tables we end up reading significantly fewer rows.
Relational DBs keep reading and filtering rows one by one until the limit
is hit, so the smaller the limit, the better.
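For example, a link-field-style search via `frappe.get_all` with a small page size (hypothetical doctype and filter):

```python
import frappe

# The DB reads and filters rows only until it has 20 matches, so this
# touches far fewer rows on a large table than a higher limit would.
results = frappe.get_all(
    "Customer",                                    # hypothetical doctype
    filters={"customer_name": ("like", "%acme%")},
    limit_page_length=20,
    order_by="modified desc",
)
```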