* fix: procure db config from single authority
Ensures that configuration is uniformly procured from local.conf
instead of relying on hard-to-audit, multi-level fallback logic.
Implementation Note:
- `get_db(host, port, user, password)` was stripped of all optional
arguments, and the errors that resulted were fixed.
- All occurrences matching `grep 'frappe.db.db_'` were changed to
`frappe.conf.db_` (sketched below).
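A minimal sketch of the resulting call pattern, assuming `frappe.conf` exposes the usual `db_host`/`db_port`/`db_user`/`db_password` keys and that `get_db` lives in `frappe.database` (both assumptions beyond what the notes above state):

```python
# Hypothetical sketch: every DB setting is read from frappe.conf (the
# single config authority), never from layered fallbacks. The exact
# frappe.conf.db_* key names are assumptions.
import frappe
from frappe.database import get_db  # assumed import path

def connect_to_site_db():
    return get_db(
        host=frappe.conf.db_host,
        port=frappe.conf.db_port,
        user=frappe.conf.db_user,
        password=frappe.conf.db_password,
    )
```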
* fix: revert unnecessary breaking changes
- Build version hasn't been computed correctly since the v14 update of the
build system, which renders the client-side cache useless.
- We clear the cache assuming rapid reloads, but opening a new tab also
triggers a clear, making the cache effectively useless for most users.
* fix: checkpoint the supported schemes for connectivity
This PR implements a gateway + error that clearly points the operator to
a misconfigured system at runtime.
In particular, given the multiple library-provided ways of configuring
Redis connection strings (in Python), this hard-stops if an unsupported
one is chosen by accident.
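An illustrative sketch of such a hard stop (not the actual gateway code; the accepted scheme set is an assumption):

```python
# Reject Redis connection strings with unsupported schemes up front,
# instead of letting the client library guess at runtime.
from urllib.parse import urlparse

SUPPORTED_SCHEMES = {"redis", "rediss", "unix"}  # assumed allow-list

def validate_redis_url(url: str) -> str:
    scheme = urlparse(url).scheme
    if scheme not in SUPPORTED_SCHEMES:
        raise ValueError(
            f"Unsupported Redis scheme {scheme!r} in connection string; "
            f"expected one of: {', '.join(sorted(SUPPORTED_SCHEMES))}"
        )
    return url
```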
* fix: remove unknown protocol
---------
Co-authored-by: Ankush Menat <ankushmenat@gmail.com>
This avoids having to manipulate config files in brittle bash
entrypoints that need to react to dynamic service discovery.
This significantly improves the operability of various bench sites.
* refactor!: Drop currentsite.txt
- `bench use` will continue to work.
- Instead of the txt file, use common_site_config to set the default site
via the `default_site` key.
- The `FRAPPE_SITE` environment variable also works (see the sketch after
this list).
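A hypothetical sketch of the resolution order (the env var taking precedence over `default_site` is an assumption; the actual lookup lives elsewhere):

```python
# Resolve the default site without currentsite.txt: check FRAPPE_SITE
# first, then fall back to default_site in common_site_config.json.
import json
import os

def resolve_default_site(bench_path: str = ".") -> str | None:
    site = os.environ.get("FRAPPE_SITE")
    if site:
        return site
    config_path = os.path.join(bench_path, "sites", "common_site_config.json")
    with open(config_path) as f:
        return json.load(f).get("default_site")
```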
* fix(DX): warn if non-empty currentsite.txt is present
* fix: reduce retries for rq connection
10 seconds of retries when the connection isn't available is too long;
failing fast may be the better option.
- BusyLoadingError only occurs while Redis is restarting.
- ConnectionError mostly means Redis is dead; no amount of retries will
bring it back (see the sketch below).
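A sketch with redis-py's retry API (backoff and retry counts are illustrative, not the values used here):

```python
# Retry only while Redis is loading its dataset (BusyLoadingError);
# plain ConnectionError fails fast instead of retrying for ~10 seconds.
from redis import Redis
from redis.backoff import ConstantBackoff
from redis.exceptions import BusyLoadingError
from redis.retry import Retry

conn = Redis.from_url(
    "redis://localhost:6379",
    retry=Retry(ConstantBackoff(0.5), retries=2),  # short, bounded retries
    retry_on_error=[BusyLoadingError],  # ConnectionError is not retried
)
```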
* fix: don't retry if redis is down for realtime
RQ now has experimental support for workerpools.
When to use this?
Roughly, once you have more than 2 workers a workerpool might make
sense; below that it's overhead, since the master "pool" process has to
run just to manage the workerpool itself.
Why is it any better?
Currently we just let supervisor duplicate the worker process N times,
which is inefficient from a shared-memory point of view. Forking the
original process to create workers lets them share more memory, leading
to upwards of a 60-70% reduction in memory usage with a pool size of 8
workers.
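A sketch of what running under the pool looks like, assuming RQ's experimental `rq.worker_pool.WorkerPool` API (RQ >= 1.14); the actual bench wiring differs:

```python
# One master "pool" process forks num_workers workers, so they share
# memory with the master instead of each loading everything separately.
from redis import Redis
from rq import Queue
from rq.worker_pool import WorkerPool

conn = Redis()
queues = [Queue("default", connection=conn)]

pool = WorkerPool(queues, connection=conn, num_workers=8)
pool.start()
```

The equivalent CLI, per RQ's docs, is `rq worker-pool default --num-workers 8`.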
BG worker forks are not CoW-friendly. Freezing right before we start the
worker should lessen overall memory usage, though this isn't very useful
on its own because at most you're sharing with 2 processes: master and
horse. WorkerPool can improve this a lot by forking each worker from the
master process and each horse from a forked worker. TBD when WorkerPool
is out of beta.
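A sketch of the freeze-before-fork idea (the hook point is an assumption; only `gc.collect()`/`gc.freeze()` are standard-library calls):

```python
# Make the pre-fork heap CoW-friendly: collect garbage, then freeze the
# survivors into the permanent generation so the child's first GC pass
# doesn't dirty (and thus copy) shared pages.
import gc

def start_worker(worker) -> None:
    gc.collect()   # drop garbage so it isn't frozen alongside live objects
    gc.freeze()    # move tracked objects to the permanent generation
    worker.work()  # forked horses now share the frozen heap with the master
```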
This reverts commit 71b44efcac.
This gets frequently imported from one place or another. Since with
gc.freeze we can mostly reuse the import from the parent process, let's
just leave it here.