* perf: Reduce penalty for lack of redis connection
If redis isn't running, then this client cache is slower than the default
implementation because of the extra locking overhead.
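A minimal sketch of the idea, with illustrative names rather than Frappe's actual code: after the first connection failure, reads short-circuit past the local cache (and its lock) instead of paying that overhead on every call:

```python
import threading

class ClientCache:
    """Illustrative in-process cache in front of redis (not the real one)."""

    def __init__(self, redis_client):
        self.redis = redis_client
        self.local = {}
        self.lock = threading.Lock()   # guards self.local
        self.redis_usable = True       # flipped off on connection failure

    def get(self, key):
        if not self.redis_usable:
            # Redis is down: short-circuit so callers fall back to the
            # default (uncached) path without any locking overhead.
            return None
        with self.lock:
            if key in self.local:
                return self.local[key]
        try:
            value = self.redis.get(key)
        except Exception:
            # Real code would match the client's specific connection error.
            self.redis_usable = False
            return None
        with self.lock:
            self.local[key] = value
        return value
```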
* test: update perf redis counts
* perf: cache table columns in client-cache
* fix: race condition on cache-client_cache init
Rare, but apparent in synthetic benchmarks.
If the cache is set while the client cache is still being initialized,
the request will fail.
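A minimal sketch of the race and one conventional fix (double-checked locking); the names are illustrative, not Frappe's actual code:

```python
import threading

_client_cache = None
_init_lock = threading.Lock()

def _build_client_cache():
    # Stand-in for the real (possibly slow) initialization.
    return {}

def get_client_cache():
    global _client_cache
    cache = _client_cache
    if cache is None:
        with _init_lock:
            # Re-check inside the lock: another thread may have finished
            # initializing while we waited, and a request must never see
            # a half-built cache.
            if _client_cache is None:
                _client_cache = _build_client_cache()
            cache = _client_cache
    return cache
```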
* perf: Don't run notifications when loading document
* fix: use cached doc to repopulate
* perf: reduce get_meta calls
Unnecessary overhead, and I need to disable this every time I want to get
realistic performance numbers.
All the performance-affecting toggles should be controlled directly by
`developer_mode` alone.
If filters are a list or dict then they aren't hashable, and there was
little reason to do this IMO.
If something is indeed cacheable, then where is the eviction for it? A
simple k:v pair is the only thing we can realistically cache here.
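A minimal sketch of that "simple k:v with eviction" stance: unhashable keys (lists, dicts, e.g. filters) are never cached, and everything that is cached sits in a bounded LRU structure. Names are illustrative, not the get_meta implementation:

```python
from collections import OrderedDict

class SimpleKVCache:
    def __init__(self, maxsize=1024):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key, default=None):
        try:
            hash(key)
        except TypeError:
            # Lists/dicts aren't hashable: never cache, never look up.
            return default
        if key in self.data:
            self.data.move_to_end(key)  # mark as recently used
            return self.data[key]
        return default

    def set(self, key, value):
        try:
            hash(key)
        except TypeError:
            return
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)  # evict least recently used
```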
* feat: Add deprecation_dumpster.py file
* docs: add jovial and jocose docstring for frappe/deprecation_dumpster.py
* refactor: fill the dumpster with its own kind
* refactor: move to the deprecation dumpster
* chore: color coding class
* fix: only check import error when import errors
* fix: mariadb ORM now returns a list instead of a tuple, as the type annotations suggest
* fix: inverted fix for pg: Expect tuple as data_type for _transform_result
* fix: Fixed failing upstream spec due to data_type change
- Events like doc.save and doc.submit need to be atomic
- Document hooks can break that atomicity.
This extends the server script behaviour, where server script hooks are
not allowed to commit/rollback.
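A minimal sketch of that constraint, with illustrative names rather than Frappe's actual API: while document hooks run, commit/rollback raise, so the surrounding save stays one atomic unit:

```python
from contextlib import contextmanager

class DB:
    def __init__(self):
        self._in_hook = False

    def commit(self):
        if self._in_hook:
            raise RuntimeError("commit is not allowed inside document hooks")
        ...  # real commit would go here

    def rollback(self):
        if self._in_hook:
            raise RuntimeError("rollback is not allowed inside document hooks")
        ...  # real rollback would go here

    @contextmanager
    def hook_guard(self):
        self._in_hook = True
        try:
            yield
        finally:
            self._in_hook = False

def save(doc, db, hooks):
    # The whole save, including hooks, is one atomic unit; hooks cannot
    # split it by committing or rolling back midway.
    with db.hook_guard():
        for hook in hooks:
            hook(doc)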
* fix: set 2 as simultaneous_sessions by default
* fix: Correct offset for simultaneous_sessions
* refactor: use freeze_time instead of patching
* chore: misleading docstring
* test: set lower simultaneous_sessions for test
If you're reading thousands of rows from MySQL, the default behaviour is
to read all of them into memory at once.
One of the use cases for reading that many rows is reporting, where a lot
of data is read and then processed in Python. Each row, however, is not
used again after processing, yet it still consumes memory until the
entire function exits.
SSCursor (Server-Side Cursor) allows fetching one row at a time.
Note: this is slower than fetching everything at once AND has a risk of
connection loss. So don't use it as a crutch; if possible, rewrite the
code so the processing is done in SQL.
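A minimal sketch of streaming reads with PyMySQL's `SSCursor`; the connection parameters, table, and `process` function are placeholders, not the framework's actual code:

```python
import pymysql
import pymysql.cursors

def process(row):
    ...  # stand-in for per-row Python processing

conn = pymysql.connect(
    host="localhost",
    user="app",
    password="...",           # placeholder credentials
    database="app",
    cursorclass=pymysql.cursors.SSCursor,  # server-side, unbuffered cursor
)
try:
    with conn.cursor() as cursor:
        cursor.execute("SELECT id, amount FROM invoices")  # placeholder query
        for row in cursor:   # rows stream one at a time; memory stays flat
            process(row)     # each row can be garbage-collected afterwards
finally:
    conn.close()
```

With the default buffered cursor the whole result set sits in memory for the duration of the loop; with `SSCursor` the connection must stay healthy until the last row is read, which is the connection-loss risk noted above.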