* fix: Avoid Snapshot violation
- Main thread created and "read" a user row
- Another thread modified something in the meantime
- Main thread then tried to delete or "write" to the same row
This violates snapshot isolation.
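The race can be reproduced outside the framework with two plain SQLite connections (a minimal sketch, assuming a WAL-mode database file; the table and column names are made up):

```python
import os
import sqlite3
import tempfile

# Connection "a" plays the main thread, "b" the other thread.
path = os.path.join(tempfile.mkdtemp(), "app.db")
a = sqlite3.connect(path, isolation_level=None, timeout=0.1)
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE user (id INTEGER PRIMARY KEY, name TEXT)")
a.execute("INSERT INTO user VALUES (1, 'alice')")
b = sqlite3.connect(path, isolation_level=None, timeout=0.1)

a.execute("BEGIN")  # main thread opens a transaction...
a.execute("SELECT name FROM user WHERE id = 1").fetchone()  # ...and "reads" the user (snapshot taken here)

b.execute("UPDATE user SET name = 'bob' WHERE id = 1")  # other thread modifies and commits

snapshot_error = None
try:
    # Main thread now "writes" to the same row: its snapshot is stale,
    # so SQLite refuses with SQLITE_BUSY_SNAPSHOT ("database is locked").
    a.execute("DELETE FROM user WHERE id = 1")
except sqlite3.OperationalError as exc:
    snapshot_error = exc
a.execute("ROLLBACK")
```

Treating this error like a deadlock (retry the whole transaction) works because re-running the read against the fresh snapshot resolves the conflict.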
* fix: treat snapshot violation as deadlock for now
* test: handle snapshot violations
(WAL was already enabled in the file itself; there is no harm in setting it here too for consistency.)
Reference: 45622c7d54
Co-authored-by: 18alantom <2.alan.tom@gmail.com>
Signed-off-by: Akhil Narang <me@akhilnarang.dev>
* feat: mysqlclient
* fix: update error attrs
* fix: decode mogrified query to unicode
* fix: do some cleanup
* chore: disable cleanup for now
* fix: remove unnecessary call to as_unicode
* test: skip perf test for now
* fix: fallback to empty str
* fix: unbuffered cursor support
* fix: update converters and other changes
* fix: add cleanup back
* perf: improve timedelta converter
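For context, a TIME value maps to a Python `timedelta`, which has to be rendered as `H:MM:SS`. A minimal sketch of one direction of such a converter (hypothetical name, not the actual implementation), using integer `divmod` arithmetic rather than `str()` round-tripping:

```python
from datetime import timedelta

def timedelta_to_time_literal(td: timedelta) -> str:
    # Hypothetical converter sketch: build H:MM:SS[.ffffff] directly
    # with divmod instead of formatting via str() and re-parsing.
    total_seconds = int(td.total_seconds())
    hours, remainder = divmod(total_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    if td.microseconds:
        return f"{hours}:{minutes:02d}:{seconds:02d}.{td.microseconds:06d}"
    return f"{hours}:{minutes:02d}:{seconds:02d}"
```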
* fix: don't attempt to run query when explain flag is set
* test: cleanup tests
* chore: remove commented code
* perf: store conf as local var
* chore: ensure sequence
---------
Co-authored-by: Ankush Menat <ankush@frappe.io>
* fix: always persist all indexes added via db.add_index
* fix: Add `if not exists` clause for index creation
This allows a replica to already have the index and the master to add it
later without causing an SQL error. Just a minor DX benefit.
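Illustrated with SQLite (which shares the clause; the actual change targets the SQL the framework emits), re-running the index DDL becomes idempotent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tabItem (item_code TEXT)")  # hypothetical table

# Without IF NOT EXISTS the second run would raise "index ... already exists";
# with it, replaying the DDL (e.g. on a replica that already has the index) is a no-op.
for _ in range(2):
    conn.execute("CREATE INDEX IF NOT EXISTS item_code_index ON tabItem (item_code)")

index_names = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'")]
```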
* fix(postgres): don't cache if table doesn't exist
* chore: revert postgres changes
Hopeless to maintain this
Certain tables contain a LOT of duplicate data, so it makes sense to enable
the compressed row format on them by default. I've seen a 5-10 fold reduction
in DB size after enabling the compressed format on a select few tables.
This has some performance overhead:
- both compressed and uncompressed pages live in the buffer pool.
- CPU time is spent compressing/decompressing pages.
Note:
- These cons don't apply much to the DocTypes I am enabling this for.
- I am not enabling this on existing sites; migration can take a long
  time! Do it manually with the `transform-database` command if you want to.
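For reference, enabling the compressed row format on one table looks like this in MariaDB/MySQL (the table name here is hypothetical; InnoDB with `innodb_file_per_table` is assumed):

```sql
-- One-off, per-table migration; can be slow on large tables.
ALTER TABLE `tabVersion` ROW_FORMAT=COMPRESSED;
```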
MySQL seems to raise a different error based on which state the connection
was in when the remote end closed it. Either way, this should guard against
both kinds of failure.
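A sketch of what such a guard might check (helper name hypothetical): the client reports a dropped connection as error 2006, "MySQL server has gone away", or 2013, "Lost connection to MySQL server during query", depending on whether the connection was idle or mid-query.

```python
# MySQL client error codes for a dropped connection:
#   2006 CR_SERVER_GONE_ERROR - connection was idle when the server closed it
#   2013 CR_SERVER_LOST       - connection dropped mid-query
LOST_CONNECTION_CODES = {2006, 2013}

def is_connection_lost(exc_args: tuple) -> bool:
    # MySQLdb/pymysql attach an (errno, message) tuple to OperationalError.args
    return bool(exc_args) and exc_args[0] in LOST_CONNECTION_CODES
```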
* perf: Reduce penalty for lack of redis connection
If redis isn't running, then this client cache is slower than the default
implementation because of the extra locking overhead.
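One way to avoid that penalty (a sketch under assumptions, not the actual change): decide once whether the backend is reachable, and make the disabled path a plain pass-through with no lock.

```python
import threading

class MaybeClientCache:
    # Hypothetical sketch: probe the cache backend once at init; if it is
    # down, skip the client cache (and its lock) entirely.
    def __init__(self, backend_alive: bool):
        self.enabled = backend_alive
        self._lock = threading.Lock()
        self._store = {}

    def get_value(self, key, generator):
        if not self.enabled:
            return generator(key)  # no locking overhead when redis is absent
        with self._lock:
            if key not in self._store:
                self._store[key] = generator(key)
            return self._store[key]
```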
* test: update perf redis counts
* perf: cache table columns in client-cache
* fix: race condition on cache/client_cache init
Rare, but apparent in synthetic benchmarks: if the cache is set while the
client cache is still being initialized, the request will fail.
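The usual fix is double-checked initialization, sketched here with hypothetical names: late arrivals wait on the lock instead of seeing a half-built cache.

```python
import threading

class ClientCache:
    def __init__(self):
        self._data = None
        self._lock = threading.Lock()

    def get_store(self):
        if self._data is None:           # fast path: no lock once initialized
            with self._lock:
                if self._data is None:   # re-check under the lock
                    self._data = {}      # expensive init stands in here
        return self._data
```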
* perf: Don't run notifications when loading document
WHAT?
* fix: use cached doc to repopulate
* perf: reduce get_meta calls
Unnecessary overhead, and I need to disable this every time I want to get
realistic performance numbers out.
All the performance-affecting toggles should be controlled directly by
`developer_mode` alone.