* fix: always persist all indexes added via db.add_index
* fix: Add `if not exists` clause for index creation
This allows a replica to have the same index while the master adds it
later, without causing an SQL error. Just a minor DX benefit.
* fix(postgres): don't cache if table doesn't exist
* chore: revert postgres changes
Hopeless to maintain this
Certain tables contain A LOT of duplicate data, so it makes sense to enable
the compressed row format on them by default. I've seen a 5-10 fold reduction
in DB size after enabling the compressed format on a select few tables.
This has some performance overhead:
- both compressed and uncompressed pages live in the buffer pool.
- compression/decompression work
Note:
- These cons don't apply much to the DocTypes I am enabling this for.
- I am not enabling this on existing sites; migration can take a long
time! Do it manually with the `transform-database` command if you want to.
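The per-table switch boils down to a plain DDL statement. A minimal sketch for MariaDB/InnoDB follows; the table names and the 8 KB `KEY_BLOCK_SIZE` are illustrative assumptions, not the actual list or settings from this change:

```python
# Sketch: build the MariaDB/InnoDB DDL that enables the compressed row
# format on a few high-duplication tables. Names here are illustrative only.
def compression_ddl(table: str) -> str:
    # KEY_BLOCK_SIZE is optional; 8 KB is a common starting point.
    return f"ALTER TABLE `{table}` ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8"

for table in ("tabVersion", "tabError Log"):
    print(compression_ddl(table))
```

Running these statements rewrites the table, which is why doing it during migration on large existing sites would be slow.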
* fix: return False if attached doctype is not found
* fix(UX): add an open URL button in file form
* fix(typo): add translate to string
* fix: check if it is absolute url
* fix: return false on module not found error
* fix: remove whitespace in restrict ip in validate
* fix: added check for request_ip
* fix: return if no restrict ip
* fix: set to localhost if none, refactor validate_ip_addr
* fix: validate ip_address cleanup and removed unnecessary comments
* fix: validate ip_addr cleanup
* fix: remove unnecessary check
Steps to reproduce:
- enable developer mode (doesn't happen in prod)
- Save a document with set only once fields
- Reload the page (requests meta again which is now polluted)
This is a new category of bug, surfaced because meta objects now live
longer than a request and all kinds of weird `self._cached_property`
values start getting serialized.
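The hazard can be reproduced with plain `functools.cached_property` (a minimal sketch; `Meta` and `set_once_fields` are illustrative names, not the framework's actual classes):

```python
from functools import cached_property

class Meta:
    @cached_property
    def set_once_fields(self):
        # Expensive lookup; the result is stored in the instance __dict__,
        # so any later serialization of the object carries it along.
        return ["naming_series"]

m = Meta()
m.set_once_fields                       # first access populates the cache
print("set_once_fields" in m.__dict__)  # True: cached value now rides with the object
```

Once such an object outlives a single request, the cached value leaks into whatever the next request reads back.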
Co-Authored-By: ruthra kumar <ruthra@erpnext.com>
Currently it's a full table scan, and on a TEXT field filter at that.
It's used for finding file docs when `fid` isn't specified. No idea
where we are STILL having private file URLs without fids.
In any case, this is still required.
* perf: Reduce penalty for lack of redis connection
If Redis isn't running, then this client cache is slower than the default
implementation because of the extra locking overhead.
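One way to reduce that penalty is to remember that the connection failed and bypass the locking path on later calls. This is a hypothetical sketch, not the framework's actual code; all names are illustrative:

```python
class ClientCache:
    """Sketch: fall back to the plain implementation once Redis is known down."""

    def __init__(self, redis_get, plain_get):
        self.redis_get = redis_get      # may raise ConnectionError
        self.plain_get = plain_get      # default, lock-free implementation
        self.redis_down = False

    def get(self, key):
        if self.redis_down:
            return self.plain_get(key)  # skip locking / round-trip overhead
        try:
            return self.redis_get(key)
        except ConnectionError:
            self.redis_down = True      # don't pay the penalty again
            return self.plain_get(key)

def failing_redis(key):
    raise ConnectionError("redis is not running")

cache = ClientCache(failing_redis, plain_get=lambda key: f"db:{key}")
print(cache.get("a"))     # db:a (first call hits the error, then falls back)
print(cache.redis_down)   # True
```

The first failed call pays the connection error once; every subsequent call goes straight to the default path.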
* test: update perf redis counts
* perf: cache table columns in client-cache
* fix: race condition on cache-client_cache init
Rare, but apparent in synthetic benchmarks.
If the cache is set while the client cache is still being initialized,
the request will fail.
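That race can be closed by guarding initialization and access with the same lock, so no reader ever observes a half-initialized cache. A generic sketch using `threading.Lock`; the class and method names are illustrative, not the framework's:

```python
import threading

class ClientCache:
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = False
        self._store = {}

    def initialize(self):
        with self._lock:
            self._store = {}
            self._ready = True

    def get(self, key):
        # The lock means a concurrent get/set can't see the cache mid-init;
        # before init finishes we simply report a miss instead of failing.
        with self._lock:
            if not self._ready:
                return None
            return self._store.get(key)

    def set(self, key, value):
        with self._lock:
            if self._ready:
                self._store[key] = value

cache = ClientCache()
print(cache.get("x"))    # None: safe miss instead of a failing request
cache.initialize()
cache.set("x", 1)
print(cache.get("x"))    # 1
```

Treating "not ready" as a miss trades one cold lookup for never throwing during the init window.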
* perf: Don't run notifications when loading document
WHAT?
* fix: use cached doc to repopulate
* perf: reduce get_meta calls
These deletes aren't user-triggered, and these documents are typically
never "linked" anywhere else. So skip all the expensive link /
dynamic link checks.