* feat: global `frappe.in_test` flag
* feat: helper utility to toggle `frappe.in_test`
* fix: use `toggle_test_mode` util
* fix: use `frappe.in_test`
* chore: add comment explaining global `in_test`
* chore: ignore commit replacing flag usage
* test: temporarily disable `frappe.in_test`
This worked earlier because the flag was set in `werkzeug.local`, which was a separate context for the API test client.
* test: add comment explaining change
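A minimal sketch of what the flag and the toggle helper could look like; the name `toggle_test_mode` is taken from the commit title, but the body below is an assumption, not the actual implementation:

```python
import frappe

def toggle_test_mode(enable: bool) -> None:
    # Assumption: frappe.in_test is a plain module-level boolean, unlike the
    # old werkzeug.local-based flag, so the API test client sees it too.
    frappe.in_test = enable
```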
Right now if you do this:
`get_doc(doctype, name, field=value)`, it loads the document from the DB and
then updates the field to the given value.
This is an absurd API and I don't think it was ever used intentionally.
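A hedged illustration of the surprising behavior (the doctype, name, and field below are made up):

```python
import frappe

# The document is loaded from the DB, then `status` is silently overwritten
# in memory, which is almost certainly not what the caller intended.
doc = frappe.get_doc("ToDo", "abc123", status="Closed")
```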
Not all single doctypes are settings, so this is better. Implicit
caching is fine; the same is done for `db` APIs on singles. We *should* aim
for 100% correctness of the caching implementation, especially for singles.
Thanks to @netchampfaris for the suggestion.
* feat: get_settings
get_cached_value doesn't work well with singles because you either need
to pass `None` or repeat the doctype name... both are awkward and easy to
shoot yourself in the foot with.
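A hedged sketch of the awkwardness and of how the new helper could read; `get_settings` is the name from the commit title, but its exact signature here is a guess:

```python
import frappe

# Both existing spellings for a single doctype are easy to get wrong:
frappe.get_cached_value("System Settings", None, "enable_scheduler")
frappe.get_cached_value("System Settings", "System Settings", "enable_scheduler")

# Assumed shape of the new helper: state the doctype once, get a cached doc back.
settings = frappe.get_settings("System Settings")
settings.enable_scheduler
```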
* refactor: Use cached settings
Consider this:
```python
for row in doc.get_children():
row.db_set("amount", 0)
```
This sounds like it will do one write query per row, but it actually does two
because of this unnecessary locking of child tables.
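A rough, hypothetical sketch of why each `db_set` used to cost two queries, assuming the extra query was a row lock taken with `SELECT ... FOR UPDATE` (table and column names are illustrative, not Frappe's actual code):

```python
import frappe

def db_set_amount(row_name: str) -> None:
    # 1. Assumed lock acquired on the child row before writing:
    frappe.db.sql(
        "SELECT `name` FROM `tabChild Table` WHERE `name` = %s FOR UPDATE",
        (row_name,),
    )
    # 2. The write the caller actually asked for:
    frappe.db.sql(
        "UPDATE `tabChild Table` SET `amount` = 0 WHERE `name` = %s",
        (row_name,),
    )
```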
We recently applied a limit on how many links can be buffered. That
pretty much "samples" only the records created at the start of the hour.
This change makes the buffer flush 4x more frequently and samples 10% of the
input to reduce updates. Again, statistically this serves the same purpose.
Context: This is a QoL feature to highlight the most used items. But in high-throughput
environments where a lot of new documents are being created,
this becomes a bottleneck.
Fix: Limit the number of counts that can be buffered before they're flushed.
Statistically this will still work just as well as it did before.
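A minimal sketch of the sampling-and-flushing idea; the counter structure, flush limit, and sampling rate below are illustrative, not the values used in the commit:

```python
import random
from collections import Counter

FLUSH_LIMIT = 100   # illustrative: force a flush once the buffer grows this large
SAMPLE_RATE = 0.1   # record roughly 10% of link usages

_link_counts: Counter = Counter()

def record_link_usage(doctype: str, name: str) -> None:
    # Sample the input instead of counting every usage; statistically the
    # most-used items still rise to the top.
    if random.random() > SAMPLE_RATE:
        return
    _link_counts[(doctype, name)] += 1
    if len(_link_counts) >= FLUSH_LIMIT:
        flush_link_counts()

def flush_link_counts() -> None:
    # In the framework this would persist the buffered counts; here it
    # just clears the buffer.
    _link_counts.clear()
```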
Consider any "items" table in ERPNext: the same item and warehouse are used
tens of times in a single document, and every one of them triggers a fetch-from
query, even though the value can't possibly change during the DB transaction.
This change now caches the fetched value too.
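A hedged sketch of caching fetched values for the duration of a transaction; the cache location, key shape, and helper names are assumptions:

```python
import frappe

# Hypothetical per-transaction cache for "fetch from" lookups,
# keyed by (doctype, docname, fieldname).
_fetch_cache: dict = {}

def get_fetch_value(doctype: str, docname: str, fieldname: str):
    key = (doctype, docname, fieldname)
    if key not in _fetch_cache:
        # Only the first row pays for the query; repeated items/warehouses
        # in the same document reuse the cached value.
        _fetch_cache[key] = frappe.db.get_value(doctype, docname, fieldname)
    return _fetch_cache[key]

def clear_fetch_cache() -> None:
    # Must be cleared on commit/rollback, since values can change
    # between transactions.
    _fetch_cache.clear()
```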
* fix: reduce bulk insert batch size
Back when this feature was added, it lazily evaluated the input.
Now the iterator is consumed upfront, so large batch sizes mean huge memory usage.
* perf: bring back iterator for bulk_insert
Bulk insert used to support an iterator, so it could consume and insert
arbitrarily large amounts of data. Since child table support was added, it
can't do that anymore because child tables require collecting the values.
This change brings iterators back by batching the input iterator
(1000 documents per batch by default).
From a design POV this is almost as good as the original. Performance
is still meh for flat documents.
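A minimal sketch of the batching approach using `itertools.islice`; the batch size and helper names are illustrative and the real `bulk_insert` signature may differ:

```python
from itertools import islice
from typing import Iterable, Iterator, List

def batched(docs: Iterable[dict], batch_size: int = 1000) -> Iterator[List[dict]]:
    # Consume the input iterator in fixed-size chunks so an arbitrarily
    # large input never has to be materialized all at once.
    it = iter(docs)
    while batch := list(islice(it, batch_size)):
        yield batch

# Usage sketch: each batch is small enough to collect values for
# (including child rows) before inserting it.
for batch in batched(({"name": f"DOC-{i}"} for i in range(10_000)), batch_size=1000):
    pass  # insert `batch` here via the framework's bulk insert
```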