### BREAKING CHANGE
#### Datetime, Date and Time fields will always be cast to their respective objects in `setattr`. This ensures uniform access to the values; the `getdate`, `get_datetime`, and `to_timedelta` wrappers are no longer needed.
- While importing data, the framework checks for `set_only_once`.
- In most scenarios this works flawlessly, since date fields are rarely marked `set_only_once`.
- But in Subscription, the date field is `set_only_once`, and `after_insert` calls `document.save`; while doing so, `set_only_once` is checked [here](1944a547f9/frappe/model/document.py (L566)).
- This works fine if the imported data is in the correct format.
- If the date data is not in the correct format, the framework throws an error, e.g. `06-02-2022 00:00:00 != 06-02-2022`.
- fixes [Issue/#15370](https://github.com/frappe/frappe/issues/15370)
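A minimal sketch of the failure mode above. `cast_date` is a hypothetical stand-in for the new `setattr` casting (the real implementation lives in the framework); the `%d-%m-%Y` format is assumed from the example values:

```python
from datetime import datetime, date

def cast_date(value):
    # Hypothetical stand-in for the new behaviour: date fields are
    # always cast to datetime.date objects on setattr.
    if isinstance(value, date) and not isinstance(value, datetime):
        return value
    # Drop any time component, then parse (format assumed for this example)
    return datetime.strptime(str(value).split(" ")[0], "%d-%m-%Y").date()

# The imported value carries a time component; the stored value does not.
imported = "06-02-2022 00:00:00"
stored = "06-02-2022"

# A raw string comparison fails the set_only_once check ...
assert imported != stored
# ... but comparing the cast date objects succeeds.
assert cast_date(imported) == cast_date(stored)
```

With uniform casting, the `set_only_once` comparison sees two equal `date` objects instead of two unequal strings.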
> no-docs
refactor: clean up code to py39+ supported syntax
- f-strings instead of format
- built-in generics (`list`, `dict`) instead of pre-3.9 `typing` TitleCase aliases (`List`, `Dict`)
- remove UTF-8 declarations.
- many more changes
Powered by https://github.com/asottile/pyupgrade/ + manual cleanups
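A small before/after sketch of the kind of rewrite pyupgrade applies (the function and values here are illustrative, not from the actual diff):

```python
# Pre-3.9 style (before pyupgrade):
#   from typing import Dict, List
#   def tags(doc):  # type annotations via typing.Dict / typing.List
#       return "{0}: {1}".format(doc["name"], ", ".join(doc["tags"]))

# Py39+ style (after pyupgrade): built-in generics + f-strings.
def tags(doc: dict[str, list[str]]) -> str:
    # f-string replaces str.format
    return f"{doc['name']}: {', '.join(doc['tags'])}"

print(tags({"name": "DocType", "tags": ["a", "b"]}))  # -> DocType: a, b
```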
Given how widespread PY310's usage has become, and how we're just a
few months away from the PY311 major release, this is a slightly late
but necessary bump to ensure smoother updates and maintenance for
Frappe, ERPNext and other apps in the coming years. Almost everyone
who participated in the community poll, as well as the Frappe team,
voted (via active Telegram groups) for PY310 as the preferred minimum
requirement for v14.
Converted all possible usages of re.* that weren't compiling the regex
separately and reusing it. Separated the compiled patterns out as
global variables. Repetitive patterns could be made DRY-er.
It would be nicer to have all regexes in a single module so that we
could reuse them better, keep track of outdated ones, and check for
possible ReDoS issues, etc.
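The pattern-hoisting described above looks roughly like this (the pattern names and regexes are illustrative; the actual patterns in Frappe differ):

```python
import re

# Before: re.sub(r"\s+", " ", text) inside the function relies on re's
# internal cache and re-does a cache lookup on every call.
# After: compile once at module level and reuse the pattern object.
WHITESPACE_PATTERN = re.compile(r"\s+")
SPECIAL_CHAR_PATTERN = re.compile(r"[^A-Za-z0-9\s]")

def scrub(text: str) -> str:
    # Reuses the precompiled module-level patterns instead of re.* calls.
    text = SPECIAL_CHAR_PATTERN.sub("", text)
    return WHITESPACE_PATTERN.sub(" ", text).strip()

print(scrub("  Hello,   World!  "))  # -> Hello World
```

Hoisting the patterns also makes them easy to collect into one module later, as suggested above.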
Sped up request_cache access times multi-fold with the help of a
benchmarking script. Access times for this generic cache are comparable
to purpose-built caches (e.g. get_meta's local cache), with only ~15%
additional overhead compared to implementing the caching in each function
separately.
* Got rid of logging
* Optimized the slow bits with the help of a benchmarking script, using
frappe.get_meta's performance as the baseline
* Use hash instead of json.dumps for key generation
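A sketch of the `hash`-based key scheme replacing `json.dumps`. This is a simplified stand-in, not Frappe's actual implementation; `_make_key` and the module-level `_SITE_CACHE` dict here are illustrative, and `hash()` requires the args and kwargs to be hashable:

```python
import functools

_SITE_CACHE = {}  # illustrative stand-in for the real cache store

def _make_key(func, args, kwargs):
    # hash() over a tuple is much cheaper than json.dumps of the same
    # data; requires hashable args/kwargs values.
    return hash((func.__module__, func.__name__, args, frozenset(kwargs.items())))

def request_cache(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = _make_key(func, args, kwargs)
        if key not in _SITE_CACHE:
            _SITE_CACHE[key] = func(*args, **kwargs)
        return _SITE_CACHE[key]
    return wrapper

calls = []

@request_cache
def square(n):
    calls.append(n)  # track real invocations
    return n * n

square(4); square(4)
print(len(calls))  # -> 1: the second call was served from the cache
```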
Decorator to cache method calls across requests. The cache is stored in
frappe.utils.caching._SITE_CACHE. The cache persists on the parent process.
It offers a light-weight cache for the current process without the additional
overhead of serializing / deserializing Python objects.
Note: This cache isn't shared among workers. If you need to share data across
workers, use redis (frappe.cache API) instead.
Usage:

	from frappe.utils.caching import site_cache

	@site_cache
	def calculate_pi(num_terms=10):
		import math
		precision = get_precision("Math Constant", "Pi")  # depends on site data
		return round(math.pi, precision)

	calculate_pi(10)  # will calculate value
	calculate_pi(10)  # will return value from cache
	calculate_pi.clear_cache()  # clear this function's cache for all sites
	calculate_pi(10)  # will calculate value
Utility method for a boundless cache. This method maintains its cache
using a Werkzeug local; therefore, it is invalidated at the end of the
lifecycle of a request in our WSGI app.
Cache keys generated are a function of func.__name__, func.__module__,
passed args & kwargs. Key generation will be successful only if args and
kwargs can be safely `json.dumps`'ed.
Usage:

	from frappe.utils.caching import request_cache

	@request_cache
	def calculate_pi(num_terms=0):
		import math, time
		print(f"{num_terms = }")
		time.sleep(10)
		return math.pi

	calculate_pi(10)  # will calculate value
	calculate_pi(10)  # will return value from cache
As per the current implementation, whenever `get_doc` is called, the
document is cached. However, this cache is only ever read by
`get_cached_doc`. Going through the codebases of both Frappe and
ERPNext, you'll find that `get_doc` is used far more often than
`get_cached_doc`, so in many places all this caching overhead is
unnecessary.
This change removes implicit caching from `get_doc` and replaces it with
cache-replacement instead, i.e. the cache is only updated if an entry
already exists; it is never created from `get_doc`.
Pros:
- faster `get_doc`
- lower memory usage on Redis
- Reduces chances of OOM from blowing up a worker's memory, since old
docs previously couldn't be GCed until they were evicted from the cache
- Correctness i.e. caching only what gets used from cache.
Con:
- After this change, the first call to `get_cached_doc` for a document will always be a cache miss (by design).
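The cache-replacement semantics can be sketched with a plain dict (the names, keys, and data here are hypothetical, not Frappe's actual API or storage):

```python
cache = {}  # stand-in for the doc cache
db = {"ToDo/TODO-0001": {"name": "TODO-0001", "modified": 1}}  # stand-in DB

def get_doc(key):
    doc = db[key]        # always reads from the database
    if key in cache:     # cache-replacement: refresh only if already cached
        cache[key] = doc
    return doc

def get_cached_doc(key):
    if key not in cache:  # first call is always a miss now
        cache[key] = db[key]
    return cache[key]

get_doc("ToDo/TODO-0001")
print("ToDo/TODO-0001" in cache)  # -> False: get_doc no longer populates
get_cached_doc("ToDo/TODO-0001")
print("ToDo/TODO-0001" in cache)  # -> True: populated on first cached read
```

Only documents that are actually read through `get_cached_doc` ever occupy cache memory, which is the correctness point listed in the pros.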