seitime-frappe/frappe/database
Ankush Menat a2525e545a
perf: Unbuffered cursors for large result sets (#24365)
If you're reading thousands of rows from MySQL, the default behaviour is to
read all of them into memory at once.

One of the use cases for reading a large number of rows is reporting, where a
lot of data is read and then processed in Python. Each row, however, is not
used again, yet it still consumes memory until the entire function exits.

SSCursor (Server Side Cursor) allows fetching one row at a time.

Note: This is slower than fetching everything at once AND carries a risk of
connection loss, so don't use it as a crutch. If possible, rewrite the
code so the processing is done in SQL.
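The streaming pattern described above can be sketched as a driver-agnostic generator. This is a minimal illustration, not Frappe's actual implementation: the `stream_rows` helper and its parameters are hypothetical, and the `pymysql.cursors.SSCursor` mentioned in the usage note is the server-side cursor class from the PyMySQL driver.

```python
def stream_rows(conn, query, cursor_class=None, batch_size=1000):
    """Yield rows from `query` a batch at a time instead of materializing
    the whole result set in client memory.

    With a default (buffered) DB-API cursor this saves little, since the
    driver has already read everything; pass an unbuffered cursor class
    (e.g. pymysql.cursors.SSCursor) to keep the result set on the server
    and fetch rows incrementally.
    """
    cursor = conn.cursor(cursor_class) if cursor_class else conn.cursor()
    try:
        cursor.execute(query)
        while True:
            rows = cursor.fetchmany(batch_size)
            if not rows:
                break
            yield from rows  # caller processes and discards each row
    finally:
        cursor.close()
```

Hypothetical usage with PyMySQL: `for row in stream_rows(conn, "SELECT ...", pymysql.cursors.SSCursor): ...`. Because the server holds the open result set, the connection must stay healthy until iteration finishes, which is the connection-loss risk the commit message warns about.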
2024-01-16 11:00:12 +05:30
..
mariadb perf: Unbuffered cursors for large result sets (#24365) 2024-01-16 11:00:12 +05:30
postgres chore: name current db handle properly 2024-01-14 17:14:21 +01:00
__init__.py Merge pull request #23168 from blaggacao/refactor/centralize-python-shell-interface-for-database-binaries 2023-11-17 17:02:41 +05:30
database.py perf: Unbuffered cursors for large result sets (#24365) 2024-01-16 11:00:12 +05:30
db_manager.py refactor: set pipefail in shell before running piped backup/restore commands 2024-01-04 18:41:37 +05:30
operator_map.py docs: consistency 2023-12-20 14:02:32 +05:30
query.py feat: Skip locked rows while selecting (#24298) 2024-01-13 09:49:27 +05:30
schema.py refactor(treewide): code cleanup 2023-11-23 13:57:51 +05:30
sequence.py refactor!: remove implicit primary key from logs (#22209) 2023-08-26 16:01:47 +05:30
utils.py feat: migrate columns to be non-nullable if required 2023-11-16 14:51:57 +05:30