The idea behind a streaming CSV export was to reduce memory use by
avoiding building the entire CSV file in memory before sending it to
the client. In practice it didn't work out that way: the query objects
created to represent each line caused Postgres to generate a very
large temp file (~200MB on bookwyrm.social), and the memory used by
each query object was likely similar to, if not larger than, the
memory used by the finalized CSV row.
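
For context, the streaming pattern in question looks roughly like the
sketch below. It is a minimal illustration based on Django's documented
StreamingHttpResponse recipe, not BookWyrm's actual code; the
`export_books` view and the `shelved_books` relation are hypothetical
stand-ins.

```python
import csv

from django.http import StreamingHttpResponse


class Echo:
    """Pseudo-buffer: csv.writer "writes" each rendered row here, and we
    pass it straight through to the streaming response instead of storing it."""

    def write(self, value):
        return value


def export_books(request):  # hypothetical view name
    writer = csv.writer(Echo())
    # One row per book. If rendering each row issues its own query
    # (as with per-line query objects), Postgres can end up spooling a
    # very large temp file even though Python-side memory stays flat.
    rows = (
        writer.writerow([book.title, book.isbn_13])
        for book in request.user.shelved_books.iterator()  # hypothetical relation
    )
    return StreamingHttpResponse(rows, content_type="text/csv")
```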
While in the long term we should run CSV exports as a Celery task,
this change should allow them to work on large servers without
causing disk-space problems.
Fixes: #2157
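
If and when exports do move to Celery, the shape would be something
like the sketch below; `csv_export` and `get_export_rows` are
hypothetical names rather than existing BookWyrm code, and a real task
would presumably persist the file and notify the user instead of
returning the text.

```python
import csv
import io

from celery import shared_task


def get_export_rows(user_id):
    """Hypothetical stand-in for the real per-user export query."""
    yield ["title", "isbn_13"]


@shared_task
def csv_export(user_id):
    """Build the whole export in a background worker, keeping the
    request/response cycle out of the heavy lifting."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    for row in get_export_rows(user_id):
        writer.writerow(row)
    return buffer.getvalue()
```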
* New ID: Audible ASIN
Audible belongs to Amazon, but they do not share the same IDs: the Audible ASIN of an audiobook is never the same as the Amazon ASIN.
Yeah, I know, Amazon is great. The fact that the ASIN is a good way to distinguish different works and editions bothers me more than I will ever be willing to admit.
* New ID "ISFDB"
Internet Speculative Ficiton Database ID for books and authors.
Links to the entry if set.
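
A rough sketch of how these two identifiers might hang off a model is
below; the class, field, and property names are illustrative rather
than BookWyrm's actual implementation, and the ISFDB URL pattern is an
assumption based on the site's public `title.cgi` links.

```python
from django.db import models


class Edition(models.Model):  # illustrative model, not BookWyrm's actual class
    # Audible ASIN: never the same as the Amazon ASIN for the same audiobook
    aasin = models.CharField(max_length=255, blank=True, null=True)
    # ISFDB record number
    isfdb = models.CharField(max_length=255, blank=True, null=True)

    @property
    def isfdb_link(self):
        """Link to the ISFDB entry when the ID is set (URL pattern assumed)."""
        if self.isfdb:
            return f"https://www.isfdb.org/cgi-bin/title.cgi?{self.isfdb}"
        return None
```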
* Added aasin to test
* The expected answer now includes more empty fields (for the new ID columns)