The idea behind a streaming CSV export was to reduce memory use by
avoiding building the entire CSV file in memory before sending it to
the client. It didn't work out that way in practice: the query objects
created to represent each line caused Postgres to generate a very
large (~200MB on bookwyrm.social) temp file, and the memory used by
each Query object was likely similar to, if not larger than, that used
by the finalized CSV row.
While we should in the long term run our CSV exports as a Celery task,
this change should allow CSV exports to work on large servers without
causing disk-space problems.
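For illustration, a non-streaming export along these lines builds the
whole file in one pass and returns a plain `HttpResponse` (the model,
related name, and field names here are hypothetical, not the actual
BookWyrm code):

```python
import csv
import io

from django.http import HttpResponse


def export_csv(request):
    """Sketch of an eager (non-streaming) CSV export."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["title", "author"])

    # One pass over the queryset; rows go straight into the in-memory
    # buffer instead of being produced lazily per streamed line.
    for item in request.user.export_items.select_related("book"):
        writer.writerow([item.book.title, item.book.author_text])

    response = HttpResponse(buffer.getvalue(), content_type="text/csv")
    response["Content-Disposition"] = "attachment; filename=export.csv"
    return response
```

A Celery-based export would move the same loop into a background task
and store the result for later download.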
Fixes: #2157
Expand TOTP validity window
This changes the default validity window to accept codes from two time steps (60 seconds) on either side of the current one. Admins can change this by setting a different `TWO_FACTOR_LOGIN_VALIDITY_WINDOW` value in `.env`.
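With pyotp, the window maps onto the `valid_window` argument of
`TOTP.verify`. A minimal sketch, assuming the secret lives on the user
as `otp_secret` (that attribute name and the fallback default are
assumptions, not necessarily how BookWyrm wires it up):

```python
import pyotp
from django.conf import settings


def verify_2fa_code(user, code):
    """Accept codes from neighbouring 30-second time steps.

    valid_window=2 means codes generated up to two steps (60 seconds)
    before or after the current step still verify, which absorbs clock
    drift between the server and the authenticator app.
    """
    window = int(getattr(settings, "TWO_FACTOR_LOGIN_VALIDITY_WINDOW", 2))
    totp = pyotp.TOTP(user.otp_secret)  # secret field name is illustrative
    return totp.verify(code, valid_window=window)
```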
I think this will go a long way toward solving the federation delay
problems we're seeing on bookwyrm.social. I'm not sure at what point
adding more queues creates more problems than it solves, but in this
case the queues are out of balance, and moving broadcasts (which are
the most common type of `medium_priority` task at the moment) to their
own queue will be an improvement.
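A minimal sketch of the idea, assuming a dedicated queue name and
Celery's standard per-task routing (the exact names and app setup
BookWyrm uses may differ):

```python
from celery import Celery

app = Celery("bookwyrm")  # illustrative app setup

MEDIUM = "medium_priority"
BROADCAST = "broadcast"  # new, dedicated queue for outgoing broadcasts


@app.task(queue=BROADCAST)
def broadcast_task(sender_id, activity, recipient_inboxes):
    """Deliver an activity to remote inboxes.

    Routing these to their own queue means a backlog of outgoing
    broadcasts can no longer starve other medium-priority work.
    """
    ...
```

Workers can then be pointed at specific queues with Celery's
`-Q`/`--queues` worker option, so broadcast delivery can be scaled
independently of the other queues.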
When an inbox activity comes in from another fediverse instance, the
behavior prior to this commit was always to immediately give a 200
response to the external server and then create a Celery task (usually
in the MEDIUM_PRIORITY queue) to complete it.
Instead, this receives the request and tries to complete it without
making any outgoing HTTP requests (which would make handling the
request take too long). If an external request is required to complete
the activity, a task is created and added to the queue.
Ideally, this will cause some tasks to happen very promptly, and reduce
the load on celery, which would help queued tasks happen more quickly as
well.
One downside is that this will make responding to HTTP requests from
external servers slower (since it does a bunch of work before
responding).
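As a rough sketch of the dispatch logic (the helper and exception
names are illustrative, not the actual BookWyrm functions):

```python
from celery import shared_task
from django.http import HttpResponse


class NeedsRemoteData(Exception):
    """Raised when an activity can't be completed from local data alone."""


def complete_activity(activity_json, allow_external_connections):
    """Stand-in for the real handler: it should raise NeedsRemoteData
    instead of making an outbound HTTP request when the flag is False."""
    raise NotImplementedError


@shared_task(queue="medium_priority")
def complete_activity_task(activity_json):
    # Deferred path: the worker is free to make outbound requests
    complete_activity(activity_json, allow_external_connections=True)


def inbox_view(request):
    activity_json = request.body.decode("utf-8")
    try:
        # Try to finish the activity inline, with no outgoing HTTP requests
        complete_activity(activity_json, allow_external_connections=False)
    except NeedsRemoteData:
        # Anything that needs data from another server goes to the queue
        complete_activity_task.delay(activity_json)
    # Either way the sending server gets a prompt 200
    return HttpResponse(status=200)
```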
The migration uses `RESTRICT` instead of `PROTECT`, which is more
correct, but also necessary: the value in the migration has to match
the one on the model, otherwise Django thinks there's a migration
missing and will refuse to apply any new migrations.
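Concretely, the `on_delete` in the migration has to mirror the model
field exactly, along these lines (model, app, and migration names are
illustrative):

```python
# models.py
from django.db import models


class ListItem(models.Model):
    book = models.ForeignKey(
        "Book",
        on_delete=models.RESTRICT,  # must match the migration exactly
        related_name="list_items",
    )
```

```python
# 00xx_alter_listitem_book.py
import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [("bookwyrm", "00xx_previous")]

    operations = [
        migrations.AlterField(
            model_name="listitem",
            name="book",
            field=models.ForeignKey(
                on_delete=django.db.models.deletion.RESTRICT,
                related_name="list_items",
                to="bookwyrm.book",
            ),
        ),
    ]
```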
On the feed view, where the status appears alongside other statuses,
the body is trimmed, but on the single-status view there's no need to
trim it. This preserves the logic for spoiler alerts.
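One way to express that distinction is a template filter along these
lines, as a sketch only (the filter name and word limit are made up,
and BookWyrm's actual trimming may live elsewhere):

```python
from django import template
from django.utils.text import Truncator

register = template.Library()


@register.filter
def trimmed_body(content, trim=True):
    """Trim status bodies on the feed; pass them through untouched on
    the single-status page. Spoiler handling stays in the template
    either way."""
    if not trim:
        return content
    return Truncator(content).words(60, html=True, truncate=" …")
```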