### Description
I've run into issues like #1851 in the past and didn't realise they were caused by the SECRET_KEY changing when a config is recreated. It would be nice if we could add some mitigations for this.
#### Recovery from secret loss
If the secret is changed without access to the previous one, so that data such as 2FA secrets can no longer be decrypted, there should be an easy way to recover. Right now, the only option is to manually delete the corrupted rows from the DB.
A few potential options:

- A `gitea doctor` command to delete 2FA data, potentially notifying users who had 2FA enabled (sketched below)
- A flag on users in the database to prompt them to reconfigure 2FA on login
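A very rough sketch of what a doctor-style cleanup could look like, in Go for illustration only: walk the stored 2FA secrets, try to decrypt each one with the current SECRET_KEY, delete the rows that fail, and return the affected user IDs for notification. The `two_factor` table/column names, the SQLite driver and DB path, and the AES-GCM scheme are all assumptions here, not Gitea's actual implementation.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // or whichever driver matches your DB
)

// tryDecrypt reports whether ciphertext decrypts cleanly with secretKey.
// Stand-in scheme: AES-256-GCM with a SHA-256-derived key and the nonce
// prepended to the ciphertext; Gitea's real scheme will differ.
func tryDecrypt(secretKey string, ciphertext []byte) bool {
	key := sha256.Sum256([]byte(secretKey))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return false
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil || len(ciphertext) < gcm.NonceSize() {
		return false
	}
	nonce, data := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	_, err = gcm.Open(nil, nonce, data, nil)
	return err == nil
}

// purgeUndecryptableTwoFactor deletes 2FA rows whose secret no longer
// decrypts with the current key and returns the affected user IDs so
// they can be notified or flagged to re-enrol.
func purgeUndecryptableTwoFactor(db *sql.DB, secretKey string) ([]int64, error) {
	rows, err := db.Query(`SELECT id, uid, secret FROM two_factor`) // assumed schema
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var brokenIDs, affectedUsers []int64
	for rows.Next() {
		var id, uid int64
		var secret []byte
		if err := rows.Scan(&id, &uid, &secret); err != nil {
			return nil, err
		}
		if !tryDecrypt(secretKey, secret) {
			brokenIDs = append(brokenIDs, id)
			affectedUsers = append(affectedUsers, uid)
		}
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	for _, id := range brokenIDs {
		if _, err := db.Exec(`DELETE FROM two_factor WHERE id = ?`, id); err != nil {
			return nil, err
		}
	}
	return affectedUsers, nil
}

func main() {
	db, err := sql.Open("sqlite3", "data/gitea.db") // path and driver are assumptions
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	users, err := purgeUndecryptableTwoFactor(db, "current SECRET_KEY value")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("removed undecryptable 2FA rows for %d user(s)\n", len(users))
}
```

The returned user IDs could feed either the notification email or the "reconfigure 2FA on login" flag from the second option.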
#### Proactive secret rotation
If the user wishes to proactively change the secret, there should be an option to include (at least) two secrets in the config: the new secret for encrypting new data, and the old secret for decrypting existing data.
A few options here:

- Secrets could be configured in optional pairs, where there's a "primary" secret and a "backup" secret. The backup secret is exclusively for decryption, and anything decrypted with it should be re-encrypted with the primary secret (see the sketch after this list).
  - There could also be the option for more than one backup secret.
- A `gitea doctor` command to re-encrypt things encrypted with a backup secret.
- A cron job to re-encrypt things encrypted with a backup secret.
- The ability to automatically remove a backup secret once everything it protects has been re-encrypted.
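To make the primary/backup idea concrete, here's a minimal sketch of the decryption path, assuming a hypothetical `SECRET_KEY_BACKUP` setting alongside `SECRET_KEY` (neither the setting name nor the key derivation / AES-GCM scheme below reflect Gitea's actual code): try the primary secret first, fall back to any backup secret, and when a backup succeeds also hand back a copy re-encrypted under the primary secret so the caller (a doctor command, a cron job, or the access path itself) can write it back.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"errors"
	"fmt"
)

// deriveKey turns a configured secret into an AES-256 key
// (assumption: Gitea's actual key derivation differs).
func deriveKey(secret string) []byte {
	sum := sha256.Sum256([]byte(secret))
	return sum[:]
}

func encrypt(secret string, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(deriveKey(secret))
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

func decrypt(secret string, ciphertext []byte) ([]byte, error) {
	block, err := aes.NewCipher(deriveKey(secret))
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	if len(ciphertext) < gcm.NonceSize() {
		return nil, errors.New("ciphertext too short")
	}
	nonce, data := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	return gcm.Open(nil, nonce, data, nil)
}

// decryptWithRotation tries the primary secret first, then each backup.
// When only a backup works, it also returns the value re-encrypted under
// the primary secret so the caller can write it back.
func decryptWithRotation(primary string, backups []string, ciphertext []byte) ([]byte, []byte, error) {
	if pt, err := decrypt(primary, ciphertext); err == nil {
		return pt, nil, nil // already on the primary secret, nothing to do
	}
	for _, backup := range backups {
		if pt, err := decrypt(backup, ciphertext); err == nil {
			re, err := encrypt(primary, pt)
			return pt, re, err
		}
	}
	return nil, nil, errors.New("no configured secret can decrypt this value")
}

func main() {
	oldSecret, newSecret := "old-secret", "new-secret"
	ct, _ := encrypt(oldSecret, []byte("JBSWY3DPEHPK3PXP")) // e.g. a stored TOTP seed
	pt, reencrypted, err := decryptWithRotation(newSecret, []string{oldSecret}, ct)
	fmt.Println(string(pt), reencrypted != nil, err) // JBSWY3DPEHPK3PXP true <nil>
}
```

Once a full sweep finds nothing left that only a backup secret can decrypt, that backup could be dropped from the config automatically, as suggested in the last bullet above.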
The actual action upon rotation of the secret depends on the reason for the rotation. If there's some kind of breach of security, things that were protected by the secret like 2FA keys should be regenerated. So, a few options here:
- Secrets could be marked as compromised; this could tie in with the user flag mentioned in the first section, prompting users to reconfigure 2FA on login (see the sketch below).
- Marking a secret as compromised could also bump the re-encryption cron job to a high priority.
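A small sketch of how a "compromised" marker could change the behaviour above: when a value only decrypts with a secret marked as compromised, it isn't silently re-encrypted; the 2FA data is invalidated instead and the user is flagged to re-enrol at next login. All of the names here are hypothetical.

```go
package main

import "fmt"

// rotationSecret is a hypothetical wrapper around a configured secret.
type rotationSecret struct {
	Value       string
	Compromised bool // set by the admin when the secret may have leaked
}

// userFlags stands in for the per-user flag suggested in the first section.
type userFlags struct {
	MustReenrollTwoFactor bool
}

// handleBackupHit decides what to do with a value that only a backup
// secret could decrypt.
func handleBackupHit(sec rotationSecret, flags *userFlags) string {
	if sec.Compromised {
		// Data protected by a leaked key shouldn't just be re-encrypted:
		// drop it and make the user set 2FA up again. A sweep job doing
		// this could also run at a higher priority than normal rotation.
		flags.MustReenrollTwoFactor = true
		return "invalidate 2FA and prompt re-enrolment on next login"
	}
	return "re-encrypt with the primary secret"
}

func main() {
	var flags userFlags
	fmt.Println(handleBackupHit(rotationSecret{Value: "old", Compromised: true}, &flags))
	fmt.Println(flags.MustReenrollTwoFactor) // true
}
```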