Object lock support for repositories on S3/B2/GCS #1050
Replies: 10 comments 36 replies
-
About 4.) rustic has already added a creation date for each pack file to the index. This is also used when marking packs for deletion. Actually I think that for something like an extendable object lock, we should use a similar mechanism to determine which packs need to get a lock update. In fact, I wouldn't say that we need to update the lock for all packs; depending on a "lock retention" policy we could also figure out which packs are still locked, which need a lock extension and which may be open for removal.

I have to think about what we could do with index files. It might be OK, though, to not lock them, as they anyway only contain redundant data which can always be recreated. For snapshot files, we could use the snapshot date and do similar things.

So, in summary: yes, I think rustic can add lots of support in this case! It is a bit of work and redesign in the backend trait and forget/prune, but that's all! Let's put it onto the issue list and I'll see when I or someone else finds time to add these features.

About 5.) I think rclone also supports this, so using it in your restore-config should also work.
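The "lock retention" classification described above could be sketched roughly like this. This is purely illustrative: the 100-day lock length, the 7-day extension window, and the idea of feeding in pack creation times (epoch seconds, e.g. taken from the rustic index) are all my own assumptions, not anything rustic implements today.

```shell
#!/bin/sh
# Hypothetical classifier: given a pack's creation time (epoch seconds),
# decide whether its object lock is still fine, needs extending, or has
# already expired. LOCK_DAYS and the 7-day window are made-up policy values.
LOCK_DAYS=100

classify() {
  age_days=$(( ( $(date +%s) - $1 ) / 86400 ))
  if [ "$age_days" -ge "$LOCK_DAYS" ]; then
    echo "open for removal"           # lock has run out
  elif [ "$age_days" -ge $(( LOCK_DAYS - 7 )) ]; then
    echo "needs lock extension"       # lock expires within the window
  else
    echo "still locked"               # nothing to do yet
  fi
}
```

A pack created 10 days ago would classify as "still locked", one created 95 days ago as "needs lock extension", and one older than 100 days as "open for removal".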
-
I posted this idea to see what others are thinking - myself, I think it would be an interesting and very useful thing to have. As I explained, I can achieve it today without any built-in support.

When I have a lock-protected repo I run backup daily - if the lock is for 100 days, I will have 100 daily backups. There is no need to prune anything earlier - it would be almost against the whole idea :)
-
I would say let it sink in :) There are many bright people out there - hopefully some will share their thoughts on this subject :) There is definitely a requirement for lock-protected repos - maybe niche and a bit specialised, but very solid. This is also something that could differentiate rustic.
-
I thought a bit about this proposal and would like to propose that we implement a

UPDATE: we can of course also use something like

UPDATE2: If the backend supports it, we can also add the possibility to do a
-
I managed to compile #1066 and have the new version.

However, this fails. It works perfectly fine with the release version.
-
I am not sure what the purpose of this is. Myself, I want to use the repo in the normal way. What I would like is something else.
-
Actually, you do keep all hourly backups for 1 year if you run it that way.
-
What I can commit to is that, when it is working, I will create a foolproof step-by-step manual on how to configure it.
-
Just wanted to mention #1078, which (e.g. in an emergency case) is able to handle a missing/incomplete index. It is an alternative for such cases.
-
I have found a problem with minio's `mc retention` command.

VALIDITY has to have the format Nd or Ny only (https://min.io/docs/minio/linux/reference/minio-mc/mc-retention-set.html), hence I cannot use it for my setup.

I think minio is popular enough to justify supporting their tools too.
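Since `mc retention` only accepts Nd/Ny validity, one possible workaround is to compute an explicit ISO-8601 `RetainUntilDate` and call the S3 retention API directly, as the aws command elsewhere in this thread does. A hedged sketch - GNU `date` syntax is assumed, and the endpoint/bucket names in the commented call are placeholders:

```shell
#!/bin/sh
# Workaround sketch when day/year granularity is not enough: compute an
# explicit ISO-8601 RetainUntilDate (GNU date assumed) and use the S3 API
# directly instead of `mc retention set`.
retain_until() { date -u -d "+$1" +%Y-%m-%dT%H:%M:%SZ; }

RETAIN_UNTIL=$(retain_until "36 hours")
# The actual call (placeholder endpoint/bucket/key) would then be e.g.:
# aws s3api put-object-retention --endpoint-url https://minio.example \
#   --bucket bucket_name --key "$KEY" \
#   --retention "{\"Mode\":\"COMPLIANCE\",\"RetainUntilDate\":\"$RETAIN_UNTIL\"}"
echo "$RETAIN_UNTIL"
```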
-
It is only an idea at this stage, but I think rustic already has some building blocks available to at least simplify such a repository. I have used it in kopia - https://kopia.io/docs/advanced/ransomware-protection/#even-more-protection - and got the overall concept from there.

I can achieve it in fully "manual" mode today with rustic:

- rustic backups - business as usual
- for this I use rclone to list all objects and the aws cli to do the job:

  ```
  rclone lsf S3remote:bucket_name -R --files-only | xargs -P 16 -n 1 extend_lock.sh
  ```

  where `extend_lock.sh` executes the aws cli command:

  ```
  aws s3api put-object-retention --endpoint-url url --bucket bucket_name --key "$1" --retention '{ "Mode": "COMPLIANCE", "RetainUntilDate": "now+N days" }'
  ```

- it mounts the read-only bucket as it was at some past DATE - I can now use rustic to just point to this repo and restore what I need.

BTW this is actually one of the reasons I started looking at rustic. restic, with its need to have a writable `locks` directory, is a bit more complicated to use here.
Now point (5) would be rather hard to implement at the moment without a proper S3 API - let's forget it. There is rclone, and this is something a user expects never to use - or once in a lifetime - so it can be a bit "manual". I think the rclone way is fully acceptable.

Points (1...3) are configuration, to be documented: how to run such a protected repo.

But rustic could help with point (4). In a similar way to how it today invokes the cold-repo restore command, it could have an option to invoke a lock extension command (for S3, B2, GCS remotes etc.) when run e.g. as `rustic extendlock`, or maybe a generic hook for actions on all repo objects: `rustic action --all`.

This command would pass every repo object to a toml-defined command - ideally in a multithreaded fashion, as e.g. the aws cli is slow (this is why I use `xargs -P 16` in my manual way).

I see questions about ransomware protection popping up from time to time on various forums, and some rustic support here would make it easier for people to actually implement it.
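The toml-defined command idea might look something like this in a profile file. This is purely illustrative - every key below is invented for the sake of the sketch and nothing like it exists in rustic today:

```toml
# Hypothetical profile snippet - none of these keys exist in rustic today.
[repository.options]
# command invoked once per repository object, receiving the object key as $1
lock-extend-command = "extend_lock.sh"
# parallel invocations, mirroring the manual `xargs -P 16` approach
lock-extend-threads = 16
```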