Is there a way to "force" AWS S3 backups to a specific storage class/tier? (S3 Glacier Deep Archive)

PeterPorker3

Cadet
Joined
Jan 5, 2022
Messages
1
Hello, first I would like to preface this discussion with the fact that, while I am not new to TrueNAS, I am very new to AWS S3 and object storage in general, and I am still trying to wrap my head around some of it. Some of my reasoning for choosing S3 this way may rest on incorrect assumptions due to my limited experience, so please let me know of any mistakes in my knowledge.

I run a home TrueNAS server with a few terabytes of important data. For a while I have been using a shared basic Dropbox home account for my off-site backups, and I am looking for a more robust long-term solution. After looking through lots of options, it seems that S3 Glacier Deep Archive is perfect for me: the cost is outstanding, and I am not bothered by the minimum retention period or the long, expensive retrieval times, since this is only for emergency disaster recovery. I planned to create an AWS account and start backing everything up today, but I hit a stumbling block that I could not find a definitive answer on.

While setting up my S3 bucket using the instructions provided by Amazon, I noticed that by default all data uploaded to the bucket goes into the S3 Standard class. Due to the relatively high cost of the Standard tier, I want all of the data my TrueNAS system uploads to go directly into the Glacier Deep Archive class instead. I know I can set lifecycle policies that move data from the Standard class to Glacier Deep Archive over time, but that would still incur significant extra charges because the data would land in the Standard class first. If you upload files from the S3 console in a web browser, it lets you choose which class to upload to, but there is no way to make a different class the default. From what I have seen, TrueNAS does not have a setting to specify a storage class or expose any API options for it, so everything goes to the Standard class (as I understand it).
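For anyone searching later: when talking to the S3 API directly (outside of TrueNAS), the storage class can be set per upload. A minimal sketch using boto3, where the bucket and key names are placeholders and the helper restricts itself to the archive classes, is:

```python
def storage_class_args(storage_class: str) -> dict:
    """Build the ExtraArgs dict that tells S3 which class to write the object into."""
    valid = {"GLACIER", "DEEP_ARCHIVE", "GLACIER_IR"}  # archive classes only
    if storage_class not in valid:
        raise ValueError(f"unsupported archive class: {storage_class}")
    return {"StorageClass": storage_class}


def upload_to_deep_archive(path: str, bucket: str, key: str) -> None:
    # boto3 must be installed (pip install boto3); imported here so the
    # helper above can be used without it.
    import boto3

    s3 = boto3.client("s3")
    # Passing StorageClass at upload time writes the object straight into
    # Deep Archive, so it never accrues Standard storage or transition fees.
    s3.upload_file(path, bucket, key, ExtraArgs=storage_class_args("DEEP_ARCHIVE"))
```

The equivalent on the command line would be `aws s3 cp backup.tar s3://my-bucket/ --storage-class DEEP_ARCHIVE`. The open question is whether TrueNAS passes anything like this through.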

So my question is: is there any way to make TrueNAS (or my S3 bucket) upload files directly to the Glacier Deep Archive class instead of going to standard first?

I looked around for a while and (at least on older versions) there does not appear to be any way to do this on the TrueNAS side. The only configurations I have seen upload to Standard first and then apply a lifecycle policy that gradually moves the data to Glacier Deep Archive, which incurs the extra charges mentioned above. AWS also does not seem to offer a way to set a default storage class on a bucket. I did see that the API lets you choose the class at upload time, just like the web console does, but I'm not sure whether TrueNAS exposes this anywhere (if at all).
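For completeness, the lifecycle-policy workaround I would rather avoid looks roughly like this (applied with `aws s3api put-bucket-lifecycle-configuration`; the rule ID is a placeholder). Even with `Days` set to 0, objects still land in Standard first and each transition is billed as a request:

```json
{
  "Rules": [
    {
      "ID": "everything-to-deep-archive",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 0, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}
```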

Any advice is appreciated!