The Storj S3-compatible Gateway supports a RESTful API that is compatible with the basic data access model of the Amazon S3 API.
| Action | Support | Planned support | Caveats |
|--------|---------|-----------------|---------|
| DeleteBucketTagging | No | We could support this | |
| DeleteObjectTagging | Full | | Tags can be modified outside of tagging endpoints |
| GetBucketLifecycle (deprecated) | No | We could partially support this | |
| GetBucketLifecycleConfiguration | No | We could partially support this | |
| GetBucketLocation | No | We could support this | Location constraints would be different from AWS S3 |
| GetBucketPolicy | Partial | | Only in Gateway-ST with `--website` |
| GetBucketPolicyStatus | No | We could support this | Currently, it always returns false |
| GetBucketRequestPayment | No | No | Planned support status needs verification |
| GetBucketTagging | No | We could support this | |
| GetBucketVersioning | No | Planned. See https://github.com/storj/roadmap/issues/23 | |
| GetObject | Partial | We need to add support for the partNumber parameter | |
| GetObjectTagging | Full | | Tags can be modified outside of tagging endpoints |
| GetObjectTorrent | No | With significant effort, we could support this | |
| ListMultipartUploads | Partial | Planned full. See https://github.com/storj/roadmap/issues/20 | See the ListMultipartUploads section |
| ListObjectVersions | No | Planned. See https://github.com/storj/roadmap/issues/23 | |
| ListObjects | Partial | Planned full. See https://github.com/storj/roadmap/issues/20 | See the ListObjects section |
| ListObjectsV2 | Partial | Planned full. See https://github.com/storj/roadmap/issues/20 | See the ListObjects section |
| PutBucketRequestPayment | No | No | Planned support status needs verification |
| PutBucketTagging | No | We could support this | |
| PutBucketVersioning | No | Planned. See https://github.com/storj/roadmap/issues/23 | |
| PutObjectTagging | Full | | Tags can be modified outside of tagging endpoints |
| UploadPartCopy | No | Planned. See https://github.com/storj/roadmap/issues/40 | |
Full compatibility means that we support all features of a specific action except for features that rely on other actions that we haven't fully implemented.
Partial compatibility means that we don't support all features of a specific action (see Caveats column).
A bucket's paths are end-to-end encrypted. We don't yet use an order-preserving encryption scheme, so it isn't always possible to list a bucket in lexicographical order (as the S3 specification requires). For requests with a forward-slash-terminated prefix and/or a forward-slash delimiter, we list in the fastest way we can: the results are in lexicographical order of the encrypted paths, which is often very different from the expected order of the decrypted paths. Ideally, clients shouldn't care about ordering in those cases. For requests with a non-forward-slash-terminated prefix and/or a non-forward-slash delimiter, we perform an exhaustive listing and filter paths gateway-side; in this case, gateways return the listing in lexicographical order. Forcing exhaustive listing for every request is not possible for Storj production deployments of Gateway-MT; for, e.g., Gateway-ST it can be achieved through its configuration.
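To illustrate the two listing modes described above, here is a sketch using the AWS CLI (bucket and prefix names are placeholders):

```shell
# Fast path: "/"-terminated prefix and "/" delimiter. Results come back
# in lexicographical order of the *encrypted* paths, which often differs
# from the expected order of the decrypted paths.
aws s3api list-objects-v2 --bucket my-bucket --prefix photos/ --delimiter /

# Non-"/"-terminated prefix: the gateway performs an exhaustive listing,
# filters paths gateway-side, and returns them in lexicographical order.
aws s3api list-objects-v2 --bucket my-bucket --prefix photos
```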
The ListMultipartUploads endpoint has the same ordering characteristics as ListObjects described above, in that lexicographic ordering works on the encrypted upload paths, not the decrypted upload paths. It also supports only prefixes that end with a forward slash, together with a forward-slash delimiter. An exhaustive search similar to what ListObjects does with arbitrary prefixes and delimiters is not supported.
The UploadIdMarker request parameter and the NextUploadIdMarker response element are not supported. The marker is used to filter out uploads that come before the given upload ID. This is tracked at storj/gateway-mt#213.
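For reference, a request shape the endpoint does support might look like this (a sketch with placeholder names; note the "/"-terminated prefix and "/" delimiter):

```shell
# Supported shape: prefix ends with "/" and the delimiter is "/".
aws s3api list-multipart-uploads --bucket my-bucket --prefix videos/ --delimiter /
```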
*Secure access control in the decentralized cloud* is a good read on why we don't support ACL-related actions.
| Limit | Value |
|-------|-------|
| Maximum number of buckets | 100 |
| Maximum number of objects per bucket | No limit |
| Maximum object size | No limit |
| Minimum object size | 0 B |
| Maximum object size per PUT operation | No limit |
| Maximum number of parts per upload | 10000 |
| Minimum part size | 5 MiB (the last part can be 0 B) |
| Maximum number of parts returned per list parts request | 10000 |
| Maximum number of objects returned per list objects request | 1000 |
| Maximum number of multipart uploads returned per list multipart uploads request | 1000 |
| Maximum length for bucket names | 63 characters |
| Minimum length for bucket names | 3 characters |
| Maximum length for encrypted object names | 1280 bytes |
| Maximum metadata size | 2 KiB |
AWS S3 limits users to uploading objects no larger than 5 TiB. Storj's edge services don't impose such a limit, but because AWS S3 does, many clients must be reconfigured before they can upload larger objects.

The following example is specific to the AWS CLI, but your particular S3-compatible client might need similar configuration.

A multipart upload requires that a single object is uploaded in no more than 10000 distinct parts, so the chunk size you set must balance part size against the number of parts. For example, for 6 TiB objects you need to set the AWS CLI's multipart_chunksize to approximately 630 MiB (6 TiB ÷ 10000 parts ≈ 629.15 MiB).
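One way to apply this, assuming the AWS CLI's default profile is the one in use:

```shell
# Raise the multipart chunk size so a 6 TiB upload stays within
# 10000 parts (6 TiB / 630 MiB ≈ 9987 parts).
aws configure set default.s3.multipart_chunksize 630MiB
```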
It's possible to specify a TTL for the object by sending the Object-Expires header (note: S3-compatible clients usually add the X-Amz-Meta- prefix themselves) with one of the following values:
- a signed, positive sequence of decimal numbers, each with an optional fraction and a unit suffix, such as `+2h`
  - valid time units are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`
  - `+2h` means the object expires 2 hours from now
- a full RFC3339-formatted date
It's also possible to specify `none` for no expiration (or simply not send the header).

The value under `X-Amz-Meta-Object-Expires` has priority over the value under `X-Minio-Meta-Object-Expires`.
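As a concrete illustration (bucket and object names are placeholders), the AWS CLI adds the X-Amz-Meta- prefix itself, so the metadata key to pass is just Object-Expires:

```shell
# Upload an object that expires 2 hours from now; the CLI sends this
# as the X-Amz-Meta-Object-Expires header.
aws s3 cp ./backup.tar s3://my-bucket/backup.tar --metadata Object-Expires=+2h
```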
An alternate version of the S3 ListBuckets API endpoint whose response includes Attribution in each Bucket XML element. Other than the addition of Attribution to the response, the endpoint behaves the same as ListBuckets.

The sample code below works with the AWS SDK for Go and derives a ListBucketsWithAttribution call from the SDK's ListBuckets implementation.
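A minimal sketch of what that derivation could look like. Note the assumptions: the AttributedBucket and ListBucketsWithAttributionOutput types, the listBucketsWithAttribution helper, and the "/?attribution" request path are illustrative, not the documented API; credentials come from the default AWS credential chain.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// AttributedBucket mirrors s3.Bucket plus the Attribution element
// described above. Field tags follow the SDK's restxml conventions.
type AttributedBucket struct {
	Name         *string    `type:"string"`
	CreationDate *time.Time `type:"timestamp"`
	Attribution  *string    `type:"string"`
}

// ListBucketsWithAttributionOutput mirrors s3.ListBucketsOutput with
// the extended bucket type.
type ListBucketsWithAttributionOutput struct {
	Buckets []*AttributedBucket `locationNameList:"Bucket" type:"list"`
	Owner   *s3.Owner           `type:"structure"`
}

// listBucketsWithAttribution reuses the SDK's request machinery the
// same way the generated s3.ListBuckets code does. The request path
// is an assumption; consult the gateway docs for the exact endpoint.
func listBucketsWithAttribution(svc *s3.S3) (*ListBucketsWithAttributionOutput, error) {
	op := &request.Operation{
		Name:       "ListBucketsWithAttribution",
		HTTPMethod: "GET",
		HTTPPath:   "/?attribution",
	}
	out := &ListBucketsWithAttributionOutput{}
	req := svc.NewRequest(op, nil, out)
	if err := req.Send(); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("https://gateway.storjshare.io"),
		S3ForcePathStyle: aws.Bool(true),
	}))
	out, err := listBucketsWithAttribution(s3.New(sess))
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Printf("%s (attribution: %q)\n",
			aws.StringValue(b.Name), aws.StringValue(b.Attribution))
	}
}
```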