Use our S3-compatible Hosted Gateway integration pattern to increase upload performance and reduce the load on your systems and network. Uploads are encrypted and erasure-coded server-side, so a 1GB upload results in only 1GB of data being uploaded to storage nodes across the network.
Reduced upload time
Reduction in network load
Navigate to the Access page within your project and then click on Create Access Grant +. A modal window will pop up and you can enter a name for this access grant.
Assign the permissions you want this access grant to have, then click on Continue in Browser:
Enter the Encryption Passphrase you used for your other access grants. If this is your first access grant, we strongly encourage you to use a mnemonic phrase as your encryption passphrase. (The GUI automatically generates one client-side for you.)
Click on the Generate S3 Gateway Credentials link and then click on the 'Generate Credentials' button.
Copy your Access Key, Secret Key, and Endpoint to a safe location.
Now you are ready to configure Rclone.
First, download and extract the rclone binary onto your system.
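On Linux or macOS, one common way to do this is the install script published by the rclone project (assuming curl is available and you are comfortable piping a script to your shell); otherwise, download the archive for your platform from rclone.org/downloads.
curl https://rclone.org/install.sh | sudo bash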
Execute the config command:
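rclone config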
A text-based menu will prompt. Type n and hit Enter to create a new remote configuration (n / New remote).
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>
Enter a name for the new remote configuration, e.g. waterbear.
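At rclone's name prompt, using the remote name from the rest of this guide, this looks like:
name> waterbear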
A long list of supported storage backends will prompt.
Select 4 (4 / Amazon S3 Compliant Storage Provider) and hit Enter.
4 / Amazon S3 Compliant Storage Provider
A further list of S3 storage providers will prompt.
Select 13 (13 / Any other S3 compatible provider) and hit Enter.
13 / Any other S3 compatible provider
You will be asked how you want to enter credentials.
Hit Enter for the default choice of 1 (Enter AWS credentials in the next step).
1 / Enter AWS credentials in the next step
You will be asked for your Access Key ID, followed by the Secret Access Key that you previously generated; follow the pattern in the code block below.
# AWS Access Key ID
# Enter your <AccessKeyId>
<AccessKeyId>
Hit Enter
# AWS Secret Access Key
# Enter your <SecretAccessKeyId>
<SecretAccessKeyId>
Hit Enter
You will be asked which Region to connect to, the Endpoint for the S3 API, and the Location Constraint.
# Region to connect to
Hit Enter for default
# (1 / Use this if unsure. Will use v4 signatures and an empty region)
# Endpoint for S3 API
# Enter the Storj DCS Gateway URL
https://gateway.storj.io
Hit Enter
# Location Constraint
Hit Enter for default
# ("")
A list of Canned Access Control Lists used when creating buckets will be presented.
# Canned ACL used when creating buckets and storing or copying objects
# Select your preferred option, otherwise hit Enter for the most secure default
Hit Enter for default or enter your preferred number followed by Enter
You will be asked if you want to edit the advanced config.
# Edit advanced config? (y/n)
# y) Yes
# n) No (default)
y/n> y
# Value "bucket_acl" = ""
# Edit? (y/n)>
# y) Yes
# n) No (default)
Hit Enter for default until you reach "chunk_size"
# Value "chunk_size" = "5M"
# Edit? (y/n)>
# y) Yes
# n) No (default)
y/n> y
# Chunk size to use for uploading.
#
# When uploading files larger than upload_cutoff or files with unknown
# size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
# photos or google docs) they will be uploaded as multipart uploads
# using this chunk size.
#
# Note that "--s3-upload-concurrency" chunks of this size are buffered
# in memory per transfer.
#
# If you are transferring large files over high-speed links and you have
# enough memory, then increasing this will speed up the transfers.
#
# Rclone will automatically increase the chunk size when uploading a
# large file of known size to stay below the 10,000 chunks limit.
#
# Files of unknown size are uploaded with the configured
# chunk_size. Since the default chunk size is 5MB and there can be at
# most 10,000 chunks, this means that by default the maximum size of
# a file you can stream upload is 48GB. If you wish to stream upload
# larger files then you will need to increase chunk_size.
# Enter a size with suffix k,M,G,T. Press Enter for the default ("5M").
chunk_size> 64M
Hit Enter for default until the end of the advanced configuration
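If you later want to change the chunk size without rerunning the whole wizard, rclone's config update subcommand can set a single value on an existing remote. For example, assuming the remote is named waterbear as in the summary below:
rclone config update waterbear chunk_size 64M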
A summary of the remote configuration will prompt. Hit Enter to confirm.
[waterbear]
type = s3
provider = Other
env_auth = false
access_key_id = <AccessKey>
secret_access_key = <SecretAccessKey>
endpoint = https://gateway.storj.io
chunk_size = 64M
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Now you should see one remote configuration available. Type q and hit Enter to quit the configuration wizard.
Current remotes:

Name                 Type
====                 ====
waterbear            s3
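As a quick sanity check, you can ask rclone to list the buckets on the new remote (this assumes the credentials were entered correctly; the output will be empty until you create a bucket):
rclone lsd waterbear: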
Use the mkdir command to create a new bucket, e.g.
rclone mkdir waterbear:mybucket
Use the lsf command to list all buckets.
rclone lsf waterbear:
Use the rmdir command to delete an empty bucket.
rclone rmdir waterbear:mybucket
Use the purge command to delete a non-empty bucket with all its content.
rclone purge waterbear:mybucket
Use the copy command to upload an object.
rclone copy --progress ~/Videos/myvideo.mp4 waterbear:mybucket/videos/
Use a folder in the local path to upload all of its files.
rclone copy --progress ~/Videos/ waterbear:mybucket/videos/
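For large uploads over fast links, you can optionally raise rclone's parallelism on the command line instead of in the remote config. The flags below are standard rclone options, but the values are illustrative rather than recommendations:
rclone copy --progress --transfers 8 --s3-upload-concurrency 4 --s3-chunk-size 64M ~/Videos/ waterbear:mybucket/videos/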
Use the ls command to recursively list all objects in a bucket.
rclone ls waterbear:mybucket
Add the folder to the remote path to recursively list all objects in that folder.
rclone ls waterbear:mybucket/videos/
Use the lsf command to list all objects in a bucket or a folder non-recursively.
rclone lsf waterbear:mybucket/videos/
Use the copy command to download an object.
rclone copy --progress waterbear:mybucket/videos/myvideo.mp4 ~/Downloads/
Use a folder in the remote path to download all its objects.
rclone copy --progress waterbear:mybucket/videos/ ~/Downloads/
Use the deletefile command to delete a single object.
rclone deletefile waterbear:mybucket/videos/myvideo.mp4
Use the delete command to delete all objects in a folder.
rclone delete waterbear:mybucket/videos/
Use the size command to print the total size of objects in a bucket or a folder.
rclone size waterbear:mybucket/videos/
Use the sync command to sync the source to the destination, changing the destination only. It doesn't transfer unchanged files, which are checked by size and modification time or MD5SUM. The destination is updated to match the source, including deleting files if necessary.
rclone sync --progress ~/Videos/ waterbear:mybucket/videos/
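Because sync can delete files at the destination, it can be worth previewing the changes first with rclone's --dry-run flag, which reports what would happen without transferring or deleting anything:
rclone sync --dry-run --progress ~/Videos/ waterbear:mybucket/videos/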
The sync can also be done from Storj DCS to the local file system.
rclone sync --progress waterbear:mybucket/videos/ ~/Videos/
Or between two Storj DCS buckets.
rclone sync --progress waterbear-us:mybucket/videos/ waterbear-europe:mybucket/videos/
Or even between another cloud storage provider and Storj DCS.
rclone sync --progress s3:mybucket/videos/ waterbear:mybucket/videos/
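After any of these syncs, you can confirm that source and destination match with rclone's check command, which compares the two sides by size and, where the backend supports it, by hash, without transferring anything. Using the first example above:
rclone check ~/Videos/ waterbear:mybucket/videos/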