Storj Share Common Issues

Not sure whether storjshare is running correctly? Not receiving any shards? Don't understand every log line? This troubleshooting guide should help you out. Is something missing? Let us know so we can add it.

1. Installation guides

2. Don't run NPM as root

Running node with full root privileges is a security risk, so NPM will stop the installation and print an error message.

npm install -g storjshare-daemon
...
sh: 1: prebuild: Permission denied
/root/[...]

It is recommended to run the installation as a normal user. If you want to bypass this security check, you can run the installation with --unsafe-perm.
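
For example, if you accept the risk and really want to install as root:

npm install -g storjshare-daemon --unsafe-perm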

3. Wrong Node Version

First of all, check your Node.js version. A wrong Node.js version is the cause of most installation and first-run error messages.

$ npm version
{ npm: '5.6.0',
  ares: '1.10.1-DEV',
  cldr: '31.0.1',
  http_parser: '2.7.0',
  icu: '59.1',
  modules: '57',
  nghttp2: '1.25.0',
  node: '8.9.2',
  openssl: '1.0.2m',
  tz: '2017b',
  unicode: '9.0',
  uv: '1.15.0',
  v8: '6.1.534.48',
  zlib: '1.2.11' }

Node version 8.9.x (LTS) and npm version 5.6.x are recommended. If you don't have the correct Node.js version, install it using one of the following three methods:

Method 1: using n

apt-get install npm
npm install -g n
apt-get remove npm
apt-get autoremove
n lts
# Log off and log back in to pick up the symlink
npm install -g npm

Method 2: using nvm

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.3/install.sh | bash
# Close and reopen your shell, or source your shell profile
# (~/.bash_profile, ~/.zshrc, ~/.profile, or ~/.bashrc)
nvm install --lts

Method 3: download the official installer from https://nodejs.org/en/download/
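
Afterwards, verify the installed versions:

node -v
npm -v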

4. Synchronize your clock

On Windows:

1. Go to http://www.pool.ntp.org/en, find the NTP server physically closest to you, and ping it to check that it is reachable.
2. Download the tool from http://www.timesynctool.com and configure it with the NTP server you found in the previous step.
3. Set the update interval to 15 minutes.

On Linux:

sudo apt-get install ntp ntpdate -y
sudo service ntp stop
sudo ntpdate -s time.nist.gov
sudo service ntp start
timedatectl status
timedatectl list-timezones
sudo timedatectl set-timezone <your timezone>
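
To check your current offset without changing the clock, you can query an NTP server directly:

ntpdate -q pool.ntp.org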

5. Failed to Find Tunnels

Synchronize your clock with a good internet time server, for example time.nist.gov, and try again.

6. Port forwarding

The Storj Share daemon and GUI need at least one open TCP port for p2p communication with other nodes. Storj Share will send and receive a probe message to test the port. If that is unsuccessful, it will try to forward the port via UPnP and, as a fallback, use another node as a tunnel.

Users can set custom ports as shown in the following example.

  // Interface to bind RPC server, use 0.0.0.0 for all interfaces or if you
  // have a public address, use that, else leave 127.0.0.1 and Storj Share
  // will try to determine your address
  "rpcAddress": "Public IP address or DDNS hostname",
  // Port to bind for RPC server, make sure this is forwarded if behind a
  // NAT or firewall - otherwise Storj Share will try to punch out
  "rpcPort": 4000,
  // Enables NAT traversal strategies, first UPnP, then reverse HTTP tunnel
  // if that fails. Disable if you are public or using dynamic DNS
  "doNotTraverseNat": true,
  // Maximum number of tunnels to provide to the network
  // Tunnels help nodes with restrictive network configurations participate
  "maxTunnels": 3,
  // Maximum number of concurrent connections to allow
  "maxConnections": 150,
  // If providing tunnels, the starting and ending port range to open for
  // them
  "tunnelGatewayRange": {
    "min": 4001,
    "max": 4003
  },

In the above example, storjshare will use port 4000 for p2p communication. On port 4001, the built-in tunnel server will wait for tunnel requests and, if needed, open a tunnel port between 4002 and 4003. Setting "doNotTraverseNat": true disables UPnP.

You can use an online port check service like http://www.yougetsignal.com/tools/open-ports/ to test your open ports. The Storj Share CLI or GUI has to be running for this test to work. For the example configuration, port 4000 should be open. Ports 4001 to 4003 will stay closed as long as they are not needed. Which ports need to be open depends on your particular configuration.

Keep in mind that multiple nodes can't share the same open ports. Every instance needs unique open ports.
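
You can also probe a port from a machine on a different network with netcat, for example (replace the address and port with your own):

nc -zv <your public IP or hostname> 4000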

7. Logfile and loglevel

In the Storj Share daemon, the loglevel can be changed via the config file. The default is loglevel info, which includes warning and error messages but not debug output. You can also choose one of the other levels shown in the loglevel table below.

  // Determines how much detail is shown in the log:
  // 4 - DEBUG | 3 - INFO | 2 - WARN | 1 - ERROR | 0 - SILENT
  "loggerVerbosity": 3,
  // Path to write the log file to disk, leave empty to default to:
  // $HOME/.storjshare/storjshare/logs/[nodeid].log
  "loggerOutputFile": "/root/.storjshare/storjshare/logs/188071ba7cfd974a9e47b59e24b0737ebf845db3.log",
Loglevel    Configuration
Off         "loggerVerbosity": 0,
Error       "loggerVerbosity": 1,
Warn        "loggerVerbosity": 2,
Info        "loggerVerbosity": 3,
Debug       "loggerVerbosity": 4,

8. No Response time on the bridge

You can check your node's contact on the bridge: https://api.storj.io/contacts/YOUR_NODEID
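
For example, from the command line:

curl -s https://api.storj.io/contacts/YOUR_NODEID
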
You should get a response like this:

{"lastSeen":"2018-02-28T18:07:50.955Z","port":4986,"address":"173.249.13.57","userAgent":"8.4.2","protocol":"1.2.0","spaceAvailable":true,"responseTime":1294.0466597559177,"reputation":11,"nodeID":"5a9c82bbed6252500c8d2447f08c7be455be9d6d"}

Your node should have a responseTime. A new node starts at 10000; after about a week, on average, it should settle near your node's real response time.

Response time is the difference between the time the bridge sends an ALLOC message to your node and the time it receives your node's response.
It includes all delays: hardware, network, router, ISP, and the bridge itself.
If your node's response time is below average, your node will receive ALLOC messages more often. You can read more in How Storj's Network Works.
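
To extract just the responseTime field, you can pipe the contact record through jq (assuming jq is installed):

curl -s https://api.storj.io/contacts/YOUR_NODEID | jq .responseTime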

9. Clock out of Sync

Every message is signed with a timestamp to guard against replay attacks. Your node compares the signature time with your system clock. If the time difference is too big, your node will ignore the message and log "signature expired". As a side effect, you will also see a lot of "missing or empty reply from contact" messages.

Let's take a look at the following bad example:

{"level":"info","message":"clock is synchronized with ntp, delta: 1362 ms","timestamp":"2017-02-07T00:27:18.555Z"}

If you see something like this in your log, you should sync your clock. An offset of 1 second still works, but you will miss some shards.

Synchronize your clock with a good internet time server like time.nist.gov.

10. How to get shards

Step 1. Receive an ALLOC request

{"level":"info","message":"received valid message from {\"userAgent\":\"8.3.0\",\"protocol\":\"1.2.0\",\"hdKey\":\"xpub6AHweYHAxk1EhJSBctQD1nLWPog6Sy2eTpKQLExR1hfzTyyZQWvU4EYNXv1NJN7GpLYXnDLt4PzN874g6zSjAQdFCHZN7U7nbYKYVDUzD42\",\"hdIndex\":29,\"address\":\"renter-28.renters.prod.storj.io\",\"port\":8400,\"nodeID\":\"b553f297d185ff71af7a28baa9985424bf30c8a6\",\"lastSeen\":1519499275132}","timestamp":"2018-02-28T08:03:35.404Z"}
{"level":"info","message":"handling alloc request from b553f297d185ff71af7a28baa9985424bf30c8a6 hash 961780ced4b0d113284f0bc5fefdfe4b56868bb8 size 101","timestamp":"2018-02-28T08:03:35.405Z"}

Step 2. Check for free space in the buckets and that we don't already have this shard
Not enough disk space? Better to skip this contract and wait for the next one (don't go on to Step 3).
Is there space available? Then check that we don't already have this shard ("no storage item available for this shard").

{"level":"info","message":"handling alloc request from e0bd63770e3bd964d64dc74b089478d59916a5e4 hash 087305de2b7751a71297162a040308da939e6447 size 10485760","timestamp":"2018-02-28T18:42:31.569Z"}
{"level":"debug","message":"contract is associated with connected bridge: true","timestamp":"2018-02-28T18:42:31.569Z"}
{"level":"debug","message":"active transfers 0 is less than offerBackoffLimit 3: true","timestamp":"2018-02-28T18:42:31.569Z"}
{"level":"debug","message":"no storage item available for this shard","timestamp":"2018-02-28T18:42:31.606Z"}
{"level":"debug","message":"negotiator returned: true","timestamp":"2018-02-28T18:42:31.606Z"}
{"level":"debug","message":"max KFS bucket size 15032350453, used 3210483397, free 11821867056, shard size 10485760","timestamp":"2018-02-28T18:42:31.607Z"}
{"level":"debug","message":"we have enough free space: true","timestamp":"2018-02-28T18:42:31.607Z"}
...

Note: The default loglevel 3 (info) will not print this information; the debug output above is only visible with loglevel 4 (debug). See logfile and loglevel for more information.
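
With loglevel 4 enabled, you can check whether your node passes the free-space check by filtering the log (path as set in loggerOutputFile):

grep "enough free space" $HOME/.storjshare/storjshare/logs/<nodeid>.log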

Step 3: Send response to ALLOC request

{"level":"info","message":"Sending alloc response hash 961780ced4b0d113284f0bc5fefdfe4b56868bb8 size 101","timestamp":"2018-02-28T08:03:35.474Z"}
...

Finally, you send your response to the ALLOC request, but remember that you are probably not the only farmer competing for the contract. The renter will select one of the farmers and send the shard; most of the time, you will not be selected.

Step 4: Upload shard

{"level":"info","message":"handling alloc request from 0ba756da5c2d6f39ae323d40c69c1b82434046f9 hash a62e55efa60df1bf87aaa313f5bc5c88002ffbff size 688128","timestamp":"2018-02-28T11:57:58.720Z"}
{"level":"info","message":"Sending alloc response hash a62e55efa60df1bf87aaa313f5bc5c88002ffbff size 688128","timestamp":"2018-02-28T11:57:58.796Z"}
...
{"level":"info","message":"Shard upload completed hash a62e55efa60df1bf87aaa313f5bc5c88002ffbff size 688128","timestamp":"2018-02-28T11:57:59.595Z"}
...

Congratulations! If your node was selected, the renter will upload a shard to you. You will find the shard inside the .ldb files in the sharddata.kfs subfolder of your storagePath (daemon) / storage location (GUI).

Warning: Don't delete any .ldb files yourself! The storage database will become invalid and you will lose all your shards!
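
To see how much space your shards currently occupy, you can check the size of the sharddata.kfs folder (replace <storagePath> with your configured storage location):

du -sh <storagePath>/sharddata.kfs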

Step 5. Mirroring shard
Each shard in the network should have 6 mirrors for reliability. Your node will send a mirror to the next node in the network; mirroring stops once there are 6 mirrors.

{"level":"info","message":"handling mirror request from 9921c1ab5744b503e71d15d557077ab865353f0b hash 9b04d0fde762cfc1b80797736c769b356abe25e8","timestamp":"2018-02-28T08:49:33.419Z"}
{"level":"info","message":"opening data transfer with {\"userAgent\":\"8.4.2\",\"protocol\":\"1.2.0\",\"address\":\"173.249.13.57\",\"port\":4262,\"nodeID\":\"ef4eeb8745d5731812ad948fa4598a4224dd5cf7\",\"lastSeen\":\"2018-02-28T08:08:57.476Z\"} to mirror 9b04d0fde762cfc1b80797736c769b356abe25e8","timestamp":"2018-02-28T08:49:33.420Z"}
...
{"level":"info","message":"successfully mirrored shard hash 9b04d0fde762cfc1b80797736c769b356abe25e8 size 2097152","timestamp":"2018-02-28T08:49:34.200Z"}
...
{"level":"info","message":"Mirror download completed hash 9b04d0fde762cfc1b80797736c769b356abe25e8 size 2097152","timestamp":"2018-02-28T08:49:42.004Z"}
...

Step 6. Download shard
If a renter wants to download their file back, they will request the shards from the nodes that hold them.

{"level":"info","message":"handling storage retrieve request from eb05fe01a1983fc88d7b7e4df3d2e968ab91e3b0 hash 0c3bdd1b1bae70543dd256eac214d06451c849be","timestamp":"2018-02-28T07:26:14.349Z"}
{"level":"info","message":"authorizing download data channel for eb05fe01a1983fc88d7b7e4df3d2e968ab91e3b0","timestamp":"2018-02-28T07:26:14.350Z"}
...
{"level":"info","message":"Shard download completed hash 0c3bdd1b1bae70543dd256eac214d06451c849be size 134217728","timestamp":"2018-02-28T07:28:06.574Z"}

Step 7: Delete expired or corrupted shards

A cleanup job runs every 24 hours to delete expired contracts, freeing up space for new ones.

{"level":"info","message":"starting local database expiration","timestamp":"2018-03-04T00:02:37.512Z"}
{"level":"info","message":"database expiration complete","timestamp":"2018-03-04T00:02:37.513Z"}
...
{"level":"info","message":"starting shard reaper, checking for expired contracts","timestamp":"2018-03-04T00:02:52.938Z"}
...
{"level":"info","message":"destroying shard/contract for 011dd9f62ed22916c97f054e9efad54b085c2af2","timestamp":"2018-03-04T00:02:53.633Z"}
...
{"level":"info","message":"flushing shards, some buckets will be inaccessible","timestamp":"2018-03-04T00:03:01.122Z"}
...
{"level":"info","message":"flushing shards finished","timestamp":"2018-03-04T00:03:34.570Z"}