Also have a look at this cool guide made by Dan.
If you do not already own a Pi 3, please consider buying the Pine Rock64 4GB RAM board here instead (all steps in this guide remain valid): that board can run two nodes, is much faster, and is not much more expensive than a Pi 3.
In this guide we assume that you have a Raspberry Pi 3 set up with Raspbian Jessie Lite (without a user interface). To set up Raspbian Lite, please refer to the official documentation here.
Building your own Storj farming node with a Raspberry Pi 3 and attached storage is the most cost effective way to start farming, because the Raspberry Pi hardware is extremely cheap and power efficient: at a continuous power consumption of about 3.5-5 W, it consumes less than one-sixth of an average consumer laptop.
Before we can start building our very own farming setup, there are a few very important things we should know first, namely:
- We can only run one node on the Raspberry Pi 3, because each node should have at least 1GB of RAM at its disposal to function properly.
- The maximum size for a single node is 8TB, which means we can rent out at most 8TB on the Raspberry Pi, since we cannot add more nodes. If you want to rent out more space, something like an Odroid XU4 is a good alternative.
- One should always connect the Raspberry Pi via a LAN cable to keep latency as low as possible.
Note: There are a variety of different drives one can add to the Pi, for example:
- 2.5" Hard drives
- 3.5" Hard drives
- External USB drives
- USB flash drives, NAS ...
The example below is a setup made by our French community member Olivier Balais, who joined four 2.5" HDDs into a single volume.
As you can see in the pictures below, a USB hub powers the four hard drives. Another power supply is connected to the Raspberry Pi because all USB ports are currently used. If one has a larger USB hub, both the drives and the RPI can be powered directly from it.
Figure 1.0. Farming node with Raspberry and rack made from Meccano parts.
In the example below, four 2.5" drives will be mounted and merged, however, if you are only running a single external USB hard drive, this guide also does a good job at explaining the mounting process.
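Before formatting anything, it is worth confirming which device names the kernel gave your drives; the `sda` through `sdd` names used below are simply what this example setup got and may differ on your Pi:

```shell
# List all block devices with size, type and current mount point,
# so you are sure you are formatting the right drives.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```

Any drive that is already mounted should be unmounted before partitioning.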
First we have to format the hard drives which can be done in the following manner:
```
parted -a opt /dev/sda mkpart primary ext4 0% 100%
mkfs.ext4 -L storj1 /dev/sda1
parted -a opt /dev/sdb mkpart primary ext4 0% 100%
mkfs.ext4 -L storj2 /dev/sdb1
parted -a opt /dev/sdc mkpart primary ext4 0% 100%
mkfs.ext4 -L storj3 /dev/sdc1
parted -a opt /dev/sdd mkpart primary ext4 0% 100%
mkfs.ext4 -L storj4 /dev/sdd1
```
In this example, the four disks were mounted at /mnt/storj1 through /mnt/storj4. Your fstab should look similar to this:
```
#/etc/fstab
LABEL=storj1 /mnt/storj1 ext4 defaults 0 1
LABEL=storj2 /mnt/storj2 ext4 defaults 0 1
LABEL=storj3 /mnt/storj3 ext4 defaults 0 1
LABEL=storj4 /mnt/storj4 ext4 defaults 0 1
/mnt/storj1:/mnt/storj2:/mnt/storj3:/mnt/storj4 /mnt/storjmerge fuse.mergerfs defaults,allow_other,use_ino,fsname=storjmerge 0 0
```
As you can see on the last line of fstab, we use mergerfs to merge all of the hard drives into a single volume, mounted at /mnt/storjmerge.
To set up mergerfs on your Raspberry Pi 3, run the following commands:

```
apt install fuse
wget https://github.com/trapexit/mergerfs/releases/download/2.20.0/mergerfs_2.20.0.debian-wheezy_armhf.deb
dpkg -i mergerfs_2.20.0.debian-wheezy_armhf.deb
rm mergerfs_2.20.0.debian-wheezy_armhf.deb
```
The hard drives are now set up and will mount automatically on reboot.
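The mount points referenced in fstab must exist before anything can be mounted on them. A quick sketch for creating them and checking the merged volume, using the paths from this example:

```shell
# Create the individual and merged mount points listed in /etc/fstab
sudo mkdir -p /mnt/storj1 /mnt/storj2 /mnt/storj3 /mnt/storj4 /mnt/storjmerge

# Mount everything listed in fstab without rebooting
sudo mount -a

# The merged volume should report the combined free space of all four drives
df -h /mnt/storjmerge
```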
There are two possible routes to take here, namely:
If you choose the basic setup route, there is one vital part that has to be set up correctly: your router must have UPnP enabled. If UPnP is not enabled, this will not work. To verify this, log in to your router and check whether UPnP is enabled in the corresponding menu; this option is normally presented as a check-box.
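Besides checking the router menu, you can also test UPnP from the Pi itself with the MiniUPnP client; this is an optional extra tool, not part of the original setup:

```shell
# Install the MiniUPnP command line client
sudo apt install -y miniupnpc

# Ask the router for its UPnP status and current port mappings;
# if this times out, UPnP is likely disabled or unsupported.
upnpc -l
```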
First, open a terminal as a regular (non-root) user and run the following commands in order:
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
- exit the shell
- open a new shell as the same regular user
nvm install --lts
- restart shell
sudo apt-get update && sudo apt-get dist-upgrade
sudo apt-get install git python build-essential -y
All the necessary dependencies to run Storj Share are now installed, so we can install Storj Share itself by executing the following command:
npm install --global storjshare-daemon
Next we create the Storj Share node:
storjshare create --storj=YOUR_STORJ_TOKEN_WALLET_ADDRESS --storage=/mnt/storjmerge/storj.io/
Then, make a script to start everything at once:
```
#!/bin/bash
storjshare daemon
storjshare start --config /path/to/storjconfig/xxxx.json
```
The advanced setup, which includes TCP port forwarding, is explained in the Storj Share Daemon (CLI) guide.
- Open a shell and execute:
env > ~/.env
- Create a `watchdog.sh` script in your home directory.
- Add the following lines to the `watchdog.sh` script and save it.
Note: don't forget to replace the path to each config file with the appropriate path for your own system:

```
#!/bin/bash
. $HOME/.bashrc
. $HOME/.profile
. $HOME/.env

APP=$(ps aux | grep -v grep | grep storjshare)
if [ -z "$APP" ]; then
    echo "Restart storjshare-daemon"
    storjshare daemon
fi

APP=$(ps aux | grep -v grep | grep 'farmer.js --config')
if [ -z "$APP" ]; then
    echo "Restart farmers"
    storjshare start --config $HOME/.config/storjshare/configs/1f100594a6c1830b3d135f537575dea05f41cbf1.json
fi
```
- Make it executable:
chmod +x ~/watchdog.sh
- Create the cron job: run `crontab -e` and add the following lines:

```
*/5 * * * * $HOME/watchdog.sh
@reboot $HOME/watchdog.sh
```
Alternatively, if sourcing the saved environment does not put storjshare on the PATH (for example because Node.js was installed via nvm), the script can load nvm explicitly:

```
#!/bin/bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm
cd $HOME

APP=$(ps aux | grep -v grep | grep storjshare)
if [ -z "$APP" ]; then
    echo "Restart storjshare-daemon"
    storjshare daemon
fi

APP=$(ps aux | grep -v grep | grep 'farmer.js --config')
if [ -z "$APP" ]; then
    echo "Restart farmers"
    storjshare start --config $HOME/.config/storjshare/configs/1f100594a6c1830b3d135f537575dea05f41cbf1.json
fi
```
Monitoring Storj Share nodes on a Pi on a daily basis is a tedious task: one has to run iotop and even ifconfig to gather metrics and understand how the node is behaving. While this is OK at first, it is not a good solution in the long run.
Figure 3.0. The output of running the `storjshare status` command.
Monitoring one's node(s) and their host is really important to help understand how they perform, how to improve their efficiency over time and of course, for getting alerted if something goes wrong.
To simplify the monitoring process, one can set up a very classic combination of Grafana, InfluxDB and collectd. Feel free to replace any of these components with one of their many alternatives, according to your liking. Have a look at Telegraf, for example, as a replacement for collectd.
The first thing to do is to set up collectd, which will be responsible for collecting metrics from your host and from the storjshare-daemon RPC port. Assuming you are using Debian, run the following command:
sudo apt install collectd
Then set up the storj-collectd plugin by running:
npm install -g storj-collectd-plugin
Now edit the config file /etc/collectd/collectd.conf to enable the plugins you are interested in. At the very least, configure the network plugin with the IP address or domain name of the server on which you will set up InfluxDB (use 127.0.0.1 if InfluxDB is on the same host) and add a plugin exec entry for the collectd-storj-exec-plugin:

```
LoadPlugin ...
LoadPlugin exec
LoadPlugin network

<Plugin network>
    Server "IP_SERVER_INFLUXDB" "25826"
</Plugin>

<Plugin exec>
    Interval 120
    Exec "youruser" "collectd-storj-exec-plugin"
</Plugin>
```

Note: You should send metrics no more than once every 2 minutes (Interval 120).
Finally, add the following lines to collectd's types database (typically /usr/share/collectd/types.db):

```
peers value:GAUGE:0:U
shared value:GAUGE:0:U
restarts value:GAUGE:0:U
```
Don’t forget to restart the collectd service:
systemctl restart collectd
Repeat this operation on every node’s host.
It's now time to set up InfluxDB. Assuming you are still using Debian, run the following commands:

```
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "7" && echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" && echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
sudo apt update && sudo apt install influxdb
```
See documentation for more information on the setup process.
Then enable the InfluxDB collectd listener by adding the following lines to the config file /etc/influxdb/influxdb.conf:

```
[collectd]
  enabled = true
  bind-address = ":25826"
  database = "collectd_db"
  typesdb = "/usr/share/collectd/types.db"
```
Now restart influxdb:
sudo systemctl restart influxdb
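Once collectd and InfluxDB are both running, you can check that metrics are actually arriving by querying the collectd database from the influx CLI (database name as configured in the `[collectd]` section above):

```shell
# List the measurements collectd has written so far;
# an empty result means no metrics have arrived yet.
influx -execute 'SHOW MEASUREMENTS' -database collectd_db
```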
Finally, install Grafana wherever you want by executing:
```
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana_4.2.0_armhf.deb
sudo apt-get install -y adduser libfontconfig
sudo dpkg -i grafana_4.2.0_armhf.deb
```
See documentation for more information on the setup process.
There is an alternative distribution: https://bintray.com/fg2it/deb/grafana-on-raspberry/v4.6.3
Now that everything is set up, collectd should already be sending metrics to your InfluxDB datastore. At this point we are ready to build our very own dashboard in Grafana.
Figure 3.1. Grafana dashboard example
A few of the queries used to build this dashboard are shown below.
Downloaded vs Uploaded data per host:
Figure 3.2. Downloaded vs Uploaded data per host
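As a sketch, the InfluxQL behind such a panel could look like the following, assuming collectd's interface plugin is enabled and its data lands in measurements like `interface_rx`/`interface_tx` (measurement and tag names may differ on your setup):

```sql
SELECT non_negative_derivative(mean("value"), 1s)
FROM "interface_rx"
WHERE "type" = 'if_octets' AND $timeFilter
GROUP BY time(1m), "host"
```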
Storj peers per node:
Figure 3.3. Storj peers per node
Storj shared data per node:
Figure 3.4. Storj shared data per node
Don’t forget to add alerts on Grafana according to your needs:
Figure 3.5. Alerting setup in Grafana
Note: It can take a few days before interesting patterns emerge from the graphs.
- Collectd installation and configuration.
- Monitor a server with collectd, influxdb and Grafana.
- Collectd download.
Many users have reported that a Storj Share node running on a RPI 3 can crash during peak times. That is very uncool, because when the Storj Share daemon crashes you could lose the current contracts and the corresponding data to database corruption. Loss of contracts means less data, which obviously means less money!
You can customize a few things in the RPI «BIOS» and config file to free up some more memory for the RPI.
One of the main reasons a Storj Share node crashes on a Raspberry Pi is that transferring large data chunks (shards) requires more CPU and RAM. You can define the maximum shard size in your node's config file so that the Pi only downloads shards it can handle. Finding the best shard size for each system requires some trial and error, but starting at 50-100MB and lowering it a few MB at a time is the best way to find the most stable value.
"maxShardSize": "100MB", parameter can be enabled in the config file by removing the double front slash as can be seen in the snippet below.
Tip: You can also set this parameter to a low value to prevent excessive network usage.
Note: Lowering the maximum shard size will yield fewer shards on average, because you are turning away the larger ones.
"storageAllocation": "2GB", // Max size of shards that will be accepted and stored // Use this and make this lower if you don't have a strong internet connection "maxShardSize": "100MB",
The closest thing to a traditional BIOS for the Raspberry Pi can be found in /boot/config.txt. In this file, you can tweak a lot of parameters, all described here.
To optimize your Raspberry Pi, insert the following lines into your config file:
```
# Settings to optimize Storj farming
force_turbo=1
boot_delay=1
disable_splash=1
# reduce amount of memory dedicated to GPU
gpu_mem=16
# reduce power consumption
dtoverlay=pi3-disable-wifi
dtoverlay=pi3-disable-bt
```
The most interesting parameter is `gpu_mem=16`: it reduces the amount of RAM dedicated to the RPI GPU to a minimum, which frees up some useful megabytes (the default is 64MB) for your Storj Share daemon.
Disabling Wi-Fi and Bluetooth should reduce power consumption a little and also deactivates the associated services.
Of course, do not install anything else on your Raspberry Pi that consumes RAM or CPU.
Setting the log verbosity level in the Storj Share config file to 0 also lowers memory usage by a few MB.
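In the node's JSON config this is the log level field; to the best of our knowledge the key is `loggerVerbosity` (4 = debug, 0 = errors only), so treat the exact name as an assumption to verify against your own config file:

```
// in ~/.config/storjshare/configs/<nodeid>.json
// 0 = errors only; the default is more verbose
"loggerVerbosity": 0,
```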
Finally, if you have set up a monitoring solution as mentioned earlier, please be aware that sending metrics with collectd too often is very inefficient and consumes a lot of CPU/RAM. You should send metrics at most once every 2 minutes.
With this info included in the config, your farming node should be a lot more stable and efficient.
In this guide, we walked through each of the steps, from setting up a Storj Share node to monitoring the node, and finally we looked at how to fine-tune the RPI to get the last bit of performance out of this amazing little device.
He also compiled a very cool Storj farmer distribution map, which you can check out here.