Setup
Below you will find instructions on how to manually set up your dHealth full node.
Requirements
Before starting, read the overview to make sure your hardware meets the requirements.
1. Build the software
In your terminal, run the following:
# Make sure we are inside the home directory
cd $HOME
# Clone the dHealth software
git clone https://github.com/dhealthproject/dhealth.git && cd dhealth
# Checkout the correct tag
git checkout tags/v1.0.0
# Build the software
ignite chain build
# Make sure we are inside the home directory
cd $HOME
# Clone the dHealth Testnet software
git clone https://github.com/dhealthproject/dhealth-testnet.git && cd dhealth-testnet
# Checkout the correct tag
git checkout tags/v2.1.1
# Build the software
ignite chain build
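The build steps above assume Go and the Ignite CLI are already installed, as covered in the overview. Before running them, you can check that the toolchain is available; the loop below is just a convenience sketch:

```shell
# Check that the build toolchain is available before running ignite.
for tool in go ignite; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" version
  else
    echo "missing: $tool"
  fi
done
```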
If the software is built successfully, the executable will be located inside your ~/go/bin
path. If you set up your environment variables correctly in the previous step, you should also be able to run it directly. To check this, try running:
dhealthd version --long
dhealth-testnetd version --long
2. Initialize the dHealth working directory
Configuration files and chain data will be stored inside the $HOME/.dhealth
(mainnet) or $HOME/.dhealth-testnet
(testnet) directory by default. In order to create this folder and all the necessary data we need to initialize a new full node using the dhealth(-testnet)d init
command.
You can provide a custom seed when initializing your node. This is particularly useful because, if you ever need to reset your node, you will be able to regenerate the same private node key instead of having to create a new one.
To provide a custom seed for your private key, proceed as follows:
- Get a new random seed by running
dhealthd keys add node --coin-type 10111 --dry-run
# Example
# dhealthd keys add node --coin-type 10111 --dry-run
#
# - address: dh1zz79gprxynpfd07cwjnkw7feejmq023udrzy9k
# name: node
# pubkey: '{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"A2uSAx2nguk1JOen0w3+A2roG5WRos8S/9MGnK2xNzXJ"}'
# type: local
#
#
# **Important** write this mnemonic phrase in a safe place.
# It is the only way to recover your account if you ever forget your password.
#
# child switch prefer butter angle average crunch business anxiety sock admit staff small outside flavor basket ice catalog coconut world help boss actual daughter
dhealth-testnetd keys add node --coin-type 10111 --dry-run
# Example
# dhealth-testnetd keys add node --coin-type 10111 --dry-run
#
# - address: tdh021prwndcp3zl8dlhszay5jl533nug4p6ymqcvvg5
# name: node
# pubkey: '{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"A9wTLFkNa7toLmfg8X4AcnDi9Qwicwo844yKfEqpm57y"}'
# type: local
#
#
# **Important** write this mnemonic phrase in a safe place.
# It is the only way to recover your account if you ever forget your password.
#
# miss hope meadow box antique steak enhance shaft blanket sustain rubber young crucial flat public summer adult silent above butter furnace bleak jealous real
This will create a new key without adding it to your keystore, and output the underlying seed.
- Run the init command using the --recover flag.
dhealthd init <your_node_moniker> --chain-id dhealth --recover
dhealth-testnetd init <your_node_moniker> --chain-id dhealth-testnet-2 --recover
You can choose any moniker
value you like. It will be saved in the config.toml
under the working directory.
- Insert the previously outputted secret recovery phrase (mnemonic phrase):
> Enter your bip39 mnemonic
child switch prefer butter angle average crunch business anxiety sock admit staff small outside flavor basket ice catalog coconut world help boss actual daughter
> Enter your bip39 mnemonic
miss hope meadow box antique steak enhance shaft blanket sustain rubber young crucial flat public summer adult silent above butter furnace bleak jealous real
This will generate the working files in ~/.dhealth
(mainnet) or ~/.dhealth-testnet
(testnet).
TIP
By default, running dhealth(-testnet)d init <your_node_moniker> without the --recover flag will randomly generate a priv_validator_key.json. There is no way to regenerate this key if you lose it. We recommend running this command with the --recover flag so that you can regenerate the same priv_validator_key.json from the secret recovery phrase (mnemonic phrase).
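Whichever way you initialize, it is worth keeping an offline copy of the validator key. The sketch below uses an example backup directory name; adjust the config path to ~/.dhealth-testnet for testnet:

```shell
# Copy the validator key to a local backup directory (example path).
CONFIG_DIR="$HOME/.dhealth/config"
BACKUP_DIR="$HOME/dhealth-backups"
mkdir -p "$BACKUP_DIR"
if [ -f "$CONFIG_DIR/priv_validator_key.json" ]; then
  cp "$CONFIG_DIR/priv_validator_key.json" "$BACKUP_DIR/"
  chmod 600 "$BACKUP_DIR/priv_validator_key.json"
  echo "validator key backed up to $BACKUP_DIR"
else
  echo "no validator key found; run the init command first"
fi
```

Move the copy (and the mnemonic) somewhere offline afterwards; anyone holding this file can sign blocks as your node.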
3. Get the genesis file
To connect to an existing network, or start a new one, a genesis file is required. The file contains all the settings describing how the genesis block of the network should look.
# Copy the chain's genesis file to your .dhealth/config folder
curl https://rpc.dhealth.com/genesis | jq '.result.genesis' > ~/.dhealth/config/genesis.json
# Copy the chain's genesis file to your .dhealth-testnet/config folder
curl https://rpc-testnet.dhealth.dev/genesis | jq '.result.genesis' > ~/.dhealth-testnet/config/genesis.json
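After downloading, it is a good idea to sanity-check the file. A small sketch (the chain IDs come from the init step above; the checksum is only meaningful if you compare it against one published by a source you trust):

```shell
# Verify the genesis file parses as JSON and has the expected chain ID.
GENESIS="$HOME/.dhealth/config/genesis.json"   # or ~/.dhealth-testnet/config/genesis.json
if jq -e . "$GENESIS" >/dev/null 2>&1; then
  echo "chain_id: $(jq -r '.chain_id' "$GENESIS")"   # expect dhealth (or dhealth-testnet-2)
  sha256sum "$GENESIS"
else
  echo "invalid or missing genesis file at $GENESIS"
fi
```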
4. Setup seeds
The next step is to tell your node how to connect with other nodes that are already present on the network. To do so, we will use the seeds
and persistent_peers
values of the config.toml
file.
Seed nodes are a particular type of node on the network. Your full node will connect to them, and they will provide it with a list of other full nodes present on the network; your full node will then automatically connect to those nodes.
Add the seed nodes and persistent peers to the ~/.dhealth(-testnet)/config/config.toml
file, each one separated by a comma, as follows:
PEERS="[email protected]:26656,[email protected]:26656"
sed -i.bak -E "s|^(seeds[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"| ; \
s|^(persistent_peers[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"| ;" ~/.dhealth/config/config.toml
PEERS="[email protected]:26656,[email protected]:26656"
sed -i.bak -E "s|^(seeds[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"| ; \
s|^(persistent_peers[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"| ;" ~/.dhealth-testnet/config/config.toml
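To confirm the substitution worked (or to see what the sed pattern does without touching your real config), here is a self-contained demo on a throwaway fragment; the peer string abcd1234@203.0.113.1:26656 is a made-up placeholder, not a real dHealth node:

```shell
# Demo of the seeds/persistent_peers substitution on a throwaway config fragment.
TMP=$(mktemp -d)
printf 'seeds = ""\npersistent_peers = ""\n' > "$TMP/config.toml"
PEERS="abcd1234@203.0.113.1:26656"   # placeholder node_id@ip:port
sed -i.bak -E "s|^(seeds[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"| ; \
s|^(persistent_peers[[:space:]]+=[[:space:]]+).*$|\1\"$PEERS\"|" "$TMP/config.toml"
grep -E '^(seeds|persistent_peers)' "$TMP/config.toml"
```

On your real node, run the same grep against ~/.dhealth(-testnet)/config/config.toml to verify both lines now carry your peer list.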
5. State sync
The state sync feature allows new nodes to sync with the chain extremely fast by downloading snapshots created by other full nodes. Update your ~/.dhealth(-testnet)/config/config.toml
file with the following commands:
SNAP_RPC="https://rpc.dhealth.com:443"
LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height); \
# BLOCK_HEIGHT=$((LATEST_HEIGHT - 1000)); \
BLOCK_HEIGHT=$((LATEST_HEIGHT - LATEST_HEIGHT % 1000)); \
TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash)
echo $LATEST_HEIGHT $BLOCK_HEIGHT $TRUST_HASH
sed -i.bak -E "s|^(enable[[:space:]]+=[[:space:]]+).*$|\1true| ; \
s|^(rpc_servers[[:space:]]+=[[:space:]]+).*$|\1\"$SNAP_RPC,$SNAP_RPC\"| ; \
s|^(trust_height[[:space:]]+=[[:space:]]+).*$|\1$BLOCK_HEIGHT| ; \
s|^(trust_hash[[:space:]]+=[[:space:]]+).*$|\1\"$TRUST_HASH\"|" ~/.dhealth/config/config.toml
more ~/.dhealth/config/config.toml | grep 'rpc_servers'
more ~/.dhealth/config/config.toml | grep 'trust_height'
more ~/.dhealth/config/config.toml | grep 'trust_hash'
Alternatively, you can disable state sync and restore the chain data from a snapshot provider instead:
sed -i.bak -E "s|^(enable[[:space:]]+=[[:space:]]+).*$|\1false| ;" ~/.dhealth/config/config.toml
SNAP_NAME=$(curl -s https://ss.dhealth.nodestake.org/ | egrep -o ">20.*\.tar.lz4" | tr -d ">")
curl -o - -L https://ss.dhealth.nodestake.org/${SNAP_NAME} | lz4 -c -d - | tar -x -C $HOME/.dhealth
SNAP_RPC="https://rpc-testnet.dhealth.dev:443"
LATEST_HEIGHT=$(curl -s $SNAP_RPC/block | jq -r .result.block.header.height); \
# BLOCK_HEIGHT=$((LATEST_HEIGHT - 1000)); \
BLOCK_HEIGHT=$((LATEST_HEIGHT - LATEST_HEIGHT % 1000)); \
TRUST_HASH=$(curl -s "$SNAP_RPC/block?height=$BLOCK_HEIGHT" | jq -r .result.block_id.hash)
echo $LATEST_HEIGHT $BLOCK_HEIGHT $TRUST_HASH
sed -i.bak -E "s|^(enable[[:space:]]+=[[:space:]]+).*$|\1true| ; \
s|^(rpc_servers[[:space:]]+=[[:space:]]+).*$|\1\"$SNAP_RPC,$SNAP_RPC\"| ; \
s|^(trust_height[[:space:]]+=[[:space:]]+).*$|\1$BLOCK_HEIGHT| ; \
s|^(trust_hash[[:space:]]+=[[:space:]]+).*$|\1\"$TRUST_HASH\"|" ~/.dhealth-testnet/config/config.toml
more ~/.dhealth-testnet/config/config.toml | grep 'rpc_servers'
more ~/.dhealth-testnet/config/config.toml | grep 'trust_height'
more ~/.dhealth-testnet/config/config.toml | grep 'trust_hash'
6. Open the proper ports
Now that everything is in place to start the node, the last thing to do is to open up the proper ports.
Your node uses several ports to interact with the rest of the chain. In particular, it relies on:
- port 26656 to listen for incoming connections from other nodes;
- port 26657 to expose the RPC service to clients.
Apart from those, it also uses:
- port 9090 to expose the gRPC service that allows clients to query the chain state;
- port 1317 to expose the REST APIs service.
While opening any of these ports is optional, it is beneficial to the whole network if you open port 26656. This allows new nodes to connect to you as a peer, helping them sync faster and making connections more reliable.
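How you open a port depends on your server's firewall. As one example, assuming an Ubuntu host with ufw enabled, the sketch below opens the p2p port and leaves the service ports commented out:

```shell
# Open the p2p port (assumes ufw; adapt for iptables or cloud firewalls).
sudo ufw allow 26656/tcp
# Uncomment only the services you intend to expose publicly:
# sudo ufw allow 26657/tcp   # RPC
# sudo ufw allow 9090/tcp    # gRPC
# sudo ufw allow 1317/tcp    # REST API
```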
This is a list of places where you can enable/disable these ports:
Port | File | Content |
---|---|---|
26656 | ~/.dhealth(-testnet)/config/config.toml | laddr = "tcp://0.0.0.0:26656" |
26657 | ~/.dhealth(-testnet)/config/config.toml | laddr = "tcp://127.0.0.1:26657" (change to tcp://0.0.0.0:26657 to enable external access) |
9090 | ~/.dhealth(-testnet)/config/app.toml | [grpc] section: enable = true, address = "localhost:9090" |
1317 | ~/.dhealth(-testnet)/config/app.toml | [api] section: enable = false (set to true to enable), address = "tcp://localhost:1317" |
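If you decide to enable the REST API, note that app.toml contains several enable keys across different sections, so a blanket sed would flip too much. A section-scoped sketch, demonstrated on a throwaway fragment (point the variable at your real ~/.dhealth(-testnet)/config/app.toml instead):

```shell
# Flip 'enable' to true only inside the [api] section of app.toml.
TMP=$(mktemp -d)
APP_TOML="$TMP/app.toml"   # use your real app.toml path here
printf '[grpc]\nenable = true\n\n[api]\nenable = false\naddress = "tcp://localhost:1317"\n' > "$APP_TOML"
# Restrict the substitution to the lines between [api] and the next section header
sed -i.bak -E '/^\[api\]$/,/^\[/ s|^enable = false$|enable = true|' "$APP_TOML"
grep -A 2 '^\[api\]' "$APP_TOML"
```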
7. Start the dHealth node
After setting up the binary and opening up ports, you are now finally ready to start your node:
# Run dHealth full node
dhealthd start
# Run dHealth Testnet full node
dhealth-testnetd start
The full node will connect to the peers and start syncing. You can check the status of the node by executing:
# Check status of the node
dhealthd status
# Check status of the node
dhealth-testnetd status
You should see an output like the following one:
{
"NodeInfo": {
"protocol_version": {
"p2p": "8",
"block": "11",
"app": "0"
},
"id": "67243a0ed11567250aa02d5e47f6c4a0b8313975",
"listen_addr": "tcp://0.0.0.0:26656",
"network": "dhealth",
"version": "0.37.2",
"channels": "40202122233038606100",
"moniker": "api-01",
"other": {
"tx_index": "on",
"rpc_address": "tcp://0.0.0.0:26657"
}
},
"SyncInfo": {
"latest_block_hash": "3ABCF50175E09C3548E750E1E4BA438C083F2355BAD7DE9CF0FDC64AC8D5FF46",
"latest_app_hash": "1D3426B637BD62BC111BEBB7ADE21FA03BDBA9D1C63DD5E20CDDBE335E11C868",
"latest_block_height": "115401",
"latest_block_time": "2024-03-25T10:45:28.586382489Z",
"earliest_block_hash": "9BA9BD50D56E9A68716ACC60DE0303D086EA8DFF1EAB401EF21DEA78580F6949",
"earliest_app_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
"earliest_block_height": "1",
"earliest_block_time": "2024-03-18T10:30:00Z",
"catching_up": false
},
"ValidatorInfo": {
"Address": "BE872FA870BEB71BA554D23E6D13ED0A1DA3AB86",
"PubKey": {
"type": "tendermint/PubKeyEd25519",
"value": "ltZOzYeqtT8O/RxI7jc28SJJJliEiL+Eu2ycxbB6YHw="
},
"VotingPower": "0"
}
}
{
"NodeInfo": {
"protocol_version": {
"p2p": "8",
"block": "11",
"app": "0"
},
"id": "fc0c8cc6ea5aa458c9c980ef5d3f1f76861f8b4b",
"listen_addr": "tcp://0.0.0.0:26656",
"network": "dhealth-testnet-2",
"version": "0.37.1",
"channels": "40202122233038606100",
"moniker": "dhealth-validator",
"other": {
"tx_index": "on",
"rpc_address": "tcp://0.0.0.0:26657"
}
},
"SyncInfo": {
"latest_block_hash": "FFBD78F309C68A0317EA7B10A5F68659C6678AC35EB207CADA37BCA5683EAC87",
"latest_app_hash": "84095DD11EDCC29F89C99ED051DD2549C91CB7534F9B6D43FF6821399C0D3D21",
"latest_block_height": "97076",
"latest_block_time": "2024-03-25T13:27:26.346698953Z",
"earliest_block_hash": "A5EFCA75F70F04C0D8CCD52B0CCEEC18AC38DD27F37E3DB38C3C93DB4CCE8087",
"earliest_app_hash": "E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855",
"earliest_block_height": "1",
"earliest_block_time": "2024-03-11T10:43:10.481333395Z",
"catching_up": false
},
"ValidatorInfo": {
"Address": "5B0A8A6E9FCF0E782EABFE47A81E682ECE40D049",
"PubKey": {
"type": "tendermint/PubKeyEd25519",
"value": "dUmBZRkS6IApthCk9Or3tBIiFgUHKxQqrVapshmeF1I="
},
"VotingPower": "501761339"
}
}
If the catching_up
value under SyncInfo
is false
, your node is fully synced. If it is true
, your node is still syncing. You can get the catching_up
value by simply running:
dhealthd status 2>&1 | jq "{catching_up: .SyncInfo.catching_up}"
# Example
# $ dhealthd status 2>&1 | jq "{catching_up: .SyncInfo.catching_up}"
# {
# "catching_up": false
# }
dhealth-testnetd status 2>&1 | jq "{catching_up: .SyncInfo.catching_up}"
# Example
# $ dhealth-testnetd status 2>&1 | jq "{catching_up: .SyncInfo.catching_up}"
# {
# "catching_up": false
# }
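If you script around your node, the same check can be turned into a small wait loop. This helper is only a sketch; call wait_for_sync once your node is running, and note it assumes jq is installed:

```shell
# Define a helper that blocks until the node reports catching_up == false.
wait_for_sync() {
  local bin="${1:-dhealthd}"   # pass dhealth-testnetd for testnet
  until [ "$("$bin" status 2>&1 | jq -r '.SyncInfo.catching_up' 2>/dev/null)" = "false" ]; do
    echo "still syncing, checking again in 30s..."
    sleep 30
  done
  echo "node is fully synced"
}
```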
After your node is fully synced, you can consider running your full node as a validator node.
8. (Optional) Configure the background service
To allow your dhealth(-testnet)d instance to run in the background as a service, execute the following command:
sudo tee /etc/systemd/system/dhealthd.service > /dev/null <<EOF
[Unit]
Description=dhealthd Service
After=network-online.target
[Service]
User=$USER
ExecStart=$(which dhealthd) start
Restart=always
RestartSec=3
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
sudo tee /etc/systemd/system/dhealth-testnetd.service > /dev/null <<EOF
[Unit]
Description=dhealth-testnetd Service
After=network-online.target
[Service]
User=$USER
ExecStart=$(which dhealth-testnetd) start
Restart=always
RestartSec=3
LimitNOFILE=65535
[Install]
WantedBy=multi-user.target
EOF
Once you have successfully created the service, you need to enable it. You can do so by running
sudo systemctl daemon-reload
sudo systemctl enable dhealthd
sudo systemctl daemon-reload
sudo systemctl enable dhealth-testnetd
After this, you can run it by executing
sudo systemctl start dhealthd
sudo systemctl start dhealth-testnetd
Service operations
Check the service status
If you want to see if the service is running properly, you can execute
systemctl status dhealthd
systemctl status dhealth-testnetd
If everything is running smoothly you should see something like
systemctl status dhealthd
● dhealthd.service - dhealthd Service
Loaded: loaded (/etc/systemd/system/dhealthd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2024-03-24 11:11:27 UTC; 23h ago
Main PID: 64029 (dhealthd)
Tasks: 12 (limit: 9389)
Memory: 911.5M
CPU: 1h 47min 37.111s
CGroup: /system.slice/dhealthd.service
└─64029 /home/ubuntu/go/bin/dhealthd start
systemctl status dhealth-testnetd
● dhealth-testnetd.service - dhealth-testnetd Service
Loaded: loaded (/etc/systemd/system/dhealth-testnetd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2024-03-24 11:11:27 UTC; 23h ago
Main PID: 64029 (dhealth-testnetd)
Tasks: 12 (limit: 9389)
Memory: 911.5M
CPU: 1h 47min 37.111s
CGroup: /system.slice/dhealth-testnetd.service
└─64029 /home/ubuntu/go/bin/dhealth-testnetd start
Check the node status
If you want to follow the live log output of the node, you can do so by running
journalctl -u dhealthd -f
journalctl -u dhealth-testnetd -f
Stopping the service
If you wish to stop the service, you can do so by running
systemctl stop dhealthd
systemctl stop dhealth-testnetd
To confirm the service stopped successfully, execute systemctl status dhealthd
. This should return something like
systemctl status dhealthd
○ dhealthd.service - dHealth Mainnet Service
Loaded: loaded (/etc/systemd/system/dhealthd.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2024-03-25 10:57:17 UTC; 7s ago
Process: 64766 ExecStart=/home/ubuntu/go/bin/dhealthd start (code=exited, status=0/SUCCESS)
Main PID: 64766 (code=exited, status=0/SUCCESS)
CPU: 1d 5h 30min 49.454s
systemctl status dhealth-testnetd
○ dhealth-testnetd.service - dHealth-testnetd Service
Loaded: loaded (/etc/systemd/system/dhealth-testnetd.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2024-03-25 10:57:17 UTC; 7s ago
Process: 64766 ExecStart=/home/ubuntu/go/bin/dhealth-testnetd start (code=exited, status=0/SUCCESS)
Main PID: 64766 (code=exited, status=0/SUCCESS)
CPU: 1d 5h 30min 49.454s