Running a Centrifuge node

Introduction

Running a full node allows you to query Centrifuge Chain blocks through its RPC endpoints. Whether you are a dapp developer or simply want to be fully trustless and run your own node, this guide will teach you how to set up your own full or archive node.

Hardware requirements

  • Minimum: 2+ CPU cores, 4GB+ RAM, 200GB+ free storage space
  • Recommended: 4+ CPU cores, 16GB RAM, 1TB SSD or faster storage

Note: Syncing and runtime upgrades may put extra load on the node. It is recommended to increase the resources until the node is fully synced. Use a process manager to restart the process if it hits its memory limit, hangs, or crashes.
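If you run the node under systemd (covered later in this guide), one way to follow this advice is a drop-in that restarts the service and caps its memory. This is only a sketch: the unit name centrifuge.service and the 14G limit are assumptions, so adjust both to your own setup.

```
# /etc/systemd/system/centrifuge.service.d/limits.conf
[Service]
Restart=always
RestartSec=10
# Let systemd kill and restart the node before it exhausts the host's memory
MemoryMax=14G
```

After creating the drop-in, run systemctl daemon-reload so systemd picks it up.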

CLI arguments

In this section, we'll go over the recommended arguments for running a full node.

Full node

Some of our recommended settings are commented for clarification; the rest are documented in Parity's node documentation.

--port=30333 # p2p listening port
--rpc-port=9933 # RPC listening port
--rpc-external # Listen on public interfaces
--rpc-cors=all # Adjust depending on your needs
--rpc-max-request-size=40 # This and the next two prevent the node from returning 429 Too Many Requests errors. Adjust depending on your load
--rpc-max-response-size=40
--rpc-max-connections=512
--in-peers=100 # Max ingress connections
--out-peers=100 # Max egress connections
--db-cache=2048 # DB cache in MB of RAM - adjust to your hardware setup
--chain=centrifuge
--parachain-id=2031
--base-path=/data
--log=main,info,xcm=trace,xcm-executor=trace
--database=rocksdb
--execution=wasm
--wasm-execution=compiled
--bootnodes=/ip4/35.198.171.148/tcp/30333/ws/p2p/12D3KooWDXDwSdqi8wB1Vjjs5SVpAfk6neadvNTPAik5mQXqV7jF
--bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc
--bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p
--name=YOUR_NODE_NAME
--
--execution=wasm
--wasm-execution=compiled
--chain=polkadot

Notes

  • The arguments above the -- are for the parachain; the ones below it are for the relay chain.
  • Bootnodes, parachain-id, and chain options change for each network.
  • Use a descriptive NODE_NAME.
  • Choose log levels based on your setup.

Fast syncing

Centrifuge nodes support fast syncing via --sync=warp or --sync=fast, in both the parachain and the relay chain argument sections.
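For example, to warp-sync both chains, the sync flag must appear on both sides of the -- separator (an abbreviated sketch; the full argument list is shown in the Full node section above):

```
--chain=centrifuge
--sync=warp
# ... remaining parachain arguments ...
--
--chain=polkadot
--sync=warp
```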

Archive node

Same as above, but add --pruning=archive before the -- in the CLI arguments. Archive nodes do not support fast syncing, so the --sync= options can only be added to the relay chain section below the --.

Arguments formatting

The specific format will depend on how you deploy your node:

Docker/Kubernetes

- "--port=30333"
- "--rpc-port=9933"
...
- "--chain=polkadot"
- "--sync=fast"

Systemd

ExecStart=/var/lib/centrifuge-data/centrifuge-chain --port=30333 --rpc-port=9933 ... \
  -- ... \
  --sync=fast

Network values

Mainnet (Centrifuge Chain)

Bootnodes:

--bootnodes=/ip4/35.198.171.148/tcp/30333/ws/p2p/12D3KooWDXDwSdqi8wB1Vjjs5SVpAfk6neadvNTPAik5mQXqV7jF
--bootnodes=/ip4/34.159.117.205/tcp/30333/ws/p2p/12D3KooWMspZo4aMEXWBH4UXm3gfiVkeu1AE68Y2JDdVzU723QPc
--bootnodes=/dns4/node-7010781199623471104-0.p2p.onfinality.io/tcp/23564/ws/p2p/12D3KooWSN6VXWPvo1hoT5rb5hei5B7YdTWeUyDcc42oTPwLGF2p

Chain args:

--chain=centrifuge
--parachain-id=2031
--
--chain=polkadot

Testnet (Centrifuge Demo)

Bootnodes:

--bootnodes=/ip4/35.246.168.210/tcp/30333/p2p/12D3KooWCtdW3HWLuxDLD2fuTZfTspCJDHWxnonKCEgT5JfGsoYQ
--bootnodes=/ip4/34.89.182.4/tcp/30333/p2p/12D3KooWETyS1VZTS4fS7dBZpXbPKMP129dy4KpFSWoErBWJ5i5d
--bootnodes=/ip4/35.198.144.90/tcp/30333/p2p/12D3KooWMJPzvEp5Jhea8eKsUDufBbAzGrn265GcaCmcnp3koPk4

Chain args:

--chain=/resources/demo-spec-raw.json
--parachain-id=2031
--
--chain=/resources/westend-alphanet-raw-specs.json

demo-spec-raw.json and westend-alphanet-raw-specs.json can be found either at the paths above inside the Docker container or in the node/res/ folder of the codebase.

Recommended deployments

Docker

You can use the container image published on the Centrifuge Docker Hub repo, or be fully trustless by cloning the Centrifuge Chain repository and building from the Dockerfile (2-4h build time on an average machine). If you build the image yourself, make sure you have checked out the latest tag for the most recent release:

git clone https://github.com/centrifuge/centrifuge-chain.git
git checkout vX.Y.Z
docker buildx build -f docker/centrifuge-chain/Dockerfile . -t YOUR_TAG

docker-compose

Create a docker-compose.yml file with the contents below, adjusting the following:

  • Change the ports based on your network setup.
  • Replace /mnt/my_volume/data with the volume and/or data folder you want to use.
  • Optional: To run it as an archive node, add "--pruning=archive" before "--name".
version: '3'
services:
  centrifuge:
    container_name: centrifuge-chain
    image: "centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE]"
    platform: "linux/amd64"
    restart: on-failure
    ports:
      - "30333:30333"
      - "9944:9933"
    volumes:
      # Mount your biggest drive
      - /mnt/my_volume/data:/data
    command:
      - "--port=30333"
      ...
      - "--"
      ...
      - "--chain=polkadot"
      - "--sync=fast"

Refer to the CLI arguments section above.

Running the container

docker compose pull && docker compose up -d

Kubernetes

We recommend using a stateful set to run multiple replicas and balance the load between them via an ingress.

WARNING: these Kubernetes manifests will not work as-is; they are included in this guide to give experienced Kubernetes operators a starting point. Centrifuge cannot provide Kubernetes support to node operators; use at your own risk.

StatefulSet Example

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: fullnode-cluster
  name: fullnode-cluster
spec:
  serviceName: "fullnode-cluster"
  replicas: 2
  selector:
    matchLabels:
      app: fullnode-cluster
  template:
    metadata:
      labels:
        app: fullnode-cluster
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: fullnodes16
      containers:
        - args:
            - --rpc-cors=all
            - --rpc-methods=unsafe
            ...
            - --execution=wasm
            - --wasm-execution=compiled
            - --
            ...
            - --sync=fast
          image: centrifugeio/centrifuge-chain:[DOCKER_TAG]
          imagePullPolicy: IfNotPresent
          name: fullnodes-cluster
          livenessProbe:
            httpGet:
              path: /health
              port: 9933
            initialDelaySeconds: 60
            periodSeconds: 120
          ports:
            - containerPort: 9933
              protocol: TCP
            - containerPort: 30333
              protocol: TCP
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
        - name: rpc-health
          image: paritytech/ws-health-exporter
          env:
            - name: WSHE_NODE_RPC_URLS
              value: "ws://127.0.0.1:9933"
            - name: WSHE_NODE_MIN_PEERS
              value: "2"
            - name: WSHE_NODE_MAX_UNSYNCHRONIZED_BLOCK_DRIFT
              value: "2"
          ports:
            - containerPort: 8001
              name: http-ws-he
          resources:
            limits:
              cpu: "250m"
              memory: 0.5Gi
            requests:
              cpu: "250m"
              memory: 0.5Gi
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8001
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 3
            failureThreshold: 1
      initContainers:
        - name: fix-permissions
          command:
            - sh
            - -c
            - |
              chown -R 1000:1000 /data
          image: busybox
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
  volumeClaimTemplates:
    - metadata:
        name: storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1200G
        storageClassName: standard-rwo

Networking

NOTE: The example below does not include SSL or any other advanced proxy settings. Adjust to your own needs.

---
# Service to balance traffic between replicas:
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster-ha
  namespace: centrifuge
spec:
  selector:
    app: fullnode-cluster
  ports:
    - protocol: TCP
      port: 9933
---
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster
  namespace: centrifuge
spec:
  clusterIP: None
  selector:
    app: fullnode-cluster
  ports:
    - name: tcp
      port: 9933
      targetPort: 9933
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    <ADD_YOUR_OWN>
  name: fullnode-ha-proxy
  namespace: centrifuge
spec:
  ingressClassName: nginx-v2
  rules:
    - host: <YOUR_FQDN_HERE>
      http:
        paths:
          - backend:
              service:
                name: fullnode-cluster
                port:
                  number: 9933
            path: /
            pathType: ImplementationSpecific

Ubuntu binaries and systemd

Prepare user and folder

adduser centrifuge_service --system --no-create-home
mkdir /var/lib/centrifuge-data # Or use a folder of your choosing, but replace all occurrences of /var/lib/centrifuge-data below accordingly
chown -R centrifuge_service /var/lib/centrifuge-data

Getting the binary

A. Build your own (recommended)

-> Replace [INSERT_RELEASE_HERE] with the latest release vX.Y.Z

# Installing these dependencies is only required on Debian-based distros:
sudo apt-get install cmake pkg-config libssl-dev git clang libclang-dev protobuf-compiler
git clone https://github.com/centrifuge/centrifuge-chain.git
cd centrifuge-chain
git checkout [INSERT_RELEASE_HERE]
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source "$HOME/.cargo/env" # Make cargo available in the current shell
./scripts/install_toolchain.sh
cargo build --release
cp ./target/release/centrifuge-chain /var/lib/centrifuge-data

B. Extract from a Docker image

Pick an appropriate mainnet image for mainnet binaries. Keep in mind that the retrieved binary is built for Linux.

docker run --rm --name centrifuge-cp -d centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE] --chain centrifuge
docker cp centrifuge-cp:/usr/local/bin/centrifuge-chain /var/lib/centrifuge-data

Configure systemd

Create systemd service file

We are now ready to start the node, but to ensure it is running in the background and auto-restarts in case of a server failure, we will set up a service file using systemd. Change the ports based on your network setup.

Notes

  • It is important to keep each --bootnodes $ADDR on a single line; otherwise the arguments are not parsed correctly and the chain cannot find peers, as no bootnodes will be present.

  • To run it as an archive node, add --pruning=archive \ before --name below.

sudo tee <<'EOF' >/dev/null /etc/systemd/system/centrifuge.service
[Unit]
Description="Centrifuge systemd service"
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=10
User=centrifuge_service
SyslogIdentifier=centrifuge
SyslogFacility=local7
KillSignal=SIGHUP
ExecStart=/var/lib/centrifuge-data/centrifuge-chain --port=30333 --rpc-port=9933 ... \
  -- ... \
  --sync=fast

[Install]
WantedBy=multi-user.target
EOF

Refer to the CLI arguments section above.

Start the systemd service

Enable the previously generated service and start it.

sudo systemctl enable centrifuge.service
sudo systemctl start centrifuge.service

If everything was set up correctly, your node should now start synchronizing. This will take several hours, depending on your hardware. To check the status of the running service or to follow the logs, use:

sudo systemctl status centrifuge.service
sudo journalctl -u centrifuge.service -f

Test and health monitoring

Once your node is fully synced, you can run a cURL request to check its status. If your node is externally available, replace localhost with your URL.

curl -H "Content-Type: application/json" -d '{"id":1, "jsonrpc":"2.0", "method": "eth_syncing", "params":[]}' localhost:9933

Expected output if node is synced is {"jsonrpc":"2.0","result":false,"id":1}.
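To script this check, a minimal sketch could parse the response for "result":false. The helper name check_synced and the NODE_URL default are illustrative, not part of the node's tooling:

```shell
#!/bin/sh
# check_synced parses an eth_syncing response;
# a fully synced node returns "result":false.
check_synced() {
  case "$1" in
    *'"result":false'*) echo "node is synced" ;;
    *) echo "node is still syncing or unreachable" ;;
  esac
}

# Point NODE_URL at your own RPC endpoint if the node is not local.
NODE_URL="${NODE_URL:-http://localhost:9933}"
RESPONSE=$(curl -s -m 5 -H "Content-Type: application/json" \
  -d '{"id":1, "jsonrpc":"2.0", "method": "eth_syncing", "params":[]}' \
  "$NODE_URL")
check_synced "$RESPONSE"
```

A wrapper like this is convenient for cron jobs or load-balancer health checks, where the exit message can be turned into an alert.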

Use ws-health-exporter

You can monitor your node to make sure it is ready to serve RPC calls using Parity's ws-health-exporter.

More info is available on Parity's Docker Hub page.

Monitoring

As with any blockchain, the storage will eventually run out. It is recommended to monitor your storage or use auto-scaling storage to account for this. It is also recommended to set up a reverse proxy or an API gateway to monitor API calls and track response rates and response codes for errors over time. How to do this is out of the scope of this documentation.
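As a starting point for the storage advice above, a cron-able sketch like the following checks how full the data volume is. DATA_DIR and the 90% threshold are placeholders; point DATA_DIR at your node's data directory, e.g. /var/lib/centrifuge-data or the volume mounted at /data:

```shell
#!/bin/sh
# Warn once the node's data volume crosses a usage threshold.
DATA_DIR="${DATA_DIR:-.}"
THRESHOLD="${THRESHOLD:-90}"
# Column 5 of portable df output is the capacity percentage, e.g. "42%".
USAGE=$(df -P "$DATA_DIR" | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "WARNING: $DATA_DIR is ${USAGE}% full"
else
  echo "OK: $DATA_DIR is ${USAGE}% full"
fi
```

Wire the WARNING branch into whatever alerting you already use; dedicated monitoring stacks (e.g. node exporters) are the more robust long-term option.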

Troubleshooting

Error logs during syncing

During fast syncing it is expected to see the following error messages on the [Relaychain] side.

ERROR tokio-runtime-worker sc_service::client::client: [Relaychain] Unable to pin block for finality notification. hash: 0x866f…387c, Error: UnknownBlock: State already discarded [...]
WARN tokio-runtime-worker parachain::runtime-api: [Relaychain] cannot query the runtime API version: Api called for an unknown Block: State already discarded [...]

As long as the following logs are seen

INFO tokio-runtime-worker substrate: [Relaychain] ⚙️ Syncing, target=#18279012 (9 peers), best: #27674 (0x28a4…6fe6), finalized #27648 (0x406d…b89e), ⬇ 1.1MiB/s ⬆ 34.6kiB/s
INFO tokio-runtime-worker substrate: [Parachain] ⚙️ Syncing 469.4 bps, target=#4306117 (15 peers), best: #33634 (0x79d2…0a45), finalized #0 (0xb3db…9d82), ⬇ 1.3MiB/s ⬆ 2.0kiB/s

everything is working correctly. Once the chain is fully synced, the errors are expected to vanish.

Stalled syncing

If the chain stops syncing, often due to unavailable blocks, restart your node. In most cases the cause is that your node's p2p view is momentarily incorrect, making it drop its peers and become unable to sync further. A restart helps in these cases.

Example logs will look like the following:

WARN tokio-runtime-worker sync: [Parachain] 💔 Error importing block 0x88591cb0cb4f66474b189a34abab560e335dc508cb8e7926343d6cf8db6840b7: consensus error: Import failed: Database

Changed bootnode or peer identities

It is common that bootnodes change their p2p-identity, leading to the following logs:

WARN tokio-runtime-worker sc_network::service: [Relaychain] 💔 The bootnode you want to connect to at `/dns/polkadot-bootnode.polkadotters.com/tcp/30333/p2p/12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y` provided a different peer ID `12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3` than the one you expect `12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y`.

These logs can be safely ignored.
