Running a Centrifuge node


Running a full node allows you to query Centrifuge Chain blocks through its RPC endpoints. Whether you're a dApp developer or you simply want to be fully trustless and run your own node, this guide will teach you how to set up your own full or archive node.

Hardware requirements

  • minimum: 2+ CPU cores, 4GB+ RAM, 200GB+ free storage space
  • recommended: 4+ CPU cores, 16GB+ RAM, 1TB+ SSD or faster storage

Note: Syncing and runtime upgrades might put extra load on the node. It is recommended to increase the resources until the node is fully synced, and to use a process manager that restarts the process if it reaches memory limits, hangs, or crashes.

CLI arguments

In this section, we'll go over the recommended arguments for running a full node.

Full node

Some of our recommended settings are commented for clarification; the rest can be found in Parity's node documentation.

--port=30333 # p2p listening port
--rpc-port=9933 # RPC listening port
--rpc-external # Listen on public interfaces
--rpc-cors=all # Adjust depending on your needs
--rpc-max-request-size=40 # This and the two flags below prevent "429: Too Many Requests" errors. Adjust depending on your load
--rpc-max-response-size=40 # See above
--rpc-max-connections=512 # See above
--in-peers=100 # Max ingress connections
--out-peers=100 # Max egress connections
--db-cache=2048 # DB cache in MB of RAM - Adjust to your hardware setup


  • The arguments above the -- separator are for the parachain and the ones below it are for the relay chain.
  • Bootnodes, parachain-id, and chain options will change for each network.
  • Use a descriptive NODE_NAME
  • Choose log levels based on your setup
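Putting it together, a complete invocation follows this shape (a sketch; NODE_NAME is a placeholder and the ... stands for the remaining arguments listed above):

```shell
centrifuge-chain \
  --name=NODE_NAME \
  --port=30333 \
  --rpc-port=9933 \
  ... \
  -- \
  --chain=polkadot \
  --sync=fast
```

Everything before the -- configures the parachain node; everything after it configures the embedded relay-chain node.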

Fast syncing

Centrifuge nodes support fast syncing via --sync=warp and --sync=fast, for both the parachain and the relay chain arguments.
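For example, to warp-sync both sides, the flag appears once in each section (a sketch; the ... stands for your other node flags):

```shell
centrifuge-chain \
  --sync=warp \
  ... \
  -- \
  --sync=warp
```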

Archive node

Everything is the same as above, but add --pruning=archive before the -- in the CLI arguments. Archive nodes do not support fast syncing, so the --sync= options can only be added to the section below the --.
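An archive-node invocation therefore places the flags like this (sketch; the ... stands for your remaining arguments):

```shell
centrifuge-chain \
  --pruning=archive \
  ... \
  -- \
  --sync=fast
```

Note that --pruning=archive sits above the --, while the --sync= option may only appear below it.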

Arguments formatting

The specific format will depend on how you deploy your node:


- "--port=30333"
- "--rpc-port=9933"
- "--"
- "--chain=polkadot"
- "--sync=fast"


ExecStart=/var/lib/centrifuge-data/centrifuge-chain \
  --port=30333 \
  --rpc-port=9933 \
  ...
  -- \
  ...
  --sync=fast

Network values

Mainnet (Centrifuge Chain)



Chain args:


Testnet (Centrifuge Demo)


- --bootnodes=/ip4/
- --bootnodes=/ip4/
- --bootnodes=/ip4/

Chain args:


demo-spec-raw.json and westend-alphanet-raw-specs.json can be found either in the path above for the Docker container or in the node/res/ folder in the codebase.

Recommended deployments


You can use the container published on the Centrifuge Docker Hub repo or be fully trustless by cloning the Centrifuge Chain repository and using the Dockerfile (2-4h build time on an average machine). If you are building the image yourself, make sure you have checked out the latest tag for the most recent release:

git clone
git checkout vX.Y.Z
docker buildx build -f docker/centrifuge-chain/Dockerfile . -t YOUR_TAG


Create a docker-compose.yml file with the contents below, adjusting the following:

  • Change the ports based on your network setup.
  • Replace /mnt/my_volume/data with the volume and/or data folder you want to use.
  • Optional: To run it as an archive node, add "--pruning=archive" before "--name".
version: '3'
services:
  centrifuge-chain:
    container_name: centrifuge-chain
    image: "centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE]"
    platform: "linux/amd64"
    restart: on-failure
    ports:
      - "30333:30333"
      - "9944:9933"
    volumes:
      # Mount your biggest drive
      - /mnt/my_volume/data:/data
    command:
      - "--port=30333"
      ...
      - "--"
      ...
      - "--chain=polkadot"
      - "--sync=fast"

Refer to the CLI arguments section above.

Running the container

docker-compose pull --policy always && docker-compose up -d
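Once started, you can check that the container is up and follow its logs using the container_name from the compose file:

```shell
docker ps --filter name=centrifuge-chain
docker logs -f centrifuge-chain
```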


We recommend using a stateful set to run multiple replicas and balance the load between them via an ingress.

WARNING: Using these K8s manifests as-is will not work; they have been included in this guide to give experienced Kubernetes operators a starting point. Centrifuge cannot provide Kubernetes support to node operators; use at your own risk.

StatefulSet Example

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: fullnode-cluster
  name: fullnode-cluster
spec:
  serviceName: "fullnode-cluster"
  replicas: 2
  selector:
    matchLabels:
      app: fullnode-cluster
  template:
    metadata:
      labels:
        app: fullnode-cluster
    spec:
      nodeSelector:
        fullnodes16
      containers:
        - args:
            - --rpc-cors=all
            - --rpc-methods=unsafe
            ...
            - --execution=wasm
            - --wasm-execution=compiled
            - --
            ...
            - --sync=fast
          image: centrifugeio/centrifuge-chain:[DOCKER_TAG]
          imagePullPolicy: IfNotPresent
          name: fullnodes-cluster
          livenessProbe:
            httpGet:
              path: /health
              port: 9933
            initialDelaySeconds: 60
            periodSeconds: 120
          ports:
            - containerPort: 9933
              protocol: TCP
            - containerPort: 30333
              protocol: TCP
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
        - name: rpc-health
          image: paritytech/ws-health-exporter
          env:
            - value: "ws://"
            - value: "2"
            - value: "2"
          ports:
            - containerPort: 8001
              name: http-ws-he
          resources:
            limits:
              cpu: "250m"
              memory: 0.5Gi
            requests:
              cpu: "250m"
              memory: 0.5Gi
          readinessProbe:
            httpGet:
              path: /health/readiness
              port: 8001
            initialDelaySeconds: 30
            periodSeconds: 2
            successThreshold: 3
            failureThreshold: 1
      initContainers:
        - name: fix-permissions
          command:
            - sh
            - -c
            - |
              chown -R 1000:1000 /data
          image: busybox
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /data/
              name: storage-volume
  volumeClaimTemplates:
    - metadata:
        name: storage-volume
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1200G
        storageClassName: standard-rwo

Networking

NOTE: The example below does not include SSL or any other advanced proxy settings. Adjust to your own needs.

# Service to balance traffic between replicas:
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster-ha
  namespace: centrifuge
spec:
  selector:
    app: fullnode-cluster
  ports:
    - protocol: TCP
      port: 9933
---
apiVersion: v1
kind: Service
metadata:
  name: fullnode-cluster
  namespace: centrifuge
spec:
  clusterIP: None
  selector:
    app: fullnode-cluster
  ports:
    - name: tcp
      port: 9933
      targetPort: 9933
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
  name: fullnode-ha-proxy
  namespace: centrifuge
spec:
  ingressClassName: nginx-v2
  rules:
    - host: <YOUR_FQDN_HERE>
      http:
        paths:
          - backend:
              service:
                name: fullnode-cluster
                port:
                  number: 9933
            path: /
            pathType: ImplementationSpecific

Ubuntu binaries and systemd

Prepare user and folder

adduser centrifuge_service --system --no-create-home
mkdir /var/lib/centrifuge-data # Or use a folder location of your choosing, but replace all occurrences of /var/lib/centrifuge-data below accordingly
chown -R centrifuge_service /var/lib/centrifuge-data

Getting the binary

A. Build your own (recommended)

-> Replace [INSERT_RELEASE_HERE] with the latest release vX.Y.Z

# These dependencies are only needed on Debian distros:
sudo apt-get install cmake pkg-config libssl-dev git clang libclang-dev protobuf-compiler
git clone
cd centrifuge-chain
git checkout [INSERT_RELEASE_HERE]
curl --proto '=https' --tlsv1.2 -sSf | sh
cargo build --release
cp ./target/release/centrifuge-chain /var/lib/centrifuge-data
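Whichever way you obtain the binary, it is worth a quick sanity check before wiring it into systemd:

```shell
/var/lib/centrifuge-data/centrifuge-chain --version
```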

B. Extract from a Docker image

Pick an appropriate mainnet image for mainnet binaries. Keep in mind that the retrieved binary is built for Linux.

docker run --rm --name centrifuge-cp -d centrifugeio/centrifuge-chain:[INSERT_RELEASE_HERE] --chain centrifuge
docker cp centrifuge-cp:/usr/local/bin/centrifuge-chain /var/lib/centrifuge-data

Configure systemd

Create systemd service file

We are now ready to start the node, but to ensure it is running in the background and auto-restarts in case of a server failure, we will set up a service file using systemd. Change the ports based on your network setup.


  • It is important to keep each --bootnodes $ADDR on one line, as otherwise the arguments are not parsed correctly, making it impossible for the chain to find peers since no bootnodes will be present.

  • To run it as an archive node, add --pruning=archive \ before --name below.

sudo tee <<EOF >/dev/null /etc/systemd/system/centrifuge.service
[Unit]
Description="Centrifuge systemd service"
...
ExecStart=/var/lib/centrifuge-data/centrifuge-chain \
  --port=30333 \
  --rpc-port=9933 \
  ...
  -- \
  ...
  --sync=fast

Refer to the CLI arguments section above.
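For reference, a minimal unit-file skeleton might look like the following. The [Service] options shown here are common defaults and an assumption on our part, not the exact contents elided above; adjust them to your setup:

```ini
[Unit]
Description="Centrifuge systemd service"
After=network.target

[Service]
User=centrifuge_service
Restart=on-failure
RestartSec=10
ExecStart=/var/lib/centrifuge-data/centrifuge-chain \
  --port=30333 \
  --rpc-port=9933 \
  -- \
  --sync=fast

[Install]
WantedBy=multi-user.target
```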

Start the systemd service

Enable the previously generated service and start it.

sudo systemctl enable centrifuge.service
sudo systemctl start centrifuge.service

If everything was set up correctly, your node should now start the process of synchronization. This will take several hours, depending on your hardware. To check the status of the running service or to follow the logs, use:

sudo systemctl status centrifuge.service
sudo journalctl -u centrifuge.service -f

Test and health monitoring

Once your node is fully synced, you can run a cURL request to check the status of your node. If your node is externally available, replace localhost with your node's URL.

curl -H "Content-Type: application/json" \
-d '{"id":1, "jsonrpc":"2.0", "method": "eth_syncing", "params":[]}' \

The expected output if the node is synced is {"jsonrpc":"2.0","result":false,"id":1}.
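You can script a simple readiness check around this response. In the sketch below the response is hard-coded for illustration; in practice you would populate RESPONSE from the cURL call above:

```shell
# Hard-coded for illustration; normally: RESPONSE=$(curl -s ... http://localhost:9933)
RESPONSE='{"jsonrpc":"2.0","result":false,"id":1}'
case "$RESPONSE" in
  *'"result":false'*) STATUS="synced" ;;
  *) STATUS="still syncing" ;;
esac
echo "$STATUS"
```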

Use ws-health-exporter

You can monitor your node to make sure it is ready to serve RPC calls using Parity's ws-health-exporter.

More info can be found on Parity's Docker Hub page.


As with any blockchain, the storage will run out eventually. It is recommended to monitor your storage or use some kind of auto-scaling storage to account for this. It is also recommended to set up a reverse proxy or an API gateway to monitor the API calls and track response rates and response codes to spot errors over time. How to do this is out of the scope of this documentation.


Error logs during syncing

During fast syncing it is expected to see the following error messages on the [Relaychain] side.

ERROR tokio-runtime-worker sc_service::client::client: [Relaychain] Unable to pin block for finality notification. hash: 0x866f…387c, Error: UnknownBlock: State already discarded [...]
WARN tokio-runtime-worker parachain::runtime-api: [Relaychain] cannot query the runtime API version: Api called for an unknown Block: State already discarded [...]

As long as the following logs are seen

INFO tokio-runtime-worker substrate: [Relaychain] ⚙️ Syncing, target=#18279012 (9 peers), best: #27674 (0x28a4…6fe6), finalized #27648 (0x406d…b89e), ⬇ 1.1MiB/s ⬆ 34.6kiB/s
INFO tokio-runtime-worker substrate: [Parachain] ⚙️ Syncing 469.4 bps, target=#4306117 (15 peers), best: #33634 (0x79d2…0a45), finalized #0 (0xb3db…9d82), ⬇ 1.3MiB/s ⬆ 2.0kiB/s

everything is working correctly. Once the chain is fully synced, the errors are expected to vanish.

Stalled Syncing

If the chain stops syncing, often due to unavailable blocks, please restart your node. In most cases the reason is that the p2p view of your node is momentarily incorrect, resulting in your node dropping its peers and being unable to sync further. A restart helps in these cases.
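How you restart depends on your deployment; for the setups described in this guide:

```shell
sudo systemctl restart centrifuge.service                            # systemd
docker-compose restart                                               # docker-compose
kubectl rollout restart statefulset/fullnode-cluster -n centrifuge   # Kubernetes
```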

Example logs will look like the following:

WARN tokio-runtime-worker sync: [Parachain] 💔 Error importing block 0x88591cb0cb4f66474b189a34abab560e335dc508cb8e7926343d6cf8db6840b7: consensus error: Import failed: Database

Changed bootnode or peer identities

It is common for bootnodes to change their p2p identity, leading to the following logs:

WARN tokio-runtime-worker sc_network::service: [Relaychain] 💔 The bootnode you want to connect to at `/dns/` provided a different peer ID `12D3KooWPAVUgBaBk6n8SztLrMk8ESByncbAfRKUdxY1nygb9zG3` than the one you expect `12D3KooWCgNAXvn3spYBeieVWeZ5V5jcMha5Qq1hLMtGTcFPk93Y`.

These logs can be safely ignored.