I previously blogged about setting up and monitoring a Cardano relay using Kubernetes. Since I'm storing files on the host's filesystem using a local PersistentVolume and was unable to map a single configuration folder into all nodes (I found no explanation of this online, but having multiple pods try to mount a ReadOnlyMany volume seems to result in all but the first getting stuck pending), I've ended up with a copy of the configuration files for each node. I decided to move at least the topology config into a ConfigMap, since it changes often (when I add/update peer relays from other SPOs), to avoid having to keep synchronising it between nodes.
Posts in this series:
- Step 1: Setting up Cardano Relays using Kubernetes/microk8s
- Step 2: Monitoring Cardano Relays on Kubernetes with Grafana and Prometheus
- Step 3: Using Kubernetes ConfigMaps for Cardano Node Topology Config
- Step 4: Setting up a Cardano Producer node using Kubernetes/microk8s
If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER]! 😀
Creating a ConfigMap
Like the rest of the config, our ConfigMap will live in a .yaml file. Each ConfigMap needs a unique name and can contain multiple files. I've omitted my peers and used just the default IOHK hostname to keep the example shorter.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mainnet-relay-topology
data:
  topology.json: |
    {
      "Producers": [
        {
          "addr": "relays-new.cardano-mainnet.iohk.io",
          "port": 3001,
          "valency": 2
        }
      ]
    }
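To get the ConfigMap into the cluster and double-check its contents, something like the following should do it (I'm assuming the file above is saved as topology-configmap.yaml; use whatever filename you've chosen):

microk8s.kubectl apply -f topology-configmap.yaml
microk8s.kubectl describe configmap mainnet-relay-topology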
Next we need to add the ConfigMap to the volumes and volumeMounts sections of the relay StatefulSet configs. My relay's config now looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cardano-mainnet-relay-deployment
  labels:
    app: cardano-mainnet-relay-deployment
spec:
  serviceName: cardano-mainnet-relay
  replicas: 1
  selector:
    matchLabels:
      app: cardano-mainnet-node
      cardano-mainnet-node-type: relay
  template:
    metadata:
      labels:
        app: cardano-mainnet-node
        cardano-mainnet-node-type: relay
    spec:
      containers:
        - name: cardano-mainnet-relay
          image: inputoutput/cardano-node
          imagePullPolicy: Always
          ############ The path '/topology/topology.json' here was updated
          args: ["run", "--config", "/data/configuration/mainnet-config.json", "--topology", "/topology/topology.json", "--database-path", "/data/db", "--socket-path", "/data/node.socket", "--port", "4000"]
          ports:
            - containerPort: 12798
            - containerPort: 4000
          volumeMounts:
            - name: data
              mountPath: /data
            ############ NEWLY ADDED (START)
            - name: topology
              mountPath: /topology
      volumes:
        - name: topology
          configMap:
            name: mainnet-relay-topology
      ############ NEWLY ADDED (END)
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: local-storage-cardano-mainnet-relay
        resources:
          requests:
            storage: 25Gi
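Before moving on, it's worth noting where the file ends up: the ConfigMap is mounted read-only at /topology inside the container, so once a pod is running you can sanity-check it with something like the following (assuming the usual StatefulSet pod naming, so pod 0 here):

microk8s.kubectl exec pod/cardano-mainnet-relay-deployment-0 -- cat /topology/topology.json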
And that's all there was to it. I microk8s.kubectl apply -f'd the files, deleted the old topology files from disk and restarted the nodes. Checking the logs with microk8s.kubectl logs pod/cardano-mainnet-relay-deployment-0, I saw the node connect out and, after a few minutes of loading the databases, appear back on Grafana processing transactions.
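When I next need to change peers, the plan (an assumption about the workflow rather than something I've battle-tested yet) is to edit topology.json in the ConfigMap yaml, re-apply it, and restart the relays so the node picks up the new topology, roughly:

microk8s.kubectl apply -f topology-configmap.yaml
microk8s.kubectl rollout restart statefulset/cardano-mainnet-relay-deployment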
If you find this post useful or are looking for somewhere to delegate while setting up your own pool, check out my pool [CODER]! 😀