Delegation Node Deployment Tutorial
In this tutorial, we will guide you through the process of deploying a Delegation Node in a Kubernetes environment.
System Requirements
For delegation node deployment, two servers are required: the Node Operations Server to run core program functions, and the Data Server dedicated to user data storage. This architecture optimizes performance and reliability, with the Node Operations Server handling messaging tasks, while the Data Server is reserved for securely storing user information, ensuring both efficiency and data integrity.
Node Operations Server:
Hardware:
CPU: An 8-core x86-64 CPU (AMD or Intel) is required.
Memory: At least 16GB of RAM.
Storage: An SSD is required, with at least 150GB of free space.
Software:
Operating System: Ubuntu 22.04 LTS or above.
Network:
A static public IP address is needed to facilitate incoming connections.
Set up a valid domain name and SSL/TLS certificate, available from registrars like GoDaddy. Configuring a domain name for your node enables users to find and connect to it through SendingMe, ZuChat, or other applications supported by SendingNetwork.
Allow traffic on port 443.
Data Server:
Hardware:
CPU: An 8-core x86-64 CPU (AMD or Intel) is required.
Memory: At least 16GB of RAM.
Storage: An SSD is required, with at least 500GB of free space.
Software:
Operating System: Ubuntu 22.04 LTS or above.
Install Prerequisites
Install Kubernetes on the Node Operations Server
Kubernetes is an open-source container orchestration platform designed for automating the deployment, scaling, and management of containerized applications. Choose from the following installation methods based on your use case:
kubeadm: Recommended for setting up production-grade Kubernetes clusters.
K3s: Lightweight Kubernetes distribution for edge computing and resource-constrained environments.
Cloud Provider Kubernetes Services: Managed Kubernetes services such as Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), or Amazon Elastic Kubernetes Service (EKS).
For detailed installation instructions, refer to the official Kubernetes documentation.
On your Node Operations Server, install K3s by executing the following command:
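For example, using the official K3s installation script (verify the command against the current K3s documentation):

```bash
# Install K3s via the official installation script
curl -sfL https://get.k3s.io | sh -
```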
After installation, check the status of your node:
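For example, with K3s:

```bash
# The node should report STATUS "Ready"
sudo k3s kubectl get nodes
```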
Install Dependencies
Before deploying the Delegation Node, ensure that the following services are installed and properly configured using Kubernetes Deployment and Service:
Download the YAML Reference Files:
Deploy Redis
Redis is a high-performance, open-source, in-memory data store often used as a cache, database, and message broker. To deploy Redis on Kubernetes:
Create the Redis ConfigMap:
Deploy Redis:
Create the Redis Service:
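Each step is applied with kubectl. Assuming the downloaded reference files use names such as redis-configmap.yaml, redis-deployment.yaml, and redis-service.yaml (hypothetical names; adjust them to match the files you downloaded), the three steps above look like this:

```bash
# Apply the Redis ConfigMap, Deployment, and Service
kubectl apply -f redis-configmap.yaml
kubectl apply -f redis-deployment.yaml
kubectl apply -f redis-service.yaml
```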
This ensures Redis is correctly configured for your application, enabling caching and message brokering with high availability.
Deploy NATS
NATS is a high-performance, open-source messaging system designed for real-time, low-latency communication in distributed systems. It is widely used in microservices architectures for pub/sub messaging, event streaming, and service communication.
To deploy NATS on Kubernetes, execute the following commands:
Deploy NATS:
Create NATS Service:
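Assuming the reference files follow a similar naming convention (hypothetical names; adjust to your download), the NATS Deployment and Service can be applied with:

```bash
# Apply the NATS Deployment and Service
kubectl apply -f nats-deployment.yaml
kubectl apply -f nats-service.yaml
```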
Install PostgreSQL on Data Server
PostgreSQL is a robust, open-source relational database management system. To install and configure PostgreSQL on your Data Server, follow the steps below, or download and install the version suitable for your operating system from the PostgreSQL official website.
Install PostgreSQL on Ubuntu:
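A typical installation using the Ubuntu package manager:

```bash
# Install the PostgreSQL server and common extensions
sudo apt update
sudo apt install -y postgresql postgresql-contrib
```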
Login to PostgreSQL:
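For example, as the default postgres superuser:

```bash
# Open a psql session as the postgres user
sudo -u postgres psql
```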
Set the password for the default postgres user as prompted:
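Inside the psql session, the password can be set interactively with the \password meta-command, which prompts for the new password:

```
\password postgres
```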
Create database:
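The database names must match the connection strings in your delegationsts.yaml. As a hypothetical example, admin_platform0 is the database name used in the sample connection string later in this guide; create any additional databases your configuration references:

```sql
-- Example only: create the database referenced by the admin platform connection string
CREATE DATABASE admin_platform0;
```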
To allow external connections, configure PostgreSQL to accept remote connections by modifying the following files:
Modify postgresql.conf: Edit the postgresql.conf file, typically located in /etc/postgresql/x/main/ (where x is the version number). Add or modify the following line:
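This typically means setting listen_addresses; '*' listens on all interfaces (you may restrict it to specific addresses):

```
# postgresql.conf
listen_addresses = '*'
```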
Modify pg_hba.conf: While the previous configuration enables PostgreSQL to accept connections from the specified address, additional setup is required to define the allowed authentication methods at the server level. Edit pg_hba.conf to add or modify the following line. You may replace 0.0.0.0/0 with your specific network segment:
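A typical entry that allows password-authenticated connections from any address looks like the following (your setup may use a different authentication method, such as scram-sha-256):

```
# pg_hba.conf
host    all    all    0.0.0.0/0    md5
```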
After making the necessary changes, restart the PostgreSQL service to apply the new configurations:
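On Ubuntu:

```bash
# Restart PostgreSQL to apply the new configuration
sudo systemctl restart postgresql
```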
This completes the installation and configuration of PostgreSQL, enabling remote access and setting up the required databases for the Delegation Node.
Deploy Delegation Node
Once the above services are properly running, you can proceed to deploy the Delegation Node on your Node Operations Server. The specific steps may depend on your Delegation Node configuration and requirements. Here is a basic example configuration.
If you wish to rename the node, simply replace all instances of dele-delegationnode with your preferred name.
Deploy Delegation Cluster Nodes in Kubernetes
To deploy the Delegation Cluster Nodes, use a StatefulSet in Kubernetes. Follow the steps below to create the necessary resources:
Create the ServiceAccount (sa.yaml)
First, create the ServiceAccount required for the Delegation Node deployment. Run the following command:
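Assuming sa.yaml is the file provided with the YAML reference files, apply it with kubectl:

```bash
# Create the ServiceAccount for the Delegation Node
kubectl apply -f sa.yaml
```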
Edit the delegationsts.yaml Configuration File
Modify the delegationsts.yaml file to reflect the required configuration:
Entropy Configuration: Replace the placeholder for entropy (--entropy=xxxxxxxxxxxxxxxxxxxxxx) with a securely generated 32-character string. Use the following command to generate it:
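One way to produce such a string (the exact command in the original configuration may differ) is with OpenSSL, since 16 random bytes encode to 32 hexadecimal characters:

```bash
# Prints a 32-character hexadecimal string to use as the entropy value
openssl rand -hex 16
```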
PostgreSQL Connection String: Update the -connString placeholder with your actual PostgreSQL connection details, replacing the PASSWORD and IP address with your own.
Admin Platform Configuration: Similarly, replace the admin platform connection string with your admin platform database connection details. Here are some configuration references:
| Parameter | Description |
| --- | --- |
| `-port=8008` | Specifies the port for the admin platform to listen on. |
| `-dendritePort=8012` | Specifies the port for Dendrite to listen on. |
| `-username={{ ADMIN_USERNAME }}` | The admin platform username. |
| `-password={{ ADMIN_PASSWORD }}` | The admin platform password. |
| `-connString=postgresql://postgres:123456@127.0.0.1/admin_platform0?sslmode=disable` | The database connection string. |
Admin Account Credentials: Replace the placeholders for the admin username and password with the credentials you want to assign to the admin account.
| Parameter | Description |
| --- | --- |
| `-whiteListEnable=false` | If set to true, only wallet addresses on the whitelist are permitted to connect to your node. |
| `-blackListEnable=false` | Set to true to block users through the blacklist. |
| `-developerKeyEnable=false` | Set to true to allow users from applications with a whitelisted developer key. |
Apply the Configuration
Once the delegationsts.yaml file has been updated, apply it to your Kubernetes cluster:
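For example:

```bash
# Apply the updated StatefulSet definition
kubectl apply -f delegationsts.yaml
```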
Create the Service
Next, create the necessary Kubernetes Service for the Delegation Node by applying the delegationsvc.yaml file:
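For example:

```bash
# Create the Service that exposes the Delegation Node
kubectl apply -f delegationsvc.yaml
```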
Verify Deployment
After deploying the Delegation Node and service, you should verify that the deployment was successful:
Check Pod Status: Ensure that all pods are up and running:
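For example:

```bash
# All Delegation Node pods should report STATUS "Running"
kubectl get pods
```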
Check Services: Verify that the services are running correctly:
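For example:

```bash
# Note the CLUSTER-IP of the Delegation Node service; it is used later in the NGINX configuration
kubectl get svc
```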
Check Logs: Review the logs of the Delegation Node to confirm it is functioning properly. Replace dele-delegationnode-0 with the name of your Delegation Node pod:
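For example:

```bash
# Stream the logs of the Delegation Node pod (replace the pod name with your own)
kubectl logs -f dele-delegationnode-0
```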
By following these steps, you will have successfully deployed the Delegation Node in Kubernetes and verified its status.
Install Nginx
NGINX is a high-performance, open-source web server and reverse proxy server, commonly used for load balancing, HTTP caching, and routing client requests to backend servers.
Install Nginx on Ubuntu:
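A typical installation from the Ubuntu repositories:

```bash
# Install NGINX
sudo apt update
sudo apt install -y nginx
```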
Configure Nginx
Move the downloaded nginx.conf file to /etc/nginx/conf.d/. Ensure that you update the SSL certificate and domain name in the configuration to match your actual setup.
Replace <domain_name> with your domain name.
Use the CLUSTER-IP obtained from the kubectl get svc command to replace <ip_address>.
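After updating the configuration, you can validate it and reload NGINX (standard NGINX commands; adjust if your distribution differs):

```bash
# Check the configuration syntax, then reload NGINX to pick up the changes
sudo nginx -t
sudo systemctl reload nginx
```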
Once you complete the above steps, NGINX will be installed and configured to handle your traffic routing.
At this point, you have successfully deployed the Delegation Node along with all required services in the Kubernetes environment. You may proceed to further configure and optimize your cluster to meet production standards.
If you encounter any issues or have further needs, don't hesitate to seek support from the SendingNetwork community.