Overview
The NextGen Gateway Collector has been introduced for users who want a High Availability (HA) gateway in their Kubernetes environment. This gateway consists of a single Pod, and the Pod contains a set of containers in the k3s environment.
Refer to OpsRamp’s Collector Bootstrap Tool for general guidelines on how to install and register the NextGen gateway.
Prerequisites
To deploy the HA gateway in your Kubernetes environment, make sure your environment meets these requirements:
- 8 GB Memory
- 50 GB Disk
- 4 CPU cores
- AMD64 architecture
- For High Availability, 3 nodes are recommended
- An additional IP is required for the gateway in HA mode
- An additional IP is required to run squid-proxy in cluster mode
(Refer to the MetalLB IP Range document to learn how to add additional IPs.)
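As a quick pre-flight check, the requirements above can be verified on each node with a short shell sketch. The `check_prereqs` helper below is illustrative, not part of the OpsRamp tooling, and the thresholds are taken directly from the list above:

```shell
#!/bin/sh
# Hypothetical pre-flight check for the prerequisites above:
# 8 GB memory, 50 GB disk, 4 CPU cores, AMD64 (x86_64).
check_prereqs() {
  mem_gb=$1; disk_gb=$2; cpus=$3; arch=$4
  ok=yes
  [ "$mem_gb" -ge 8 ]    || { echo "FAIL: need 8 GB memory, have ${mem_gb} GB"; ok=no; }
  [ "$disk_gb" -ge 50 ]  || { echo "FAIL: need 50 GB disk, have ${disk_gb} GB"; ok=no; }
  [ "$cpus" -ge 4 ]      || { echo "FAIL: need 4 CPU cores, have ${cpus}"; ok=no; }
  [ "$arch" = "x86_64" ] || { echo "FAIL: need AMD64 (x86_64), have ${arch}"; ok=no; }
  if [ "$ok" = yes ]; then echo "PASS"; fi
}

# Check the current machine (Linux-specific sources for each value):
check_prereqs "$(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)" \
              "$(df -BG --output=avail / | tail -1 | tr -dc '0-9')" \
              "$(nproc)" \
              "$(uname -m)"
```

Run the sketch on each node before installation; any `FAIL` line indicates a requirement that is not met.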
Note
This feature is available only in the ISO or OVA provided by OpsRamp.
Install k3s and Enable HA for NextGen Gateway
Follow the steps below to install the HA gateway in a Kubernetes environment.
- Use the following command to check the available options under setup command.
```
opsramp-collector-start setup --help
```

Output:

```
Usage:
  opsramp-collector-start setup [command]

Available Commands:
  init            Install Kubernetes in your hostmachine and configure high availability
  node            Kubernetes node options
  updatehostname  Updates hostmachine name

Flags:
  -h, --help   help for setup

Use "opsramp-collector-start setup [command] --help" for more information about a command.
```
- Update the hostname before installing k3s. Make sure that each node has a unique hostname.

```
opsramp-collector-start setup updatehostname hostname
```
Note: After updating the hostname, exit the virtual machine (VM). Then, re-login to the VM for the changes to take effect.
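Because each node must have a unique hostname, you can sanity-check your planned names before running `updatehostname`. The `unique_names` helper below is an illustrative sketch, not part of the OpsRamp CLI:

```shell
#!/bin/sh
# Print any hostname that appears more than once; empty output means all unique.
unique_names() { printf '%s\n' "$@" | sort | uniq -d; }

unique_names nodea nodeb nodec   # no output: all names are unique
unique_names nodea nodea nodeb   # prints "nodea": duplicate found
```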
- Run the following command to get the available flags to install k3s:

```
opsramp-collector-start setup init --help
```

Available flags:
Flags | Description |
---|---|
--enable-ha (-E) | Enable High Availability (true/false) |
--loadbalancer-ip (-L) | IP for the load balancer |
--repository (-R) | Pull Helm charts and images from a custom repository (default "us-docker.pkg.dev") |
--repo-user (-a) | Repository username |
--repo-password (-s) | Repository password |
--read-repopass-from-file (-f) | Read the repository password from a file |
- Install k3s:
If you don’t want to use the OpsRamp repository and instead want to use your own repository (public or private) for pulling Docker images and Helm charts on all available nodes, follow these steps:
- Open the specified YAML file and uncomment the “configs” section.

```
vi /var/cgw/asserts_k3s/registries.yaml.template
```
- Provide your repository details as follows and ensure proper YAML indentation.

Example:

```yaml
mirrors:
  artifact.registry:
    endpoint:
      - "https://us-docker.pkg.dev"
configs:
  "{private-repo}":
    auth:
      username: "{user}"
      password: "{password}"
```
Install k3s and initialize the gateway on the first node:

```
opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp}
```

OR

To install k3s with a custom pod/service IP range, use the following command:

```
opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp} --cluster-cidr <cluster-cidr-ip> --service-cidr <service-cidr-ip>
```
Note

- You can pass multiple load balancer IP ranges separated by commas.
For example: 192.25.251.12/32, 192.25.251.23/28, 192.25.251.50-192.25.251.56
- If you want to use a single IP address during k3s installation, use “--loadbalancer-ip {loadbalancerIp/32}” in the command.
For example:

```
opsramp-collector-start setup init --enable-ha=true --loadbalancer-ip {loadbalancerIp/32}
```
Load balancer format
The following table describes the supported IP-range formats.

IP-range | Description | Result Value |
---|---|---|
192.25.254.45/32 | Adds a single MetalLB IP | 192.25.254.45 |
192.25.254.45/30 | Adds the 4 IPs of the /30 block containing the given IP | 192.25.254.44 - 192.25.254.47 |
192.25.254.44 - 192.25.254.49 | Adds a custom range of IPs | 192.25.254.44 - 192.25.254.49 |
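The table entries follow standard IPv4 CIDR arithmetic: a /N prefix covers 2^(32-N) addresses, starting at the network base (the given IP with its host bits cleared). A minimal sketch of that arithmetic, assuming POSIX shell; the `cidr_size` and `cidr_base` helper names are illustrative:

```shell
#!/bin/sh
# Number of addresses covered by a /N prefix: 2^(32-N).
cidr_size() { echo $(( 1 << (32 - $1) )); }

# Network base of a.b.c.d/N: the given IP with the host bits cleared.
cidr_base() {
  IFS='./' read -r a b c d n <<EOF
$1
EOF
  ip=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  mask=$(( 0xFFFFFFFF ^ ((1 << (32 - n)) - 1) ))
  base=$(( ip & mask ))
  echo "$(( (base >> 24) & 255 )).$(( (base >> 16) & 255 )).$(( (base >> 8) & 255 )).$(( base & 255 ))"
}

cidr_size 32                 # -> 1 address (a single MetalLB IP)
cidr_size 30                 # -> 4 addresses
cidr_base 192.25.254.45/30   # -> 192.25.254.44 (start of the /30 block)
```

This is why 192.25.254.45/30 in the table resolves to the range 192.25.254.44 - 192.25.254.47 rather than starting at .45.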
- If you want to add a new node to the cluster, use the following two commands.
- Run the following command to generate the k3s token on the first node.

```
opsramp-collector-start setup node token
```
- To join the new VM to the existing cluster, run the following command. Here, {NodeIp} is the first node’s IP and the token is the one generated in the previous step.

```
opsramp-collector-start setup node add -u https://{NodeIp}:6443 -t token
```
- K3s is now installed on the new node.
- Repeat the previous step to add the 2nd and 3rd nodes to the cluster. This completes the setup of the 3-node HA NextGen gateway.
After K3s is installed, register the gateway with the OpsRamp Cloud.
Register the gateway
Refer to the gateway registration documentation to learn how to register the gateway with the OpsRamp Cloud.
Commands to check the status of the cluster
Make sure all the nodes are in Ready state.

```
kubectl get nodes
```

Sample output:

```
NAME    STATUS   ROLES                       AGE     VERSION
nodea   Ready    control-plane,etcd,master   8m3s    v1.23.5+k3s1
nodeb   Ready    control-plane,etcd,master   5m13s   v1.23.5+k3s1
nodec   Ready    control-plane,etcd,master   4m5s    v1.23.5+k3s1
```
Make sure longhorn and metallb are deployed successfully.

```
helm list -A
```

Sample output:

```
NAME      NAMESPACE        REVISION  UPDATED                                  STATUS    CHART           APP VERSION
longhorn  longhorn-system  1         2023-01-11 06:17:37.412149576 +0000 UTC  deployed  longhorn-1.0.0  v1.2.4
metallb   kube-system      1         2023-01-11 06:17:34.252217507 +0000 UTC  deployed  metallb-1.0.0   0.9.5
```
Make sure all the pods are in Running state.

```
kubectl get pods -A
```

Sample output:

```
NAMESPACE         NAME                                  READY   STATUS    RESTARTS   AGE
default           nextgen-gw-0                          4/4     Running   0          23h
default           stan-0                                2/2     Running   0          23h
kube-system       coredns-d76bd69b-n8jhh                1/1     Running   0          23h
kube-system       metallb-controller-7954c9c84d-pm89k   1/1     Running   0          23h
kube-system       metallb-speaker-j69tp                 1/1     Running   0          23h
kube-system       metallb-speaker-mddqj                 1/1     Running   0          23h
kube-system       metallb-speaker-n45g4                 1/1     Running   0          23h
kube-system       metrics-server-7cd5fcb6b7-tvnps       1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-2jg5w         1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qhs85         1/1     Running   0          23h
longhorn-system   csi-attacher-76c9f797d7-qn9pr         1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-chjs9       1/1     Running   0          23h
longhorn-system   csi-provisioner-b749dbdf9-hbwx2       1/1     Running   0          23h
```
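With this many pods, scanning the STATUS column by eye is error-prone. A small sketch that filters the `kubectl get pods -A` output down to pods that are not in a Running or Completed state; the `not_running` helper is illustrative, not an OpsRamp command:

```shell
#!/bin/sh
# Print NAME and STATUS for every pod whose STATUS is not Running/Completed.
# In `kubectl get pods -A` output, NAME is the 2nd column and STATUS the 4th.
not_running() { awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $2, $4 }'; }

# Live use (requires a cluster):
#   kubectl get pods -A | not_running
```

Empty output means every pod is healthy; anything printed (for example `nextgen-gw-0 CrashLoopBackOff`) needs attention before proceeding.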