This guide provides detailed instructions for increasing or decreasing memory limits for the NextGen Gateway components in a Kubernetes cluster using Helm. The steps include retrieving the current Helm chart version, updating memory limits, and verifying the changes.
- To begin, you need to retrieve the current Helm chart version for the NextGen Gateway. Run the following command:
helm list | grep nextgen
- Look for the version number after nextgen-gw in the output. For example, if the output shows nextgen-gw-1.7.1, then 1.7.1 is the chart version.
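A typical listing looks like this, with the chart version in the CHART column (values shown are illustrative; your namespace, revision, and versions will differ):
NAME        NAMESPACE  REVISION  UPDATED  STATUS    CHART             APP VERSION
nextgen-gw  default    1         ...      deployed  nextgen-gw-1.7.1  ...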
- Next, export the current Helm chart values to a file so you can modify them. Use the following command:
helm get values nextgen-gw > nextgen-values.yaml
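Note that helm get values prints only the user-supplied overrides by default; add --all to include the chart's computed defaults. Assuming the chart follows the key structure implied by the --set-string flags in the next step, the relevant part of nextgen-values.yaml would look roughly like this (a hypothetical excerpt, not the chart's authoritative schema):
vprobe:
  resources:
    limits:
      memory: 4Gi
postgres:
  resources:
    limits:
      memory: 1Gi
nativebridge:
  resources:
    limits:
      memory: 500Mi
squid:
  resources:
    limits:
      memory: 500Mi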
- Now, update the memory limits for the NextGen Gateway components. Use the command below, replacing <chart_version> with the version you retrieved in step 1, and adjust the memory values to suit your node (recommended allocations follow, with a worked example after the list):
helm upgrade nextgen-gw oci://us-docker.pkg.dev/opsramp-registry/gateway-cluster-charts/nextgen-gw --version <chart_version> -f nextgen-values.yaml --set-string vprobe.resources.limits.memory="8192Mi" --set-string nativebridge.resources.limits.memory="1000Mi" --set-string squid.resources.limits.memory="1000Mi" --set-string postgres.resources.limits.memory="2048Mi"
- Example Memory Allocation:
- For a node with 8GB memory:
- vprobe: 4Gi
- postgres: 1Gi
- nativebridge: 500Mi
- squid-proxy: 500Mi
- For a node with 16GB memory:
- vprobe: 8Gi
- postgres: 2Gi
- nativebridge: 1Gi
- squid-proxy: 1Gi
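For example, on a node with 8GB of memory, using the chart version from step 1 (1.7.1 is shown here purely as an illustration; substitute your own), the command would be:
helm upgrade nextgen-gw oci://us-docker.pkg.dev/opsramp-registry/gateway-cluster-charts/nextgen-gw --version 1.7.1 -f nextgen-values.yaml --set-string vprobe.resources.limits.memory="4Gi" --set-string nativebridge.resources.limits.memory="500Mi" --set-string squid.resources.limits.memory="500Mi" --set-string postgres.resources.limits.memory="1Gi"
Because --set-string values take precedence over those in nextgen-values.yaml, these flags override whatever limits the exported file contains.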
- Verify the Memory Limits:
- After updating the memory limits, you should verify that the changes have been applied successfully. Run the following command to check the limits on each container:
kubectl describe pod nextgen-gw-0
- Look under the Limits section for each container to ensure the memory limits have been updated according to your specifications.
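If you only want the configured limits rather than the full pod description, a jsonpath query like the following sketch prints one line per container (adjust the pod name if your release is named differently):
kubectl get pod nextgen-gw-0 -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.limits.memory}{"\n"}{end}'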
Sample Output:
Containers:
  postgres:
    Container ID:   containerd://9e42887f45f5bf3bd6874c6c1409c365d23558689441cf0968564e994a896bb7
    Image:          us-central1-docker.pkg.dev/opsramp-registry/gateway-cluster-images/vendor-images/docker.io/library/postgres:13.4-alpine
    Image ID:       uat.opsramp.net/opsramp-registry/gateway-cluster-images/vendor-images/docker.io/library/postgres@sha256:89fa41c8b840552bdac8c56e923eb04d7bbc5a6eb60515e626ff0afa67d7642b
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 28 Mar 2024 10:23:52 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  1Gi
    Requests:
      memory:  1Gi
    Liveness:   exec [psql -w -U opsramp -d vistara -c SELECT 1] delay=60s timeout=2s period=10s #success=1 #failure=3
    Readiness:  exec [psql -w -U opsramp -d vistara -c SELECT 1] delay=60s timeout=2s period=10s #success=1 #failure=3
    Environment:
      POSTGRES_DB:        vistara
      POSTGRES_USER:      opsramp
      POSTGRES_PASSWORD:  <set to the key 'psql-password' in secret 'psql-secret'>  Optional: false
      PGDATA:             /var/lib/postgresql/data/pgdata/
    Mounts:
      /docker-entrypoint-initdb.d from init-data (ro)
      /var/lib/postgresql/data from postgres-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n29mt (ro)
  vprobe:
    Container ID:   containerd://1c5646e9de3a6c45d47df31a867216306c094553ee54e8b2552453abdf5728a0
    Image:          us-central1-docker.pkg.dev/opsramp-registry/gateway-cluster-images/vprobe:17.1.0
    Image ID:       us-central1-docker.pkg.dev/opsramp-registry/gateway-cluster-images/vprobe@sha256:8e5ebc39adcf0c73fa799f431eb12ccf9b8379cbe662769a833fa011e01448bb
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 28 Mar 2024 10:23:52 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  4Gi
    Requests:
      memory:  4Gi
    Liveness:   http-get http://:25000/api/v1/live delay=60s timeout=1s period=60s #success=1 #failure=3
    Readiness:  http-get http://:25000/api/v1/ready delay=60s timeout=1s period=60s #success=1 #failure=3
    Environment:
      NAMESPACE:                    default
      REDIS_HOST:                   nextgen-gw-redis-master
      REDIS_PORT:                   6379
      REDIS_USERNAME:               <set to the key 'REDIS_USER' in secret 'nextgen-configurations'>  Optional: false
      REDIS_PASSWORD:               <set to the key 'REDIS_PASSWORD' in secret 'nextgen-configurations'>  Optional: false
      JETTY_SERVICE_FLAG:           <set to the key 'JETTY_SERVICE_FLAG_VALUE' of config map 'cache-service-info'>  Optional: false
      NETTY_SERVICE_FLAG:           <set to the key 'NETTY_SERVICE_FLAG_VALUE' of config map 'cache-service-info'>  Optional: false
      NATS_SERVER:                  nats://stan:4222
      NATS_CLUSTER:                 gateway-cluster
      ACK_WAIT_TIME:                60
      MAX_IN_FLIGHT:                30
      MESSAGE_BUS_SERVICE_ENABLED:  false
      HTTP_PROXY_DATA:              <set to the key 'HTTP_PROXY_DATA' in secret 'vprobe-proxy-secret'>  Optional: false
      HTTPS_PROXY_DATA:             <set to the key 'HTTPS_PROXY_DATA' in secret 'vprobe-proxy-secret'>  Optional: false
    Mounts:
      /etc/nsg from vprobe-secret (ro)
      /home/admin/data/ from vprobe-proxy (ro)
      /opt/gateway/content from content-home (rw)
      /opt/gateway/deregister/ from nextgen-deregister-job (rw)
      /opt/gateway/updater/ from vprobe-updater (rw)
      /opt/gateway/vprobe/conf from vprobe-configs (ro)
      /var/fw from vprobe-version-info (ro)
      /var/log/app from vprobe-logs (rw)
      /var/log/app/tmp from nmap-out (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n29mt (ro)
  nativebridge:
    Container ID:   containerd://c7b2a9e08035f8c21c62bb245529c3611834a0fe214d1e3f248ebabed5885f27
    Image:          us-central1-docker.pkg.dev/opsramp-registry/gateway-cluster-images/nextgen-nativebridge:1.4.3
    Image ID:       us-central1-docker.pkg.dev/opsramp-registry/gateway-cluster-images/nextgen-nativebridge@sha256:5ed62fd710855f672e902e38c27fb71d2c70468b424373c6805f513a26eef5ed
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 28 Mar 2024 10:23:52 +0530
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  500Mi
    Requests:
      memory:  500Mi
    Liveness:   http-get http://:11450/nativebridge/live delay=60s timeout=1s period=60s #success=1 #failure=3
    Readiness:  http-get http://:11450/nativebridge/ready delay=60s timeout=1s period=60s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /opt/gateway/content from content-home (rw)
      /var/log/app/tmp from nmap-out (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n29mt (ro)