This guide covers the known issues and limitations of the NextGen Gateway, helps you troubleshoot them, and provides workarounds.
Gateway Tunnel Disconnected
If the Gateway tunnel is disconnected, how can I verify the status of the pod?
Step 1: Check the pod status
Use the following command to verify whether the NextGen Gateway pod (nextgen-gw-0) is running.
kubectl get pods
Step 2: If the pod is running and the Gateway tunnel is disconnected
- To ensure that the Gateway tunnel is properly established to the cloud, use the following command to verify the vprobe container logs.
Example:
kubectl logs nextgen-gw-0 -c vprobe --tail=200 | grep TlsMonComm
- Make sure that the connection status is True. If the connection is False, use the following command to check the complete vprobe logs for additional information.
kubectl logs nextgen-gw-0 -c vprobe -f
Step 3: If the POD status is other than Running
- If the pod status is anything other than Running, you must debug the pod. Use the following command to check its current status.
kubectl describe pod ${POD_NAME}
Example:
ubuntu@nextgen-gateway:~$ kubectl describe pod nextgen-gw-0
Name:         nextgen-gw-0
Namespace:    default
Priority:     0
Node:         nextgen-gateway/10.248.157.185
Start Time:   Fri, 28 Oct 2022 16:57:45 +0530
Labels:       app=nextgen-gw
              controller-revision-hash=nextgen-gw-6744bddc6f
              statefulset.kubernetes.io/pod-name=nextgen-gw-0
Annotations:  <none>
Status:       Running
IP:           10.42.0.60
IPs:
  IP:  10.42.0.60
Controlled By:  StatefulSet/nextgen-gw
Note
If a pod is stuck in the Pending state, it cannot be scheduled onto a node. Generally this is because there are insufficient resources of one type or another, which prevents scheduling. The scheduler events should explain why it is unable to schedule your pod.
If a pod is in the Waiting state, it was scheduled to a worker node but cannot run on that node. Again, the output of the kubectl describe command should be useful. The most common cause of Waiting pods is a failure to pull the container image.
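Besides kubectl describe, recent cluster events usually show why a pod is Pending or Waiting (for example, insufficient memory or an image pull error). This is a generic Kubernetes check, not specific to the gateway:
kubectl get events --sort-by='.lastTimestamp' | tail -n 20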
Gateway Logs Issues
How do I access the Gateway logs if I encounter any of the following issues?
- Gateway is disconnected
- Discovery and monitoring not working
- App installation failed
Kubernetes keeps detailed logs of all cluster and application activities, which you can use to narrow down the causes of any failures.
Step 1: See the detailed logs for each container
- To check the detailed logs for each container, first get the pod name using the following command.
kubectl get pods -A
- To check the nextgen-gw-0 pod logs and the list of containers running within the pod, use the following command.
kubectl get pod nextgen-gw-0 -o="custom-columns=NAME:.metadata.name,CONTAINERS:.spec.containers[*].name"
To check the container logs, use the following command.
kubectl logs <pod name> --container <container name> -f
To check the previously terminated pod logs, use the following command.
kubectl logs <pod name> --container <container name> -f -p
Step 2: See the Vprobe container logs
The vprobe container is the core container. If you observe any issues with connectivity, discovery, monitoring, scheduling, app install/uninstall, or app upgrades, you must verify the vprobe container logs.
kubectl logs nextgen-gw-0 --container vprobe -f
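To surface errors quickly, you can filter the vprobe logs; the pattern below is a generic example, not an OpsRamp-defined log format:
kubectl logs nextgen-gw-0 --container vprobe --tail=500 | grep -iE 'error|exception'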
Step 3: See the Nativebridge container logs
Nativebridge is responsible for native commands and script executions. If you observe any issues with modules that use native commands or script executions, you should check the nativebridge container logs.
kubectl logs nextgen-gw-0 --container nativebridge -f
Examples: Ping, EMC VNX, EMC VNXe, EMC CLARiiON, RSE, etc.
Step 4: See the Postgres container logs
The postgres container is responsible for persisting data. If you observe any issues with the postgres container startup, check the postgres container logs.
kubectl logs nextgen-gw-0 --container postgres -f
Debugging Connectivity Issues
Unable to register the NextGen Gateway?
The OpsRamp IPs should be reachable from the Gateway. Refer to this link for the OpsRamp IP list.
Example:
telnet ${OPSRAMP_IP} 443
Note
If you cannot find the required POD public IPs in the above link, please contact the OpsRamp SaaS team.
OpenSSL should work properly. See the examples below:
Direct Connection
openssl s_client -connect ${OPSRAMP_IP}:443
Proxy Connection
openssl s_client -connect ${OPSRAMP_IP}:443 -proxy ${PROXY_SERVER_IP}:${PROXY_PORT}
The Gateway tunnel is not up after registering the Gateway?
The OpsRamp connection node IP address should be reachable. See the example below.
telnet ${CONNECTION_NODE_IP} 443
You can find the connection node host in the vprobe logs; copy the Host value (cn01-gi01-sjc.opsramp.net) from a log line like the following.
ERROR 17-Nov-22 06:03:10,330 TlsMonComm#189: CommChannelsProcessor. Connection Node : {"httpHost":"cn01-gi01-sjc.opsramp.net","httpPort":8443,"tlsHost":"cn01-gi01-sjc.opsramp.net","tlsPort":443,"resourceToken":"GWXHWfRnBfFj","apiHost":"nextgen.asura.api.opsramp.net"}
OpenSSL should work properly. See the examples below:
- Direct Connection
openssl s_client -connect ${CONNECTION_NODE_IP}:443
- Proxy Connection
openssl s_client -connect ${CONNECTION_NODE_IP}:443 -proxy ${PROXY_SERVER_IP}:${PROXY_PORT}
Pod Crashed due to a Memory Issue
How do I verify the memory usage if the pod crashes due to a memory issue?
To verify memory usage in Kubernetes pods, make sure that the metrics server is enabled in the Kubernetes cluster. The kubectl top command retrieves snapshots of resource utilization for pods or nodes in your cluster.
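If you are unsure whether the metrics server is enabled, you can check for its deployment first; this assumes it runs in the kube-system namespace, which is the default for k3s:
kubectl get deployment metrics-server -n kube-system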
Use the below command to verify POD memory usage.
$ kubectl top pods
NAME           CPU(cores)   MEMORY(bytes)
nextgen-gw-0   48m          1375Mi
Use the below command to verify Node memory usage.
$ kubectl top nodes
NAME              CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
nextgen-gateway   189m         9%     3969Mi          49%
Pod Crashed due to high CPU Utilization
Follow the steps below to debug high CPU utilization:
- Get the process id of vprobe container using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'ps -ef | grep vprobe'
Example:
ubuntu@ashok:~$ kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'ps -ef | grep vprobe'
gateway      1     0  0 15:02 ?      00:00:00 /bin/bash /opt/gateway/vprobe/bin/vprobe.sh
gateway      6     1  6 15:02 ?      00:03:20 java -Djava.naming.provider.url=file:/opt/gateway/vprobe/temp/jndi -Dserver.home= -Dsun.net.inetaddr.ttl=0 -Djava.net.preferIPv4Stack=true -Dhttp.nonProxyHosts=localhost -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 -Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 --add-opens java.base/java.util.concurrent=ALL-UNNAMED --add-opens java.base/sun.security.ssl=ALL-UNNAMED -XX:InitialRAMPercentage=30 -XX:MaxRAMPercentage=80 -XX:+ExitOnOutOfMemoryError -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=/var/log/app/vprobe-jvm-thread.dump -classpath /opt/gateway/vprobe/conf:/opt/gateway/vprobe/lib/:/opt/gateway/vprobe/lib/* com.vistara.gateway.core.VistaraGateway
gateway   1288     0  0 15:52 pts/0  00:00:00 /bin/bash -c ps -ef | grep vprobe
Note
Here, 6 is the vprobe process id.
- Get the list of child threads of the vprobe process id using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'top -n 2 -b -H -p <process_id>' > /tmp/child_threads.txt
Example:
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'top -n 2 -b -H -p 6' > /tmp/child_threads.txt
Note
Here, 2 is the number of iterations and 6 is the vprobe process id. You can find the output in /tmp/child_threads.txt.
- Generate the thread dump for vprobe using jstack with the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c '/jcmd/bin/jstack <process_id> > /tmp/thread_dump.dump'
Example:
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c '/jcmd/bin/jstack 6 > /tmp/thread_dump.dump'
Note
Here, 6 is the vprobe process id.
- Transfer the thread dump from the vprobe container to the gateway node using the following command.
kubectl cp nextgen-gw-0:<container path> -c vprobe -n default <node path>
Example:
kubectl cp nextgen-gw-0:/tmp/thread_dump.dump -c vprobe -n default /tmp/thread_dump.dump
- Share the child threads and thread dump files with the OpsRamp Support team.
Pre-checks failed during Gateway registration
The OpsRamp collector checks the basic requirements while registering the NextGen Gateway to the OpsRamp cloud. This includes the following checks:
- CoreDNS Check
- Helm/Docker Repository Check
- OpsRamp Cloud Check
- System Resources Check (Memory, Disk, and CPU)
- Connection Node Check
If CoreDNS Check fails during gateway registration
During this pre-check, the OpsRamp collector will verify the CoreDNS status.
As part of this check, the OpsRamp collector tool verifies the following:
- Kubernetes POD to internal service communication
- POD to an external network
- Internal network (with and without proxy)
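If this check fails, you can confirm CoreDNS health and in-cluster DNS resolution manually; a minimal sketch, assuming CoreDNS runs in the kube-system namespace and the busybox image is reachable from your cluster:
kubectl get pods -n kube-system | grep coredns
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default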
Note
If the CoreDNS pre-check fails, then all the following pre-checks will also fail.
If Helm/Docker Repository Check fails
During this pre-check, the OpsRamp collector will verify repository accessibility from the node and the container (with and without proxy).
If you see the error shown in the figure below, the following are the possible causes:
- The repository URL you passed is not valid.
- The repository URL is not reachable from the node.
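You can also test repository reachability manually from the node; ${REPO_URL}, ${PROXY_SERVER_IP}, and ${PROXY_PORT} below are placeholders for your environment:
curl -sSL -o /dev/null -w "%{http_code}\n" ${REPO_URL}
curl -sSL -o /dev/null -w "%{http_code}\n" -x ${PROXY_SERVER_IP}:${PROXY_PORT} ${REPO_URL}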
If OpsRamp Cloud Check fails
During this pre-check, the OpsRamp collector will verify OpsRamp cloud accessibility from the node and the container (with and without proxy).
If you see the error shown in the figure below, the following are the possible causes:
- The OpsRamp cloud URL you passed is not correct.
- The cloud URL is not reachable from the node.
- The cloud URL is not whitelisted in the network.
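You can test cloud reachability from the node using the same pattern shown in the registration checks earlier; ${OPSRAMP_CLOUD_URL} is a placeholder for your OpsRamp cloud endpoint:
telnet ${OPSRAMP_CLOUD_URL} 443
openssl s_client -connect ${OPSRAMP_CLOUD_URL}:443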
If System Resources Check fails
In this pre-check, the OpsRamp collector verifies whether the required system resources are assigned before registering the Gateway.
The following are the system resource prerequisites:
- Disk - 60GB
- Memory - 8GB
- CPU - 4 Core
Possible issues:
- If you do not allocate the required Memory, you will receive the following error. Please provide the required Memory to resolve the issue.
- If you do not allocate the required Disk, you will receive the following error. Please provide the required Disk to resolve the issue.
- If you do not allocate the required CPU, you will receive the following error. Please provide the required CPU to resolve the issue.
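You can confirm that the node meets these prerequisites with standard Linux commands run on the gateway node:
nproc      # CPU cores (4 cores required)
free -h    # memory (8GB required)
df -h /    # disk space on the root filesystem (60GB required)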
If Connection Node Check fails
In this pre-check, the OpsRamp collector gets all the connection nodes from the OpsRamp cloud before registering the Gateway and checks whether they are accessible.
Possible issues:
- If you pass an incorrect access token, you will see the following error.
- If the connection node is not reachable from the node, then you will see the following error.
Make sure the connection node is reachable from the node and then try to register the Gateway.
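As with the tunnel checks earlier in this guide, you can verify connection node reachability directly; ${CONNECTION_NODE_IP} is the host reported in the error or in the vprobe logs:
telnet ${CONNECTION_NODE_IP} 443
openssl s_client -connect ${CONNECTION_NODE_IP}:443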
Connectivity Issue from Container
The NextGen gateway now includes a new debugging feature that allows you to troubleshoot Pods running on a Kubernetes Node, especially if they are experiencing crashes. To diagnose any connectivity problems from within the container, you can launch the Debug Container.
How to launch the Debug Container
Before you begin to launch the debug container:
- The NextGen gateway Pod should be scheduled and running.
- For the advanced debugging steps, you need to know which Node the Pod is running on and must have shell access to run the commands on that Node.
Debugging with an ephemeral debug container
Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image does not include debugging utilities.
OpsRamp provides a new debugger docker image that includes all necessary debugging utilities.
Example 1: Debug using ephemeral containers
You can add ephemeral containers to a running NextGen gateway Pod using the kubectl debug command. If you use the -i/--interactive argument, the kubectl command will automatically connect to the ephemeral container's console.
kubectl debug -it nextgen-gw-0 --image=us-docker.pkg.dev/opsramp-registry/gateway-cluster-images/gateway-debugger:1.0.0 --share-processes -- /bin/bash
ubuntu@nextgen-gateway:~$ kubectl debug -it nextgen-gw-0 --image=us-docker.pkg.dev/opsramp-registry/gateway-cluster-images/gateway-debugger:1.0.0 --share-processes -- /bin/bash
Defaulting debug container name to debugger-t2cv2.
If you don't see a command prompt, try pressing enter.
root@nextgen-gw-0:/#
Example 2: Debug using debugging utilities
root@nextgen-gw-0:/# ping pod1.opsramp.com
PING app.opsramp.com (140.239.76.75) 56(84) bytes of data.
64 bytes from app.vistarait.com (140.239.76.75): icmp_seq=1 ttl=52 time=264 ms
64 bytes from app.vistarait.com (140.239.76.75): icmp_seq=2 ttl=52 time=371 ms
64 bytes from app.vistarait.com (140.239.76.75): icmp_seq=3 ttl=52 time=291 ms
^C
--- app.opsramp.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 263.834/308.550/370.549/45.247 ms
root@nextgen-gw-0:/# nslookup pod1.opsramp.com
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
pod1.opsramp.com canonical name = app.opsramp.com.
Name: app.opsramp.com
Address: 140.239.76.75
root@nextgen-gw-0:/# snmpwalk -v 2c -c public 192.168.17.90 .1
iso.0.8802.1.1.2.1.1.1.0 = INTEGER: 30
iso.0.8802.1.1.2.1.1.2.0 = INTEGER: 4
iso.0.8802.1.1.2.1.1.3.0 = INTEGER: 2
iso.0.8802.1.1.2.1.1.4.0 = INTEGER: 2
iso.0.8802.1.1.2.1.1.5.0 = INTEGER: 5
root@nextgen-gw-0:/# telnet localhost 11445
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Welcome to Gateway CLI. Authorised users only. Term ID=767166274
Please type help to get list of commands supported.
Negotiated Terminal : xterm 173x43
gcli@gateway>
Vprobe Application Issues
How do I generate a thread dump and heap dump to identify issues within the vprobe application?
The NextGen gateway provides the capability to generate thread dump and heap dump in order to identify issues within the vprobe application. A thread dump captures the current state of all threads in a Java process, while a heap dump is useful for detecting memory leaks.
Case 1: If you face any thread blocking issues
Use the following commands to generate a thread dump in the NextGen gateway.
Get the process id for the vprobe process using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'ps -ef | grep vprobe'
gateway      1     0  0 06:20 ?      00:00:00 /bin/bash /opt/gateway/vprobe/bin/vprobe.sh
gateway      7     1 20 06:20 ?      00:00:38 java -Djava.naming.provider.url=file:/opt/gateway/vprobe/temp/jndi -Dserver.home= -Dsun.net.inetaddr.ttl=0 -Djava.net.preferIPv4Stack=true -Dhttp.nonProxyHosts=localhost -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 -Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 -XX:InitialRAMPercentage=30 -XX:MaxRAMPercentage=80 -XX:+ExitOnOutOfMemoryError -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=/var/log/app/vprobe-jvm-thread.dump -classpath /opt/gateway/vprobe/conf:/opt/gateway/vprobe/lib/:/opt/gateway/vprobe/lib/* com.vistara.gateway.core.VistaraGateway
Generate a thread dump using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c '/jcmd/bin/jcmd 7 Thread.print' > /tmp/threaddumpnew.dump
Note
Here, 7 is the vprobe process id; it is not a fixed value.
Case 2: If you face any memory leaks (these cause application out-of-memory issues)
Use the following commands to generate a heap dump in the NextGen gateway.
Get the process id for the vprobe process using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c 'ps -ef | grep vprobe'
gateway      1     0  0 06:20 ?      00:00:00 /bin/bash /opt/gateway/vprobe/bin/vprobe.sh
gateway      7     1 20 06:20 ?      00:00:38 java -Djava.naming.provider.url=file:/opt/gateway/vprobe/temp/jndi -Dserver.home= -Dsun.net.inetaddr.ttl=0 -Djava.net.preferIPv4Stack=true -Dhttp.nonProxyHosts=localhost -Dhttps.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 -Djdk.tls.client.protocols=TLSv1,TLSv1.1,TLSv1.2,SSLv3 -XX:InitialRAMPercentage=30 -XX:MaxRAMPercentage=80 -XX:+ExitOnOutOfMemoryError -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=/var/log/app/vprobe-jvm-thread.dump -classpath /opt/gateway/vprobe/conf:/opt/gateway/vprobe/lib/:/opt/gateway/vprobe/lib/* com.vistara.gateway.core.VistaraGateway
Generate a heap dump using the following command.
kubectl exec -it nextgen-gw-0 -c vprobe -- /bin/bash -c '/jcmd/bin/jcmd 7 GC.heap_dump /opt/gateway/content/heapdump.hprof'
7:
Dumping heap to /opt/gateway/content/heapdump.hprof ...
Heap dump file created [75406258 bytes in 0.795 secs]
Note
- Here, 7 is the vprobe process id; it is not a fixed value.
- You can find the heap dump file in the PVC content folder on the node.
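To share the heap dump with the OpsRamp Support team, you can copy it out of the container using the same kubectl cp pattern shown in the CPU debugging steps above; adjust -n if your gateway does not run in the default namespace:
kubectl cp nextgen-gw-0:/opt/gateway/content/heapdump.hprof -c vprobe -n default /tmp/heapdump.hprof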
Failed to Upgrade NextGen Gateways from v15.0.0 to v15.1.0 (Known Issue)
The v15.0.0 NextGen gateway has a known issue where upgrading to v15.1.0 from the OpsRamp UI fails. The issue has since been resolved.
To upgrade the NextGen gateway from v15.0.0 to v15.1.0, you must perform a manual upgrade using one of the following options:
- Option 1: If the Gateway has OpsRamp Agent installed
- Option 2: If the Gateway does not have OpsRamp Agent installed
Option 1: If the Gateway has OpsRamp Agent installed
- Log in to the OpsRamp portal and navigate to Automation > Scripts.
- Create a Category by clicking the (+) icon:
- Select Category Type: Global Script / Partner or Client Category based on the required scope of the Category
- Provide a Category Name.
- Click Save.
- Now a Category has been created.
- Navigate to the Category and create a Script by clicking the (</>) icon:
- Select Script Type: Global Script / Partner or Client Script based on the required scope of the script
- Execution Type: SHELL
- Platform: Supported on Linux
- Provide Script Name and Description
- Add the following script in the script field.
#!/usr/bin/sh
logFileName="/tmp/ondemand-job.log"
echo "$(date) : Creating yaml file" >> $logFileName
kubectl get cm vprobe-updater-cm -n $1 -o jsonpath='{.data.ondemand-job\.yaml}' > /tmp/ondemand-job.yaml
if [ $? -eq 0 ]; then
    echo "$(date) : Successfully created yaml file" >> $logFileName
    echo "$(date) : Creating on-demand job" >> $logFileName
    kubectl apply -f /tmp/ondemand-job.yaml -n $1
    if [ $? -eq 0 ]; then
        echo "$(date) : Successfully created on-demand job" >> $logFileName
    else
        echo "$(date) : Failed to create on-demand job" >> $logFileName
    fi
else
    echo "$(date) : Failed to create yaml file" >> $logFileName
fi
- Add the following parameters to the script.
- Click the Save button.
- After saving the script, click the “Apply Script to Devices” option.
- Next, select the following data and then click the “Run Now” button.
- Client Name: Your client name
- Group Name: Gateway
- Devices: Select your gateway profile name
- Parameters: Pass your gateway namespace (if not specified, it defaults to default). If you are unsure of the namespace, see the lookup command below.
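If you are unsure of the gateway namespace, you can look it up before passing the parameter; this assumes kubectl access to the cluster:
kubectl get pods -A | grep nextgen-gw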
- To check the successful execution of the script, verify the /tmp/ondemand-job.log file.
tail -f /tmp/ondemand-job.log
- Verify the ondemand pod status.
Pod name: nextgen-gw-updater-ondemand-job-*
kubectl get pods
Option 2: If the Gateway does not have OpsRamp Agent installed
Run the upgrade-gateway.sh script to upgrade gateways from v15.0.0 to v15.1.0.
- Launch the gateway SSH console.
- Create the upgrade-gateway.sh file, add the following script, and save it.
#!/usr/bin/sh
logFileName="/tmp/ondemand-job.log"
echo "$(date) : Creating yaml file" >> $logFileName
kubectl get cm vprobe-updater-cm -n $1 -o jsonpath='{.data.ondemand-job\.yaml}' > /tmp/ondemand-job.yaml
if [ $? -eq 0 ]; then
    echo "$(date) : Successfully created yaml file" >> $logFileName
    echo "$(date) : Creating on-demand job" >> $logFileName
    kubectl apply -f /tmp/ondemand-job.yaml -n $1
    if [ $? -eq 0 ]; then
        echo "$(date) : Successfully created on-demand job" >> $logFileName
    else
        echo "$(date) : Failed to create on-demand job" >> $logFileName
    fi
else
    echo "$(date) : Failed to create yaml file" >> $logFileName
fi
- Run the script created above using the following command.
sh upgrade-gateway.sh default
If the gateway is running in a different namespace, pass that namespace to the script:
sh upgrade-gateway.sh {NAMESPACE}
- To check the successful execution of the script, verify the /tmp/ondemand-job.log file.
tail -f /tmp/ondemand-job.log
- Verify the ondemand pod status.
Pod name: nextgen-gw-updater-ondemand-job-*
kubectl get pods
How to Increase the NextGen Gateway Memory Limits
Step 1: Allocate the Required Resources to the Pod
Before you increase the memory limits of the NextGen gateway, make sure you allocate the required resources (Memory and CPU) to the node.
For example:
If the node has 8GB of memory, then the NextGen gateway memory limits are as follows:
- vprobe - 4Gi
- postgres - 1Gi
- nativebridge - 500Mi
- squid-proxy - 500Mi
If the node has 16GB of memory, then the NextGen gateway memory limits are as follows:
- vprobe - 8Gi
- postgres - 2Gi
- nativebridge - 1Gi
- squid-proxy - 1Gi
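Before patching, you can confirm the current per-container limits on the statefulset; a minimal check, assuming the gateway runs in the default namespace:
kubectl get statefulset nextgen-gw -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{"\t"}{.resources.limits.memory}{"\n"}{end}'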
Step 2: Update the Memory Limits
Once the resources are allocated to the node, use the following commands to update the memory limits with the appropriate values:
Note
- If your gateway is running in a specific namespace, simply append -n <your_namespace> at the end of each command.
- If you wish to customize the memory, replace the memory values in the commands below as needed.
- For postgres container:
sudo kubectl patch statefulset nextgen-gw --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"2024Mi"}]'
- For vprobe container:
sudo kubectl patch statefulset nextgen-gw --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/1/resources/limits/memory", "value":"8192Mi"}]'
- For nativebridge container:
sudo kubectl patch statefulset nextgen-gw --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/2/resources/limits/memory", "value":"1024Mi"}]'
- For squid-proxy:
- If the gateway version is 15.0.0 or above
sudo kubectl patch deployment squid-proxy --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/resources/limits/memory", "value":"1024Mi"}]'
- If the gateway version is below 15.0.0
sudo kubectl patch statefulset nextgen-gw --type json -p='[{"op": "replace", "path": "/spec/template/spec/containers/3/resources/limits/memory", "value":"1024Mi"}]'
Step 3: Restart the Pod
Once the memory limits of the NextGen gateway are updated, make sure to restart the nextgen-gw-0 pod using the following command to apply the changes.
sudo kubectl delete pod nextgen-gw-0
How to manually add DNS Name Servers to k3s?
To address missing DNS entries in some VMs, follow the steps below to manually add nameservers and search domain to k3s:
Create a file /etc/rancher/k3s/resolv.conf with the following data.
vi /etc/rancher/k3s/resolv.conf
Data:
nameserver <DNS_IP_1>
nameserver <DNS_IP_2>
search <DOMAIN_1> <DOMAIN_2>
Create another file /etc/rancher/k3s/config.yaml with the following data.
vi /etc/rancher/k3s/config.yaml
Data:
kubelet-arg:
  - "resolv-conf=/etc/rancher/k3s/resolv.conf"
Now restart the k3s service using the following command.
service k3s restart
Delete the coredns pod using the following command.
kubectl delete pod $(kubectl get pod -n kube-system | grep coredns | awk '{print $1}') -n kube-system
Run the following command to verify whether the DNS has been updated.
kubectl debug -it $(kubectl get pod -n kube-system | grep coredns | awk '{print $1}') -n kube-system --image=busybox:1.28 --target=coredns -- cat /etc/resolv.conf