id | title | summary | description | solution |
---|---|---|---|---|
KB12438
|
LCM Inventory failed staging to env 'host-AHV' at ip address x.x.x.x. Failure during step 'Prepare', error 'Failed to prepare staging location /home/nutanix/lcm_staging_root ' was seen.
|
This article describes a scenario where LCM inventory fails when it cannot write to the AHV host under the /home/nutanix/lcm_staging_root directory
|
The following error is observed when LCM inventory fails because of issues with a specific host. Example from ESX:
Operation failed. Reason: Inventory Failed - found the following errors: LCM failed staging to env 'host-ESX' at ip address x.x.x.213. Failure during step 'Prepare', error 'Failed to prepare staging location /scratch/tmp/lcm_staging' was seen.
Example from AHV:
2021-12-09 01:51:47 ERROR exception.py:87 LCM Exception [LcmExceptionHandler]: Inventory Failed - found the following errors:
|
SSH to the host address mentioned in the error and check the files under the following directory:
root@host# ls -ltrh /home/nutanix/lcm_staging_root
Example:
ls: cannot access lcm_staging_root: Structure needs cleaning
Check the dmesg and look for EXT4 errors, for example:
[71502135.919954] EXT4-fs error (device sda1): ext4_xattr_ibody_get:368: inode #1311899: comm sshd: corrupted in-inode xattr
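A hedged way to filter for these directly on the AHV host (the grep pattern is an assumption based on the sample line above):
root@host# dmesg | grep -i 'EXT4-fs error'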
Investigate potential file system corruption issues. On AHV, run the ahv_fs_integrity_check NCC check and review the recommendations in KB 9271 http://portal.nutanix.com/kb/9271.
|
KB13895
|
Prism Central - MSP controller failed to upgrade due to Certificate errors
|
On PC, the MSP upgrade process fails due to an SSL Handshake Exception. Users are unable to log in to Prism Central using a third-party MFA provider; local user login works fine. MFA login attempts fail when Aplos reaches the IAMv2 service.
|
MSP status is now verified as part of NCC 4.6.2, triggering an alert if the service fails. Troubleshooting is still required to understand the reason for the failure.
After the Prism Central (PC) upgrade, the MSP upgrade process fails due to an SSL Handshake Exception. Users cannot log in to Prism Central using a third-party MFA provider; local user login works fine. MFA login attempts fail when Aplos reaches the IAMv2 service. This issue affects pc.2022.* where MSP is enabled.
The following traceback is observed in aplos.out (/home/nutanix/data/logs/aplos.out):
Failed to make request to IAMv2 after 0 tries. Error: No JSON object could be decoded, traceback: Traceback (most recent call last):
IAM connection failure due to SSL handshake observed in prism_gateway.log (/home/nutanix/data/logs/prism_gateway.log) on the PCVM:
ERROR 2022-09-27 14:48:46,935Z http-nio-127.0.0.1-9081-exec-136 [] auth.util.HttpIAMv2Util.openConnectionWithRetries:213 Failed to establish HTTP connection for URL https://iam-proxy.ntnx-base:8445/api/iam/authn/v1/users?filter=(username==000597cf-d2ac-722d-3c72-ac1f6bb8a6ae), method GET, Reason: javax.net.ssl.SSLException: Connection reset
Certificate error observed on msp_controller.out (/home/nutanix/data/logs/msp_controller.out) while accessing the external repository quay.io:
nutanix@PCVM$ grep ERROR ~/data/logs/msp_controller.out
prism_proxy_access_log.out (/home/apache/ikat_access_logs/prism_proxy_access_log.out) may contain Internal Server Error (500) while accessing the v3 API:
nutanix@PCVM$ sudo grep "/api/nutanix/v3/directory_services/list.*500" /home/apache/ikat_access_logs/prism_proxy_access_log.out
While querying mspctl controller info, the following error message is observed on PCVM:
nutanix@PCVM$ mspctl controller info
EOF error observed while trying to query authconfig on PCVM:
nutanix@PCVM$ ncli authconfig ls-directory
|
Meet the network requirements according to the Microservices Infrastructure Prerequisites and Considerations https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-cmsp-req-and-limitations-pc-r.html. Another good reference document is the Ports and Protocols > Prism Central page https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Prism%20Central. Enable connectivity in one of the following ways:
Ensure the necessary certificates are trusted on the Prism Central VM.
As of AOS 6.1, 6.0.2, pc.2022.1, 5.20.2, 5.20.3, there is a utility to configure certificate trust at a cluster level. This method should persist through upgrades and propagate to new nodes and is the preferred method. Example:
nutanix@CVM$ pki_ca_certs_manager set -p /path/to/ca_bundle.crt
Previous versions had to use legacy Linux utilities on a per-CVM basis. See KB 5090 https://portal.nutanix.com/kb/5090 for an example workflow using allssh and update-ca-trust (part of the ca-certificates package, present on PCVMs and CVMs).
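A minimal sketch of that legacy workflow on a single VM (the bundle path is an assumption; KB 5090 has the authoritative steps):
nutanix@PCVM$ sudo cp /tmp/ca_bundle.crt /etc/pki/ca-trust/source/anchors/
nutanix@PCVM$ sudo update-ca-trust extract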
If certificate trust cannot be established, as an interim workaround, ensure that SSL Intercept is bypassed for traffic that goes to the external repository quay.io.
Verify the connection is working. You can use the Linux utility openssl from the PCVM to inspect the certificate encountered on the connection:
nutanix@PCVM$ openssl s_client -showcerts -connect quay.io:443
The check and response above show the request and response when SSL Intercept is not encountered (with certificate data truncated for brevity). Look for the details under "Server Certificate". The "subject" line should show the server you are connecting to. Pay attention to the "issuer" line, which can indicate the department or vendor who set up the internal security certificate. Comparing the output of this test with a server outside the environment is a good way to confirm if SSL Intercept is used.
You can achieve successful CMSP platform upgrades with SSL Intercept in the path, but the software will not proceed with downloads if certificate validation fails. The important piece to look for here is the last line, "Verify return code: 0 (ok)". This "0" means "no error". You may instead see a message like "Verify return code: 18 (self-signed certificate)" or another error. Errors relating to internal certificates can usually be resolved by establishing trust, as described above.
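A hedged one-liner to extract just that return code (echo closes stdin so s_client exits immediately):
nutanix@PCVM$ echo | openssl s_client -connect quay.io:443 2>/dev/null | grep 'Verify return code'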
After the external URL is reachable, restart the MSP controller. This will retry the failed upgrade.
Restart MSP service:
nutanix@PCVM$ allssh genesis stop msp_controller ; cluster start
Verify MSP service status. It should now be shown as ok:
nutanix@PCVM$ mspctl controller info
|
KB11165
|
Linux UVM Booting Into Emergency Mode Following Nutanix Move Migration
|
In some cases, a Linux UVM may boot into emergency mode following a Move migration. This KB outlines some data and logs to review to investigate such failures.
|
Following a Nutanix Move migration, boot failure errors as seen in the screenshot below may appear on the UVM. Error/warning strings such as "Entering emergency mode" or similar are likely to be present, and specific logs relevant to any failures are likely to be called out explicitly, depending on the guest OS.
|
This could be caused by a number of things. Below are some things that should be checked, tested, and collected for review.
1. What is the version of Nutanix Move in use?
2. What hypervisor and hypervisor OS version is the UVM coming from?
3. What version of AHV is the UVM migrating to?
4. What is the version of the UVM guest OS?
a. Is the UVM guest OS in the list of supported Nutanix Move OSes?
i. https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move:top-support-os-r.html
ii. https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move:top-support-for-secure-boot.html
5. Is the UVM guest OS in the list of supported OSes for the version of AHV which the migration target is running?
a. https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix/guestos
6. Is the VM actually running one of the supported OSes listed above, or is it an appliance VM distributed by a third party that may have been heavily modified?
7. Are other UVMs able to migrate from the same source to the same target successfully? CentOS, Windows, BSD, etc.
8. Try rebooting the UVM on the source prior to migration.
9. What does the partition layout look like? Review this information on both source and destination UVMs.
a. What are the partition types?
b. What filesystems are in use on the given disk partitions?
c. If using LVM, what does the LVM naming and layout look like?
d. Are any of the partitions encrypted?
e. Is UEFI enabled?
10. Are the underlying vdisks/block devices even visible to the UVM while in emergency mode?
a. Use "lsblk" or similar to identify the vdisk visibility to the guest OS.
11. Attempt to modify the disk bus type to a type that does not require VirtIO drivers (SATA, IDE). Refer to the Configuring Images https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-image-configure-acropolis-wc-t.html section of the Prism Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:Web-Console-Guide-Prism.
a. What is the disk bus type on the source?
b. What is the disk bus type on AHV post migration?
c. Try changing the bus type on the source then migrating.
12. The UVM emergency OS boot may have generated an SOS report. This should be collected for review.
13. Any available kernel logs, journalctl logs, and dmesg logs should also be collected for review.
14. Review the Nutanix Move VM for service stability and collect and review the services' logs. KB 7466 http://portal.nutanix.com/kb/7466
15. Are all of the requirements met, and do any limitations apply to the UVM in question?
a. Requirements can be found in the Move User Guide by navigating to the relevant hypervisor > hypervisor requirements section, for example, Requirements (AWS to ESXi) https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move:top-requirements-aws-esxi-r.html
b. Move Unsupported Features https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move:top-unsupported-features-r.html
c. Move Migration Limitations https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move:top-migration-limitations-r.html
16. Is VMware tools/open-vm-tools up to date on the VM?
a. Is the VMware tools/open-vm-tools connection stable on the source?
It may be that networking is unavailable on the UVM due to it being booted into emergency mode or similar. If this is the case, then gathering logs for offline review may be difficult. While screenshots of the UVM console may be easy to gather, they are not ideal for offline review. However, it should still be possible to collect the logs mentioned in points 12 and 13 by following a workflow similar to the one below (a command-level sketch follows the list).
1. Attach a secondary disk to the UVM that is failing to boot.
2. Boot the UVM into emergency mode, then partition, format, and mount the secondary disk.
3. Collect any relevant data and logging and place it on the secondary disk.
4. Shut down the problematic UVM.
5. Attach the secondary disk to a second UVM that is working.
6. SCP or otherwise upload the collected data from the second UVM.
7. Shut down the second UVM and remove the secondary disk once the data has been collected from it.
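A hedged sketch of steps 2 and 3 from the emergency-mode shell, assuming the secondary disk appears as /dev/sdb (confirm with lsblk first; device names are environment-specific):
lsblk                                   # confirm the secondary disk is visible
echo 'type=83' | sfdisk /dev/sdb        # create a single Linux partition
mkfs.ext4 /dev/sdb1                     # format it
mount /dev/sdb1 /mnt                    # mount it
journalctl -b > /mnt/journal-boot.log   # boot logs (see point 13)
dmesg > /mnt/dmesg.log                  # kernel ring buffer
cp -r /var/log /mnt/var-log             # remaining system logs
umount /mnt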
|
KB15477
|
PCVM upgrade from PC.2022.6.x to PC.2023.3 may get stuck if CMSP was previously enabled and then disabled before the upgrade
|
PCVM upgrade from PC.2022.6.x to PC.2023.3 may get stuck if CMSP was previously enabled and then disabled before the upgrade
|
PCVM upgrade from PC.2022.6.x to PC.2023.3 may get stuck if CMSP was previously enabled and then disabled before the upgrade
Symptoms:
Upgrade from PC.2022.6.x to PC.2023.3 is stuck; the PCVM has all services stopped and does not proceed to reboot. On a scale-out PCVM, the one node out of 3 holding the shutdown token will be in this state.
The PCVM had CMSP enabled before, and CMSP is currently cleaned up and disabled; check the bash history for cmsp-cleanup.py execution:
nutanix@PCVM:~$ panacea_cli show_bash_history | grep -E "=======|cmsp-cleanup.py"
CMSP is currently disabled; confirm that no kubelet-master/kubelet-worker daemons are present:
nutanix@PCVM:~$ allssh 'systemctl | grep -E "kubelet-master|kubelet-worker"'
PCVM has a disk mounted in /home/nutanix/data/sys-storage/
nutanix@PCVM:~$ lsblk | grep /home/nutanix/data/sys-storage/
~/data/logs/finish.out contains an error similar to the following, and the node does not proceed to reboot to complete the upgrade:
nutanix@PCVM:~$ less ~/data/logs/finish.out
|
Cause:
The presence of a CMSP disk added during the previous CMSP enablement causes the finish script to enter an unexpected code path and try to stop daemons that are missing on a PCVM with CMSP disabled.
Workaround:
Check for the presence of the below keys via zkls /appliance/logical/iam_flags:
nutanix@PCVM:~$ zkls /appliance/logical/iam_flags/
If any of these 3 keys are present, remove them via zkrm after consulting with a local Pod Lead/Escalation Engineer:
nutanix@PCVM:~$ zkrm /appliance/logical/iam_flags/migration_ack
On a scale-out PCVM, verify the stuck node is holding the shutdown token:
nutanix@PCVM:~$ zkcat /appliance/logical/genesis/node_shutdown_token
Reboot the PCVM that is stuck; the node will then proceed with the upgrade procedure normally:
nutanix@PCVM:~$ sudo reboot
On a scale-out PCVM: wait for the next node to take the shutdown token and get stuck with the same symptoms in ~/data/logs/finish.out, reboot that PCVM, and repeat on all 3 PCVMs until the upgrade is finished.
After all PCVM nodes are upgraded, MSP enablement is expected to take some time.
|
KB7343
|
NCC Health Check: calm_container_health_check
|
The NCC health check calm_container_health_check verifies if the nucalm and epsilon containers are healthy.
|
Nutanix Self-Service is formerly known as Calm. The NCC health check calm_container_health_check verifies if the NuCalm and Epsilon containers are healthy.
Note: This check runs only on the Prism Central VM. The minimum required NCC version is 3.8.0.
Running the NCC check
Run the NCC check as part of the complete NCC Health Checks.
nutanix@pcvm$ ncc health_checks run_all
Or run this check selectively using the following command:
nutanix@pcvm:~$ ncc health_checks system_checks calm_container_health_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 15 minutes, by default.
Sample Output
For status: PASS
Running : health_checks system_checks calm_container_health_check
For status: INFO
Running : health_checks system_checks calm_container_health_check
For status: FAIL
Detailed information for calm_container_health_check:
Output messaging
Description: Check for Calm Container's state
Cause of failure: Internal services of each Calm container are down, or the Docker plugin is not working properly
Resolution: Please check the internal services of each Calm container.
Impact: NuCalm services may be inaccessible or perform incorrectly.
Alert ID: A400112
Alert Title: Calm Containers are unhealthy
Alert Message: Nucalm or Epsilon container is unhealthy on node_ip
Alert Smart Title: Nucalm or Epsilon container is in unhealthy state
|
INFO status:
INFO status is returned if either of the containers is in a starting state. Containers can take up to 2 minutes to start. Run the check again in 5 minutes to re-check the container health status.
FAIL status:
FAIL status is returned when either of the containers is not in a healthy state. The Calm UI will not load in this state. Verify the status of the NuCalm and Epsilon containers by running the below command on the PC VM:
nutanix@NTNX-PCVM:~$ docker ps
If the containers are healthy, the below output is seen:
nutanix@NTNX-PCVM:~$ docker ps
If the containers are unhealthy, output similar to the below is displayed:
nutanix@NTNX-PCVM:~$ docker ps
Try to log in to the individual containers and note down any errors:
nutanix@NTNX-PCVM:~$ docker exec -it epsilon /bin/bash
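A hedged way to query only the container health field (docker inspect with a Go template; the container names nucalm and epsilon are assumed to match the docker ps output above):
nutanix@NTNX-PCVM:~$ docker inspect --format '{{.State.Health.Status}}' nucalm epsilon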
For further investigation, initiate a Logbay log collection bundle by running the following command from the Prism Central VM command line:
nutanix@NTNX-PCVM:~$ logbay collect --duration=-1h00m
The above command collects a log bundle for the last 1 hour. Raise a support case with Nutanix and attach the log bundle and the outputs of the above commands for further investigation. For detailed help on Logbay collection, refer to KB 6691 https://portal.nutanix.com/kb/6691.
PASS status:
PASS status is returned when both containers are in a healthy state.
|
KB11756
|
VMs on the AHV hypervisor may restart unexpectedly during an AOS upgrade
|
VMs on the AHV hypervisor may restart unexpectedly during an AOS upgrade
|
Nutanix has identified a rare issue where some VMs on the AHV hypervisor may restart unexpectedly during an upgrade from AOS 5.20, 5.20.0.1 or 6.0. This issue is described in Field Advisory #94 https://download.nutanix.com/alerts/Field_Advisory_0094.pdf (FA94).
1. To determine the AOS version of the cluster, perform the following:
Log on to Prism Element. Click on the username (on the top right) > About Nutanix.
2. To determine if the cluster is susceptible, log in to any CVM on the cluster and run the following command:
nutanix@cvm:~$ hostssh "netstat -ant | grep -F 192.168.5.254:3261 | grep ESTAB | wc -l"
Sample output:
============= x.x.x.1 ============
If any returned value per node is greater than 500, then the cluster is susceptible to this issue.
|
Customers that are susceptible to the issue and are planning an AOS upgrade should contact Nutanix Support at https://portal.nutanix.com prior to upgrading AOS for assistance with pre- and post-upgrade steps. AOS 5.20.1.1 (LTS) and 6.0.1 (STS) or later contain the fix for this issue.
|
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""allssh '~/cluster/bin/genesis stop arithmos'\t\t\tallssh '~/cluster/bin/genesis stop hyperint'\t\t\tallssh '~/cluster/bin/genesis stop prism'\t\t\tallssh 'rm ~/data/arithmos/arithmos_per*'\t\t\tcluster start""
| null | null | null | null |
KB15847
|
Manually sign the certificates and update a CSI secret
|
Certificate rotation may fail in Nutanix Kubernetes Engine (formerly Karbon) if the CSI secret has expired.
|
Before proceeding with the solution section, make sure to follow all the steps below and confirm in step 7 that the CSI pod has a "401 Authorization Error".
1. NKE certificate rotation (skip-health-checks) failed with the below message:
nutanix@NTNX-PCVM:~$ export CLUSTER_NAME=<karbon_cluster_name>
2. Log in to the Karbon control plane (master) node, then switch the user to root.
[nutanix@karbon-master-0 ~]$ sudo -i
3. Make sure the date and time among the NKE cluster nodes are in sync.
[root@karbon-master-0 ~]# for i in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[0].address}') ; do echo ====$i====; ssh $i "date" ;done
4. Make sure the NKE cluster nodes have a valid certificate.
[root@karbon-master-0 ~]# for i in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[0].address}') ; do echo ====$i====; ssh $i 'find /var/nutanix/etc/kubernetes/ssl ! -iname *key* ! -iname webhook* ! -iname *proxy* -type f -exec echo *******{}***** \; -exec openssl x509 -in {} -noout -enddate \;' ;echo;done
5. Some pods (prometheus and elasticsearch-logging below) are observed stuck in the Init state for a long time.
[root@karbon-master-0 ~]# kubectl get po -n ntnx-system |egrep -v 'Running|Completed'
6. The pod's events show a "401 Authorization Error" when trying to mount a VG.
[root@karbon-master-0 ~]# kubectl describe pod -n ntnx-system prometheus-k8s-0 |grep Events -A 30
7. Finally, confirm the csi-node-ntnx-plugin pod also has a "401 Authorization Error" for the GetVersions() API call.
[root@karbon-master-0 ~]# kubectl logs csi-node-ntnx-plugin-q52hm -n ntnx-system -c csi-node-ntnx-plugin
More symptoms:
1. The certificate rotation failed due to Kibana pods being down.
Rotate the certificates for the k8s clusters da8c8b6-xxxx-yyyy-zzzz 100% kFailed Certificate rotation initiated for addons failed to rotate addons certificates: Max retries done: Currently running: 0 replicas of kibana
2. The stateful sets for elasticsearch and prometheus are not running
[nutanix@karbon-master-0 ~]$ kubectl get sts -A
3. Elasticsearch pod events show the following entries about connection refused on the CSI socket, due to it being unable to mount PVC volumes.
Events:
4. Looking at the CSI pods, they are in a crash loop
[nutanix@karbon-master-0 ~]$ kubectl get pods -A | grep csi
5. For the same pod, in the container "driver-registrar" there are a few errors regarding the file "/var/nutanix/var/lib/kubelet/plugins/csi.nutanix.com/registration" not existing.
[nutanix@karbon-master-0 ~]$ sudo kubectl logs -n ntnx-system nutanix-csi-node-c5wv6 -c driver-registrar
|
The following steps should only be followed by a Support Tech Lead / Staff SRE or DevEx. Perform the below procedure on the PCVM and on one of the NKE master nodes.
A. Extract the certificate from the failed task on the PCVM.
1. On one of the PCVMs, download the script https://download.nutanix.com/kbattachments/16269/karbon_extract_cert.py to extract the certificate from the failed task. The md5sum is 5d494bfd5426af6ea73f42260ff5ab44.
nutanix@NTNX-PCVM:~/tmp$ wget -nv https://download.nutanix.com/kbattachments/16269/karbon_extract_cert.py
2. List the failed certificate rotation tasks with the command below. Before running the below commands, ensure you have logged in to karbonctl.
nutanix@NTNX-PCVM:~$ export CLUSTER_NAME=<karbon_cluster_name>
In the above command, replace <karbon_cluster_name> with the name of the Karbon cluster that needs to be fixed.
3. Run the script to extract the certificate from the above task uuid.
nutanix@NTNX-PCVM:~$ python ~/tmp/karbon_extract_cert.py --uuid <task_uuid> --csi > ~/tmp/csi_cert.pem
In the above command, replace <task_uuid> with the uuid taken from the output of step 2. Below is an example output, which includes a certificate and a private key:
nutanix@NTNX-PCVM:~$ cat ~/tmp/csi_cert.pem
4. Run the below commands to extract the cert file and the private key separately.
nutanix@NTNX-PCVM:~$ sed -n '1,/END CERT/p' ~/tmp/csi_cert.pem > ~/tmp/cert
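Only the certificate half of the split is shown above; a hedged counterpart for the private key half, assuming the key follows the certificate in the combined PEM (the output name "key" matches the scp step below):
nutanix@NTNX-PCVM:~$ sed '1,/END CERT/d' ~/tmp/csi_cert.pem > ~/tmp/key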
5. Copy the certificate into one of the master nodes in the Karbon cluster under the /home/nutanix/tmp_cert folder by running the following commands. Ensure you have created the tmp_cert folder before proceeding. Extract the private key to use for scp with the below commands:
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster ssh script --cluster-name ${CLUSTER_NAME} > ~/tmp/${CLUSTER_NAME}.sh
The above command will show an example output like the one below; copy the private key file path from the output for the next command.
nutanix@NTNX-PCVM:~$ sh ${CLUSTER_NAME}.sh -s
With the private key file path from the above output, run the below command to copy the cert and key files into one of the Karbon master nodes:
nutanix@NTNX-PCVM:~$ scp -i <private_key_file_path> ~/tmp/{cert,key} nutanix@<master_node_ip>:/home/nutanix/tmp_cert
In the above command, replace <master_node_ip> with one of the master nodes' IP addresses. If the following error is seen, copy the contents of both files "~/tmp/cert" and "~/tmp/key" and create them on the master node under "/home/nutanix/tmp_cert" manually with the "vi" editor:
nutanix@PCVM:~$ sudo scp ~/tmp/cert nutanix@<master_node_ip>:/home/nutanix
B. Update the CSI certificate on the NKE cluster. SSH to the master node where you copied the files in the previous step.
1. Change the certificate and key format:
[root@karbon-master-0 tmp_cert]# cat cert |sed -z 's/\n/\\n/g' |sed -r 's/(^|$)/"/g' > csi.crt.pem ;echo
2. Extract the existing certificate from the ntnx-secret*, then generate a new certificate.
[root@karbon-master-0 tmp_cert]# kubectl get secret -n kube-system | grep ntnx-secret
3. Replace the certificate and key in the existing certificate.
[root@karbon-master-0 tmp_cert]# jq --slurpfile csi csi.crt.pem '.cert |= $csi ' all_cert_json.json > all_cert_json_new.json
4. Update the CSI secret.
[root@karbon-master-0 tmp_cert]# kubectl apply -f ${SECRET}_updated.json
If you observe the below error when applying the secret, remove the "managedFields" array from the JSON file and reapply.
[root@karbon-master-0 tmp_cert]# kubectl apply -f ${SECRET}_updated.json
An example of using jq to remove the "managedFields" array from the JSON file and re-apply the JSON file.
[root@karbon-master-0 tmp_cert]# jq 'del(.metadata.managedFields)' ${SECRET}_updated.json > ${SECRET}_updated_no_managed.json
5. Restart all nutanix-csi* pods.
[root@karbon-master-0 tmp_cert]# kubectl get pod -n ntnx-system --show-labels |grep nutanix-csi
6. Wait a few minutes and check if all pods are healthy.
7. Re-initiate the certificate rotation task from the PCVM SSH session:
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster certificates rotate-cert --cluster-name ${CLUSTER_NAME} --skip-health-checks
8. Check the status of the certificate rotation and confirm it is successful:
nutanix@NTNX-PCVM:~$ ~/karbon/karbonctl cluster certificates rotate-cert status --cluster-name ${CLUSTER_NAME}
|
KB12071
|
Failed to connect to OCSP responder
|
Failure to log in to Prism using CAC authentication.
|
The Online Certificate Status Protocol (OCSP) is an Internet protocol used for checking certificate revocation during client authentication, as seen in Common Access Card (CAC) based authentication. If a user is unable to log into the Prism UI using CAC-based authentication, verify the following:
ikat_control_plane.out contains errors for "connection refused" or "i/o timeout"
nutanix@CVM$ grep 'connection refused' ~/data/logs/ikat_control_plane*
nutanix@CVM$ grep 'timeout' ~/data/logs/ikat_control_plane*
OCSP is enabled, and the OCSP Responder URI is configured:
nutanix@CVM$ ncli authconfig get-client-authentication-config
DNS and Firewall connectivity checks are successful:
nutanix@CVM$ allssh 'nslookup <ocsp-responder-url>'
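The solution section below also references an ncat connectivity check; a hedged example, assuming the OCSP responder listens on port 80 (substitute the host and port from the responder URI above):
nutanix@CVM$ allssh 'nc -zv <ocsp-responder-host> 80'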
|
Choose an appropriate set of actions depending on the outcome of the verification steps above:
If the OCSP configuration is not set up or not enabled, see the following Security Guide documentation for setup procedures: AOS Security 6.0 - Configuring Authentication (nutanix.com) https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_0:wc-security-authentication-wc-t.html.
If the nslookup command fails to resolve, perform further DNS troubleshooting.
If the ncat command returns a 'connection refused' message, review the firewall configuration.
|
KB6960
|
Prism - How to extract CA Chain, Public Certificate and RSA Key from PFX file
|
This article describes how to extract the CA chain, public certificate and RSA key from a PFX file and import them into Prism.
|
Certificates exported from Windows systems are usually in PFX format. This article describes how to import these certificates into Prism.
The following information is mandatory when deploying custom SSL certificates in Prism:
Private Key Type: The appropriate type for the signed certificate (RSA 2048-bit, EC DSA 256-bit, or EC DSA 384-bit)
Private Key: The private key associated with the certificate to be imported
Public Certificate: The signed public portion of the server certificate corresponding to the private key
CA Certificate/Chain: The certificate or chain of the signing authority for the public certificate
The certificate bundling tool may pack all these certificates into one PFX file, and you may not be able to extract the needed certificates.
Refer to Installing an SSL Certificate https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:wc-security-ssl-certificate-wc-t.html from the Nutanix Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:Nutanix-Security-Guide for the detailed instructions on SSL deployment.
|
This procedure works on PFX certificate files that bundle keys in RSA format.
To extract the certificates in PEM format from a PFX file, follow the below procedure:
1. Copy the PFX file to a Linux system with the OpenSSL utility installed.
2. Log on to that Linux system and navigate to the directory where the PFX file was copied in the previous step.
3. Execute the below commands to extract the certificates:
Extract the RSA Key from the PFX file:
$ openssl pkcs12 -in <PFX_file> -nocerts -nodes -out nutanix-key-pfx.pem
Extract the Public Certificate from the PFX file:
$ openssl pkcs12 -in <PFX_file> -clcerts -nokeys -out nutanix-cert-pfx.pem
Extract the CA Chain from the PFX file:
$ openssl pkcs12 -in <PFX_file> -cacerts -nokeys -chain -out ca-pfx.pem
Convert the RSA Key from PFX format to PEM:
$ openssl rsa -in nutanix-key-pfx.pem -out nutanix-key.pem
Convert the x509 Public Certificate and CA Chain from PFX to PEM format:
$ openssl x509 -in nutanix-cert-pfx.pem -out nutanix-cert.pem
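Optionally, a quick sanity check that the converted key and certificate parse cleanly before importing (standard openssl subcommands; file names as generated above):
$ openssl rsa -in nutanix-key.pem -check -noout
$ openssl x509 -in nutanix-cert.pem -noout -subject -enddate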
Download nutanix-key.pem, nutanix-cert.pem, and ca.pem from the Linux system to a local desktop. After following the steps above, the needed certificates and keys will be generated in the preset working directory and can be used as follows:
Private Key Type: RSA 2048-bit
Private Key: nutanix-key.pem
Public Certificate: nutanix-cert.pem
CA Certificate/Chain: ca.pem
Follow the procedure in Installing an SSL Certificate https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:wc-security-ssl-certificate-wc-t.html from the Nutanix Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:Nutanix-Security-Guide to install and deploy the downloaded certificates.
Note: Prism allows certificates in PEM format only, and the format is shown below:
-----BEGIN ..... -----
|
KB12152
|
ID Based Security configuration fails on Flow Network Security with LDAPS Enabled AD Groups
|
When configuring Flow Network Security ID Based Security with LDAPS protocol, Prism UI may show a stack trace.
|
Flow Network Security does not support the LDAPS (secure) protocol for the ID Based Security feature. When configuring the Flow Network Security feature "ID Based Security" in the PC (Prism Central) UI, an issue may be observed when attempting to reference configured AD Groups that are to be used for VM accounts.
The following error message may be seen when selecting "Add User Group" and choosing a group that is configured with a Secure LDAP (LDAPS) connection protocol under the "Referenced AD Groups" menu:
Prism UI may report the following stack trace:
|
Configure ID Based Security using the LDAP protocol for the underlying AD user groups.
|
KB12439
|
Nutanix Files - Failed to upgrade Files Manager: Failed to remove the old files_manager image
|
In rare cases, after an upgrade of Files Manager (FSM) on Prism Central, the container exits and is not able to start.
To work around this, Files Manager can be upgraded again using the LCM offline bundle.
|
In rare cases, the Files Manager upgrade on Prism Central may fail to remove the old Files Manager docker image. The Files Manager container exits and is not able to start. The following failure message is seen:
Operation failed. Reason: Update of Files Manager failed on x.x.x.x (environment pc) at stage 1 with error: [Failed to upgrade Files Manager: Failed to remove the old files_manager image: files_manager:1.0.1]
|
Please only follow this KB if the original description is a match. Any other issue causing Files Manager to fail will need to be investigated further, and a new ENG/TH should be opened as required. To work around this, Files Manager can be upgraded again using the LCM offline bundle. Steps to do this:
1. Download the Files Manager LCM bundle from https://portal.nutanix.com/page/downloads?product=afs
2. Extract the bundle and find the "files_manager.tar.xz" package:
nutanix@PCVM ~$ tar -xvf lcm_files_manager_2.0.0.tar.gz
3. Move the "files_manager.tar.xz" package onto /home/nutanix/tmp folder
nutanix@PCVM ~$ mv ~/builds/files_manager-builds/2.0.0/files_manager.tar.xz /home/nutanix/tmp
4. Start the upgrade
nutanix@PCVM ~$ files_manager_cli --pkg_path=/home/nutanix/tmp/files_manager.tar.xz update_package
5. Check if the Files Manager container is running with the "docker ps" command; wait a few seconds for the container bring-up to complete, and it will show up in the command output.
nutanix@PCVM ~$ docker ps
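A hedged variant that narrows the output to just this container (docker ps name filter; the container name is assumed to match "files_manager"):
nutanix@PCVM ~$ docker ps --filter name=files_manager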
6. Check the FSM version
nutanix@PCVM ~$ files_manager_cli get_version
7. Run LCM inventory and confirm the files manager version is shown as upgraded
|
KB14785
|
Nutanix Files - MMC plug-in crashes and becomes unresponsive while updating share permissions
|
MMC plug-in crashes and becomes unresponsive while updating share permissions
|
The MMC plug-in crashes and becomes unresponsive while updating share permissions at the top level on the TLD folder when multiple File Server instances are loaded in the MMC. When the customer attempts to update share permissions, the MMC locks up and the permissions fail to be added to the folders. After the customer updates permissions and clicks Apply, a small red icon appears next to the file server name and an hourglass icon next to the share. The screen remains stuck like this until the window is closed, after which a "Snap-in is not responding" window appears.
|
This issue is documented in ENG-407150 https://jira.nutanix.com/browse/ENG-407150 and ENG-458890 https://jira.nutanix.com/browse/ENG-458890, which are fixed in the future MMC version 4.0.
Workaround
Loading the MMC plug-in with only one File Server helps remediate this issue.
|
KB13022
|
Scale out PC upgrade from PC 2021.x to PC 2022.x may appear to be hung
|
Scale out PC upgrade from PC 2021.x to PC 2022.x may appear to be hung
|
DESCRIPTION
Scenario#1: A scale-out Prism Central upgrade from pc.2021.x --> pc.2022.x may get stuck as the "Catalog Service Proxy" leadership election fails. The issue may happen with both a 1-click upgrade and an LCM-based Prism Central (PC) upgrade.
Scenario#2: A scale-out Prism Central upgrade to pc.2022.1.x may get stuck after the first node upgrade if the "Catalog Service" leader is not on the "upgraded" node.
DIAGNOSIS
Scenario#1
Verify if there is a "Catalog" leader.
nutanix@PCVM:~$ catalog_leader_printer
If no leader is printed, you may be running into Scenario#1. Check "cluster status" on all nodes; it should show the "Catalog" service up and running, but it is still not able to acquire leadership.
Scenario#2
Run the below command to find out the "Catalog" leader
nutanix@PCVM:-$ service=/appliance/logical/pyleaders/catalog_master; id=$(zkcat $service/`zkls $service| head -1`); zeus_config_printer | grep -B 15 " uuid: \"$id\"" | grep "service_vm_external_ip"| awk '{print "Catalog Leader ==>", $2}'
To determine if the "Catalog" leader is on the non-upgraded node, run the below command:
nutanix@PCVM:~$ upgrade_status
Make a note of the node that has been upgraded. If the "Catalog" leader is not on the upgraded node, then you are running into Scenario#2.
You can also check the Genesis logs for the error signature "Failed to get catalog items: Received buffer is too short":
/home/nutanix/data/logs/genesis.out
In the case of an LCM upgrade, the Genesis service may be stuck in a restart loop due to failure in fetching catalog items, as the LCM framework uses the "Catalog Service" to download the images. This may prevent the affected node from relinquishing the "shutdown token", and other nodes may get stuck in the "upgrade running" state.
|
Scenario#1:
If you are running a pc.2021.x Prism Central version, upgrade to pc.2022.4.0.1 or higher. The fix for Scenario#1 is incorporated in pc.2022.4 and higher versions.
If you have already upgraded to pc.2022.1 and have not encountered this issue, it is still wise to upgrade to pc.2022.4.0.1.
If you have upgraded to pc.2022.1 and have determined you are running into Scenario#1, engage Nutanix Support to resolve the issue. Once the issue is resolved and the Prism Central cluster is stable, upgrade to pc.2022.4.0.1 or the highest available version.
Scenario#2: This issue is resolved in pc.2022.6. Upgrade Prism Central to the specified version or newer. To resolve Scenario#2, ensure the "Catalog" leader is on the node that is upgraded to the latest version (i.e. pc.2022.1.x) by running the below:
To verify the current Catalog leader and which node is upgraded, run steps #1 and #2 from Scenario#2 above. To force the "Catalog" leader over to the node that is upgraded, you may stop the "Catalog" service on the remaining 2 nodes that are not upgraded:
nutanix@PCVM:-$ genesis stop catalog
Once the upgraded node becomes the "Catalog" leader, you may restart the "Catalog" service using cluster start.
|
KB4273
|
NCC Health Check: aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check
|
The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain longer than intended/specified.
|
The NCC health checks aged_third_party_backup_snapshot_check and aged_entity_centric_third_party_backup_snapshot_check check the cluster for dangling third-party backup snapshots that may be present for a protection domain. If a third-party backup snapshot has existed in the system for longer than the specified period, this check raises an alert. These backup snapshots are generated by third-party backup software (Veeam, Commvault, Rubrik, HYCU, etc.).
Running the NCC Check
Run this check as part of the complete NCC health checks.
nutanix@cvm$ ncc health_checks run_all
Or run the check separately.
Third-Party backup snapshot check:
nutanix@cvm$ ncc health_checks data_protection_checks protection_domain_checks aged_third_party_backup_snapshot_check
Entity Centric Specific check:
nutanix@cvm$ ncc health_checks data_protection_checks protection_domain_checks aged_entity_centric_third_party_backup_snapshot_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every day, by default.
Sample Output
For status: PASS
Third-Party backup snapshot check:
Running : health_checks data_protection_checks protection_domain_checks aged_third_party_backup_snapshot_check
Entity Centric Check:
Running : health_checks data_protection_checks protection_domain_checks aged_entity_centric_third_party_backup_snapshot_check
For status: FAIL
Third-Party backup snapshot check:
Running : health_checks data_protection_checks protection_domain_checks aged_third_party_backup_snapshot_check
Entity Centric Specific check:
Detailed information for aged_entity_centric_third_party_backup_snapshot_check
Starting with AOS 5.15.3 and 5.18.1, backup snapshots created without an expiration time are assigned a default 60-day expiration. From NCC 3.10.1 and higher on AOS 5.15.3, 5.18.1 and above, the check will only fail if the snapshots have an infinite expiration (e.g., year 2086) and have exceeded the threshold set for the alert (default 7 days).
Output messaging
Check 110250:
Description: Check for aged third-party backup snapshots.
Cause of failure: Third-party backup snapshots are present in the cluster longer than the configured threshold.
Resolution: Contact Nutanix support.
Impact: Aged snapshots may unnecessarily consume storage space in the cluster.
Alert ID: A110250
Alert Title: Aged third-party backup snapshots present
Alert Message: pd_name has num_snapshot aged third-party backup snapshot(s) and they may unnecessarily consume storage space in the cluster.

Check 110263:
Description: Check for the old entity-centric third-party backup snapshots.
Cause of failure: Entity-centric third-party backup snapshots are present in the cluster longer than the configured threshold.
Resolution: The Recovery Point for the below VM needs to be deleted from Recoverable Entities in Prism Central. If the issue persists, contact Nutanix support.
Impact: Old snapshots may unnecessarily consume storage space in the cluster.
Alert ID: A110263
Alert Title: Old Entity-Centric Third-Party Backup Snapshots Present
Alert Message: VM vm_name has num_snapshot old third-party backup snapshot(s) and may unnecessarily consume storage space in the cluster.
|
The check passes if third-party backup snapshots are not present in the cluster longer than the configured threshold. The default threshold is 7 days, but this threshold value can be modified. The threshold can range from 1 day to 365 days.
Administrators may specify through the Prism UI how long a third-party snapshot is expected to be present in the cluster. This check is based on the backup schedule settings. A snapshot is created every time the third-party backup job runs, and the previous snapshot is deleted once the backup job completes. Essentially, there should be 1 snapshot per VM at all times unless specified otherwise within the backup software. The only other time there will be multiple snapshots for a VM is if that VM is configured in multiple backup jobs.
If the check fails, verify that the threshold set for the check matches the backup schedule and/or retention for the snapshots that are linked to the protection domain. If there are multiple backup jobs linked to the protection domain, then the threshold for this check should be equal to the longest schedule.
For example, if there are 4 backup jobs linked to the protection domain:
Backup Job 1 runs dailyBackup Job 2 runs once a weekBackup Job 3 runs once every 2 weeksBackup Job 4 runs only once a month
The threshold for the check should be set to 32 days (the longest schedule, monthly, is at most 31 days, plus a one-day buffer). If the threshold is set correctly but the check is still failing, it is recommended to reach out to the third-party backup vendor to ensure the DELETE API calls are being sent. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.
Aged snapshots from third-party backups can cause unnecessary capacity utilization on the cluster. Nutanix recommends verifying the threshold is consistent with the expected snapshot time frame within the backup software.
To change the threshold for "Aged third-party backup snapshots present":
1. Log in to Prism Element.
2. From the Home drop-down menu, select "Alerts".
3. In Alerts, click "Configure" and "Alert Policy".
4. Search for alert ID "A110250" or "Aged third-party backup snapshots present" and click the update policy action.
5. Configure the alert threshold to be consistent with the retention policy of the backup software you are using, then click Save.
Entity-Centric Snapshots:
The workflow for these snapshots is through Prism Central, NOT Prism Element.
Workflow to find entity-centric snapshots: Prism Central UI -> Virtual Infrastructure -> VM Recovery Points -> Select the VM
To change the threshold for "Old Entity-Centric Third-Party Backup Snapshots Present":
1. Log in to Prism Element where the alert is observed.
2. From the Home drop-down menu, select "Settings".
3. In Settings, click "Alert Policy".
4. Search for alert ID "A110263" or "Old Entity-Centric Third-Party Backup Snapshots Present" and click the update policy action.
5. Configure the alert threshold to be consistent with the retention policy of the backup software you are using, then click "Save".
NOTE: These checks could also be failing if a VM that was once being backed up via the backup software has been removed from the backup configuration and/or removed from the cluster. It is up to the backup vendor to send the DELETE API call for the snapshot(s) once a VM has been removed from the backup configuration. If you suspect that snapshots are being left on the system due to a VM being removed from the configuration, engage the backup vendor for further assistance.
Default expiration in AOS 5.15.3, 5.18.1 and later
Starting with AOS 5.15.3 and 5.18.1, backup snapshots created without an expiration time are assigned a default 60-day expiration. Upgrade AOS to 5.15.3, 5.18.1 or later to prevent this issue in the future. However, upgrading a cluster running a previous version of AOS that has backup snapshots without an expiration date will not change those snapshots; they will continue to have no expiration date after the upgrade. In this case, engage the third-party backup vendor. If the snapshots are still present after consulting with the backup vendor, consider engaging Nutanix Support at https://portal.nutanix.com.
|
KB15217
|
Phoenix unable to find LSI controller after motherboard replacement on HPE node
|
Phoenix is unable to find the LSI controller after motherboard replacement on an HPE DL380:
StandardError: This node should have at least 1 and at most 2 Broadcom / LSI MegaRAID 12GSAS/PCIe Secure SAS39xx. But phoenix could not find any such device.
|
Phoenix is not able to find the LSI controller after motherboard replacement on an HPE DL380. After booting the host into Phoenix, the below error message is given:
StandardError: This node should have at least 1 and at most 2 Broadcom / LSI MegaRAID 12GSAS/PCIe Secure SAS39xx. But phoenix could not find any such device.
Using different versions of Phoenix and Foundation results in the same error message. Below is a screenshot of the same error when the host is booted into Phoenix.
|
Contact HPE support to make sure the Nutanix-specific BIOS is loaded onto the new motherboard.
|
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""watch -n 5 pgrep component""
| null | null | null | null |
KB13030
|
Nutanix Files - File Analytics deployment fails to verify services
|
This article helps in troubleshooting a File Analytics deployment failure during services verification.
|
File Analytics deployment fails with "Failed to verify services" error.
Scenario 1.
In /var/log/nutanix/install/analytics_bootstrap.log on the FAVM, we see "Read timed out" errors:
---- cut here ----
Scenario 2.
The FAVM is deployed, the VG is mounted, and the docker service is started, but none of the containers start up. The FAVM then freezes and gets destroyed, and the deployment eventually fails at 'Verify File Analytics services'. The issue is related to the conflict described in KB-8817 https://portal.nutanix.com/kb/8817, where the docker default subnet is already in use. prism_gateway.log on the Prism leader:
------ cut here ------
/var/log/nutanix/install/analytics_bootstrap.log on the FAVM shows the docker bridge being created:
2023-03-30 09:44:40,7772 :: Waiting for Volume Group to be created and discovered.
|
Solution Scenario 1:
This is caused by ENG-464722 https://jira.nutanix.com/browse/ENG-464722, where the Kafka container takes more than the default time to come up, leading to a failure during services verification in the deployment. Engineering is engaged to fix the issue in future releases.
Note: Please engage File Analytics Engineering through a TH or an ONCALL to resolve this issue.
Steps that would be taken to resolve this issue:
1. Sleep time in the deployment process will be increased so that the FA VM remains available for debugging.
2. Once the FA VM is available for troubleshooting, the timeout value for the health check in docker compose will be increased.
3. The rest of the FA bootstrap steps will be performed manually.

Solution Scenario 2:
The issue is related to the conflict described in KB-8817 https://portal.nutanix.com/kb/8817 when the docker default subnet is already in use (172.(17-35).0.0/16 or 172.28.0.0/16 or 172.28.x.0/24). The same KB also provides a workaround to change the subnet used for the docker network; a general sketch follows below. This has been reported to engineering in ENG-270839 https://jira.nutanix.com/browse/ENG-270839.
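A hedged sketch of the docker-level part of that workaround (KB-8817 has the authoritative FAVM procedure; the 192.168.200.0/24 range is an assumption and must be unused in your environment):
# On the FAVM: point the docker bridge at a non-conflicting subnet, then restart docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "192.168.200.1/24"
}
EOF
sudo systemctl restart docker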
|
KB8717
|
Nutanix Files - Failover and Failback guide step by step
|
This article explains how DR and Failback can be done with Nutanix Files step by step.
|
This explains step by step how failover and failback can be done with Nutanix Files:
When a Nutanix Files cluster is deployed, a new unique container is created named Nutanix_FilesXYZ_ctr, where FilesXYZ is the File Server name. That container stores the File Server VMs as well as all the Volume Groups (Shares).
It is very important to set the vStore Mappings on both PE clusters to make sure we do not have multiple instances of Nutanix Files clusters running on the same container. More details below.
During the deployment, it will also ask for a Protection Domain, which will include FSVMs and Shares as entities and will be automatically updated when creating new shares. If that option is not selected during the deployment, the File Server can be safely protected afterward by clicking "Protect" after selecting your Files cluster from the "File Server" menu in Prism.
Disaster Recovery Preparation Steps: FilesXYZ runs on ClusterA - its remote site (DR) will be ClusterB.
1. ClusterB > Create a new Storage container DR_Nutanix_FilesXYZ_ctr. ** A different container name on the different clusters will help identify the correct cluster.
2. ClusterB > Data Protection > Remote Site > Update > Settings > vStore Name Mapping > Select vStores > Save
3. ClusterA > Data Protection > Remote Site > Update > Settings > vStore Name Mapping > Select vStores > Save
4. ClusterA > Data Protection > Async DR > NTNX-FilesXYZ > Update > Configure Retention Policy on local and remote > Save. ** The first replication will be a full replication.
5. ClusterB > A new inactive Protection Domain has been automatically created.
6. ClusterB > File Server > FilesXYZ appears as (Not Active), which means DR is set correctly for Nutanix Files.
|
Nutanix Files Failover
1. ClusterA > Data Protection > NTNX-FilesXYZ (Active) > Migrate. Migrate operations on ClusterA consist of: File Server PD deactivate, VM change power state, VM destroy, and Volume Group delete. NTNX-FilesXYZ becomes Inactive.
2. ClusterB > Data Protection > NTNX-FilesXYZ is now Active.
3. ClusterB > File Server > FilesXYZ (Needs Activation) > Activate > Wizard. Activate operations on ClusterB consist of:
- PD activate phase: Volume Group create, VM register.
- File Server activation phase: VM change power state, VM NIC update, VM disk detach/attach.
Note: During File Server activation, one needs to specify the Storage (internal) network and Client (external) network, DNS, NTP, and AD details. Make sure to specify unique IP addresses for the File Server VMs; refer to the following documentation for details: Nutanix Files User Guide: Activating a File Server https://portal.nutanix.com/page/documents/details?targetId=Files-v4_0:fil-file-server-activate-wc-t.html
4. Confirm FilesXYZ on ClusterB is accessible and data is consistent.
5. If from now on FilesXYZ is going to run on ClusterB, consider removing the Protection Domain schedule on ClusterA and creating a new one from ClusterB to ClusterA as follows: ClusterB > Data Protection > NTNX-FilesXYZ > Update > New Schedule > Create schedule > Close.
Nutanix Files Failback
FilesXYZ is now running on ClusterB. Its remote site will be ClusterA. Assuming the previous steps were re-done correctly, migration from ClusterB to ClusterA can be triggered. As we still have the snapshots on both sites, the failback will only replicate the changes (delta).
1. ClusterB > Data Protection > Async DR > NTNX-FilesXYZ > Migrate.
2. ClusterA > NTNX-FilesXYZ became Active and has all entities.
3. ClusterA > File Server > FilesXYZ (Needs Activation) > Activate > Complete the Wizard.
4. ClusterA > Confirm FilesXYZ is accessible and data is consistent.
5. ClusterA > Confirm the vStore Mappings are still correct.
6. ClusterB > Confirm the vStore Mappings are still correct.
7. ClusterA > Data Protection > NTNX-FilesXYZ > Update > New Schedule > Retention Policy > Remote Site > Create schedule.
8. ClusterB > Data Protection > Confirm the Protection Domain NTNX-FilesXYZ is Inactive.
|
KB1955
|
INTERNAL - recover/check the SNMPv3 password for AOS
|
You need to check/verify the SNMPv3 password that was stored in AOS
|
You need to check/verify the SNMPv3 password that was stored in NOS.
|
On any CVM that is part of the cluster you can run this command:
$ zeus_config_printer 2> /dev/null | grep -A 8 snmp_info
The output will be as follows:
snmp_info {
  enabled: true
  user_list {
    username: "marek"
    auth_type: kMD5
    auth_key: "3A1D1D124E000B0F401D"
    priv_type: kAES
    priv_key: "3A1D1D124E000B0F401D"
  }
}
Here you see the values auth_key and priv_key; they are encoded. To recover the clear text, you can simply do this:
$ python
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a=list("3A1D1D124E000B0F401D".decode('hex'))
>>> key="This is the default key"
>>> "".join([chr(ord(c1)^ord(c2)) for c1,c2 in zip(a,list(key))])
'nutanix/4u'
>>>
If your password is longer than the default key, you just need to append the key to itself, something like this:
>>> key="This is the default key"*2
>>> key
'This is the default keyThis is the default key'
>>> key="This is the default key"*3
>>> key
'This is the default keyThis is the default keyThis is the default key'
>>>
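The session above is Python 2 (str.decode('hex') no longer exists in Python 3). A hedged Python 3 equivalent that also handles passwords longer than the key by cycling it:
>>> import itertools
>>> encoded = bytes.fromhex("3A1D1D124E000B0F401D")
>>> key = b"This is the default key"
>>> bytes(c ^ k for c, k in zip(encoded, itertools.cycle(key))).decode()
'nutanix/4u'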
|
KB10103
|
Hardware Scenario: Repeat Disk Failure in Same Slot
|
How to troubleshoot multiple drives going offline in the same chassis slot.
|
Overview
In cases where a failed drive is replaced but the new drive also fails within the next 30 days, it is possible that the disks themselves are healthy and the actual cause of the problem lies elsewhere in the I/O path (node, chassis slot, HBA card). This article is intended to help you identify the actual part that is causing the disks to be marked offline. It is intended primarily for NX platforms. You can get a history of hardware and firmware changes on a node using the hardware.history log file on the CVM.
Example: log indicating that a Samsung SSD was removed from Slot 2 on July 7th.
nutanix@NTNX-CVM:~$ cat config/hardware.history
Problem Signatures
Start by ruling out possible causes like:
Out-of-date firmware on the HBA or disks
ISB-014 https://confluence.eng.nutanix.com:8443/display/STK/ISB-014-2016%3A+2U2N+-+Node+B+fails+to+detect+drives+or+fails+to+power+on: 2U2N - Node B fails to detect drives or fails to power on
ISB-106 https://confluence.eng.nutanix.com:8443/display/STK/ISB-106-2020%3A+Broadcom+%28LSI%29+SAS3008+Storage+Controller+Instability: Broadcom (LSI) SAS3008 Storage Controller Instability
ISB-130 https://confluence.eng.nutanix.com:8443/display/STK/ISB-130-2023%3A+Disk+Instability+on+NX-8035-G8+Nodes: Disk Instability on NX-8035-G8 Nodes
TH-8056 https://jira.nutanix.com/browse/TH-8056: Drive assigned multiple SAS Bus References
1. LSIutil shows the number of DWord and Disparity Errors increasing over the course of hours/days.
nutanix@NTNX-CVM:~$ date & sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil -a 12,0,0 20
2. Check smartctl -x output across multiple samples to see if any of the following error counters are increasing over time. This may be a sign of an HBA/Chassis/Node issue (refer to the Failure Isolation section in the Solution of this KB).
"Command failed due to ICRC error"
nutanix@NTNX-CVM:~$ sudo smartctl -x /dev/sdX -T permissive
"Number of Interface CRC Errors"
Device Statistics (GP Log 0x04)
"Number of Hardware Resets"
nutanix@NTNX-CVM:~$ sudo smartctl -x /dev/sdX -T permissive
"UDMA_CRC_Error_Count"
Vendor Specific SMART Attributes with Thresholds:
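To compare samples over time, a hedged helper loop (assumes /dev/sdX is the suspect device; the hourly interval and four samples are arbitrary choices):
nutanix@NTNX-CVM:~$ for i in {1..4}; do date; sudo smartctl -x /dev/sdX -T permissive | grep -iE 'CRC|Hardware Resets'; sleep 3600; done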
3. Sense key errors persist in the kernel logs for the same device slot even after the original disk was replaced.
nutanix@CVM:~$ sudo dmesg -T | grep -C9 'I/O error'
|
This issue may be caused by the following:
Bad HBA card (on single-node platforms) where the LSI HBA is a PCI card
Bad node (on multi-node platforms) where the LSI HBA is soldered onto the node motherboard
Bad chassis disk slot or internal cabling (all platforms)
Failure Isolation
In order to narrow down the source of the problem, perform the following steps. Note: Please make sure to contact an SME from your local Hardware vTeam if you need additional guidance. Alternatively, you can frame your issue and ask for assistance in Slack via #hw and @hw-help. For more complex cases, please utilize Tech Help https://confluence.eng.nutanix.com:8443/display/WWEE/Tech+Help (low/medium priority) or OnCall https://confluence.eng.nutanix.com:8443/display/SW/SRE+Oncall+Process (high priority). If the customer temperature is high or the hardware platform is relatively new, consider marking dispatches for Failure Analysis https://confluence.eng.nutanix.com:8443/display/SPD/The+Dispatch+Process#TheDispatchProcess-RequestingFailureAnalysisonaPreviouslySubmittedDispatch by Hardware Engineering.
Disk Test - To determine whether the issue is with the disk or some other part on the I/O Path
1. Assuming one of the drives is already offline, use Prism to mark a 2nd drive offline that is of the same type and in the same NX model. NOTE: Make sure that both drives are logically removed (no longer have entries for the serial number in the disk_list section of zeus_config_printer; see the example below).
2. Swap the two drives between the slots and then mark them both online. NOTE: Prism option "Repartition and Add" should ONLY be used after confirming that both drives have been logically removed from the cluster.
3. Monitor for disk errors. If the errors follow the drive, then the disk itself is faulty. If the errors appear in the original slot, then the fault is with the Chassis, HBA, or Node. Move on to the next test to further isolate the problem.
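To confirm a drive has been logically removed, grep for its serial number in the Zeus configuration; no output means the serial is gone from the disk_list section (<disk_serial> is a placeholder):
nutanix@NTNX-CVM:~$ zeus_config_printer | grep -i "<disk_serial>"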
Node/Chassis Test - On multi-node platforms (2U4N, 2U2N)
1. See whether there is an empty slot available in this chassis or another nearby chassis of the same NX model.
2. Move the node and its disks to the unused slot.
3. Check to see if the errors mentioned in the "Problem Signatures" section are increasing. If the errors increase, then the node itself is the faulty part. If the errors no longer appear, then the disk slot is faulty and the chassis should be replaced.
HBA/Chassis Test - On single-node platforms (1U1N, 2U1N)
1. Using LSIutil, note the SAS address of the HBA associated with the phy that shows DWord and Disparity Errors.
a. Launch the LSIutil utility from the affected CVM.
nutanix@NTNX-CVM:~$ sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil
b. Each HBA card will be assigned an MPT port called an ioc#. In the example above, there are three separate LSI HBA cards in the node (NX-8150). Select the device for the IOC which shows the DWord and Disparity errors that you noted earlier.
c. You should now see a list of menu options. If you need to check the errors again, choose option "20. Diagnostics" and then "12. Display phy counters." If you already know which ioc# has the errors, choose option "16. Display attached devices" to show the SAS address of this HBA card. This will always be the first address listed. In the example below, the address for this HBA card is 500304801adaca01.
Select a device: [1-3 or 0 to quit] 1
Main menu, select an option: [1-99 or e/p/w or 0 to quit] 16
2. Dispatch a replacement HBA card with instructions for the FE to look for the sticker on the card with the SAS address you obtained earlier and have them replace that card. Also, verify that the SAS cables going from the HBA card to the disk backplane are properly seated in their connectors. If you need a SAS cabling diagram, ask Hardware Engineering via Slack in #hw.
3. Monitor for any further errors. If the errors continue to increment, replace the chassis.
SAS Bus Instability
In TH-8056 https://jira.nutanix.com/browse/TH-8056 we have noted a situation where a drive that had been replaced successfully was, several hours later (after a COMRESET), reassigned to a different SCSI Bus ID. A few hours later, this drive was added to the Tombstone list and logically failed despite having no obvious errors in the SAS chain or in the disk. Looking at the data, first we see the replacement drive is attached per dmesg and /var/log/messages in the CVM:
2022-01-24T12:13:12.098030-05:00 NTNX-17SM37280243-A-CVM kernel: [249023.032210] scsi 2:0:6:0: serial_number(PHYG022105S51P9DGN )
Some hours later, this device is disconnected and reconnected with a different SCSI bus reference:
2022-01-24T14:53:05.768027-05:00 NTNX-17SM37280243-A-CVM kernel: [258616.780343] sd 2:0:6:0: device_block, handle(0x0009)
Several hours later, the drive is added to the Tombstone list and logically removed from the cluster. There are minimal errors noted on the drive:
sudo smartctl -x /dev/sda
If this behavior is seen, please open a Tech Help to investigate this instance of the issue.
|
KB12771
|
LCM updates for Calm policy engine failing on PC with "Error while creating upgrade directory /home/nutanix/tmp/lcm_upgrade/xx"
|
This KB explains a condition of LCM failing to upgrade 'Calm Policy Engine' on PC.
|
LCM might fail to upgrade 'Calm Policy Engine' on Prism Central. All other LCM upgrades might be running fine.
Prism task may report
Update of Calm Policy Engine failed on 10.xx.xx.45 (environment pc) at stage 1 with error: [Error while creating upgrade directory /home/nutanix/tmp/lcm_upgrade/940]
lcm_ops.out will show that it failed with 'Error while creating upgrade directory' on a PCVM.
2022-01-07 00:35:55,417Z INFO helper.py:117 (10.xx.xx.45, update, 10e7798b-61d0-4db9-565b-feb0efe3a098) Attempting to load update method upgrade from from update module release.policy_engine.update
Check if all the PCVMs have the public ssh key registered with the Calm Policy Engine VM:
nutanix@PCVM:~$ allssh "ssh -i /home/nutanix/.ssh/id_rsa nutanix@<calm_policy_engine_vm_ip> uname -a"
As seen above, only 1 of 3 PCVMs has the public key registered to reach the Calm Policy Engine VM. Thus, when the upgrade was run from PCVM .45, the remote ssh command to create a directory would fail (return 255).
|
Manually add the ~/.ssh/id_rsa.pub key from the PCVMs that cannot reach the Calm Policy Engine VM (.44 and .45 in the above case) to the ~/.ssh/authorized_keys file of the Calm Policy Engine VM, for example:
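A minimal sketch of the manual key copy; <calm_policy_engine_vm_ip> is a placeholder, and the key must be taken from each affected PCVM:
nutanix@PCVM:~$ cat ~/.ssh/id_rsa.pub
Then append the printed key on the Calm Policy Engine VM:
nutanix@<calm_policy_engine_vm_ip>:~$ echo "<id_rsa.pub contents>" >> ~/.ssh/authorized_keys
Finally, re-run the ssh test from the description above to confirm all PCVMs can reach the VM.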
|
KB1017
|
How to find the system serial number on a Nutanix chassis
|
How to find the system serial number on a Nutanix chassis
|
If Nutanix Support requests the serial number of a chassis, there are many ways to find it: either through the command line or by locating a sticker on the physical chassis.
|
From the CVM (Controller VM)
SSH into any CVM in the cluster (default username is nutanix and default password is nutanix/4u) and perform any or all of the following:
The user prompt contains the block serial number. In the example below, "13SMXXXXX911" is the block serial number.
nutanix@NTNX-13SMXXXXX911-1-CVM$
Run the zeus_config_printer command below:
nutanix@NTNX-13SMXXXXX911-1-CVM$ zeus_config_printer | grep 'rackable_unit_serial'
The output should be similar to the following:
rackable_unit_serial: "13SMXXXXX911"
Run the ncli command below:
nutanix@NTNX-13SMXXXXX911-1-CVM$ ncli ru ls | grep 'Rack Serial'
The output should be similar to the following:
Rack Serial: 13SMXXXXX911
From the ESXi Host
Log in to the ESXi host and run the following command:
root@esxi# esxcli hardware platform get
The output should be similar to the following:
Platform Information
Look at the Serial Number field. In this example, the Serial Number is "13SMXXXXX911".
If this method provides a serial number of "1234567890" or "0123456789", you will have to obtain the serial number using the physical method (below).
From the AHV Host
Log into the AHV host and run the following command:
[root@NTNX-16SM6B490273-D ~]# /ipmicfg -tp info | grep "System S/N"
The output should be similar to the following:
System S/N : 16SM6B490273
In this example, the Serial Number is "16SM6B490273".
From the Physical Chassis
If you are facing the front of the box, the sticker will be located on the right side of the chassis, below the rack rail.
Figure 1. Sticker on physical chassis (for blocks shipped before September 1, 2012)
Figure 2. Sticker on physical chassis (for blocks shipped after September 1, 2012)
Figure 3. Possible sticker locations on chassis
|
KB16314
|
Genesis in crash loop due to missing symlink
|
Genesis service will be in crash loop due to missing symlink
|
The Genesis service is in a crash loop, due to which the CVM is reported as Down.
nutanix@CVM:~$ cs | grep -v UP
nutanix@CVM:~$ allssh genesis status
Genesis.out reports below traceback.
2024-02-27 07:03:53,046Z INFO 58565680 zookeeper_service.py:209 Last known zookeeper ensemble is, {'172.16.100.13': 1, '172.16.100.14': 3, '172.16.100.15': 2}
The list_disks command also fails to execute with same error.
nutanix@CVM:~$ list_disks
Symlink is missing on affected CVM.
nutanix@CVM:~$ allssh file /usr/sbin/nvme
|
Manually create the symlink to resolve this issue.
nutanix@CVM:~$ sudo su
nutanix@CVM:~$ allssh ls -al /usr/sbin/nvme
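A minimal sketch of recreating the symlink on the affected CVM; <target_path> is a placeholder for the link target shown by the ls output on a healthy CVM:
nutanix@CVM:~$ sudo ln -s <target_path> /usr/sbin/nvme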
Restart Hades and Genesis.
nutanix@CVM:~$ sudo /usr/local/nutanix/bootstrap/bin/hades restart
CVM comes UP.
nutanix@CVM:~$ cs | grep -v UP
|
KB14617
|
NDB - Scale Database UI for MSSQL DB Log Area "xx GB used of allocated xx GB" does not change after the size is extended
|
A false-positive alert resulting from a cosmetic issue with NDB UI.
|
When the database log volume size is successfully extended with the NDB Scale button, the change is not reflected on the NDB Database page. The NDB Alerts page still generates the warning message if the log volume was almost full before extending. Here is an example:
The log volume size was 96 GiB before scaling (scsi.9):
After extending it by 12 GiB, the DB scaling operation completed successfully and the log volume size was updated to 107 GB.
When clicking the Scale button again, the "Expand Log Area by" still shows "20 GB used of allocated 92 GB", while we expect it to be "20 GB used of allocated 104 GB".
The "Data Area" amount is correct. Here is an example after the Data Area was extended from 200 GiB to 400 GiB:
Scale 10 GB to the Data Area. After the operation completed successfully, it shows "200 GB used of allocated 408 GB", as expected.
Even though the log volume size is extended successfully, if the log volume was almost full before extending, the NDB Alerts page still generates this kind of Critical message:
The database [<DB Name>] has less than 10% of free space on the log disks.
|
This is an NDB UI issue. The DB log volume is successfully extended. Therefore, the DB log space warning is a false positive. The issue is resolved in NDB 2.5.2. Upgrade NDB to the latest release and re-try the Scale operation.
Workaround: Browse to the NDB Alerts page > Policies > Storage Space Critical > Set Severity to Info.
|
KB14908
|
Anduril service goes into a crash loop due to presence of non-ASCII characters in the name of the VM
|
This KB discusses a bug where the Anduril service goes into a crash loop due to the presence of non-ASCII characters in the name of the VM.
|
If the VM name contains non-ASCII characters, this may cause the Anduril service to go into a crash loop.
The following stack trace can be found in /home/nutanix/data/logs/anduril.out on Prism Element:
2023-04-28 03:43:21,834Z CRITICAL decorators.py:47 Traceback (most recent call last):
As a result of the Anduril crash, all VM operations serialized through Anduril end up in a queued state:
nutanix@cvm:~$ ecli task.list include_completed=false
To identify which and how many UVMs have non-ASCII characters in their names, run the following command:
nutanix@A-CVM:x.x.x.x:~$ acli vm.get "*" | grep -P "[\x80-\xFF]"
|
This issue is resolved in:
AOS 6.7.X family (STS): AOS 6.7
Upgrade AOS to the versions specified above or newer.
Open an ONCALL to engage DevEx. Do not cancel tasks without confirmation from DevEx/Engineering.
|
KB2484
|
NCC Health Check: host_pingable_check
|
The NCC health check host_pingable_check verifies that each of the nodes/hosts in the Nutanix cluster are pingable by their respective Controller VMs on their eth0 interface (External IP).
|
The NCC health check host_pingable_check verifies that each of the nodes or hosts in the Nutanix cluster are pingable by their respective Controller VMs (CVMs) on their eth0 interface (External IP).
If all the pings are successful, the NCC check passes.
Otherwise, the check lists the unreachable host IP addresses. The Prism web console raises an alert if a host is not pingable.
This check was introduced in NCC 2.0.
Running the NCC Check
Run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or run this check separately:
nutanix@cvm$ ncc health_checks network_checks host_pingable_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 5 minutes, by default.
This check will generate an alert after 6 consecutive failures across scheduled intervals.
Sample output
For status: PASS
Running /health_checks/network_checks/host_pingable_check [ PASS ]
For status: WARN
Node x.x.x.x:
Output messaging
Description: Check that all host ips are pingable from local SVM.
Causes of failure: The hypervisor host is down or there is a network connectivity issue.
Resolutions: Ensure that the hypervisor host is running and that physical networking, VLANs, and virtual switches are configured correctly.
Impact: Cluster compute and storage capacity are reduced. Until data stored on this host is replicated to other hosts in the cluster, the cluster has one less copy of guest VM data.
Alert Title: Host IP Not Reachable
Alert Message: Hypervisor target_ip is not reachable from Controller VM source_ip in the last 6 attempts.
|
If the check reports a FAIL/WARN status and displays a host IP address, ensure that:
The hypervisor host is running.
The physical networking, VLANs, and virtual switches are configured correctly.
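To manually verify reachability from the reporting CVM, a simple ping can be used (<host_ip> is the address from the check output):
nutanix@cvm$ ping -c 3 <host_ip>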
If the configuration is correct and the check continues to raise an alert, consider engaging Nutanix Support at portal.nutanix.com https://portal.nutanix.com.
|
KB5195
|
Hyper-V: Failed to reach a node where Genesis is up. Retrying...
|
Hyper-V: Genesis failing to start
|
In a Hyper-V environment, Genesis fails to start.
CVM: /home/nutanix/data/logs/genesis.out
2018-01-30 09:27:09 INFO hyperv.py:293 Updating NutanixUtils path
On the Hyper-V host, the Nutanix host agent will not start. Attempting to start the service returns an error that the service did not respond in a timely fashion.
|
Reboot the Hyper-V host.
|
KB14454
|
Aplos becomes unresponsive on Prism Central due to unreachable Prism Element clusters
|
Aplos becomes unresponsive on Prism Central due to unreachable Prism Element clusters, generating high number of socket connections. Upgrade the PC to the latest supported version.
|
While trying to connect to an unreachable Prism Element, the Aplos service on Prism Central will create an increasing number of open sockets. Once the open socket count gets high enough, Aplos fails to process API calls. After the connection count reaches a threshold of 700, the Aplos service becomes completely unresponsive. Symptoms:
Aplos on Prism Central is not accepting any new connections from Mercury. As a result, the Prism Central login fails with "Server is unreachable" or "Server is Not Reachable". When logging into Prism Central, you may be unable to generate reports as the page fails with "Spec can't be empty."
When one of the connected PE clusters is unreachable (due to the cluster being powered off or unreachable over the network), Aplos creates a high number of socket connections that are not getting closed fast enough. Firstly, check if there are any stale remote connections in nuclei:
nutanix@NTNX-X-X-X-X-A-PCVM:~$ nuclei remote_connection.health_check_all
If all connected PE instances are up, but some are unreachable due to networking issues, restore network connectivity.
Verify if there are many Aplos connections by running the following command(s) from the Aplos leader PCVM.
Identify the Aplos PID:
nutanix@NTNX-X-X-X-X-A-PCVM:~$ ps -ef|grep aplos | grep python
Identify the number of socket connections (replace 106851 with the PID identified above):
nutanix@NTNX-X-X-X-X-A-PCVM:~$ sudo lsof -p 106851| egrep "TCP|UDP" | wc -l
If you see many socket connections open on Aplos PID, proceed to the solution. It is possible to experience login issues with just over 300 open connections.
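To check whether the socket count is still growing, the count can be sampled periodically. A minimal sketch (replace <aplos_pid> with the PID identified above):
nutanix@NTNX-X-X-X-X-A-PCVM:~$ for i in 1 2 3; do date; sudo lsof -p <aplos_pid> | egrep "TCP|UDP" | wc -l; sleep 60; done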
|
The issue is fixed in Prism Central version pc.2022.6.0.9 and higher. Upgrade the PC to the latest supported version.
|
KB9085
|
[Objects 2.0] Error creating the object store at the deploy step.
| null |
In the recent Objects 2.0 release, step 3 of 'Create Object Store' may fail with "Error creating the object store".
The browser inspect/developer mode shows the API call: https://<prism-central>:9440/oss/api/nutanix/v3/objectstores
Response: 502 Bad Gateway
Another signature for this issue seen in the aoss_service_manager log file is a panic string:
(SSH to the PC VM; the log file is located under ~/data/logs/aoss_service_manager.out)
2020/03/17 13:13:23 http: panic serving 127.0.0.1:58284: runtime error: invalid memory address or nil pointer dereference
|
The Nutanix Objects engineering team is aware of this issue and is working on improving the error message in an upcoming release. This issue is seen when the Internal Access Network and/or Client Access Network do not follow the prerequisites, such as:
Both of the networks must have IPAM configured (AHV managed network) with a free IP pool range covering the required number of IPs specified here https://support-portal.nutanix.com/page/documents/details/?targetId=Objects-v20:v20-network-configurations-r.html. The static IPs configured for each network type must be in the same subnet as the selected network, yet need to be outside the IP pool range defined for that managed network/IPAM.
After correcting the above conditions, start the creation wizard again to create an Object store. If the issue persists, contact Nutanix Support for further assistance.
|
KB16935
|
MSP controller upgrade fails with the error: "error contacting notary server: x509: certificate signed by unknown authority"
|
MSP controller upgrades may fail when running with self signed certificate
|
Customers running MSP upgrades may notice an MSP upgrade failure with the following error while trying to pull the docker image, as seen in the msp_controller.out file.
Error: exit status 1. Output: Error: error contacting notary server: x509: certificate signed by unknown authority
This can also be confirmed with the output of mspctl controller info from Prism Central.
nutanix@NTNX-10-x-x-x-A-PCVM:~$ mspctl controller info
When trying the manual docker pull from the Prism Central, the image downloads successfully.
nutanix@NTNX-10-x-x-x-A-PCVM:~$ docker pull docker.io/nutanix/iam-alerts-broker:311 --disable-content-trust=false
The docker pull fails if done from the msp-controller container, with missing CA certificate errors like below.
[root@ntnx-10-x-x-x-a-pcvm /]# docker pull docker.io/nutanix/iam-user-authn:8657 --disable-content-trust=false
The msp-controller container, as seen in the output below, is not mounting the PC /etc/pki/ directory, where the customer certificate is present.
From the msp-controller container:
[root@ntnx-10-x-x-x-a-pcvm /]# ls -la /etc/pki
On Prism Central, the contents of the directory /etc/pki are different as noticed below:
nutanix@NTNX-x-x-x-x-A-PCVM:~$ ls -la /etc/pki
|
The solution here is to configure msp-controller to bind its /etc/pki directory with /etc/pki from Prism Central. It is a non-disruptive action but needs an msp-controller restart.
1. Take a backup of the msp-controller configuration file.
nutanix@NTNX-10-x-x-x-x-PCVM:~$ allssh "cp /home/docker/msp_controller/bootstrap/msp-controller-config.json ~/tmp/msp-controller-config.json.bak"
2. Edit the msp-controller-config.json to bind its /etc/pki directory with the /etc/pki directory from Prism Central.
nutanix@NTNX-10-x-x-x-x-PCVM:~$ allssh "sed -i '/^\s*\"binds\".*/a \ \ \"/etc/pki:/etc/pki\",' /home/docker/msp_controller/bootstrap/msp-controller-config.json"
3. Confirm that the bind was configured by verifying the modified msp-controller-config.json file.
nutanix@NTNX-10-18-17-3-A-PCVM:~$ grep -A3 host_config /home/docker/msp_controller/bootstrap/msp-controller-config.json
4. Restart msp-controller on all the PC VMs.
nutanix@NTNX-10-x-x-x-A-PCVM:~$ allssh genesis stop msp-controller; cluster start
5. docker pull from the msp-controller container should succeed now.
[root@ntnx-10-x-x-x-a-pcvm /]# docker pull docker.io/nutanix/iam-user-authn:8657 --disable-content-trust=false
Once the msp-controller container can pull images as shown above, the MSP controller upgrade can be reattempted.
|
KB4658
|
ACS: No permission to access the resource. Container XXXX does not have access to use volume YYYY
| null |
Getting the below error while creating a container through SSP portal:
No permission to access the resource. Container XXXX does not have access to use volume YYYY
|
This generally happens when the VG specified during container deployment was not created through the SSP portal page.
By design, only existing VGs that were created through the SSP portal are supported when deploying a container.
|
KB9430
|
Alert - A130172 Nutanix DR(Formerly Leap) - Degraded VM Recovery Point
|
This Nutanix article provides the information required for investigating the alert: "Degraded VM Recovery Point" for your Nutanix cluster.
|
Alert Overview
The alert A130172: Recovery Point for VM failed to capture associated policies and categories because {reason} is generated when the Network name or information is not available for the VM, or the Management plane is not available to get the configuration.
Sample Alert
Block Serial Number: 20SMXXXXXXXX
Output messaging
Description: Degraded VM recovery Point
Causes of failure: Network name is not available for the VM. Network information is not available for the VM. Management plane is not available to get the configuration.
Resolutions: Check the connection between PC and the PE.
Impact: VM will be restored without associated policies and categories.
Alert ID: A130172
Alert Title: Degraded VM Recovery Point
Alert Message: Recovery Point for VM {vm_name} failed to capture associated policies and categories because {reason}.
|
Solution for #1 and #2: Network name or network information is not available for the VM
The issue can occur when the VM reported in the alert is part of a network that is not available on all the ESXi hosts. From the VM settings, check the name of the network the VM is part of. Then check which networks Uhura has detected as common to all hosts by running the following command.
nutanix@cvm$ curl -k -s -u admin https://127.0.0.1:9440/api/nutanix/v2.0/networks | python -m json.tool | grep name
When prompted, enter the password for the admin user account. The list returned will give the names of all the networks that are available on all the ESXi hosts in the cluster. If the reported VM's network name is not available in the list, check the following:
If the VM's network is part of a distributed switch port group, ensure all the ESXi hosts in the PE (Prism Element) cluster are added to the DvSwitch and the networks are visible on them from vCenter.If the VM is part of a standard portgroup, ensure all hosts have the port group configured on the standard vSwitch.If none of the above, check if Uhura is reporting any failures in collecting information from the ESXi host. Check Uhura logs (/home/nutanix/data/logs/uhura.*) on all the CVMs (Controller VMs) for any failure stack traces.
Solution for #3: Management plane is not available to get the configuration
This is usually a false positive alert. Reset the connection between PE and PC (Prism Central) if the connectivity health check is OK. Run the following commands from the PE CVM:
nutanix@cvm$ nuclei remote_connection.health_check_all
nutanix@cvm$ nuclei remote_connection.reset_pe_pc_remoteconnection
If you need further assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com/. Collect additional information and attach them to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA032000000986SCAQ.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA032000000986SCAQ.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LM3BCAW.
nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.
If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB4138
|
Alert - A1012 - SystemTemperatureHigh
|
Alert generated for High System Temperature on a Nutanix Cluster.
|
Note: You can review the specific clusters affected by this alert via the discoveries on the Support Portal powered by Nutanix Insights here https://portal.nutanix.com/page/wellness/discoveries/list
Overview
This Nutanix article provides the information required for troubleshooting the alert SystemTemperatureHigh for your Nutanix cluster. For an overview of alerts, including who is contacted and where parts are sent when an alert case is raised, see KB 1959 http://portal.nutanix.com/kb/1959.
IMPORTANT: Keep your contact details and Parts Shipment Address for your nodes in the Nutanix Support Portal up-to-date for timely dispatch of parts and prompt resolution of hardware issues. Out-of-date addresses may result in parts being sent to the wrong address/person, which results in delays in receiving the parts and getting your system back to a healthy state.
Alert overview
The SystemTemperatureHigh alert is generated if the temperature exceeds any of the recommended thresholds.
FAIL: Sensor 'cpu 2' with value '87.000' is not within threshold
Sample alert
Following is a complete example of one event.
Block Serial Number:BYxxxxx
Output messaging
Schedule: This check is scheduled to run every 5 minutes, by default.
Description: Check that system temperature is not high
Causes of failure: The device is overheating to the point of imminent failure.
Resolutions: Ensure that the fans in the block are functioning properly and that the environment is cool enough.
Impact: The device may fail or be permanently damaged.
Alert ID: A1012
Alert Title: System Temperature High
Alert Message: System temperature exceeded temperatureC on Controller VM ip_address
|
Troubleshooting
Run the IPMI NCC Health Check using the following command from any CVM.
nutanix@cvm$ ncc health_checks hardware_checks ipmi_checks ipmi_sensor_threshold_check
For more details on the ipmi_sensor_threshold_check command and output, refer to KB 1524 https://portal.nutanix.com/kb/1524.
The NCC check reports a FAIL status in the tests if any of the hardware components in the Nutanix Cluster have exceeded the recommended temperature threshold.
+---------------+
Resolving the issue
Ensure that the fans in the block are functioning properly.
Ensure that the Data Center or Server Room environment is cool enough.
The operating environment for all Nutanix hardware should be within the thresholds. To see the threshold for your hardware, see https://www.nutanix.com/products/hardware-platforms/ https://www.nutanix.com/products/hardware-platforms/.
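To spot-check temperature and fan readings directly from the host, ipmitool can be used, assuming it is available on the hypervisor (a sketch; output format varies by platform):
root@host# ipmitool sensor list | grep -i -E "temp|fan"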
If you are able to resolve the issue using the instructions, update the support case and Nutanix Support will close the case.
Collecting additional information
Upgrade NCC. For information about upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. Refer to KB 2871 http://portal.nutanix.com/kb/2871 for details on running NCC and collecting this file.
Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:
nutanix@cvm$ ncc log_collector run_all
Attaching to the case
Attach the files at the bottom of the support case page on the support portal. If the size of the NCC log bundle being uploaded exceeds 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
Requesting assistance
If you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also click on the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email.
|
KB16194
|
DRaaS - Can't access internet when a floating static IP is assigned to a VM interface hosted in DRaaS cloud
|
Internet access via its floating static IP address is not working for a VM hosted in DRaaS cloud.
|
A VM hosted in DRaaS cloud cannot access the internet through its interface when a floating static IP address is configured on the interface. Without a floating static IP (FIP), internet access works via the tenant SNAT IP.
|
1. On the tenant Prism Central, check the floating IP assignment status of the IP/VM interface in question.
pcvm$nuclei floating_ip.get "floating IP UUID" --->> get the UUID from "nuclei floating_ip.list"
2. Check the atlas.out logs on the Atlas service leader PCVM for any assignment errors: data/logs/atlas.out
3. Check the routing policy in the Atlas CLI of the PCVM cluster and make sure there is a 2-way policy from the VM physical IP or subnet to external networks.
Example: pcvm$ atlas_cli routing_policy.list networks=Production
UUID Name VPC name Collection UUID Priority Source type Destinati ranges ICMP Type ICMP Code Protocol Action Service IP Counter value Bidirectional
4. Perform a packet capture on the tap interface of the VM on the AHV host and observe the behavior while running a continuous ping to a public IP (8.8.8.8). If you see both the request and reply with no floating static IP assigned, but the reply fails when a floating static IP (FIP) is assigned to the interface as below, proceed with the action plan. SNIP
[root@iad04-0003-host12a ~]# tcpdump -i tap2 host 172.16.3.75
5. On DRaaS, create a new VM and associate the Floating IP to the new VM. Check if the issue exists on the new VM. (Note: If the Floating IP is already associated with a VM with an issue, for testing you can dissociate it and associate it with the test VM. Refer to KB-15597 https://nutanix.my.salesforce.com/kA07V0000010xx1.)
Action Plan: There is an issue with stale port assignments from deleted VPCs. Engage the DRaaS ONCALL for further validation and help.
|
KB1709
|
NCC Health Check: hostname_resolution_check
|
NCC 1.2. The NCC health check hostname_resolution_check checks for the defined name servers assigned by Nutanix customers.
|
The NCC health check hostname_resolution_check checks for the defined name servers assigned by Nutanix customers.
The name servers are used when a specified hostname needs to be resolved, for example, when trying to resolve Nutanix service centers or other Nutanix hosts.
This check verifies if the following can be resolved.
Hypervisor host FQDN to IP and IP to FQDN (reverse lookup)NTP server FQDNSMTP server FQDNNutanix Service Center Servers (nsc01.nutanix.net and nsc02.nutanix.net)
This ensures that the reverse lookup works.
Running NCC Check
You can run this check as a part of the complete NCC health checks.
nutanix@cvm$ ncc health_checks run_all
Or you can run this check individually.
nutanix@cvm$ ncc health_checks system_checks hostname_resolution_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample Output
Check Status: PASS
Running : health_checks system_checks hostname_resolution_check
Check Status: WARN
Running : health_checks system_checks hostname_resolution_check
Running : health_checks system_checks hostname_resolution_check
This check also ensures that the Controller VMs (CVMs) can resolve the Management IP address of the hypervisor hosts using the configured DNS servers, which is used when data protection or disaster recovery capability is implemented. If the check cannot resolve all the service centers, it returns a WARN status.
Starting with NCC 4.0, the check also raises an INFO message when the Alerts email is not configured, or Pulse is disabled and the SMTP server is not configured.
Check Status: INFO
Running : health_checks system_checks hostname_resolution_check
Output messaging
Description: Check FQDN resolution of host IPs
Causes of failure: Unable to reach name server or name server doesn't have correct entry.
Resolutions: Check if hostname to IP and IP to hostname resolution is working. If it is in a domain environment, please add the PTR record from the DNS server side to solve the problem.
Impact: Unable to resolve "host FQDN to IP" or "IP to host FQDN".
Schedule: This check is scheduled to run every day, by default. This check does not generate an alert.

Check ID: 103072
Description: Check NSC server FQDN resolution
Causes of failure: Unable to reach name server or name server doesn't have correct entry.
Resolutions: Check if cluster has Internet access and correct name server setting exists.
Impact: Unable to resolve NSC (Nutanix Service Center) server FQDN.
Schedule: This check is scheduled to run every day, by default. This check does not generate an alert.

Check ID: 103070
Description: Check NTP Server FQDN resolution
Causes of failure: Unable to reach name server or name server doesn't have correct entry.
Resolutions: Check if NTP server name resolution is working.
Impact: Unable to resolve NTP server FQDN.
Schedule: This check is scheduled to run every day, by default. This check does not generate an alert.

Check ID: 103071
Description: Check SMTP server FQDN resolution
Causes of failure: Unable to reach name server or name server doesn't have correct entry.
Resolutions: Check if SMTP server name resolution is working.
Impact: Unable to resolve SMTP server FQDN.
Schedule: This check is scheduled to run every day, by default. This check does not generate an alert.
|
Ensure that the DNS servers used are properly configured and can resolve all the hosts. For more information, review the NCC log file available at the following location.
/home/nutanix/data/logs/ncc-output-latest.log
Troubleshooting
It is currently possible for NCC to report a warning for this check when the DNS servers resolve only internal addresses, because a portion of this check is to resolve the Nutanix Service Center servers - nsc01.nutanix.net and nsc02.nutanix.net.
Customers with network segmentation enabled may see output messaging results identifying an issue for this check such as:
Node X.X.X.X:
In this example, Y.Y.Y.Y is the Backplane LAN network, a non-routable subnet used by the cluster as part of the network segmentation feature. This network is a separate subnet used in addition to the Management LAN network X.X.X.X where the CVM and host IPs reside. Check results identifying Backplane LAN host IPs can be safely ignored. This issue has been fixed in AOS 5.11, NCC 3.7.0.1, and NCC 3.7.1.
If you do not want to resolve the addresses on the Internet, then ignore this warning message. However, you need to have an internal SMTP server defined if you need any Alert or Pulse emails to be sent to Nutanix.
Use dig or nslookup to confirm that DNS can resolve the IP addresses to hostnames of the hosts, NTP servers, and SMTP servers.
Example
nutanix@cvm$ allssh dig +noall +answer -x <host, NTP or SMTP IP address>
Or
nutanix@cvm$ allssh nslookup <host, NTP or SMTP IP address>
In circumstances where the check returns the following ERR status:
Running : health_checks system_checks hostname_resolution_check
Ensure that the corresponding PTR record is configured for the complete FQDN rather than just the hostname.
|
KB11255
|
Nutanix Kubernetes Engine - Node in NotReady state due to a Docker issue
|
Nutanix Kubernetes Engine worker node becomes in NotReady state due to Docker could not work properly. KRBN-3894 was opened for this issue and more logs are needed for RCA.
|
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.
We have seen a docker issue that prevents kubelet-worker from running, causing the worker node to go into a NotReady state. The docker version is:
[nutanix@karbon-worker ~]$ docker version
The following symptoms are seen:1. Kubelet-worker service is in a crash loop
[nutanix@karbon-worker ~]$ systemctl status kubelet-worker
2. Check docker logs ($ sudo journalctl -u docker), it constantly shows the error similar to the following:
Error response from daemon: failed to create endpoint xx on network bridge: network 4a9c4a45f766a56b1e840602e3d112570ae46955534eca132b7e116b37cfb947 does not exist
3. Confirm the network mentioned in the docker error messages exists.
[nutanix@karbon-k8s-worker ~]$ sudo docker network list
[nutanix@karbon-k8s-worker ~]$ sudo docker network inspect 4a9c4a45f766
[nutanix@karbon-k8s-worker ~]$ ifconfig docker0
|
There is no known permanent solution. The current workaround is to restart docker until the service stabilizes; see the sketch after the log-collection command below. Reach out to a Karbon SME if you need assistance.
KRBN-3894 was opened for this issue and additional logs are needed for RCA. If you are hitting the same issue, please collect logs and attach the case to KRBN-3894 https://jira.nutanix.com/browse/KRBN-3894.
Collect the systemd logs and all logs under the /var/log directory on the worker node. To collect systemd logs, run the following, replacing "YYYY-MM-DD HH:MM:SS" with the start date and time log collection should cover:
[nutanix@karbon-k8s-worker ~]$ sudo journalctl --since "YYYY-MM-DD HH:MM:SS" > /tmp/systemd_log.txt
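To apply the interim workaround of restarting docker mentioned above, the standard systemd commands can be used (a sketch):
[nutanix@karbon-k8s-worker ~]$ sudo systemctl restart docker
[nutanix@karbon-k8s-worker ~]$ sudo systemctl status docker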
If dockerd debug is not enabled, enable dockerd debug mode on the problematic worker node as follows:1. Backup /etc/docker/daemon.json.
[nutanix@karbon-k8s-worker ~]$ sudo cp /etc/docker/daemon.json /etc/docker/daemon_backup.json
2. Add "debug": true, in /etc/docker/daemon.json. The configuration file should look like below:
{
3. Restart docker on worker node.
[nutanix@karbon-k8s-worker ~]$ sudo systemctl daemon-reload
4. Check docker logs if the debug mode was set successfully. Notice it has level=debug in the logs.
[nutanix@karbon-k8s-worker ~]$ sudo grep dockerd /var/log/messages | tail
5. If the issue occurs again after restarting docker. Collect the systemd logs and all logs under /var/log directory on the worker node. To collect systemd logs, run the following, replacing "YYYY-MM-DD HH:MM:SS" with the start date and time log collection should cover:
[nutanix@karbon-k8s-worker ~]$ sudo journalctl --since "YYYY-MM-DD HH:MM:SS" > /tmp/systemd_log.txt
To disable dockerd debug mode:1. Move the backup daemon.json file back.
[nutanix@karbon-k8s-worker ~]$ sudo mv /etc/docker/daemon_backup.json /etc/docker/daemon.json
2. Reload and restart docker.
[nutanix@karbon-k8s-worker ~]$ sudo systemctl daemon-reload
|
KB13662
|
Alert - Nutanix Cloud Clusters (NC2) on AWS - Cluster Resuming Process has Failed/Stalled
|
This article captures the Alerts generated by the NC2 console notifications center during the Resuming process if any issues are found.
|
Starting with AOS 6.0.1, NC2 on AWS offers the capability to hibernate/resume the cluster to/from an AWS S3 bucket for AWS cost savings if the cluster is only needed during a specific period. This article focuses on the resuming process. See KB-13661 https://portal.nutanix.com/kbs/13661 for hibernation.
During the resuming stage, the NC2 orchestrator provisions the underlying infrastructure (bare-metal), cluster services are started, and the cluster is restored to its original state at hibernation.
Note: User VMs must be started manually, as well as Prism Central and Nutanix Files if hosted on the cluster.
Once the Resume operation is triggered from the NC2 console, the NC2 orchestrator goes through a series of processes; if any issues are detected, the NC2 console will display any of the following notifications:
The cluster resuming status is stalled. Multiple states can trigger this alert; however, most are related to preparing the cluster to restore the metadata, migrating the extent-store data from cloud disks, etc. The resuming process has not entirely failed during this stage, but it is in a transient waiting state.
The cluster resuming status has failed. This is raised if the cluster encounters issues from which it cannot automatically recover to continue its progress, for example:
Error encountered while trying to start services
Failed resume prechecks (test_no_cvm_down)
Connection to Amazon S3
|
The cluster resuming status is stalled
Review the NC2 console notifications page for more details on the failure. Depending on the status of the failure and the complexity of the issue, Nutanix Support intervention may be necessary.
The cluster resuming status has failed
Review the NC2 console notifications page, as the alert indicates the stage where the process has failed. If the cluster resuming process encountered this alert, contact Nutanix Support http://portal.nutanix.com for assistance.
|
KB11440
|
Nutanix Files - Handling a Ransomware attack
|
This KB describes the steps an SRE needs to perform to reactively protect the Nutanix Files System by attempting to restore from SSR snapshots. If no such snapshots exist, then the scenario is beyond the scope of this KB.
|
This KB describes the steps an SRE needs to perform to reactively protect the Nutanix Files system and help a customer restore data.
If there are any doubts, please involve any of the Nutanix Files virtual team members.
|
Please refer to the confluence page for the Support Process on Ransomware cases: https://confluence.eng.nutanix.com:8443/display/SPD/Support+Process-+Malicious+software+situations
1. Remove the Self-Service Restore (SSR) policy.
a.Using UI:
Browse to the SSR Policy via Prism > File Server > Select File Server > Protect button. Delete the various snapshot schedules here using the X.
b. Command-line method:
ncli fs ls (note the FS UUID)
Example:
<ncli> fs ls
List the existing policies:
<ncli> fs list-snapshot-policies file-server-uuid=%FS_UUID%
Example:
<ncli> fs list-snapshot-policies file-server-uuid=62285f9d-b8e3-4ba9-b6a0-6adea0f3eea5
Delete the existing policies:
ncli fs delete-snapshot-policy file-server-uuid= %FS_UUID% uuid=%POLICY_UUID%
Example:
<ncli> fs delete-snapshot-policy file-server-uuid=62285f9d-b8e3-4ba9-b6a0-6adea0f3eea5 uuid=4ebf4b4f-7c12-4a43-9090-c44d02ac78f0
List the policies to check (you should have none)
<ncli> fs list-snapshot-policies file-server-uuid= %FS_UUID%
On each FSVM, suspend the auto snapshots by modifying the following file and commenting out the entries:
nutanix@FSVM:~$ sudo vi /var/spool/cron/root
Verify that the file on all the FSVMs has the same entry:
nutanix@FSVM:~$ sudo cat /var/spool/cron/root
After suspending the snapshots, change all the shares to Read Only (RO) mode (Ref: KB-9880 http://portal.nutanix.com/kb/9880)
Run the following commands to make all the shares read-only
nutanix@FSVM:~$ for i in `afs share.list | grep -B4 "Protocol type: SMB" | grep "Share name:"|awk '{$1=$2=""; print $0}'`;do afs smb.set_conf "read only" "Yes" section=$i;done
This step is required to prevent any further damage to the data. It is recommended to apply this step especially in the early hours of the attack, when the customer is still working on identifying where the attack is coming from. Once it is safe to make the shares writable again, the shares can be reverted as shown below.
Assist the customer in identifying which SSR snapshot can be used. If no SSR snapshot is available, check if there is any PD that can be used.
Remember, the customer has to decide which data set can be restored or not; we do not decide for them.
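A sketch of the revert, mirroring the command above with the value flipped from "Yes" to "No":
nutanix@FSVM:~$ for i in `afs share.list | grep -B4 "Protocol type: SMB" | grep "Share name:"|awk '{$1=$2=""; print $0}'`;do afs smb.set_conf "read only" "No" section=$i;done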
2. Restoring Data:
If there are any SSR Snapshots present on the system prior to the Ransomware attack we can attempt to restore the data.
Restore to the same share:
Creates double entries for each file, since the ransomware attack will change the file extensions.
Requires the manual intervention of the user to clean up the damaged files by hand.
Consumes twice the space.
Can be done by simply clicking on the restore button from Windows Explorer or by drag & drop.
Data copy takes a long time.
Restore to a new share:
Creates a new share and restores the data from the snapshot to it.
Eliminates the cleanup phase, and it is very transparent for the end-users (as we restore to a temp share, hide the old share by adding $, and rename the temp share to the old share name).
Robocopy can be used on multiple clients (similar to the migration process) to speed up data restoration.
Example of command: (Ref: SMB MIGRATION EXAMPLE: ROBOCOPY https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2016-Nutanix-Files-Migration-Guide:smb-migration-example-robocopy.html)
robocopy \\FSNAME\Sharename\Foldername\@GMT-2021.04.17-00.00.00\Foldername
The @GMT is a Timewarp Token that is actually used in SMB2 protocol to identify the snapshot.
The above command can be expanded in multiple ways, and running it from multiple clients with the /MT flag will speed up the restore process; a fuller hypothetical example is shown below.
NOTE: If you have Files 4.1, you can take advantage of the migration tool built into this version.
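A hypothetical expansion of the robocopy command; share, folder, and destination names are placeholders. /E copies subdirectories (including empty ones), /COPYALL preserves all file info, and /MT:32 enables 32 copy threads:
robocopy \\FSNAME\Sharename\Foldername\@GMT-2021.04.17-00.00.00\Foldername \\FSNAME\TempShare\Foldername /E /COPYALL /MT:32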
|
KB10946
|
Identifying the source IP generating TCP Reset packets in a network path
|
In a pair of Nutanix clusters exchanging replication data/traffic there can be scenarios wherein a firewall for example in the network path is interfering with the replication traffic by sending TCP Reset packets. This KB has steps on how to identify the source IP generating TCP Reset packets in a network path.
|
In a pair of Nutanix clusters exchanging replication data/traffic, there can be scenarios where, for example, a firewall in the network path interferes with the replication traffic by sending TCP Reset packets. This KB has steps on how to identify the source IP generating TCP Reset packets in a network path. The following steps rely on the TTL value in the TCP Reset packets to determine the source.
Time-to-live (TTL) is a value in a packet that tells a router whether a packet has been in the network for too long and should be discarded. TTL is set by the source sending the packet. Each time the packet arrives at a router (a hop), the value is reduced by one before it is routed onwards.
For more, read this blog https://packetpushers.net/ip-time-to-live-and-hop-limit-basics/ and RFC-791 https://tools.ietf.org/html/rfc791
Every Operating System and network device has a default TTL value. The default TTL values can be changed to a non-default value for security purposes. The default TTL value of Nutanix CVMs is "64". The default value can be checked by running "cat /proc/sys/net/ipv4/ip_default_ttl" as shown below:
$ cat /proc/sys/net/ipv4/ip_default_ttl
NOTE: It is strongly recommended to NOT change the TTL value of Nutanix CVM.
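For a quick illustration of how TTL decrements across hops, a plain ping shows the TTL of the received reply (a sketch; <remote_cvm_ip> is a placeholder). A reply TTL of 53 from a peer whose default is 64 implies 11 hops:
nutanix@CVM:~$ ping -c 1 <remote_cvm_ip>
64 bytes from <remote_cvm_ip>: icmp_seq=1 ttl=53 time=1.2 ms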
|
Now, for example, if there are 11 hops between 2 Nutanix clusters (read: between 2 Nutanix CVMs), the source packet's TTL will be set to 64.
Source CVM <-> R1 <-> R2 <-> R3 <-> R4 <-> R5 <-> R6 <-> R7 <-> R8 <-> R9 <-> R10 <-> R11 <-> Target CVM
The source CVM is sending a packet with TTL 64. The target CVM receives the packet with a TTL value of 53. This value is obtained by subtracting 11 from 64, where 11 is the number of hops/routers. This essentially confirms that a packet sent from the source CVM needs 11 hops to reach the target CVM.
When identifying "who" is sending a reset packet in a network path:
1. Firstly, a packet capture on both the source CVM and the target CVM needs to be done.
2. Using the Wireshark filter "tcp.flags.reset==1", find a few sample connections involving TCP Reset packets. In these you will find the source port number to be 2009, because the stargate service does the actual data replication, and the target port to be a random dynamic port.
3. Using some of these target random dynamic ports, find a complete life cycle of the TCP connection, i.e. right from SYN, SYN ACK and ACK till TCP RST. A complete lifecycle of a TCP connection will show the difference in TTL values. The example used below for explanation is for source port# 33492.
cvm_43_source_tcp.port.33492.png
cvm_88_target_tcp.port.2009.png
4. As seen in the captures, both the CVMs - source .43 and target .88 - are complaining that the peer sent the TCP RESET packets.
5. To rule out that either of the CVMs sent the RESET packets, use the TTL field from the IP packets - https://packetpushers.net/ip-time-to-live-and-hop-limit-basics/
5.1. Upon checking the TTL value of the RESET packets on the source, note that the RESET packets have a TTL of "127".
5.2. Upon checking the TTL value of the RESET packets on the target, note that the RESET packets have a TTL of "117".
5.3. Adding the TTL column reveals that when the connection starts from CVM .43, the TTL value is 64. The same TTL value is observed when the packets are generated from CVM .88 - the target CVM.
5.4. When the packets generated from CVM .43 reach CVM .88, the TTL value is "53". The other way too, the TTL value remains "53". This shows that packets between the hosts are traversing 11 hops.
5.5. However, the RESET packets have TTL "127" on source CVM .43 and TTL "117" on target CVM .88. This clearly indicates that the Nutanix CVMs are not sending the RESET packets.
5.6. The highest of the 2 TTLs of the RESET packets is "127" - this means we are looking for an intermediate device with a default TTL value of 128 or higher. A device with a default TTL value of "128" is the culprit here. From pointer# 5.4. we know that there are 11 hops between the 2 hosts.
Source CVM <-> R1 <-> R2 <-> R3 <-> R4 <-> R5 <-> R6 <-> R7 <-> R8 <-> R9 <-> R10 <-> R11 <-> Target CVM
Referring to the above network path: since the source CVM .43 is getting RESET packets with a TTL value of "127", this indicates "R1" is sending the reset packets. And the target CVM .88 is getting RESET packets with a TTL value of "117", which again confirms "R1" is sending the reset packets.
5.7. Using the RESET packet from CVM .43 - the source, it appears that "R1" here is a Checkpoint device with MAC "00:1c:7f:6d:ff:83". Refer to the screenshot below.
5.8. Looking at all the captures, on the source the reset packets always have TTL "127" and the MAC address is "00:1c:7f:6d:ff:83".
Thus, confirming that neither the source nor the target CVM was the one sending the reset packets; rather, it was R1 - actually a Checkpoint firewall device - the culprit in this example.
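To capture only reset packets on a CVM while reproducing the issue, a capture filter such as the following can be used (a sketch; adjust the interface and port as needed, and note that -v prints the IP TTL of each packet):
nutanix@CVM:~$ sudo tcpdump -i eth0 -v 'tcp[tcpflags] & tcp-rst != 0 and port 2009'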
|
KB13902
|
NVMe disks in Lenovo HX3330 may not be utilised if not in slot 6 to 9
|
Lenovo HX3330 natively supports SSD and NVMe drives in any slot (bay). When used with AOS, nodes that underwent the Foundation process prior to Foundation 5.3.2 will need to use slots 6 to 9.
|
This KB article describes an issue where Lenovo HX3330 nodes imaged using a Foundation version prior to v5.3.2 will not detect NVMe drives in slots 0-5, and the drives will not be utilized despite being mounted in the file system.
When viewed in Prism > Hardware > Diagram or Table view, the NVMe drives will not appear.
Log on to a CVM as the "nutanix" user and execute "list_disks". Here the NVMe drives are shown as not mapped:
nutanix@CVM $ list_disks
Using the "df" command on the CVM, the NVMe are shown as mounted:
nutanix@CVM $ df -h
In the above scenario, the NVMe drives are mounted in slots 4 and 5, the SSDs are mounted in slots 0 to 3.
|
The NVMe drives must be moved to slots 6-9 for them to be detected and utilized. Power down the host and make the slot change for the NVMe drives. Please follow the below procedure to power down the host: Node shutdown procedure https://portal.nutanix.com/page/documents/details?targetId=Hardware-Replacement-Platform:bre-common-procedures-c.html. When the CVM restarts, the disks will be recognized and used as NVMe drives.
Further actions are necessary for the NVMe drives to be utilized for Oplog placement. Engage Nutanix Support http://portal.nutanix.com for assistance.
Note: Slots 0 to 9 can be used for SSD or NVMe if the nodes are imaged using the below versions:
Foundation v5.3.2
Foundation Platforms v2.12.2
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Check Vmkernel logs to see if there was link down (ESXi log)""
| null | null | null |
KB14612
|
NDB - Non-Super Admin users are not able to Provision SQL AG DB from existing AG's
|
While provisioning AG databases, users (non-admin) are unable to see existing AG's.
|
It will be observed that the non-admin user has all required roles except Super Admin.
The non-admin user also has access to all entities, including Time Machines, Databases, Clusters, etc.
However, while provisioning using an existing AG, the user is unable to see the AG.
|
We were able to successfully reproduce the issue in our lab, confirming this issue to be a bug.
Internal Jira ticket ERA-25520 has been created to track this issue. It is targeted to be fixed in NDB version 2.5.3.
Until the fix is available, the workaround is to give users Super Admin access so they can view the list of existing SQL AGs when provisioning a new SQL AG.
|
KB10273
|
Not releasing shutdown token since cluster is not fault tolerant at this time
|
Starting with AOS 5.15.3 and 5.18, a new check was added before releasing the shutdown token. Local genesis on a node that owns the shutdown token will also check if the cluster is in a fault-tolerant state, and if not - shutdown token will not be released.
|
The shutdown token is a setting in Zeus (zk-param) that tells one-click upgrades (or any other maintenance tasks) that it is safe to shut down a CVM. Only one token can exist, and this way only one node at a time can go down after getting the token.
For general information about the shutdown token, please check KB-5748 https://nutanix.my.salesforce.com/kA00e000000LJMx?srPos=0&srKp=ka0&lang=en_US and the confluence page about the shutdown token https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=~aisiri.rshankar&title=Shutdown+Token.
In AOS versions 5.15.3 and 5.18, the shutdown token handling logic was changed/improved.
Old logic for handling shutdown token:
When any CVM needs to go down (for a one-click/LCM upgrade, for a rolling restart, or just for any other kind of maintenance using the cvm_shutdown script), Genesis on this node will reach out to the Genesis leader in the cluster to request a shutdown token. If the shutdown token is not granted by the leader, local Genesis will retry in 30 seconds. Multiple CVMs can be in this stage at the same time, waiting for the token to be granted by the Genesis leader.
The Genesis leader, when it receives a request for the token, will check if a shutdown token exists and which CVM owns it. The Genesis leader will then try to revoke the shutdown token from the CVM that currently owns it, to be able to grant it to one of the CVMs requesting the token. When Genesis on the CVM that currently owns the token receives a request from the leader to release the token, it will check if the node is OK to release the token. There are multiple checks performed at this stage, including the following:
If operation (reason work which token was granted to this node) is completed on this node;if all services on this CVM are UP and stable;if the HA route (that redirect storage traffic from local CVM to remote CVM) removed from the underlying hypervisor.
If the node currently owning the shutdown token is healthy, then local Genesis will release the shutdown token, and the Genesis leader will grant it to one of the nodes that requested it. If any problem is found with the local node that owns the token, then it will not be released and the process will be repeated with another retry request from the CVM that needs the token (step 1).
The problem with the old workflow is that the shutdown token can be released/granted to the requestor CVM when the overall cluster health is not OK, because we check only the health of the CVM that currently owns the token. As a result, the next node can receive the shutdown token and go down while the cluster cannot tolerate this, and eventually User VMs can be affected.
New logic for handling shutdown token, starting from AOS 5.15.3 and 5.18.
Starting from AOS 5.15.3 and 5.18, a new check was added before releasing the shutdown token (see ENG-119095 https://jira.nutanix.com/browse/ENG-119095).The main workflow stays the same, with an extra check added on step 3 from above (before the node that currently owns the token release it).
With these changes, local Genesis on the node that owns the shutdown token additionally checks whether the whole cluster is in a fault-tolerant state by sending an RPC to the Curator leader. The Curator leader checks whether core cluster services are fault tolerant and responds to Genesis with a simple answer: the cluster fault tolerance status is 0 or 1.
Initial implementation checks only the FT status of 3 components.
As clarified in the JIRA comments https://jira.nutanix.com/browse/ENG-119095?focusedCommentId=2578172&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-2578172, the Curator leader currently only checks the health of the Cassandra, Zookeeper, and Stargate components when receiving this RPC from Genesis. For now, Curator only considers the "node" failure domain.
Improvements starting from AOS 5.20.5 and 6.5.2.
Starting from AOS 5.20.5 and 6.5.2 (see ENG-345358 https://jira.nutanix.com/browse/ENG-345358), Genesis (via RPC to the Curator leader) additionally checks that the cluster has enough free space (kFreeSpace component FT = 1) before handing over the node shutdown token. This is needed to avoid cases where several sequential rebuild processes for multiple nodes in a row fill the cluster up to 95% and cause UVM unavailability. This means that on recent AOS versions, Curator checks the FT status of four components: Cassandra, Zookeeper, Stargate, and FreeSpace.
If Genesis receives a response from the Curator leader that the cluster fault tolerance status is 1 and the local node is healthy (all the old pre-checks pass), the shutdown token is released, and the Genesis leader grants it to the next node. But if Genesis receives a response that the cluster fault tolerance status is 0, the shutdown token is not released, even if the local host that owns the token is healthy. This way, the next node is not allowed to go down while overall cluster health is not OK and another node outage cannot be tolerated.
Symptoms
When the shutdown token is not released by Genesis because cluster services are not in a fault-tolerant state, the following messages can be found in genesis.out on that CVM: Command
allssh 'grep "fault tolerant" ~/data/logs/genesis.*'
Example Output
nutanix@cvm:~$ allssh 'grep "fault tolerant" ~/data/logs/genesis.*'
At the same time, in curator.INFO on the Curator leader, we can see the following message: Command
allssh 'grep "not fault tolerant" ~/data/logs/curator*'
Example Output
nutanix@CVM:~$ allssh 'grep "not fault tolerant" ~/data/logs/curator*'
|
If the shutdown token is not released due to the cluster fault tolerance status being 0, the next step is to find which component is not in a fault-tolerant state. This can be done using ncli: Command
ncli cluster get-domain-fault-tolerance-status type=node
Example Output
nutanix@CVM:~$ ncli cluster get-domain-fault-tolerance-status type=node
To confirm the configured redundancy factor you can run the following command. Command
ncli cluster get-redundancy-state
Example Output
nutanix@NTNX-20SM5D060090-A-CVM:10.134.86.136:~$ ncli cluster get-redundancy-state
From the output, it should be clear which component has a "Current Fault Tolerance" level less than expected (less than 1 for RF2 clusters). Note: if this is an RF3 cluster, the situation in KB 15081 https://portal.nutanix.com/kb/15081 may apply. Please confirm and link if relevant. As mentioned earlier, in the current implementation, when handling the RPC from Genesis to check the cluster fault tolerance status, the Curator leader only checks the fault tolerance level of the following components:
Zookeeper
Cassandra
Stargate
Free Space (only starting from AOS 5.20.5 and 6.5.2)
Further troubleshooting should focus on resolving the problematic component's fault tolerance status. If the cluster is in a fault-tolerant state and the shutdown token is still not released due to some other issue, check KB-5748 https://nutanix.my.salesforce.com/kA00e000000LJMx?srPos=0&srKp=ka0&lang=en_US, which provides general recommendations for troubleshooting a stuck shutdown token.
|
KB16069
|
Nutanix Files - Share Migration - NFS cache expiry time was reduced during migration. unable to reset that to default value.
|
Instructions on resetting the default NFS cache expiry time when the share migration job completion automated reset fails.
|
Since NFS share migration on Nutanix Files happens via ZFS, the NFS client-side view of migrated file attributes will not be updated and visible to clients until the NFS read cache timer expires and the updated attributes are refreshed. This can result in NFS-mounted clients not seeing updated file attributes for up to 10 minutes. To work around this issue and make file attributes visible to clients more quickly during share migrations, automation was implemented to reduce the NFS cache expiry to 30 seconds and then restore it to the default 10 minutes at the end of a share migration job. Because leaving the non-default NFS cache expiry timer in place is undesirable, if the reset back to the default value fails, the following message will be present in the job status and migrator.log.
"LeaderJobWorkFlow: NFS cache expiry time was reduced during migration. unable to reset that to default value. Please refer migration doc for same"
|
To manually reset the NFS cache expiry time back to default, use the following command:
nutanix@FSVM:~$ afs nfs.remove_config_param block=EXPORT export_name=<target_share_name> param=Attr_Expiration_Time restart=False
<target_share_name> : Name of the target share for which migration was run.
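For example, if the migrated target share were named share1 (an illustrative name), the command would be:

nutanix@FSVM:~$ afs nfs.remove_config_param block=EXPORT export_name=share1 param=Attr_Expiration_Time restart=False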
|
KB15445
|
DELL - Older LCM RIM bundle for DELL
|
This KB article provides the availability of Older RIM LCM bundles for DELL.
|
** Keep this KB Internal Only - Don't expose to customers ** This KB article provides older RIM LCM bundles for DELL on an on-demand basis only. DELL has a policy to keep only the latest firmware bundle available on the Nutanix Portal under the Downloads page. This is done to ensure customers run the latest and greatest bundle available on the Portal. However, it has been noticed that some customers would like to run the N-1 bundle on the latest LCM framework for various reasons. This KB article provides the older dark site bundles that customers can use to upgrade to the older RIM versions. Ask the customer to follow the LCM Darksite Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_6:Life-Cycle-Manager-Dark-Site-Guide-v2_6 to upgrade their firmware. Older RIM bundles will not be available in the LCM UI or as a connected-site entity and can only be applied using the dark site method. Please note: DELL does not want customers to use N-1, but they have qualified N-1 for escalation use only. DO NOT recommend the customer use N-1 unless there is an escalation. DELL only qualifies the latest LCM version with up to the N-1 version of RIM. Thus (in an escalation), recommend customers upgrade only up to the N-1 bundle rather than the latest available RIM.
|
Below are the paths to the older DELL RIM bundles. Check the Portal Downloads page https://portal.nutanix.com/page/downloads?product=lcm for the latest RIM.
[
{
"Si No": "1.",
"RIM Bundle": "DELL-LCM-2.8",
"Path": "Link",
"Release Notes": "Link",
"Note": ""
},
{
"Si No": "2.",
"RIM Bundle": "DELL-LCM-2.7",
"Path": "Link",
"Release Notes": "Link",
"Note": ""
},
{
"Si No": "3.",
"RIM Bundle": "DELL-LCM-2.6",
"Path": "Link",
"Release Notes": "Link",
"Note": ""
},
{
"Si No": "4.",
"RIM Bundle": "DELL-LCM-2.5.2",
"Path": "Link",
"Release Notes": "Link",
"Note": "* RIM 2.5.1 and RIM 2.5.2 carry the same payload"
},
{
"Si No": "5.",
"RIM Bundle": "DELL-LCM-2.5",
"Path": "Link",
"Release Notes": "Link",
"Note": ""
}
]
|
KB14018
|
Alert - A130365 - PauseStretchTriggeredByWitness
|
This Nutanix article provides the information required for troubleshooting the alert PauseStretchTriggeredByWitness for your Nutanix cluster.
|
This Nutanix article provides the information required for troubleshooting the alert PauseStretchTriggeredByWitness on the Nutanix cluster.
Alert Overview
The alert PauseStretchTriggeredByWitness is generated when an automatic synchronous pause has been triggered for a recovery plan by Witness.
Sample Alert
Block Serial Number: XXXXXXXXXXXX
Output messaging
[
{
"Check ID": "Automatic Synchronous Replication pause is triggered by Witness for {entity_info_msg} protected by Protection Rule '{protection_rule_name}'"
},
{
"Check ID": "Network connection to remote site or to Witness is lost probably because of a network partition or the remote site being unavailable."
},
{
"Check ID": "Ensure the remote site and Witness are reachable and resume Synchronous Replication."
},
{
"Check ID": "One or more internal services are down or not working as expected."
},
{
"Check ID": "Contact Nutanix Support."
},
{
"Check ID": "Synchronous Replication for {entity_info_msg} will be paused."
},
{
"Check ID": "A130365"
},
{
"Check ID": "Automatic Synchronous Replication pause is triggered by Witness."
},
{
"Check ID": "Automatic Synchronous Replication pause has been triggered for {entity_info_msg}. '{reason}'."
}
]
|
Troubleshooting
An automatic synchronous pause might be triggered by Witness for the following reasons:
Network connectivity issue between the source and remote site, or between the source and Witness
One or more services down or not working as expected on the source site
Resolving the Issue
Restore connectivity between the source site, remote site, and Witness, and then resume Synchronous Replication. If the issue persists, consider engaging Nutanix Support.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 https://portal.nutanix.com/kb/1294.
If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB15452
|
PEG is not claimed due to Coalesce tasks being constantly canceled
|
PEG is not claimed due to Coalesce tasks being constantly cancelled
|
KB14294 http://portal.nutanix.com/kb/14294 explains how to reclaim PEG, and leveraging Coalesce is the way to have the PEG returned to free space. However, Coalesce tasks can constantly get canceled even when the instructions in KB14294 are properly followed; in that case, the Stargate, Curator, and Chronos logs should be investigated to find the exact reason for the task cancellations. If no clear clues are found despite investigating the services mainly involved in Coalesce tasks, proceed with this KB for another possible scenario that can match the symptom.
First, run the Curator scan a couple of times and confirm that mostly Coalesce tasks are canceled while the other background tasks are all successfully submitted and succeed. This can be verified on the Chronos page on the Curator master, and "links http://0:2011" can be used to access this page. In the example below, 31699 tasks were canceled for the "CoalesceBlockmapRegions" task.
Next, follow the steps below to further identify the issue.
1. Enable V3 logging for the Curator service. Since BgCoalesce tasks run in MR4 and the amount of logs can be huge with v3 logging, it is advised to enable V3 when MR4 starts running, while monitoring the scans on the Curator master page.
To enable V3 logging in Curator, the "v" gflag should be changed to "3". By default it is set to 0, and KB1071 http://portal.nutanix.com/kb/1071 can be followed to change the value from 0 to 3. There are two different ways (persistent and non-persistent) to change the gflag; the non-persistent way is advised, as the purpose is to gather logs with debugging enabled while Coalesce tasks are running. (Revert it to 0 once Coalesce tasks are completed and the logs are gathered.)
Below CLI can be used to check the current value of the gflag.
nutanix@NTNX-YYYYYYYYYY-A-CVM:XX.XX.XX.XX:~$ links -dump http://0:2010/h/gflags | grep v=0
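After changing the gflag to 3 per KB1071, the same command can be used to confirm the new value took effect (expect v=3):

nutanix@CVM:~$ links -dump http://0:2010/h/gflags | grep v=3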
2. Confirm that the below events are logged in curator.INFO. These are the cancellation messages indicating that the reason is a nonexistent leaf vdisk in the vdisk chain.
I20230713 04:56:49.223171Z 9142 chronos_master_task_map.cc:229] Received or recovered a task with no leaf vdisk mapping found; task_id=888038312, leaf_vdisk_mapping_found=false, vdisk_chain_id=013397a9-4133-43dc-bd3e-7ec6d3313dce
3. Investigate one of the vdisk_chain_ids reported in these events and confirm whether a leaf vdisk exists for the chain. If no leaf vdisk is found, the events in step 2 above are expected, since Coalesce tasks require mutable vdisks to exist so the data can be overwritten to create dead extents in the 1M extent where the PEG exists.
4. Investigate the vdisks in the chain (from step 3) and confirm whether one of the shell vdisks is the source of another snapshot chain ID as a clone source vdisk.
For example, vdisk_id "504048" is in chain ID "013397a9-4133-43dc-bd3e-7ec6d3313dce"; it is a shell vdisk and is also a clone source vdisk for another chain.
vdisk_id: 504048
Vdisk "854537128", "911302050", "911940143(Mutable leaf vdisk) " are in chain ID "cb2407b3-4e20-4b9c-9421-b328d29744f9" pointing to parent chain _ID " "013397a9-4133-43dc-bd3e-7ec6d3313dce", which those vdisks were clone from shell vdisk ID "504048". We also see clone_source_vdisk_id is all pointing to "504048"
vdisk_id: 854537128
|
In this scenario, we have two different chain IDs where a shell vdisk is referenced by both snapshot chain IDs. We also need to understand the additional points below.
1. The first chain does not have a leaf vdisk.
2. The cloned second chain has a leaf vdisk, and all of the vdisks in this chain point to 504048 in the first chain ID as clone_source_vdisk_id.
3. PEG from the two different chain IDs can exist in any extents of vdisk ID 504048.
Coalesce tasks first reference chain IDs to perform the overwrites. Hence, both chains are scanned by Curator for the Coalesce tasks. However, the task for the first chain gets canceled because no leaf vdisk exists in that chain. For the second, cloned chain, which references the same root shell vdisk "504048" that is part of the first chain ID, the Curator scan goes through the second chain ID for PEG reclamation; however, since all of the vdisks in this second chain ID carry the reference "clone_source_vdisk_id: 504048", it eventually goes to vdisk ID "504048" and scans its first chain ID "013397a9-4133-43dc-bd3e-7ec6d3313dce" for the Coalesce task instead of scanning its own second chain ID, which ends up in another cancellation of Coalesce tasks against chain ID "013397a9-4133-43dc-bd3e-7ec6d3313dce". Due to these problems, the PEG generated from the second chain ID cannot be reclaimed at all. This is an architecture-related bug where the cloned snapshot chain ID cannot be properly handled for PEG reclamation because the clone source vdisk ID belongs to another snapshot chain ID. ENG-583626 https://jira.nutanix.com/browse/ENG-583626 is open for computing/generating PEG reclamation Bg tasks for the clone source vdisk chain and the cloned vdisk chain separately. As this symptom cannot easily be identified without Curator v3 logging enabled, ENG-583814 https://jira.nutanix.com/browse/ENG-583814 is also open for improving the logging in Curator. Currently, there is no workaround for reclaiming the PEG in the second, cloned snapshot chain ID except deleting the cloned snapshot.
|
KB12864
|
Alert - A400117 - PolicyEngineVersionMismatch
|
Investigating PolicyEngineVersionMismatch issues on a Nutanix cluster.
|
Note: Self-Service was formerly known as Calm.
This Nutanix article provides the information required for troubleshooting the alert PolicyEngineVersionMismatch for your Nutanix cluster.
Alert overview
The PolicyEngineVersionMismatch alert is generated when the Policy Engine version is not compatible with the Self-Service version.
If Self-Service and Policy Engine versions are incompatible, some of the Self-Service features may not work.
Policy Engine supports LCM (Life Cycle Manager) upgrades, but unlike Self-Service and Epsilon, Policy Engine is not upgraded automatically if the running Policy Engine version is not compatible with the Self-Service version.
Sample alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
[
{
"Check ID": "Calm Policy Engine Version Mismatch"
},
{
"Check ID": "Failed LCM upgrade"
},
{
"Check ID": "Please upgrade Calm Policy Engine using LCM."
},
{
"Check ID": "Calm Policy Engine will not work as expected."
},
{
"Check ID": "A400117"
},
{
"Check ID": "Calm Policy Engine Version Mismatch"
},
{
"Check ID": "Calm Policy Engine policy_version is not compatible with Calm version calm_version, please upgrade Calm Policy Engine"
}
]
|
Resolving the issue
Upgrade Policy Engine using LCM.
Refer to the LCM Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide:Life-Cycle-Manager-Guide or the Self-Service Admin and Operations Guide https://portal.nutanix.com/page/documents/details?targetId=Self-Service-Admin-Operations-Guide:Self-Service-Admin-Operations-Guide for more information.
If you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach it to the support case.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching files to the case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB13699
|
HPDX - SPP 2022.03.0 disabled for platforms with HPE SR-416i-a G10+ controller, due to NVMe disk instability
|
This KB describes an issue where SR416i-a Gen10+ Controller firmware 03.01.09.056 has been seen to cause failure to mount NVMe drives.
|
LCM-2.5 supports the SPP 2022.03.0 firmware upgrade for HPDX platforms through the RIM-1.6.0 release. This version of SPP carries firmware version 03.01.09.056 for the "HPE SR416i-a Gen10+" controller. This KB describes an issue where "HPE SR416i-a Gen10+" controller firmware 03.01.09.056 has been seen to cause failure to mount NVMe drives. Additionally, we have seen LCM upgrades fail with the message "Failed to flash UBM firmware behind Controller" - for more details on this topic, refer to KB 13578 https://portal.nutanix.com/kb/13578. Nutanix has disabled the upgrade to SPP 2022.03.0 on the following platforms running the HPE SR416i-a Gen10+ controller on the node:
ProLiant DX380 Gen10 Plus 8SFF
ProLiant DX380 Gen10 Plus 24SFF
ProLiant DX385 Gen10 Plus v2 24SFF
ProLiant DX325 Gen10 Plus v2 8SFF
The LCM upgrade will be disabled with the following message :
Upgrade to SPP 2022.03.0 on the Hardware model %s has been disabled, since it may lead to issues with HPE SR416i-a controller. Please refer to KB 13699 for more details.
To verify the HPE SR416i-a Gen10+ controller Firmware version, follow the below steps :
From iLO: Log in to iLO and navigate to System Information -> Device Inventory. Check the "HPE SR416i-a Gen10+" version.
On AHV: Execute the below command from a CVM.
nutanix@cvm$ ssh [email protected] "export TMP=/root && ilorest serverinfo --firmware" | grep "SR416"
On ESXi: Execute the below command from a CVM.
nutanix@cvm$ ssh [email protected] "ilorest serverinfo --firmware" | grep "SR416"
|
This issue has been resolved in the latest SPP 2022.03.0.01, released with HPDX-LCM-1.6.1 on the Nutanix Portal. Nutanix has removed the offending SR controller and UBM firmware from the above SPP package and unblocked the upgrade to 2022.03.0.01. The firmware for these components will be added back in one of the upcoming SPP versions.
An approved workaround to restore NVMe disk stability is available, which involves disabling TRIM functionality. Please contact Nutanix Support http://portal.nutanix.com/ for assistance with implementing the workaround.
|
KB10793
|
Nutanix DR | Unable to unprotect VM in Protection Policy
|
This article describes an issue where a VM explicitly protected in Protection Policy cannot be unprotected and the workaround to resolve the issue.
|
An issue has been seen in the field where a VM that is protected explicitly in a Protection Policy cannot be unprotected.
When a VM is protected explicitly, a category is attached to the VM with the mapping {"category" : "policy_name"}. This category (invisible to end users) is looked up by the DR service to process the protection of the VM.
The issue arises because requests made by the web client to unprotect the VM (i.e., remove the associated category) do not have the correct payload: the field "use_categories_mapping: true" is missing, so the API server does not detach the category and the VM remains protected.
The following events are seen when verifying the action from aplos_engine.out.
2020-10-26 17:39:29,493Z INFO intentengine_app.py:378 Got request from queue {"status": {"state": "PENDING", "execution_context": {"task_uuid": "d00abf35-aad4-4bc9-88b7-62ff18952abe"}}, "uuid": "fcc2a6a6-61df-40d1-81f9-fff8c4e93d3d", "auth_metadata": {"username": "admin", "use_categories_mapping": false, "category_id_list": ["08011ece-9e47-4920-8fbb-dae1383a5e20", "870b981b-a3db-5e6d-a9a8-515f70ad8f03", "f01477fe-3201-5746-9077-38f9ad863287"], "trace": "", "session_json_dict": "{\"username\": \"admin\", \"domain\": null, \"legacy_admin_authorities\": [\"ROLE_USER_ADMIN\"], \"authenticated\": true, \"_permanent\": true, \"log_uuid\": \"a9a48d00-9d56-472b-979f-dea0dcda479a\", \"session_jwt_expiration_time\": 1603734766, \"session_jwt_token\": \"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9.eyJ1c2VyX3Byb2ZpbGUiOiJ7XCJ1c2VybmFtZVwiOiBcImFkbWluXCIsIFwiZG9tYWluXCI6IG51bGwsIFwibGVnYWN5X2FkbWluX2F1dGhvcml0aWVzXCI6IFtcIlJPTEVfVVNFUl9BRE1JTlwiXSwgXCJhdXRoZW50aWNhdGVkXCI6IHRydWUsIFwiX3Blcm1hbmVudFwiOiB0cnVlLCBcImxvZ191dWlkXCI6IFwiYTlhNDhkMDAtOWQ1Ni00NzJiLTk3OWYtZGVhMGRjZGE0NzlhXCIsIFwidXNlcnR5cGVcIjogXCJsb2NhbFwiLCBcImFwcF9kYXRhXCI6IHt9LCBcImV4cFwiOiAxNjAzNzM0MDQ0LCBcImF1dGhfaW5mb1wiOiB7XCJ1c2VybmFtZVwiOiBcImFkbWluXCIsIFwidXNlcl9ncm91cF91dWlkc1wiOiBudWxsLCBcInNlcnZpY2VfbmFtZVwiOiBudWxsLCBcInRva2VuX2lzc3VlclwiOiBudWxsLCBcInJlbW90ZV9hdXRob3JpemF0aW9uXCI6IG51bGwsIFwicmVtb3RlX2F1dGhfanNvblwiOiBudWxsLCBcInRva2VuX2F1ZGllbmNlXCI6IG51bGwsIFwidXNlcl91dWlkXCI6IFwiMDAwMDAwMDAtMDAwMC0wMDAwLTAwMDAtMDAwMDAwMDAwMDAwXCIsIFwidGVuYW50X3V1aWRcIjogXCIwMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDBcIn19IiwianRpIjoiMzQ4MDgxMWYtYWEzNC00NDhkLTg3ZTgtNWY5ZWY4ODM4NzE5IiwiaXNzIjoiQXRoZW5hIiwiaWF0IjoxNjAzNzMzODY2LCJleHAiOjE2MDM3MzQ3NjZ9.c09vOx6vKJxDsUqr8g0XoSO2Q6uGjKOQhi3LW2_izhfwrgFAWhrnaswZXqGaKyIuUjIyAJ5LdvO2e-w0VVfXk95wdU5FcKgx4-o-xD-xSnXjZysVKKEwAy1SPuIyl9wmTgYUyobqs5IzL1s8nWiseQ-H5_udHBEM39OZ13utZSqDq5TOL5dxBthjTmPHZgG0ZJ4e66Xivg4EFTIAtJTvVpKL5npCFdjbuIX8LRNk3DFMqtvSaOLCuGN3fpIVzqGzINvfcMW4NgiqryybXdJTE7BdwLGCU2Fx0EOltWIng_kmU-jhvwFQ_iZvvuInYivtzZ6CFoRxonZF8FuDoDxnvw\", \"usertype\": \"local\", \"app_data\": {}, \"exp\": 1603734766, \"auth_info\": {\"username\": \"admin\", \"user_group_uuids\": null, \"service_name\": null, \"token_issuer\": null, \"remote_authorization\": null, \"remote_auth_json\": null, \"token_audience\": null, \"user_uuid\": \"00000000-0000-0000-0000-000000000000\", \"tenant_uuid\": \"00000000-0000-0000-0000-000000000000\"}}", "remote_authorization": null, "owner_uuid": "00000000-0000-0000-0000-000000000000", "incap_req_id": null, "session_log_uuid": "a9a48d00-9d56-472b-979f-dea0dcda479a", "service_name": null, "user_uuid": "00000000-0000-0000-0000-000000000000", "tenant_uuid": "00000000-0000-0000-0000-000000000000", "owner_username": "admin"}, "intent_spec_uuid": "539ea445-0019-5953-9052-3ea59fc8e108", "request_metadata":
|
Note: Consult with a Senior SRE or a Support Tech Lead (STL) before making the changes below.
The solution below is for ESXi and AHV clusters. For Hyper-V, engage ENG via ONCALL.
To resolve the issue, update the VM spec on the PC.
SSH into the PC VM and get the UUID of the VM that needs to be unprotected.
nutanix@PCVM:~$ nuclei vm.list
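On clusters with many VMs, the output can be filtered by VM name to find the UUID quickly (the name below is illustrative):

nutanix@PCVM:~$ nuclei vm.list | grep -i app-vm-01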
Edit the VM spec (as in a vi editor) to remove the Categories and Categories Mapping values and add use_categories_mapping: true
nutanix@PCVM:~$ nuclei mh_vm.put 2e7bb826xxxx-xxxx-xxxx-xxxx57362bf5 edit_spec=true
After the changes, the result should look as below. Make sure there is a space between the colon (:) and the curly braces {}.
api_version: '3.1'
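For illustration only (field names follow the v3 API spec; the rest of the spec is omitted here), the relevant metadata fields after the edit would typically look like:

categories: {}
categories_mapping: {}
use_categories_mapping: true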
Save the update as in a vi editor (:wq!) and wait for 10 minutes; the VM will then get unprotected.
|
KB16801
|
Cluster init issues during Calm VM deployment due to DNS unreachable
|
Self Service (Calm) VM deployment issues due to DNS unreachable.
|
Calm VM images on version 3.7.x rely on a hardcoded DNS configuration added by dhclient-script during deployment, including Google DNS (8.8.8.8).
Scenario 1: If the customer environment cannot reach Google DNS, cluster init will fail with a traceback pointing to DNS being unreachable.
Scenario 2: In an environment where Google DNS (8.8.8.8) is reachable, cluster init will go through, but it will end up with an empty DNS configuration after reboot, preventing the VM from resolving names.
No nameserver entry will be found in /etc/resolv.conf, and the file cannot be edited because it is protected (the immutable attribute is set).
|
To work around the issue in both scenarios, it is recommended to change the attributes on the name server file with the command chattr -i /etc/resolv.conf and empty the file during the first Self-Service VM boot.
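A minimal sketch of the workaround commands, run as root inside the Calm VM during its first boot (the prompt is illustrative):

root@calm-vm# chattr -i /etc/resolv.conf
root@calm-vm# > /etc/resolv.conf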
After the change, during the second boot, the Calm VM should get the DNS configuration from the PE network settings.
In addition to the problem described above, the customer may experience issues when attempting to log in to the Web GUI after deployment, with the following message:
This is a known issue currently being tracked by engineering. To work around it, use the command below to reset the password:
ncli user reset-password user-name="<user_name>" password="<password>"
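For example, with illustrative values:

ncli user reset-password user-name="admin" password="Nutanix.1234"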
|
""ISB-100-2019-05-30"": ""Description""
| null | null | null | null |
KB10991
|
Higher average controller wide latency in Prism after upgrading ESXi to 6.7 or 7.0
|
After upgrading to ESXi 6.7 or later, it has been observed that the average controller wide latency has increased in Prism.
|
A higher average controller-wide latency has been observed in the Prism average controller-wide latency graph in a number of cases after upgrading to ESXi 6.7 or 7.0. This happens due to a loss of optimization for small .lck files in these ESXi versions and does not affect VMs, so the issue described in this KB is purely cosmetic. There is no impact other than increased latency in the graphs.
Detailed explanation
For every VM, there are a number of .lck files present on the datastore to lock files such as VMDK flat files, as well as others, on a given ESXi host using NFS v3. These .lck files are updated periodically every 10 seconds (refer to VMware KB article 1007909 https://kb.vmware.com/s/article/1007909). The issue is evident on every ESXi system running 6.7 and higher and can be confirmed by using esxtop with option u (disk device) and adding the fields for read and write latency (key f):
Figure 1: esxtop
When comparing the examples of ESXi 6.5 (top) and ESXi 6.7 (bottom), it becomes evident that every ~5 seconds there is a spike in write latency without an increase in load:
Figure 2: lock issue
The root cause of this behaviour is the change of the .lck file size from 84 bytes in ESXi 6.5 to 92 bytes in ESXi 6.7. This change disables an optimization in the Nutanix code that switches these periodic .lck writes to async mode instead of synchronous. As a consequence, it adds latency, which can lead to wrong conclusions about performance in the Prism graphs. This behavior does not influence the VM workload itself. The problem becomes most evident when there is not much sustained write IO while a larger number of .lck files exists per host (defined by the number of VMs and the disks attached per VM), as well as under a Metro configuration.
Changes between ESXi 6.5 and 6.7:
ESXi 6.5 and earlier:
[root@pars0pesxmem001:/vmfs/volumes/<datastore>/<vm_name>] ls -lash
Since ESXi 6.7:
root@host:/vmfs/volumes/<datastore>/<vm_name>] ls -lash
Further analysis for .lck files on VMware using NFS v3
This command shows the number of .lck files on the cluster. The command only needs to run on one of the hosts, as datastores are mounted to all hosts. On this cluster there are 161 .lck files present.
nutanix@CVM:~$ ssh -q [email protected] "for i in \$(/bin/esxcli storage filesystem list | grep NFS | cut -c 1-31 | grep vmfs) ; do find \$i | grep .lck- | wc -l; done" | awk "{sum+=\$1}END{print sum}"
Find the host where a given VM (in this case test) is currently hosted
nutanix@CVM:~$ allssh "ssh -q [email protected] esxcli vm process list | grep test"
This VM has a substantial number of .lck files due to the number of attached VMDK flat files plus additional files like .vmx and .vswp. The example below illustrates the 92-byte file size present since release ESXi 6.7.
nutanix@CVM:~$ ssh -q [email protected] ls -lah vmfs/volumes/<datastore>/<vm_name> | grep '.lck-'
To find the file name being locked by a given .lck file, a big-endian to little-endian conversion needs to happen. Manually, we would translate the number in the below example from 0x934 = 2356 (decimal) and then stat for that inode.
Figure 3: Big Endian to Little Endian translation
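A minimal shell sketch of the byte-swap for one hypothetical value ("3409" standing in for the hex digits taken from a .lck file name; the inode 2356 matches the example above):

root@ESXi# v1=3409; printf "%d\n" 0x${v1:2:2}${v1:0:2}    # swaps the two bytes: 0x0934 = 2356

The resulting decimal inode can then be matched against the VM's files, for example with: stat * | grep -B2 "Inode: 2356"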
The below example one liner does this for this specific VM:
root@ESXi cd /vmfs/volumes/<datastore>/<vm_name> && for i in $(ls -lah /vmfs/volumes/<datastore>/<vm_name> | grep '.lck-' | cut -c 56-76); do echo $i; stat * | grep -B2 `v2=$(v1=$i;echo ${v1:13:2}${v1:11:2}${v1:9:2}${v1:7:2}${v1:5:2});printf "%d\n
|
This KB should remain internal until the AOS with the fix has been released. This file size change has been reflected in the AOS releases 5.15.6 and 5.20, as well as all further AOS releases. If a fix is needed, Nutanix Support should be contacted to change a configuration parameter to set the file size to 92 bytes.
|
KB12577
|
Aplos service unexpected restarts
|
Aplos service crashes when a greenlet execution takes longer than 20 seconds to complete
|
SCENARIO 1: Recently, multiple instances of Aplos service crashes have been reported during 3rd-party backup operations. In some cases, this can lead to a backup failure or a service-restarted alert in the cluster. Identification: Aplos service restart alerts were reported in Prism during the time of the 3rd-party backup.
ID : 1f7bc8f8-ded8-47c8-96f4-69f9100e905e
The Aplos service crashed with greenlet and "Zookeeper session expired" errors because some of the change_region response calls took more than 20 seconds. This blocks other greenlets for longer than the ZK timeout of 20 seconds.
/home/nutanix/data/logs/aplos.out on CVM:
Prism proxy logs show some of the calls taking more than 20 sec just before the Aplos service crash.
/home/apache/ikat_access_logs/prism_proxy_access_log.out on CVM:
To confirm how long the calls took, you can enable debug logging for Aplos and look for messages similar to the ones below in aplos.out on the CVM that had the Aplos service crash.
2024-05-11 19:04:57,778Z DEBUG data_changed_regions.py:58 Cerebro query for the changed regions query received at 1715454288 took 0:00:09.413578
At the same time, the load on the CPU is high, and CPU idle drops very low (<10%). Panacea can be used to check CPU statistics.
#TIMESTAMP 1634933108 : 10/22/2021 08:05:08 PM
SCENARIO 2: In the Aplos logs, the below Zookeeper session timeout is logged. In this example, 21 seconds have elapsed since the last response from ZK:
2023-01-27 14:54:06,731Z DEBUG base_heartbeat.py:80 Sent heartbeat for table aplos_gateway_heartbeat
After enabling debug logging in Aplos, a massive response to an LDAP search is seen around the same time:
2023-01-27 14:53:26,067Z DEBUG ldap_client.py:213 search filter -(|(memberOf=cn=it services,ou=exchange,ou=systemaccess,ou=groups,dc=int,dc=censhare,dc=com))
|
Explanation: As described in ENG-439054 https://jira.nutanix.com/browse/ENG-439054, Aplos will crash with the "Zookeeper session expired" error message if any greenlet execution takes longer than 20 seconds to complete. This behavior may occur under different conditions, such as the backup workflow or long nested results from LDAP queries for certain users.
Scenario 1: During the backup workflow, a 3rd-party backup application sends the /api/nutanix/v3/data/changed_regions API call. During the processing of this call, Aplos retrieves the data on changed regions from Cerebro asynchronously. Aplos then builds the JSON response, which is a synchronous operation. JSON response generation locks the process until it completes and is also CPU-intensive. This operation can take a long time depending on the response size and the available CPU resources on the CVM. By default, Cerebro provides up to 20,000 regions in a single response to the changed_regions API.
Permanent Fix:
The fix for this issue is implemented as part of ENG-439051 https://jira.nutanix.com/browse/ENG-439051.
A partial fix for this issue is implemented in LTS version 6.5.2 and later. The yield-during-CRT changes introduced in 6.5.2 reduce Aplos crashes by 42% compared to earlier AOS versions.
The complete fix for this issue is available in AOS 6.7 and later versions. These changes have been qualified, and no crashes were observed during testing.
Additional details regarding this fix are available in this comment https://jira.nutanix.com/browse/ENG-439051?focusedCommentId=4907257&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-4907257.
The full fix cannot be backported to the LTS 6.5 branch due to Zookeeper packaging changes introduced in later AOS versions.
Workaround: WARNING: Support, SEs, and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before making any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB 1071 https://portal.nutanix.com/kb/1071. The Cerebro gflag below controls the maximum number of changed regions. By default, the value is set to 20000. This can be reduced to a lower value such as 5000 or 1500.
--cerebro_slave_compute_changed_regions_max_regions_to_load
To make the changes take effect, restart the Cerebro service on all CVMs with a 120-second delay.
nutanix@cvm:~$ allssh "genesis stop cerebro && cluster start; sleep 120"
Note:
Changing cerebro_slave_compute_changed_regions_max_regions_to_load to lower values will decrease the chance of Zookeeper session expiration, but the timeout can still happen. Reducing the gflag value can adversely affect backup performance and duration. In some scenarios, the vCPU allocation of the CVMs needs to be increased.
Make sure you stay within the NUMA boundary. The NUMA boundary can be found by looking at the number of cores per CPU socket, as follows:
nutanix@NTNX-16SM6B490273-A-CVM:10.xx.xx.113:~$ lscpu | grep Core
Refer to this link https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin%3Aapp-nutanix-cloud-infra-cvm-field-specifications-c.html&a=441fe666bd493a8679e41594dc67c9b46319b5454a17e375591c833b689fc00645572286d815edae for the CVM specifications for different node hardware models.
SCENARIO 2: Multiple issues can cause the observed behavior, such as network latency, a faulty DNS server, etc. Investigate further to understand the root cause. If needed, reach out to an STL or EE for help.
|
KB11066
|
Resolving "A duplicate IP address detected for 192.168.5.1 on interface vmk1" error in ESXi vobd.log
|
Logging into the hypervisor from the local CVM via SSH to 192.168.5.1 address fails.
|
SSH login to the ESXi hypervisor management IP address as root succeeds, but logging in to the internal IP address 192.168.5.1 from the CVM fails. Only the genesis and zookeeper services start on the local CVM.
|
1. Log into the ESXi host and check /var/log/vobd.log. It shows a duplicate IP address detected for 192.168.5.1. The same error messages can be found on the other hosts.
root@ESXi# cat /var/log/vobd.log
2. Run the esxcfg-vmknic -l command on all ESXi hosts; the above MAC (00:50:56:xx:xx:2f) is owned by another ESXi host.
root@ESXi#esxcfg-vmknic -l
3. Check the vSwitch configuration. We can see vSwitchNutanix is incorrectly configured with an uplink, vmnic0:
nutanix@cvm$ hostssh esxcfg-vswitch -l
4. Remove the incorrect uplink port from vSwitchNutanix; the CVM services will start automatically. Run the below command on the ESXi host where the configuration must be adjusted (it removes the port vmnic0 from the vSwitchNutanix uplinks):
root@ESXi# esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitchNutanix
If you need to correct the configuration on all ESXi hosts, prepend the command with hostssh:
nutanix@cvm$ hostssh esxcli network vswitch standard uplink remove --uplink-name=vmnic0 --vswitch-name=vSwitchNutanix
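After removing the uplink, re-run the earlier listing command to confirm that vSwitchNutanix no longer shows any uplinks:

nutanix@cvm$ hostssh esxcfg-vswitch -l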
|
}
| null | null | null | null |
KB16445
|
Nutanix Files - Retrieving Logs When Logbay Command Does Not Work From CVM
|
The objective of this KB is to outline methods for gathering Nutanix Files logs when encountering issues or failures with the Logbay command on CVM
|
The goal of this KB is to document Nutanix Files related troubleshooting logs for the SRE team.
|
If you encounter issues with the logbay command on the CVM, or if it gathers inaccurate data, the following script can be executed from within the FSVM to capture logs for a specific timeframe. To collect logs for the entire cluster, the below script should be run on each FSVM within the Nutanix Files cluster.
nutanix@FSVM:~$/home/nutanix/minerva/bin/minerva_nvm_run_cmd --start_time_epoch=<epoch time> --end_time_epoch=<epoch time> generate_logs_on_nvm
You can convert the epoch time online, considering the corresponding timezone for the given date/time, and then insert the converted epoch numbers into the above command. Once the command execution completes, the tarball is generated at the following location on the FSVM where the command was run: /home/nutanix/data/log_collector/<fsvm-ip>-logs/cvm_logs/<tarball>
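Alternatively, the epoch values can be generated directly on the FSVM with the date command (the timestamps below are illustrative):

nutanix@FSVM:~$ date -d "2024-03-01 10:00:00" +%s
nutanix@FSVM:~$ date -d "2024-03-01 12:00:00" +%s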
Example :
|
KB13652
|
Nutanix DRaaS (Formerly Xi Leap) | After failover from on-prem to Xi Cloud, a Windows VM booted up but some data drives have status = Failed
|
After failover from on-prem to Nutanix DRaaS (Formerly Xi Leap), a Windows VM booted up but some data drives have status = Failed
|
After a Windows VM fails over to Nutanix DRaaS (Formerly Xi Leap), the VM can boot up, but its data drives are shown as Failed in Disk Management.
For example, Disk Management shows two data drives with Type = Dynamic and Status = Failed
|
Symptoms
After failover, dynamic disks are mounted on a different machine, which very likely causes the dynamic disks to become "Foreign" disks. When a disk becomes a foreign disk, it needs to be imported into the VM manually.
Solution
Refer to the following steps to import a foreign disk
In Disk Management, right-click the foreign disk and select "Import Foreign Disks". After that, any existing volumes on the foreign disk become visible and accessible. You may need to reassign the drive letter to match the original one. (A command-line alternative is sketched after the documentation reference below.)
Refer to the official Microsoft documentation https://learn.microsoft.com/en-us/windows-server/storage/disk-management/move-disks-to-another-computer for detailed information.
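The import can also be performed from an elevated command prompt with the diskpart utility; the disk number below is illustrative, and diskpart's import command imports the foreign disk group containing the selected disk:

DISKPART> select disk 1
DISKPART> import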
|
KB15631
|
Unable to access VM disk data after changing disk bus type in Prism Central
|
Unable to access VM disk data after changing disk bus type in Prism Central.
|
Unable to access VM disk data after changing disk bus type in Prism Central.
Changing the bus type of an existing disk does not clone the disk but triggers the creation of a new disk instead. As the disk is not cloned, the VM will see a blank disk as a result.
|
This issue is resolved in pc.2023.4. Upgrade Prism Central to prevent it from happening.
Workaround
Use acli to change the disk bus type as described in KB-2998 https://portal.nutanix.com/kb/000002998. Do not change the disk bus type through Prism Central.
In case the disk bus type was updated through Prism Central and the data is not accessible, engage Nutanix Support immediately at https://portal.nutanix.com/
|
KB14411
|
Cluster conversion ESXi-AHV errors
|
Some customers converting clusters from ESXi to AHV get multiple errors and warnings during the validation process, and for some of them it is not clear how to resolve or clear them.
|
Cluster conversion has several requirements as well as general limitations, which can be found under the section In-Place Hypervisor Conversion https://portal.nutanix.com/page/documents/details?targetId=Web_Console_Guide-Acr_v4_6:man_cluster_conversion_c.html of the Prism Web Console Guide documentation. After running validation from Settings --> Convert Cluster --> Validate, some errors are not clear about the solution steps; those are covered here. To check the Genesis leader, run the below command or refer to KB-4355 https://portal.nutanix.com/kb/4355
nutanix@CVM:~$ panacea_cli show_leaders | grep genesis_cluster_manager
The errors and warnings will be in the genesis logs of the genesis leader if you grep on cluster_conversion
nutanix@CVM:~$ grep -i cluster_conversion data/logs/genesis.out
Error no. 1:
2023-02-27 14:21:46,870Z ERROR 34236688 cluster_conversion.py:663 Node 10.xx.xx.xx external vswitch vSwitch0 does not have homogeneous uplinks.
Error no. 2:
2023-02-27 14:21:46,871Z ERROR 34236688 cluster_conversion.py:663 VM vCXXXXXXXXX with uuid 501XXXXXXX has error: VM has files/virtual disks on non-Nutanix storage
Error no. 3:
2023-02-27 15:35:45,152Z ERROR 41358672 cluster_conversion.py:663 VM XXX with uuid 503a1XXXX has error: VM has delta disks. Please remove any snapshots attatched to the VM
Error no. 4:
2023-02-28 14:21:39,472Z ERROR 72448496 cluster_conversion.py:663 Cluster does not have consistent networks/port groups across nodes (10.XX.XX.XX, 10.XX.XX.XX), non uniform port groups: 'ntnx-internal-pg svm-iscsi-pg vmk-svm-iscsi-pg ntnx-internal-vmk'. Conversion can move VMs to other hosts so prefer consistent network
Error no. 5:
2023-02-28 15:14:14,380Z ERROR 72448496 cluster_conversion.py:663 Datastores ['XXX'] are not mounted on all ESXi hosts
Error no. 6:
VM vCLS-bd0b5587-xx-xxx with uuid 5006f8ba-xx-xxx has error: VM has files/virtual disks on non-Nutanix storage
|
Error no. 1 can be due to many things, so you need to check the Network Requirements and Limitations in the document above (In-Place Hypervisor Conversion). The tricky part is that all NICs which are not used must be removed from ESXi as per these requirements, but it is not clear what that means: you have to go to each host and remove all non-connected NICs and keep only a single NIC active, or you will get a warning as below. A warning will not stop the conversion; only errors will. Follow the VMware KB Remove a Physical NIC from a vSphere Distributed Switch https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networking.doc/GUID-6272F768-F7F0-49CA-98DA-F18F2A609310.html
2023-03-01 14:47:57,527Z WARNING 41360912 cluster_conversion.py:665 Node 10.xx.xx.xx external vswitch vSwitch0 after conversion will use NICs active-passive. Currently using active-active with load balancing policy loadbalance_srcid
Error no. 2: check in Prism which containers are mounted under the Storage tab, then migrate all the reported VMs to such a container.
Error no. 3: sometimes some VMs have snapshots, or snapshots need to be consolidated. Check the affected VM from vCenter --> right-click --> Snapshots --> Consolidate or Delete All Snapshots.
Error no. 4: check the port groups on all the hosts and follow KB-3445 https://nutanix.my.salesforce.com/kA032000000Cifm?srPos=0&srKp=ka0&lang=en_US "ESXi: How to recreate Standard Switch configuration via ESXi command line".
Error no. 5: check in Prism whether the datastore is mounted on all the hosts, and re-mount it if needed.
Check from Prism --> Storage --> Table View --> choose the container --> Update --> if it is not mounted on a host (usually shown with a yellow exclamation mark), remount it. It is also possible to mount it from vSphere (choose the datastore --> right-click --> Mount Datastore to Additional Hosts --> choose the host under compatible hosts --> OK; already-mounted hosts will not appear, as they only appear if you choose the Unmount option from the menu).
Error no. 6: vSphere Cluster Services (vCLS) is activated by default and runs in all vSphere clusters. This feature ensures cluster services such as vSphere DRS and vSphere HA are available to maintain the resources and health of the workloads running in the clusters, independent of vCenter Server availability. We cannot power off a vCLS VM, as the VM gets powered on automatically even if it is manually powered off. Nutanix does not need vCLS during the conversion, so disable vCLS on the cluster by enabling Retreat Mode. Refer to either VMware KB 91890 https://kb.vmware.com/s/article/91890 or KB-11711 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LWC6SAO for detailed steps to enable Retreat Mode.
|
KB14114
|
AHV node may be stuck in HAFailoverSource state
|
An HA task may leave a node stuck in the HAFailoverSource state.
|
Acropolis initiates an HA task to restart VMs on other hosts when it detects a node failure. In some cases, the failed node gets stuck in the HAFailoverSource state. This issue typically happens when the failure was caused by a network outage, but it is not limited to that. Check the following information to identify the issue:
One node status is in HAFailoverSource state:
nutanix@cvm$ acli host.list
Make sure there are no currently running (or queued) HA-related tasks. Do a task get and confirm the HA task was on the same node that is stuck in HaFailoverSource. Check the kHaFailover task and the tasks whose parent is kHaFailover. In particular, both the kHaFailover task and the kStartHAFailover task should be kSucceeded.
## Find kHaFailover task
Acropolis leader migration happened after HaFailover task started, but a restarted task did not find any running HA subtasks. acropolis.out on Acropolis leaders shows the following logs.
## On previous Acropolis leader
The kHaFailover task finished before its kStartHAFailover subtask finished. go_ergon.out shows TRANSITION messages like the following.
I1205 11:57:48.945321Z 2855 common.go:394] TRANSITION:TU=71c490c0-6923-4000-9a4b-0533ecd391da:PTU=:RTU=71c490c0-6923-4000-9a4b-0533ecd391da:Acropolis:41:kHaFailover:kQueued/kRunning
(Optional) There might be no CREATE message for kStartHAFailover subtask, while a Timeout message might exist in go_ergon.out.
E1205 11:57:57.601543Z 2855 task_create.go:132] TaskCreate:request_id(1457) Task('0cdcfb59-32a7-4eeb-a033-57fa400cbafe') Timeout: 3
(Optional) There might be Cassandra timeout messages in cassandra/system.log
ERROR [EXPIRING-MAP-TIMER-1] 2022-12-05 11:57:03,845Z ReadDoneHandler.java (line 181) Caught Timeout exception while waiting for read response from nodes: [/xx.xx.xx.23, /xx.xx.xx.22]. readEndpoints: [/xx.xx.xx.21, /xx.xx.xx.23, /xx.xx.xx.22]. Request Id: 95183. Proto Rpc Id : 7280806063140530168. Request start time: Mon Dec 05 20:56:54 JST 2022. Message sent to nodes at: Mon Dec 05 20:56:54 JST 2022
|
This issue is resolved in:
AOS 6.5.X family (LTS): AOS 6.5.2AOS 6.6.X family (STS): AOS 6.6
Please upgrade AOS to the versions specified above or newer. Open an ONCALL attaching a whole log bundle that covers the time from the start of the HAFailover task, along with all the information in the Description, to involve DevEx in recovering from the issue.
|
KB9546
|
Citrix PVS (Provisioning Services) - Windows 2016 VM/Virtual Machine Image Not booting
|
Windows vDisk created on ESXi being used for Citrix deployment on AHV leads to BSOD at boot.
|
The Windows Server 2016 server is not booting. After installing MCS on the Delivery Controllers and the PVS plugin on the PVS servers, creating the snapshot, and adding a VM via the Citrix Wizard, the VM is created in AHV, but upon power-on the blue screen appears:
VirtIO drivers are confirmed to be installed on the image
The operating system is Windows Server 2016
The VM boots successfully on ESXi but not on AHV
VDA Agent version 7.15
|
A vDisk that was created on ESXi will not work on AHV, even if you install Nutanix Guest Tools and the VirtIO drivers. Create a new VM in AHV, boot into a preinstall environment (for example, Hiren's Boot Disk), install network drivers if required, mount the existing vDisk image, and clone it to the disk which will represent the C:\ drive when the VM boots. Boot the VM as normal, then use the Image Wizard on the worker server to create a new vDisk.
|
KB13417
|
NKE - The task to add or remove worker_node in the node_pool fails in k8s cluster
|
This article describes an issue where worker_node add/remove in the node_pool fails in k8s cluster.
|
Adding or removing a worker_node, or a k8s upgrade, fails with the error:
Not all nodes are ready: expected: x, ready: y
Where x < y.
Example:
PreK8sUpgradeChecks: preupgrade checks failed for k8s and addon upgrades: failed to upgrade cluster k8s version and/or addons:
Cluster list output shows a worker IP is missing (in this example, xx.xx.xx.206).
nutanix@PCVM:~$ ./karbon/karbonctl cluster list
But the node "mycluster-58ff7c-worker-0" is present in node list and Prism Central (PC) UI.
[root@mycluster-58ff7c-master-0 ~]# kubectl get nodes -o wide
JSON status of the cluster shows the affected worker node in "Not provisioned" state.
nutanix@PCVM:~$ ./karbon/karbonctl cluster get --cluster-name mycluster --output json
|
Contact Nutanix Support http://portal.nutanix.com for assistance in resolving this issue.
|
KB10283
|
LCM - AHV NIC upgrade failure
|
LCM NIC upgrades can fail in AHV with
|
LCM NIC upgrades can fail with the following traceback on AHV.
nutanix@cvm:~$ less ~/data/logs/lcm_ops.out
We can see in the dmesg and journal logs that the node in Phoenix has lost all network connectivity:
phoenix / # dmesg | grep bond0
journalctl logs:
phoenix / # journalctl | grep bond0
|
Currently there is no workaround, and the node should recover by itself.
If the upgrade fails and the link does not come up after some time, we can try rebooting the node and waiting. If the link still does not come up, flash the card manually with older firmware, or replace the card based on further troubleshooting.
|
KB14733
|
HPE DX nodes with hardware Data at Rest Encryption may fail to boot the CVM
|
There is an issue where HPE DX nodes with SED drives and a Microchip e208i HBA that have hardware Data-at-Rest Encryption enabled may not be able to boot the CVM after a reboot.
|
There is an issue where HPE DX nodes that also have a Microchip e208i HBA and hardware Data-at-Rest Encryption enabled may fail to boot the CVM after a reboot. This generally occurs after a maintenance activity such as a firmware upgrade, hardware replacement, power event, or other activity that requires a node with this configuration to be rebooted. During that activity, the SED-enabled disk drives fail to be correctly unlocked, which results in the CVM failing to boot.
To identify if you are running into this issue:
Confirm whether the below combination of node and disk models is present on the cluster. The combination below is the only one that supports the SED feature on HPE nodes:
Node Model : HPE DX 360-8 GEN10 Plus (Note: Gen10 Plus FSC models do not hit this issue.)
Disk models :
PM6 3.84 TB : VO003840XZCLT
PM6 7.62 TB : VO007680XZCMB
You can check the Node model as follows :
nutanix@cvm$ ncli ru ls | grep -i "Rack Model Name" -A2 -B3
You can check the disk model as follows:
nutanix@cvm$ ncli disk ls | grep -i "Self Encrypting Drive" -A8 -B7
When you run into this issue, the server shows the following message on POST:
"Drive Locked. Action : Unlock Encryption Enabled Drives(SED)"
The CVM will move into the rescue state, and the RAID will be shown as broken.
rescue # cat /proc/mdstat
This issue only occurs in the case of a disk firmware upgrade (regardless of version) or a power outage (power cycle) event on a node with the SED disks.
|
Cause of the issue :
SED drives revert to a locked state after a power-off event. Nutanix expects the first 3 partitions (root, alt-root, and /home) to always be in an unlocked state and only the 4th partition /dev/sd[ab]4 (metadata partition) to go into a locked state. This allows the CVM to start without KMS connectivity and start the Hades service, which communicates with the KMS and unlocks the encrypted bands after fetching the keys from the KMS.
With the specific HPE node configuration indicated in the description, all 4 partitions present on the disks are locked by the controller (e208i). This in turn causes the CVM to fail to boot up and makes it impossible to unlock the drives for this node and/or access the data on the encrypted bands on the disks.
If you suspect your customer is running into the above issue, engage Engineering via an ONCALL.
|
KB3654
|
HPE to Nutanix DISA Case Handoff
| null |
The objective of this Services Statement of Work is to establish how Hewlett Packard and Nutanix will work together to provide 99.95% availability for Nutanix appliance(s) located in DISA data centers. Nutanix does not dispatch parts for HPE DX, Dell XC, or any other third-party hardware. HP's primary function is to take the initial service call from DISA and warm hand off that call to Nutanix Support. Nutanix will answer DISA questions and perform problem identification that results in a remote solution or a need for HP to go on-site to pull and replace a defective part or unit. The HP technicians will typically not have console access nor the skill set to operate the Nutanix appliance. Nutanix shall provide that a console be available for DISA use or if selected entries are required by the HP technician. A government laptop, crash cart, or other device is the responsibility of Nutanix. DISA requires defective media retention for all disk drives. Currently, DISA retains all defective disk drives and memory-retentive devices. Such defective items will not be surrendered to HP or Nutanix.
|
Level 1: Call management
HP will provide a 24 x 7 x 365 Help Desk. All calls for maintenance will be placed to the United States toll-free number. HP will warm hand off the service incident to Nutanix Support. Current contact information:
Email: [email protected]
Phone: 844 347 2457, option 1 -> (number for location)
Level 2: Nutanix Technical Assistance
Nutanix shall provide immediate priority telephone technical assistance directly to the customer, or a callback within 30 minutes for non-priority requests. Telephone answering machines or messaging are not allowed; human interface is a requirement. Problem identification shall be made by the Nutanix technical assistance team. Nutanix will make replacement parts available at the DISA locations within four (4) hours from the time the HP Help Desk contacts Nutanix Support and it is determined that a repair item is necessary. CONUS locations:
Chambersburg, PA
Columbus, OH
Dayton, OH
Denver, CO
Fort Meade, MD
Huntsville, AL
Jacksonville, FL
Pensacola, FL
Mechanicsburg, PA
Montgomery, AL
Ogden, UT
Oklahoma City, OK
San Antonio, TX
St Louis, MO
Warner Robins, GA
Ft Bragg, NC
Ft Eustis, VA
Ft Knox, KY
OCONUS locations:
Germany
Hawaii
Alaska
Bahrain
Japan
South Korea
Nutanix Support will notify the HP Help Desk when problem identification has been made, along with the approximate time that repair items will be available on-site at the DISA location. Nutanix shall provide all software support and make available a method to deliver patches, updates, and new releases as necessary.
Level 3: Dispatch
HP will dispatch an IT-1 cleared service technician to the site to pull and replace the defective part. The on-site HP technician will work with the Nutanix remote technical support team to replace the defective item. Nutanix shall ship replacement parts prior to receiving the defective part. HP will return the defective item no later than 45 days. The HP technicians are capable of limited diagnostic assistance when on-site and will work at the direction of the Nutanix remote technical support team to effect repairs within the 6-hour call-to-repair window or a scheduled service window allotted by DISA. The Nutanix appliance shall be configured to eliminate single points of failure. DISA may choose to continue operations in a degraded mode during failover. It is DISA's option to request a 6-hour call to repair or delay the repair until a scheduled service window.
Level 4: On-site Nutanix technical escalation support
In the event that problem resolution has not been completed in the allotted time or scheduled service window, Nutanix shall supply on-site technical support with DOD IT-1 cleared individuals within 24 hours of notice that escalation support is necessary. Nutanix shall provide an escalation plan that describes the levels of technical support and how they intend to provide on-site escalation assistance. This plan shall be approved by HP prior to this agreement taking effect.
|
KB10348
|
HBA firmware upgrade to PH16.00.10 using LCM failing with "ERROR: Failed to Upload Image!"
|
HBA firmware upgrade to PH16.00.10 using LCM failing with "ERROR: Failed to Upload Image!"
|
When performing an HBA firmware upgrade using LCM, the upgrade may fail during the process of reflashing the device. This will cause the upgrade task to fail and leave the node stuck in the Phoenix staging area.
Steps for Identification
Upgrade Task in Prism will appear with a failure similar to the one below.
Operation failed.
The same error signature should appear in the lcm_ops.out log on the lcm_leader CVM. There are 2 types of upgrade failures that can be found in the lcm_leader log.

(1) uEFI BIOS failure
Log signature in lcm_ops.out:
... ... DEBUG: [2020-11-12 19:15:52.563847] The Command:
(2) Firmware failure
Log signature in lcm_ops.out:
... ... DEBUG: [2020-11-17 18:13:21.438833] The Command:
|
Follow KB 9437 http://portal.nutanix.com/kb/9437 to recover the node using the LCM node recovery script. If this fails on AHV, then use KB 3483 http://portal.nutanix.com/kb/3483 to bring the node out of phoenix mode manually. If it fails on ESXi then use KB 9437 http://portal.nutanix.com/kb/9437 to recover it manually.
Once the cluster is back to a normal state, SSH to the CVM where the upgrade failure occurred and verify the firmware and uEFI BIOS versions on the HBA using the following command:
nutanix@CVM:~$ sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil 1
The versions you should expect to see after a successful upgrade are 16.00.10 for firmware and 18.00.00.00 for EFI BIOS, as shown above.

In certain cases, we have observed that an error similar to the one below has appeared when running lsiutil. If you see such an error, or if you see anything other than the expected versions, please provide this information along with your logs in the Jira ticket.
nutanix@CVM:~$ sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil 1
Additionally, if any error like the above was seen, gather the lsiget logs from the CVM using the steps below. If no errors showed up from the lsiutil command, there is no need to gather the lsiget logs.
Download the lsigetlinux_latest.tgz to the CVM, from https://www.broadcom.com/support/knowledgebase/1211161499563/lsiget-data-capture-script https://www.broadcom.com/support/knowledgebase/1211161499563/lsiget-data-capture-script. Extract the bundle to ~/tmp (/home/nutanix/tmp), use sudo to run the ./lsigetlinux_xxxxxx.sh script:
Note: Extracting lsigetlinux_latest.tgz to the /tmp folder results in a Permission denied error, so use ~/tmp instead.
nutanix@CVM:~/tmp$ tar -zxvf lsigetlinux_latest.tgz
Change the owner and group of the log bundle to nutanix:nutanix:
nutanix@CVM:~/tmp/lsigetlinux_052020$ sudo chown nutanix:nutanix lsi.linux.ntnx-19fmxxxxxxxx-c-cvm.121820.232156.tgz
Upload the log bundle to the case and ENG.
This is an active investigation by Engineering. Please collect the LCM log bundle shown in the failed task in Prism. Please provide the following information in the Jira ticket (found in the Internal Comments of this KB):
Location of the unzipped LCM log bundle (be sure to open permissions for all internal files+directories) as described in KB-2620 http://portal.nutanix.com/kb/2620.
Location of the unzipped lsiget log bundle
Model number of the node where the upgrade failed (example: NX-3060-G5)
Serial number of node
Serial number of block/rackable-unit/chassis
HBA firmware version prior to upgrade
HBA firmware version after failed upgrade
HBA EFI BIOS version after failed upgrade
Details on any errors seen when running lsiutil.
Failure scenario: LCM version
nutanix@CVM:~$ zkcat /appliance/logical/lcm/update
You should run an LCM Inventory before attempting to upgrade the remaining nodes in the cluster.The workaround is to use the ISO from KB-6937 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000LMelCAG and manually upgrade the HBA firmware.
|
KB5241
|
Nutanix Self-Service - How to clean up app that fails to remove using delete and softdelete actions
|
NuCalm How to clean up app that fails to remove using delete and softdelete actions
|
Nutanix Self-Service (NSS) is formerly known as Calm.Customers may experience a failed app deployment that cannot be deleted with either the Delete or Soft Delete actions. In these instances, the buttons for these actions will be greyed out on the CALM application management page. A sample screenshot below shows the delete and soft delete actions after failure.
|
If a customer has reported a deletion failure, identify when the deletion was triggered by viewing the "Started" timestamp of the action in the UI. Capture the app logs from the UI and a log bundle targeting both Nucalm and Epsilon logs, and attach them to the Calm ticket in the Jira section of this KB before performing the workaround.

Please note that this procedure is for deleting customer-created apps. For Nutanix Apps (Self-Service, Foundation Central, Objects, NKE, Files), refer to https://portal.nutanix.com/kbs/14245
IMPORTANT: Engage Engineering via TH/ONCALL before executing the following workaround.Execute the following commands from the PC VM to delete apps that are unresponsive to the Delete or Soft-Delete actions, or when these buttons are grayed out.
Pro Tip: If you need to identify the underlying VMs in AHV associated with the Calm APP then follow https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=STK&title=Calm+How+To%27s#CalmHowTo's-HowtofindouttheAHVvm'srunningforagivenCalmAPP https://confluence.eng.nutanix.com:8443/display/STK/Find+out+the+AHV+vm%27s+running+for+a+given+Calm+APP
1. To find the APP UUID, click on the Apps tab on the left and select the Application Name that you need to delete.
Note: If you have multiple APPS that need to be deleted, you can run the following command on the PC VM, which will provide a list of app names and UUIDs that are in an error state.
nutanix@NTNX-PCVM:~$ curl -k -s -u admin 'https://127.0.0.1:9440/api/nutanix/v3/apps/list' \
When prompted, enter the PC admin user password.
Sample output of the above command can be found below.
[{'name': u'Jenkins_launch', 'uuid': u'24314c9a-5aa3-414a-9ea1-e3940c80b9fb'},
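The curl command above is shown truncated; as a rough sketch (assuming the standard v3 POST list semantics with a page size of 250 and the usual entities/status/metadata response shape - verify against your PC version), you can fetch the full list and inspect each app's state:

nutanix@NTNX-PCVM:~$ curl -k -s -u admin -X POST 'https://127.0.0.1:9440/api/nutanix/v3/apps/list' -H 'Content-Type: application/json' -d '{"length": 250}' | python -m json.tool | less

Look for entities whose state field indicates an error and note their name and uuid values.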
2. Enter the docker NuCalm environment from the PCVM cli.
a) login to the NuCalm container using the below command:
nutanix@NTNX-PCVM:~$ docker exec -it nucalm /bin/bash
b) Activate the NuCalm virtual environment (venv).
[root@ntnx-a-pcvm/]# activate
c) Enter into the Calm Shell 'cshell.py' to delete the application
(venv) [root@ntnx-10-24-140-31-a-pcvm /]# cshell.py
d) Enter each of the following strings into the cshell prompt replacing the appropriate variables at each step.
app_name = "<application_name>"
#Replace <application_name> with the name of app
from calm.lib.model import Application
ap = Application.get_object('<application_uuid>')
# Replace <application_uuid> with the Application uuid retrieved from Step 1
if ap.name == app_name:
Note: Indentation must be correct for if/else statement.
app = model.Application.get_object('<app_uuid>')
# Replace <app_uuid> with the Application uuid retrieved from Step 1
import ujson
# Replace <AppUUID> with the Application uuid retrieved from Step 1
A successful run should result in the following output
Return <200>
Once all applications have been removed, have the customer confirm that they are removed from the GUI as well.
|
KB5845
|
Nutanix Self-Service - Enable Calm fails with error "Failed to increase memory for PC VMs"
|
Enabling Calm on ESXi requires the Prism Central Virtual machine memory hot add to be enabled
|
Nutanix Self-Service is formerly known as Calm.When enabling Calm, Enable App management task fails with error "Failed to increase memory for PC VMs".
The "VM Update" subtask which follows it will show the error "InternalTaskCreationFailure Error creating host-specific VM update task. Error: MemoryHotPlugNotSupported: Memory hot plug is not supported for this virtual machine.".This issue happens when the Enable Calm operation is started without hot-add being enabled on Prism Central VM.
|
Enable memory hot-add on the Prism Central virtual machine from vCenter and retry enablement.

To enable memory hot-add:

1. Open vCenter.
2. Shut down the Prism Central Virtual Machine.
Caution: Prism Central supports features that could be damaged by shutting down the Prism Central VM at the wrong time. Do not shutdown a Prism Central VM until you are certain it can be done safely.
3. Select Edit Settings > Options > Memory Hotplug.
4. Check the "Enable" check box and select OK.
5. Power on the Prism Central Virtual Machine.

Retrying the App Enablement workflow should no longer report the errors regarding memory hot-add.
|
KB15153
|
G8 node may experience CATERR - CPU IFU BUS Fatal error.
|
G8 node may experience CATERR - CPU IFU BUS Fatal error. AHV may hang, ESXi will experience PSOD.
|
G8 node may experience CATERR - CPU IFU BUS Fatal error. AHV may hang; ESXi may experience a PSOD.

Health Event Log:
2023-07-13 01:06:56 Processor CATERR has occurred - Assertion Sensor-specific
Example PSOD from ESXi:

The node may boot after the power cycle. Put the host in maintenance mode until replacement to avoid impacting production workloads.
|
Engineering is still investigating the issue. Please collect the following data and details and provide them to the HW engineering team over ENG-576958 https://jira.nutanix.com/browse/ENG-576958 with a comment:
Node model
BMC/BIOS versions
Hypervisor version
CPU and DIMM model info
HBA details and firmware info
BMC TS log
HEL/SEL
NIC and NIC firmware details
dmesg log in case of AHV
PSOD screenshot if ESXi

Important: Download the System Crash Dump file (Cdump.txt) from IPMI:
Please do not "Generate" a new one - as it will miss the important information of the actual issue.
Replace the node and mark it for Failure Analysis.
|
KB15873
|
Policy Engine Deployment Fails with "Failed to retrieve the prism central information, err Could not find VM uuid using bios uuid"
|
Policy Engine Deployment Fails with "Failed to retrieve the prism central information, err Could not find VM uuid using bios uuid"
|
Policy Engine Deployment may fail with:
Failed to retrieve the prism central information, err Could not find VM uuid using bios uuid : 43bacb92-cbd8-cf4d-8776-79fc0a496cae."
This is generally seen in ESXi environments due to a uhura_uvm_uuid mismatch in the Zeus config of the PCVM. The most probable cause is that the PCVM(s) were vMotioned from one ESXi cluster to another. Verify with the customer whether that was indeed the case and document the cause in the case.
|
How to identify
SSH to the Prism Element cluster hosting the PCVM and check the VM UUID.
nutanix@CVM$ ncli vm ls name=<PC_VM_Name> | grep Uuid
Verify that the PC VM UUID in Prism Central's Zeus matches the PC VM UUID from above step.
nutanix@PCVM$ zeus_config_printer | grep -i uhura
In this scenario, the current PC VM UUID and "uhura_uvm_uuid" in the PC Zeus config are different. In other scenarios, the uhura_uvm_uuid value could be missing entirely.
How to fix
Update "uhura_uvm_uuid" with the correct value or add the PC VM UUID in Prism Central Zeus.
NOTE: Consult with a Sr. SRE or DevEx prior to any Zeus edits.
nutanix@PCVM$ edit-zeus --editor=vim
The update/addition has to be made near the following values:
cassandra_schema_timestamp: 4
As Zeus updates are time-critical, verify that the correct UUID is updated in zeus_config_printer. Once the correct uhura_uvm_uuid is updated/added, delete the VMware account on which the PCVM is hosted from Self-Service > Settings > Accounts and re-add the account. This should update the account spec with the correct details. Proceed with the Policy Engine enablement again.
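After the edit, re-run the check from the identification steps to confirm the value now matches the PC VM UUID (this simply reuses the command shown above):

nutanix@PCVM$ zeus_config_printer | grep -i uhura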
In cases where the PCVM UUID was already corrected previously, enablement may still fail with:
Done with scanning all verified vmware accounts, could not find vmware account where prism central vm reside
Here, we just need to delete the VMWare account on which the PCVM is hosted from Self-service > Settings > Accounts and re-add the account.
|
KB12480
|
Nutanix Files - Smart DR: After SmartDR failover existing folders are not accessible
|
This article helps in troubleshooting issues where, after a failover using Smart DR, existing TLDs/folders are not accessible.
|
We currently have the compatibility check for normal TLD creations for Incompatible (Legacy) shares. Incompatible (Legacy) shares are the shares that were created before AFS 3.5.1. We generally check the legacy share and convert the name to all lowercase and write it to the database. However, during failover/failback scenarios in DR, there is no compatibility check for Incompatible (Legacy) shares.
Hence, the name of the share is copied as-is to the DB. The issue is that the remote TLD key in the DB will contain capital letters; when SMB tries to access it, it converts the TLD name to lowercase and tries to fetch the TLD key using all lowercase. Since the DB only has the capital-lettered key, SMB fails to access it.
The below error is received when you attempt to access shares after a Smart DR failover is performed:
<Path to TLD> is unavailable. If the location is on this PC, make sure the device or drive is connected or the disk is inserted, and then try again.
Metadata Information about folders gives below error:
share.get_authorization_info AFTShares Global
If you check the ownership of the TLD, "None" will be shown in place of the VG UUID in the Share Path field, as below:
<afs> share.owner_fsvm AFTShares path=Global
The following error will be observed in the replicator logs:
I1118 04:10:13.736076Z 33610 replicate.go:2345] b2fdc68a-b60d-4d0a-57fc-fb245c0af768: Post replication rpc returned result: Error:<> err: %!s(<nil>)
|
Nutanix recommends that customers upgrade to Files Version 4.0.2 or later which contains the fix for this issue. The Nutanix Files cluster can be upgraded via LCM https://portal.nutanix.com/page/documents/details?targetId=Files-v3_8:fil-fs-lcm-upgrade-c.html or using the Prism 1-Click upgrade software. Refer to the Nutanix documentation https://portal.nutanix.com/page/documents/details?targetId=Files-v4_0:fil-file-server-download-wc-t.html for details on performing the upgrade.
File Server Version Identification
Refer to the steps mentioned in the Logging Into A File Server VM https://portal.nutanix.com/page/documents/details?targetId=Files-v4_0:fil-file-server-fsvm-login-t.html section of the setup guide to SSH into a file server VM (FSVM). Once logged in, execute the following command to retrieve the file server version:
nutanix@NTNX-FSVM:~$ afs version
Alternatively, to access all File Server versions, log on to a Controller VM in the cluster with SSH and type the following command in the shell prompt:
nutanix@NTNX-CVM:~$ ncli fs ls | egrep -i "Version"
You can also identify the version from Prism UI by going to the LCM page or from the main dashboard of Files Console UI.
|
KB15728
|
Prism Central - Quick Access stuck in "Loading"
|
This article describes a situation where PE Quick Access from Prism Central is stuck in "Loading".
|
Opening Prism Element via the Quick Access feature may result in a perpetual "Loading Prism Element" state, or some PE widgets may not be correctly displayed as shown on screenshots below:
Identification
The Prism Central instance is version pc.2023.3.x and small sized:
nutanix@PCVM:~$ zeus_config_printer | grep -A1 pc_cluster_info
There are two possible scenarios that may be faced.
Scenario 1:
Browser Developer Tools Network Tab shows API requests with status 500 or 503 and the response message is:
{"message": "Http request to endpoint 127.0.0.1:9080 failed with error. Response status: 2"}
On the PCVM, /home/nutanix/data/logs/prism_gateway.log contains multiple "IOException encountered java.net.SocketTimeoutException: Read timed out" exceptions:
ERROR 2023-11-02 10:05:33,016Z pool-10-thread-23439 [] web.interceptors.LocalhostAuthenticator.authenticate:52 IOException encountered
PC VM average and current CPU utilization is high
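As a quick spot-check of the utilization (a minimal sketch using standard Linux tools available on the PCVM):

nutanix@PCVM:~$ top -bn1 | head -5
nutanix@PCVM:~$ sar -u 1 5

Historic utilization is also available under ~/data/logs/sysstats/ if you need to correlate with the time of the issue.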
|
Nutanix Engineering has resolved the issue in pc.2024.1, as per ENG-600142 https://jira.nutanix.com/browse/ENG-600142
Workaround:
On Prism Central, increase the vCPUs on the PC VM to 12-14. See the Solution -> Scaling up vCPU section of KB-2579 http://portal.nutanix.com/kb/2579 to do so.

In case the above workaround does not work, the PCVM average and current CPU utilization may be normal. Check for the below signatures as well:

1. In the Prism Gateway logs, the below signatures can be found:
ERROR 2024-04-16 08:41:31,109Z http-nio-127.0.0.1-9081-exec-83 [] prism.aop.RequestInterceptor.invoke:227 Throwing exception from GenesisAdministration.executeGenesisCall
2. The Response status error can be seen in the ~/data/logs/mercury.INFO logs as well for /users/session_info:
I20240416 08:41:37.083467Z 106956 request_processor_handle_v3_api_op.cc:845] <HandleApiOp: op_id: 4096495> Api request with message ID: 2, URI: /PrismGateway/services/rest/v1/users/session_info?_=1713256897066&p
3. mercury.out reports the below-
E20240416 08:40:01.529870Z 106957 request_processor_handle_iamv2_cookie_op.cc:1894] <HandleIAMv2CookieOp: op_id: 4095567> Authentication failed with error Username or password is empty
Nutanix Engineering has resolved the issue in pc.2024.1 and pc.2023.4.0.2 as per ENG-565071 https://jira.nutanix.com/browse/ENG-565071.As a workaround, restart the Prism Service on the PCVM:
nutanix@PCVM:~$ genesis stop prism && cluster start
|
KB16940
|
Prism Connect to Citrix Cloud workflow fails
|
Connect to Citrix Cloud workflow in Prism Element fails to make the connection after going through the configuration steps. Deploy the Cloud Connector VMs manually.
|
The expected behaviour for Prism Connect to Citrix Cloud workflow is to use the Citrix Cloud Connector automated installer functionality to establish secure communication between Citrix Cloud and Nutanix.
The below error message is present in the Prism Element (PE) UI and the CLI when PE fails to make the connection:
INTERNAL_ERROR: "Internal Server Error. Rebuild reservation is enabled on the cluster. Cannot switch domain awareness level"
Failed tasks seen in Prism Element UI:
Disabling resource reservation in Prism Element does not resolve the issue.
Error message after checking tasks from the CLI:
{
Error signatures in the home/nutanix/data/logs/aplos.out log:
2024-04-19 12:52:46,950Z INFO adapter_details.py:123 no details available in DB. 2024-04-19 12:52:46,950Z INFO adapter_details.py:37 connector details: None 202
|
Deploy the Cloud Connector VMs manually by following the steps below:
Manually deploy the Cloud Connector VMs. See Citrix technical requirements for Connector VMs https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-resource-locations/citrix-cloud-connector/technical-details.html. Also, see the steps on how to deploy a VM on a Nutanix cluster https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_8%3Awc-vm-create-acropolis-wc-t.html&a=kuaB8BVUrNrLKm08e5ca23fe4167a3a1fdaf22e7da4aeeb5db2e94870327a599b80fa6ee04dca381.
Install and configure the Citrix Cloud Connector on the VMs.
Register the Citrix Cloud Connector VMs to the Active Directory domain on Citrix Cloud.
Establish the Citrix connection to the Citrix Cloud resource location.
Install the Nutanix Plug-in.
|
KB14930
|
[NDB] Database deletion from NDB server failing with the following error: 'failed to load details era drive info'
|
This KB describes symptoms and workaround when a database deletion operation from NDB server is failing with the following error: 'failed to load details era drive info'
|
During database removal, API calls from era_agent to era_server are authenticated using the access key. The access key impersonates the NDB user account that initially provisioned the database.

If the Era Drive owner ID in the NDB backend database is different from the NDB user account impersonated during the API call, and the impersonated NDB user account that initially provisioned the database had the Super Admin role revoked, database deletion may fail with the error 'failed to load details era drive info'.

Identification:
When deleting a database from NDB server, the operation is failing with the following error : 'failed to load details era drive info'
In the eracommon.log file on Era agent, we see an error and backtrace similar to the following, note 'ownerId': '65610b82-e544-4a9d-864a-d73a8b970b36' in the second line below, this is the account used to provision database:
less ./x.x.x.x-2023-03-20-14-07-42-UTC/logs/drivers/eracommon.log
The user account impersonated by the API call has no Super Admin privileges:
[era@localhost ~]$ era-server
|
Workaround:
Adding the Super Admin role to the account that is triggering the database cleanup (in this example, [email protected]) will allow bypassing the Era drive ownership check, and the operation will complete successfully.
|
KB10196
|
File Analytics - Scan fails due to special characters in AD account
|
Nutanix File Analytics scan can fail due to presence of special character in AD credentials. Data will fail to populate in the UI.
|
In some instances, the File Analytics UI can appear blank or without data.

The following alert 'FileAnalyticsVMComponentFailure' can be seen for the File Analytics VM (FAVM):
One or more components of the File Analytics VM X.X.X.X are not functioning properly or have failed
Services running on FAVM are running and in healthy state:
[nutanix@NTNX-x-x-x-x-FAVM /]$ docker ps -a
Container metadatacollector is running:
[nutanix@NTNX-x-x-x-x-FAVM /]$ docker exec -it 97060a469c70 bash -c "supervisorctl status"
But collector processes can be seen in Terminate (T) state:
[nutanix@NTNX-x-x-x-x-FAVM /]$ ps -ax --forest | grep -oE ".{0,150}main.py"
When running a scan, the following error can be seen in monitoring.logs.ERROR:
2020-09-27 08:53:04Z,158 ERROR 17817 common.py:invoke_rpc_method: 538 - Failed to make rpc request to is_fs_scan_in_progress
|
This issue is caused by the presence of special characters in the file server administrator AD account. A fix was released with File Analytics 3.0.

The issue in our case was that the AD password contained one of the following characters: comma (,), single quote (') or double quote (").

As a workaround, change the password and remove any special characters.
|
KB5218
|
Enable Management Network Error - Setting ip/ipv6 configuration failed
|
The internal vmk1 IP 192.168.5.1 on vSwitchNutanix interferes with the management interface on vSwitch0.
|
The management interface on vSwitch0 gets assigned the internal vSwitchNutanix vmk1 port IP (192.168.5.1) instead of the user-defined IP, and changing it from DCUI is not successful.
|
1. Check if any other vmkernel port is being used for management traffic. If you find one, disable management traffic on that vmkernel port so that it does not interfere with the management interface on vSwitch0.

2. Compare the vSwitch0 configuration and the management vmkernel port configuration with another working host and make the necessary changes on the problematic host.

3. If none of the above work, delete the vmk1 interface using the command:
esxcli network ip interface remove --interface-name=vmk1
As soon as you delete vmk1, the management vmkernel port should pick up the user-defined IP. Recreate the vmkernel port vmk1 and assign it the internal IP 192.168.5.1 using the following commands:
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="vmk-svm-iscsi-pg"
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.5.1 --netmask=255.255.255.0 --type=static
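To confirm the result, you can list the vmkernel interfaces and their IPv4 configuration (a quick verification sketch using standard esxcli commands):

esxcli network ip interface list
esxcli network ip interface ipv4 get

vmk0 should now show the user-defined management IP, and vmk1 the internal 192.168.5.1.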
|
KB7195
|
LCM Upgrade Error: Failed to start LSB: Bring up/down networking
|
LCM upgrade fails with error "Failed to start LSB: Bring up/down networking ". Upgrade Foundation to the latest version to resolve the issue.
|
While upgrading BMC/BIOS, SSD & HDD, HBA, or Host Boot Device (SATADOM) firmware using LCM, the CVM boot sequence is modified, causing the host undergoing the upgrade to boot into the Phoenix prompt.
The error signature while the host is booting:
Waiting for phoenix cdrom to be available before copying updates...
Check ~/data/logs/lcm_ops.out on the CVM for the following signature:
2019-01-02 13:11:56 INFO catalog_staging_utils.py:283 Staging modules on phoenix 10.x.x.26
|
Ensure that the cluster is running AOS version 5.10.1 or later. Upgrade Foundation to the latest version to resolve the issue.
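To confirm the Foundation version currently installed on each CVM before and after the upgrade, the version file can be read directly (a minimal sketch; this is the standard Foundation install location on CVMs):

nutanix@cvm$ allssh "cat ~/foundation/foundation_version"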
|
""ISB-100-2019-05-30"": ""Title""
| null | null | null | null |
KB6784
|
Cassandra | Leader sent prepare for a lower paxos instance error in cassandra system.log
|
KB describing a scenario where token range Leader node sent prepare for a lower Paxos instance error in Cassandra system.log
|
Review of the Cassandra system.log may reveal errors such as "Leader sent prepare for a lower paxos instance, even though local replica has a delete chosen value":
ERROR [PaxosReplicaWriteStage:7] 2018-10-23 11:19:02,543 PaxosRequestAtReplica.java (line 647) Leader sent prepare for a lower paxos instance, even though local replica has a delete chosen value. Key: QyYD6000:91890722 col: 1 leader sent clock:
|
This error is benign as long as it does not repeat for the same key. In the first line of the above log excerpt, we can identify the key and column for the value: Key: QyYD6000:91890722 col: 1.

For example, consider the following error message excerpts from the same Cassandra node:
ERROR [PaxosReplicaWriteStage:8] 2018-08-13 20:00:41,238 PaxosRequestAtReplica.java (line 647) Leader sent prepare for a lower paxos instance, even though local replica has a delete chosen value. Key: FzVV6000:96169145 col: 1 leader sent clock:
The first error message indicates Key: FzVV6000:96169145 col: 1. The second error message indicates Key: bITS6000:95443829 col: 1. Since these are not repeated errors on the same key and column, we can consider the above errors benign.

These messages indicate that the leader node has an older value for the key mentioned in the error message than the replica nodes. This can occur when a write succeeds on all replica nodes but not the leader node - like when the leader node for a token range is in forwarding mode. Once the leader node returns to normal, it will synchronize the new value from the replica nodes and no longer exhibit the same PrepareFailedEpochMismatch error.

If the error repeats on a key with the same signature, check whether the same value is reported for the key from each node in the cluster. This can be done using medusa_printer and deriving the egroup ID from the key in the error message.

Checking the values using the second error above as an example, the key mentioned is bITS6000:95443829, where 95443829 is the egroup ID, and the cluster consists of 3 nodes. Submit the egroup ID to medusa_printer and confirm the values are identical when the metadata request is made from each node:
medusa_printer --lookup egid --egroup_id 95443829 --node_ip_for_read <CVM A IP>
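To compare the metadata returned by each node at a glance, a simple loop over all CVMs can be used (a sketch; hashing the output only gives a quick signal - if the hashes differ, diff the full outputs before drawing conclusions):

nutanix@cvm:~$ for ip in $(svmips); do echo "--- $ip ---"; medusa_printer --lookup egid --egroup_id 95443829 --node_ip_for_read $ip | md5sum; done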
If the values are different between nodes, engage engineering via ONCALL to resolve the issue.
|
KB4086
|
How to Correctly Restart IPtables on AHV
|
The following KB article describes how to correctly restart IPtables on AHV.
|
IPtables is the firewall installed on the Nutanix Controller VM (CVM) and AHV. You can easily stop the firewall service on the AHV host by running the following command.
root@host$ service iptables stop
If you then run the service iptables start command, the full set of firewall rules that IPtables loads by default will not be loaded; the service comes up with only a smaller set of rules, as can be seen by running the following command.
root@host$ service iptables status
This article discusses ways to load the right IPtables rules on AHV.
Note: No problem occurs when you restart IPtables on a CVM.
|
Perform one of the following procedures to correctly restart IPtables on AHV.
Method 1: Restart the Host
Restart the AHV host. When AHV comes up, IPtables must load the default rules.
Follow the steps in Shutting Down a Node in a cluster https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-node-shutdown-ahv-t.html and Starting a Node in a cluster https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-node-start-ahv-t.html to stop and start the AHV host.
Method 2: Through Command Line
Log on to one of the Controller VMs with SSH.
Determine the IP address of the CVM with the Acropolis Master service.
nutanix@cvm$ allssh "links -dump http://0:2030 | grep Master"
Sample output:
Executing links -dump http://0:2030 | grep Master on the cluster
SSH to the IP address of the Acropolis Master service.
Perform the checks described in KB 12365 http://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis.
Stop the Acropolis Master service.
nutanix@cvm$ genesis stop acropolis
Restart the Acropolis service. (This is non-intrusive.)
nutanix@cvm$ cluster start
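To verify that the full rule set was reprogrammed on the host, compare the number of loaded rules against a known-good host (a rough sketch; the exact count varies by AOS/AHV version, so treat it as a relative comparison):

root@host$ iptables -S | wc -l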
|
KB17202
|
LCM Framework upgrade with proxy to 3.0 might get stuck, leaving genesis in a crashloop.
|
When upgrading the LCM framework from version 2.7.1 to 3.0, genesis may enter a restart loop trying to upgrade to version 2.7.1.
|
During an LCM Framework upgrade from version 2.7.1 to 3.0, the proxy server was serving a cached image of the LCM Framework, so genesis was continuously restarting, upgrading to the already installed version.From the genesis.out, we can see that upgrade intent set to 2.7.1:
2024-05-17 10:21:46,380Z INFO 58638256 framework_updater.py:875 LCM tar bundle: /home/nutanix/tmp/lcm_staging/framework_image/lcm_2.7.1.44719.tar
|
************** Please engage a Sr. SRE Specialist or LCM SME for confirmation before performing these steps. **************
************** The steps in the KBs below involve making ZK edits. **************

If you are facing this issue, follow the KBs below to clean up the catalog, stop the inventory, and upgrade the LCM framework manually:
KB-9658 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000XmaXCAS to clean up the catalog
KB-14042 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000H4qzSAC to manually upgrade the framework
KB-16699 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0VO0000001qML0AY to recover Genesis from the crashloop and remove the Inventory intent
|
KB12664
|
Dell LCM - Redfish based upgrades fail with error stage 1 "Unable to create Dell Redpool Tools, error Unable to obtain dell plugin"
|
This KB describes an issue, where Dell iDRAC upgrade might fail in stage 1
|
Nutanix has released DELL-LCM-2.0.0 along with LCM-2.4.4 for DELL Clusters.
DELL RIM 2.0.0 enables Redfish-based upgrades for Dell firmware on Dell 14G and Dell 15G nodes. These Redfish-based upgrades provide a faster upgrade procedure. Nutanix has identified an issue where the iDRAC upgrade on Dell clusters might fail with the below error messages:
Operation failed. Reason: Update of Hardware Entities (Redfish) failed on xx.xx.xx.xx (environment hypervisor) at stage 1 with error: [Unable to create Dell Redpool Tools, error Unable to obtain dell plugin] Logs have been collected and are available to download on xx.xx.xx.88 at /home/nutanix/data/log_collector/lcm_logs_xx.yy.zz.tar.gz
|
Nutanix is working with Dell to qualify a newer Dell iDRAC firmware with the fix and release it with LCM. This KB will be updated when the newer firmware is available for upgrade in LCM.
Workaround :
Soft reset the iDRAC where the upgrade is stuck:
Connect to the iDRAC web interface.
Log in to the interface by entering the username and password.
Click the Maintenance tab.
Select Diagnostics.
Click Reset iDRAC (or Reboot iDRAC in newer versions) to reboot the iDRAC.
Run the LCM Inventory again.
LCM will show the upgrade of Hardware Entities (Redfish) (14G Firmware Payload).
The LCM upgrade will finish successfully.
For further queries, please contact Nutanix support https://portal.nutanix.com/.
|
KB14906
|
VM related tasks may take a long time to complete due to insights_server not registering clients
|
This article describes a specific issue where an insights_server issue can cause VM-related tasks to take longer than expected to be marked as completed.
|
During rare scenarios, when insights_server on one of the nodes goes down, the watch client is unregistered, and an error is thrown to the client with the message that the watch client no longer exists. Due to a rare race condition, the server might never send an error to the client. The client keeps doing the normal operations (watch registration, unregistration, etc.). Although they appear to succeed, they are treated as a NO-OP on the insights_server, resulting in no updates being delivered to the client.

A consequence of this is that VM-related tasks can take longer than usual to complete, where the behavior is that the tasks stay at 1% until they are eventually marked as completed. The types of tasks that are seen to be affected are VM management tasks like VmClone, VmCreate, VmDelete, VmSetPowerState, etc. Each subtask takes 5 minutes to be marked successful, although the subtask is executed and completed in a few milliseconds.

Example: Initiating a VmClone task from the CLI creates the below tasks; the parent Acropolis task took 15 minutes to complete.
63fbbac6-da00-4178-9bc6-c83fd5a8e50b Acropolis 1519885 kVmClone kSucceeded
nutanix@CVM:xx.xx.xx.47:~$ ecli task.get 63fbbac6-da00-4178-9bc6-c83fd5a8e50b | grep -i time_
The time taken depends on the number of subtasks generated. A VmClone task initiated from the CLI will only have 1 parent Acropolis task and 3 child Acropolis tasks, hence the time taken is 15 minutes. In the case of a VmClone task triggered from the GUI, an additional 5 minutes is spent updating the anduril and uhura tasks, which leads to a total time of 20 minutes.
Task UUID Parent Task UUID Component Sequence-id Type Status
In the case of a VM power on/off task, you will see the Watchdog fired message for the task for 5 minutes in the Anduril leader logs, and eventually the task gets marked as complete by copying the task details from Acropolis. Logs: /home/nutanix/data/logs/anduril.out
2023-12-01 22:28:22,179Z WARNING task_slot_mixin.py:73 Watchdog fired for task 950d6689-56a2-54ed-9f9f-b22f8c2e6324(VmSetPowerStateTask) (no progress for 90.00 seconds)
When you filter the Anduril logs with the task in question, the task gets marked as complete after 300 seconds as seen in the example below:
nutanix@CVM>:~$ allssh "grep '950d6689-56a2-54ed-9f9f-b22f8c2e6324' ~/data/logs/anduril*"
Identification:
Please check the creation and completion time of the parent and child tasks:
nutanix@CVM:xx.xx.xx.47:~$ ecli task.get 63fbbac6-da00-4178-9bc6-c83fd5a8e50b | grep -i time_
As can be seen above, although each task completes execution within few milliseconds, the next subtask is only getting triggered after 5 minutes.Looking at the go_ergon logs on the go_ergon master, we can see that a new subtask is getting created only after the cas value is getting refreshed manually.
I1124 08:35:21.665827Z 15032 common.go:396] CREATED:TU=63fbbac6-da00-4178-9bc6-c83fd5a8e50b:PTU=:RTU=63fbbac6-da00-4178-9bc6-c83fd5a8e50b:Acropolis:1519885:kVmClone
Checking the insights_server.INFO logs on the go_ergon master, we can see that the watch client is not getting registered and hence, there is a delay as we are manually waiting for the cas value to refresh every 5 minutes.
E20231124 08:50:31.527510Z 14796 watch_coordinator.cc:1653] SetErrorIfWatchClientNotRegistered: Watch client is not registered. watch client = client_id: "TaskEntityWatcher:xx.xx.xx.63" session_id: "d41cbb7e-e557-46b3-796e-19ae3ef9f369" reset_watch_client_on_operations: 0
This signature is logged multiple times and the entire insights_server.INFO log file is flooded with this signature.
nutanix@CVM:$ allssh "grep -i 'watch client is not registered' ~/data/logs/insights_server.*INFO* | wc -l"
|
Solution
For a permanent fix, upgrade the AOS to version 6.7 or later.
Workaround:
Restart insights_server on the go_ergon master CVM where "Watch client is not registered" message is observed.
Note: Restarting insights_server and go_ergon does not clear out the "watch client is not registered" entries in the insights_server.INFO log. Additionally, when the services are restarted, the backlog of tasks will start to be processed and could cause a performance impact, so it is best to perform this during a maintenance window or outside production hours.
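A minimal sketch of the restart, following the same genesis pattern used for other services in this document (run it on the go_ergon master CVM identified in the steps above):

nutanix@CVM:~$ genesis stop insights_server && cluster start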
|
KB9057
|
Node removal gets stuck in a cluster as it is unable to migrate EC (Erasure Coding) egroups from the disks
|
Node removal process may get stuck at disk removal as EC egroups are not able to migrate from the disks to another block in the cluster.
|
When removing a node from a cluster, the removal may get stuck at the disk removal task because egroups with Erasure Coding cannot be migrated from the disk. When erasure coding is enabled, by default it is configured to prefer the fault tolerance to be block aware (BA). The strip size will be chosen to comply with this, even though it may result in smaller savings. Even if a cluster is set to node fault tolerance level, EC will set the fault tolerance level to block. For further explanation, see KB 6091 https://portal.nutanix.com/kb/6091.

A node removal scenario can get stuck if there is not sufficient space on the remaining nodes in the block(s) to migrate all the egroups into while still staying compliant with the BA setting.
Symptoms
Following symptom 3 in KB 16236 https://portal.nutanix.com/kb/16236, check how many egroups need to be migrated from the disk.
nutanix@cvm:~$ allssh "grep ExtentGroupsToMigrateFromDisk /home/nutanix/data/logs/curator.INFO"
In the above output, for this example, you see that there is 1 extent group on disk 47376936 that needs to be migrated from that disk for the node removal task to finish. There may be more in your scenario.
Sample output:
Executing grep ExtentGroupsToMigrateFromDisk ~/data/logs/curator.INFO on the cluster
Now find the egroup id that is residing on disk 47376936 by running the below command:
nutanix@cvm$ for i in `svmips` ; do echo ------$i------ ; ssh $i "grep 'Egroups for removable disk' /home/nutanix/data/logs/curator.INFO" ; done
OR
nutanix@cvm$ allssh 'cat data/logs/curator.* | grep "Egroups for removable disk"'
Sample output:
I0328 14:40:56.331195 5599 group_logger.cc:55] Egroups for removable disk 47376936: (7199337139)
From the output, egroup 7199337139 is being called out as the problematic egroup. Check the medusa_printer output for the egroup that is yet to be migrated:
nutanix@cvm:~$ medusa_printer --lookup egid --egroup_id <egroup-ID> | less
Example:
nutanix@cvm:~$ medusa_printer --lookup egid --egroup_id 7199337139 | less
From this output, check the egid metadata for the egroup IDs listed under erasure_code_parity_egroup_ids or erasure_code_info_egroup_ids (using the same command above).

Search Stargate logs on all CVMs for "Unable to pick a suitable replica on tier" or "Unable to allocate replica".
Example:
nutanix@cvm:~$ allssh "less ~/data/logs/stargate.* | grep 'Unable to pick a suitable replica on tier'"
If there is a match for the above-mentioned signature, check if there is enough space in the cluster and check the fault tolerance level of the cluster.
nutanix@cvm:~$ zeus_config_printer | grep cluster_fault_tolerance_state -A3
Verify that the desired fault tolerance level is set to kNode.
cluster_fault_tolerance_state {
Verify if prefer higher EC fault domain is set to true (default),
nutanix@cvm:~$ zeus_config_printer | grep prefer_higher_ec_fault_domain -B30
|
Workaround
The cluster has fault tolerance configured as node; however, Stargate is still trying to comply with block fault tolerance, as that is the default for the prefer-higher-ec-fault-domain setting. This is the behavior of this setting:
It is only used for a container that has erasure coding turned on. Non-EC containers do not use this setting.
When set to the default (true), EC strips will be created such that they are block-aware (BA) fault tolerant even if they are shorter. Existing longer EC strips that yield higher space savings but are lower fault-domain aware will be made shorter so that they become higher fault-domain aware.
In order to disable this setting and allow the Egroups to be placed in a node aware manner and strips to be created accordingly the setting can be modified on a per-container basis from NCLI:
nutanix@cvm:~$ ncli ctr update name=<ctr-name> prefer-higher-ec-fault-domain=false
Once the prefer-higher-ec-fault-domain value is set to false, the egroups will start migrating and the disk/node removal will complete successfully. This value does not need to be set back to true if the highest fault tolerance level expected of the cluster is node; however, it should be understood that the cluster cannot be block-aware in this case. Please consult an STL before making the changes. You can monitor migration progress as shown below.
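A simple way to monitor progress is to re-run the check from the Symptoms section and confirm the egroup count drops to 0 over subsequent Curator scans:

nutanix@cvm:~$ allssh "grep ExtentGroupsToMigrateFromDisk /home/nutanix/data/logs/curator.INFO"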
|
KB8457
|
[Performance] Degraded performance on clusters with certain SSD models due to low max_sectors_kb kernel setting
|
Certain SSD models on a Nutanix system can cause performance degradation due to a wrong kernel setting with the name: max_sectors_kb.
This KB explains how to identify this issue, the workaround to correct it, and the permanent resolution in AOS.
|
Certain SSDs on a Nutanix system can cause performance degradation due to a wrong kernel setting named max_sectors_kb. With a change in the firmware of these disk models, the mentioned setting was changed from the default of 512 to just 8. This issue has so far only been seen with some Samsung SSD drive models, but it also exists on non-Nutanix hardware, such as Dell, for the same drive models:
Samsung PM1633
Samsung PM1633a
Samsung PM1635a
As we are not 100% sure whether the problem also exists for the newer devices PM1643 and PM1645, it was decided to also include them in AOS 5.10.10.

With AOS 5.10.9, 5.15 and 5.16 the issue has been fixed for:
Samsung PM1633 Datacenter series SSD 3.84TB (MZILS3T8HCJM/003)
Samsung PM1635a series SAS SSD 800GB (MZILS800HEHPV3)
Samsung PM1635a series SAS SSD 1.6TB (MZILS1T6HEJHV3)
With AOS 5.10.10/5.15/5.16.1 and 5.17 the issue has been fixed for:

Nutanix:
Samsung PM1633a Datacenter series SSD 3.84TB (MZILS3T8HMLH/007)
Samsung PM1643 RI Datacenter series SSD 7.68TB (MZILT7T6HMLA/007)
Samsung PM1633 Datacenter series SSD 3.84TB (MZILS3T8HCJM/003)
Dell:
Dell Samsung PM1633a Datacenter series SSD 3.84TB (MZILS3T8HMLH0D3)
Dell Samsung PM1635a Datacenter series SSD 1.6TB (MZILS1T6HEJH0D3)
Dell Samsung SSD PM1645 1.6TB (MZILT1T6HAJQ0D3)
Dell Samsung PM1635a Datacenter series SSD 800GB (MZILS800HEHP0D3)
Dell Samsung SSD PM1645 800G (MZILT800HAHQ0D3)
Dell Samsung PM1635a Datacenter series SSD 400GB (MZILS400HEGR0D3)
Dell Samsung PM1643 RI Datacenter series SSD 3.84TB (MZILT3T8HALS0D3)
Lenovo:
Lenovo Samsung PM1645 1.6TB (MZILT1T6HAJQV3)
Lenovo Samsung PM1635a series SAS SSD 1.6TB (MZILS1T6HEJHV3)
Lenovo Samsung PM1635a series SAS SSD 800GB (MZILS800HEHPV3)
Lenovo Samsung PM1645 800GB (MZILT800HAHQV3)
Lenovo branded PM1633a 3.84TB Samsung SSD (MZILS3T8HMLHV3)
Lenovo Samsung PM1643 3.84TB (MZILT3T8HALSV3)
HPE:
HPE 7.68TB SAS 12G Read Intensive SSD(PM1643) (VK007680JWSSU)
When looking at the Panacea report for such a system with these disks, we see that the avgrq-sz (the average size, in sectors, of the requests that were issued to the device) is always below 16, which means the IO requested from or to the device is always cut into a maximum of 8 KB (16 sectors). If, for example, a 1 MB request gets issued to the device, the IO will be cut into 128 pieces, which causes IO to queue up and latency spikes. The latency spikes are short-term, so the issue is not easy to catch immediately.

Figure 1 below illustrates the issue:
Figure 1: avgrq-sz with max_sectors_kb set to 8 KB
While in Figure 2 we see how a device on a system without this issue shows higher spikes, as the IO size is sometimes bigger:
Figure 2: avgrq-sz with max_sectors_kb set to 512 KB
The same is visible when doing a simple iostat -x 1 (sampling rate of 1 second) or looking at historic data in ~/data/logs/sysstats/iostat.INFO. Notice how the avgrq-sz never goes above 15:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
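To quickly flag devices whose request size is capped, the live iostat output can be filtered with awk (a sketch that assumes the column layout shown above, where avgrq-sz is the 8th column; adjust if your iostat version differs):

nutanix@CVM:~$ iostat -x 1 5 | awk '$1 ~ /^sd/ && $8+0 > 0 && $8 < 16 {print $1, "avgrq-sz:", $8}'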
The below command shows the actual max_sectors_kb setting of the drives in the system (it could be that there are multiple disks with this issue, compare with entries in /etc/nutanix/hcl.json):
nutanix@CVM:~$ allssh 'for i in $(list_disks | grep -e nvme -e sd | grep -Po "/dev/[a-z0-9]{1,7}" | cut -c 6-12 | sort); do echo $i; cat /sys/block/$i/queue/max_sectors_kb; if [ $? -eq 0 ];then echo; fi; done'
This command provides a summary of all disks including their Model, Vendor, Serial and Firmware which can be verified against the list above:
nutanix@CVM:~$ allssh 'for i in $(list_disks | grep -e nvme -e sd | grep -Po "/dev/[a-z0-9]{1,7}") ; do sudo smartctl -i $i | grep -e Model -e Vendor -e Serial -e Product -e Firmware -e Revision; if [ $? -eq 0 ];then echo; fi; done'
|
Resolution
This problem was introduced with AOS 5.5 and is fixed with ENG-235137 in AOS 5.10.9, 5.16 and 5.11.2. Support for additional drives was added as part of ENG-273121 in 5.10.10, 5.15, 5.16.1 and 5.17 (see the description for the specific models included in each version).

The permanent fix for this issue will set max_sectors_kb to 512 across all devices irrespective of the drive, meaning all drives belonging to the SSD-SATA tier will have this setting. As there is no negative impact for drives that already have this set, it will prevent future issues with drives where the kernel setting is set too low again. This work is being tracked and fixed with ENG-265146.
Work around
To work around the issue until the cluster can be upgraded to an AOS version with the fix, a new user udev rule that sets max_sectors_kb to the proper value of 512 has to be created.

Udev is the device manager for the Linux kernel. Udev dynamically creates or removes device node files at boot time in the /dev directory for all types of devices. Udev rules determine how to identify devices and how to assign a name that is persistent through reboots or disk changes. When Udev receives a device event, it matches the configured rules against the device attributes in sysfs to identify the device. Rules can also specify additional programs to run as part of device event handling.

Udev rules files are located in the following directories:
/lib/udev/rules.d/ – The default rules directory
/etc/udev/rules.d/ – The custom rules directory. These rules take precedence.

Important notice: Please consult a Support Tech Lead (STL) for guidance on performing the steps below. If additional help is required, open a TH or get in contact with the performance experts in your region.
Create a new udev file in the custom rules directory from one of the CVMs:
nutanix@CVM:~$ allssh sudo touch /etc/udev/rules.d/98-ssd-max-sectors-bump.rules
Add a new entry to the file with the correct model name identified in the nodes in the cluster (cross check with /etc/nutanix/hcl.json as the exact device name has to be provided - see description section for more guidance).
Note multiple lines are needed if there are different models to fix in the cluster (in this example only MZILS1T6HEJH0D3 is used):
SUBSYSTEM== "block", ATTRS{model}== "MZILS1T6HEJH0D3", ACTION=="add|change", ATTR{queue/max_sectors_kb}="512"
Deploy the new udev rule across all the CVMs in the cluster:
nutanix@CVM:~$ allssh sudo chown nutanix:nutanix /etc/udev/rules.d/98-ssd-max-sectors-bump.rules && for i in $(svmips); do sudo scp /etc/udev/rules.d/98-ssd-max-sectors-bump.rules nutanix@$i:/etc/udev/rules.d/98-ssd-max-sectors-bump.rules; done && allssh sudo chown root:root /etc/udev/rules.d/98-ssd-max-sectors-bump.rules
Run chcon to set the correct SELinux context on the new .rules file that has been created:
nutanix@CVM:~$ allssh sudo chcon -R --reference /etc/udev/rules.d/99-ssd.rules /etc/udev/rules.d/98-ssd-max-sectors-bump.rules
Check that the new .rules file has the expected permissions and size:
nutanix@CVM:~$ allssh ls -latr /etc/udev/rules.d/98-ssd-max-sectors-bump.rules
Reload the new udev rules.

NOTE: This is non-disruptive and will apply the new setting immediately on all the CVMs:
nutanix@CVM:~$ allssh sudo udevadm control --reload-rules && allssh sudo udevadm trigger --attr-match=subsystem=block
Final check to verify if max_sectors_kb is now set to 512 as expected:
nutanix@CVM:~$ allssh 'for i in $(list_disks | grep -e nvme -e sd | grep -Po "/dev/[a-z0-9]{1,7}" | cut -c 6-12 | sort); do echo $i; cat /sys/block/$i/queue/max_sectors_kb; if [ $? -eq 0 ];then echo; fi; done'
At this point, all the CVMs will be working with the new setting, and the avgrq-sz in the iostat output will no longer be limited to 16. The performance improvement should be visible immediately. This configuration will be preserved across CVM reboots but will not persist through AOS upgrades. Ensure customers affected by this upgrade to an AOS version with the fix, or understand that otherwise the procedure has to be repeated.
|
KB7878
|
Prism login for domain user falls back to Prism login page without errors
|
After upgrading to AOS 5.10.5, customers might experience login failures on the Prism web console. After entering the domain credentials, the login falls back to the Prism login page.
|
After upgrading to AOS 5.10.5, customers might experience login failures on the Prism web console. After entering the domain credentials, the login falls back to the Prism login page. Look out for the below signatures before performing the workaround.
Check out the console logs on the web browser when trying to log in to understand what REST API requests are sent out from Prism. In my case, it was sending out "An Authentication object was not found in the SecurityContext".
Check that the service account is configured correctly, and try to remove the LDAP entry and register the AD again (make sure the serviceAccountUsername is filled). Try changing the Search Type from Recursive to Non-recursive and vice versa.
<ncli> authconfig list-directory
Check nuclei user.list to see if the email address you are trying to log in with is listed. In our case, [email protected] was not listed in the nuclei user.list output:
<nuclei> user.list
As a troubleshooting step, try to log in to Prism using any of the users listed above; the login will succeed. This isolates the issue to the affected user's details failing to sync to nuclei.
Check the aplos.out logs (on leader as well as the fellow nodes) for any issues recorded when trying to login:
nutanix@cvm$ allssh "grep -i [email protected] data/logs/aplos.out"
Try to log in using Incognito mode so that we can eliminate browser cache issues.

Check Athena logs for any issues reported. Look out for the below signatures in the log, if any:
nutanix@cvm$ allssh "grep -i 'not in Sync' data/logs/athena.log*"
Enable debugging mode on Aplos using KB 5647 http://portal.nutanix.com/kb/5647, reproduce the issue, and check for the below signature:
2019-07-04 13:24:11 DEBUG user_util.py:230 Verifies whether [email protected] is part of existing valid groups
|
Try out the steps below to resolve the issue:
Ask the customer to change the domain group to a different group and add the User Role Mapping and try to log in. In my case, the Domain Group was "Domain Admins", which we changed to "NUTANIX_ADMINS" and created a new Role Mapping on Prism.
If the customer is not comfortable creating a new domain group, then try performing the below workaround:
Change users Primary Group from `Domain Admins` to `Domain Users` and then they can use `Domain Admins` for their role mapping.
This issue was reported in ENG-158073 http://jira.nutanix.com/browse/ENG-158073 where it was fixed in 5.10.4 but seems to be partially fixed. Please attach your case to ENG-238434 http://jira.nutanix.com/browse/ENG-238434.
|
KB4561
|
Windows Domain Controller fails to replicate data from other domain controllers after restoring from VM snapshot
|
Windows Domain Controller fails to replicate data from other domain controllers after restoring from VM snapshot
|
Windows Domain Controller fails to replicate data from other domain controllers after restoring from VM snapshot.
|
AHV supports VM-Generation ID starting from AOS 6.6.1. New VMs will have the Gen-ID assigned during the VM creation process. Existing VMs on the cluster will automatically get a VM-Generation ID assigned and can access it after a power cycle.

Please refer to the following article to learn more about VM-Generation ID and its role in Domain Controller recovery: https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/get-started/virtual-dc/virtualized-domain-controller-architecture
|
KB9330
|
CVM stacktraces during boot-up with sysfs group 'power' not found for kobject
|
During the CVM boot-up process, stacktraces pointing to physical drives might be observed.
|
Background:
1. During the CVM boot process, an initial kernel is loaded via svmboot.iso, and later a kexec is executed to switch into the regular CentOS kernel. Before loading the CentOS kernel, all modules loaded by the initial kernel are unloaded in order to provide a cleaner boot process.
Identification:

1. Look for the following signature in the dmesg logs or in the serial console output:
In ESXi hypervisor: /vmfs/volumes/NTNX-local-ds-[serial-number]-[node-position]/ServiceVM_Centos/ServiceVM_Centos.0.outIn AHV hypervisor: /tmp/NTNX.serial.out.0
[ 27.441164] mpt3sas version 24.125.01.00 unloading
Root Cause:

1. ENG-243167 http://jira.nutanix.com/browse/ENG-243167 describes this behaviour. Some drives throw this stacktrace when the related disk controller modules are unloaded from the initial kernel, as they do not handle the module unloading gracefully.
|
If the CVM boots normally, the stacktraces can be safely ignored. If the CVM does not boot, these messages should not be a contributing factor, and the reason for the CVM not booting should be troubleshot as usual.
|
KB12458
|
Refresh VM Tools Entity task is stuck at 0%
|
Cannot use the 'nutanix_guest_tools_cli refresh_vm_tools_entity' CLI utility to update NGT client certificates on user VMs as the task is stuck at 0%.
|
When 'nutanix_guest_tools_cli refresh_vm_tools_entity' CLI utility is used to update NGT client certificates to resolve the alert generated by ngt_client_cert_expiry_check ( KB 10075 http://portal.nutanix.com/kb/10075), in some scenarios, the task is queued and remains stuck at 0% in Prism.
Running the nutanix_guest_tools_cli command again does not resolve the issue; it generates new tasks in Prism that remain queued as well.
|
Contact Nutanix Support http://portal.nutanix.com to resolve the issue.
|
KB12885
|
VPN Gateway upgrade fails with the error "Update failed with error: ['resources']"
|
A higher version of the VPN gateway that was available for upgrade earlier was removed from the backend due to an issue, but the LCM inventory does not dismiss the higher version from the list of upgrades, so the LCM upgrade task fails.
|
When attempting to upgrade VPN gateway from LCM, it fails with the following error trace in log file /home/nutanix/data/logs/lcm_ops.out on the lcm leader CVM. The LCM leader can be found by running the command "lcm_leader" on any CVM.
2022-03-08 16:38:01,379Z INFO lcm_ops_by_pc:564 (x.x.x.x, update, c5e53cbd-0b82-4aeb-70dd-123f5e3ea1c2) Got existing scratchpad from task c5e53cbd-0b82-4aeb-70dd-123f5e3ea1c2
This error occurs in rare scenarios where the VPN upgrade was available when the inventory was run, but has been removed from the database due to an issue with the VPN version.
|
Upgrade Prism Central to version 2022.4.x or later, and reattempt the VPN upgrade.
|
KB3697
|
Enabling Intel Turbo Boost for G4 hosts running BIOS version 1.5d
|
This KB explains the required BIOS settings changes to improve the application's response time.
|
In the G4 1.5d BIOS, we disabled Power Technology. With Power Technology disabled, the server will not leverage Intel Turbo Boost, because of which the application's response time will be poor. This KB talks in detail about which BIOS settings need to be changed to improve the application's response time.

Applications such as EPIC have shown significant improvements in application responsiveness after making the configuration changes detailed in the solution section of this article.
|
If the Power Technology setting is set to Disabled in the BIOS, then Intel Turbo Boost Technology will not be enabled for the CPU. See http://www.intel.com/content/www/us/en/architecture-and-technology/turbo-boost/turbo-boost-technology.html

Certain applications may see performance gains from being able to take advantage of the Turbo Boost capabilities of Intel's processors. If the application is known to benefit from Intel Turbo Boost, the following steps can be used to enable the feature in the 1.5d G4 BIOS version, where it is disabled by default.
Under the BIOS setup go to: Advanced >> CPU Configuration >> Advanced Power Management and make sure the setting is as follows:
Power Technology >> Energy Efficient
Under the BIOS setup go to: Advanced >> CPU Configuration >> Advanced Power Management and make sure the setting is as follows:
Energy Performance Tuning >> Enable
Note that step 2 will also pass power management control of the CPU to the host OS.

After making the above BIOS settings changes, if the virtual machine application still runs slower than expected in ESXi, also change the Power Management Policy to High Performance on the ESXi host. Reference: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1018206
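The same policy can also be set from the ESXi command line instead of the vSphere UI (a sketch using the standard /Power/CpuPolicy advanced option; verify the option name on your ESXi version):

root@esxi# esxcli system settings advanced set --option=/Power/CpuPolicy --string-value="High Performance"
root@esxi# esxcli system settings advanced list --option=/Power/CpuPolicy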
|
KB2860
|
How to configure IPMI Active Directory Authentication for Supermicro platforms
|
This article describes how to set up Active Directory based authentication (LDAP) for IPMI access.
|
This document describes how to set up Active Directory based authentication (LDAP) for IPMI access.
|
Perform the following steps to set up Active Directory based authentication (LDAP) for IPMI access.
G8 Platforms:
Log in to the IPMI with the ADMIN account.
Click Configuration > Account Services in the left pane.
Select Directory Service in the right pane.
Under Setting, turn ON Active Directory.
Under Active Directory, add Server Addresses and Rules.
Click Submit.
When logging into the IPMI web console, use the following format to log in: username@domain
Non-G8 Platforms:
Log in to the IPMI with the ADMIN account.
Click Configuration in the list of available options.
Select Active Directory in the left pane.
Click the "here" link, as shown by the red arrow in the following screenshot, to enter the Active Directory configuration screen.
Click the Enable Active Directory Authentication check box to enable and configure Active Directory settings as desired, and then click Save.
Note: The default timeout is set at 0, which results in an error message HTTP 500 - Internal Server Error. Make sure to update it to 10.
Select the User and then select Add Role Group to configure the Role Group. (Be sure that it matches an existing AD group name.)
When logging into the IPMI web console, use the following format to log in: username@domain
|