KB8689
|
Node will be stuck at "DXE--OOB Data Sync-Up" state while booting up
|
NX-3060-G5 node will remain stuck at the Nutanix logo page while booting up and will show "DXE--OOB Data Sync-Up" in the lower left corner of the screen
|
NX-3060-G5 node will remain stuck at the Nutanix logo page while booting up and will show "DXE--OOB Data Sync-Up" in the lower left corner of the screen.
As seen from the image above, if you check the lower right corner of the screen, you will see that the POST code it is stuck on is DXE Phase: 0xB2, which means that the node is stuck on Legacy Option ROM Initialization as per KB 1926 (http://portal.nutanix.com/kb/1926). No hardware errors/failures are reported in the IPMI SEL logs. The "Post Analysis" section in the mce_analyze.txt file (one of the hardware log files collected using the OOB scripts, KB 2893 - http://portal.nutanix.com/kb/2893) states that no trouble is detected. Performing the following operations won't help to resolve the issue:
- Performing a node restart operation from the IPMI web console
- Reseating/power draining the node (a mandatory step)
- Updating BIOS to the recommended version
- Reseating all DIMMs present in the node (a mandatory step)
|
This issue is related either to the NIC cards present in the node or to the node itself. To resolve it, perform the following operations:
1. Remove all NIC cards present in the node (i.e. the 10 G NIC card and any add-on NIC cards, if present). If the node still cannot boot up without the NIC cards, replace the node.
2. If the node boots up fine without the NIC cards, re-install the NIC cards one by one (if more than one add-on card is present). If boot fails after re-inserting a NIC card, replace that NIC card.
3. If the node boots up normally after all the NIC cards are re-inserted, the problem was NIC connectivity that was corrected by the re-install, and the issue should be gone. If it happens again, replace the NIC card.
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Removes VIB packages from the host""
| null | null | null |
}
| null | null | null | null |
KB16587
|
How to modify the Traefik access log verbosity in DKP
|
How to modify the Traefik access log verbosity in DKP
|
By default, the Traefik access log level is set to WARNING. When troubleshooting access issues, for instance access to the DKP dashboard, the operator may need to increase the verbosity of the Traefik logs. This article explains how to achieve this in DKP.
|
The Traefik default configuration is stored in the configmap "traefik-10.30.1-d2iq-defaults" (note that the specific version numbers may change depending on the DKP version).
To override the default configuration, we should first identify which configmap should be modified with the values we're looking to change. To do so, we check the traefik appdeployment, specifically the value of the spec.configOverrides field:
kubectl --kubeconfig <kubeconfig> -n kommander get appdeployment traefik -ojsonpath='{.spec.configOverrides}'
{"name":"traefik-overrides"}
Then we have to edit the traefik-overrides configmap:
kubectl --kubeconfig <kubeconfig> -n kommander edit cm traefik-overrides
And add the following values to increase the traefik access log verbosity:
logs:
general:
level: DEBUG
access:
enabled: true
This is what the correct format looks like:
apiVersion: v1
data:
values.yaml: |
logs:
general:
level: DEBUG
access:
enabled: true
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-internal: "true"
kind: ConfigMap
metadata:
name: traefik-overrides
namespace: kommander
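Once the override is applied, the increased verbosity can be confirmed by reading the Traefik pod logs. A minimal sketch, assuming the Traefik pods carry the standard Helm label app.kubernetes.io/name=traefik (verify the actual label first with kubectl get pods --show-labels):
kubectl --kubeconfig <kubeconfig> -n kommander logs -l app.kubernetes.io/name=traefik --tail=100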
|
KB7099
|
Explanation on reserved Space usage on ESXi Clusters due to thick provisioning
|
This KB explains how ESXi thick provisioning results in reserved space on the Nutanix side, including practical examples and the interaction with snapshots.
|
In environments running ESXi, users might see a "reserved space usage" higher than expected in the capacity planning view in Prism Central. At the same moment, the storage container shows zero "Explicit reservation", which can be configured manually in Prism/NCLI:
ncli ctr ls | egrep "Name|Reservation"
|
As the cluster is running ESXi, some VMs could have thick-provisioned disks, since Nutanix does support thick provisioning on NFS.
Note: AHV uses iSCSI connections for VM disk access and does not support thick provisioning.
To give an example of the amount being thick-provisioned, Prism displays the following:
You can see the same from the CLI with the use of the "ncli ctr ls" command:
Id : 00057acb-4079-ae3e-0da6-ac1f6b4b858c::3655
This can also be noted from a per-VM perspective. For example, a CentOS VM was created with 2 disks: a 16 GB OS disk and a 200 GB thick-provisioned data disk. We created 20 GB of random data on the data disk:
Used Space (Physical) : 203.87 GiB (218,906,435,584 bytes)
Thick Prov. (Logical) : 200 GiB (214,748,364,800 bytes)
We then created a VMware snapshot of the UVM and wrote an additional 20 GB of data to the data disk. We note a total of 446 GB of physical data used from the storage pool: approximately 86 GB in actual use and 360 GB thick-provisioned and unused:
From the specific display of the container, we note 43 GB of logical data (i.e. RF2) being used to store the VM.
Here, we have only ONE VM with 16+200 GB disks, but the used space is 223.17 GB, since we have to keep the vDisk related to the snapshot plus the original data.
nutanix@CVM:~$ vdisk_config_printer
nutanix@CVM:~$ curator_cli get_vdisk_usage lookup_vdisk_ids=24219 <-- nfs_file_name: "centos-flat.vmdk"
If you wish to remove the thick provisioning and reclaim the space claimed by it, refer to the steps outlined in KB 4180 https://portal.nutanix.com/kb/4180.
Notes:
It has been observed that vDisks with reservations marked for removal are not taken into account in the calculations of the capacity runway view in PC. This means the "total_reserved_capacity" value of the vdisk is not considered and does not affect the reserved capacity at the container level. You can use the following command to see the reserved capacity per vdisk:
nutanix@CVM$ vdisk_config_printer | grep reserv -A10 | egrep "reserved|nfs_file_name"
When analyzing the reservation numbers, it is helpful to view this information from the Stargate page on port 2009:
nutanix@CVM$ links http:0:2009
|
KB5893
|
How to check Power Consumption of each node
|
This KB documents the procedure to check the power consumption of each node
|
Currently, the power consumption of each node cannot be seen from Prism, but these stats can be retrieved using ipmitool commands, because each IPMI interface is able to show/collect this information.
|
The NX platforms do have this information available, and it can be pulled via the IPMI SDR/sensor command. In our testing, the granularity of the measurement was not small enough, and it did not accurately reflect the power supply reading for multi-node systems. These readings do not include power consumed by the backplane and drives. For example, the power consumption of the node below is 144 Watts.
When using the ipmitool command on a CVM, you need to specify the host, username and password to communicate with a remote IPMI session:
-H --> IPMI IP
-U --> IPMI Username
-P --> IPMI Password
ipmitool -H <IPMI IP> -U ADMIN -P <IPMI password> sdr
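To narrow the SDR output down to the power reading, the output can be piped through a filter. A minimal sketch; the exact sensor name varies by platform, so adjust the pattern to match your hardware:
ipmitool -H <IPMI IP> -U ADMIN -P <IPMI password> sdr | grep -iE "power|watt"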
|
KB13574
|
Invalid msp-registry record in /etc/hosts on PCVM could affect IAMv2 pods during PCVM upgrade
|
Invalid msp-registry record in /etc/hosts file on PCVM node could affect IAMv2 pods during PCVM upgrade.
|
An invalid msp-registry record in the /etc/hosts file on a PCVM node could affect IAMv2 pods during a PCVM upgrade. When this problem is observed, IAMv2 image pulls may fail on the affected PCVM node, and pods may be stuck in ImagePullBackOff state.
Note: CMSP upgrades and IAMv2 pods could fail due to multiple different issues; this KB article focuses on the specific case when the IAMv2 upgrade wasn't finished due to an invalid msp-registry entry in /etc/hosts on one PCVM node, so pods on this node could not pull updated images. Refer to KB-9696 https://portal.nutanix.com/kb/9696 for general IAMv2 troubleshooting.
Identification:
1. While checking the IAMv2 pods, they are unhealthy: the pods are stuck in ImagePullBackOff and not running. All pods in ImagePullBackOff state are scheduled on the same node; on the remaining 2 PCVM nodes, images are pulled successfully and pods are running. In the sample below, all stuck pods, including iam-bootstrap, are scheduled to PCVM .43 and failed with the ImagePullBackOff error:
nutanix@NTNX-xx-xx-xx-43-A-PCVM:~$ sudo kubectl get pods -A -o wide | grep ImagePullBackOff
2. Inspect the /etc/hosts file on the PCVM nodes; the node having pods in ImagePullBackOff has a different entry for msp-registry. In the sample below, the registry was running on xx-xx-xx.42 (192.168.2.54); however, /etc/hosts on xx-xx-xx.43 points to the wrong node, so pods scheduled to PCVM .43 could not pull images:
nutanix@NTNX-xx-xx-xx-42-A-PCVM:~$ allssh "cat /etc/hosts |grep MspRegistry"
3. Validate the registry service status on all the PCVMs and find where the registry is running; the below sample shows the registry running on PCVM xx-xx-xx.42:
nutanix@NTNX-xx-xx-xx-42-A-PCVM:~$ allssh sudo systemctl status registry
4. Check the eth1 interface configuration to correlate PCVM addresses to the msp-registry entries in /etc/hosts. Checking the IP configuration of the eth1 interface on each PCVM shows that the 192.168.2.54 IP is assigned to eth1 on PCVM xx-xx-xx.42:
nutanix@NTNX-xx-xx-xx-42-A-PCVM:~$ allssh ip addr show eth1
Note: the effects of an unhealthy IAMv2 deployment, or an IAMv2 deployment running on outdated images, can vary; the error below is one possible example. After upgrading the Prism Central version, an issue might be observed when the Authentication or the SSL Certificate option is selected from the PC Settings. The displayed error is: "Expected EOF at line 1 column 11". The authentication details are not listed, and it is impossible to modify the configurations.
To validate this issue, first check the mercury.out logs; a similar error code 500 should be present:
[2022-08-30T12:34:16.282Z] "GET /PrismGateway/services/rest/v1/authconfig?__=1661862851428 HTTP/2" 500 - 0 72 159 159 "192.168.2.1" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36" "0e8acda1-f026-4bf4-a5c2-6c6d0703452c" "xx.xx.xx.43" "xx.xx.xx.43:9444"
As a consequence, on the prism_gateway.log, the error "Expected EOF at line 1 column 11" is visible:
INFO 2022-08-30 12:34:16,286Z http-nio-127.0.0.1-9081-exec-4 [] commands.config.GetClientAuthKey.prepareToExecute:55 Invoking Get Client Auth Key command.
|
Workaround:
As a workaround, the registry IP in /etc/hosts on the affected PCVM (identified in steps 1 and 2 in the Description section) needs to be changed to point to the correct PCVM. Considering the sample above, we need to change the /etc/hosts file on PCVM xx-xx-xx.43. Use the vi command on xx-xx-xx.43 to update the registry IP:
nutanix@PCVM_xx-xx-xx.43:~$ sudo vi /etc/hosts
Change the msp-registry entry on the node with the invalid entry to the valid one identified using steps 3 and 4 in the Description section. Considering the sample above, change the entry from 192.168.2.53 to 192.168.2.54:
nutanix@PCVM_xx-xx-xx.43:~$ sudo vi /etc/hosts
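For illustration only, a hypothetical corrected line, assuming the entry carries the MspRegistry comment tag seen in the grep output above (the hostname format on your PCVM may differ, so keep the existing hostname and change only the IP):
192.168.2.54 <msp-registry-hostname> #MspRegistry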
After the change, PCVM xx-xx-xx.43 will be able to reach the msp-registry service running on 192.168.2.54 and will be able to pull images. After the updated images are pulled, the pods will be able to start. After some time, the IAM pods should be up and running; monitor the status using kubectl:
nutanix@NTNX-xx-xx-xx-43-A-PCVM:~$ sudo kubectl get pods -A
|
KB11962
|
Prism Element GUI throws console error when trying to configure RDMA network
|
Prism Element GUI throws console error when trying to configure RDMA network
|
When trying to configure the RDMA network from the Prism UI, the configure window is stuck at "Loading".
You may notice an error similar to the one below in Chrome Developer Tools.
lib.js?uiv=1612565316743:16 19:13:22 nutanix.ui.AppManager: Environment: PE
Prism gateway logs show the below information at the time of the error:
DEBUG 2021-06-15 19:26:17,025 http-nio-0.0.0.0-9081-exec-6 [] prism.util.Utils.executeHTTPPostData:738 Proceeding to execute the call to url http://127.0.0.1:2100/jsonrpc, payload is {".oid":"NodeManager",".method":"get_host_physical_networks",".kwargs":{}}
Clearing the cache or launching Prism from a different browser does not solve the issue.
|
As per ENG-240241 https://jira.nutanix.com/browse/ENG-240241, this scenario is observed when there is an issue with fetching network information from the backend service (caused by vCenter not being registered in Prism for an ESXi cluster, the acropolis service crashing in an AHV cluster, etc.). ENG-416252 https://jira.nutanix.com/browse/ENG-416252 is open to surface the issue as an error instead of getting stuck at the loading screen when fetching the network information fails. Engineering was unable to reproduce this issue. If the issue matches this scenario, please reopen the ENG.
Workaround:
A possible workaround is to use the network_segmentation CLI to configure the RDMA network:
network_segmentation --rdma_network --rdma_subnet="xx.xx.xx.0" --rdma_netmask="yyy.yyy.yyy.0" --rdma_vlan=zzz --pfc=xyz
|
KB13359
|
Drive letters changed during the failover of VM in cross Hypervisor DR
|
This KB describes how to avoid the drive letter change during the failover of the VM in Cross Hypervisor DR
|
Drive letters are changed during the failover of a Windows VM in a Cross Hypervisor DR scenario, i.e. during cross hypervisor DR (AHV -> ESXi) or (ESXi -> AHV).
This occurs when the CD-ROM is assigned a drive letter other than D:\ (for example, E:\ or G:\) and the SAN policy within the OS is set to Offline Shared.
For example, if the CD-ROM is assigned the drive letter E:\ and the SAN policy is set to Offline Shared, then after cross hypervisor DR the CD-ROM will be assigned the drive letter D:\. Because of this, the data won't be available, as a different drive letter will be assigned to the disk. The VM initially has the CD-ROM assigned drive letter E:\; post-migration, the drive letters get changed and the CD-ROM is assigned drive letter D:\.
|
To avoid the drive letter change during the failover operation in Cross Hypervisor DR, even if the CD-ROM is assigned a drive letter other than D:\, the SAN (Storage Area Network) policy within the Windows VM needs to be changed. The steps to change the SAN policy within the Windows VM are:
1. Launch the console for the Windows VM and open the command prompt:
C:\Users\Administrator> diskpart
2. Type san to find out the policy that is currently set:
DISKPART> san
3. Change the policy to OnlineAll by running the following command:
DISKPART> SAN POLICY=OnlineAll
4. Check that the policy has changed to OnlineAll:
DISKPART> san
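The same setting can also be changed without diskpart. A sketch using the Windows Storage PowerShell module (available on modern Windows Server versions; verify the module is present in the guest OS before relying on it):
PS C:\> Get-StorageSetting | Select-Object NewDiskPolicy
PS C:\> Set-StorageSetting -NewDiskPolicy OnlineAll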
After changing the policy, the drive letters do not change even if the CD-ROM is assigned a different drive letter. This is noted in the Data Protection and Recovery Prism guide: https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-dr-cross-hypervisor-c.html
|
KB12548
|
NCC Health Check: attached_vm_volume_different_recovery_clusters_check
|
The NCC Health Check: directly_attached_vg_vm_replication_to_different_clusters_check introduced in NCC 4.6 verifies if VM(s) with directly attached Volume Group(s) specified in Recovery Plan have their latest recovery points on the same cluster.
|
The NCC Health Check directly_attached_vg_vm_replication_to_different_clusters_check, introduced in NCC 4.6, checks if VM(s) with directly attached Volume Group(s) are replicated to different recovery clusters.
Running the NCC Check
This check can be run as part of a complete NCC health check:
nutanix@cvm$ ncc health_checks run_all
You can also run this check separately:
nutanix@cvm$ ncc health_checks directly_attached_vg_vm_replication_to_different_clusters_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check will generate an alert after 1 concurrent failure across scheduled intervals.
This check runs on AHV.
This check runs on Prism Central only.
Output Messaging
Check ID: 300439
Description: Checks if VM(s) with directly attached Volume Group(s) are replicated to different recovery clusters
Cause of Failure: VM(s) and directly attached Volume Group(s) do not have the same recovery cluster.
Resolution: Ensure that the VM(s) and directly attached Volume Group(s) have their latest recovery points on the same cluster
Impact: Volume Group(s) attachment post-recovery of VM(s) would fail.
Alert ID: A300439
Alert Title: Recovery Plan has VM(s) with directly attached Volume Group(s) that have their latest Recovery Points on different clusters
Alert Smart Title: Recovery Plan {recovery_plan_name} has VM(s) with directly attached Volume Group(s) that have their latest Recovery Points on different clusters
Alert Message: Recovery Plan {recovery_plan_name} has VM(s) with directly attached Volume Group(s) that have their latest recovery points on different clusters. {alert_msg}
|
Resolving this issue:
Ensure that the VM(s) and directly attached Volume Group(s) listed in the alert have their latest recovery points on the same cluster.
Refer to the VM Recovery Points https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-recoverypoints-ui-pc-r.html section in the Nutanix DR guide (formerly Leap) for details about VM recovery points. Refer to the Volume Group(s) Recovery Points https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-vgrecoverypoints-ui-pc-r.html section in the same guide for details about Volume Group(s) recovery points. You may also refer to KB 13040 https://portal.nutanix.com/kb/13040 to ensure the VM(s) and attached Volume Group(s) are protected by the same protection policy if the VM(s) and directly attached Volume Group(s) recovery points are on different clusters.
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com. Additionally, gather the following command output and attach it to the support case:
nutanix@cvm$ ncc health_checks run_all
|
KB4784
|
CVM in reboot loop during AOS upgrade
|
CVM stuck in reboot loop during AOS upgrade
|
A situation has been observed where a CVM is stuck in a reboot loop during an AOS upgrade and the VMware logs have the following signatures:
2017-09-13T12:58:39.528Z| vcpu-5| I125: Beginning monitor coredump
This issue happens when the memory assigned to the virtual machine does not align with the NUMA configuration on the node.
|
This issue is described in VMware KB 10213 https://kb.vmware.com/s/article/10213.
Get the VM ID of the CVM:
vim-cmd vmsvc/getallvms
Power off the CVM from ESXi command line:
vim-cmd vmsvc/power.off <vm_id>
Reference: VMware KB: Powering on a virtual machine from the command line when the host cannot be managed using vSphere Client (1038043) https://kb.vmware.com/s/article/1038043
On the ESXi host of the problematic CVM, go to Advanced configuration and change Numa.MonMigEnable to 0.
Power on the CVM.
If the CVM fails to power on, do the following:
Disable NUMA Memory Affinity:
Select CVM -> Edit settings -> Resources Tab -> Advanced Memory -> NUMA Memory Affinity -> Select "No affinity" radio button
Try powering on the CVM again.
If the CVM powers on successfully, change the NUMA Memory Affinity setting back to its default value of "Use memory from nodes" and select 0, as per the screenshot below:
Once the CVM is up and running, change Numa.MonMigEnable back to 1.
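If preferred, the advanced option can also be toggled from the ESXi command line. A sketch using standard esxcli advanced-settings syntax; the option path is assumed to be /Numa/MonMigEnable, so confirm it with the list command first:
esxcli system settings advanced list -o /Numa/MonMigEnable
esxcli system settings advanced set -o /Numa/MonMigEnable -i 0
esxcli system settings advanced set -o /Numa/MonMigEnable -i 1   # revert after the CVM is up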
At this point, the CVM should power on normally and the upgrade should continue for the rest of the CVMs.
|
KB9231
|
NX Hardware [Memory] – Incorrect amount of memory displayed in Prism due to unsupported DIMM placement
|
Incorrect amount of memory displayed in Prism / host CLI because DIMMs were in the wrong slots.
|
This KB describes a scenario where Prism incorrectly displays a higher amount of memory on one or more nodes than is installed in the system.
NOTE: The identification section of this KB focuses on AHV, as this was the hypervisor where this behaviour was observed first. Consider expanding the KB if this is found on different hypervisors.
Identification:
In the example below, there is a 1GB difference between nodes with identical hardware specs:
Verify the dmesg output between the AHV nodes in the cluster to obtain the amount of memory reserved by the operating system. Note that AHV host .X1 reports 385 GB while .X2 reports 384 GB; with 12 x 32 GB DIMMs installed, 385 GB is higher than the physical amount of RAM in the system.
nutanix@NTNX-ABC-CVM:XX.XX.XX.X1:~$ hostssh "dmesg | grep -i memory"
/proc/meminfo will report the same information:
XX.XX.XX.X1
Memory reported in linux kernel boot up sequence:
hostssh "dmesg | head -n100"
============= XX.XX.XX.X2 ============
NUMA configuration:
.X2
Root Cause:
This inconsistency is caused by an incorrect DIMM allocation on the motherboard.
Refer to the hardware memory replacement page: Hardware memory replacement https://portal.nutanix.com/page/documents/list?type=hardware&filterKey=Component&filterVal=Memory
Compare the memory configuration from "ncc hardware_info show_hardware_info" with the documentation for this node type and reseat the DIMMs to the proper slots.
Compare the dmidecode output from the nodes to learn the current DIMM allocation.
During TH-3765 https://jira.nutanix.com/browse/TH-3765, the following was observed on the nodes showing 1 GB of extra memory:
Node: NX-1065-G6
Incorrect hardware memory configuration:
root@host$ dmidecode
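To quickly list which slots are populated and with what module size, the dmidecode output can be filtered to the memory-device records. A minimal sketch using standard dmidecode fields:
root@host$ dmidecode -t memory | grep -E "Size|Locator"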
According to the hardware memory replacement documentation for this node type, the supported memory configuration for 12 DIMMs should look like the following:
|
Work with the customer to reseat the DIMMs to the proper slots as per the hardware configuration guide corresponding to the model (example link for the NX-1065-G6: http://portal.nutanix.com/page/documents/list?type=hardware&filterKey=Component&filterVal=Memory).
Consider involving the account team to understand why the slots were incorrectly populated and to assist with customer communication.
Update the current KB if this is observed in different hardware models / hypervisors.
|
KB3309
|
Foundation fails with error: 'Block ID' cannot contain non-alphanumeric characters except '-' and '_' after chassis replacement.
|
Foundation fails with error "'Block ID' cannot contain non-alphanumeric characters except '-' and '_'" after chassis replacement.
|
Foundation may fail with the following error in the log and leave the node at the Phoenix prompt:
20160501 07:56:11 INFO Running Phoenix
20160501 07:56:22 INFO INFO: Running updated Phoenix
20160501 07:56:22 INFO INFO: Model detected: NX-1065-G4
20160501 07:56:22 INFO INFO: Using node_serial from FRU
20160501 07:56:22 INFO INFO: Using block_id from IPMICFG
20160501 07:56:22 INFO INFO: Using cluster_id from FRU
20160501 07:56:22 INFO INFO: Using node_position from IPMICFG
20160501 07:56:22 INFO INFO: node_serial = OM15AS005215, block_id = (Empty), cluster_id = 44143, model = USE_LAYOUT, model_string = NX-1065-G4, node_position = B
20160501 07:56:22 INFO Downloading Acropolis: Downloading Acropolis...
20160501 08:08:08 ERROR ERROR: 'Block ID' cannot contain non-alphanumeric characters except '-' and '_'
20160501 08:08:08 CRITICAL FATAL: An exception was raised: Traceback (most recent call last):
File "./phoenix", line 85, in <module>
main()
File "./phoenix", line 42, in main
params.validate()
File "/phoenix/imagingUtil.py", line 246, in validate
raise ValidationError()
ValidationError
While the node is booted into Phoenix, run "ipmicfg -tp info"; you will notice that the System S/N is empty, along with other fields.
phoenix ~ # ipmicfg -tp info
Node | Power  | IP             | Watts | Current | CPU1 | CPU2 | System
---- | ------ | -------------- | ----- | ------- | ---- | ---- | ------
A    | Active | 192.168.10.135 | 94W   | 7.6A    | 41C  | 41C  | 30C
B    | Active | 192.168.10.136 | 93W   | 7.6A    | 41C  | 40C  | 32C
C    | Active | 192.168.10.137 | 93W   | 7.5A    | 38C  | 39C  | 25C
D    |        |                |       |         |      |      |

Node | Node P/N         | Node S/N
---- | ---------------- | ------------
A    | X10DRT-P-G5-NI22 | ZM16BS024031
B    | X10DRT-P-G5-NI22 | ZM16BS023852
C    | X10DRT-P-G5-NI22 | ZM16BS024044
D    |                  |

Configuration ID : 4
Current Node ID  : B
System Name      : (Empty)
System P/N       : (Empty)
System S/N       : (Empty) <============
Chassis P/N      : (Empty)
Chassis S/N      : (Empty)
BackPlane P/N    : BPN-SAS3-217HQ-NI22
BackPlane S/N    : PB155S012038
Chassis Location : FF FF FF FF FF
BP Location      : N/A (FFh)
MCU Version      : 1.08
BPN Revision     : 1.01
phoenix ~ #
|
This can happen after a chassis replacement is performed and, for some reason, those fields were not filled in at the factory. For Foundation to succeed, we only need to fill in "System S/N", which can be found on the sticker attached to the chassis. Run the following command to set the S/N:
ipmicfg-linux.x86_64 -tp systemsn <system s/n>
Or
ipmicfg -tp systemsn <system s/n>
NOTE: The command "ipmicfg -tp systemsn XXXXX" is not available on AOS 6.x and 5.20.1+. Engineering is addressing this under ENG-415183 https://jira.nutanix.com/browse/ENG-415183. Refer to KB1546 https://portal.nutanix.com/kb/1546 for an option to download ipmicfg, or download a previous version from HERE https://jira.nutanix.com/secure/attachment/625348/IPMICFG-Linux.x86_64_1.32_internal_use_only, and use it for this task only. MAKE SURE TO REMOVE THE IPMICFG BINARY ONCE FINISHED. DO NOT LEAVE IT IN THE SYSTEM.
If using smcipmitool or ipmiview was not helpful to resolve this issue, and if you are unable to download an ipmicfg tool with the capability to edit the "systemsn", the workaround is to manually install hypervisor and AOS on the nodes.
After you download the ipmicfg tool which has the Get/Set option:
IPMICFG 1.14.7 (2014/05/23)
=========================
Additions
---------
1. Added -sgifru list                           Show all FRU values for SGI.
         -sgifru <MFG Date> <BSN> <MAC1> <MAC2> OEM FRU command for SGI.
         -tp info                               Get MCU Info.
         -tp info <Type>                        Get MCU Type Info. (Type: 1 - 3)
         -tp systemname [Value]                 Get/Set System Name.
         -tp systempn [Value]                   Get/Set System P/N.
         -tp systemsn [Value]                   Get/Set System S/N. <================ Set
Copy it to the node:
phoenix ~ # chmod +x ipmicfg-linux.x86_64.static
phoenix ~ # ./ipmicfg-linux.x86_64.static -tp systemsn 15SM60250003
Confirm the value has been changed by executing the command:
phoenix ~ # ipmicfg-linux.x86_64 -tp info
Node | Power  | IP             | Watts | Current | CPU1 | CPU2 | System
---- | ------ | -------------- | ----- | ------- | ---- | ---- | ------
A    | Active | 192.168.10.135 | 95W   | 7.7A    | 39C  | 39C  | 29C
B    | Active | 192.168.10.136 | 93W   | 7.5A    | 40C  | 38C  | 31C
C    | Active | 192.168.10.137 | 92W   | 7.5A    | 38C  | 39C  | 25C
D    |        |                |       |         |      |      |

Node | Node P/N         | Node S/N
---- | ---------------- | ------------
A    | X10DRT-P-G5-NI22 | ZM16BS024031
B    | X10DRT-P-G5-NI22 | ZM16BS023852
C    | X10DRT-P-G5-NI22 | ZM16BS024044
D    |                  |

Configuration ID : 4
Current Node ID  : B
System Name      : (Empty)
System P/N       : (Empty)
System S/N       : 15SM60250003 <==================
Chassis P/N      : (Empty)
Chassis S/N      : (Empty)
BackPlane P/N    : BPN-SAS3-217HQ-NI22
BackPlane S/N    : PB155S012038
Chassis Location : FF FF FF FF FF
BP Location      : N/A (FFh)
MCU Version      : 1.08
BPN Revision     : 1.01
phoenix ~ #
|
KB9523
|
Duplicate Zookeeper Processes running on CVM
|
Multiple Zookeeper processes running on a single CVM can lead to cluster wide service degradation and/or downtime.
|
Under normal conditions, whenever zookeeper_monitor QFATALs or segfaults, it terminates the local Zookeeper server process and spawns a new one. In certain conditions where access to /tmp on the CVM's local filesystem is impacted, zookeeper_monitor will not be able to kill the current Zookeeper process and will spawn a second Zookeeper process on the same CVM. In this state, if another Zookeeper CVM in the cluster is restarted, Zookeeper will be unable to re-establish quorum, and there will be cascading service impact throughout the cluster, potentially causing an outage. To verify if this condition is present, run the following command to check for log files containing the signature:
nutanix@CVM:~$ allssh "grep -l \"Couldn't bind to port 2888\" ~/data/logs/zookeeper.out*"
Verify the following signature is in the previously identified CVM's Zookeeper logs:
2020-06-06 15:28:14,035 - ERROR [QuorumPeer[myid=3]0.0.0.0/0.0.0.0:9876:Leader@169] - Couldn't bind to port 2888
Check for the existence of multiple active processes across the cluster. In this example, CVM x.x.x.2 has 2 ZK processes running:
nutanix@CVM:~$ allssh 'pgrep -fa zkServer'
In normal conditions, only one zkServer.sh start-foreground process should be running on a CVM. If you see two processes and the previous signatures in the logs, you are impacted by this issue. Get the current list of Zookeeper CVMs:
nutanix@CVM:~$ for zk in zk{1..3}; do echo $(getent hosts $zk) $(nc $zk 9876 <<< srvr | grep Mode); done
Collect the current disk utilization for the CVM's /tmp directory:
nutanix@CVM:~$ df -h /tmp
|
Cause:
This condition was seen in ONCALL-8875, ONCALL-8906, ONCALL-8701, and ONCALL-6876, along with an epoch and zxid mismatch. ENG-186817 https://jira.nutanix.com/browse/ENG-186817 addresses this issue as of AOS 5.10.11 and 5.11+.
Previously, whenever zookeeper_monitor would QFATAL or segfault, it would terminate and restart a Zookeeper service instance on the local CVM. zookeeper_monitor relied on the PIDs provided by JPS (Java Virtual Machine Process Status Tool) to kill Zookeeper. This JPS data is stored on the CVM's local filesystem under /tmp. If I/O to the /tmp directory is impacted, it is possible that zookeeper_monitor will be unable to get these PIDs and can spawn a duplicate Zookeeper service process. In order to prevent this, the JPS dependency was removed, and zookeeper_monitor leverages the CVM kernel to get the PIDs through ps, as of AOS 5.10.11 and 5.11.
Important Notes:
Whenever duplicate ZK processes exist on the cluster, it is recommended to schedule a maintenance window to take the corrective actions and fix the duplicate ZK processes. Due to the extreme sensitivity of the procedure below and the potential to cause data consistency issues, please consult with an STL before proceeding.
Script Method (Recommended)
ENG-387833 https://jira.nutanix.com/browse/ENG-387833 provided a script to better handle the duplicate processes, as manually killing duplicate zookeeper servers on the CVMs in random order can lead to the loss of some transactions. The script automates the process of killing the duplicate servers in the right order and also backs up the most up-to-date version-2 directory in the ensemble, so that it can be used to restore the ensemble in case there is a rollback.
Note: This script doesn't support Single Node or Two Node Clusters. Please use the manual method instead.
S3 Link: https://download.nutanix.com/kbattachments/9523/dual_zk_servers_cleanup
File MD5: 7c13d3cd476319eb0685b84da535c550
NOTE: The script is bundled with AOS starting with version 6.6:
nutanix@CVM~$ which cluster/bin/dual_zk_servers_cleanup
Download the script to one of the CVMs at ~/bin
nutanix@CVM:~$ wget -O ~/bin/dual_zk_servers_cleanup.py https://download.nutanix.com/kbattachments/9523/dual_zk_servers_cleanup
Check for dual zkservers
nutanix@CVM:~$ python ~/bin/dual_zk_servers_cleanup.py check_dual_servers
List the IPs of the CVMs with Dual zkservers
nutanix@CVM:~$ python ~/bin/dual_zk_servers_cleanup.py list_dual_server_ips
Stop genesis on all CVMs
nutanix@CVM:~$ allssh 'genesis stop genesis'
Backup the /home/nutanix/data/zookeeper directory into ~/tmp on each CVM
nutanix@CVM:~$ allssh "tar czvf ~/tmp/zkbkp.tar.gz /home/nutanix/data/zookeeper"
Cleanup the CVM with dual zkservers
nutanix@CVM:~$ python ~/bin/dual_zk_servers_cleanup.py cleanup_dual_servers
If any errors are returned, please consult with an STL.
Manual Method
nutanix@CVM:~$ cluster stop
nutanix@CVM:~$ allssh "genesis stop zookeeper && genesis stop genesis"
nutanix@CVM:~$ allssh "pkill -9 -fe zookeeper"
nutanix@CVM:~$ allssh "genesis restart"
At this point, verify that there are no duplicate ZK processes running on the cluster.
nutanix@CVM:~$ allssh 'pgrep -fa zkServer'
Also, verify that all zookeeper nodes are responding to the following command
nutanix@CVM:~$ allssh 'for i in $(seq `cat /etc/hosts | grep zk | wc -l`); do echo zk$i; echo ruok | nc zk$i 9876; done'
Verify that the epoch id and zxid for all zookeeper processes are in sync using the following commands
nutanix@CVM:~$ allssh 'cat data/zookeeper/version-2/currentEpoch; echo'
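The epoch check above covers one half of the verification; for the zxid comparison, a sketch reusing the srvr four-letter word and port 9876 from the earlier step (the Zxid line of the srvr output should match across nodes):
nutanix@CVM:~$ allssh 'echo srvr | nc localhost 9876 | grep -i zxid'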
At this point, all the services in the cluster should be up and running. Run an NCC health check to verify cluster health and check the cluster data resiliency from Prism before powering the VMs back on the cluster.
|
Stargate.INFO
|
For a disk that fails during runtime, Stargate will be the first to notice any I/O problems. If Stargate finds that it takes more than 10 seconds* to finish a read or write operation to a particular SSD/NVMe drive, it will log a "hung op" and the drive will be marked offline.
*This is the threshold for AOS 5.20.5+ and 6.1.2+. In earlier AOS releases, the threshold is 40 seconds (ref ENG-428812 https://jira.nutanix.com/browse/ENG-428812).
In the example below, Stargate encounters a hang in a write/read operation it had sent to the NVMe disk with Logical ID 1361406410 (check zeus_config_printer or "panacea_cli show_disk_list" for a mapping of disk serial numbers to logical disk IDs).
|
KB15523
|
File Analytics - Elastic Search docker container stuck at starting state
|
The Analytics_ES1 docker container shows as unhealthy due to being stuck in the starting status.
|
You may see the below alert being triggered in Prism Element: "One or more components of the File Analytics VM xx.xx.xx.xx are not functioning properly or have failed".
1. Verify the status of the ES1 container using 'docker ps -a'; it will show as Unhealthy.
[nutanix@NTNX-10-91-xx-xx-FAVM ~]$ docker ps -a
2. Verify the running status of the ES1 service; it will be stuck in the starting state.
[nutanix@NTNX-10-91-XX-XX-FAVM ~]$ docker exec -it 378d90789494 bash -c "supervisorctl status"
If you reboot the FAVM and restart the container, it will go to the "running" state for a few seconds and then show the "starting" state again.
3. The curl command from the FAVM refuses the connection, because this is the ES API and the container is stuck at the starting state, so there will be no output:
[nutanix@NTNX-10-91-xx-xx-FAVM ~]$ curl -sS "172.28.5.1:9200/_cluster/health?pretty=true"
4. Check the /mnt/logs/containers/elasticsearch/elasticsearch.out logs for instances like the below:
[nutanix@NTNX-10-91-xx-xx-FAVM ~]$ less /mnt/logs/containers/elasticsearch/elasticsearch.out
|
Verify the FAVM configuration in PE, as this problem is generated due to insufficient memory configured for the File Analytics VM. Follow the guide below and configure the correct required memory and vCPU for the FAVM: File Analytics System Limits https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Analytics-v3_2_1:ana-fs-analytics-system-limits-r.html
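Before resizing, the FAVM's current memory allocation can be confirmed from inside the VM with standard Linux tooling, for example:
[nutanix@NTNX-10-91-xx-xx-FAVM ~]$ free -h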
|
KB1961
|
Foundation hits a timeout when mounting hypervisor ISO
|
Foundation fails the imaging process at less than 10%. The node logs (from Foundation) show that Foundation is timing out when attempting to mount the hypervisor ISO.
|
Foundation fails the imaging process at less than 10%. The node logs (from Foundation) show that Foundation is timing out when attempting to mount the hypervisor ISO.
20150106 113329: IPMI node IP: 155.xx.xx.87
|
Verify that SMCIPMITool can access the node's BMC interface by using the following command from the Foundation VM. This verifies that the necessary ports are open.
Follow KB-8889 https://portal.nutanix.com/kb/8889 on how to leverage SMC folders in various Foundation versions.
[nutanix@nutanix-installer ~]# cd /home/nutanix/foundation/lib/smcipmitool
If SMCIPMITool is successful, then there is something on the switch causing the connection to time out during the mount. The most common cause of this issue is STP convergence delay on the switch. Enable STP portfast on the node-facing ports, or ensure you are using a dumb flat switch for the imaging process.
There are known issues where older BMC firmware versions may cause intermittent issues with Foundation imaging. Verify with Nutanix Support if the BMC/BIOS version is eligible for an update on the node.
Check the BMC and BIOS version on your cluster by running the commands mentioned in the following article.
KB-1262: How to check BMC firmware and BIOS version of Nutanix node https://portal.nutanix.com/kb/1262
If an upgrade is required, refer to the following articles for more details.
One Click BMC and BIOS Upgrade Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_5:wc-cluster-bios-bmc-firmware-upgrade-wc-t.html Manual BMC upgrade Guide https://portal.nutanix.com/kb/2896 Manual BIOS upgrade https://portal.nutanix.com/kb/2905
For new versions of Foundation, use the following command (follow KB-8889 https://portal.nutanix.com/kb/8889 on how to leverage SMC folders in various Foundation versions.):
~/foundation/lib/bin/smcipmitool/java -jar SMCIPMITool.jar <IP> ADMIN <password> shell
If the connection fails, reset BMC to resolve the issue.
|
KB12498
|
Container latency wrongly displayed in Prism
|
In some cases, container latency might be displayed as very high numbers in Prism, and those stats may not be correct.
|
It has been seen that the latency for the storage containers in Prism suddenly goes up and stays at very high levels. At the same time, all the VMs are showing completely normal latencies and aren't experiencing any issues. The bandwidth and IOPS graphs do not show any increase in load, and the workload remains the same as before. No additional new VMs or replications were configured. The abnormal latency can be seen for only some of the containers, and can be for reads only, writes only, or both. To verify that the issue is only cosmetic, the arithmos agents can be checked on all nodes. The below command will check the container read latency (read can be replaced with write) reported by each arithmos agent for the last hour with a 1-minute interval:
allssh "arithmos_cli agent_get_time_range_stats entity_type=container field_name=controller_avg_read_io_latency_usecs entity_id=<container_id> start_time_usecs=`date +%s -d '1 hour ago'`000000 end_time_usecs=`date +%s `000000 sampling_interval_secs=60"
If the issue is present, some nodes will display the real latency with normal numbers, and some nodes will show very high numbers:
================== xxx.xxx.xxx.103 =================
Arithmos logs do not show any errors; the arithmos leader simply takes the average value from all agents, which results in the wrong latency statistics being reported.
|
The workaround is to restart the arithmos service cluster-wide:
allssh genesis stop arithmos; cluster start
After that, the reporting goes back to the correct numbers immediately:
|
KB9915
|
Unable to take Application Consistent Snapshot(VSS) on Windows VM when Windows DLL is missing
|
Unable to take Application Consistent Snapshot (VSS) on Windows VM when Windows DLL is missing
|
If a Windows DLL such as atl110.dll is missing, NGT fails to take an application-consistent snapshot. The Windows application event logs show the following:
Failed to register Nutanix VSS Software Provider.
The NGT guest_agent_service.log shows the following:
2020-06-28 10:50:47 ERROR C:\Program Files\Nutanix\python\bin\guest_agent_service.py:488 Failed to send RPC request
Run the following commands from Windows command prompt.
cd "C:\Program Files\Nutanix\VSS"
You will see the following message.
|
Restore the missing atl110.dll from backup.
|
KB5541
|
Nutanix Files - Updating the AD Machine Account Password
|
The purpose of this article is to instruct the end-user how to properly update the Nutanix Files Machine Account Password.
|
In some cases, it is possible that the stored machine account password for the File Server is lost or corrupted internally on the FSVMs. While this is rare, it will prevent the File Server from authenticating with Active Directory. Another, more likely, scenario is that an AD administrator updated the Nutanix Files machine account password manually or via GPO but never updated the machine account on the File Server side. Either scenario will prevent the File Server from authenticating with AD and in turn lead to share unavailability. As of Nutanix Files 2.2.2 and 3.0.1, the update frequency for auto-renewal of the machine account password has been changed from 7 days to 1 year. The following logs are seen when executing a domain join test:
nutanix@NTNX-IP-A-FSVM:~$ allssh "sudo net ads testjoin"
From the Windows client, you may see the following in the event logs:
Log Name: System
|
To correct this condition, you will need to update the machine account password for the File Server in both Active Directory and the File Server. To do this, first identify the File Server for which you plan to update the machine account password. You can use the following ncli command from any CVM to find this:
nutanix@NTNX-IP-A-CVM:~$ ncli fs ls | egrep " Name|Realm"
Log in to Active Directory Users & Computers and verify the name of the machine account. Then, from the Domain Controller, open PowerShell as an Administrator and run:
PS C:\> net user <fileserver-name>$ <some-password>
SSH to any FSVM belonging to the File Server and run:
nutanix@NTNX-IP-A-FSVM:~$ sudo net -f changesecretpw
Lastly, instruct the user to perform a "Force Replication" across all Domain Controllers to ensure each DC holds the correct machine account password for Nutanix Files.
|
KB9210
|
Sudden increase in space utilization in clusters using NearSync data protection feature after PD transitions out of NearSync
|
This KB explains an issue with NearSync clusters that can result in an increase of space utilization under some error conditions, the workaround when encountered and the final resolution.
|
Note: This KB describes only one of the possible issues with leaked/stale snapshots on Nutanix clusters. There is a general KB about stale/leaked snapshots ( KB-12172 https://portal.nutanix.com/kb/12172) with reference to specific KBs for each different issue.
Background
As part of the NearSync workflow, there is a process of hydration, which in a broader sense means merging of LWS checkpoint snapshots to the latest vDisk level snapshots.
The hydration process is started in case NearSync transitions out to an hourly schedule, which can happen due to replication lagging or failing due to underlying storage issues or on-demand PD hydration, which is started when restoring a snapshot.
To complete the hydration process, the VM files are temporarily copied to the staging area, and the hydration has to complete within 15 minutes; otherwise, it times out. When these timeouts occur, hydration starts over again from the last full snapshot, with a new set of temporary files created in a new temporary folder in the staging area. Repeated failures can cause space utilization to bloat, which, if not stopped, leads to the cluster's storage becoming fully used.
Identification
If the hydration process was triggered due to PD transitioning out of the NearSync, there should be an alert on the Primary cluster for replication failing and NearSync transitioning out.
Message : Protection Domain 'DBs-01' has degraded to '1 hour(s)' RPO. Replication of snapshot to remote has failed.
ID : a9c8f0c0-0652-48c1-b051-765b0909d32c
Before the alert, you will see Cerebro logs failing for action ApplyToStagingArea consistently with kRetry for 10 minutes.
I0309 11:30:19.927304 10126 consistency_group_entry.cc:1724] CG: 4744168508424029158:1570015128395435:296776 Top level meta op completed, Meta opid = 32891888, Opcode = ApplyLcsMetaOp, Creation time = 20200309-11:30:14-GMT+0100, Duration (secs) = 5, Attributes = lcs_uuid=c4052de1-6d6a-488e-94cf-56df3742a5fb lcs_count=1 last_applied_lcs_uuid=8b6f6ffd-a094-4558-aa73-9180d6598693 latest_lcs_uuid=c4052de1-6d6a-488e-94cf-56df3742a5fb action=ApplyToStagingArea staging_area=/.snapshot/latest_staging/11/4744168508424029158-1570015128395435-170511, Aborted = No, Aborted detail = -, Error = kRetry, cg_id = 4744168508424029158:1570015128395435:296776
About an hour later, the PromoteToFullSnapshot (auto promote) action is triggered. In the Cerebro logs, you will consistently see the PromoteToFullSnapshot action, which explains the multiple attempts of the hydration process.
I0309 12:49:55.734367 10126 consistency_group_entry.cc:1724] CG: 4744168508424029158:1570015128395435:296776 Top level meta op completed, Meta opid = 32942193, Opcode = ApplyLcsMetaOp, Creation time = 20200309-12:42:58-GMT+0100, Duration (secs) = 417, Attributes = lcs_uuid=8b6f6ffd-a094-4558-aa73-9180d6598693 action=PromoteToFullSnapshot staging_area=/.snapshot/tmp_staging/87/8992188856941251134-1570012143973610-57047387, Aborted = No, Aborted detail = -, Error = kRetry, cg_id = 4744168508424029158:1570015128395435:296776
To quickly check recent Cerebro logs and see if the issue is still happening, the following command can be used:
nutanix@cvm$ allssh 'grep "action=PromoteToFullSnapshot" /home/nutanix/data/logs/cerebro.INFO | grep "Error = kRetry"'
If there is no recent occurrence of Retry for PromoteToFullSnapshot action found in Cerebro.INFO logs, then check if the cluster has some files in the tmp_staging area and how old they are:
nutanix@cvm$ nfs_ls -liaR | grep tmp_staging | grep -v drwxr-xr-x
|
Root Cause
The initial implementation of NearSync on-demand PD hydration and auto promote in the Mesos layer only had a fixed number of retries for attempts to clear temporary files created in the LWS staging area. In some cases, certain error conditions led to the retries being exhausted; the files were left over, and manual reclamation of the staging areas is required. Until the manual cleanup is performed, the stale staging areas occupy space unnecessarily.
The workaround section provides a method to clear these files by using a custom script.
The solution for this issue is implemented in ENG-146852 https://jira.nutanix.com/browse/ENG-146852 (fixed in 5.10.10, 5.11.2, and later) by recording the state of staging areas persistently when the cleanup attempts fail. A background operation periodically verifies stale areas and cleans them up automatically.
Note: This is only effective for stale areas that happen post-upgrade to versions with the fix with older staging areas still requiring cleanup manually.
Additionally, ENG-285178 https://jira.nutanix.com/browse/ENG-285178 has been raised to have an NCC check that alerts of stale staging areas in the field.
Workaround
Note: If the cluster capacity is at 95 percent and Stargate logs show kDiskSpaceUnavailable, as an immediate relief to the cluster to allow new incoming writes, you can change Stargate gflag --stargate_disk_full_pct_threshold from 95 to 97 percent and remove any reserved space due to thin provisioning or manual reservations (see KB 4180 http://portal.nutanix.com/kb/4180).
WARNING: Below steps can cause an outage if not applied properly. Do not proceed with these steps without Sr. SRE or Support Tech Lead (STL) supervision.
High-Level Overview
Disable auto promote function on the cluster while you work on the cleaning of the staging area. This will stop the cluster from continuing to create stale staging areas if it is constantly being retried.
This step is needed if the cluster is still running the affected AOS version without a fix from ENG-146852 https://jira.nutanix.com/browse/ENG-146852 and still actively generating new leftover files in tmp_staging area (AOS versions before 5.10.10, 5.11.2, and 5.15 are affected).
If AOS has a fix from ENG-146852 https://jira.nutanix.com/browse/ENG-146852, this step can be skipped.
Clean the stale staging areas from the cluster by using custom scripts.
Wait for the Curator scan to finish to reclaim the free space.
Revert the auto promote gflag if it was changed in step 1, monitor the Cerebro logs for further auto promote operation failures, and investigate if any occur.
Detailed Description of the Procedure
If the cluster is still running an affected AOS version and the Cerebro logs confirm a fresh occurrence of kRetry for the PromoteToFullSnapshot action, then:
Disable auto promote function on the cluster while you work on the cleaning of the staging area. This will stop the cluster from continuing to create stale staging areas if it is constantly being retried.
For this, temporarily set gflag --cerebro_test_skip_auto_promote to true on all nodes:
NOTE: if there is no recent occurrence of Retry for PromoteToFullSnapshot action found in Cerebro.INFO logs - then we only need to clean up old files, and setting this gflag is not needed.
nutanix@cvm$ allssh 'links -dump http:0:2020/h/gflags?cerebro_test_skip_auto_promote=true'
Clean up stale staging areas from the cluster:
Download the following 2 scripts into one CVM in the cluster into ~/bin folder and verify md5sum is correct:
scan_near_sync_map_v1.py http://download.nutanix.com/kbattachments/9210/scan_near_sync_map_v1.py (md5: cddc730fcbdc2e9304d53363f18b4221)
clear_staging_area_v1.py http://download.nutanix.com/kbattachments/9210/clear_staging_area_v1.py (md5: b3649306673e4adbe046e87ed557ca7a)
Execute the first script to scan staging areas and redirect the output to a file:
nutanix@cvm:~/bin$ python scan_near_sync_map_v1.py > ~/tmp/staging_area.txt
A list of the staging areas found in the cluster is provided:
nutanix@cvm:~/bin$ cat ~/tmp/staging_area.txt
Filter the list of staging areas from the scan to only contain tmp_staging folders:
nutanix@cvm:~/bin$ grep tmp_staging ~/tmp/staging_area.txt > ~/tmp/stale_staging_area.txt
Note: Some folders in the staging area called latest_staging are used for inline hydration on the standby side and can be ignored as their space utilization is negligible and will not contribute to the space bloat of this issue. These will continue to be created and removed in the background.
Before proceeding with clearing them, ensure these are not currently in use by Cerebro. Take care not to remove staging files currently in use.
Save the nfs_ls output into a file:
nutanix@cvm:~$ nfs_ls -liaR > ~/tmp/nfs_ls-liaR.txt
Compare the paths from nfs_ls with the output from the scan to ensure they match. Use the following command to perform a diff on the two lists of directories. When the lists match, the output will resemble the example below. If the two lists do not match, return to step 1.
nutanix@cvm:~$ diff -up <(awk -F"/" {'print $4'} ~/tmp/nfs_ls-liaR.txt|sort -u) <(awk -F"/" {'print $5'} ~/tmp/staging_area.txt|sort -u)
--- /dev/fd/63 2020-10-23 10:11:53.178327657 +0200
Execute the commands to clean in a loop:
nutanix@cvm:~$ cd ~/bin
Verify that the staging area folders have been removed from the cluster:
nutanix@cvm:~$ nfs_ls -liaR > ~/tmp/nfs_ls-liaR_after_cleaup.txt
After cleaning the files in the staging area, the Curator scan needs to run to reclaim the free space. If the cluster is more than 85 percent full in storage capacity, the Curator should already be doing dynamic freespace scans, and no action is required than to wait for the space to be reclaimed. However, if cluster usage is well under 85 percent, you can invoke a manual full scan on the cluster or wait for the periodic Curator scans to run to reclaim the free space.
Once the free space is reclaimed, we need to revert the gflag --cerebro_test_skip_auto_promote to false (if changed on step 1).
nutanix@cvm:~$ allssh 'links -dump http:0:2020/h/gflags?cerebro_test_skip_auto_promote=false'
After the gflag is reverted, there is a chance that auto promote operation starts again and fails again, potentially causing further stale files in the staging area. At this point, we need to investigate the failure of the auto promote function.
To find out recent auto promote operation failure, run the following command.
nutanix@cvm:~$ allssh 'grep "action=PromoteToFullSnapshot" /home/nutanix/data/logs/cerebro.INFO | grep "Error = kRetry"'
Note: If the auto-promote function fails or cannot complete the operation, it restarts every 15 minutes.
Note: In a couple of cases, it has been seen that the "flow_data" container was receiving NearSync replication. By default, the flow_data container doesn't have compression enabled. Enabling compression on the flow_data container doesn't have any negative impact and can provide space savings if it is receiving or storing NearSync or Async snapshots. To enable compression on the flow_data container, ncli with the force=true option is required:
ncli container edit name=flow_data enable-compression=true force=true
|
KB1876
|
NCC Health Check: disk_id_duplicate_check
|
The NCC health check disk_id_duplicate_check checks for duplicate disk IDs.
|
The NCC health check disk_id_duplicate_check checks for duplicate disk IDs. This check was introduced in NCC 1.3. It returns a FAIL if it is not able to fetch a disk ID.
Running the NCC Check
You can run this check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or you can run this check separately:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_id_duplicate_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run daily by default.
This check does not generate an alert.
Sample Output
For Status: PASS
Running : health_checks hardware_checks disk_checks disk_id_duplicate_check
For Status: FAIL
Detailed information for disk_id_duplicate_check:
Output messaging
Description: Check for duplicate disk ids
Cause of Failure: Multiple disks with same disk id were found.
Resolution: Review KB 1876.
Impact: This can interfere with cluster functionality.
Alert Title: Duplicate disk ids present in the same cluster
Alert Smart Title: Duplicate disk ids present in the same cluster
Alert Message: Duplicate Disk IDs disk_id present in the nodes svm_ip_address of the cluster. This may impact normal cluster operation and maintenance, and requires further investigation. Please refer to KB 1876 to gather information to report to Nutanix Support for further assistance.
|
If the NCC check reports duplicate disk IDs, there may be a problem with the cluster configuration that needs further investigation. Debugging a duplicate disk ID issue is complex and requires assistance from the support team to verify whether it poses a potential problem. Following are the steps to log a case; include the output of the check along with the case:
Run the NCC check for duplicate disk IDs:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_id_duplicate_check
Run the following commands on any one of your CVMs (Controller VMs):
nutanix@cvm$ allssh list_disks
Provide the output to Nutanix Support https://portal.nutanix.com.
The NCC check for duplicate disk IDs can generate a false positive if the CVM on which the check fails is unreachable. CVM connectivity can be verified by pinging the IP address noted in the FAIL or ERR output from the check.
|
KB10529
|
Nutanix Kubernetes Engine - How to configure email alerts for a Karbon kubernetes cluster
|
How to configure email alerts for a Karbon kubernetes cluster
|
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. The NKE clusters created via Karbon don't display the Prometheus alerts outside of the Karbon K8s cluster Alerts page.following are the steps to get email notifications for a Karbon created Kubernetes cluster Alerts until they are integrated into Prism Central in future releases.The below prerequisites are needed to successfully run the script as instructed below:
Kubeconfig file of the targeted K8s cluster. See the Nutanix Karbon Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Karbon Downloading Kubeconfig section https://portal.nutanix.com/page/documents/details?targetId=Karbon-v2_2:kar-karbon-download-kubeconfig-t.html for instructions. A working SMTP server in the environment that allows the Karbon clusters to relay outgoing email.
|
To set up email alert notifications, the script below updates the config for the alertmanager secret and restarts the pods running inside the Karbon ntnx-system namespace.
Download the alertmanager_update http://download.nutanix.com/kbattachments/10529/alertmanager_update.tar.gz script to a Linux machine. Download a Kubeconfig https://portal.nutanix.com/page/documents/details?targetId=Karbon-v2_2:kar-karbon-download-kubeconfig-t.html file of the k8s cluster for which the email alerts need to be configured. Extract the downloaded script using:
tar xzvf alertmanager_update.tar.gz
Change directory into the extracted dir:
cd /path/to/dir/alertmanager_update/
Update the alertmanager_config.yaml to include the values of smtp_from: <your sender email address>, smtp_smarthost: <SMTP hostname:port> and -to: <email receiver>:
vi alertmanager_config.yaml
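For reference, the relevant portion of the file might look like the following. This is a minimal sketch of the standard Alertmanager fields with placeholder values; the full layout of the bundled alertmanager_config.yaml may differ:

global:
  smtp_from: "[email protected]"            # placeholder: your sender address
  smtp_smarthost: "smtp.example.com:25"        # placeholder: your SMTP host:port
receivers:
  - name: "email-alerts"
    email_configs:
      - to: "[email protected]"        # placeholder: the receiver address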
Run the update script as follows:
./update_alertmanager_config --kubeconfig_path /path/to/<clustername-kubectl>.cfg --alertmanager_cfg_path ./alertmanager_config.yaml
An example:
[root@servera-lab alertmanager_update]#./update_alertmanager_config --kubeconfig_path /tmp/calico1x-kubectl.cfg --alertmanager_cfg_path ./alertmanager_config.yaml
With successful communication to the SMTP relay, you should receive the first email from the Alertmanager for 'alertname = Watchdog' shortly. Notes:
Re-run this procedure if a K8s upgrade takes place (which resets the Alertmanager secret back to the default). According to the config parameters in alertmanager_config.yaml, new emails are expected to be triggered within 5 minutes, and for unresolved alerts (in firing state) a new email is sent every 12 hours (including the Watchdog, which is continuously in firing state by default). In case the SMTP server requires authentication, add the following fields under the smtp_smarthost line in the alertmanager_config.yaml
smtp_require_tls: true
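If the server also expects credentials, Alertmanager supports standard authentication fields in the same section; a minimal sketch with placeholder values (verify against the Prometheus documentation linked below):

smtp_auth_username: "alert-relay-user"    # placeholder
smtp_auth_password: "your-password"       # placeholder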
For further customization for the email receivers, content and alerts check Prometheus documentation https://prometheus.io/docs/alerting/latest/configuration/ for more details.
|
KB7356
|
LCM SATADOM upgrade failed with 'ata4: COMRESET failed (errno=-16)' error
|
LCM SATADOM upgrade failed with 'ata4: COMRESET failed (errno=-16)' error
|
Problem: This KB was created to track and analyze a new issue experienced during the SATADOM upgrade via LCM from FW version S560301N to higher versions, such as S670330N. As per ISB-090 https://docs.google.com/document/d/1xRYEshe-frshD9aBz73MNBroRZyuIxFkY-xk5Qg37OQ/edit#heading=h.4z6t03g9lz4g, SATADOMs may become bricked after the AC power cycle required to commit this firmware update. However, in this case Phoenix loses the ability to communicate with the SATADOM device right after the upgrade but before the AC power cycle is performed. This means that the node is still booted in Phoenix and is unable to execute the reboot_to_host script to restore the hypervisor's original boot configuration, because the backup file can no longer be read from the SATADOM.
Symptoms:
a. SATADOM FW upgrade task fails in Prism with the following error message:
Operation failed. Reason: Command (['/home/nutanix/cluster/bin/lcm/lcm_ops_by_phoenix', '102', '303', '8dd0649c-aa57-4d9b-afa7-0f274a22e78b']) returned 1.
b. Host's console shows the below error message on the Phoenix screen. The Phoenix does not reboot back to the hypervisor, stays in Phoenix prompt:
[root@phoenix ~]#
2. Example of matching ~/data/logs/lcm_ops.out logs on the LCM leader:
DEBUG: First stage of upgrade for entity 3IE3
Please collect all the info below and create a bug assigned to the hardware engineering team.
|
Gather the following info, then reboot the node to Phoenix again. Attempt to run python phoenix/reboot_to_host.py and see if the SATADOM comes back. Raise a new ENG if it is still unresponsive and reference that you tried to apply the contents of this KB.
1. Collect the output of the following commands on the SATADOM device, e.g. /dev/sda; it is highly likely that you will get an "I/O error, can't open device" error message for b. and c.:
a. lsscsi
2. Follow KB-7288 https://nutanix.my.salesforce.com/kA00e0000009CUZ?srPos=0&srKp=ka0&lang=en_US to collect the necessary data to analyze this issue.
3. Create a JIRA ticket and provide engineering all of the above collected information.
4. When dispatching the new SATADOM, mark the dispatch for failure analysis, following the process found at: Dispatch Process - A Step-by-Step Guide https://confluence.eng.nutanix.com:8443/display/SPD/Dispatch+Process+-+A+Step-by-Step+Guide#DispatchProcess-AStep-by-StepGuide-DispatchDetails(withFailureAnalysis)
|
KB16620
|
Why is Chartmuseum stuck in Pending State?
|
Why is Chartmuseum stuck in Pending State?
|
When deploying Kommander in a cluster deployed with the DKP vSphere provider, some users have reported that chartmuseum gets stuck in pending state.
In some cases, when describing the pod, it becomes evident that the chartmuseum persistent volume claim (PVC) is in pending status:
kubectl -n kommander get pv,pvc
NAME                                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS           AGE
persistentvolumeclaim/chartmuseum   Pending                                      vsphere-raw-block-sc   22m
This issue is usually encountered in environments where vCenter credentials are rotated: the vSphere CSI controller credentials become outdated, so the storage provisioner (csi.vsphere.vmware.com) is not able to authenticate with the vCenter API to create/delete volumes and attach/detach disks to and from the nodes in the cluster.
|
To confirm whether this is the cause of chartmuseum being stuck, describe the PVC named chartmuseum and check in the events whether the controller is having issues logging in to the vCenter; an example is shown below:
kubectl -n kommander describe pvc chartmuseum
Name:          chartmuseum
Namespace:     kommander
StorageClass:  vsphere-raw-block-sc
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
               volume.kubernetes.io/selected-node: sortega-dkp230-rhel79-airgap-fips-md-0-6b995b565-dwpfl
               volume.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       chartmuseum-5fdf74f69d-zn96p
Events:
  Type     Reason                Age                   From                                                                                                 Message
  ----     ------                ----                  ----                                                                                                 -------
  Normal   WaitForFirstConsumer  23m (x2 over 23m)     persistentvolume-controller                                                                          waiting for first consumer to be created before binding
  Normal   ExternalProvisioning  3m18s (x82 over 23m)  persistentvolume-controller                                                                          waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
  Normal   Provisioning          3m14s (x14 over 23m)  csi.vsphere.vmware.com_vsphere-csi-controller-77c86f86d4-9wn9r_e18473a9-93a3-43dd-ae73-476794819d1d  External provisioner is provisioning volume for claim "kommander/chartmuseum"
  Warning  ProvisioningFailed    3m10s (x14 over 23m)  csi.vsphere.vmware.com_vsphere-csi-controller-77c86f86d4-9wn9r_e18473a9-93a3-43dd-ae73-476794819d1d  failed to provision volume with StorageClass "vsphere-raw-block-sc": rpc error: code = Internal desc = failed to get shared datastores in kubernetes cluster. Error: ServerFaultCode: Cannot complete login due to an incorrect user name or password.
To remediate this, the user should update the vCenter credentials in the secret named vsphere-config-secret in the vmware-system-csi namespace:
kubectl get secret vsphere-config-secret -n vmware-system-csi -ojsonpath='{.data.\csi-vsphere\.conf}' | base64 -d
[Global]
cluster-id = "default/dkp230-rhel79-airgap-fips"
thumbprint = ""
[VirtualCenter "<VCENTER-ADDRESS>"]
user = "<USER>"
password = "<PASSWORD>"
datacenters = "dc1"
[Network]
public-network = "Airgapped"
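Once a corrected configuration has been prepared, one way to replace the secret contents is to write it to a local file and re-create the secret from it; a hedged sketch, assuming the file is saved locally as csi-vsphere.conf:

kubectl create secret generic vsphere-config-secret \
  --from-file=csi-vsphere.conf \
  -n vmware-system-csi --dry-run=client -o yaml | kubectl apply -f -

The vSphere CSI controller pod may also need a restart to pick up the new credentials.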
Note: upon updating the credentials in the aforementioned secret, the operator may need to delete the chartmuseum PVC before chartmuseum will deploy correctly.
kubectl delete pvc chartmuseum -n kommander --kubeconfig <path-to-kubeconfig>
persistentvolumeclaim "chartmuseum" deleted
|
KB6848
|
NX Hardware [Memory] – Memory showing incorrect size rather than the real physical size
| null |
Memory "M393A4K40BB0-CPB" is showing as 16384 MB in IPMI console when it should be 32768 MB.
NCC check ipmi_sensor_threshold_check returns the following:
Node x.x.x.x:
NCC plugin show_hardware_info shows memory size is different than others:
nutanix@cvm$ ncc hardware_info show_hardware_info
Get the DIMM hardware information from IPMI console by logging in to IPMI "http://<IPMI_IP>/" with ADMIN account and navigating to "System"--"Hardware Information"--"DIMM".
|
Get the real size of the product by part number "M393A4K40BB0-CPB" from Google or SFDC. It is "C/X-MEM-32GB-DDR4-R". Dispatch a new DIMM for replacement. For more information on DIMM replacement, check KB 3703 https://portal.nutanix.com/kb/3703.
|
KB11322
|
NCC Health Check: disk_configuration_vs_intent_consistency_check
|
The NCC health check plugin disk_configuration_vs_intent_consistency_check reports whether SED disk configuration in zeus is consistent with the intent in regard to different states of SED protection and KMS server availability.
|
The NCC health check plugin disk_configuration_vs_intent_consistency_check reports whether SED disk configuration in zeus is consistent with the intent in regard to different states of SED protection and Key Management Server (KMS) availability.For more information on Data-at-Rest encryption with SED, see Nutanix Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_0:wc-security-data-encryption-sed-wc-c.html.Running the NCC CheckThe check can be run as part of the complete NCC check:
ncc health_checks run_all
Or individually as:
ncc health_checks self_encrypting_drive_checks disk_configuration_vs_intent_consistency_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.This check is scheduled to run every 7 days by default.This check will generate the alert A101080 as follows:
Warning: if SED disk(s) are found having tentative passwords
Critical: when cluster and SED protection do not match, i.e. the combination violates the intent
Sample output
For status: PASS
nutanix@cvm:~$ ncc health_checks self_encrypting_drive_checks disk_configuration_vs_intent_consistency_check
For status: WARN
Running : health_checks self_encrypting_drive_checks disk_configuration_vs_intent_consistency_check
For status: FAIL
Running : health_checks self_encrypting_drive_checks disk_configuration_vs_intent_consistency_check
or
Detailed information for disk_configuration_vs_intent_consistency_check:
Output messaging
[
{
"Check ID": "Check if SED disk configuration is consistent with the intent"
},
{
"Check ID": "Inconsistency in disk's zeus configuration and the intent"
},
{
"Check ID": "This issue is usually seen when the KMS is switched off when the disks are still in protected state.\t\t\tPlease check whether the KMS is on or off. If the issue persists, contact Nutanix Support."
},
{
"Check ID": "Data unavailability requiring SRE intervention."
},
{
"Check ID": "A101080"
},
{
"Check ID": "SED Disk Configuration Misconfigured"
},
{
"Check ID": "Either the cluster had SED protection enabled and the disks did not have a password set\t\t\tOR the cluster did not have SED protection enabled and disks had password set OR found tentative passwords."
}
]
|
CAUTION: Do not perform any upgrades, CVM or node reboots, or HW maintenance before resolving this issue. Consider engaging Nutanix Support https://portal.nutanix.com/
|
KB3621
|
How to import an OVA image to AHV cluster
|
From AOS 5.18, OVA files can be natively imported and exported through Prism Central. This article provides manual steps to complete the task.
|
From AOS 5.18, OVA files can be natively imported and exported through Prism Central. Refer to the OVA Management https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc:mul-ova-manage-pc-c.html chapter of the Prism Central Infrastructure Guide. For manual steps, proceed with the solution section below.
|
Manual steps
The following figure summarizes the steps to create a new VM from an OVA file.
Pre-requisite for moving Linux VMs/Windows VMs from ESXi to AHV
Install VirtIO Drivers from the following:
Migration Guide: Migrating Linux VMs from ESXi to AHV https://portal.nutanix.com/page/documents/details?targetId=Migration-Guide-AOS-v59:vmm-vm-migrate-linux-ahv-c.htmlAHV Admin Guide: Virtual Machine Management - Windows VM Provisioning https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_6:Windows%20VM%20Provisioning
Note: If the image is already exported without VirtIO driver installed, refer to KB 5338 https://portal.nutanix.com/kb/5338 - "AHV | How to inject storage VirtIO driver if it was not installed before migration to AHV" to inject the VirtIO driver after creating the VM with following procedures.
To create a new VM from an OVA file, do the following:
Untar the OVA file. For example, download and install 7-Zip http://7-zip.org to extract the OVA file. An OVA file is an archive file in the tar format.
Check the VM settings in the .ovf file. The OVA file has a .ovf file that is in an XML format. Open the .ovf file in a text editor and check the CPU, memory, and other VM settings.
Upload the VMDK files. The OVA file also has the VMDK files. A VMDK file is a disk image. You can upload the VMDK files using the Prism web console or nCLI (a CLI sketch follows the Prism steps below).
Do the following to upload the VMDK files using the Prism web console:
Pull down the gear icon and click Image Configuration.
In the Image Configuration screen, click + Upload Image.
In the Create Image screen, specify the necessary details. Select DISK in the IMAGE TYPE drop-down list.
Click Save.
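Alternatively, the image can be created from the command line on a CVM; a hedged sketch using acli, assuming the VMDK has already been staged on a container reachable over NFS (the image name, container, and path are placeholders):

nutanix@cvm$ acli image.create my-ova-disk source_url=nfs://127.0.0.1/<container>/<path>/disk1.vmdk container=<container> image_type=kDiskImage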
Create a new VM. Configure the VM with settings similar to the settings in the .ovf file (step 2). You must use the image files you uploaded in step 3 to add the disks. In the Add Disk screen, select Clone from Image Service from the Operation drop-down list. If the VM fails to boot, try changing the bus type to IDE.
For more details, see the following KB articles:
KB 2627: Migrating Linux VMs to Acropolis https://portal.nutanix.com/kb/2627 KB 2622: Transferring virtual disks into an Acropolis cluster https://portal.nutanix.com/kb/2622
|
KB17134
|
NDB - SSH Keys for era & erauser not renewed for more than 1 year leading to a critical security risk
|
SSH Keys for era & erauser not renewed for more than 1 year leading to a critical security risk.
|
NDB requires the end-user to register a valid SSH key while provisioning a DB server VM from the NDB WebUI as part of the automated provisioning process. This feature enables the end-user to connect to the provisioned DBVM via SSH from their application. However, once the SSH keys for the era and erauser accounts have not been renewed for more than 1 year, they are flagged as a critical security risk during the DBVM's security scans. NDB does not provide a process for rotating the SSH keys for these two users. However, it has been found that removing the offending SSH keys has no effect on NDB workflows involving the DBVM, so this is a useful short-term approach to unblock the customer.
|
As a short-term workaround, delete the contents of the authorized_keys file: .ssh/authorized_keys. Note: back up the file before deleting its contents as a precaution.
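A minimal shell sketch of that workaround, run as the affected user (era or erauser) on the DB server VM:

# keep a backup copy, then truncate the file in place
cp ~/.ssh/authorized_keys ~/.ssh/authorized_keys.bak
> ~/.ssh/authorized_keys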
|
KB5618
|
CVM lost networking in VMWare ESXi after changing VMKernel settings
|
CVM was not reachable from other CVMs and services did not come online after changing VMkernel settings
|
The CVM was not reachable via ping from other CVMs, and services did not come online after changing VMkernel settings. The affected CVM might show this message in genesis.log:
2018-05-22 15:25:07 INFO ipv4config.py:841 Discovered network information: hwaddr 00:0c:29:f6:d0:57, address 10.XX.X.118, netmask 255.255.254.0, gateway 10.XX.X.4, vlan None
|
This is a known VMware bug with no fix other than the following workaround:
1. Note the original settings of Network Adapter 1; verify the "connected" and "connect at power on" configuration for the original port group, named "public" in this example (the name can be anything). Click OK.
2. Disconnect and change the network label to a different network label, e.g. "Backplane Network" (the name can be anything). Click OK.
3. Change it back to the original settings of Network Label "Public" and connected.
4. The CVM should now be reachable. A CVM reboot might be required.
|
KB16091
|
How to set a Delegate Approver
| null |
Guide on setting a Delegate Approver in SFDC, including:
1. FAQ
2. Steps for assigning a Delegate Approver
3. How to enable Delegate Approver email notifications
|
For more information, click here https://docs.google.com/document/d/1cdCOC8jtTbV9T6LbKOB4PM-x1JrhQ3706HsTe221REc/edit#heading=h.o4lalfpvhxq.Point of contact: [email protected]
|
KB5594
|
Alert - A1092 - Unsupported Configuration For Redundancy Factor 3
|
This article describes the steps for troubleshooting an Alert A1092.
|
This KB describes how to troubleshoot the alert A1092 - unsupported configuration for redundancy factor 3 (UnsupportedConfigurationForFT2).For an overview on alerts and whom to contact when an alert case is raised, see KB 1959 https://portal.nutanix.com/kb/1959.
Alert Overview
The unsupported configuration for redundancy factor 3 alert is generated when a Controller VM no longer has the necessary amount of RAM to support Redundancy Factor 3. In this unsupported configuration, the cluster performance might be affected. In case of multiple Controller VMs with the same condition, the cluster might become unable to service I/O requests. This alert is generated due to one of the following reasons.
A new Controller VM containing less than the necessary amount of RAM to support Redundancy Factor 3 is recently added to the cluster.
The RAM on a Controller VM was reduced without considering that the cluster is configured for Redundancy Factor 3.
Sample Alert
alert_time: Mon Dec 30 2017 14:22:03 GMT-0800 (PST)
Where [N] = other feature memory requirements (e.g. dedup) + 8 GB, up to 32 GB.
Output messaging
[
{
"Check ID": "Unsupported Configuration For Redundancy Factor 3."
},
{
"Check ID": "To support 'Redundancy Factor 3' feature all controller VMs in the cluster must meet the minimum requirements for RAM."
},
{
"Check ID": "Increase RAM on the Controller VM to meet the minimum requirement. Contact Nutanix support for assistance."
},
{
"Check ID": "Cluster performance may be significantly degraded. In the case of multiple nodes with the same condition, the cluster may become unable to service I/O requests."
},
{
"Check ID": "A1092"
},
{
"Check ID": "Unsupported Configuration For Redundancy Factor 3"
},
{
"Check ID": "Controller VM service_vm_id with IP address service_vm_external_ip has actual_ram_size_gbGB RAM which does not meet the configuration requirement of min_ram_required_gbGB RAM to support 'Redundancy Factor 3' feature."
}
]
|
The AOS Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:arc-redundancy-factor3-c.html describes the Redundancy Factor 3 feature and requirements.
The Acropolis Advanced Administration Guide https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS:app-cvm-memory-config-feats-r.html describes the Controller VM memory requirements for this feature.
Perform the following steps for troubleshooting.
Determine which Controller VM needs more memory by reviewing the alert. Change the Controller VM memory for your hypervisor as described in the Prism Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-controller-vm-memory-increase-wc-t.html.
To check the current Redundancy Factor
SSH to any Controller VM in the cluster.
ncli> cluster get-redundancy-state
To set the desired Redundancy Factor
SSH to any Controller VM in the cluster.
ncli> cluster set-redundancy-state desired-redundancy-factor=<desired redundancy factor>
|
KB13909
|
Domain fault tolerance for Oplog component in Prism periodically flips between 1 and 2
|
Oplog fault tolerance can flip between 1 and 2 when a container RF was increased after cluster creation.
|
This issue can be seen on clusters with Redundancy Factor 3 and Fault Tolerance 2 which were initially created as FT 1 and Redundancy Factor 2 and then converted.Customers may notice that the "Data Resiliency Status" is not OK due to the Oplog component fault tolerance flipping between 1 and 2 periodically.Note: This has been observed mostly for the rack domain type, but can also happen for both node and rackable-unit domains.
Symptoms
All the containers in the cluster are RF3 so the expected supported number of failures should be 2.
nutanix@cvm~$ ncli ctr ls | grep "Replication Factor" | sort | uniq -c
Note: If just one container in the cluster is RF2, the FT will be set at 1 permanently (without flipping between 1 and 2).
If all containers are RF3 but the Oplog Fault Tolerance flips frequently between 1 and 2, then additional investigation is required to confirm this issue.
Example oplog component status from "ncli cluster get-domain-fault-tolerance-status type=rack" output on FT2 cluster with all containers on RF3:
Domain Type : RACK
Curator info logs will show messages stating that the FT changed from 1 to 2, and nothing for changing it from 2 to 1.
I20221010 16:56:39.675974Z 26418 curator_fault_tolerance_info.cc:1104] Maximum faults tolerated for a component changed; domain_type=400, component_type=2, old_max_faults_tolerated=1, max_faults_tolerated=2, message=Based on placement of oplog episodes the cluster can tolerate a maximum of 2 rack failure(s)
MaxOplogFaultTolerance and NumNonFaultTolerantOplogEntries are always at 2 and 0, respectively, in Curator.
I20221010 23:39:46.257903Z 4567 mapreduce_job.cc:574] MaxOplogFaultTolerance[200] = 2
In Stargate info logs, we can see that Oplog FT is reduced for the rack and rack_unit domains, from 2 to 1:
nutanix@cvm~$ allssh 'zgrep -h "fault tolerance state" data/logs/stargate.n*INFO*| tail'
In Stargate info logs, within a few seconds before the FT changes from 2 to 1, it should be possible to find messages about a counters-* vdisk being hosted.
When hosting these counters-* vdisks, Stargate prints messages showing that the oplog replication factor is 2 (look for this part: "{ oplog_params { replication_factor: 2 skip_oplog: true }").
I20221017 13:35:50.136297Z 11666 admctl_host_vdisk_op.cc:720] vdisk_id=14488629956 caller=23 operation_id=1816153790 Pithos config lookup for vdisk name NFS:1:0:2365 returned vdisk_id: 14488629956 vdisk_name: "NFS:1:0:2365" vdisk_size: 4398046511104 container_id: 12 params { oplog_params { replication_factor: 2 skip_oplog: true } random_io_tier_preference: "DAS-SATA" random_io_tier_preference: "SSD-SATA" random_io_tier_preference: "SSD-PCIe" sequential_io_tier_preference: "DAS-SATA" sequential_io_tier_preference: "SSD-SATA" sequential_io_tier_preference: "SSD-PCIe" } creation_time_usecs: 1625125879187280 vdisk_creator_loc: 14488624745 vdisk_creator_loc: 14488629955 vdisk_creator_loc: 16 nfs_file_name: "counters-14488624745" chain_id: "\234^a\"\000\363MS\210U\032u1\372\001a" vdisk_uuid: "[X\000.W\333I\360\242]\303q\3129\010\002" last_updated_pithos_version: kChainIdKey
This oplog replication settings override comes from the vdisk configuration and can be checked with vdisk_config_printer.
Example showing that counter vdisk has an override for oplog params with replication factor 2 set for the oplog:
nutanix@cvm~$ vdisk_config_printer -id=14488629956
In the cluster there is one counters vdisk per node, so the number of vdisks with a static oplog replication factor should be equal to the number of counters-* vdisks in the cluster.
In the example below, we have 21 vdisks with a static oplog RF, and 21 counters-* vdisks in the NutanixManagementShare, one for each node in the cluster.
nutanix@cvm~$ vdisk_config_printer | grep -c "replication_factor: 2"
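To compare against the number of counters-* vdisks, the same tool can be filtered on the vdisk name, which vdisk_config_printer reports in the nfs_file_name field (a hedged sketch; both counts should match):

nutanix@cvm~$ vdisk_config_printer | grep -c 'nfs_file_name: "counters-'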
Each time one of these vdisks is hosted by Stargate, it reduces the Oplog FT status from 2 to 1. Then, during a Curator scan, the Oplog FT changes back to 2, until the next time one of the counters-* vdisks is hosted by Stargate.
|
This behavior happens because the counters vdisks are created soon after the cluster is created, with a static oplog replication_factor in the vdisk config that matches the replication factor of the NutanixManagementShare container at that time. If a cluster was created as RF2 (FT1) and later changed to RF3 (FT2), the static oplog override setting on the counters vdisks is not updated and does not match the RF3 configured for the container. Moreover, these counters-* vdisks are statically configured to skip the oplog. However, the oplog is not skipped for a brief time while the vdisk is being hosted (which happens by design), and a temporary oplog episode is created with only two replicas instead of the 3 required for the container. This is the reason Stargate reduces the FT status for Oplog from 2 to 1. Resolution: ENG-506502 https://jira.nutanix.com/browse/ENG-506502 was created to implement a fix for the issue and to remove the static replication_factor on the counters vdisks. This ENG is fixed in AOS 6.7 and later.
|
KB1484
|
HW: NX GPU Troubleshooting (ESXi)
|
Troubleshooting GPU based issues on ESXi Hypervisor
|
Certain NX series nodes can be configured with one or two Nvidia GPU cards that allow video rendering to be passed through to VMs. This document is meant to assist with troubleshooting issues related to GPU functionality.
Additional documentation: NVIDIA vGPU on Nutanix. https://portal.nutanix.com/page/documents/solutions/details?targetId=TN-2046-vGPU-on-Nutanix:TN-2046-vGPU-on-Nutanix
|
There are four ways to configure video rendering for use with VMs in ESXi.
Soft 3d -- Software-based 3d rendering. It uses CPU for rendering and is slower than GPU.sVGA -- GPU-based rendering. Multiple GPUs can be shared across multiple virtual machines.dVGA -- GPU-based rendering. A single GPU is directly passed through to the VM.vGPU -- GPU-based rendering. The VM has the NVIDIA driver and has direct access to a subset of the GPU's memory.
sVGA is more flexible in the sense that you can provision multiple VMs on GPU cores, and you can also tune the amount of memory used within the GPU core. However, certain applications may have trouble detecting and utilizing it. With sVGA, there are 3 options you can configure on the VMs:
Automatic -- Uses hardware acceleration if there is a capable and available hardware GPU in the host in which the virtual machine is running. However, if a hardware GPU is not available, the Automatic option uses software 3D rendering for any 3D tasks.Software -- Only uses software 3d renderingHardware -- Only uses a GPU if available. If a GPU is not available, the VM may not be able to start.
dVGA passes through a single GPU to a single VM. This is less flexible than sVGA, but seems to have better application compatibility. dVGA will also utilize the nvidia driver (which has to be installed directly on the VM) rather than the built-in sVGA driver.
vGPU is only available for NVIDIA and requires the installation of a different driver on the ESXi host. The driver will be named similarly to "NVIDIA-vGPU-kepler-VMware_ESXi_6.0_Host_Driver_352.54-1OEM.600.0.0.2494585.vib".
vGPU is a combination of an ESXi driver .vib and some additional services that make up the GRID manager. This allows dynamic partitioning of GPU memory and works in concert with a GRID-enabled driver in the virtual machine. The end result is that the VM runs a native NVIDIA driver with full API support (DX11, OpenGL 4) and has direct access (no Xorg) to the GPU hardware, but is only allowed to use a defined portion of the GPU's memory. Shared access to the GPU's compute resources is governed by the GRID manager. This yields performance nearly identical to vDGA without the PCI pass-through and its accompanying downsides. vMotion remains a 'no', but VMware HA and DRS do work.
Useful commands
For logs, vmkernel.log will often detail issues with drivers or the GPU itself.
Note: On initial install, the ESXi host will likely include an incorrect NVIDIA driver. From "esxcli software vib list", the driver would appear similar to "NVIDIA-vGPU-VMware_ESXi_6.0_Host_Driver". Note that 'kepler' is not reflected in the VIB name. To update the NVIDIA GPU VIB, you must uninstall the currently installed VIB and install the new VIB. Refer to VMware KB 2033434 https://kb.vmware.com/kb/2033434 for details.
[
{
"Goal": "Confirming GPU installation",
"Command": "lspci | grep -i display",
"Component": "Hypervisor"
},
{
"Goal": "Print out which VMs are using which GPUs",
"Command": "gpuvm",
"Component": "Hypervisor"
},
{
"Goal": "Confirming GPU configuration",
"Command": "esxcli hardware pci list -c 0x0300 -m 0xff",
"Component": "Hypervisor"
},
{
"Goal": "Check if Xorg is running",
"Command": "/etc/init.d/xorg status",
"Component": "Hypervisor"
},
{
"Goal": "Manually start Xorg",
"Command": "/etc/init.d/xorg start",
"Component": "Hypervisor"
},
{
"Goal": "Check Xorg logging",
"Command": "cat /var/log/Xorg.log | grep -E \"GPU|nv\"",
"Component": "Hypervisor"
},
{
"Goal": "Verify the VIB installation",
"Command": "esxcli software vib list | grep NVIDIA",
"Component": "Virtual GPU Manager/Resource Manager"
},
{
"Goal": "Confirming the VIB is loading",
"Command": "esxcfg-module -l | grep nvidia",
"Component": "Virtual GPU Manager/Resource Manager"
},
{
"Goal": "Manually load the VIB",
"Command": "esxcli system module load -m nvidia",
"Component": "Virtual GPU Manager/Resource Manager"
},
{
"Goal": "Verify the module is loading",
"Command": "cat /var/log/vmkernel.log | grep NVRM",
"Component": "Virtual GPU Manager/Resource Manager"
},
{
"Goal": "Check the vGPU management",
"Command": "nvidia-smi",
"Component": "Virtual GPU Manager/Resource Manager"
}
]
|
2023-06-05T10:58:45.205804-05:00 NTNX-22SH6M300266-C-CVM kernel: [ 1312.270764] vmd 0000:13:00.0: PCI_INTERRUPT_LINE read is non zero 0xff[
| null | null | null | null |
""Latest Firmware"": ""14.27.1016""
| null | null | null | null |
KB6854
|
Linux Boot Error - NetworkManager System Settings was not provided by any .service files - Move
|
Linux Boot Error - NetworkManager System Settings was not provided by any .service files - Move
|
After Move migrated a Linux VM to AHV, the VM hung during boot with the following error:
process 1788: WARNING **: get_all_cb: couldn't retrieve system settings properties: (2) The name org.freedesktop.NetworkManager System Settings was not provided by any .service files
|
To resolve the issue on the AHV side
Boot into single-user mode on the VM and run the following commands to uninstall NetworkManager:
#service NetworkManager stop
To remove NetworkManager without removing dependencies (DO NOT REMOVE DEPENDENCIES), use the rpm method. To list the NetworkManager packages, run the following command (there should be 3, but there could be more):
rpm -qa | grep NetworkManager
Then remove each individual package by using the command below
#rpm -e packagename --nodeps (--nodeps only removes the package, NOT the dependencies)
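If several packages are present, the two commands above can be combined into a loop; a minimal sketch:

# remove every installed NetworkManager package, leaving dependencies in place
for pkg in $(rpm -qa | grep NetworkManager); do rpm -e --nodeps "$pkg"; done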
The Linux system will boot with the same error but will automatically reboot and start successfully. To prevent the issue from happening on AHV, run the above commands on the VM before it is migrated using Move. Once it is migrated, it should start up successfully on first boot.
|
KB12194
|
Unreferenced Nutanix Disaster Recovery snapshots taking up space
|
This article describes an issue where unreferenced Nutanix Disaster Recovery snapshots are taking up space.
|
NOTES:
This KB is part of KB 12172 - Stale/leaked snapshots in Nutanix http://portal.nutanix.com/kb/12172. If the symptoms below do not apply, verify the content of the main KB for a different match.Nutanix Disaster Recovery (DR) was formerly known as Leap.
When changing from Nutanix DR Nearsync to syncrep or async replication and vice-versa, some of the previous snapshots may not be completely cleaned up.
Checking on Cerebro page,
Finding Cerebro Leader:
CVM$ panacea_cli show_leaders | grep cerebro
On the Cerebro Leader/Slave, run CVM$ links http:0:2020 -> Navigate to Cerebro Master (if connected to a Slave) -> Entity Centric -> Protected Entities. VMs may show as "Unknown" and still have a PD but with 0 snapshots. When checking the NFS namespace, there would be snapshots referenced as below:
To get the disk uuid(s) for this "Unknown" VM,
nutanix@cvm$ acli vm.get 6890c6af-22be-4275-8a33-56e7514c057f | grep vmdisk_uuid
Now, verify that the vmdisk UUIDs match the output from nfs_ls,
nutanix@cvm$ nfs_ls -liaR /CTN-A | grep 17260c11-295b-4abb-a36f-ddec365cea2c
KB 12172 - Stale/leaked snapshots in Nutanix http://portal.nutanix.com/kb/12172 contains a script that assists in identifying this type of unreferenced snapshots automatically.
Refer to the step "Automated script to quickly detect known stale snapshots types" section of the KB 12172 - Stale/leaked snapshots in Nutanix http://portal.nutanix.com/kb/12172 to identify stale snapshots.
|
A TH is required to clean up the unreferenced snapshots on Cerebro using specific scripts. If the snapshots are not too old, collect a log bundle and open an ENG ticket for RCA.
|
KB8460
|
Alert- A130174 - RecoveryPointReplicationFailed
|
Investigating RecoveryPointReplicationFailure issues on a Nutanix cluster
|
This Nutanix article provides the information required for troubleshooting the alert RecoveryPointReplicationFailed for your Nutanix cluster.
Alert Overview
After configuring the categories and protection policies on the on-prem PC (Prism Central), if the network connectivity between the on-prem PC and the availability zones (on-prem PC or Xi-Leap) is not healthy, you may see the alert "Replication of Recovery Point failed".
Sample Alert
Block Serial Number: XXXXXXXXXXXX
From NCC 4.6 onwards
Block Serial Number: XXXXXXXXXXXX
Output messaging
From NCC 4.6 onwards.
From NCC 4.6.3 onwards.
[
{
"Check ID": "Recovery Point replication failed."
},
{
"Check ID": "Various."
},
{
"Check ID": "Retry the Recovery Point replication operation."
},
{
"Check ID": "Recovery Point will be unavailable for recovery at the recovery location."
},
{
"Check ID": "A130174"
},
{
"Check ID": "Recovery Point Replication Failed"
},
{
"Check ID": "Failed to replicate recovery point created at: 'recovery_point_create_time' UTC of the VM: 'vm_name' to the recovery location: availability_zone_physical_name."
},
{
"Check ID": "130174"
},
{
"Check ID": "VM Recovery Point replication failed."
},
{
"Check ID": "VM Recovery Point will be unavailable for recovery at the recovery location."
},
{
"Check ID": "A130174"
},
{
"Check ID": "VM Recovery Point Replication Failed"
},
{
"Check ID": "Failed to replicate VM recovery point created at:'{recovery_point_create_time}' UTC of the VM: '{vm_name}' to the recovery location: {availability_zone_physical_name}"
},
{
"Check ID": "Network Connectivity issues between the Primary and the Recovery Availability Zone. Check network connection between the Primary and the Recovery Availability Zone."
},
{
"Check ID": "Data Protection and Replication service is not working as expected. The service could be down. Contact Nutanix support."
},
{
"Check ID": "VM migration process is in progress. Retry the VM Recovery Point replication operation after the migration is complete."
},
{
"Check ID": "If NearSync is enabled, the container name on remote cluster should match the one on source cluster hosting the VM. Create container(s) on which VM is running with the same name on the remote cluster."
},
{
"Check ID": "Virtual IP address has not been configured on the remote cluster. Configure the Virtual IP address and then retry the Recovery Point replication operation."
},
{
"Check ID": "Remote clusters are unhealthy. For a manually taken VM Recovery Point, retry replication again. For a scheduled VM Recovery Point replication, ensure all the remote clusters are healthy. Then wait for the scheduled Recovery Point replication."
},
{
"Check ID": "VM Recovery point was created for an entity having datasource-backed virtual disks. Retry the VM Recovery Point replication operation after storage migration for the datasource backed virtual disks is complete."
},
{
"Check ID": "Remote cluster is in read-only state.Remote Availability Zone does not support PowerPC architecture.Replication of VM Recovery Point with dedup entity to a Recovery Availability Zone without dedup support is not allowed.Remote Availability Zone does not support Volume Groups.Replication of Recovery Point with VMs having attached File Level Restore (FLR) disks to a Recovery Availability Zone without FLR support is not allowed.\n\n\t\t\tResolve the stated reason for the failure. If you cannot resolve the error, contact Nutanix support."
},
{
"Check ID": "Replication to the remote site have been suspended for an entity by the user through CLI. Remove the suspended replication timer associated with the entity using the CLI."
},
{
"Check ID": "130174"
},
{
"Check ID": "VM Recovery Point replication failed."
},
{
"Check ID": "VM Recovery Point will be unavailable for recovery at the recovery location."
},
{
"Check ID": "A130174"
},
{
"Check ID": "VM Recovery Point Replication Failed"
},
{
"Check ID": "Failed to replicate VM recovery point created at:'{recovery_point_create_time}' UTC of the VM: '{vm_name}' to the recovery location: {availability_zone_physical_name}"
},
{
"Check ID": "Network Connectivity issues between the Primary and the Recovery Availability Zone. Check the network connection between the Primary and the Recovery Availability Zone."
},
{
"Check ID": "Data Protection and Replication service is not working as expected. The service could be down. Contact Nutanix support."
},
{
"Check ID": "VM migration process is in progress. Retry the VM Recovery Point replication operation after the migration is complete."
},
{
"Check ID": "If NearSync is enabled, the container name on remote cluster should match the one on source cluster hosting the VM. Create container(s) on which VM is running with the same name on the remote cluster."
},
{
"Check ID": "The virtual IP address has not been configured on the remote cluster. Configure the Virtual IP address and then retry the Recovery Point replication operation."
},
{
"Check ID": "Remote clusters are unhealthy. For a manually taken VM Recovery Point, retry replication again. For a scheduled VM Recovery Point replication, ensure all the remote clusters are healthy. Then wait for the scheduled Recovery Point replication."
},
{
"Check ID": "VM Recovery point was created for an entity having datasource-backed virtual disks. Retry the VM Recovery Point replication operation after storage migration for the datasource backed virtual disks is complete."
},
{
"Check ID": "Remote cluster is in a read-only state.Remote Availability Zone does not support PowerPC architecture.Replication of VM Recovery Point with dedup entity to a Recovery Availability Zone without dedup support is not allowed.Remote Availability Zone does not support Volume Groups.Replication of Recovery Point with VMs having attached File Level Restore (FLR) disks to a Recovery Availability Zone without FLR support is not allowed.\n\t\t\tResolve the stated reason for the failure. If the issue persists after the next replication schedule, contact Nutanix Support."
},
{
"Check ID": "Replication to the remote site has been suspended for an entity by the user through CLI. Remove the suspended replication timer associated with the entity using the CLI."
},
{
"Check ID": "The replication target site may not support Consistency Group Recovery Points. Upgrade the AOS version of the target cluster to version 6.1 or higher."
},
{
"Check ID": "VM is part of a Consistency Group and Nutanix DRaaS Remote Availability Zone does not support Consistency Groups. Remove the VM from the Consistency Group or update Protection Policy mapped to Nutanix DRaaS Remote Availability Zone."
}
]
|
Troubleshooting and Resolving the Issue
For both on-prem and Xi-Leap use cases, gather log bundles on both Prism Central and the Prism Element cluster hosting the affected VM. See "Collecting Additional Information".
If the target availability zone is an on-prem PC, verify network connectivity between the local on-prem PC and the target on-prem PC.
Confirm that the required ports are open on the firewall.
If the target availability zone is a Xi-Leap PC, follow the steps given below:
Confirm the IPSec and eBGP status on the Xi side.
Investigate if there is any other networking issue on the on-prem side.
Check if you can ping other external servers - try pinging google.com http://google.com/. Try pinging my.nutanix.com http://my.nutanix.com/.
Check the Xi-Leap VPN status - Log in to the Xi-Leap PC -> Click on "Virtual Private Clouds" -> Click on Production -> Select the VPN tab. Check the status of the following on the page:
IPSec Status should show Connected.
EBGP Status should show Established.
Xi Gateway Status should show On. This could sometimes show Off if NTP on the VPN VM is not running; if you see Off, check whether NTP is running on the VPN appliance.
Xi Gateway Routes should show various routes (verify that you are seeing the subnet route received from the on-prem side).
Look at the categories and the protection policy configured on the on-prem side and verify the configurations. Verify the network connectivity between the on-prem PC and Xi-Leap PC.
Try to ping the LB IP from the on-prem PC.
nutanix@CVM$ ping x.x.x.x
Check port connectivity for ports 1034 and 1031.
nutanix@CVM$ wget --no-check-certificate https://x.x.x.x:1034
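Both ports can also be tested with nc from the on-prem PC, where x.x.x.x is the same LB IP placeholder used above:

nutanix@CVM$ nc -zv x.x.x.x 1034
nutanix@CVM$ nc -zv x.x.x.x 1031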
Log in to the Xi-Leap PC and verify whether you see any VMs listed in the categories and whether they are syncing. Try to manually replicate the entity, i.e. go to the on-prem PC -> Virtual Infrastructure -> Recoverable entity -> Click on any entity -> Replicate
Note: An alert stating that the replication is a full replication might be triggered when doing the above step. Engineering is aware of the improper messaging in the alert, which will be resolved in future releases.
If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach them to the support case.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Run NCC health checks and collect the output for the Prism Element cluster hosting the VM affected and Prism Central. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect a Logbay bundle for the Prism Element cluster hosting the VM affected and Prism Central using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@CVM$ logbay collect --aggregate=true
To automatically attach your logs to the case, you can add your case number to the command.
nutanix@CVM$ logbay collect --aggregate=true --dst=ftp://nutanix -c <case_number>
If the log bay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:
nutanix@CVM$ ncc log_collector run_all
Attaching files to the case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.[
{
"TCP/UDP": "TCP",
"Port Number": "2020",
"Description": "To orchestrate data replication between two clusters",
"Inbound/Outbound": "Both",
"Source": "PE",
"Destination": "PE"
},
{
"TCP/UDP": "TCP",
"Port Number": "2074",
"Description": "To communicate with other clusters. Used by application-consistent Recovery Points, configuring static IP address, file-level replication, and self-service restore features",
"Inbound/Outbound": "Inbound",
"Source": "PE",
"Destination": "PE"
},
{
"TCP/UDP": "TCP",
"Port Number": "3260",
"Description": "To expose the data services (volume groups) outside of the cluster",
"Inbound/Outbound": "Both",
"Source": "PE",
"Destination": "PE"
},
{
"TCP/UDP": "TCP",
"Port Number": "9440",
"Description": "To run remote API calls.",
"Inbound/Outbound": "Both",
"Source": "PC & PE",
"Destination": "PE & PC"
},
{
"TCP/UDP": "TCP/UDP",
"Port Number": "Port Number",
"Description": "Description",
"Inbound/Outbound": "Inbound/Outbound",
"Source": "Source",
"Destination": "Destination"
},
{
"TCP/UDP": "TCP",
"Port Number": "2030/2036",
"Description": "To orchestrate replication of VM configuration",
"Inbound/Outbound": "Both",
"Source": "PC & PE",
"Destination": "PE & PC"
},
{
"TCP/UDP": "TCP",
"Port Number": "2073",
"Description": "To orchestrate replication of NGT configuration",
"Inbound/Outbound": "Both",
"Source": "PC & PE",
"Destination": "PE & PC"
},
{
"TCP/UDP": "TCP",
"Port Number": "2090",
"Description": "To check the status of tasks",
"Inbound/Outbound": "Both",
"Source": "PC & PE",
"Destination": "PE & PC"
}
]
|
KB12902
|
Nutanix Files: Smart DR failing to manage SPNs during failover
|
Smart DR fails to manage SPNs when NFS and SMB are using AD as a Directory Service
|
During Smart DR failovers, the SPNs for the source Nutanix Files server are deleted under the source Nutanix Files machine account and then added under the machine account of the target Nutanix Files server. The result will look like the below. Source Nutanix Files machine account:
No SPNs
Target Nutanix Files machine account
Target Nutanix Files server SPNs
Source Nutanix Files server SPNs
There is a known issue where the SPN management fails when Nutanix Files is using Active Directory (AD) for both NFS and SMB under Directory Services. On the target Nutanix Files server, we see the below in the minerva_nvm.log on the FSVMs.
nutanix@FSVM:~$ allssh 'egrep "not joined to Active Directory|Skip AD operations and direclty move to DNS operations" /home/nutanix/data/logs/minerva_nvm.log*'
On the source Nutanix Files server we can see that it is joined to AD.
nutanix@FSVM:~$ afs fs.info
|
This issue is scoped to be resolved in Nutanix Files 4.1, 4.2, and 4.0.3. Below are the two workarounds for this issue.
1) Change the Directory Service for NFS. If there are no NFS exports on Nutanix Files using Kerberos authentication, you can change the Directory Service to unmanaged or LDAP. If there are no NFS exports at all, you can disable NFS.
2) Manually manage the SPNs in Active Directory. Request the customer to manually manage the SPNs under the Nutanix Files machine account from the Domain Controller (a setspn sketch follows the steps below):
First, note the SPNs on the source Nutanix Files machine account.
Then, delete the SPNs on the source Nutanix Files machine account.
Finally, add the source Nutanix Files SPNs to the target Nutanix Files machine account.
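On a Domain Controller, the standard setspn utility covers all three steps; a hedged sketch, where SOURCE-FS$, TARGET-FS$, and the SPN value are hypothetical placeholders for your environment:

rem list the SPNs currently on the source machine account
setspn -L SOURCE-FS$
rem delete an SPN from the source machine account (repeat per SPN)
setspn -D HOST/files.example.com SOURCE-FS$
rem add that SPN to the target machine account
setspn -S HOST/files.example.com TARGET-FS$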
|
KB10134
|
File Analytics Deployment Fails Due To Network Communication Issues Between FAVM and CVMs
|
Nutanix Files Analytics cannot be deployed due to no communication between Files Analytics VM and CVM network.
|
Issue 1: deploying File Analytics fails with the below error:
Cannot connect to File Analytics VM from Prism. Please verify that network details for the VM are correct and the IP is reachable from Prism.
This issue occurs when the FAVM's IP is unreachable via the CVM network. The File Analytics Guide: Deployment Requirements https://portal.nutanix.com/page/documents/details?targetId=File-Analytics:ana-fs-analytics-prereqs-r.html states that the FAVM's IP must be on a unique VLAN or on the FSVM's External (Client) Network. It must also be reachable by the CVM's network. Verify the port requirements as defined by the Portal Port Reference https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=File%20Analytics. To verify if the File Analytics VM is reachable from the CVM network via TCP ports 7 and 22, follow the steps outlined in the Solution section of this article.
Issue 2: FA deployment failed because the Controller VM (CVM) is unable to access File Analytics via SSH on port 22. The FA Deployment task status in the Prism Element UI shows that the FA deployment failed with an "error creating volume group" message, as shown in the following example:
If the SSH from CVM to FA VM is failing, then perform the following steps:
Once the deployment is initiated, the FA VM is powered ON and it is temporarily reachable from CVM via ping but SSH access to FA VM fails as shown in the following example:
nutanix@CVM:~$ ping 10.XX.XX.150
Verifying SSH port 22 connectivity from CVM to FAVM's IP shows "Connection timed out" errors as shown in the following example:
nutanix@CVM:~$ allssh 'sudo nc -z 10.XX.XX.150 22 -v'
The prism_gateway log shows that the deployment process failed during the volume group creation stage which requires CVM to open as SSH session to the FA VM as seen in the following in the prism_gateway.log snippet on the prism leader CVM:
INFO 2024-01-30 17:58:45,090Z Thread-53 [] prism.util.AnalyticsPlatformAdminUtil.changeVmPowerState:467 Change VM power state - ON
Issue 3: FA cannot be deployed after a failed deployment due to still-running Ergon tasks. In the UI, you see the error:
File Analytics server deployment is in progress
|
Solution 1:
Use the below to identify the Prism leader CVM:
nutanix@CVM:~$ curl localhost:2019/prism/leader && echo
Open an SSH session to the Prism leader CVM. Run the below to tail the prism_gateway.log on the Prism leader CVM:
nutanix@CVM:~$ tail -F /home/nutanix/data/logs/prism_gateway.log | grep Analytics
Note: Starting in AOS 6.7 and PC.2023.3, relevant log entries pertaining to Prism leadership change will be logged into prism_monitor.INFO and vip_monitor.INFO log files.
Re-deploy File Analytics to re-create the issue and note the IP address that the customer is using.Verify that the process is stuck at the volume group creation stage of the deployment in the prism_gateway.log file:
INFO 2019-05-07 09:44:29,938 Thread-76585 prism.util.AnalyticsPlatformAdminUtil.createVolumeGroup:396 Pinging...
Open an additional SSH session and run the below commands to verify if the CVM's network can access the FAVM's IP via ports 7 and 22. Replace <FAVM IP> with the actual FAVM's IP address:
nutanix@CVM:~$ nc -vz <FAVM IP> 7
Once the firewall rules are updated, you should see the deployment logging in prism_gateway.log on the Prism leader VM progress past the Ping test:
INFO 2019-05-07 09:44:29,938 Thread-76585 prism.util.AnalyticsPlatformAdminUtil.createVolumeGroup:396 Pinging...
Solution 2
Advise the customer to engage the network team to update the firewall rules to allow SSH connections on port 22. You can also share the Ports and Protocols https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=File%20Analytics document needed for the FA VM with the customer. FA deployment completes successfully once the firewall rules are updated. You can reference the File Analytics Guide https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_3:ana-fs-analytics-launch-t.html for next steps.
Solution 3
Reach out to Nutanix Support https://portal.nutanix.com/ for assistance in cleaning up File Analytics entries.
|
}
| null | null | null | null |
KB13484
|
HPE CVM fails to power on with error message "unexpected exit status 1: [Errno 2] No such file or directory: '/sys/bus/pci/devices/0000:85:05.5/driver_override'"
|
If an HPE host with NVMe drives has its motherboard replaced, it may need the Intel VMD (Volume Management Device) enabled in the BIOS before the Controller VM (CVM) can see the RAID controller.
|
When trying to boot the Controller VM (CVM) on an HPE node, it may fail to complete the boot, giving the error:
[root@AHV ~]# virsh start NTNX-xxxxxx-CVM
This implies that the device at the PCI bus address 85:05.5 is absent.
[root@AHV ~]# lspci | grep -i 'RAID bus controller'
No devices were seen.
Confirm that the driver is loaded.
[root@AHV ~]# lsmod | grep vfio
Driver does not get associated with any device in dmesg.
[root@AHV ~]# dmesg | grep vfio
Confirm that the Intel VMD is disabled. From the host, log in to the ilorest utility. For example, from AHV, use the rawget command to dump the BIOS information and look for "IntelVmd".
[root@AHV ~]# ilorest
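Within the utility, the dump can be taken as follows; a hedged sketch assuming the standard Redfish BIOS path exposed by iLO (verify the exact URI on your system):

iLOrest > login
iLOrest > rawget /redfish/v1/systems/1/bios/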
Note: If the ilorest utility (specific to HPE hardware, as it sends REST API commands to iLO) fails with the following error:
[root@AHV ~]# ilorest
Apply the following workaround to remount /tmp as shown below (applicable for AHV):
[root@AHV ~]# sudo mount /tmp -o remount,exec
|
Enable Intel VMD in the BIOS.
Create the following JSON file called vmdupdate.json:
{
Apply via the ilorest utility:
[root@AHV ~]# ilorest
After the host restarts, confirm that the CVM has now completed booting.The host should be able to see the RAID controller as well.
[root@AHV ~]# lspci | grep -i 'RAID bus controller'
Note: You may see the following error if the JSON file is formatted incorrectly. You can validate JSON using an online JSON format.
iLOrest > rawpost vmdupdate.json
|
KB4069
|
vlog : gflag verbose logging option
|
Enable gflag vlog if you need verbose logging.
Ask a senior SRE or escalation engineer before enabling it. Verbose logging requires a lot of care.
|
Sometimes the default logging level does not tell us enough about why the cluster/CVM is hitting a particular issue.
Verbose logging (vlog) generates lots of logs at .INFO log level.
|
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit
Caution
The following pre-cautions must be taken:
Please be careful when using vlog, especially for Stargate. It can increase CPU and disk I/O and consume a lot of disk space for logging purposes. You should record the current setting before the gflag change and revert to the original setting after troubleshooting is done. Do not forget to revert the vlog option. By default, it must be set to 0.
How to enable verbose logging:
The gflag verbose logging option of each process can be changed via the below syntax:
curl -s http://cvm_IP:Port/h/gflags?v=LOGGING_LEVEL
LOGGING_LEVEL ranges from 0 to 7. Remember the following:
The bigger the level value, the more verbose logging gets. Default is 0. Level 3 contains 1 and 2. Level 7 contains all.
Live gflag changes on AOS 6.5.1 and above (starting with AOS 6.5.1, gflags need to be whitelisted before they can be updated live):
allssh gflags_support_mode.py --service_name=stargate --service_port=2009 --enable
Example: Enable level 3 Curator logging on all CVMs.
nutanix@NTNX-13SM36440015-A-CVM:~$ allssh 'curl -s http://0:2010/h/gflags?v=3'
With default verbosity (level 0), the Curator (C++) logging below will not be written to the log; the verbose logging level 3 option enables it. NOTE: You might not need to restart the process.
xref: /danube-4.7.4-stable/curator/mapreduce/mapreduce_task/disk_balance_group.cc
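When troubleshooting is complete, revert the level back to the default using the same syntax, for example for Curator on all CVMs:

nutanix@cvm$ allssh 'curl -s http://0:2010/h/gflags?v=0'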
How to check current setting:
curl http://cvm_IP:Port/h/gflags 2>/dev/null
Log Size
Log size depends on the process and log level.
But please be careful: it can generate a lot of logs. For example, before enabling v3 on Curator, log rotation happens every 2 days (almost 50 MB per day), but after enabling it, rotation happens every 10 seconds to a few minutes. Thus, it is imperative to ensure the log level is changed back to the default. Log volume also strongly depends on which code path is running; in this case, a Curator full scan is running.
nutanix@CVM:~/nutanix_support_114169$ ls -lh
|
""Latest Firmware"": ""0x80004cdb""
| null | null | null | null |
KB1437
|
Troubleshooting the Remote Site Replication Error Message "StartReplicationRPC to remote X failed with error kStaleCluster"
|
The following error message is displayed in Prism: Protection domain <Protection Domain 1> replication to remote site <Remote Site 1> failed. StartReplication RPC to remote <Remote Site 1> failed with error kStaleCluster
|
The following error message is displayed in Prism:
Protection domain <Protection Domain 1> replication to remote site <Remote Site 1> failed. StartReplication RPC to remote <Remote Site 1> failed with error kStaleCluster
|
The error message can occur in a disaster recovery or backup configuration where:
A mirror backup site is not configured in the remote site, which is a requirement for remote replication to work.
You destroyed a remote cluster and recreated it, but the local site still references the IP address of this cluster.
The workaround is to delete the remote cluster as a remote site and then create it as a remote site available for replication.
The Nutanix Prism Element Data Protection Guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v5_19:wc-remote-site-any-configuration-c.html and nCLI command reference documentation https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v5_19%3Aman-ncli-c.html describe how to create a remote site https://portal.nutanix.com/page/documents/details?targetId=Command-Ref-AOS-v5_19%3Aacl-ncli-remote-site-auto-r.html.
After configuring the local and remote sites, you can create a protection domain, add VMs and remote sites to it, and schedule the backup frequency.
See also KB 1602, Replication RPC to the remote cluster completed with error kInvalidValue https://portal.nutanix.com/kb/1602.
|
KB2219
|
NCC Health Check: check_cvm_configuration
|
The NCC health check "check_cvm_configuration" verifies the CVM configuration. This check is relevant only to Nutanix clusters running the Microsoft Hyper-V hypervisors.
|
The NCC health check check_cvm_configuration verifies a series of configurations and best practices that need to be in place for the Controller VMs (CVMs).
Note: This check runs only on Nutanix clusters running the Microsoft Hyper-V hypervisor.
This check verifies the following aspects of the Controller VMs configuration.
If the Controller VM is set to automatically start when the hypervisor boots.
The NutanixDiskMonitor service is responsible for starting the CVM.
The automatic start action should be set to "Nothing".
If the internal adapter MAC address is set to the expected value of 00:15:5d:00:00:80.
If all physical hard drives are mapped to the Controller VM as SCSI drives.
If the boot ISO attached to the Controller VM is the expected svmboot.iso file.
If all of the drives are presented to the CVM configuration.
Note: The location of the svmboot.iso on Hyper-V hosts is C:\Program Files\Nutanix\Cvm\[CVM_Name]\svmboot.iso.
Running the NCC Check
You can run this check as a part of the complete NCC health checks.
nutanix@cvm$ ncc health_checks run_all
You can also run this check individually.
nutanix@cvm$ ncc health_checks hypervisor_checks check_cvm_configuration
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is not scheduled to run on an interval.
This check does not generate an alert.
Sample output
Status: PASS
Running : health_checks hypervisor_checks check_cvm_configuration
Status: FAIL - Case 1
Detailed information for check_cvm_configuration:
Status: FAIL - Case 2
Detailed information for check_cvm_configuration:
Status: FAIL - Case 3
Detailed information for check_cvm_configuration:
Status: FAIL - Case 4
Detailed information for check_cvm_configuration:
Status: FAIL - Case 5
Detailed information for check_cvm_configuration:
Status: FAIL - Case 6
Node x.x.x.x:
Status: FAIL - Case 7
Detailed information for check_cvm_configuration:
Output messaging
[
{
"Description": "Check if the CVM is properly configured"
},
{
"Cause of failure": "A problem with the CVM configuration has been detected."
},
{
"Resolutions": "Review KB 2283 and 2219."
},
{
"Impact": "CVM services may not start after reboot."
}
]
|
Before running the Repair-CVM command, see Instructions on running Repair-CVM on Hyper-V https://portal.nutanix.com/kb/1725 (if required for the resolution of any of these issues) as the Controller VM is rebooted after running this command and the cluster should be able to tolerate a node down or failure before the Controller VM reboots.
In case the NCC check fails and matches any of the mentioned cases, use the following methods for troubleshooting or fixing the issue.
Resolution: Case 1 - The MAC address of the internal network adapter in the Controller VM is incorrect.
The expected MAC address for the Controller VM is "00:15:5d:00:00:80" on the internal adapter.
To verify through PowerShell, you can run the following command.
> Get-VM NTNX*CVM | Get-VMNetworkAdapter -Name Internal
For example:
nutanix@cvm$ winsh
If you see that the Controller VM MAC address is not configured to "00:15:5d:00:00:80", run the following command in PowerShell on the Hyper-V host that reported the NCC failure to fix it through the automated scripts.
> Repair-CVM
Resolution: Case 2 - The following disk numbers are not added to the CVM.
You can recover from this incorrect configuration by running the Repair-CVM script from the Hyper-V host. Repair-CVM powers off the existing Controller VM and restores the expected configuration and powers it back on.
As Nutanix clusters can only sustain a single or dual node failure depending on whether your cluster is configured for redundancy factor 2 or 3, ensure the rest of the cluster is healthy by checking the cluster status before you run the following command.
> Repair-CVM
Resolution: Case 3 - The Controller VM automatic start policy is set to Start.
In the event that the failure message indicates the automatic start policy is not configured correctly, NCC prints a warning.
NutanixDiskMonitor service starts up the CVM.
Ensure to have the automatic start action set to "Nothing" for the Controller VM.
Verify the current configuration by running the following command.
> Get-VM NTNX*CVM | ft Name,AutomaticStartAction -AutoSize
Change the Controller VM to automatic start action to the expected configuration (Nothing).
> Get-VM NTNX*CVM | Set-VM -AutomaticStartAction Nothing
Resolution: Case 4 - svmboot.iso is not attached to the Controller VM.
This failure suggests that the svmboot.iso is not mounted to the Controller VM as the Controller VM boots up using the svmboot.iso.
You can verify whether an ISO is mounted from Hyper-V PowerShell.
> Get-VM *CVM* | Get-VMDvdDrive | fl
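If no ISO is attached, the following sketch (run from the CVM via winsh; it assumes the CVM already has a DVD drive and relies on the get-cvm helper shown elsewhere in this article) re-attaches the expected file; re-run the NCC check afterward:
nutanix@cvm$ winsh 'Set-VMDvdDrive -VMName $((get-cvm).name) -Path "C:\Program Files\Nutanix\Cvm\$((get-cvm).name)\svmboot.iso"'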
Resolution: Case 5 - C:\users\Administrator\Virtual Machines\NTNX-xxxx-X-CVM\svmboot.iso is not up-to-date.
The NCC failure suggests that during the AOS upgrade process, svmboot.iso might not have been updated as expected.
A copy of the expected svmboot.iso exists on the Controller VM at this location /home/nutanix/data/installer/el*/images
nutanix@cvm:~/data/installer/el6-release-euphrates-5.1.3-stable-1208b7c73994fec67cf1c0238cd962ed61e5f372/images$ ls -la
NCC will look for a file called svmboot.iso.upgraded; if that file doesn't exist, it will look for a file called svmboot.iso. Check if a file called "svmboot.iso.upgraded" exists in the directory above.
Step 1: SSH to the CVM that is reporting the failure in the NCC check and check whether the svmboot.iso.upgraded file or the svmboot.iso file exists on the CVM. If the svmboot.iso.upgraded file exists, run the following command to check its md5sum:
nutanix@cvm: ~$ md5sum /home/nutanix/data/installer/$(cat /etc/nutanix/release_version)/images/svmboot.iso.upgraded
If there is no svmboot.iso.upgraded, then run the following command to check the svmboot.iso file's md5sum:
nutanix@cvm: ~$ md5sum /home/nutanix/data/installer/$(cat /etc/nutanix/release_version)/images/svmboot.iso
Step 2: Now compare with the md5sum of the svmboot.iso file on the Hyper-V host.
Run the following command from the same CVM to check the MD5 hash of the svmboot.iso file that exists on the Hyper-V host:
nutanix@cvm: ~$ winsh '(Get-FileHash "C:\Program Files\Nutanix\Cvm\$((get-cvm).name)\svmboot.iso" -Algorithm MD5).Hash'
Step 3: If the md5sums from Step 1 and Step 2 don't match, follow the instructions below to update the svmboot.iso file on the Hyper-V host.
Rename the old svmboot.iso on the local datastore to svmboot.iso.old, and then run the following command from the same CVM:
nutanix@cvm: ~$ wincp ~/data/installer/$(cat /etc/nutanix/release_version)/images/<boot_iso_file> 'C:\Program Files\Nutanix\Cvm\$((get-cvm).name)\svmboot.iso'
In the command above, replace <boot_iso_file> with the file identified in Step 1.
Mount this to the Controller VM and run the NCC check again to verify the check passes.
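Combining Steps 1-3, a quick comparison sketch from the affected CVM (assuming the svmboot.iso.upgraded file exists; the host hash is normalized to lowercase and stripped of carriage returns before comparing):
nutanix@cvm$ cvm_md5=$(md5sum /home/nutanix/data/installer/$(cat /etc/nutanix/release_version)/images/svmboot.iso.upgraded | awk '{print $1}')
nutanix@cvm$ host_md5=$(winsh '(Get-FileHash "C:\Program Files\Nutanix\Cvm\$((get-cvm).name)\svmboot.iso" -Algorithm MD5).Hash' | tr 'A-F' 'a-f' | tr -d '\r')
nutanix@cvm$ [ "$cvm_md5" = "$host_md5" ] && echo "svmboot.iso is up to date" || echo "mismatch - update the host copy as described above"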
Resolution: Case 6 - Not all local disks are available for storage.
This failure indicates that multiple disks are not configured in the CVM to be passed through from the Hyper-V node.
1. Navigate to Hyper-V Manager and connect to the host that contains the Controller VM. Right-click the Controller VM and select Settings.
2. Confirm that a disk is listed under each SCSI Controller. There should be as many SCSI Controllers/Disks as there are internal disks on the node (the SATADOM is an exception).
3. Compare them to other similar nodes in the cluster and ensure all the disks are presented as Pass-Through to the CVM.
4. To fix this issue, run Repair-CVM in PowerShell on the Hyper-V node that had the NCC failure.
> Repair-CVM
Resolution: Case 7 - VLAN configuration inconsistent.
This result is explained in detail in KB 2283 https://portal.nutanix.com/kb/2283. Refer to "Solution => VLAN configuration inconsistent".In case the preceding steps do not resolve the issue, contact Nutanix Support http://portal.nutanix.com for further assistance.
|
KB14707
|
Time skew between PE and PC causes MSP enable failure due to the registry becoming unavailable during MSP deployment
|
Time skew causes MSP enable failure due to the registry becoming unavailable during MSP deployment
|
If the MSP platform is enabled on a scale-out Prism Central, an internal docker registry is deployed on two of the PCVM nodes to support the deployment of PCVM MSP services. Internally, the registry and MSP controller use a connection to the Prism Element to store their data there. To handle registry service failover between PCVM nodes, the MSP controller uses API calls to PE to clear VG attachments. Authentication for API calls from the MSP controller to Prism Element relies on a token that has time constraints. If PE and PC time is not in sync, the GetVersions() API may fail with a 401 error due to time skew on an attempt to activate the registry service after failover. The following can be observed in the '/home/nutanix/data/logs/msp_controller.out' log on a PCVM:
2023-04-13T09:53:27.879Z msp_registry.go:1401: [WARN] [msp_cluster=prism-central:RECONREG] Registry is inactive on primary node ntnx-x.x.x.x-a-pcvm:z.z.z.z - activating it
If this happens, msp_controller fails to clear VG attachments to start the registry on the new PRIMARY, and, as a result, the registry is left in an inactive state. This scenario may cause MSP enablement failures or prevent successful registry service failover if MSP is already enabled and time skew is introduced after MSP enablement.
|
Check for NTP sync issues inside PE or between PC and PE and resolve them. Refer to KB-4519 http://portal.nutanix.com/kb/4519 for time sync troubleshooting. Once NTP is in sync, GetVersions() should not fail, and MSP enablement should complete successfully.
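As a quick sanity check (a sketch; KB-4519 covers full time sync troubleshooting), compare the clock state on both clusters:
nutanix@PCVM:~$ ntpq -pn      # confirm the PCVMs sync to a reachable NTP peer (run on the PE CVMs as well)
nutanix@PCVM:~$ allssh date   # compare against 'allssh date' on the PE cluster; a large delta indicates skew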
|
KB12454
|
Cluster Expand - Guidance to add a node to Nutanix cluster with VMware NSX-T setup
|
This KB covers a cluster expand scenario: adding a node to a Nutanix cluster after the node has been configured for VMware NSX, which is not supported. The expand operation will fail at a pre-check with the error: Failed to get VLAN tag of node.
|
In VMware setups using NSX-T, a cluster expand operation may fail with the below error in Prism. The error includes the IPv6 address of the new node's eth0 interface.
Errors: Failed to get VLAN tag of node "fe80::xxx:xxxx:xxxx:e23a%eth0"
We can confirm by running discover_nodes from any of the existing nodes in the cluster that the new node is getting discovered over ipv6.
nutanix@CVM:~$ discover_nodes
You may encounter this error when you expand a Nutanix cluster that is running ESXi host(s) with NSX-T network configuration where the CVM and host management network is already backed by N-VDS.
This occurs because Genesis requires persisted NSX-T credentials to communicate with the NSX-T manager and the newly added un-configured nodes cannot have this information unless it joins the cluster. Genesis relies on the NSX-T manager for the CVM network configuration information. When Genesis fails to get this information from the NSX-T manager, it refuses to add the new node to the Nutanix cluster.
|
As per the Infra Engineering team, expanding a cluster with NSX-T is supported. However, customers are expected to configure NSX-T in vCenter only after adding the node to the cluster.
Migrate the node networking to the NSX-T enabled configuration after the node is added to the cluster. To avoid this scenario, Nutanix recommends adding the new node to the Nutanix cluster while the host and CVM management network is in a standard network configuration (Standard vSwitch or VDS). If you have already added the new node's networking to N-VDS port groups, remove the new node from NSX Manager and bring the networking back to a Standard vSwitch, after which the cluster expand operation should succeed. [
{
"NSX-T enabled on cluster": "Yes",
"New node with NSX-T configure in vCenter": "Yes",
"Result": "Failure",
"Workaround": "Expand cluster first and then configure NSX-T from vCenter."
}
]
|
KB8608
|
Finding the serial ID of a bad HDD or SSD
| null |
When a bad HDD/SSD is present on a node and Foundation is performing the imaging of the node, it may fail with the following message:
StandardError: Failed command: [/usr/bin/sg_inq /dev/sda] with reason [sg_inq: error opening file: /dev/sda: No such device or address]
The following messages can be seen when the node boots from phoenix:
[ 91.741083] sd 0:0.0:0: rejecting I/O to offline device
To identify and replace the bad disk, the disk serial ID needs to be found. Trying to find the disk serial ID via smartctl fails with the message:
smartctl open device: /dev/sda failed: No such device or address
|
The disk serial can be identified by udevadm:
udevadm info --query=all --name=/dev/sdX | grep ID_SERIAL
As an example, if you want to find the serial ID of disk /dev/sda:
$ udevadm info --query=all --name=/dev/sda | grep ID_SERIAL
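To sweep every disk at once, a simple loop (a sketch assuming /dev/sd* device naming) prints the serial of each device:
$ for d in /dev/sd?; do echo -n "$d: "; udevadm info --query=all --name="$d" | grep ID_SERIAL=; done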
|
KB9456
|
Alert - A400114 - PolicyEngineServiceDown
|
This Nutanix article provides the information required for troubleshooting the alert PolicyEngineServiceDown for your Nutanix cluster.
|
Nutanix Self-Service is formerly known as Calm.
Alert Overview
The PolicyEngineServiceDown alert is generated when any of the internal services of NuCalm Policy Engine is down.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
[
{
"400114": "Calm Policy Engine internal service is down",
"Check ID": "Description"
},
{
"400114": "Calm Policy Engine internal service may have stopped working",
"Check ID": "Cause of failure"
},
{
"400114": "Please refer to KB-9456",
"Check ID": "Resolutions"
},
{
"400114": "You will not be able to perform App Management related operations.",
"Check ID": "Impact"
},
{
"400114": "A400114",
"Check ID": "Alert ID"
},
{
"400114": "Calm Policy Engine Internal Service has Stopped Working",
"Check ID": "Alert Title"
},
{
"400114": "Discovered that the Calm policy engine internal service running on {policy_engine_ip} is not working: '{service_names}'",
"Check ID": "Alert Message"
}
]
|
Troubleshooting and Resolving the Issue
Check the status of the Policy Engine VM from the Prism Central UI.
Verify the status of the Policy Engine service by running the below commands:
1. Log in to the Policy Engine VM through an SSH session.
2. Run the below command:
docker ps
Sample output:
[nutanix@localhost ~]$ docker ps
Confirm that the container is listed as healthy. Even though a container shows as healthy, its internal services might be crashing. Run the below commands to list the status of the internal services of the Policy Engine VM.
Command 1:
docker exec -it policy bash -ic "source ~/.bashrc; activate ; echo exit | status; echo"
Example Output:
[nutanix@localhost ~]$ docker exec -it policy bash -ic "source ~/.bashrc; activate ; echo exit | status; echo"
Command 2:
docker exec -it policy-epsilon bash -ic "source ~/.bashrc; activate ; echo exit | status; echo"
Example Output:
[nutanix@localhost ~]$ docker exec -it policy-epsilon bash -ic "source ~/.bashrc; activate ; echo exit | status; echo"
If any of the internal services is in a crashed state with a short uptime, indicating the service has crashed recently, use the logs on the Policy Engine VM (/home/nutanix/data/log) to find out why the internal services of the Calm Policy Engine crashed.
Identify the timestamp of the alert from the alert details. Collect logs for a 2-hour time frame around the alert time. To collect logs, execute the following command on any PCVM:
nutanix@NTNX-x-x-x-x-A-PCVM:~$ logbay collect -t calm_policy_engine
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.
If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB4114
|
NCC Health Check: mellanox_nic_status_check
|
The NCC health check mellanox_nic_status_check checks if Mellanox NICs are down or if any Mellanox NIC has speed other than 10GbE or 40GbE.
|
The NCC Health Check ncc health_checks network_checks mellanox_nic_status_check checks if Mellanox NICs are down or if any Mellanox NIC has a speed other than 10GbE or 40GbE.
NOTE:
If the cluster runs NCC version earlier than 3.0.3, upgrade to NCC 3.0.3, and re-run the above check.
To find the NCC version running on your system:
From Prism:
Log in to Prism.Click the gear icon on the top right and click on Upgrade Software.In the Upgrade Software dialog, select NCC tab. The current NCC version will be shown.
From CLI:
nutanix@cvm$ ncc --version
Running the NCC Check You can run this check as part of the complete NCC Health Checks :
nutanix@cvm$ ncc health_checks run_all
Or you can run this check separately :
nutanix@cvm$ ncc health_checks network_checks mellanox_nic_status_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample output For Status: PASS
Running : health_checks network_checks mellanox_nic_status_check
For Status: WARN
Detailed information for mellanox_nic_status_check:
Output messaging
Note: This hardware-related check executes on the Nutanix NX hardware. [
{
"Description": "The NIC is disconnected from the switch, or the switch port is not functioning correctly, or both 10GbE and 40GbE Mellanox NICs are installed on one node."
},
{
"Description": "Ensure that the NIC is connected to the switch and that the switch port is functioning properly. Ensure that only 10GbE or 40GbE Mellanox NIC is installed on one node.case of multiple nodes with the same condition, the cluster may become unable to service I/O requests."
},
{
"Description": "Mellanox NIC Speed is not 10GbE and 40 GbE or both 10GbE and 40GbE NICs are installed on host host_ip"
},
{
"Description": "Mellanox NIC Speed is not 10GbE and 40 GbE or both 10GbE and 40GbE NICs are installed on one node"
},
{
"Description": "Mellanox NIC on host host_ip has problem: alert_msg"
},
{
"Description": "This check is scheduled to run every minute, by default."
},
{
"Description": "This check will generate an alert after 1 failure."
}
]
|
If you have mixed NIC types (for example, Intel and Mellanox), the following warning appears when the Mellanox NIC type is not in use:
WARN: All mellanox NICs on host machine are Down.
To find the type of NICs you are using on the host, run the below command :
nutanix@cvm$ ncc hardware_info show_hardware_info | awk '/SATADOM/{f=0} f; /NIC/{f=1}'
Sample output:
+--------------------------------------------------------------------------------------------------+
Check the status on the hosts to ensure that no ports are down and the status is as expected.
For AHV
Find the number of interfaces:
nutanix@cvm$ allssh manage_ovs show_interfaces
Sample output for one host:
================== x.x.x.x =================
Verify if all the ports are up:
for i in 0 1 2 x; do echo '===eth' $i '===='; hostssh ethtool -i eth$i ; done
Replace x with the highest interface number. For example, if there are eth0-eth3:
for i in 0 1 2 3; do echo '===eth' $i '===='; hostssh ethtool -i eth$i ; done
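Note that ethtool -i reports driver details. To check the link state itself, a variant of the same loop (a sketch; adjust the interface range to match your hosts) can use ethtool without -i:
nutanix@cvm$ for i in 0 1 2 3; do echo "=== eth$i ==="; hostssh "ethtool eth$i | grep 'Link detected'"; done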
WARNING: Updating uplinks using "manage_ovs" will delete and recreate the bond with the default configuration. Consider the following before updating uplinks:
If active-backup load balancing mode is used uplink update can cause a short network disconnect.If balance-slb or balance-tcp (LACP) load balancing mode is used uplink update will reset the configuration to active-passive. Network links relying on LACP will go down at this point as the host stops responding to keepalives.
It is strongly recommended to perform changes on a single node at a time after ensuring the cluster can tolerate node failure. Follow Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to ensure that the cluster can tolerate the node being down.The use of "allssh manage_ovs update_uplinks" command may lead to a cluster outage. Only use it if the cluster is not in production and has no user VMs running.
For ESXi
nutanix@cvm$ hostssh esxcli network nic list
Sample Output for one host:
============= x.x.x.x ============
For Hyper-V
nutanix@cvm$ allssh 'winsh "get-netadapter|fl Name,InterfaceDescription,driverversion,status"'
Sample output for one host:
================== x.x.x.x =================
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/.
|
KB2007
|
Genesis Keeps Restarting: "Data disks are not clean"
|
A connection error message or connection timeout is observed if you open the Prism Central web console. Genesis restarts every few seconds.
|
A connection error message or connection timeout is observed if you open the Prism Central web console.Review of the cluster logs shows Genesis restarts every few seconds with an output similar to the following in genesis.out:
2015-01-16 10:06:38 ERROR node_manager.py:2935 Zookeeper mapping is unconfigured
The same error messages can be seen on a PE cluster CVM as well.
|
Check the /etc/hosts file and verify the integrity of the line with the zookeeper hostname. It should look like the following:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
It is important to ensure a single space between the IP and the zk entry. If there is anything else, like a tab or 2 spaces, Genesis fails with the same message mentioned above. Here is the regex for the entry:
"^(\d+\.\d+\.\d+\.\d+) zk(\d+) # DON'T TOUCH THIS LINE$"
|
KB16050
|
Nutanix Self Service Policy Engine: LCM is unable to upgrade Calm Policy Engine version
|
A solution for the scenario where LCM is not able to finalize the Calm Policy Engine version upgrade.
|
Nutanix Self-Service (NSS) is formerly known as Calm.Identification:
The LCM Inventory page on Prism Central will show the Policy Engine version as available to upgrade even after a successful upgrade was performed. Checking the ~/data/logs/nucalm.out log file, you might notice that Calm is detecting the Policy Engine as a lower version:
nutanix@PCVM:~$ less ~/data/logs/nucalm.out
Validating the current version of the container images on the Policy Engine VM, we can see that the images are upgraded:
[root@ntnx-calm-policy-vm]# docker exec -it policy bash
Note: the versions of the Policy Engine and Calm may differ based on the environment.
|
During the LCM upgrade process, the container images are upgraded; however, the zk node is not updated with the correct version. To confirm you are hitting the same issue, validate the following zk node. NOTE: The command output will show two entries, comma-separated, one with the incorrect version.
nutanix@PCVM:~$ zkcat /appliance/logical/policy_engine/transient_data
LCM and Calm (in Calm logs) show the Policy Engine version as 3.5.2 even though the actual policy version is 3.7.0. This happens due to the two entries present in the policy Zookeeper key (/appliance/logical/policy_engine/transient_data).
From the above ZK value, LCM and Calm identified the Policy Engine version as 3.5.2 because that entry is present at the first index.
Workaround: the zookeeper key (/appliance/logical/policy_engine/transient_data) can be overwritten using the following command to leave only the entry with the correct version. Note: make sure to update the details in the below command to match the corresponding environment.
nutanix@PCVM:~$ echo '{"node_map": {"aaaabbb-cccc-dddd-eeee-aaaaabbbbb": {"ip": "xx.xx.xx.144", "version": "3.7.0"}}}' | zkwrite /appliance/logical/policy_engine/transient_data
The final entries on /appliance/logical/policy_engine/transient_data should be:
nutanix@PCVM:~$ zkcat /appliance/logical/policy_engine/transient_data
Post this, the LCM shows the correct version for the policy engine.
|
KB13981
|
LCM: BMC/BIOS upgrade failed with Error LcmActionsError: no node
|
LCM: BMC/BIOS upgrade failed with Error LcmActionsError: no node
|
LCM BMC/BIOS upgrade failed, following which the affected host is stuck in Phoenix.
lcm_ops.out from the LCM leader node:
2022-11-15 03:33:38,006Z ERROR lcm_actions_helper.py:709 (x.x.x.x, kLcmUpdateOperation, 4321a8b0-fdca-455c-ab1f-79ee173d5bf4)
Genesis logs
Exception 'UpdateOp' object has no attribute '_filter_task' in __check_rpc_intent_thread:
Checking further in the Genesis logs, look for multiple duplicate threads.
Example - check rpc intent thread:
There were 2 running check_rpc_intent_thread threads for LCM. Having multiple threads of the same type causes inconsistent behaviour in LCM. Thus, cleanup of one thread caused the removal of the lcm action zknode, similar to ENG-515240 https://jira.nutanix.com/browse/ENG-515240.
2022-11-15 03:32:45,931Z INFO 47182544 zeus_utils.py:614 check rpc intent thread: I (x.x.x.x) am the LCM leader
|
1. Restore the node from Phoenix using KB 9437 https://portal.nutanix.com/kb/9437.
2. Refer to Debugging PhoRest-based Async Upgrade Failures, KB 13023 http://portal.nutanix.com/kb/13023.
3. Restart genesis on the LCM leader node and retry the update. The genesis restart will restart the LcmFramework on the node with the correct number of threads.
# To find LCM leader
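lcm_leader       # assumption: this helper prints the current LCM leader CVM on most AOS versions; verify availability first
# Then restart genesis on the leader CVM and retry the update:
genesis restart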
If the issue persists, perform a BMC cold reset and retry the upgrade. A similar workaround was used in KB 13610 http://portal.nutanix.com/kb/13610.
[root@host]# ipmitool mc reset cold
If the issue is not resolved via BMC cold reset, consider opening a Tech-Help or an ONCALL ticket for further investigation, depending on the urgency.
|
KB7385
|
Hyper-V: Unable to boot VM after migrating protection domain to remote site on Hyper-V cluster
| null |
A VM fails to boot with the following error after migrating a protection domain to a remote site on a Hyper-V 2012 R2 cluster:
Error (12711)
|
Run the following commands from the SCVMM machine:
1. Create a variable for the problematic VM:
$vm = Get-SCVirtualMachine -Name "VM_Name"
2. Force a refresh
refresh-vm -force $vm
Or, from any other machine:
1. Load the SCVMM cmdlets:
Add-PSSnapin microsoft.systemcenter.virtualmachinemanager
2. Connect to the SCVMM server:
Get-VMMServer -ComputerName <SCVMM FQDN>
3. Create a variable for problematic VM:
$vm = Get-VM -Name "VM_Name"
4. Force a refresh:
refresh-vm -force $vm
|
KB7002
|
Nutanix Files - File Server Unreachable after upgrade from 2.x to 3.x
|
Minerva might crash due to having old snapshots (WPV) when migrating from 2.x to 3.x
|
CASE I:
On upgrading from 2.x code to 3.x code, the customer might see the File Server reported as Unreachable in Prism. Share access will be uninterrupted; however, you will see the following symptoms:
1. An alert will be generated:
ID : 926f2fbd-d206-4004-ad1d-c060e37f834b
2. Genesis.out of the FSVMs will have the following signature:
nutanix@NTNX--A-FSVM:~$ allssh "grep 'CRITICAL' ~/data/logs/genesis.out* | tail -n10"
3. Check for the following signature on minerva_nvm logs:
2019-02-05 00:01:10 WARNING 98361936 insights_interface.py:217 Insights RPC returned error 1000.
CASE II:
The following traceback will be seen on prism_gateway.log:
Caused by: java.lang.Exception: java.lang.Exception: java.lang.RuntimeException: Unrecognized field file_server passed, unable to build filter. at com.nutanix.prism.commands.minerva.MinervaConfigEntityDbImpl.enrichFileServer(MinervaConfigEntityDbImpl.java:776) at com.nutanix.prism.commands.minerva.MinervaConfigEntityDbImpl.getFileServers(MinervaConfigEntityDbImpl.java:637) ... 117 moreCaused by: java.lang.Exception: java.lang.RuntimeException: Unrecognized field file_server passed, unable to build filter. at com.nutanix.prism.commands.minerva.MinervaConfigEntityDbImpl.getVirtualNetworkPerFileServer(MinervaConfigEntityDbImpl.java:968) at com.nutanix.prism.commands.minerva.MinervaConfigEntityDbImpl.enrichFileServer(MinervaConfigEntityDbImpl.java:704)
This is the error:
Unrecognized field file_server passed, unable to build filter.
CASE III:
The following traceback is seen on Prism_gateway.log on the FSVM:
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /appliance/physical/clusterexternalstate/0005876e-b710-edc1-0000-00000000807d
|
SOLUTION I:
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)"Insights throws this exception, if an entity update is older than insights_entity_timestamps_no_older_than_server_time_in_seconds gflag setting which is set to 180 days.
When upgrading from 2.X to 3.X, all the WPV entries are put into the DB. If there are WPV entities older than 180 days, the attempt to add them to the DB fails with that exception.
Follow these steps on 2.X (prior to the upgrade):
1. On 1 FSVM, create the following file:
nutanix@FSVM:~$ vi /home/nutanix/config/insights_server.gflags
2. Paste the following flag in the file:
--insights_entity_timestamps_no_older_than_server_time_in_seconds=35552000
3. Copy the file to other FSVMs:
nutanix@FSVM:~$ for i in `svmips`; do scp /home/nutanix/config/insights_server.gflags $i:/home/nutanix/config/insights_server.gflags; done
4. No need to restart the service, as upgrading will handle the service restart.
5. After upgrading, verify the change persisted using:
nutanix@FSVM:~$ links --dump http:0:2027/h/gflags?show=insights_entity_timestamps_no_older_than_server_time_in_seconds
If this is being performed on 3.X (after the upgrade), keep in mind you need both the flag file and the live setting.
1. To set the flags live:
nutanix@FSVM: allssh 'curl -s http://0:2027/h/gflags?insights_entity_timestamps_no_older_than_server_time_in_seconds=35552000'
2. To set the flags to persist:
2.1. On 1 FSVM, create the following file:
nutanix@FSVM:~$ vi /home/nutanix/config/insights_server.gflags
2.2. Paste the following flag in the file:
--insights_entity_timestamps_no_older_than_server_time_in_seconds=35552000
2.3. Copy the file to other FSVMs:
nutanix@FSVM:~$ for i in `svmips`; do scp /home/nutanix/config/insights_server.gflags $i:/home/nutanix/config/insights_server.gflags; done
2.4. Verify the gflags as follows:
nutanix@FSVM:~$ links --dump http:0:2027/h/gflags?show=insights_entity_timestamps_no_older_than_server_time_in_seconds
2.5. No need to restart the service, as upgrading will handle the service restart.
Solution II:
1. Restart the insights server:
nutanix@FSVM: allssh "genesis stop insights_server;cluster start"
2. Restart Prism on the leader:
2.1 Find the leader as follows:
nutanix@FSVM:~$ afs
2.2 Bounce the Prism Leader:
nutanix@FSVM: genesis stop prism ; cluster start
SOLUTION III:
1. List the following from the cluster or from the traceback:
nutanix@FSVM~: zkls /appliance/physical/clusterexternalstate
2. Remove it for the cluster using:
nutanix@FSVM~: zkrm /appliance/physical/clusterexternalstate/<UUID>
Example:
nutanix@NTNX-10-63-37-3-A-FSVM:~$ zkls /appliance/physical/clusterexternalstate
3. Restart Prism on PC and FSVM:
nutanix@FSVM~: genesis stop prism; cluster start
|
KB9101
|
Cluster Expansion fails with "Failed to ssh to node. Please check if keys are setup properly" or "Failed to fetch the memory of new CVM" or "Failed to fetch disks from the node"
|
The article explains why cluster Expansion fails with "Failed to ssh to node. Please check if keys are setup properly" or "Failed to fetch the memory of new CVM" or "Failed to fetch disks from the node" and how to work around the problem.
|
Scenario 1
Expand cluster operation fails with "Failed to ssh to node. Please check if keys are setup properly" error at 78%.
Scenario 2
Expand cluster operation fails with "Failed to fetch the memory of new CVM x.x.x.x" error. The error appears in /home/nutanix/data/logs/genesis.out logs on the leader node.
2020-04-21 14:41:50 ERROR genesis_utils.py:3645 Failed to run cmd grep MemTotal /proc/meminfo. Ret 255, out , err
You can run the following command from one of the CVMs in the cluster to find the leader node (the node that performed the cluster expansion):
nutanix@existing-cvm$ allssh 'grep "Elected cvm id" ~/data/logs/genesis.out'
The above command will show an output similar to the following. The IP address shown in the following result is the leader node.
2020-03-17 17:02:42 INFO pre_expand_cluster_checks.py:556 Elected cvm id: 4, ip: xx.xx.xx.xx as foundation node
Scenario 3Expand cluster operation fails with "Failed to fetch disks from the node" error. The error appears in /home/nutanix/data/logs/genesis.out logs on the leader node.
2022-12-05 18:55:39,483Z INFO 14646992 pre_expand_cluster_checks.py:1744 Checking if node(s) already in cluster
2022-12-05 18:55:39,510Z INFO 14646992 pre_expand_cluster_checks.py:2410 Trying to fetch disks info from the node x.x.x.x
2022-12-05 18:55:39,871Z ERROR 14646992 pre_expand_cluster_checks.py:2420 Failed to fetch disks from the node x.x.x.x. ret: 255, out: , err:
Cause
The expand cluster failure in Scenarios 1 and 2 occurs when non-default SSH keys are present on the leader node.
|
Before you proceed with the following steps to replace the non-default SSH key, verify that Expand cluster Prerequisites and Requirements https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_1:wc-cluster-expand-wc-r.html are met.You can use the following steps to verify connectivity from a cluster to the new node:
1. Run a ping command from one of the CVMs in the cluster to the new node.
2. SSH to the new node from one of the CVMs in the cluster. It should ask for a password.
Note: If the prerequisites are not met, do not proceed with the KB.
Steps to replace non-default SSH keys
Run the following command from a CVM in the cluster to check the md5sum of SSH keys on all nodes.
nutanix@cvm$ allssh md5sum ~/ssh_keys/nutanix*
Check if the md5sum of the keys from the genesis leader node matches the following ones (default keys).
81ecc880b6f81bc3a060b5a684efcdf2 /home/nutanix/ssh_keys/nutanix
The md5sum of SSH keys should match the above for all nodes in the cluster. This is important because leadership may change in the future. The expand cluster operation would fail again if that happens.
Sample output: md5sum of SSH keys from one node is not matching the default on one of the CVM.
nutanix@existing-cvm$ allssh 'md5sum ~/ssh_keys/nutanix*'
If the md5sum on any node does not match the default, you need to copy the default keys from another node using the following commands. Alternatively, you can copy the default keys from ~/data/installer/el7*/ssh_keys/ on the same node or from a CVM from another cluster that has the default SSH keys. Note: The following command should be run from the CVM with the keys that do NOT match.
nutanix@cvm$ scp xx.xx.xx.1:~/ssh_keys/nutanix ~/ssh_keys/
Rerun the command from step 2 to verify that the md5sum of the files are the same across the cluster.
|
KB2893
|
Collecting diagnostic data during a ESXi PSOD or a Hung AHV Host
|
This article should be used for the collection of diagnostics logs and output for when a host goes down.
|
Collecting diagnostics data during an ESXi PSOD (purple diagnostic screen) or when an AHV host is hung can provide valuable insight into why it happened. Even if the host has already come back up, logs and command output can be collected to identify the cause.
|
As the default behavior of ESXi is to sit at the PSOD (purple diagnostic screen), our recommendation is to leave it in this down state and open a support case so that the OOB script can be run against the host. The out-of-band script gathers a wide range of pre and post data as well as runs diagnostic tests that can help provide insight into what happened. Please know that VMware software support is limited, so we advise that you also open a VMware support case and follow their PSOD diagnostics KB, KB-1004128 https://kb.vmware.com/s/article/1004128. The default behavior of AHV is to come back up, if possible, making it impossible to run the OOB script against the host. There are two options for this situation: gather the troubleshooting data that is available upon its reboot, or make a configuration change in the IPMI or CLI that will keep the AHV host in the down state so that the OOB script can be run. The latter is an option for when what is gathered does not provide enough data to determine what happened. The OOB script is designed for use on the G6 & G7 NX platforms. If the NX platform is a G* please collect the TS Logs as well as the Health Event Logs for review.
Gather and attach the following logs and outputs to your Nutanix support case.
Troubleshooting Logs from the IPMI Web GUI.
Navigate to IPMI WebUI > Miscellaneous > Troubleshooting and use the "Download" button. ONLY use the Download button to gather the data; only if it is not available to be clicked should you click the Dump button to generate a dump, after which the Download button becomes available. Refer to the known limitations and usage guidelines section for additional information on this process.
G5 - G7 View G5 G8 and later view:
Known limitations and usage guidelines
The "Troubleshooting" (TS) download is available even post-reboot and can only be gathered directly from the IPMI Web GUI.Select the "Download" button to collect the current failure state registers. Only select the "Generate" button if the option to download the TS dump isn't presented to you.If you click on the "Generate" button, it will produce a new TS dump of the current state, which will overwrite the previous register values with the current state. We want the failure state registers DO NOT click generate unless the download button is not available to the clicked.The troubleshooting data will be lost if the node loses power. Do not reseat the node or detach power from it before collecting this data.
Example of the download button available for clicking and downloading the stored data: This is an example where the download button is not active or clickable. You would need to click the Dump button and then the Download button. Unfortunately, this dump output is unlikely to be helpful, but still gather it for what benefit it could offer (you need to wait a minute while the collection occurs for the Download button to become enabled).
Get the IPMI Web GUI system event logs (SEL).
Navigate to IPMI Web GUI > Server Health - Event Log (in newer NX platforms, these logs can also be referred to as Health Event Logs).
G5 - G7 view:
G8 and later view:
Note that AHV is designed to power back on after a crash. To prevent this from happening, you can set the node not to reboot and instead stay in its hung state so Nutanix Support can run the OOB script to collect additional data that could help in better understanding the issue.
G5 to G7 - uncheck "system auto reset":
|
KB5012
|
[Performance] Interpreting CPU Ready values
|
High RDY values can be concerning, but they are subject to misinterpretation. This article is intended to help interpret CPU Ready values reported from various sources.
|
High CPU Ready values can be a sign of CPU contention, which can cause performance issues for many applications. It is important to consider the specific application impacted and its tolerance while interpreting these values, as some applications experience delays at values as low as 1%, and others can tolerate higher values with no noticeable impact. Most applications experience performance delays when CPU Ready % is above 5. This article explains how to retrieve and compare CPU Ready values reported by different system tools.
Prism Element - AHV or ESXi
You can find a value for CPU Ready % reported for each VM in Prism Element. To check this value:
1. Log into Prism.
2. Navigate to the VM dashboard.
3. Select the VM you'd like to check CPU Ready % values on, then scroll to the bottom of the page to find the chart with those values (figure 1).
4. If you would like to manipulate the time range for the chart, hover your mouse over the chart and select "Add chart to analysis page," near the top-right corner of the chart.
Figure 1: Prism Element: VM Dashboard, showing VM CPU Ready %
vSphere and esxtop
CPU ready time is shown in the vSphere client in milliseconds (ms). Esxtop reports %RDY as a percentage value (figure 4), which vSphere can also do (figure 3). These values are not directly comparable to the % value reported by Prism Element, as they are summation values (PE displays the average value). The examples below illustrate the difference between CPU ready time in ms (milliseconds) compared to %:
Figure 2: CPU Ready time in ms (milliseconds), from vSphere
Figure 3: CPU Ready time in %, from vSphere
Figure 4: CPU ready time in %, reported on a host from esxtop
|
CPU Ready is the time a virtual CPU is ready to run but is not being scheduled on a physical CPU; this usually indicates that there is not enough physical CPU to schedule the work immediately. To be able to interpret ready times it is essential to know the relationship between the CPU ready time in percentage (%) and milliseconds (ms), and to also consider the number of vCPUs on the VM being observed. This value is most commonly evaluated as a percentage across all vCPUs, i.e. the average CPU Ready on the VM. Below you will find an explanation of how to find the average CPU Ready on the VM.
Prism
CPU Ready % reported in Prism Element is stated in terms of a percentage across all vCPUs, for both AHV and ESXi.
Note: Releases prior to AOS 5.10.8 and 5.11 show the sum of the % Ready of all its vCPUs instead. Divide this value by the number of vCPUs on the VM to display the average CPU Ready value for the VM.
vSphere
It is possible to change "CPU ready time in ms" to "CPU ready time in %" in the vSphere Client. Values displayed in the vSphere Client for either metric type display Ready values for each vCPU as well as a summation of all vCPU assigned to the virtual machine. Performance graphs in the vSphere Client update every 20 seconds (20000 milliseconds) by default.
To get the average value for CPU Ready across the VM, we need to divide the summation value reported by the number of vCPUs assigned to the VM. For example, 5% of total ready reported in vSphere for an 8 vCPU virtual machine has the average of 0.625 % per vCPU. Therefore, to get the CPU ready % from the ms value reported by vSphere, use the formula below:
CPU ready % = ((CPU Ready summation value in ms / number of vCPUs) / (<chart update interval in seconds, default of 20> * 1000 ms/s)) * 100
A calculation example:
vSphere Client shows: 400msVirtual Machine has 8 vCPUs 400/8 = 50ms per vCPUChart interval using the default of 20 seconds
Using the formula above:
((400 ms / 8 vCPUs) / (20 second collection interval * 1000 ms/s)) * 100 = 0.25% CPU ready time
esxtop
In esxtop it is only possible to get CPU ready time per virtual machine in % (expandable to view per-vCPU ready time in % using the e key). Esxtop refreshes every 5 seconds by default and can be adjusted to as low as 2 seconds via the s key. If esxtop is not displaying %RDY, press the f key to update display fields, then the i key to toggle CPU Summary Stats. The number reported here is a summation value, so it should be divided by the number of vCPUs assigned to the VM in order to get an average value for the VM's CPU Ready %. This formula is a bit simpler:
CPU ready % (average per vCPU) = CPU Ready % summation value from esxtop / number of vCPUs
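For example, with a hypothetical reading: if esxtop shows a %RDY summation of 40 for an 8-vCPU VM, the average CPU Ready for the VM is 40 / 8 = 5% per vCPU.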
For VMware systems, also see their KB 2002181: https://kb.vmware.com/s/article/2002181 Converting between CPU summation and CPU % ready values [
{
"CPU Ready Time in % (1 vCPU)": "5%",
"CPU Ready Time in ms (20 second reporting interval)": "1000 ms"
},
{
"CPU Ready Time in % (1 vCPU)": "10%",
"CPU Ready Time in ms (20 second reporting interval)": "2000 ms"
},
{
"CPU Ready Time in % (1 vCPU)": "100%",
"CPU Ready Time in ms (20 second reporting interval)": "20000 ms"
}
]
|
KB14778
|
File Analytics - FA 3.2.1 may fail to start services after a reboot
|
This KB addresses resetting the expiration timer for the 'root' user password, which was preventing File Analytics services from starting.
|
Beginning in File Analytics (FA) 3.2.1, the root account password expiration is now set to 60 days.If you are running File Analytics 3.2.1, the password for the root account will expire after 60 days from the time of deployment or upgrade.File Analytics has a startup script that mounts the backing Volume Group where the File Analytics services run, which calls root privileges during runtime.As a result, if the File Analytics VM (FAVM) is powered off/on or rebooted while the password has expired, that script will fail to mount the Volume Group and subsequently fail to run the FA Services.Check the docker status to see if the failure is in the mount_volume_group.sh script:
SSH to the FAVM.
Run sudo systemctl status docker.service:
nutanix@FAVM ~]$ sudo systemctl status docker.service
To check the current expiration of the root account:
SSH to the FAVM.
Run sudo chage -l root:
[nutanix@FAVM ~]$ sudo chage -l root
Note in the example above that the password expired as of April 02, 2023, compared to the current date in May.
|
This issue has been fixed in File Analytics 3.3.0.1 and later.If File Analytics 3.2.1 is currently unable to start services due to this issue, use the following workaround to reset the 60-day timer:
SSH to the FAVM.
Run sudo chage -d <current date> root:
[nutanix@FAVM ~]$ sudo chage -d 2023-05-10 root
This sets the "Last password change" to the current date, pushing the next expiration date out by 60 days.Reboot the FAVM
[nutanix@FAVM ~]$ sudo reboot
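After the reboot, you can re-run the earlier check to confirm the new expiration (dates are illustrative):
[nutanix@FAVM ~]$ sudo chage -l root    # "Password expires" should now be about 60 days out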
Note: This workaround may need to be used again if another 60 days have passed before being able to upgrade to File Analytics 3.3.0.1 or later.If a long-term workaround is desired, contact Nutanix Support referencing this KB to update the startup script.After rebooting, verify that all three containers are up and running:
[nutanix@FAVM ~]$ docker ps -a
An example if one or more are missing (Analytics_Gateway):
[nutanix@FAVM ~]$ docker ps -a
If any are missing, run the following ONLY for whichever container is missing:
env $(cat /opt/nutanix/analytics/config/deploy.env) docker-compose -f /mnt/containers/config/docker-compose.yml up -d --no-deps -t 120 Analytics_Gateway
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm:~$ logbay collect -s <cvm_ip> -t avm_logs
Log bundle will be collected at /home/nutanix/data/logbay/bundles of CVM IP provided in the command.
You can collect logs from FA VM through FA CLI, FA UI, or by running logbay collect_logs command on CVM. Open an SSH session to FA VM with “nutanix” user credentials and run the command:
nutanix@favm:~$ /opt/nutanix/analytics/log_collector/bin/collect_logs
Upload the generated log bundle to the case.
|
KB4710
|
Nutanix Files - Troubleshooting Kerberos authentication on a Nutanix Files server.
|
Nutanix Files - Troubleshooting Kerberos authentication on a Nutanix Files server.
|
The Nutanix Files server uses Kerberos authentication. The Kerberos version 5 authentication protocol provides a mechanism for authentication — and mutual authentication — between a client and a server, or between one server and another server.Important: This is a generic KB that is created to explain the basics and relevant information about troubleshooting Kerberos.
|
If you are able to browse the share by IP but not by the DNS name, you have a problem with Kerberos authentication. Most of the time it will be one of the following, which should be checked in order:
1. Time
2. DNS
3. SPN
4. Machine account
There have been cases where the following occurs:
1. Nutanix Files is joined to the AD domain.
2. The client gets a Kerberos ticket from the KDC (while attempting to connect to a Nutanix Files share), which will expire by default after 10 hours.
3. Nutanix Files renews the machine account password.
4. The client authenticates using the ticket obtained at step 2: SUCCESSFUL.
5. Nutanix Files renews the machine account password again.
6. The client authenticates using the ticket obtained at step 2: FAILS. The original ticket has not expired but was based on the first machine account password.
Also check KB 5852 for similar machine account password expiration and Kerberos ticket timeout issues.
The winbind log (/home/log/samba/winbindd.log) on the Nutanix Files server during the problem:
Failed to fetch record with key "/appliance/logical/samba/tdbs/secrets.tdb/SECRETS:MACHINE_PASSWORD.PREV:BE", No such node
Snippet of a packet trace
24258 54.743169 xx.xx.xx.140 xx.xx.xx.68 SMB2 261 961 KRB Error: KRB5KRB_ERR_GENERIC 445
The proper fix for this problem is to do the following:
1. Run allssh klist purge on the Nutanix Files cluster.
2. Unjoin the Nutanix Files server from the domain. (Please discuss with Engineering before proceeding with the unjoin operation; some customers don't like doing unjoin operations on production clusters.)
3. Rejoin the Nutanix Files server to the domain.
Note: It's critical to understand that a domain unjoin should be used as a last resort, and we need to understand where the problem lies. The winbindd.log, smbd.log, log.wb-<DOMAIN> and <client-ip/host>.log files will be helpful. <afs> smb.health_check verbose=true will provide an update on the nature of the problem. If you suspect the DC in use is a problem, the "smb.dc preferred add dc_ip_list=__________" command will be helpful to use a functional authentication DC. In newer AFS versions (as of at least 3.7.x), the command syntax has changed to "ad.dc preferred add dc_ip_list=__________".
Kerberos Authentication Dependencies
This section reviews dependencies and summarizes how each dependency relates to Kerberos authentication.
Operating System
Kerberos authentication relies on client functionality that is built into the Windows Server 2000 and higher operating systems, and the Windows 2000 operating system. If a client, domain controller, or target server is running an earlier operating system, it cannot natively use Kerberos authentication. Currently, we only support domain functional level Windows Server 2008 and higher. https://portal.nutanix.com/#/page/docs/details?targetId=Acropolis-File-Services-Guide-v21:afs-file-server-supported-config-r.html
TCP/IP Network Connectivity
For Kerberos authentication to occur, TCP/IP network connectivity must exist between the client, the domain controller, and the target server. For more information about TCP/IP, see https://technet.microsoft.com/en-us/library/cc732974(v=ws.10).aspx
Domain Name System
The client uses the fully qualified domain name (FQDN) to access the domain controller. DNS must be functioning for the client to obtain the FQDN. For best results, do not use Hosts files with DNS. For more information about DNS, see https://technet.microsoft.com/en-us/library/cc779926(v=ws.10).aspx
Double check to make sure that mappings are correct.
Windows PowerShell
The following command will show you the Nutanix Files DNS mappings:
nutanix@FSVM:~$ afs get_dns_mappings
If necessary you will have to make changes in the customer's DNS configuration.
Active Directory Domain
Kerberos authentication is not supported in earlier operating systems, such as the Microsoft Windows NT 4.0 operating system. You must be using user and computer accounts in the Active Directory service to use Kerberos authentication. Local accounts and Windows NT domain accounts cannot be used for Kerberos authentication. The link below lists the Nutanix requirements for Active Directory: https://portal.nutanix.com/#/page/docs/details?targetId=Acropolis-File-Services-Guide-v21:afs-file-server-supported-config-r.html
Time Service
For Kerberos authentication to function correctly, all domains and forests in a network should use the same time source so that the time on all network computers is synchronized. An Active Directory domain controller acts as an authoritative source of time for its domain, which guarantees that an entire domain has the same time. For more information, see https://technet.microsoft.com/en-us/library/cc773061(v=ws.10).aspx Kerberos authentication will fail if the time skew is greater than 5 minutes between the client and server. Use the regular time troubleshooting; the FSVMs use the same time structure as the CVMs. The allssh ntpq -pn command will give you the NTP configuration and time information across the Nutanix Files cluster.
nutanix@FSVM:~$ allssh ntpq -pn
The sudo net ads info command will show you the time on the DC and which DC the FSVM is connected to. It also displays the time offset; if the offset is greater than 300 seconds, Kerberos authentication will fail.
nutanix@FSVM:~$ allssh sudo net ads info
There can be a situation where the time skew is greater than 5 minutes on the Nutanix Files cluster; this will cause the CPUs on the FSVMs to spike to very high levels, causing a complete outage on the AFS cluster. The full root cause is still unknown, but we believe it is due to the smbd and winbind services crashing incorrectly. The workaround to fix this issue and other minor time skew problems is to run the following command.
Nutanix Files shares not accessible via name only.
Users typically use \\sharename\ to browse to Nutanix Files shares and access their data. If issues are encountered on the domain controller authentications could fail, and depending on length of time the DC has issues kerberos tickets could also expire/become stale.
SMB Clock Sync
This will instantly sync the time on the FSVMs; do not run this command when the time skew is greater than 5 minutes without approval from a senior member. In AOS 5.1.2/Nutanix Files 2.2.0, the command changes to the following.
nutanix@FSVM:~$ scli sync-clock
In Nutanix Files 3.0, the command has changed and it's a hidden command (<afs> set show_hidden=True).
nutanix@FSVM:~$ afs smb.sync_clock ntp=X.X.X.X
If you receive an error similar to the following
nutanix@FSVM:~$ afs smb.sync_clock
Based on the code, check whether the following command works:
nutanix@FSVM:~$ sudo net ads info -P
Service Principal Names
Service principal names (SPNs) are unique identifiers for services running on servers. Every service that uses Kerberos authentication needs to have an SPN set for it so that clients can identify the service on the network. If an SPN is not set for a service, clients have no way of locating that service. Without correctly set SPNs, Kerberos authentication is not possible. For more information about user-to-user authentication, see https://technet.microsoft.com/en-us/library/cc772815(v=ws.10).aspx
Joining the domain for SMB will create 32 FSVM FQDN, 32 FSVM Object, 1 File Server FQDN, and 1 File Server Object "HOST" SPNs.
Joining the NFS realm for Active Directory will create 32 FSVM FQDN, 32 FSVM Object, 1 File Server FQDN, and 1 File Server Object "nfs" SPNs.
To get the file server name, run the following command on the Nutanix Files server:
nutanix@FSVM:~$ afs fs.info
To list all SPNs for the File Server, run from any FSVM:
nutanix@FSVM:~$ sudo net ads status -P | grep -i service
To list only SMB SPNs for the File Server, run from any FSVM:
nutanix@FSVM:~$ sudo net ads status -P | grep -i service | grep -i "host\/"
To list only NFS SPNs for the File Server, run from any FSVM:
nutanix@FSVM:~$ sudo net ads status -P | grep -i service | grep -i "nfs\/"
This is an example of a working Nutanix Files server that is joined for SMB only:
servicePrincipalName: HOST/NTNX-RMRF-STAR-32.sre.local
This is an example of a working Nutanix Files server that is joined for NFS only:
servicePrincipalName: nfs/NTNX-RMRF-STAR-32.sre.local
Note: If both are joined, you will see all HOST and nfs entries present! SMB -> To add SMB SPNs, you can run the following commands from any FSVM:
nutanix@FSVM:~$ /usr/local/nutanix/cluster/bin/sudo_wrapper -E KRB5_KTNAME=FILE:/home/nutanix/tmp/krb5.keytab net ads spn add smb -U '[email protected]' -S '<fqdn of dc>' -d 0
To remove SMB SPNs:
nutanix@FSVM:~$ sudo net ads spn remove smb -U '[email protected]' -S '<fqdn of dc>' -d 0
NFS -> To add NFS spns you can run the following commands from any FSVM:
nutanix@FSVM:~$ /usr/local/nutanix/cluster/bin/sudo_wrapper -E KRB5_KTNAME=FILE:/home/nutanix/tmp/krb5.keytab net ads spn add nfs -U '[email protected]' -S '<fqdn of dc>' --dnsdomain=<full domain> -d 0
To remove NFS SPNs:
nutanix@FSVM:~$ sudo net ads spn remove nfs -U '[email protected]' -S '<fqdn of dc>' -d 0
Advanced Troubleshooting - Verify SPN count of all discoverable Domain Controllers:
The following commands will help debug issues with Domain Controller (DC) replication issues where some DC's may not have all SPNs.
Host Records... Count should equal 66
nutanix@FSVM:~$ for i in `nslookup -type=SRV _ldap._tcp.dc._msdcs.<fqdn_of_domain> | awk '{print $7}' | sed 's/.$//' | sort`; do echo "$i"; sudo net ads status -P -S $i | grep -i "host\/" | wc -l; done
NFS Records... Count should equal 66
nutanix@FSVM:~$ for i in `nslookup -type=SRV _ldap._tcp.dc._msdcs.<fqdn_of_domain> | awk '{print $7}' | sed 's/.$//' | sort`; do echo "$i"; sudo net ads status -P -S $i | grep -i "nfs\/" | wc -l; done
If joined to both SMB and NFS Domains... Count should equal 132
nutanix@FSVM:~$ for i in `nslookup -type=SRV _ldap._tcp.dc._msdcs.<fqdn_of_domain> | awk '{print $7}' | sed 's/.$//' | sort`; do echo "$i"; sudo net ads status -P -S $i | grep -i service | wc -l; done
If any entries that should be present are missing for a specific DC, refer above to re-add them, specifying "-S" with the FQDN of the DC that is missing entries. This will not duplicate entries; by default, only the missing ones are added.
|
KB8627
|
NX hardware health stresstest
| null |
There are instances when there is no evidence of hardware faults in:
1. NCC health checks
2. IPMI SEL logs
3. Running smartctl on all the disks and SATADOMs
But there is evidence in log files (dmesg, /home/log/messages or /var/log/messages) where you can see binary corruptions that point to hardware issues.
|
Run a stress test on the system components (CPU and memory) to see if there is any evidence of hardware issues:
1. Upgrade to the latest BMC and BIOS version. You can use LCM or manually upgrade the host.
2. Mount a regular Phoenix ISO on the affected server and reboot into it.
3. Once at the Phoenix prompt, mount the stressiso2 ISO on another device in IPMI, downloaded from here: https://ntnx-sre.s3.amazonaws.com/Tools/stressiso2.iso
4. Once the ISO is mounted, mount the CD-ROM in Phoenix by doing the following:
[root@phoenix ]# mkdir /root/cd-live
It can be either sr0 or sr1
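For example (a hedged sketch; substitute sr1 if sr0 is already occupied by the Phoenix ISO):
[root@phoenix ]# mount /dev/sr0 /root/cd-live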
5. Copy the contents of the ISO to a different folder in Phoenix to install the program:
[root@phoenix ]# mkdir /stresstest/
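Then copy the mounted ISO contents into that folder, for example (a minimal sketch, assuming the ISO was mounted at /root/cd-live as above):
[root@phoenix ]# cp -r /root/cd-live/* /stresstest/
[root@phoenix ]# cd /stresstest/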
6. Run the test for at least 48 hours by running the following command:
stressapptest -s 172800 -W
As long as the time is decreasing the test is progressing.
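Optionally, assuming the bundled stressapptest build supports the standard -l option, the results can also be written to a log file for later review:
[root@phoenix ]# stressapptest -s 172800 -W -l /root/stresstest.log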
7. Check IPMI SEL logs during this time to see if there are any events for any hardware failures.
|
KB10011
|
Objects - Read failures due to object controller pods crashing with "Check failed: data endpoint=x.y.z:2009"
|
Customer on AOS 5.18/5.18.0.5 may report read issues. Further investigation finds that OC pods are crashing with signature: "Check failed: data endpoint=x.y.z:2009"
|
Customers on AOS 5.18/5.18.0.5 may report object store READ issues. Further investigation finds that object controller pods are crashing with signature:
F0909 15:45:55.066007 3945 storage_endpoint.cc:237] Check failed: data endpoint=a.b.c.d:2009
Symptoms:
Customer reporting issues reading from the object cluster. The error message received on client-side would differ based on client software used.The customer has deployed the cluster on AOS 5.18 or 5.18.0.5.Logging into the MSP cluster and reviewing the logs, the object controller pods are found to be crashing with the error signature below. Please refer to KB 8170 http://portal.nutanix.com/kb/8170for access the MSP cluster and checking the persistent logs. You can view the object-controller pods FATAL signature using the below command:
[nutanix@obj-a80773-default-1 ~]$ for i in `kubectl get pods -l app=object-controller -o jsonpath='{.items[*].metadata.name}'`;do echo "====="$i"===="; kubectl exec $i -- cat /home/nutanix/logs/$i/logs/object_store.FATAL ;done
One or more object controller pods output this fatal signature:
F0909 15:45:55.066007 3945 storage_endpoint.cc:237] Check failed: data endpoint=a.b.c.d:2009
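To correlate these FATALs with crash loops, the restart counts of the object-controller pods can also be checked with standard kubectl from the same MSP master node (a hedged sketch):
[nutanix@obj-a80773-default-1 ~]$ kubectl get pods -l app=object-controller -o wide
A steadily increasing RESTARTS count for one or more pods is consistent with the signature above.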
|
This issue is resolved in AOS 5.18.1 and higher. Please upgrade the AOS cluster to version 5.18.1 or higher.
|
KB12533
|
Nutanix Files - Single node FSVM cluster failed to activate File Server
|
File server activation in DR fails after a Protection Domain migration.
|
After migration of a Protection Domain to the DR site, you see the following Prism alert:
File Server activation failed A160021 warning:
A stand-alone file server fails to activate on the DR site. On the Minerva leader (follow KB-4355 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA032000000TWV5CAO to find leaders in the Nutanix cluster), ~/data/logs/minerva_cvm.log shows the below errors:
restore_task.py:1196 Got the exception while getting file server info: Failed to send RPC request
|
This issue is resolved in Nutanix Files 4.0. Upgrade File server to the latest supported version.
|
KB8880
|
Understanding 3rd Party Backup Integration
|
3rd Party Backup integration is explained for different hypervisors
|
This article gives a brief overview of 3rd party backup integration used on a Nutanix cluster. Nutanix supports integration of 3rd party backup software on clusters running AHV, ESXi and Hyper-V. The integration on these hypervisors differs from one another. The key differences are mentioned below:
AHV
Leverages a particular set of API callsLeverages NTNX snapshots and the snapshots remain on the clusterTemporary volume groups are created during the backup process and deleted once the backup process completesAn internal system PD is leveraged
ESXi
Depending on the backup software, ESXi can be integrated either via the Prism VIP or the vCenter IP
If integrated via the Prism VIP:
An internal system PD is leveraged Temporary volume groups are created during the backup process and deleted once the backup process completesNTNX snapshots are leveragedAPI calls are used
If integrated via the vCenter IP:
All tasks are carried out via vCenterVMware snapshots are leveraged and the snapshots are temporary - they are created during the backup process and then deleted once the backup completes
Hyper-V
All tasks are carried out via Hyper-VLeverages Hyper-V checkpoints and the checkpoints are temporary - they are created during the backup process and then deleted once the backup completesDepending on the backup vendor, use of proxies is optional
High-Level Workflow
The diagram below depicts the workflow for 3rd party backups and the components involved. NOTE: This workflow mainly applies to AHV and NTNX snapshots created via AOS. Typically, 3rd party backup integration on ESXi or Hyper-V does not utilize Nutanix API calls, and snapshot creation is handled via the hypervisor (VMware snapshots/Hyper-V checkpoints). However, depending on the backup vendor, this workflow can also be utilized on other hypervisors, as some backup vendors allow the use of NTNX snapshots instead of VMware snapshots or Hyper-V checkpoints.
|
If any issues occur while using 3rd party backup software, please engage Nutanix Support for further assistance and have the following information available:
Backup software vendorHypervisor versionAOS versionNCC versionNumber of backups jobs / schedule
|
KB6970
|
PE-PC Connection Failure alerts
|
PE-PC Connection failure alerts are being generated and not auto resolved. Verify underlying connectivity, upgrade NCC, and re-set the check.
|
Scenario 1. "PE-PC Connection Failure" while no underlying issue with PE (Prism Element) - PC (Prism Central) connectivity.
nutanix@CVM$ ncli alert ls
Scenario 2. PE - PC connectivity checks return red heart even if there are no alerts or underlying conditions present.
If PE - PC connectivity checks fail for brief periods (e.g., 1-2 minutes), the check does not cause an alert to be generated and presented to the Web UI. The default threshold to generate the alert is 15 minutes. Because the alert is not triggered in these instances, it cannot auto-resolve, and the Cluster health indicator (heart) remains red rather than returning to green.
Verification steps
Verify PC-PE connectivity:
Verify that the port is open.
nutanix@CVM$ nc <prism_central_ip_address> 9440 -v
If you use HTTP proxy, perform the nc command through the proxy using the following command.
nutanix@CVM$ nc -x <proxy-url>:<proxy-port> <prism_central_ip_address> 9440 -v
Example output:
nutanix@CVM$ nc x.x.x.x 9440 -v
The nc command or other connectivity checks may work. Use a nuclei remote connection health check to request more details on potential issues when using a proxy.
nutanix@CVM$ nuclei remote_connection.health_check_all
Note: In AOS 6.7 and later, pc.2023.3 and later, the command has been changed to nuclei cluster_connection.list_all.
If connectivity fails via proxy, whitelist all the Controller VM (CVM) IP addresses and Virtual IP (VIP) in PC and PC IP address on PE. Confirm that the proxy port configured is correct.
Check if the PC is reachable by the CVM:
nutanix@CVM$ ping <prism_central_ip_address> -s <packet_size> -c <count>
Example outputs:
nutanix@CVM$ ping x.x.x.x -s 996 -c 4
nutanix@CVM$ ping x.x.x.x -s 997 -c 4
Check SSL connectivity between PE to PC.
Run the below command from PE to PC. If you see the following, then something on the network is blocking, dropping, or manipulating the SSL traffic between PE and PC, for example a firewall, IPS, proxy, or any other device that can do deep packet inspection.
nutanix@CVM:~$ curl -v --insecure https://10.x.x.x:9440
A successful SSL connection through curl should look similar to the below:
nutanix@CVM:~$ curl -v --insecure https://10.x.x.x:9440
Ensure the MTU/MSS is configured between Prism Central and its connected clusters.
Nutanix Engineering has qualified MTU/MSS values at the default packet size; the recommended minimum is MTU 1500 and MSS 1460. Packet sizes below these values have not been tested and may not work properly.
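As a hedged illustration using standard Linux ping options (not a Nutanix-specific tool), the path MTU between the CVM and Prism Central can be probed with do-not-fragment packets sized for a 1500-byte MTU (1472 bytes of ICMP payload plus 28 bytes of headers):
nutanix@CVM$ ping <prism_central_ip_address> -M do -s 1472 -c 4
If these packets fail while smaller ones succeed, an intermediate device is enforcing a smaller MTU on the path.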
Check if the Remote Connection Exists value is true.
Get cluster state:
nutanix@CVM$ ncli multicluster get-cluster-state
Example output:
nutanix@CVM$ ncli multicluster get-cluster-state
Scenario 3. Attempting to register a PE to a PC fails, and you receive an error that states "Remote Connection Creation Timed Out" in the browser
In ~/data/logs/prism_gateway.log on cluster CVMs:
ERROR 2020-03-28 09:18:19,460 TaskRegistration prism.tasks.TaskRegistration.run:192 [REG] Failed to register to Prism Central.
|
Note: If you are having issues either enabling or disabling this check after the NCC upgrade, this behaviour is fixed in NCC 3.10.1 and above. Upgrade to the latest NCC.
Scenario 1. The auto resolution of this alert is improved in NCC 3.9.4 or later. To resolve this issue, upgrade NCC to the latest.
In addition, alerts can be generated regarding connection failure due to stale nuclei Prism Element entries on the Prism Central VM. This happens if the PE instance no longer exists but the nuclei entry is still in the Prism Central VM. The same can be checked by comparing the outputs of the following two commands:
nutanix@PCVM$ nuclei remote_connection.list_all
Note: In AOS 6.7 and later, pc.2023.3 and later, the command has been changed to nuclei cluster_connection.list_all.
nutanix@PCVM$ ncli multicluster get-cluster-state
In the above outputs, you can see that the nuclei remote_connection.list_all output has certain entries that are no longer present and that are unseen in the output of ncli multicluster get-cluster-state. (In this example, UUIDs xxdee9 and xx4c4deed are not seen in the second output.)
In such a case, if you can confirm that no other Prism Elements are registered to Prism Central than those seen in the ncli multicluster get-cluster-state output, the stale nuclei entries will need to be removed to stop the alerts. Instructions on the clean-up can be found in KB-4944 https://portal.nutanix.com/kb/4944. The following command can be run in the Prism Central VM:
nutanix@PCVM$ python /home/nutanix/bin/unregistration_cleanup.py uuid
Replace the UUID with the stale UUID(s) in the above command, in this example, xxdee9 and xx4c4deed.
Scenarios 2 and 3. Once you verify that there is no underlying PE-PC connectivity issue, manually reset the check. Turn the check OFF and turn it back ON. See the below screenshot:
If the steps above do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com.
|
KB17129
|
License missing on Support Portal
|
When a license is valid and available in SFDC but not reflecting in Support portal
|
When customers contact us stating that a license has been purchased but is not reflecting in the Support Portal.
| null |
KB17053
|
VM with APC (Advanced Processor Compatibility) fails to be created or Live migrate between clusters
|
When attempting to create a VM or CCLM (Cross Cluster Live Migration) with the new AOS 6.8.x APC (Advanced Processor Compatibility) feature, an error might be seen indicating that the destination cluster does not support the VM’s CPU model.
|
Scenario 1
When attempting a CCLM (Cross Cluster Live Migration) with the new APC (Advanced Processor Compatibility) set to a supported level for both the source and the destination cluster, the validation may fail with the following error:
Cannot migrate VM with UUID <UUID> because target cluster with UUID <UUID> doesn't support chosen VM's CPU model.
Looking at the acropolis logs (acropolis.out) on the destination or source (in the example below, the destination), the following log entries are noted:
2024-06-20 14:23:53,449Z ERROR cap_spec_validation.py:424 Local cluster doesn't have APC support
The APC capability is automatically enabled when upgrading to AOS 6.8.x and its supported AHV version. Filtering for the APC capability in the acropolis.out logs (allssh "cat data/logs/acropolis.out | grep CAP_APC"), the feature is noted to be "False":
2024-06-19 03:04:55,737Z INFO capability_tracker.py:337 Updating cluster capabilities to {'CAP_AHV_GW_V2': True, 'CAP_AMPERE_GPU': True, 'CAP_APC': False, 'CAP_CONNTRACK_MIGRATION_V2': True, 'CAP_CROSS_CLUSTER_LIVE_MIGRATION': True, 'CAP_DIRTY_QUOTA': True, 'CAP_GPU_HOPPER_H100': True, 'CAP_GPU_LOVELACE_L4': True, 'CAP_GPU_LOVELACE_L40': True, 'CAP_GPU_LOVELACE_L40S': True, 'CAP_HARDWARE_VIRTUALIZATION': True, 'CAP_HEALTH_MONITORING': True, 'CAP_LIBSSH2': True, 'CAP_LIVE_MIGRATION_MULTIFD': True, 'CAP_LIVE_MIGRATION_MULTIFD_ZEROCOPY': True, 'CAP_MANAGE_ISCSI_REDIRECTOR': True, 'CAP_MEM_OVERCOMMIT_V2': True, 'CAP_MIGRATION_STATS_PUBLISHING_V2': True, 'CAP_MIGRATION_TSC_SCALING': True, 'CAP_PCIE_HOTPLUG_DISABLE_PER_SLOT': True, 'CAP_PEER_TO_PEER_MIGRATION': True, 'CAP_PROVIDES_PY3_PYNVML': True, 'CAP_SECURE_BOOT': True, 'CAP_SWAP_ATTACHED': True, 'CAP_UNMANAGED_NETDEV': True, 'CAP_VGPU_CONSOLE': True, 'CAP_VIRTUAL_TPM3': True, 'CAP_VMCOREINFO': True, 'CAP_VM_GEN_BIOS_UUID': True, 'CAP_VM_SERIAL_PORT_COMM': True}
Scenario 2
VM creation might fail with the below error:
"CLUSTER_CAPABILITY_ERROR: No capable cluster found, failed at property apc_config.cpu_model_reference.uuid"
In Prism Central, the below tasks for the metropolis and aplos might be in a failed state.
nutanix@PCVM:~$ ecli tasks.list | grep -i fail
Checking the metropolis task details, the same error could be noticed.
nutanix@PCVM:~$ ecli task.get f94e6c5e-2b05-4de2-ad9c-8b918754305f
Checking the aplos task details, the same error could be noticed.
nutanix@PCVM:~$ ecli task.get 263ca3fa-d614-44e4-96d3-353f65de8db4
|
This is currently a known race condition between AHV Gateway, EMM and libvirt - ENG-666516 https://jira.nutanix.com/browse/ENG-666516. This only occurs on clusters that were upgraded to AOS 6.8.
Workaround
Perform the following actions to apply a workaround:
Ensure that the selected APC level is supported by the nodes on both the source and the destination cluster.
Identify the Acropolis leader on the site where this state is noted:
panacea_cli show_leaders | grep acropolis
SSH to the identified CVM.Perform the checks described in KB 12365 http://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis.Restart the Acropolis service on this CVM:
genesis stop acropolis; cluster start
Confirm with a new Acropolis leader selected, that the APC feature is set to "True".
allssh "cat data/logs/acropolis.out | grep CAP_APC"
Once the above steps are completed, reattempt the CCLM.
|
KB6591
|
NCC Health Check: recovery_plan_multiple_az_order_check
|
NCC 3.7.1. The NCC health check recovery_plan_multiple_az_order_check raises an alert if VMs replicated to different Availability Zones are part of the same recovery plan.
|
Note: This health check is retired in NCC 5.0.0 and later. Ensure you are running the latest version of NCC before running the NCC health checks.
The NCC check recovery_plan_multiple_az_order_check verifies and raises an alert if VMs replicated to different Availability Zones are part of the same recovery plan.
This NCC check/Health Check is available only from Prism Central.
The Prism Central Web console will report a failure if it finds VMs replicated to different Availability Zones as part of the same recovery plan. Once the issue is resolved, the alert will auto-resolve in 2 days.
When a Recovery Plan is saved as part of a recovery plan create or update workflow, there is a guardrail in place that checks if there are 2 or more VMs in this recovery plan, replicated to different Availability zones. If this is true, an error is raised and you will not be able to save that recovery plan. Update the protection policy for the VMs that are in the Recovery plan to ensure they are being replicated to the same Availability Zone.
Running the NCC check
This NCC check can be run as part of the complete NCC check by running the below command from Prism Central CLI:
nutanix@cvm$ ncc health_checks run_all
Or individually as:
nutanix@cvm$ ncc health_checks draas_checks recovery_plan_checks recovery_plan_multiple_az_order_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every hour, by default.
This check will generate an alert after 1 failure.
Sample Output when the NCC check is run from Prism Central CLI
Running : health_checks draas_checks recovery_plan_checks recovery_plan_multiple_az_order_check
Error message seen when the Recovery Plan has Entities replicated to multiple AZs is saved:
VMs in Recovery Plan have been protected across multiple destination Availability Zones
Output messaging
[
{
"Description": "Checks if Recovery Plan has multiple Availability Zone orders.",
"Causes of failure": "Recovery Plan contains more than one Availability Zone Order.",
"Resolutions": "Update associated entities in Recovery Plan to have single Availability Zone Order.",
"Impact": "The Recovery Plan update will not be allowed.",
"Alert ID": "A300424",
"Alert Title": "Recovery Plan has multiple Availability Zone Orders.",
"Alert Smart Title": "Recovery Plan recovery_plan_name has multiple Availability Zone Order.",
"Alert Message": "Recovery Plan recovery_plan_name has more than one Availability Zone Order."
}
]
|
Update the Protection Policies for the entities in the recovery plan so that they replicate into the same Destination Availability Zones.
|
KB12179
|
Windows 11 support on AHV
|
Is Windows 11 supported on AHV?
|
Is Windows 11 supported on AHV?
|
Windows 11 requires Trusted Platform Module (TPM) version 2.0, which is supported starting from AOS 6.5.1 with AHV 20220304.242. Note that 20220304.242 is an optional upgrade path, and by default, AOS 6.5.1 is bundled with AHV 20201105.30411. Refer to the Securing AHV VMs with Virtual Trusted Platform Module (vTPM) https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:mul-vtpm-overview-pc-c.html chapter of the Security Guide for more information on how to enable vTPM. See https://www.microsoft.com/en-us/windows/windows-11-specifications for more details about Windows 11 system requirements. It is not recommended to apply workarounds found on the Internet that disable TPM verification.
|
KB4085
|
Troubleshooting Network Visualization on AHV cluster
|
Tips for investigating issues with Network Visualization setup on an AHV cluster.
|
Cluster Network Visualization is a feature first added in AOS 5.0 which visually displays network paths up to the immediate switches. The network visualization configuration requires the user to add the switch details in Prism Element (Settings / Network Switch). Network visualization uses SNMP to get statistical information about the physical switch and LLDP to get information about the connected ports. Users can check the Network Visualization https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v5_11:wc-network-visualization-intro-c.html section in the Prism Guide for more details about the prerequisites and initial configuration.
Network Visualization relies on a functional internal DNS system to map switch hostnames and IPs based on LLDP responses. Incorrect DNS configuration, such as using public DNS servers without knowledge of the internal infrastructure, may lead to inaccurate details or 'none' being displayed.
Without proper configuration on both sides, you may not get all the information expected. This article discusses general tips for troubleshooting network visualization issues and several specific issues and how to resolve them.AHV, AOS, and NCC all play a part in completing the network visualization feature. Capture the version of each and make sure they are compatible and support this feature.
Currently, Nutanix uses the Switch BRIDGE-MIB (RFC 1493) or Q-BRIDGE-MIB (RFC 2674) MAC table to discover the switch interfaces. The switches that are not compatible with these RFCs are not supported.Refer to Prism Web Console Guide - Configuring Network Switch Information https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_1:wc-system-network-switches-wc-t.html.
Firewall requirements for the feature to work (see the table at the end of this section):
Issue #1: Network Visualizer shows correct switch name but "Port Data Unavailable"
The network visualizer shows a switch with "Port Data Unavailable" below it.
Hardware > Table > Switch view shows the switch, but clicking on it does not show any of its switch interface ports.
Querying the switch from AHV using lldpctl shows PortID as "local <ifindex>" instead of "ifname <name>", which is the default for many switch vendors and what AOS expects to see.
[root@host ~]# lldpctl
Listing the switch port details returns "None".
nutanix@cvm:~$ ncli net list-switch-ports switch-id=558746ed-310c-4a08-974f-f9ceec4b4979
/home/nutanix/data/logs/health_server.error shows:
2017-03-27 12:32:12 ERROR switch_interface_collector.py:446 [switch_interface_stats_collector] switch port name unknown for host nic eth0
Issue #2: Network visualization shows a switch as "Switch None"
If a network switch is created with incorrect information, it is possible for this switch to become stuck in the network visualizer. Deleting the switch from the configuration will succeed, but the switch will continue to appear in the GUI as "Switch None".
Clicking on the switch and selecting "Go to switch details" will show the Hardware > Table > Switch view with an error message saying "Switch with id <uuid> was not found."
Issue #3: Network visualization is not reporting accurate information - wrong physical interfaces detected on the switch
This is because LLDP frames from Dell switches are not recognized in AHV.
Issue #4: Incorrect port mapping in visualization
In the below scenario, node 5 connects to an unknown switch while the other nodes connect to "port-channel" ports of the switch.
Physically, each node has two 10G ports, each connecting to one of the two switches. The diagram should show the physical port name instead of port-channel, and all nodes should map to the same switches.
/home/nutanix/data/logs/health_server.log content on node5:
2019-05-03 10:05:10 ERROR network_cmd_utils.py:489 [switch_interface_stats_collector] Discarded the invalid port number on switch X.X.X.12 vlan 1(key_value = mib-2.17.1.4.1.2.968 = No Such Instance currently exists at this OID)
LLDP output from the hypervisor shows data, but the switch name (SysName) is missing:
[root@host ~]# lldpctl
Issue #5: Network Visualizer shows the connected physical switch while links are still connected to the "Unknown Switch"
One possible cause is that the connected physical switch sends both an IPv4 and an IPv6 address as the MgmtIP in the LLDP frame, and the switch does not have an FQDN or the FQDN cannot be resolved. Specifically, when IPv4 comes first and IPv6 comes second, the latter one (in this case IPv6) is used.
Another cause for this behaviour is that, even after DNS is properly configured (the switch management IP successfully resolves to the switch's FQDN), network visualization can still fail: the A records for the switches resolve to the entire FQDN and not just the hostname, and network visualization will try to forward-lookup the switch hostname retrieved via SNMP/LLDP. Since it does not know which domain the hostname belongs to, the switch ports are not associated with any switch.
The following signature can be found in the health_server.log (/home/nutanix/data/logs/health_server.log):
2020-04-23 14:47:24 ERROR network_utils.py:418 [switch_interface_stats_collector] Failed to get IPv4 address by name Rack-Switch-1, error gaierror(11, 'ARES_ECONNREFUSED: Could not contact DNS servers')
Use the following command to look for this signature:
nutanix@cvm:~$ egrep 'INFO|WARNING|ERROR' data/logs/health_server.log | grep switch_interface_stats_collector | less
Issue #6: Health Server logging may show that name resolution for switches is failing if switches are configured via only their short name
Switches are configured via the short name <MyFirstSwitchShortName> instead of the full FQDN including the domain <MyFirstSwitchShortName.MyDomain.com>. This issue can be related to Issue #5.
A nslookup for the switch short name fails:
nutanix@cvm:~$ nslookup MyFirstSwitchShortName
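To confirm the record itself exists and only the short-name lookup fails, the lookup can be repeated with the full FQDN (hypothetical names matching the placeholders above):
nutanix@cvm:~$ nslookup MyFirstSwitchShortName.MyDomain.com
If the FQDN resolves while the short name does not, the DNS search domain on the cluster is the likely gap.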
In this scenario, the unknown switch is observed in the GUI. Log entries appear if short names are not resolving to the IP:
2021-05-26 06:07:50 ERROR network_utils.py:438 [switch_interface_stats_collector] Failed to get name info from ip address X.X.X.1, error herror(1, 'Unknown host')
If Network Switch is configured with an SNMP profile on Prism, snmpget command timing out might be seen on health server logs:
2021-09-03 23:45:18Z WARNING command.py:156 [switch_interface_stats_collector] Timeout executing snmpget -v2c -c *** -OXsQ 172.xxx.xxx.zzz sysObjectID.0 sysName.0: 5 secs elapsed
Issue #7: Unknown switch is seen in Network Visualization tab due to misnamed internal iDRAC NIC
The iDRAC NIC, which we see as cabled to the unknown "switch", is populated as a unique USB NIC by the kernel.
[root@AHV~]# dmesg | grep -i "<iDRAC NIC MAC ADDRESS>"
Looking at the other hosts, we see they have iDRAC named NICs rather than the eth named NIC for iDRAC connectivity:
nutanix@CVM:~$ hostssh 'ifconfig | egrep -A2 "idrac\|eth" | grep -B1 "169.254"'
Check iDRAC USB ID matches udev rules.
nutanix@CVM:~$ hostssh "cat /etc/udev/rules.d/95-iSM-usbnic.rules"
Issue #8: LLDP discovery does not populate switch info, despite LLDP returning legitimate data
Processing of the LLDP information may fail due to the lack of a standardized name provided by the switch in the form <interface type><port> (e.g. not Ethernet44, but just 44). This results in the service being unable to process the data and the Prism interface not showing any interfaces connected to the switch. To verify this is the case, search for the following traceback in the /home/nutanix/data/logs/health_server.log.* files on the CVM where the switch is not showing properly:
2020-07-27 16:14:50 INFO switch_interface_collector.py:979 [switch_interface_stats_collector] NWVIZ: Switch address 172.x.x.x, try to fetch SNMP ifIndex for port name(teX/Y/Z) or port descr(tengigabitethernetX/Y/Z) using ifName detail.
Issue #9: NCC discovery does not populate switch info, despite switch configuration being updated in Prism Element and the switch being visible through LLDP and SNMP walk
After changing the physical connection of nodes to switch devices and reconfiguring switch details in Prism Element, new devices are not discovered in Prism Element > Hardware > Switches. If we check the Health server logs for entries related to switch checks (id: 150000, 150001) in /home/nutanix/data/logs/health_server.log.*, we can see the checks are disabled:
2022-03-07 09:30:52,296Z INFO config.py:3222 Add/update override check schema cache with new schema: 150001
Issue #10: The Prism UI displays an unknown (non-existing) switch
Only two switches are connected to the cluster, but an unknown third switch comes up in the Prism UI after changes are made on the name server.
Firewall requirements table (referenced above under "Firewall requirements for the feature to work"):
[
{
"Source": "Cluster VIP,\t\t\tCVM IPs,\t\t\tAHV host IPs",
"Destination": "The management interface of ToR switch",
"Purpose": "SNMP",
"TCP/UDP": "UDP",
"Port number": "161"
}
]
|
The network visualizer display will add an "Unknown Switch" for any ports on the AHV host side that it cannot associate with discovered ports from the switch side. All ports without discovered switch information to match will show as linked to this "Unknown Switch".
Switches that are added successfully in Prism will show up above "Unknown Switch" and will display the discovered switch name data and port data, if available. If the physically attached switch is discovered, but you are seeing "Port Data Unavailable" under the switch, or host ports are linking to the "Unknown Switch", there may be additional optional TLVs that need to be enabled. See the relevant switch documentation for the required method.
You may also want to review the Network Visualization https://portal.nutanix.com/#/page/docs/details?targetId=Web-Console-Guide-Prism-v5_11:wc-network-visualization-intro-c.html page in the Prism Web Console Guide for any prerequisites you may have missed.
In the Prism Hardware dashboard, you can see some switch information under Table view. From AHV, you can attempt to get LLDP discovery data. You can check if the host can get neighbor switch data for each port with the command "lldpctl". Alternatively, you could run "lldpcli show neighbors".
nutanix@cvm:~$ ssh [email protected] lldpctl
Note: If lldpctl does not return successful output, check that LLDP has been enabled on the AHV host via the below commands:
Transmit LLDP/CDP settings can be checked or enabled through NCLI:
nutanix@cvm:~$ ncli cluster get-hypervisor-lldp-config
On the CVM side, you can check for switch data using Prism/NCLI and the "ncli net *" commands.
Switch information from NCLI:
nutanix@cvm:~$ ncli net list-switch
Switch port information from NCLI:
nutanix@cvm:~$ ncli net list-switch-ports switch-id=<switch UUID>
Examples of full data for reference:
Neighbor switch information per port from AHV host:
[root@host ~]# lldpctl
Network switch information from NCLI:
nutanix@cvm:~$ ncli net list-switch
Network switch port information from NCLI:
nutanix@cvm:~$ ncli net list-switch-ports switch-id=234567be-8c9e-012c-3dfc-45678901234d
Workaround for Issue #1
Change the port-id-subtype to interface-name. Should be applied to all interfaces.
Caution: This might impact other monitoring tools used to monitor the switch.
Querying the switch using lldpctl should now show PortID as "ifname <name>"
[root@host ~]# lldpctl
Note: In some cases, rebooting the hosts one by one might be required if lldpctl is still showing "local" even after changing port-id-subtype to interface-name.
Below is a snippet of the LLDP packet from the Juniper switch showing the port details. In Prism, the port ID was showing as "990" for the switch port, and the switch port has a description configured as below:
Port:
Once the LLDP port-description option is changed to "interface-alias" instead of "interface-description" on Juniper switches https://www.juniper.net/documentation/en_US/junos/topics/reference/configuration-statement/port-description-type-edit-protocols-lldp.html, the description changed to the switch port number and Prism started showing the correct switch port ID "xe-0/1/0.0". Removing the port description also works, but this is not always feasible for the network team.
Port:
Solution for Issue #2
Get the UUID of the switch either from the error message in the Prism > Hardware > Table > Switch view or by running "ncli net list-switch" and looking at the "Switch ID" in the output. For example:
nutanix@cvm:~$ ncli net list-switch
Delete the switch using the following command:
nutanix@cvm:~$ ncli net delete-switch-config switch-id=<uuid>
The switch may not immediately disappear from the "ncli net list-switch" output or from the Prism > Network visualization view. Wait for some time, then check the output of "ncli net list-switch" again or check the network visualization view again.
Solution for Issue #3
Disable IPv6 if the customer does not use IPv6 in the environment. Refer to KB 5657 http://portal.nutanix.com/kb/5657 for more details.
Solution for Issue #4
Enable the management name and IP address in the LLDP advertisement on the network switch side.
Network visualization showed proper port/switch mappings after adding the below settings to the switch's LLDP configuration.
Switch# advertise management-tlv system-capabilities management-address system-description system-name
After changing the LLDP configuration on switch, lldpctl shows switch name:
[root@host ~]# lldpctl
Workaround for Issue #5
Stop sending the IPv6 MgmtIP in LLDP from the physical switch side, or configure an FQDN for the management IP and make sure it can be resolved by the cluster.
Workaround for Issue #6
Network visualization depends on a functional internal DNS system to map switch hostnames and IP addresses based on the LLDP responses. Incorrect or partial DNS configuration displays inaccurate network details in the UI.
If the short/alias name is not resolving to the switch management IP address, work with the network team to fix the DNS issues. The CVM should be able to resolve the switch name learned via the SysName field in lldpctl. If a network switch is configured with SNMP in Prism and the physical switch is not responding to the snmpget command from the CVM, check the switch configuration; it might be that the ACL and SNMP responses are missing on the switch.
Workaround for Issue #7
This issue is resolved in the AOS 5.15.x family: AHV 20170830.434, which is bundled with AOS 5.15.2. Upgrade both AOS and AHV to the versions specified above or newer.
Solution for Issue #8
This issue is fixed in NCC 3.10.1 and later. Upgrade NCC to this version or later if this issue is encountered.
Solution for Issue #9
Enable the checks manually if they have been disabled.
nutanix@CVM:~/ncc/bin$ python plugin_config_cli.py --enable_check=True --check_id=150000
If the checks were disabled recently based on the timestamps, collect a logbay bundle for RCA.
Solution for Issue #10
Check whether the configured name servers can resolve the switch name. Run the command below and verify whether the error traces are similar to the following:
2022-11-21 05:16:42,220Z ERROR network_utils.py:421 [switch_interface_stats_collector] Failed to get IPv4 address by name arci-dc
Use the below command to find the error message as above:
nutanix@cvm:~$ egrep 'INFO|WARNING|ERROR' data/logs/health_server.log | grep switch_interface_stats_collector | less
If the error signature matches, identify the name servers that are present:
[root@ahv~]# cat /etc/resolv.conf
Use the command below to make each name server the default for testing purposes and see if it resolves the name of the switch. This is just a test; no changes are made to the environment.
nutanix@cvm:~$ nslookup
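A minimal sketch of the full interactive session, assuming X.X.X.X is one of the name servers from /etc/resolv.conf and arci-dc is the switch name from the error above:
nutanix@cvm:~$ nslookup
> server X.X.X.X
> arci-dc
> exit
Repeat this for each configured name server to identify which one fails to resolve the switch name.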
The above commands are used to set one name server as the default. If a switch name is not resolved, the above error will appear. When no name server is pinned, the system can use any of the configured name servers at random; the issue appears whenever it happens to use the name server that does not resolve the switch's name, which is why the issue is seen intermittently rather than continuously. Once you find the culprit name server, ask the customer to add the missing entry to that DNS server. Once the DNS issue is resolved, restart the cluster_health service:
nutanix@CVM:~$ allssh 'genesis stop cluster_health; sleep 30; cluster start'
It will take some time for the Prism UI to be updated; monitor the cluster and the logs.
|
KB9550
|
1-Click ESXi Upgrade stuck at 14% due to missing locker symlink
|
1-Click ESXi upgrade stuck at 14% as underlying ESXi upgrade fails to load files on locker directory
|
Issue
1-Click ESXi upgrade stuck at 14% due to missing /locker symlink on ESXi hosts.
Symptoms
Host upgrade is stuck at 14%. host_upgrade_status shows "Host upgrade failed during previous run" for the host:
nutanix@CVM:~$ host_upgrade_status
Cause
In vobd.log, we can see the host has entered and exited maintenance mode; when the host exits maintenance mode, AOS sees the upgrade in a failed state. Checking esxupdate.log and hostd.log shows the following:
esxupdate.log
2020-06-19T07:01:06Z esxupdate: 3444088: root: ERROR: vmware.esximage.Errors.InstallationError: ('VMware_locker_tools-light_10.3.10.12406962-14141615', '[Errno 32] Broken pipe')^
hostd.log
2020-06-19T07:01:06.251Z info hostd[9B5FAE0] [Originator@6876 sub=Solo.Vmomi opID=esxcli-5d-dd1b user=dcui] Result:
Looking at the same timestamp in the vmkernel.log, we might see errors related to the ramdisk being full:
vmkernel.log
2020-06-19T07:01:05.891Z cpu30:65866)WARNING: VisorFSRam: 353: Cannot extend visorfs file /locker/packages/vmtoolsRepo/vmtools/linux.iso because its ramdisk (root) is full.
However, the vdf -h command on ESXi shows enough space in the ESXi directories:
Ramdisk Size Used Available Use% Mounted on
The cause of the above problem is the /locker directory missing its pointer to /store. When ESXi initiates the upgrade process, it loads the VIBs into the locker directory; since the locker directory was not pointing to the store, ESXi failed with the broken pipe and ramdisk full errors.
Example of a bad configuration:
nutanix@CVM:~$ hostssh "ls -altr /locker"
Example of a good configuration:
nutanix@CVM:~$ hostssh 'ls -la / | grep store'
|
1) From the CVM, SSH into the host that does not have the /locker softlink to /store.
nutanix@CVM:~$ ssh root@<IP of host>
2) Make sure you are in root directory.
[root@ESXi:~] cd /
3) Make a backup of the /locker directory.
[root@ESXi:~] mv /locker /locker.old
4) Create the /locker symlink to the /store directory.
[root@ESXi:~] ln -s /store /locker
5) From the CVM, verify the symlink creation.
nutanix@CVM:~$ hostssh "ls -altr /locker"
Example:
nutanix@CVM:~$ hostssh "ls -altr /locker"
6) Retry the upgrade.
|
KB4818
|
Nutanix Move (Data Mover): Migrating VMs having both IDE and SCSI disks
|
This article describes how to migrate VMs having both IDE and SCSI disks using Nutanix Move.
|
If a VM has a combination of IDE and SCSI bus type disks, and the BOOT disk is on the IDE bus type, performance issues might be seen after migrating such VMs to AHV.
Nutanix Move cannot determine the bus type of the disks (whether SCSI or IDE) at the source ESXi. This can lead to performance issues after migrating VMs that have an IDE bus type BOOT disk.
|
If you are observing performance issues after migrating such VMs to AHV, then you need to convert the IDE Bus type BOOT disk to SCSI Bus type disk and then boot the VM again.
Follow KB 8062 http://portal.nutanix.com/kb/8062 to convert IDE Bus type disk to SCSI Bus type disk.
|
KB7785
|
LCM: Pre-check failure "test_catalog_workflows".
|
LCM: Pre-check failure "test_catalog_workflows".
|
During an LCM (Life Cycle Manager) inventory operation, the LCM framework downloads required modules and files from Nutanix servers with the help of the Catalog service.
If the download operation is not successful, the LCM pre-check "test_catalog_workflows" fails.
Operation failed. Reason: Lcm prechecks detected 1 issue that would cause upgrade failures.
NOTE: If the pre-check fails in Prism Central (PC), make sure that there is at least 1 Prism Element (PE) cluster registered to PC. PC relies on the storage from PE to run LCM workflows.
|
Log in to a CVM (Controller VM) via ssh and identity the LCM leader using the following command:
nutanix@cvm$ lcm_leader
SSH to the LCM leader and check lcm_wget.logs:
nutanix@cvm$ less /home/nutanix/data/logs/lcm_wget.log
If you see any issue in that log file, check your connection from the CVM to the Internet and fix any issues found.
Below are some scenarios where you will get the above error message and the solutions to each scenario.
Scenario 1: Pre-check 'test_catalog_workflows' failed: 403 Forbidden or 403 URLBlocked
Log Signature:
The following errors appear in /home/nutanix/data/logs/genesis.out in LCM leader node.
2019-02-02 19:54:41 ERROR lcm_checks.py:61 Failed to create catalog item. Error: Task 7360bd48cff14073ba74783210d4efee did not succeed
Find the LCM leader node using the following command.
nutanix@cvm$ lcm_leader
403 Forbidden error is visible in /home/nutanix/data/logs/lcm_wget.log.
--2019-02-02 20:31:13-- http://download.nutanix.com/lcm/2.1/master_manifest.tgz.sign
Or 403 URLBlocked error is visible in /home/nutanix/data/logs/lcm_wget.log.
--2020-07-20 19:07:42-- http://10.xx.xx.80/2.1.5320/master_manifest.tgz.sign
Solution:
For LCM version < 2.2.3, make sure LCM URL is set to "http://download.nutanix.com/lcm/2.0" from Prism --> Life Cycle Management --> Settings and Perform Inventory operation.For LCM version >= 2.2.3, set source to Nutanix Portal from Prism --> Life Cycle Management --> Settings and Perform Inventory operation.
Scenario 2: Pre-check 'test_catalog_workflows' failed in PC(Prism Central)
Log Signature:
The following errors appear in /home/nutanix/data/logs/genesis.out in LCM leader node (PC). If you have Scale-out Prism Central, run lcm_leader command from one PC to find out the leader.
2019-10-31 08:03:05 WARNING ergon_utils.py:1128 Ecli exited abnormally. Command: /usr/local/nutanix/bin/ecli -o json task.poll 6219dae9-da59-44ef-afa3-44c001a1ff7f timeout=300, result: (-1, '', '')
The following errors appear in catalog.out in LCM leader node (PC). If you have Scale-out Prism Central, find the catalog master as per the instructions from Scenario 3.
2019-10-31 08:12:36 ERROR misc_utils.py:24 Couldn't find a healthy stargate node
The error "Couldn't find a healthy Stargate node" suggests that PC cannot communicate with PE (Prism Element).
Solution:
Run the following command from PC and PE to verify the connectivity.
nutanix@cvm$ ncli multicluster get-cluster-state
Refer to KB-9670 https://portal.nutanix.com/kb/9670 if the connectivity is good and the inventory on PC is still failing.
Scenario 3: Pre-check 'test_catalog_workflows' failed when the catalog could not retrieve the size of master_manifest.tgz.sign
Log Signature:
Find Catalog leader node:
nutanix@cvm$ cd /home/nutanix/cluster/bin/
Sample output:
nutanix@cvm:~/cluster/bin$ ./catalog_leader_printer
The following error appears in /home/nutanix/data/logs/catalog.out of Catalog leader node.
2019-04-02 13:50:39 ERROR download_util.py:138 Could not retrieve the size of image http://download.nutanix.com/lcm/2.0/master_manifest.tgz.sign: Failed to parse file size
Verify that manual download is working fine using wget.
nutanix@cvm:~$ cd ~/tmp
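For example, a hedged sketch downloading the signed manifest referenced in the log signatures above:
nutanix@cvm:~/tmp$ wget http://download.nutanix.com/lcm/2.0/master_manifest.tgz.sign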
Make sure that 100% of the file is downloaded.
If you are using a proxy, export the proxy parameters before you execute the wget command.
nutanix@cvm:~/tmp$ export http_proxy='http://username:password@proxy_server_ip:port'
Run the following command to check for the Content-Length. Possible results are listed below.
Output 1: Content-Length was fetched.
nutanix@cvm$ curl -s -S -f -k -L -I http://download.nutanix.com/lcm/2.0/master_manifest.tgz.sign
Output 2: Content-Length was not fetched.
nutanix@cvm$ curl -s -S -f -k -L -I http://download.nutanix.com/lcm/2.0/master_manifest.tgz.sign
Output 1 shows a successful scenario. Output 2 is generally observed due to environmental issues in the firewall, proxy, intrusion prevention software, etc.
Solution:
Code improvement to handle the above scenario (output 2) was integrated into AOS 5.11, 5.10.5 and later. Upgrade AOS to 5.11, 5.10.5 or later and retry LCM inventory operation.
Scenario 4: Pre-check 'test_catalog_workflows' failed when the proxy username includes FQDN
Check the proxy setting by executing the following command:
nutanix@cvm$ ncli http-proxy ls
In the above example, the username includes FQDN and the proxy server is proxya.somecompany.com.
Log Signature:
The following error appears in /home/nutanix/data/logs/catalog.out of catalog leader node. You need to find the leader as per the instructions from scenario 3.
2018-11-29 09:39:19 ERROR download_util.py:330 Failed to fetch the file size of the file http://download.nutanix.com/lcm/2.0/master_manifest.tgz.sign: 7 (, curl: (7) Failed connect to intra.somecompany.com:0; Operation now in progress
Solution:
This is fixed in 5.5.9.5, 5.10.2, 5.11 and later. Upgrade AOS to 5.5.9.5, 5.10.2, 5.11, or later and retry LCM inventory operation.
OR:
Make sure the proxy username does not contain the domain name. For instance, use "user1" instead of "[email protected]".
Scenario 5: Pre-check 'test_catalog_workflows' failed when the proxy password has a special character
Log Signature:
The following error appears in /home/nutanix/data/logs/catalog.out in catalog leader node. Find the leader node as per the instructions from scenario 3.
2019-02-03 23:45:35 INFO base_task.py:580 Task CatalogItemCreate 60bb188c-6535-4de8-9e44-e1d1af66769b failed with message: File addc4ab2-2ef5-4c3b-810d-7decc797567e does not exist
Solution:
This is fixed in AOS 5.10.2, 5.11, and above. Upgrade AOS to 5.10.2, 5.11, or later and retry LCM inventory operation.
OR:
Do not use any special character in the proxy password.
If you are using LCM version 2.3.0.1 and it fails with the message "Failed to retrieve test catalog item with UUID" when a proxy is configured, refer to KB 8721 https://portal.nutanix.com/kb/8721.
|
KB15024
|
Selective vdisk usage scan timeout after AOS upgrade
|
Selective vdisk usage scans might repetitively fail due to timeout after AOS upgrade to version 6.5.2 and above.
|
PE clusters after upgrade to AOS versions 6.5.2 or 6.6 and above (containing the fix for ENG-422034) are susceptible to an issue described below.
The one-time Selective Vdisk Usage (SVDU) Curator scan is triggered post-upgrade in order to fix the inherited_user_bytes calculation that was causing a discrepancy between actual UVM/VG guest OS usage and the usage reported in Prism. SVDU scans might repetitively fail due to timeout after the AOS upgrade.
Clusters containing dense nodes, and thus having a large metadata size (hundreds of GB to TB), are more likely to experience this issue, as the scan cannot finish scanning a large metadata footprint in a busy cluster within the allotted timeout window.
Example: an SVDU scan took ~4 hours 43 minutes 2 seconds to finish on a 3-node cluster of ~70TB nodes, which is a ~210TB cluster.
Symptoms include:
AOS upgraded recently (days, a few weeks, possibly a month ago) to 6.5.2, 6.6 or above. Curator Selective Scan for Reason "VDiskUsage" times out with status kCancelled in the Curator http:0:2010 service page. Full and Partial Curator scans are not being run, as the system continuously retries and fails at completing the SVDU scan. Alert A1081 - CuratorScanFailure gets triggered due to Full/Partial scans not running in the last 48/24 hours, respectively. Garbage accumulation in the cluster. Performance issues reported by the customer.
Curator log signature:
F20230517 19:09:24.400985Z 25426 curator.cc:1251] QFATAL NFF MapReduce task 71251.r.5.VDiskBlockNumberReduceTask has been running for too long, 10981 secs
|
WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB 1071 https://portal.nutanix.com/kb/1071.
Selective scans have a static timeout threshold of 3 hours. AOS has a dynamic scan timeout for Full and Partial scans, but this feature is not extended to selective scans.
NOTE: The Selective vdisk usage scan is the least important scan. It does not issue any foreground or background tasks; it just updates Arithmos with the user_bytes of all the vdisks. It has to run exactly once after the AOS upgrade, but it can run at any time after the upgrade. There is no urgency to run it immediately after the upgrade.
Workaround:
Increase the Curator selective scan timeout from 3 to 18 hours. Set the value by changing the gflag:
curator_static_selective_scan_max_secs= 64800 (Default 10800)
Wait for the successful completion of one Selective Vdisk Usage scan. To check if any Selective Vdisk Usage scan has successfully completed, we can check that no booleans are true in the output of the following command:
curator_cli features operation=list | grep "update_vdisk_usage_stats"
Reset curator_static_selective_scan_max_secs to the default value of 10800.
Permanent fix:
ENG-565854 https://jira.nutanix.com/browse/ENG-565854 is being fixed in AOS versions 6.5.4 and 6.7. Once they go GA, recommend customers to upgrade.
|
KB12691
|
"NIC link down" alert for Internal USB device interface raised due to LCM inventory
|
The KB provides you troubleshooting steps to understand and verify if you are facing the issue.
|
The NCC health check nic_link_down_check verifies if the configured host NICs have an UP link status; refer to KB 2480 http://portal.nutanix.com/kb/2480 for further details on the check. With NCC versions prior to 4.5.0.2 or 4.6.0, we have observed that an alert or warning message in NCC may be generated after or during the LCM Inventory process as follows:
Link on NIC enp0s20f0u7u2c2 of host x.x.x.x is down. NIC description: None.
Background:
The NIC starting with "enp" as in example:"enp0s20f0u7u2c2" is a virtual USB NIC used by Nutanix Internal processes during LCM Inventory and is not related to actual physical NIC in the host.
Verify:
We expect you to verify if you are observing the above alert due to an LCM Inventory process or if it is an actual NIC down condition.
AHV: How to check the interface status
Open a SSH session to a CVM and run the following command:
nutanix@cvm:~$ allssh "manage_ovs show_interfaces"
The command output shows the interfaces for the local host of each CVM. In the example below, X.X.X.10 is the CVM IP and not the host one. The interfaces listed are the ones for the local CVM's host with IP X.X.X.10.
================== X.X.X.10 =================
For NCC versions prior to 4.5.0.2 or 4.6.0:
[
{
"Interface Name": "enp******",
"Link Status": "False",
"Explanation of the Alert": "The link status is False, therefore you are receiving the alert in your cluster. Please check the solution section of the KB."
},
{
"Interface Name": "enp******",
"Link Status": "True",
"Explanation of the Alert": "If the link status is True, then you need to further evaluate why you are observing the alert. Please engage Nutanix Support."
},
{
"Interface Name": "eth* (Any physical NIC)",
"Link Status": "False",
"Explanation of the Alert": "Please do not follow this KB and refer to KB 2480"
}
]
|
If enp******* USB-NIC link status is False and you receive the nic down alert:
Please upgrade NCC to 4.5.0.2+ or NCC 4.6.0+. The USB interface is ignored in versions 4.5.0.2+ and 4.6.0+ because it is only utilized during specific inventory and upgrade operations, and its state is expected to vary because of that. Additional improvements were put in place to remove the interface from use after an inventory/upgrade operation. If you see the enp******* USB-NIC listed as an interface even after the inventory is completed, this issue is fixed in LCM 2.4.4.1. If you are already on LCM 2.4.4.1 and are still seeing the USB interface after inventories/upgrades, please engage Nutanix Support http://portal.nutanix.com to ensure there is not another problem occurring.
|
KB13098
|
Nutanix Files -- SMBD service crashes may cause TDB database bloat
|
SMBD service crashes there could be tdb database overflow
|
In Nutanix Files 3.8.x and 4.0, when the SMBD service crashes or restarts for any reason, including unplanned host outages, the TDB database can bloat. Some of the symptoms of this issue are the following:
Extremely poor performance on the Nutanix file server. A large amount of lock contention on VDI user profiles / failure to mount user profiles.
There are several different TDB databases and when the issue occurs there can be several different error signatures in the smbd and client logs located in the following path on the FSVM:
/home/log/samba/
After the SMBD service restarts, the client logs will show an increase in the smbXsrv_open_global TDB database. The TDB database is what the open-source Samba library uses to track and maintain client SMB connections. A clean-up of the TDBs is periodically performed using a wipe operation; due to a code defect, the outbounds area is not maintained properly, which causes the bloat in the TDB database.
Log output showing the error signatures:
nutanix@FSVM:~$ zgrep -B1 "tdb_expand overflow detected current map_size" /home/log/samba/clients*.log.* | head -n 2
Additional signatures in the /home/log/samba/smbd.log
home/samba/lock/leases.tdb): tdb_chainwalk_check: circular chain
The following command can check the TDB in question.
nutanix@FSVM:~$ sudo tdbtool /home/samba/cache/smbprofile.tdb check
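Bloat usually also shows up as an unusually large file on disk; as a quick hedged check across all FSVMs (standard ls via the allssh wrapper):
nutanix@FSVM:~$ allssh 'ls -lh /home/samba/cache/smbprofile.tdb'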
|
This issue has been resolved in Nutanix Files 4.0.1 and above. However, if the customer is using a previous version of Files, the current workaround is the following:
allssh 'afs ha.minerva_ha_stop && sleep 60 && cluster start && sleep 60'
This command triggers a restart of all the services on the file server cluster and stops the HA service cluster-wide, causing a 3-5 minute outage for all users connected to the file server.
|
KB15509
|
DRaaS: Redeploying Nutanix On-Prem VPN gateway
|
How to redeploy VPN gateway on the ON-Prem PC/PE
|
During Nutanix DRaaS troubleshooting, we may need to redeploy the On-Prem VPN gateway. If it's only a redeployment with no additional changes, follow the procedure in the solution section.
|
Consult with a DRaaS SME or the Xi-SLE team before following this procedure. Below are the step-by-step instructions.
1. To force the redeployment of a specific version, set the below gflags in the On-Prem PC with the required OCX version and its checksum. Example below for version 5.0.0e67a8a.20220628 on a PC deployed on an AHV cluster:
Note: Skip Step 1 if the gateway just needs to redeploy in the same version as it is running currently.
nutanix@onprempcvm:~$ allssh echo "--vpn_gateway_image_sha256_checksum=37360f26b3097a355c2950a424313cde3a815926a61aa519e0c4ba6b4036480f" >> /home/nutanix/aplos.gflags
Example below for version 5.0.0e67a8a.20220628 on a PC deployed on ESXi cluster:
nutanix@onprempcvm:~$ allssh echo "--vpn_gateway_image_sha256_checksum=37360f26b3097a355c2950a424313cde3a815926a61aa519e0c4ba6b4036480f" >> /home/nutanix/aplos.gflags
Note: If the above method does not work, use KB 1071 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008SsHCAU to make gflag changes for aplos, aplos_engine and atlas.
2. Log in to the On-Prem Prism Central and browse to Network & Security --> Connectivity --> VPN Connections. Note: Please delete the VPN gateway immediately after deleting the VPN connection. This will avoid a conflict with the recreation of the new VPN connection task as set through the gflag in step 1.
3. Select the VPN connection and Delete.
4. Go to "Gateways", select the On-Prem VPN gateway, note down the gateway version, browse it and delete it.
5. Once the gateway is deleted, in a couple of minutes a new Network gateway create task will get triggered automatically, and the VPN gateway will get created in the version defined in the gflag.
6. The VPN gateway is created with the version given in the gflag.
7. Make sure the VPN gateway and the connection are up. Log in to the on-prem PCVM.
nutanix@onprempcvm:~$nuclei vpn_gateway.get "onprem_VPNGW_UUID"
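If the gateway UUID is not known, the gateways can be listed first. This is a sketch assuming the standard nuclei entity listing syntax on the PCVM:
nutanix@onprempcvm:~$ nuclei vpn_gateway.list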
Login to VPN_gateway onprem
nutanix@NTNX-PCVM:~$ ocx-vpn-version
|
KB12410
|
Prism Central fails to send rule-based email alerts to the required recipients.
|
Due to an Out Of Memory crash of the alert_manager service, PCVMs may not send alert emails. This KB documents a workaround until the customer can upgrade to version pc.2021.12 or later, where the issue has been resolved.
|
Alert emails may not be sent with the following conditions even though Prism shows the alerts.
AOS 6.0 or newer, Prism Central (PC) 2021.5 or newer
Rule-based alert emails failing to send from PC
Working through KB-6387 http://portal.nutanix.com/kb/6387 is successful
In `dmesg -T` on the Prism Central VMs, we see the following out of memory reporting: Task in /alert_manager killed as a result of limit of /alert_manager. Also highlighted in the log snippet below.
[Wed Aug 4 10:30:13 2021] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
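A quick way to scan the PCVM kernel log for these OOM kills is a simple grep; this is a minimal sketch using standard tools:
nutanix@PCVM:~$ dmesg -T | grep -i "killed as a result of limit"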
|
Upgrade Prism Central to a fixed version (pc.2021.12 or later) as soon as possible.
|
KB11087
|
Importing Config for Palo Alto Firewall does not retain MAC address
|
Importing Config for Palo Alto Firewall does not retain MAC address
|
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap. Customers can deploy a firewall as a VM in Xi. This VM can monitor and define policies for inbound/outbound traffic on the Xi cloud. Customers hitting this issue will notice that when they import the on-prem Palo Alto (PA) configuration to a PA VM on Xi, the connectivity breaks down and the PA VM is not able to reach any resource on the network. The issue happens because the PA VM's MAC address changes when the config is imported. Symptoms of the issue:
The PA VM cannot ARP for the gateway.
Virsh dump XML for the VM has a different MAC than that seen on the interface (running-config) on the PA VM.
virsh dumpxml PaloAlto300
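To extract just the MAC address from the domain XML for comparison, the output can be filtered (illustrative; "PaloAlto300" is the example VM name used above):
virsh dumpxml PaloAlto300 | grep -i "mac address"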
The MAC address seen on the upstream switch for the PA VM will be different. The MAC can be seen in the switch's CAM table. For example:
7c:c2:56:89:ea:10
|
The MAC address as seen in virsh needs to be retained when the config is imported. To achieve this, the below line should be added to the config file downloaded from the on-prem PA before importing the config to the PA VM on the Xi side.
<auto-mac-detect>yes</auto-mac-detect>
|
KB11592
|
Alert - A130218 - VgAutonomousReplicationDelayed
|
Investigating VgAutonomousReplicationDelayed issues on a Nutanix cluster
|
This Nutanix article provides the information required for troubleshooting the alert VgAutonomousReplicationDelayed for your Nutanix cluster.Alert Overview
The alert VgAutonomousReplicationDelayed is generated when the replication of the nearsync recovery point is lagging.
Sample Alert
Block Serial Number: XXXXXXXXXXXX
Output messaging
[
{
"Check ID": "Replication of the nearsync recovery point is lagging."
},
{
"Check ID": "Replication of the nearsync recovery point is slow."
},
{
"Check ID": "Ensure that the number of entities are within the supported limit."
},
{
"Check ID": "Configured RPO to the remote site might get affected."
},
{
"Check ID": "A130218"
},
{
"Check ID": "Nearsync Replication is lagging for recovery point"
},
{
"Check ID": "Nearsync replication for {volume_group_name} to availability zone {availability_zone_physical_name} is lagging by {replication_lagging_seconds} seconds."
}
]
|
Troubleshooting and Resolving the issue
The system retries every few minutes to meet the RPO. VgAutonomousReplicationDelayed alerts are auto-resolved once the replication catches up. Heavy workload, network congestion, or any other prevailing network issues in the environment could cause the NearSync replication delay alerts. If the issue persists even after making sure that there are no network-related issues and that there is enough bandwidth between the sites, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com. Collect additional information and attach it to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, refer KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC health check bundle via cli using the following command.
nutanix@cvm$ ncc health_checks run_all
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, refer KB 2871 https://portal.nutanix.com/kb/2871.
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
Requesting Assistance
If you need assistance from Nutanix Support, add a comment to the case on the support portal asking for Nutanix Support to contact you. You can also contact the Support Team by calling on one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/.
Closing the Case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.
|
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""esxcli software vib update --depot /vmfs/volumes///zipfile.zip\t\t\tOr:\t\t\tesxcli software vib update -d /vmfs/volumes///zipfile.zip""
| null | null | null | null |
KB3284
|
RMA: Return Instructions (EMEA)
|
This article shows the printout sent to the customer when a replacement part is requested.
|
Below is the return instruction letter provided with a replacement part for customers in EMEA region. Nutanix is using DHL for shipments within the European Union. The below instructions are valid only for the European Union. For countries outside of the EU, local couriers might be used with different return instructions. Reach out to [email protected] or to Nutanix Support http://portal.nutanix.com if you need assistance with the return.
|
Instructions for returning the failed device Dear Customer, You have received a replacement part. Please return the defective unit within 1 week from today.This shipment contains replacement parts provided under Nutanix’s Advance Replacement Service. Under the terms and conditions of the RMA policy, the replaced parts must be returned to us within 1 week. To ensure a rapid return, please follow the instructions below to return the defective product to our consolidation point in Amsterdam:
Use the packaging in which the good part was received. Place the return/defective part in the antistatic bag (ESD if available) inside the box that the replacement parts arrived in. Please ensure that the part goes into the box marked with the same part number.
Remove old Address Labels or Air Waybills from the box. Please do not return any other products besides the one covered by the RMA you received for this product.
Please write the Nutanix RMA/Case number on the outside of the box.
Please update the fields below and place this sheet in the box with the defective part.
RMA/Case number: _________________________________ Part number: _________________________________ Serial Number: _________________________________ Your Company Name: _________________________________ Address: _________________________________ Postal Code: _________________________________
Seal the box for safe shipping. If you have a regular daily pickup, please give the package to the DHL driver. Otherwise, please call the local DHL office to schedule a collection. Please reference the following account number when arranging transportation: 952501390. If your country is not listed below and is in the EU, please go to www.DHL.com http://www.DHL.com and select your country to find the local contact. Alternatively, you may contact us at [email protected]
Once the pick-up date is confirmed, please have your material ready for pick-up during normal business hours on the booked date at the reception of the pickup address.
Important Note:
The collection will not be made if there is any access control to the building.
Failing to place the RMA/Case # on the carton may cause a futile pick-up, as the item is identified by Case #.
[
  {
    "COUNTRY": "FRANCE",
    "CONTACT NUMBER": "0820202525"
  },
  {
    "COUNTRY": "NETHERLANDS",
    "CONTACT NUMBER": "0880552000"
  },
  {
    "COUNTRY": "GERMANY",
    "CONTACT NUMBER": "01806345300"
  },
  {
    "COUNTRY": "SPAIN",
    "CONTACT NUMBER": "0902122424"
  },
  {
    "COUNTRY": "IRELAND",
    "CONTACT NUMBER": "01890725725"
  },
  {
    "COUNTRY": "SWITZERLAND",
    "CONTACT NUMBER": "0848711711"
  },
  {
    "COUNTRY": "ITALY",
    "CONTACT NUMBER": "199199345"
  },
  {
    "COUNTRY": "UNITED KINGDOM",
    "CONTACT NUMBER": "08442480844"
  }
]
|
KB12355
|
Converting CentOS servers to Red Hat Enterprise Linux 7 or 8 using the "convert2rhel" tool
|
Convert2RHEL is a Red Hat-supported utility for conversion of certain Red Hat Enterprise Linux-derivative distributions into a supportable Red Hat Enterprise Linux system.
|
Convert2RHEL is a tool officially supported by Red Hat for converting certain Red Hat Enterprise Linux (RHEL) derivative distributions, such as CentOS 7 and 8, into a supportable RHEL system. While technical support for the Convert2RHEL tool is provided by Red Hat, Nutanix Support may encounter customers that have used this tool to convert a supported Linux distribution into a RHEL system, or Nutanix Support may need to use this tool as part of a reproduction effort to troubleshoot underlying issues on a Nutanix platform. This article will provide high-level, example steps on how to use the Convert2RHEL tool to convert a CentOS VM to RHEL.Note that the steps in this article are for reference only. See this Red Hat blog post https://www.redhat.com/en/blog/introduction-convert2rhel-now-officially-supported-convert-rhel-systems-rhel and the official Red Hat documentation https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/converting_from_an_rpm-based_linux_distribution_to_rhel/index for additional information.
|
Download the Red Hat GPG key to the target VM:
[user@centos ~]# curl -o /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release https://www.redhat.com/security/data/fd431d51.txt
Install the Convert2RHEL repository, replacing VERSION_NUMBER with the appropriate major version of the OS, for example 7 or 8.
[user@centos ~]# curl -o /etc/yum.repos.d/convert2rhel.repo https://ftp.redhat.com/redhat/convert2rhel/VERSION_NUMBER/convert2rhel.repo
Install the Convert2RHEL utility. Then run it with -h to see all options
[user@centos ~]# yum -y install convert2rhel
Run Convert2RHEL to begin the conversion process and automatically register to Subscription Manager. The -y option can be used after testing to auto-answer "yes" for known, tested scenarios. Additional capabilities such as using Activation Keys instead of sensitive credentials are explained in the documentation https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/converting_from_an_rpm-based_linux_distribution_to_rhel/index.
[user@centos ~]# convert2rhel --auto-attach --username=<USERNAME> --password=<PASSWORD>
Once the conversion succeeds, a reboot is required to start the system as a Red Hat Enterprise Linux system. Note that it may be necessary to re-install third-party RPMs or otherwise re-configure some system services after conversion.
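After the reboot, a simple way to confirm the system now identifies as Red Hat Enterprise Linux is to check the release file (a minimal sketch):
[user@rhel ~]# cat /etc/redhat-release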
|
KB12374
|
Nutanix VirtIO or NGT installation/upgrade may fail due to large amount of Windows VSS snapshots present in a VM
|
Nutanix VirtIO or Nutanix NGT installation or upgrade may fail if a large number of Windows VSS snapshots is present inside a guest VM
|
Installation or upgrade of Nutanix VirtIO or Nutanix Guest Tools may fail during installation of the "Nutanix VirtIO SCSI pass-through controller" driver if a large number of Windows VSS snapshots is present in a guest VM with large data disks.
In Windows SetupAPI log https://learn.microsoft.com/en-us/windows-hardware/drivers/install/setupapi-logging--windows-vista-and-later- (C:\Windows\INF\setupapi.dev.log), the error that Windows timed out waiting for completion of PnP query-remove device can be seen:
dvi: Device Status: 0x0180400a, Problem: 0x0 (0x00000000)
In Nutanix VirtIO (%temp%\MSIxxxx.log) or Guest Tools (%temp%\Nutanix_Guest_Tools_<timestamp>_<number>_NutanixVirtIO.log or %temp%\Nutanix_Guest_Tools_<timestamp>_<number>_NutanixVmMobility.log) installer, the below error can be seen:
InstallScsiDriver: InstallDriverForDevice: Device ID is PCI\VEN_1AF4&DEV_1004&SUBSYS_00081AF4&REV_00
Another error in the same log file may look as below (with symptoms including NGT installation hanging on "Processing: Nutanix VM Mobility"):
DIFXAPP: INFO: ENTER: DriverPackageInstallW DIFXAPP: INFO: Installing INF file 'C:\Program Files\Nutanix\VirtIO\vioscsi\vioscsi.inf' (Plug and Play).
There are many VSS snapshots present in the affected VM. To check the number of VSS snapshots and how much space they are using, you can use the below commands executed from an administrative command prompt:
C:\> vssadmin list shadowstorage
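To enumerate the individual snapshots as well, the standard listing command can be used:
C:\> vssadmin list shadows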
|
During the installation of the "Nutanix SCSI pass-through controller" driver, Windows tries to remove the SCSI controller to update the driver. When the controller is removed, there are two items that cause a delay:
Dismounting VSS snapshots.Validating their differential areas.
In order to work around the issue, clean up the VSS snapshots or temporarily disconnect the drive containing the snapshots. Once completed, retry the Nutanix VirtIO or Guest Tools installation or upgrade. To delete the VSS snapshots, you may use the "vssadmin" tool. It allows deleting either all the snapshots, only the oldest one, or a specific snapshot.
Note: Before proceeding with the deletion of the VSS snapshots, make sure they are no longer needed. Examples of the "vssadmin" commands:
To delete the oldest VSS snapshot of C:\ drive:
C:\> vssadmin delete shadows /for=c: /oldest
To delete all the snapshots
C:\> vssadmin delete shadows /all
To delete the specific snapshot
C:\> vssadmin delete shadows /shadow=<ShadowID>
Note: Windows 2012 R2 VMs may experience an error message when attempting to use vssadmin delete commands.
Error: Snapshots were found, but they were outside your allowed context. Try removing them with the backup application which created them.
Steps to clear snapshots:
Open Command Prompt as administrator.
Enter the wmic:root\cli shell:
C:\Users\Administrator> wmic
Type in shadowcopy to list the current shadow copies.
wmic:root\cli> shadowcopy
Type in shadowcopy delete and confirm to delete the copies one after the other.
wmic:root\cli> shadowcopy delete
To confirm the snapshots have been successfully removed type in shadowcopy.
wmic:root\cli> shadowcopy
Reattempt the NGT installation/upgrade.
|
KB10450
|
LCM Troubleshooting: Dark site direct upload (no webserver)
|
The KB provides you different scenarios where you can observe a failure message when using the "Dark site direct upload (no webserver) method in Prism Element's Life Cycle Manager.
|
General Information and Limitations:
Please refer to the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_6:Life-Cycle-Manager-Dark-Site-Guide-v2_6 for understanding the Direct upload method.
Currently, LCM only supports direct upload for Prism Element. For Prism Central, use a local web server.
The LCM framework version should be LCM-2.4.1.1 or later.
For components which cannot be updated via this method, we suggest continuing to use the Local Web Server procedure.
We can only upload one bundle at a time.
Please note: LCM bundles that are supported for direct upload use the filename format lcm_component_version.tar.gz instead of the older format lcm_darksite_component_version.tar.gz.
After LCM-2.6, you can upload two types of bundles:
LCM Bundle: lcm_<component_version>.tar.gz provided by Nutanix in Nutanix Portal Download page https://portal.nutanix.com/page/downloads/list3rd Party Bundle: In LCM-2.6, we are supporting ESXi images to be uploaded with Metadata json - please refer Fetching the ESX Bundle with Direct Upload https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_6:top-lcm-darksite-esx-direct-upload-t.html
3rd Party Image (e.g. .zip): The image file can be downloaded from the respective distributors.
Metadata (.json): The metadata file can be downloaded from the Nutanix Portal Download page https://portal.nutanix.com/page/downloads
Schema of Metadata json:
To create a custom metadata.json: Please refer KB-15919 http://portal.nutanix.com/kb/15919
|
3rd Party Bundle:
Please note: For ESXi upload
Supported
ESXi HCI and CO nodes
Secure boot
Minimum ESXi version is 6.7
Not supported for LCM-2.6
Non-NX platforms
Error message:
10001: 500: Uploaded metadata file validation failed [3rd party upgrades for non-NX platforms are not supported in LCM]
We have added metadata json validation guardrails to prevent an incorrect metadata json from being uploaded.
Guardrail 1: Different zip file with incorrect metadata json (name does not match)
The filename does not match with the name specified in the metadata file.
Please make sure the name of the uploaded 3rd party image and the metadata json "name" match.
This guardrail will fail when you add the metadata file.
Guardrail 2: Checksum type error
u'<checksum_type>' is not one of ['shasum', 'hex_md5']
We are expecting shasum or hex_md5.
This guardrail will fail when you add the metadata file.
Guardrail 3: Checksum error
Operation Failed. Reason: 10001: 500: Incorrect image uploaded. No image with checksum 4e1ca8c0b74408eb322f86b61025ae2a could be found in metadata json
Please verify the md5 or shasum value from the actual ESXi image.
Clean the browser cache or use an incognito window to verify whether it is a browser cache issue.
This guardrail will fail once the entire upload operation is completed.
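To compute the checksum of the downloaded bundle before uploading, standard tools can be used on a Linux workstation (the filename below is illustrative):
$ md5sum VMware-ESXi-7.0U3-depot.zip
$ sha256sum VMware-ESXi-7.0U3-depot.zip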
Guardrail 4: Size error
This file does not match with the size specified in the metadata file.
Please make sure to verify the size mentioned in the metadata json against the 3rd party bundle size.
This guardrail will fail when you add the metadata file.
Do not use Google or other tools to convert to bytes. On the VMware/Broadcom website, the ESXi ZIP file size is specified in MB. Use File Properties in Windows or macOS to check the file size in bytes.
Guardrail 5: Uploading Unqualified ESXi version
10001: 500: Uploaded metadata file validation failed [Entity: ESX_NX hypervisor-7.0.3-21053776 is unqualified but has been incorrectly passed as qualified]
The above indicates that you have incorrectly marked the metadata content as qualified: true. Please change the metadata json's field:
"qualified": false,
LCM Bundle:
General failures occur when you upload an incorrect bundle, so always verify which LCM bundle you are uploading, because the failure will only be shown once the bundle is completely uploaded, which might take time depending on your environment.
Scenario 1: If you are uploading LCM framework Bundle version which the cluster is running.
Example:
The cluster is running LCM-2.4.1.1 and you upload LCM Framework Bundle (Version: 2.4.1.1)
Error sample:
Operation failed. Reason: Failed to process bundle of type lcm. Master manifest of the bundle to be uploaded is same as the current manifest in catalog. Aborting upload.
Solution:
Please upload a newer LCM Framework Bundle compared to your cluster's LCM version.
If you are already running the latest LCM version, please skip uploading the LCM Framework Bundle.
Scenario 2: If you are uploading a component's LCM bundle which is still not compatible with the Direct Upload Method.
Example:
The Cluster Maintenance Utilities (CMU) LCM Bundle is not compatible with the Direct Upload Method; it can be used only with the Local Web Server Method.
If you upload the lcm_darksite_cmu-builds_2.0.3.tar.gz file, then you will get the following error.
Error sample:
Operation failed. Reason: Failed to process bundle of type image. Failed to process image bundle with error Key: entity_class is missing from the bundle. Please ensure that only LCM uploadable bundles are being uploaded. Please refer KB 10450
Solution:
Please use the Local Web Server Method to upgrade the component which cannot be upgraded via the Direct Upload Method.
Scenario 3: If you are uploading "Upgrade Software" upgrade bundle or any Non-LCM tar file (Task stuck at 50%)
Example:
Filename: "nutanix_installer_package-release-euphrates-5.10.11.1-stable-x86_64.tar.gz" is not applicable to LCM instead this is used for uploading the binary in Upgrade Software.You will find it to be stuck at 50% with the following message:
Waiting for pending task to finish
Solution:
Please engage Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/ to cancel the task.
Scenario 4: If you logout from prism element session then the upload task might fail.
Example:
Uploading a large file will take time, and if your Prism session expires midway through the upload task, then you will receive the below error message.
Error sample:
Prism session expired while uploading the bundle. Please refer KB 10450
Solution:
Start the upload process again and keep the Prism session active.
Scenario 5: Upload takes long time to complete and you want to cancel the upload task
The upload task duration does depend on your environment and file size.
Example:
A large file size and slow network speed might cause the upload task to take a longer time to complete.
Solution:
If the task completion is below 50%:
Log out of Prism Element and log back in.
The LCM upload task will fail with the following error message:
Prism session expired while uploading the bundle. Please refer KB 10450
You can resume the upload process again.
If the task completion is stuck at 50%:
Refer - Scenario 3 solution.
|
KB7732
|
Unable to join cluster to Prism Central: "Prism Central is unreachable"
|
When trying to add PE to PC, it may fail with the error "Prism Central is unreachable" despite no network connectivity issue. This article explains how to resolve the issue.
|
Scenario 1
Joining Prism Element (PE) cluster to Prism Central (PC) may fail with the error message:
Prism Central is unreachable
Network connectivity between PE and PC is fine, which can be checked using the below commands:
Check if port 9440 is open between PE and PC using the command:
nutanix@cvm$ nc <prism_central_ip_address> 9440 -vv
Check SSH connectivity from PE to PC using the command:
nutanix@cvm$ ssh <prism_central_ip_address>
Prism gateway logs (/home/nutanix/data/logs/prism_gateway.log on the Prism leader https://portal.nutanix.com/page/documents/details?targetId=Advanced-Admin-AOS:tro-prism-log-entries-c.html) show below error message:
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
Scenario 2
When connecting to the Prism Element (PE) cluster from Prism Central (PC) Quick Access Link, the following error message can be seen in PE:
Unable to get Prism Central recovery information.
While the Prism Central (PC) Quick Access widget in Prism Element (PE) displays Disconnected:
These issues may occur when a proxy is configured in the environment.
|
Whitelists need to be configured on both Prism Central and its managed clusters. Use the ncli http-proxy add-to-whitelist command to bypass a proxy server used by a managed Prism Element cluster and allow network traffic between Prism Central and the cluster. Previously, if you attempted to register a cluster that implemented a proxy server, the registration failed.
Open an SSH session to any Controller VM (CVM) in the cluster to be managed by Prism Central.
In this example, add the Prism Central VM IP address to the whitelist, then verify the Prism Central VM IP address was added to the whitelist:
nutanix@cvm$ ncli http-proxy add-to-whitelist target-type=ipv4_address target=<PCVM IPv4 Address>
Open an SSH session to the Prism Central VM managing the cluster where you just modified the HTTP whitelist.
Add the cluster virtual IP address to the whitelist, then verify the IP address was added to the whitelist:
nutanix@PCVM$ ncli http-proxy add-to-whitelist target-type=ipv4_address target=<CVM VIP IPv4 Address>
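To confirm the entries on either side, the configured whitelist can be displayed; this assumes the get-whitelist subcommand available in recent AOS releases:
nutanix@cvm$ ncli http-proxy get-whitelist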
In this case, Prism Central and its managed cluster can communicate, with network traffic bypassing any proxy servers configured in the cluster.
|
KB4195
|
Nutanix Files - Delete File Server Forcefully
|
Deleting the file server forcefully if a graceful way is not working.
|
In order to delete the File Server instance, Nutanix recommends that you follow the steps updated in the Nutanix Files Setup Guide. https://portal.nutanix.com/page/documents/details?targetId=Files-v4_4:fil-file-server-delete-t.htmlThere can be instances when graceful removal of the file server is not working and you may see the following error. This may happen when the file server is not available and has been deleted:
nutanix@CVM:~$ ncli fs delete uuid=xxxxx-xxxxx-xxxxx
The second type of error you may see during removal, shown below, occurs if there are DR instances of the File Server which have orphaned VMs referencing that container:
Error: Storage container DR_TST-AFS-00 is mounted on hosts with ids 10.xx.xx.111,10.xx.xx.112,10.xx.xx.113
|
1. Get File Server info for later commands (UUID/Name).
nutanix@CVM:~$ ncli fs ls
2. Delete File Protection Domain via Prism.
Data Protection > Async DR > NTNX-<Files server>
Make sure the 'Schedules' for the Protection Domain are deleted, otherwise the above task will fail. To delete the Schedules, select the Files Protection Domain » go to the 'Schedules' tab in the details section below » Delete the Schedules. Now you should be able to delete the Protection Domain.
3. Delete the file server permanently from the Minerva Database.
nutanix@CVM:~$ afs infra.force_fileserver_delete <Name of File Server>
4. Verify if the File Server container has data:
nutanix@CVM:~$ nfs_ls /<Files Server Name>_ctr
5. If there are files in the File Server's container, delete them.
Warning: make sure you have verified that the files from the container can be safely deleted. Consult with a Sr. SRE or STL if you are not sure.
There can be files/vdisks from some VMs/VGs/images allocated from the Files container by mistake, or it may be receiving snapshots from the remote cluster. An example of an analysis approach to understand which entity on the cluster owns files from the container can be found in KB-1659 https://portal.nutanix.com/kb/1659.
While deleting the container, we might run into an issue where the container deletion errors out with "Error: Container <name> contains small NFS files." Refer to KB-2532 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA032000000TTHKCA4 to delete the container.
In addition to KB-1659 https://portal.nutanix.com/kb/1659, if the customer is using Leap, then Recovery Points of the FSVMs must be deleted from PC as well; see TH-7393 https://jira.nutanix.com/browse/TH-7393?focusedCommentId=3821239&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-3821239 for more details.
Here is a list of possible issues that can occur if files/disks are removed when they are still in use by the cluster:
User VMs may stop working if live vdisks corresponding to the VM's disks were marked for removal (this means data loss).
REST API calls may stop working. For example, Veeam tries to list all VMs with disk details during the backup job ("/PrismGateway/services/rest/v2.0/vms/?include_vm_disk_config=true"); this API call may fail due to a VM with disk(s) from the deleted container, and Veeam will not be able to back up the VMs from this cluster.
The AHV image service could crash or be impacted if a manually marked-for-removal vdisk was owned by an image. For additional details about the possible impact for AHV images, see scenarios 3A, 3B, and 8 in KB 6142 https://portal.nutanix.com/kb/6142.
Snapshots associated with the vdisk could be lost or impacted.
When you confirmed that files on from container are some kind of leftover and they can be safely removed - you can proceed with steps below:
For ESXi based clusters, via VCenter:
Storage view > click Files Server's datastore > click browse file icon.
Delete the files listed in the nfs_ls command.
AHV based clusters:
Add your computer's IP address to the cluster's whitelist https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_18:wc-system-filesystem-whitelists-wc-t.html.
Use WinSCP or FileZilla to SCP/SFTP to the cluster's VIP using Prism's admin credentials and port 2222.
Navigate to the File Server's container and delete the files listed in the nfs_ls command.
6. If the hypervisor is VMware, unmount the File Server container via Prism.
Storage > click File Server container > Update
7. Delete the File Server container.
nutanix@CVM:~$ ncli container remove name=<File Server's ctr> ignore-small-files=true force=true
8. Verify that the File Server no longer exists in Prism's "File Server" section.9. Verify that the File Server details no longer exist.
nutanix@CVM:~$ ncli fs ls
|
KB7055
|
Nutanix Files - Sticky Bit Behavior on NFS distributed exports
|
This article describes changes to sticky bits introduced in Nutanix Files 3.5.
|
The sticky bit is a permissions flag that is mainly used on folders. When a folder's sticky bit is set, users can create files and directories in the folder but they can only delete their own files and directories. Without the sticky bit set, any user can delete any other user's files and directories. This is usually set for common directories like /tmp in Unix.
To set the sticky bit:
nutanix@FSVM$ sudo chmod +t <share_root>
To unset it:
nutanix@FSVM$ sudo chmod -t <share_root>
This feature introduces a behaviour change from previous Nutanix Files versions. If you want to delete a top-level directory (TLD) on a multi-protocol home share, the procedure now depends on whether or not the sticky bit is set on the home share.
|
Below are the new steps that need to be followed when deleting a top-level directory (TLD) on a home share where SMB is the primary protocol and NFS the secondary:
First of all, verify whether the sticky bit is set or not on the home share root. This can be done by running:
nutanix@FSVM$ stat <share_root>
Or :
nutanix@FSVM$ ls -ld <share_root>
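For reference, a directory with the sticky bit set shows a trailing "t" in its mode bits; illustrative output for a typical /tmp:
nutanix@FSVM$ ls -ld /tmp
drwxrwxrwt 10 root root 4096 Jan 1 00:00 /tmp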
If the sticky bit is not set, run these commands one-by-one to delete the TLD:
nutanix@FSVM$ rm -rf <TLD>
If the sticky bit is set:
There are now two options to delete the TLD:
Unset the sticky bit on the share, delete the TLD, and then set the sticky bit back:
nutanix@FSVM$ chmod -t <share_root>
Or, run "ls -l <share_root>" after un-mounting the TLD:
nutanix@FSVM$ rm -rf <TLD>
|
KB7287
|
alert_manager keeps crashing with group_result.raw_results_size() == 1 (2 vs. 1)
|
The Alert Manager service keeps crashing with the message group_result.raw_results_size() == 1 (2 vs. 1) in the FATAL logs.
|
Customers might complain that they are receiving alerts of cluster services crashing repeatedly because alert_manager is crashing.
When we run "ncli alert ls", we do not see any alerts:
nutanix@cvm$ ncli alert ls
When checking the FATAL logs, we see that it failed expecting to receive 1 value but received 2 from insights.
nutanix@cvm$ allssh "cat data/logs/alert_*.FATAL"
alert_manager.out on all the nodes will show that there are missing fields for random checks:
E0322 17:43:39.094504 13680 alert_metadata_manager.cc:405] Missing required field resolution_list for id: A111059
|
According to ENG-140439 https://jira.nutanix.com/browse/ENG-140439, this was resolved using the below workaround:
Run allssh pkill alert_manager
nutanix@cvm$ allssh pkill alert_manager
On one CVM, run the following command:
nutanix@cvm$ ~/bin/alert_manager --clear_leader_service_state
After 10 minutes, kill the alert manager process and run "cluster start"
nutanix@cvm$ cluster start
In a case where alert_manager is crashing again after a user action such as updating the e-mail address configured for the alerts, please proceed with the below section.
Make sure that no actual NCC checks are causing the issue. Consider running the NCC checks, resolving the warnings or failures, and restarting alert_manager to see if that stops alert_manager from crashing.
If alert_manager is still not stable even after resolving the NCC-related issues, follow the below steps:
Bring the alert_manager service down across all the nodes:
nutanix@cvm$ allssh pkill alert_manager
Download the script to one of the CVMs at `/home/nutanix/serviceability/bin`
nutanix@cvm$ cd /home/nutanix/serviceability/bin
Script URL:
https://download.nutanix.com/kbattachments/7287/clear_duplicate_alert_notification_policy_v101.py https://download.nutanix.com/kbattachments/7287/clear_duplicate_alert_notification_policy_v101.py
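If the CVM has outbound internet access, the script can be fetched directly (a sketch; otherwise download it on a workstation and copy it to the CVM over SCP):
nutanix@cvm$ wget https://download.nutanix.com/kbattachments/7287/clear_duplicate_alert_notification_policy_v101.py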
Run the Script
nutanix@cvm$ python clear_duplicate_alert_notification_policy_v101.py
Record the output of the script, as it shows when the duplicate entry was created and confirms successful deletion of the entry (this will help in performing RCA). On one CVM, run:
nutanix@cvm$ ~/bin/alert_manager --clear_leader_service_state
After 10 minutes, run:
nutanix@cvm$ cluster start
Take note of the EPOCH timestamp from the script and convert to get the exact time and collect logs. Consider reaching out to the STL to help with the RCA.
|
KB7247
|
PROBLEM: ssd_repair_status shows "Unable to transfer svmrescue image to host x.x.x.x"
|
The customer was trying to upgrade the SSD capacity of single-SSD nodes using "Hardware Replacement Documentation" > "SSD Boot Drive Failure (Single SSD Platforms)", but it fails after clicking the "Repair Disk" option in Prism.
|
NOTE: Also mind ISB-117, since firmware S670330N still contains defects. If the customer is running a version older than S740305N, recommend upgrading.
The ssd_repair_status shows "Unable to transfer svmrescue image to host x.x.x.x".
nutanix@cvm$ ssd_repair_status
The genesis leader log genesis.out shows that the ISO transfer is marked as failed after 10 minutes.
2019-03-04 20:45:15 INFO ssd_breakfix.py:301 Repair CVM bootdisk SM: starting rescue image transfer
A bandwidth test using iperf between two CVMs shows that the network is good.
nutanix@cvm-source$ iperf -c <cvm-destination-ip> -f MB -n 10GB
Checked SATA DOM firmware and found that it is at version S560301N.
[root@ESXI-HOST]# /smartctl -d sat -A -i /dev/disks/t10.ATA_____SATADOM2DSL_3IE3_V20000000000000000000000000000000000000000 | grep ^Firmware
|
Recover the CVM
Create a CVM rescue ISO using this https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Create+and+Mount procedure.
Repair the CVM using this https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Repairing+a+CVM procedure.
Root cause
Due to a firmware bug in S560301N, the trim operation did not work properly on the SATA DOM, which causes write operations to fail after running out of free blocks.
Even upgrading to firmware S670330N will not immediately fix the problem, as the trim operation will only happen during writes.
Go ahead with TRIM operations first (KB 7321 http://portal.nutanix.com/kb/7321), which should be effective and relatively quick.
If TRIM submission fails to remediate the free block count, then proceed with secure erase and reinstall, which is explained below.
Secure erase and reinstall is guaranteed to return the free blocks to a good state.
Secure Erase and Re-Install Hypervisor
Download centos6_8_satadom_S670330N.iso
Download the Phoenix ISO https://portal.nutanix.com/page/downloads?product=phoenix and keep it handy.
Upgrade the SATA DOM firmware manually following the documentation here https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Ref-AOS-v5_10:bre-satadom-manual-update-t.html
Re-install the Hypervisor
Boot the system using the Phoenix ISO and select "Reconfigure Hypervisor"
Perform post network configuration and add the node back to the cluster
|
KB6141
|
Stargate fails to initialize cluster-wide due to unresponsive Pithos leader
|
A Pithos leader could be elected but due to a software defect it doesn't become fully functional. This results in a cluster-wide outage since Stargate can't initialize.
|
Whenever the current Pithos leader restarts, a new leader will be elected. During ONCALL-4555, we encountered a situation where a new Pithos leader was elected which ended up not being fully functional. This means that all other Pithos processes in the cluster assumed that Pithos on CVM X was the leader, but Pithos on CVM X itself was not initialized as leader. As a result, since Pithos is not available, Stargate will crash and fail to start again. This will lead to a cluster-wide outage.
Symptoms:
Stargate will FATAL every 60 seconds on each CVM in the cluster with the following error:
F0912 16:46:09.651286 17857 stargate.cc:1323] Watch dog fired: stuck during initialization
Connection with links to local Pithos (port 2016) is functional (unless you're on the Pithos leader)
However, attempting to follow the link to Pithos leader will not take you to Pithos leader. The connection will just "hang".
After SSHing to the CVM which presumably has the Pithos leader, executing:
links http:0:2016
Results in a "request sent" status in links and a blank page.
Looking at pithos.out logs on the CVM, you will only have entries for Pithos startup but no further logging:
2018-09-12 15:23:27,358:15084(0x7fe0ebcab940):ZOO_INFO@log_env@918: Client environment:zookeeper.version=zookeeper C client 3.4.3
pithos.INFO on Pithos leader CVM will have the following entry:
I0912 15:23:27.423413 15089 function_disabler.cc:180] Waiting for 1 actively running callbacks to finish at file=/home/hudsonb/workspace/workspace/OSLAB_euphrates-5.1.5-stable-release/pithos/server/pithos_server_watch_op.cc line=41
|
When the above conditions are met, follow these steps to stop Pithos so another CVM in the cluster can assume Pithos leader role:
Collect threads output from Pithos process on the CVM which has the Pithos leader (find this IP by following step 2 in the Description section and look for the leader Handle IP)
links --dump http://0:2016/h/threads
Example output:
nutanix@cvm:~$ links --dump http://0:2016/h/threads
In case this step does not produce any output, CTRL-C the operation and go to step 2.
Run the following command on the CVM which has the Pithos leader (find this Master Handle IP by following step 2 in the Description section):
pstack <pithos_PID>
You can obtain the Pithos PID by looking at the pithos.INFO logs and find the Pithos child PID here:
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
Expected output:
nutanix@cvm:~$ pstack 15084
Stop the Pithos process on this CVM (which has Pithos leader) :
genesis stop pithos; cluster start
You can verify that Pithos leader is fully functional again by:
links http://0:2016 - should have a link to Pithos leader and you can successfully navigate to Pithos leaderlinks http://0:2009 - indicates all Stargates (one instance per CVM) are up and running
|
KB1592
|
NCC Health Check: cassandra_log_crash_check
|
The NCC health check cassandra_log_crash_check checks for Cassandra restarts and reports a FAIL status if there were two or more restarts in the last 30 minutes or a WARNING status if there was one restart in the last 5 minutes.
|
The NCC health check cassandra_log_crash_check verifies if the Cassandra service crashed or restarted recently.
This check results in a FAIL status if Cassandra restarted 5 or more times in the last 30 minutes.
This check results in a WARN status if Cassandra restarted 3 or more times in the last 5 minutes.
NOTE: Thresholds above exclude graceful Cassandra restarts.
The Cassandra logs are located at /home/nutanix/data/logs/cassandra. The most recent file is named system.log. When the file reaches a certain size, it will roll over to a sequentially numbered file (for example, system.log.1, system.log.2, and so on).
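To see the current system.log and its rollover files, a simple listing works (a minimal sketch):
nutanix@cvm$ ls -ltr /home/nutanix/data/logs/cassandra/system.log*
Running the NCC Check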
You can run this check on CVM as part of the complete NCC Health Checks:
ncc health_checks run_all
Or you can run this check individually:
ncc health_checks cassandra_checks cassandra_log_crash_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 5 minutes, by default.
This check will generate a critical alert A21010 after 1 failure across scheduled intervals.
Sample output
For Status: PASS
Running /health_checks/cassandra_checks/cassandra_log_crash_check on all nodes [ PASS ]
For Status: WARN
Running /health_checks/cassandra_checks/cassandra_log_crash_check on all nodes [ WARN ]
For Status: FAIL
Running /health_checks/cassandra_checks/cassandra_log_crash_check on all nodes [ FAIL ]
Output messaging
[
{
"Check ID": "Check if Cassandra service is crashing continuously."
},
{
"Check ID": "Cassandra service has restarted frequently in the last 30 minutes."
},
{
"Check ID": "Contact Nutanix support."
},
{
"Check ID": "Data resiliency is compromised."
},
{
"Check ID": "A21010"
},
{
"Check ID": "Cassandra service crashed"
},
{
"Check ID": "Cassandra service has recently crashed. Contact Nutanix support for assistance."
}
]
|
If the NCC check cassandra_log_crash_check returns a FAIL, further investigation is required to determine the reason for the crash.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
Use below two steps to gather the most relevant logs from the cluster:
nutanix@cvm$ logbay collect -t=cassandra,sysstats --duration=-24h -s=<list of CVM_IPs where crash is occuring(comma-separated if more than one)>
Upload the bundle stored in the location below, or as shown in the Logbay execution above.
/home/nutanix/data/logbay/bundles/
Attaching Files to the Case
When viewing the support case on the support portal, use the Reply option and upload the files from there.
If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
Requesting Assistance
To review the collected log bundle, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.
|
KB16234
|
Prism Central "Create Recovery Point" task fails
|
Prism Central's "Create Recovery Point" task fails for VMs residing on RF1 containers on the Prism Element cluster.
|
Disaster Recovery operations like snapshots and replication are not supported for VMs residing in an RF1-enabled storage container. A notification to this effect is shown when the RF1 feature is enabled on the Prism Element (PE) cluster. VMs residing on RF1 containers are also indicated on the Prism Central (PC) "VMs" page, as seen in the screenshot below:
The "Create Recovery Point" task will fail when a VM snapshot (Recovery Point) operation is inadvertently attempted. The tasks on the PC GUI will indicate the following errors:
The "Entity snapshot" task will fail with the error "Recovery Point of entity failed for VM". The "Create VM recovery point" task will fail with the error "INTERNAL_ERROR: Failed to create recovery point."
Further, upon checking the CLI, the following tasks are in the failed/aborted state. Note: The timestamp in the task details is in the UTC timezone.
nutanix@PCVM:~$ ecli task.list
nutanix@PCVM:~$ ecli task.get 13b67b95-45c3-5859-84e0-18ab63289dec
nutanix@PCVM:~$ ecli task.get e09f3a16-eb00-456f-8521-02447ed99286
nutanix@PCVM:~$ ecli task.get 84a4588e-fea8-4667-989d-f2f79f64e5a2
The aborted "EntitySnapshot" task is against the Cerebro component, implying that the "Create Recovery Point" task made it to the PE cluster. On the PE cluster, the "Entity snapshot" task will be in "canceled" status with the message "Failed to snapshot entities".
nutanix@CVM:~$ ecli task.list
nutanix@CVM:~$ ecli task.get 13b67b95-45c3-5859-84e0-18ab63289dec
The cerebro.INFO (/home/nutanix/data/logs/cerebro.INFO) log file on the Cerebro leader CVM will also indicate that the "create backend snapshot" operation was skipped because the "VM has RF1 disks" and hence is "CBR incapable".
nutanix@CVM:~$ allssh 'grep 13b67b95 ~/data/logs/cerebro.INFO | grep "Starting meta op"'
nutanix@CVM:~$ allssh 'grep -w 97 ~/data/logs/cerebro.INFO | grep reason'
NCC Health Checks on the PE cluster flags RF1 container(s).
Detailed information for rf1_container_check:
|
DR operations like snapshot and replication are not supported on VMs in an RF1 storage container. This limitation is documented in
KB-11291 https://portal.nutanix.com/kb/11291 - RF1 Guidelines and LimitationsPrism Element Guide. AOS 6.5 guide link https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_5:wc-replication-factor1-recommend-r.html. AOS 6.7 guide link https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-replication-factor1-recommend-r.html.
|
KB12587
|
NCC Health Check: lenovo-disk-order-check
|
This KB describes an NCC check which detects certain Lenovo hardware models where the drive mapping for the slot 3 and slot 4 drives is swapped.
|
The NCC Health Check lenovo_disk_order_check validates the drive mapping on certain Lenovo Hardware.
Running the NCC Check
Run the check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or run this check separately:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks lenovo_disk_order_check
This check is scheduled to run every day, by default.
This check will generate an alert after 1 failure.
This check has been enabled in NCC-4.6.1.
Sample output
For Status: PASS
Running /health_checks/hardware_checks/disk_checks/lenovo_disk_order_check [ PASS ]
For Status: FAIL
Running /health_checks/hardware_checks/disk_checks/lenovo_disk_order_check [ FAIL ]
Note : This hardware-related check executes on Lenovo hardware.
The check only executes on the below Lenovo models:
HX1320
HX1321
HX2320
HX2321
|
Detection of the issue :
On an affected configuration, the information displayed in Prism for slots 3 and 4 is swapped. This could result in actions intended for slot 3 affecting slot 4 and vice-versa.
When you run the list_disk command on the CVM, you'll see the issue, as the information for the drives in Slot 3 and Slot 4 will be swapped.
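For reference, the command is run from any CVM on the node (a sketch; on affected nodes, the slot 3 and slot 4 rows appear swapped in the output):
nutanix@cvm$ list_disk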
Any amber LED indicating a failed drive for either slot 3 or 4 will be reversed as well.
Fix: The fix has been released in AOS 5.20.4. If your cluster is impacted by this issue, please upgrade your cluster to AOS 5.20.4.
In case the above steps do not resolve your issue, please contact Nutanix Support http://portal.nutanix.com.
|
KB6485
|
Customizing the Docker Ethernet bridge to avoid IP address conflict in X-Ray
|
This article describes how to customize the Docker Ethernet bridge to avoid IP address conflict in X-Ray.
|
When you install Docker, it automatically creates an Ethernet bridge. Containers are automatically connected to the bridge, and traffic to and from the container passes over the bridge to the Docker daemon.
By default, Docker uses the IP address range 172.17.0.0/16 for the bridge. This range conflicts with the IP address range that X-Ray uses for vCenter, and this can cause issues with connectivity for X-Ray.
|
To avoid this problem, edit the daemon.json file.
Find the daemon.json file in /etc/docker. If the file does not exist, create it.
Edit the "bip" parameter to set your desired IP address range. For example:
{
  "bip": "192.168.1.5/24"
}
If the file exists and there is existing information inside a square-bracketed section, make sure the bip line is outside the square brackets.
Save and close the daemon.json file.
Restart Docker.
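On systemd-based hosts, the restart typically looks like this (a sketch; the exact service management command may differ by distribution):
$ sudo systemctl restart docker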
For more information on the daemon.json file, go to https://docs.docker.com https://docs.docker.com and search for "Customize the docker0 bridge."
|