KB2124
Prism shows an incorrect number of blocks or assigns nodes to blocks that do not exist
This article details corrective action that can be taken if the Prism user interface shows more blocks than are currently installed.
The Prism GUI might show nodes in blocks that do not exist and/or more blocks than are currently installed. For example, a four-node cluster in a single block may be displayed as a two-block cluster. Here is why. To determine which nodes belong to which blocks, Prism uses the value of the rackable_unit_serial attribute in the /etc/nutanix/factory_config.json file on each Controller VM. Nodes that belong to the same block must have the same value for the rackable_unit_serial attribute. The Prism service prints this information to the user interface, so if the information in this file is incorrect, the user interface appears to be incorrect. To compare the rackable_unit_serial attribute of all the nodes in the cluster, print the contents of the factory_config.json file on each node so that they appear on the screen one after the other:

nutanix@cvm$ allssh 'grep --color "rackable_unit_serial" /etc/nutanix/factory_config.json'
================== x.x.x.126 =================

If the serial number reported by one node differs from the serial numbers of the other nodes in the block, that discrepancy causes Prism to show the node in a different block that, in reality, does not exist.
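For reference, the relevant portion of a healthy factory_config.json looks similar to the sketch below. The field names match what Prism reads, but the values here are purely illustrative, and the file contains additional fields omitted for brevity:

nutanix@cvm$ cat /etc/nutanix/factory_config.json
{"rackable_unit_serial": "15SM60250066", "node_position": "A", "rackable_unit_model": "NX-3050", ...}

On all four nodes of a single four-node block, rackable_unit_serial must be identical, while node_position differs (A, B, C, D).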
To resolve the issue, follow the steps below. Note: The commands in this procedure DO NOT affect cluster stability and are safe to run in a production environment.

1. Using SSH, log in to the Controller VM with the incorrect serial number.
2. Use a text editor like vi to change the incorrect serial number to match the serial number of the other nodes in the block.
3. Restart Genesis:
nutanix@cvm$ genesis restart
4. Find the Prism leader. The easiest way to accomplish this is to run the cluster status command and review the process IDs for the Prism service on each node. The Prism leader always has six PIDs, whereas the other Prism instances have fewer than six:
nutanix@cvm$ cluster status | egrep -i "CVM:|prism"
5. Using SSH, log in to the Controller VM on the node that hosts the Prism leader, and then stop the Prism service:
nutanix@cvm$ genesis stop prism
6. Restart the Prism service:
nutanix@cvm$ cluster start
7. To verify that the correct information is now reported in the Prism user interface, log out of Prism and then log back in.
KB11244
NCC Health Check: stale_app_consistent_snapshot_metadata_chunks_check
The NCC health check stale_app_consistent_snapshot_metadata_chunks_check checks if the stale app consistent snapshot metadata is possibly consuming unwarranted resources.
The NCC health check stale_app_consistent_snapshot_metadata_chunks_check verifies whether stale app-consistent snapshot metadata is consuming unwarranted resources. This check runs by default every 24 hours. It returns an INFO or WARN result if it identifies stale app-consistent snapshot metadata IDF entities, and generates a Warning alert after one failure. Note: This check only runs on AOS 6.1 or later releases.

Running the NCC check
You can run this check as part of the complete NCC health checks:
nutanix@cvm:~$ ncc health_checks run_all
Or you can run this check individually:
nutanix@cvm:~$ ncc health_checks data_protection_checks protection_domain_checks stale_app_consistent_snapshot_metadata_chunks_check

Sample output
Check Status: PASS
Running : health_checks data_protection_checks protection_domain_checks stale_app_consistent_snapshot_metadata_chunks_check
Check Status: INFO
Running : health_checks data_protection_checks protection_domain_checks stale_app_consistent_snapshot_metadata_chunks_check
Check Status: WARN
Running : health_checks data_protection_checks protection_domain_checks stale_app_consistent_snapshot_metadata_chunks_check

Output messaging
Check ID: 110274
Description: Stale app consistent snapshot metadata possibly consuming unwarranted resources
Causes of failure: Unable to garbage collect app consistent snapshot metadata chunk entities, probably due to Insights service unavailability
Resolutions: Contact Nutanix Support to clean up stale entries. Refer to KB 11244.
Impact: IDF higher space utilization
Schedule: This check is scheduled by default to run every 24 hours
Number of failures to alert: One
Check the list of known issues and contact Nutanix Support in case further assistance is required.

KNOWN ISSUES:
1. Issue summary: A known issue affects AOS versions 6.1.x, 6.5.x, and newer. During a backup workflow, when a backup snapshot is created and later removed by a 3rd-party tool, a stale metadata chunk entity can be generated in IDF. The issue does not affect the backup workflow itself, so everything continues to work, but over time hundreds of such stale entities can accumulate.
Recommended action: The Nutanix Engineering team is aware of the issue and working on a fix. This KB will be updated with the AOS version containing the fix when it is released. Before the AOS version with the fix is released:
- This NCC check can be ignored. Optionally, it can be disabled (turned off) in the Prism Element UI, as explained in the Prism Web Console Guide section "Configuring Health Checks".
- Nutanix Support can be engaged to perform a manual cleanup of stale metadata chunks from IDF, but new stale chunks can be generated if the cluster is still running an AOS version without the fix.
KB1591
Virtual disk provisioning types in VMware with Nutanix storage
This article provides some background information about provisioning types of virtual disks in VMware ESXi and the implications in a Nutanix system.
This KB article provides background information about virtual disk provisioning types in VMware ESXi and their implications on the Nutanix platform. There have been discussions and comparisons about the relative performance of Thin, Thick, and Thick Eager Zeroed provisioned disks. On traditional storage systems, Thick Eager Zeroed disks provide the best performance of the three provisioning types, Thick disks the second best, and Thin disks the least. However, this does not apply to the modern storage architecture found in Nutanix systems.
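To experiment with the three provisioning types side by side, they can be created directly on an ESXi host with vmkfstools. The sketch below is illustrative only; the datastore path and sizes are placeholders, not values from this article:

[root@esxi_host] vmkfstools -c 10G -d thin /vmfs/volumes/<datastore>/test/thin.vmdk
[root@esxi_host] vmkfstools -c 10G -d zeroedthick /vmfs/volumes/<datastore>/test/thick.vmdk
[root@esxi_host] vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/<datastore>/test/eagerzeroedthick.vmdk

On Nutanix storage, all three should deliver comparable performance, in part because zeroed regions are handled in the distributed storage fabric's metadata rather than by physically writing zeros.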
Refer to the Nutanix documentation vSphere Administration Guide for Acropolis: vDisk Provisioning Types in VMware with Nutanix Storage https://portal.nutanix.com/page/documents/details?targetId=vSphere-Admin6-AOS-v6_7:vsp-cluster-provisioningtypes-vdisk-vsphere-c.html for more details.
KB14101
Stuck Erasure Coding ops
This KB describes an issue of stuck Erasure Coding ops in AOS 6.6 and above.
Description: EC ops (undo/recode/replace) can be triggered on already EC-ed data. These ops are launched from Curator. However, in some cases EC ops will not be launched because Curator detects ongoing ops on certain strip members. Usually that is the right thing to do: wait until the ongoing operations are completed, then launch the new EC ops in the next scan. Because of the bug ENG-488240 https://jira.nutanix.com/browse/ENG-488240, egroup and extent migration ops are consistently triggered from the atime (access time) code in Stargate. The atime code tries to migrate the egroup or extents to the node that is reading that egroup. These ops fail and are retried from Stargate again and again, leaving a flag on the egroup indicating that some operations are in flux. When the ENG-488240 bug occurs, some EC ops are not launched from Curator, and that can cause some high-level tasks to be stuck. In the ENG-488240 case, the affected egroups are both EC-ed and snapshotted.

Original issue: In the ENG-488240 case, EC was disabled on the container and there was still some leftover EC-ed data. Undo tasks in this case cannot be triggered from Curator.

Potential scenarios where the issue can happen in the field (any of the below, combined with vdisk snapshotting being enabled):
- EC turned off: Curator wants to trigger EC undo ops, which can be stuck because of the bug.
- Garbage collection: Curator wants to trigger EC replace ops, which can be stuck because of the bug.
- Disk/node removal: Curator wants to trigger EC recode ops, which can be stuck because of the bug.
- Node/disk rebuild: Curator wants to trigger EC rebuild/decode ops, which can be stuck because of the bug.
- FT rebuild: Curator wants to trigger EC fix-NG-aware-placement ops, which can be stuck because of the bug.

Unnecessary migrate extents ops might impact:
- EC undo
- EC recode
- Other migrate extents ops
because all are capped at 20 in-flight ops (stargate_max_migrate_extents_outstanding_ops). Migrate egroup tasks are triggered from the atime code with kDefault (lowest) priority, but they might still affect other ops of similar priority.

EC ops that are not affected:
- Encode (would require data not to be EC-ed)
- Delete (would require data to be deleted, which is not the case)
- Overwrite (would require data to be write-cold, and in this case it is only being read)
- Fix Overwrite TU (same as the overwrite case)
Identification: Once the suspicious behavior is seen, this issue can be easily identified and confirmed. NOTE: This issue is specific to AOS 6.6 and is expected to be fixed in AOS 6.7. If similar symptoms are seen in other releases, this KB and workaround should not be used, and further troubleshooting is required.

In the Curator logs, you should see error messages indicating that there are ongoing operations on certain egroups. You might find a certain egroup in multiple Curator scans, meaning that the egroup is stuck on some Stargate:

W20221208 10:27:18.863232Z 28547 extent_group_id_reduce_task.cc:10741] ECMR Skip checking EC strip with primary parity egroup 198680

After you find these messages in the Curator logs, continue with the Stargate logs. The following scenario should be happening:
1. An owner vdisk hosted on node1 holds a member of the strip, I1.
2. Strip member I1 is to be migrated to node2 because the migration conditions were satisfied in the atime code. The current logic takes into account whether the owner vdisk is marked for removal; if so, it acquires a vdisk configuration from the corresponding parent chain vdisk and fills in the fixer op arguments with the chain vdisk id. For a chain vdisk, an egroup migration can run on any node, no matter which node hosts the chain vdisk.
3. Node2 holds another strip member, I2. Thus, the egroup migration operation fails with error code kConflictingErasureMember.
4. After the first egroup migration fails, the atime code tries to migrate the extents from egroup I1, but in that case the owner vdisk id is sent as an argument. That causes repeating migration failures, which end up in a loop.

To confirm you are dealing with this issue, look at the Stargate logs. There will be the following set of messages:

W20221208 00:17:16.335911Z 20419 vdisk_micro_migrate_egroup_op.cc:280] vdisk_id=2345 operation_id=42720989 egroup_id=157880 Migrate request extent_group_id: 157880 desired_replica_disk_ids: 70 migration_reason: kILM on erasure egroup finished with error kConflictingErasureMember

Notice the vdisk ids. The first message, which reports the kConflictingErasureMember error, references vdisk 2345, while the later one references vdisk 2252. Running the vdisk configuration printer shows that vdisk 2345 is a chain vdisk, while vdisk 2252 is the owner vdisk of egroup 157880. The owner vdisk 2252 should also have the "to_remove" field set to true. To summarize: there will be two warning messages from the migration_action.cc file, one with the kConflictingErasureMember error and another saying that the extents migration failed. Compare the vdisk ids from the corresponding messages; there should be both chain vdisk and owner vdisk ids.

WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by an ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before making any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB-1071 https://portal.nutanix.com/kb/1071.

Workaround: A simple workaround for this problem is to avoid using the chain vdisk id for an egroup migration when the owner vdisk is marked for removal, by setting the following Stargate gflag on all the nodes in the cluster:

stargate_atime_enable_migration_on_chain_vdisk=false
That way, all the migration requests will be sent to an appropriate node, and there won’t be any failed extents migration messages.
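To gauge how widespread the retry loop is before and after applying the gflag, a quick log sweep can help. The one-liner below is illustrative rather than an official step from this KB, and it assumes the default CVM log location:

nutanix@cvm$ allssh 'grep -c kConflictingErasureMember ~/data/logs/stargate.INFO'

A large and growing count on one or more nodes is consistent with the migration loop described above; after the workaround is applied, the count should stop increasing.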
KB11437
Nutanix Kubernetes Engine - karbon-core container may crash in ergonClient.TaskUpdate routine causing os-image upgrade failure
This article discusses a possible karbon-core container crash scenario due to ergonClient error handling, and two known scenarios in which this condition was observed in the field.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. In some scenarios, when NKE tries to communicate with the Ergon service to update a task state, it may panic (crash) with a nil pointer dereference in the ergonClient.TaskUpdate() routine. This crash can happen if the Ergon client in Karbon tries to update an already completed task and Ergon responds with error code 16, "Trying to update already completed task". Scenarios in which NKE may try to update an already completed task vary; the following two have been observed in the field.

Scenario 1: during os-image upgrade. karbon_core.out may show a crash with the below backtrace:
2021-05-25T12:20:53.937Z hostupgrade_task.go:246: [ERROR] Cluster :failed to upgrade host image of K8S cluster: internal error: failed to upgrade boot disk image with error updating vm bootdisk: failed to find boot disk, vmModel &{AvailabilityZoneReference:<nil> ClusterReference:0xc002bd6d20 Description: Name:0xc002a87590 Resources:0xc002a3fc00}
In both ecli and karbon_core.out, you can see that the upgrade was triggered twice within a short interval.
karbon_core.out:
2020-11-17T20:42:44.557Z host_upgrade.go:105: [INFO] Upgrading host image on node karbon-k8s01-abcde-etcd-1
ecli task.list component_list=Karbon:
nutanix@PCVM:~$ ecli task.list component_list=Karbon

Scenario 2: during k8s cluster deployment:
2021-04-14T05:10:57.92Z deploy_k8s_task.go:365: [ERROR] [k8s_cluster=acsk8sdefault] Cluster acsk8sdefault:Valid error wasn't populated
Cause
The problem surfaced due to a change on the Ergon side via commit 636b7fd27aaf57b778e882e58cf2e3d9d5d5b830: the block_updates_on_completed_tasks gflag default value changed from false to true, so Ergon began to return the new error code 16 (ErgonInvalidRequestOnCompletedTask), which is not handled well on the Karbon ergon_client side. This change is currently integrated in AOS 5.19.2 and 5.20; this type of crash may happen on Karbon deployments running on these AOS versions. KRBN-3851 and KRBN-3593 are resolved in Karbon 2.3 and later.

Solution
Upgrade to Karbon 2.3 or later.

Workaround
Scenario 1: during os-image upgrade. In this scenario, the upgrade may not complete successfully, and a Karbon VM may be left without a boot disk. The reason is that, during concurrent attempts by 2 upgrade tasks to update the VM, the first task detaches the boot disk from the VM, and karbon-core may crash when the second task tries to update the already completed task. In this scenario, the VM is left without a boot disk and cannot start. You can check the number of disks of the affected VM via the Prism UI (i.e., karbon-k8s01-abcde-etcd-1 in this example). A possible outcome is that there is no scsi.0 disk present on the VM.

If the VM has no scsi.0 disk:
1. Use acli image.list to get the list of available images and find the UUID (94061fe8-638f-46d3-997a-8ce5eee9aaac in the below sample) of the new Karbon OS image, then attach it to the VM as the scsi.0 drive:
nutanix@CVM:~$ acli vm.disk_create karbon-k8s01-abcde-etcd-1 index=0 bus=scsi clone_from_image=94061fe8-638f-46d3-997a-8ce5eee9aaac
2. Set the appropriate boot device:
nutanix@CVM:~$ acli vm.update_boot_device karbon-k8s01-abcde-etcd-1 disk_addr=scsi.0
3. Restart the upgrade from the Prism UI or via karbonctl:
nutanix@PCVM:~$ karbonctl login --pc-username admin

Note: when two UpgradeHostImage tasks are triggered, it is also possible that the upgrade completes successfully. If all worker/etcd/master nodes are found to be operational and running the latest Node OS version, then this is a cosmetic UI issue described in KRBN-3593. In order to mark the task completed in the Karbon UI, you can mark the task in kFailed state as completed (an illustrative way to inspect the task first is shown after this section):
nutanix@PCVM:~$ bin/ergon_update_task --task_uuid=24c26c66-848a-4593-8314-84ce8b2799ee --task_status=succeeded

Scenario 2: during k8s deployment. This scenario had no known impact; after an automatic karbon-core restart, the operation finished successfully.
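Before marking a failed task as succeeded, it is worth inspecting it to confirm it matches the cosmetic-UI case described above. An illustrative check (using the task UUID from the example command above; substitute your own):

nutanix@PCVM:~$ ecli task.get 24c26c66-848a-4593-8314-84ce8b2799ee

Confirm the task is the Karbon UpgradeHostImage task in kFailed state before updating its status.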
KB13377
ADS load balancing and scheduling fails when there is a VM to VM affinity setting and one VM is powered off
ADS (Lazan) CPU scheduling fails when an affinity rule is configured for a VM and the VM is powered off.
If a VM in a powered-off state is referenced by a VM-to-VM affinity rule, or if a VM group with only 1 VM has affinity set, ADS scheduler operations will fail. This affects AOS clusters running 5.20.4+ on AHV.

From the Lazan logs (/home/nutanix/data/logs/lazan.log):
2022-06-29 12:47:23,272Z WARNING solver_interface_client.py:87 Solver RPC returned error 6.
From /home/nutanix/data/logs/solver.out:
ERROR [Thread-1] 2022-06-29 12:47:23,271Z SolverRpcServiceImpl.logInvalidRequest:215 Caused by: java.lang.IllegalArgumentException: A VM-affinity policy must mention at least 2 VMs. Got [1d454bb9-3792-4b3a-b92a-49600c963317]
This issue is resolved in the AOS 6.6.X family (STS): AOS 6.6.2. Upgrade AOS to the latest supported version. There are two workarounds for the issue:
1. Remove the VM group anti-affinity for the VMs in question:
nutanix@cvm:~$ acli vm_group.antiaffinity_unset group_name
2. Power on the VMs in the anti-affinity group.
KB15034
Troubleshooting IPMI Web GUI Access
In an effort to help identify the nature of the IPMI access issues, this article covers a range of causes that might be impacting access to the IPMI Web GUI.
Troubleshooting access to the IPMI Web GUI can be challenging, as issues range from a complete inability to access the IPMI Web GUI to a sluggish connection after entering the username and password. Below are several troubleshooting steps based on the IPMI Web GUI behavior. They start by simply confirming the logical and physical configuration and move on to more unusual situations, such as a hung BMC/IPMI negatively impacted by network scanners or SNMP software. While the ipmitool commands may run correctly on other vendor platforms that support the Intelligent Platform Management Interface (IPMI), the IPMICFG commands are specific to Nutanix platforms manufactured by Supermicro. IPMICFG is an in-band utility for configuring IPMI devices. It is a command-line tool providing standard IPMI and Supermicro® proprietary OEM commands for BMC/FRU configuration.
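Before working through the checks below, it can help to list the IPMI addresses the cluster already knows about. The command below is an illustrative starting point (the exact output fields can vary by AOS version):

nutanix@cvm:~$ ncli host ls | grep -iE "name|ipmi"

This gives a quick map of host names to IPMI IPs to compare against the per-IPMI checks that follow.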
IPMI Network Configuration Checks
Ping the IPMI gateway followed by the IPMI IP. To obtain the IPMI network configuration, run either of the following commands. Cluster-wide IPMI network configurations are helpful to ensure consistent configuration across all the IPMIs. Always confirm that the IPMI network configuration is set to static or matches the working IPMIs. Confirm that, if a VLAN is used, it is configured on every IPMI. Finally, ensure the gateway configuration is correct.

Cluster-wide IPMI Web GUI network configuration:
nutanix@cvm:~$ hostssh "ipmitool lan print 1"
Individual host IPMI Web GUI network configuration:
[root@ahv_host] ipmitool lan print 1
On ESXi, you must use the forward slash "/":
[root@esxi_host] /ipmitool lan print 1

Standard network configuration checks include whether the VLAN is set correctly compared to the rest of the cluster, and whether the IPMI IP address source is set to static or was mistakenly set to use DHCP, which would mean the IPMI Web GUI is using a different IP address than the one known:
[root@AHV ~]# ipmitool lan print 1

For additional troubleshooting related to the IPMI Web GUI networking and user configurations, refer to KB-1486 http://portal.nutanix.com/kb/1486.

Next, ping both the IPMI gateway and the IPMI IP for the problematic IPMI, followed by a functional IPMI in the same cluster, to compare behavior:
C:\Users>ping xx.xx.xx.1
Failure to reach the gateway implies that the IPMI port is not cabled or the switch port is not configured correctly.

IPMI Cabling Check
Ensure each IPMI is cabled and review whether it uses a shared or dedicated port. For more details about each available IPMI port, review KB-14367 http://portal.nutanix.com/kb/14367.
nutanix@cvm:~$ hostssh /ipmicfg -linkstatus

IPMI Required Ports
Even though you might be able to ping the IPMI IP, you will not have access to the IPMI Web GUI if the required IPMI ports are not allowed by the firewall policy. If a firewall is blocking the required IPMI ports, you will not be able to connect via a browser, or you might lose specific functionality depending on which ports are blocked. Refer to the following link to understand the IPMI port requirements: Ports and Protocols - IPMI https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filter[%E2%80%A6]537e2c298bc&productType=IPMI%20%28NX%20Series%20Hardware%29.

Place the Browser into Incognito Mode
Doing this removes any installed extensions (VPN, anti-virus) and prevents cached user names and passwords from being used. Using a different server can also be helpful, as it tests connectivity between the jump box and the IPMI network.

IPMI Issues: IPMI Commands Fail, Slow IPMI, White Screen After Gaining Access
When running ipmitool or /ipmicfg commands, they fail to provide any output. Also seen can be a slow-to-respond IPMI Web GUI, or, when accessed, the user is presented with a white screen. These behaviors are often seen when the IPMI/BMC is hung, requiring a BMC reset or a power drain of the node or chassis. See How to Reset the IPMI/BMC to Factory Default, KB-8123 http://portal.nutanix.com/kb/8123. Note that on G6 and G7 multi-node platforms, the power drain can be done remotely by following IPMI - Simulate a node pull using IPMI power controls, KB-8883 http://portal.nutanix.com/kb/8883. If performing a full factory reset has no impact, removing the CMOS coin battery for 10 minutes allows for a hardware reset of the BMC.
For details about removing the CMOS coin battery, refer to the battery replacement guide for NX platforms, Hardware Component: Battery https://portal.nutanix.com/page/documents/list?type=hardware&filterKey=Component&filterVal=Battery. Removing the CMOS coin battery removes any existing NTP configuration, which then needs to be reconfigured by accessing the IPMI Web GUI, selecting Date and Time, and entering the required information.

Flashing the BMC is the next logical step. Flashing the BMC means installing a fresh copy of the current BMC firmware. Refer to the Nutanix BMC Manual Upgrade Guide, KB-2896 http://portal.nutanix.com/kb/2896, for those details. Updating and restarting the BMC does not impact the running node, and completing this procedure does not require downtime.

Lastly, physically power-draining the node and/or chassis might be unavoidable. When performing a physical power drain on a node, the node should be removed from power for 10 minutes to ensure all components drain any remaining electrical charge. Since a multi-node chassis power drain can impact several nodes at once and may require a complete cluster shutdown, the following command can be run from the host to determine whether the MCU (Micro Controller Unit) is in a hung state or functioning properly.

Troubleshooting steps
1) Run the following command from the host experiencing IPMI issues:
[root@host~]# ipmitool raw 0x06 0x52 0x07 0x8C 0x01 0xA5 0xFF 0xF5 0x84
ESXi requires a forward slash "/" for the ipmitool command to run properly.

Output codes and their meaning:
10 - the MCU is functioning normally
00 - the MCU is hung

Example:
[root@host~]# ipmitool raw 0x06 0x52 0x07 0x8C 0x01 0xA5 0xFF 0xF5 0x84
10

Like all IPMI commands, the same command can be run from any working CVM by adding the following arguments:
nutanix@cvm:~$ ipmitool -H <IPMI IP> -U ADMIN -P <password> raw 0x06 0x52 0x07 0x8C 0x01 0xA5 0xFF 0xF5 0x84

Remove the Network: Directly Connect to the IPMI
Given the difficulty of isolating a network configuration that results in abnormal IPMI Web GUI behavior, you can remove the network from the troubleshooting process by accessing the IPMI on-site, KB-13748 https://portal.nutanix.com/kb/13748.

IPMI Web GUI: White Screen After Entering User Name & Password
A common issue is the use of non-supported characters in the password used on the IPMI Web GUI. Changing the password to the node serial as a test can confirm whether this is the issue. Note that even after changing the IPMI password, a BMC reset might be needed to clear the current behavior before the change takes effect. In addition, creating a new user account can rule out causes such as the IPMI user being disabled or locked out, or bad attempts hitting the threshold that prevents access. Follow KB-1486 https://portal.nutanix.com/kb/1486 to perform this task.

Faulty Chassis
In several cases where the chassis backplane was faulty and prevented access to the IPMI on a node, the command /ipmicfg -tp info failed, returning the output "not twinpro". This implies that the node thinks it is a single-node block, as it does not report information on itself or its parent. When that same command was run on Node B, the following output was provided:
[root@host~]# /ipmicfg -tp info
This shows that Node B sees its partner and identifies itself as a multi-node platform.
When that same command was run on Node A, the following output was provided:
[root@host~]# /ipmicfg -tp info
This clearly shows an issue with the backplane, as Node B cannot report back properly; that is, it has no communication with Node A. If this is unsuccessful, proceed with a BMC reset and then a node power drain.

MEL: Review the Maintenance Event Logs
Reviewing the MEL can point to heavy traffic from devices or software (e.g., an SNMP server) that can overload the IPMI Web GUI, making it hung, sluggish, or inaccessible. You can obtain the MEL from the CLI or the IPMI Web GUI. Run the following command to review the MEL from the host:
[root@host ~]# /ipmicfg -mel list
To better understand the daily impact, you can grep for a particular timestamp:
[root@Sonic-1 ~]# /ipmicfg -mel list | grep Time:2024/03/27
If the troubleshooting process requires you to clear the MEL, save a copy of it and attach it to the support case you are working on. Clearing may be needed because the MEL is full and cannot take on more entries:
[root@ahv ~]# /ipmicfg -mel
G6/G7 IPMI Web GUI: Note that if the "Used" MEL entries exceed the maximum number allowed, clear the MEL after saving and attaching a copy to the support case you are working on.
G8/G9 IPMI Web GUI

Default Password
In BMC 7.07 and earlier, the default credentials are username = ADMIN and password = ADMIN. In compliance with California statute SB-327, BMC 7.08 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS:top-Release-Notes-G6-G7-Platforms-BMC-v7_08-r.html and later uses a unique password. The new default credentials are username = ADMIN and password = <node-serial-number>. To find the serial number, issue the command /ipmicfg -fru list from the host. In the output, search for "Board serial number"; its value is the default IPMI Web GUI password.
[root@host ~]# /ipmicfg -fru list
You can also find the serial number on a sticker located by the ports at the back of the node.

Additional resources
- NX G6-G7 IPMI is not responsive through GUI/CLI while being polled through 3rd-party SNMP software, KB-14736 https://portal.nutanix.com/kb/14736 (Platform: G6/7; BMC 7.11+)
- IPMI web console is not able to log in, KB-9726 https://portal.nutanix.com/kb/9726 (Platform: G7; all BIOS versions; all BMC versions)
- IPMI GUI is not reachable via Web browser, KB-8930 https://portal.nutanix.com/kb/8930 (Platform: G6/7; BMC 7.00; NCC 3.9.x)
- IPMI Web GUI inaccessible on NX G6 or G7 hardware with BMC 7.0x (blank white screen), KB-8736 https://portal.nutanix.com/kb/8736 (Platform: G6/7; BMC 7.00, 7.07, 7.09)
- IPMI ADMIN account locked out after upgrading to BMC 7.0, KB-8119 https://portal.nutanix.com/kb/8119 (Platform: G6/7; BMC 7.0)

IPMI CLI and Web GUI KB resources
- Common BMC and IPMI Utilities and Examples, KB-1091 https://portal.nutanix.com/kb/1091
- How to Reset the IPMI/BMC to Factory Default, KB-8123 https://portal.nutanix.com/kb/8123
- How to re-configure IPMI using ipmitool, KB-1486 https://portal.nutanix.com/kb/1486

NX Series Hardware Administration Guide
- Changing an IPMI IP Address https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:ip-ipmi-ip-addr-change-t.html
- Changing the IPMI Password https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:har-password-change-ipmi-t.html
KB7735
Expand Cluster and Cluster Creation Failure with factory imaged nodes
A node imaged from the factory using Foundation 4.4 or 4.4.1: fails to be discovered during the expand cluster process from Prism; fails during cluster creation via the cluster create command on the CVM.
Nodes imaged from the factory using Foundation 4.4 or 4.4.1 may:
- Fail to be discovered during the expand cluster process from Prism.
- Fail during cluster creation via the cluster create command on the CVM.

Execute the following command to find the Foundation version with which your node was imaged:
nutanix@CVM:~$ cat /home/nutanix/foundation/foundation_version
The following logs can be seen in genesis.out:
nutanix@CVM:~$ less ~/data/logs/genesis.out
Executing genesis status gives the following output:
nutanix@CVM:~$ genesis status
Executing cluster status right after the CVM boots up gives the following output:
nutanix@CVM:~$ cluster status
Executing cluster status after some time gives the following output, which shows that the node tries to execute some action on its old factory IP:
nutanix@CVM:~$ cluster status
Please image the node using the Standalone Foundation VM http://portal.nutanix.com/#/page/docs/details?targetId=Field-Installation-Guide-v4-4:v44-cluster-image-foundation-t.html or the Foundation App https://portal.nutanix.com/#/page/docs/details?targetId=Field-Installation-Guide-v4-4:v44-portable-foundation-app-c.html.
KB6532
Citrix Cloud Connect setup fail, error: Could not contact DNS servers
This article describes an issue where Citrix Cloud Connect setup fails with error "Could not contact DNS servers."
While configuring Citrix Cloud Connect from Prism, it fails with the error:
Could not contact DNS servers
~/data/logs/prism_gateway.log on the Prism leader does not have any entries for the failure. ~/data/logs/aplos.out on the Prism leader has the following entries:
ConnectionError(e, request=request)\n', "ConnectionError: HTTPSConnectionPool(host='trust.citrixworkspacesapi.net', port=443): Max retries exceeded with url: /root/tokens/clients (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fa19df00e50>: Failed to establish a new connection: [Errno 11] ARES_ECONNREFUSED: Could not contact DNS servers',))\n"]
Connectivity to the DNS server looks fine: trust.citrixworkspacesapi.net can be resolved, and a connection to port 443 succeeds.
nutanix@cvm$ allssh "nslookup trust.citrixworkspacesapi.net"
nutanix@cvm$ allssh "nc -v trust.citrixworkspacesapi.net 443"
1. Check whether the DNS entries are correct in the file /etc/resolv.conf. If yes, check the last-modified date on the file:
nutanix@NTNX-A-CVM:~$ allssh "cat /etc/resolv.conf"
2. Check the date when the aplos and aplos_engine services were last restarted:
nutanix@NTNX-A-CVM:~$ ps -aef | grep aplos
3. If the services have been running since before the configuration changes were made to /etc/resolv.conf, restart the aplos and aplos_engine services on all the CVMs. To avoid downtime, execute the following command to restart aplos and aplos_engine on all the CVMs one at a time:
genesis stop aplos aplos_engine && cluster start
4. Run the below Python code to see if a response is received. This is the exact code that gets executed for processing the request. If this does not fail, an aplos restart should resolve the issue.
import requests
Alternative curl code:
curl -k --header "Content-Type: application/json" \
In the above code, replace "Id" with the Citrix Cloud Client ID and "sdd" with the Citrix Cloud Client's Secret Key. If this does not fail, an aplos and aplos_engine restart should resolve the issue.
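For a standalone connectivity test, a minimal sketch of the token request is shown below. It assumes the trust-service endpoint seen in the aplos error above ("root" stands in for the customer ID), and the client ID and secret values are placeholders:

import requests

# Illustrative test of the Citrix Cloud trust endpoint from the aplos error above.
# The clientId/clientSecret values are placeholders; verify=False mirrors curl -k.
url = "https://trust.citrixworkspacesapi.net/root/tokens/clients"
payload = {
    "clientId": "<citrix-cloud-client-id>",
    "clientSecret": "<citrix-cloud-client-secret>",
}
resp = requests.post(url, json=payload, verify=False, timeout=30)
print(resp.status_code)
print(resp.text)

If this request succeeds from a CVM, name resolution and outbound connectivity are fine, pointing back at stale resolver state inside aplos.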
KB11122
Insights Server service crashes due to cgroup memory limit, leading to insights-server-dependent services crashing on Prism Central
The customer notices alerts indicating service restarts on Prism Central: "Cluster Service {service} Restarting Frequently" or "CVM Python Service(s) Restarting Frequently". The insights_server service is observed being OOM-killed as a result of the cgroup limits for the insights_server service. With insights_server unavailable, all dependent services end up with FATALs or in a crash loop, which raises an alert on Prism Central for cluster services restarting frequently.
Symptoms:
- Alerts indicating frequent cluster service restarts are seen on Prism Central (Ergon, Magneto, Flow, Lazan, etc.)
- Upon investigation, insights_server-dependent services are seen FATALing or crashing at the time the alert is received on Prism Central
- The insights_server service is noticed to have abruptly gone down with no FATALs logged
- dmesg logs show an OOM kill for the insights_server service

Confirmation:
1. Verify that alerts are being seen on Prism Central indicating that cluster services are restarting frequently. Example of Alert ID A3034:
There have been <threshold> or more service restarts of service within one day across all Controller VM(s).
2. Analysing Prism Central logs reveals insights-dependent services crashing / FATALing at certain points in the day (for example, Ergon, Magneto, Flow, Lazan, etc.). Example snippet from an Ergon FATAL seen on Prism Central:
2021-03-10 23:28:18,747Z ERROR 4073 /home/jenkins.svc/workspace/Promote-pipeline/euphrates-5.19-stable-pc-0.1/9.x-pc-0.1-bigtop-build-release-noshlib/bigtop/infra/infra_server/cluster/service_monitor/service_monitor.c:106 StartServiceMonitor: Child 18766 exited with status: 1
3. The insights_server.out log file shows insights crashing and re-spawning; logs are also missing for a few minutes when insights goes down (in the log snippet below, no logging is available between 20:46 and 20:49). Example snippet from the insights_server.out log file:
E20210321 20:46:24.225095Z 21253 insights_rpc_base.h:248] RPC GetEntitiesWithMetrics: Failed. Took 242 us. Error = kInvalidRequest: Sub Error = kQueryInvalidEntityType: 'Invalid entity type: ntnxprismops__microsoft_sqlserver__query specified. Request id: query_1616359584224851_96307_127.0.0.1 '. Result: Total
4. Logging in the insights_server.out file after the crash continues to indicate that clients on Prism Central (using insights) have issues connecting to the Insights Server DB, given that not all shards on the DB are online (indicating insights instability). Example snippet showing Magneto failing to register:
E20210321 20:49:19.439640Z 19837 coordinator_watch_client.cc:298] RegisterWatchClientDone: client_id = CWC$Magneto:x.x.x.208:Master:VM session_id = 5089040d-6984-46c2-ba34-54f9706c0054 ip = port = 2027Failed to register the watch clientError = 5
5. dmesg logs show an OOM kill for insights_server, as shown below:
[Thu Mar 25 10:49:31 2021] Control_10 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=100

It was noticed in the customer's cluster that insights_server would crash several times at specific intervals every day. The OOM kill noticed on the Prism Central VM was caused by hitting the cgroup limits for insights_server. Depending on the case you are working on, you may encounter the following situations.

Situation 1: RSS is at a higher value while the cgroup still reflects the default value. The RSS for insights_server could have been increased by Support on a previous case, as part of troubleshooting, from its default value.
During our investigation, we noticed that RSS was set to 10G on the Prism Central VM:
nutanix@NTNX-10-x-x-x-A-PCVM:~$ links --dump http://0:2027/h/gflags | grep insights_rss_share_mb
However, the cgroup memory limit for insights_server continued to use the default limit of 5.5G (this can be checked through the genesis logs, as shown below, on the Prism Central VM):
nutanix@NTNX-10-x-x-x-A-PCVM:~$ grep cgroup ~/data/logs/genesis.out
The expected behaviour when insights_server exceeds its RSS limit is for it to FATAL with the message shown below. Insights_server.FATAL log files are generated in /home/nutanix/data/logs with the following type of error:
F0528 20:08:37.685642 240381 insights_server.cc:658] Exceeded resident memory limit: Aborting. Usage = 6069174272 Limit = 5767168000
If you are hitting Situation 1, you will only notice an OOM kill for insights_server in the dmesg log, and no FATALs for insights_server.
NOTE: The Insights RSS limit on a Small PC VM is 5.5 GB, and the Insights RSS limit on a Large PC VM is 21 GB.
The Panacea memory utilisation plot for insights_server shows usage peaking at the cgroup limit several times before the insights_server service gets OOM-killed and re-spawns. This explains why, although the RSS for the insights service was increased on Prism Central, the growth in memory was still capped at the cgroup limit.

Situation 2: insights_server is encountering FATALs and is in a crash loop due to exceeding its RSS memory limit (potentially with an OOM kill for insights_server seen as well). As explained above, the expected behaviour when insights_server exceeds its RSS limit is for it to FATAL. Insights_server.FATAL log files are generated in /home/nutanix/data/logs with the following type of error:
F0528 20:08:37.685642 240381 insights_server.cc:658] Exceeded resident memory limit: Aborting. Usage = 6069174272 Limit = 5767168000

Situation 3: RSS and cgroup memory values are the same, at 5.5 GB (for a Small PC) or 21 GB (for a Large PC), confirmed with the steps from Situation 1. Ergon is crashing on a recurring basis, but not constantly. (In TH-6236, Ergon was crashing on a weekly basis at approximately the same time of day.)
Checking file sizes under /home/nutanix/data/stargate-storage/disks/<disk path info>/metadata/cassandra/data will show normal/expected values:
nutanix@pcvm:~$ du -hs /home/nutanix/data/stargate-storage/disks/<disk path>/metadata/cassandra/data/insights_metric_data/
Log entries will be seen in the insights_server logs indicating the RSS threshold exceeding 90%. The percentage may be seen hitting 100% prior to the Ergon crash:
nutanix@pcvm:~$ grep CheckRSSExceeds ~/data/logs/insights_server.out
These RSS threshold entries exceeding 90% may line up to occur shortly after a number of queries in the insights_server log against all VMs (in this example, 123 total VMs were present):
nutanix@pcvm:~$ grep "GetEntitiesWithMetrics" ~/data/logs/insights_server.out
To confirm the query is for the "VM" entity and not a different entity, search the insights_server.INFO logs for the query_<timestamp>_<numeric value> portion of the Request id, such as query_1621875600975622_48421 from the output above. In the following example, searching insights_server.INFO for query_1623682801521621_8685960 returns the following, which shows the request is for the VM entity type, indicated by entity_list { entity_type_name: "vm" } in the output:
I20210614 15:00:01.521633Z 805 insights_rpc_svc.cc:3278] Received RPC GetEntitiesWithMetrics from local. Request id: query_1623682801521621_8685960_local.
Argument: entity_list { entity_type_name: "vm" } start_time_usecs: 1623679201000000 end_time_usecs: 1623679201000001 where_clause { comparison_expr { lhs { leaf { column: "node" suppress_ancestor_tree_traversal: true } } operator: kIN rhs { leaf { value { str_list { value_list: "89e1493b-dee0-4bb3-9c5f-796a8689017b" value_list: "5b7229da-078e-453c-b71d-fb9c78453c2a" value_list: "ba04c1a6-1e68-4fb3-918b-8d85dc5e4e66" value_list: "6cbadf94-5566-45b6-a577-1e705e618491" } } } } } } group_by { group_by_column: "node" aggregate_columns { column: "controller.oplog_drain_dest_ssd_bytes" operator: kSum down_sampling_operator: kLast } down_sampling_interval_secs: 3600 } query_name: "query_1623682801501135_8685953_127.0.0.1_fanout_request.controller.oplog_drain_dest_ssd_bytes"
I20210614 15:00:01.521857Z 805 insights_query.cc:2793] Sending local NodeGetEntitiesWithMetrics query. . Request id: query_1623682801521621_8685960_local_fanout_request
I20210614 15:00:01.570416Z 800 insights_query.cc:2761] Local NodeGetEntitiesWithMetrics query: Done. Took 48645 us. Request id : query_1623682801521621_8685960_local_fanout_request
I20210614 15:00:01.571478Z 800 insights_rpc_base.h:262] RPC GetEntitiesWithMetrics: Done. Took 49848 us. Request id: query_1623682801521621_8685960_local. Result: Total group count: 4. Total entity count: 125

The cgroup logs should show the memory fail count (memory.failcnt, the 5th value) increasing. The information for insights_server can be pulled with the following:
nutanix@pcvm:~$ grep -m3 -P 'TIMESTAMP|cgroup_name|insights_server' ~/data/logs/sysstats/cgroup_mem_stats.INFO
As a troubleshooting step, if the failure occurs on a regular schedule, such as weekly on the same day at approximately the same time, stopping the neuron service during that time period should prevent the issue from occurring.

Situation 4: Insights RSS reaching the threshold due to inefficient Neuron queries. You may experience a situation where Insights Server RSS usage goes above 90% at specific time intervals (for example, every Wednesday at 7:00). The following signature can be found in the insights_server.INFO log, and in ~/data/logs/neuron_server.log you can see that neuron is active at these timeframes:
nutanix@pcvm$ allssh 'grep -i "Exceeded RSS threshold pct of 80" ~/data/logs/insights_server*'
The overarching issue with insights_server is expected to be resolved with ENG-398949 in pc.2021.9. If services are not constantly crashing, and there is no impact due to the service crash, confirm whether the customer is okay with waiting for this update.
Note: The following workaround is not ideal for small PC deployments. Increasing RSS may broadly affect other services. Check why the IDF RSS usage is high and consult with STLs or DevEx for the right solution.

Solution for Situation 1: RSS for insights_server on the Prism Central VM was increased, but the cgroup still reflects the default memory value for the insights_server service, and the service is hitting an OOM kill as seen in the dmesg logs. Set the gflag for the genesis service following the steps below.

1. Create the following file on the Prism Central VM:
~/config/genesis.gflags
2. The cgroup limit of insights_server can be updated by assigning an appropriate value to the below genesis gflag, based on the RSS value of insights_server:
--insights_multicluster_cgroup_limit_factor_pct=100
The value to assign to this gflag is calculated as follows:
cgroup_memory_limit_mb = (insights_default_rss * insights_multicluster_cgroup_limit_factor_pct) / 100
NOTE: The parameter "insights_default_rss" is different from the gflag "--insights_rss_share_mb"; this value is defined based on the size configured for the PC in Zeus:
size: kLarge, insights_default_rss = 21000
The configured size for the PC can be checked using:
zeus_config_printer | grep "^ size"
Example: If the configured PC size is kSmall and you want to increase the cgroup limit for insights_server to 10 GB, you would need to set --insights_multicluster_cgroup_limit_factor_pct=200, as per the formula: (5500 * 200) / 100.
NOTE: This calculation is as observed in pc.2022.6; see the sourcegraph link https://sourcegraph.ntnxdpro.com/infra-fraser-6.1-pc-2.5/-/blob/infra_server/cluster/py/cluster/service/insights_db_service.py?L180 for this.
3. Restart genesis:
genesis restart
4. Restart insights_server:
genesis stop insights_server && cluster start
NOTE: Confirm that the ownership of the gflag files is 'nutanix'. The InsightsDB service may not start if the ownership of the file is 'root'.
5. Make sure the cgroup limit has increased, via the genesis log file:
nutanix@PCVM:~$ grep cgroup ~/data/logs/genesis.out
Example output indicating that the cgroup limit has been increased:
2021-04-09 07:42:52,972Z INFO 47782544 service_utils.py:305 Creating cgroup insights_server.

Solution for Situation 2: Requires both RSS and cgroup to be increased for insights_server from their default values.
NOTE: While this KB focuses on ensuring that you increase both RSS and cgroup values, make sure you debug and identify the need to increase the RSS/cgroup values on Prism Central. The below example is for a Small Prism Central VM running 26 GB of memory.
1. Create the following file on the Prism Central VM:
~/config/insights_server.gflags
2. Add the below gflag to the file:
--insights_rss_share_mb=10000
3. Follow steps 1-4 from the Solution for Situation 1 section.
4. Confirm that RSS and cgroup are both reflecting accurate values.
To confirm RSS is set to 10G:
nutanix@NTNX-10-x-x-x-A-PCVM:~$ links --dump http://0:2027/h/gflags | grep insights_rss_share_mb
To confirm the cgroup is reflecting accurately:
nutanix@PCVM:~$ grep cgroup ~/data/logs/genesis.out
Example output indicating that the cgroup limit has been increased:
2021-04-09 07:42:52,972Z INFO 47782544 service_utils.py:305 Creating cgroup insights_server.
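To double-check the limit actually in force, the cgroup value can also be read directly from sysfs on the PCVM. This is an illustrative check; it assumes the cgroup name shown in the genesis log above and a cgroup-v1 memory hierarchy, and the exact path may differ between PC versions:

nutanix@PCVM:~$ cat /sys/fs/cgroup/memory/insights_server/memory.limit_in_bytes

The returned value, divided by 1024*1024, should match the cgroup_memory_limit_mb computed from the formula above.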
Solution for Situation 3: Requires both RSS and cgroup to be increased for insights_server from their default values. The same steps as for Situation 2 can be used for temporary relief of this issue. The overarching issue caused by a large number of calls from the neuron service is expected to be resolved with ENG-398949. If services are not constantly crashing, and there is no impact due to the service crash, the customer may opt to wait for a release containing the fix.
NOTE: It is always a good idea to evaluate whether the Prism Central VM is genuinely starved from a resource-allocation perspective. All examples above are from a Small PC configuration running 26 GB of memory; based on your issue and PC configuration, and once you verify the issue is the same, your RSS value would change accordingly.

Solution for Situation 4: Requires the insights_server cgroup and a neuron gflag to be modified. To confirm that the issue is caused by neuron, you can either stop neuron shortly before insights_server usually starts to consume more memory, or find in the neuron log the check that runs at the time when RSS usage is high and then run it manually. For example, in TH-11549, the "calculate_vm_write_io_latency_baseline_buff_insights" check was impacting RSS usage when running:
nutanix@pcvm$ neuron --debug=true config_based analytics_module predict_alert_plugins calculate_vm_write_io_latency_baseline_buff_insights
After the manual execution completes, review the insights_server log to confirm that RSS usage went up. Only proceed with the below steps after confirming that insights is impacted by neuron. If unsure, engage an STL or Staff SRE for further assistance. If high RSS usage of insights leads to its termination by OOM due to an exceeded cgroup limit, you may use the staggered approach described below:
1. To let insights_server free memory itself, slightly increase the cgroup limit for insights by 5%: follow the steps from Solution for Situation 1 to set insights_multicluster_cgroup_limit_factor_pct to 105.
2. Further monitor insights_server memory usage, and if the crashes are still observed, start decreasing the neuron gflag batch_entity_cnt by 5 or 10 elements. It should not be set lower than 25. For example, the default of batch_entity_cnt is 50, so start with 45 and monitor. This reduces the amount of work insights_server has to perform when neuron collects stats for prediction; however, it increases processing time for neuron and may prevent it from completing its work on the desired schedule.
The following engineering tickets are open for this scenario: ENG-580112 https://jira.nutanix.com/browse/ENG-580112 and ENG-580123 https://jira.nutanix.com/browse/ENG-580123
KB7459
"NCC preupgrade test(s) failed" in preupgrade.out, but NCC checks pass when run manually.
The NCC pre-upgrade check fails, but manual execution of the check passes.
Some NCC checks are leveraged as part of the preupgrade process. If a check results in a warning, failure or error, the preupgrade output suggests running the check manually to identify the cause:
2019-04-16 17:06:25 ERROR preupgrade_checks.py:156 4 NCC preupgrade test(s) failed. First 3 issue(s):, Please refer KB 6430
Running the check as prescribed results in a contradictory PASS:
Running : health_checks network_checks host_cvm_subnets_check
That is because running the check manually using the commands (e.g., ncc health_checks hardware_checks disk_checks host_disk_usage_check) is actually different from running them in a pre-upgrade setting. When you run a check directly, it uses Cluster Health's RPC service, whereas in the pre-upgrade setting the checks are run over SSH. Thus, the pre-upgrade runs the NCC check with the flag --use_rpc=0.
The command that the pre-upgrade check currently suggests is not the same command the pre-upgrade executes. The suggested command is of the form:
ncc health_checks network_checks host_cvm_subnets_check
To get the command executed during pre-upgrade, run the below command from the Prism leader:
cat ~/data/logs/ncc.log | grep -A1 "Unable to parse response from"
The output will be similar to the below; the actually executed command is listed next to "Ran command:":
2019-04-16 17:05:12 ERROR base_plugin.py:1053 Unable to parse response from 10.XXX.XXX.177:
Make sure to use the full command to get accurate results. In some cases, one command may return a pass while the other returns a warning.
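For instance, based on the flag described above, reproducing the pre-upgrade behavior for the subnet check would look something like the following. This is illustrative; match it against the full "Ran command:" line from ncc.log on your cluster:

nutanix@cvm$ ncc health_checks network_checks host_cvm_subnets_check --use_rpc=0

Comparing this result with the plain RPC-based run of the same check should reveal whether the discrepancy comes from the SSH execution path.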
KB9291
Lazan in crash loop AttributeError: 'NoneType' object has no attribute 'container_name'
Covers the workaround for when the Lazan service falls into a crash loop with AttributeError: 'NoneType' object has no attribute 'container_name' due to a container being removed from under a volume group.
In some edge cases, a customer may remove a container that backs a vDisk, which in turn backs a volume group. In these cases, the Lazan service falls into a crash loop because the container to be populated no longer exists. Example Lazan stack trace:
2020-01-21 23:32:08 INFO manager.py:884 co_node_uuids: set([])
Option 1:
1. List all disks, containers, and volume groups to files:
nutanix@CVM$ arithmos_cli master_get_entities entity_type=virtual_disk requested_field_name_list=container_id,nutanix_nfs_file_path,vm_uuid >> virtual_disks.out;
2. Gather all unique container ids across all vDisks and grep them against the containers.out file for any containers that may not exist. In the example below, container "1228953" did not return any results, as it does not exist in Arithmos:
nutanix@CVM$ for i in $(egrep "id: |attribute_value_int: |attribute_value_str: " virtual_disks.out | sed 's/attribute_value_int:/Container_ID:/g;s/attribute_value_str:/VM_UUID:/g;s/id:/Disk_ID:/g;s/\"//g' | grep Container_ID | sort -u | awk '{print $2}'); do echo -e "\n$i"; egrep "container_name: |id: " containers.out | grep -B1 $i; done
3. Reference the container ID back to the disk ID(s). Modify the grep at the end of the below string to match the container ID found above. In this instance, only a single backing disk referenced this container:
nutanix@CVM$ egrep "id: |attribute_value_int: |attribute_value_str: " virtual_disks.out | sed 's/attribute_value_int:/Container_ID:/g;s/attribute_value_str:/VM_UUID:/g;s/id:/Disk_ID:/g;s/\"//g' | grep -B2 1228953
4. Correlate the disk ID(s) found above to the respective volume group(s), then identify any potentially active clients. Below, there was only one volume group; an example of correlating vDisk-to-volume-group mappings through arithmos_cli is provided:
nutanix@CVM$ arithmos_cli master_get_entities entity_type=volume_group | egrep -A1 "id: \"|nutanix_nfs_based_virtual_disk_uuids" | grep -B3 "a8bd9aaf-224d-4d39-b99e-6849c134a6a1"
nutanix@CVM$ acli vg.list
5. Below we can see there was only one iSCSI client. Viewing the entry shows it is attached to the volume group above. You may need to review all iSCSI clients if the VG fails to delete due to active connections:
<acropolis> iscsi_client.list
6. Validate with the customer that the iSCSI volumes are no longer in use, unmount them from their respective initiators, then delete the disk, remove the initiator, and delete the volume group via acli. Note: acli vg.get will fail until the disk is deleted, with an error similar to what is listed below:
<acropolis> vg.list
The above fails because there are both a disk and an external attachment that must be removed first. The following steps must be used to identify and delete the disk and the attachments. Because we are unable to get information on the disk associated with the VG due to the missing container_uuid, we can simply tab-complete the index (0 in the below example). If more than one disk is present on the VG, repeat for each disk (and each external attachment, if more than one):
<acropolis> vg.disk_delete TempContainerVolumeGroup 0
Once the volume group is deleted, the Lazan service should recover within a minute or two.

Option 2:
In similar instances, where the container has already been deleted and the VG cannot be readily identified with the approach above, the following method can also be used.
NCC may also complain about malformed vDisks:
Detailed information for unsupported_vm_config_check:
First, as in Option 1, list all VGs using:
nutanix@CVM:~$ acli vg.list
Then, iterate over the VGs one by one using the below script, and identify the VG for which the VG get fails (acli vg.get):
nutanix@CVM:~$ for i in `acli vg.list | awk '{print $2}' | grep -v Group`;do echo $i; acli vg.get $i ;done >> acli_vg.txt
Get the dump of VGs from Arithmos:
nutanix@CVM:~$ arithmos_cli master_get_entities entity_type=volume_group >> vgs.txt
Search for the failing VG (the one for which acli vg.get failed) in the Arithmos dump above (vgs.txt) and check that its disks are not present in nfs_ls -liaR output. Further, confirm that the correct VG has been identified and that acli vg.get returns the error:
nutanix@CVM:~$ acli vg.get 900c9995-0f6f-4287-b1e6-e589e023416e
Once the problematic VG has been identified, get the VG uuid and confirm that the container is no longer present. The example below is with VG uuid 900c9995-0f6f-4287-b1e6-e589e023416e, which included vmdisks from container 2455348995, which has already been deleted:
nutanix@CVM:~$ pithos_cli -lookup volume_group -vg_uuid 900c9995-0f6f-4287-b1e6-e589e023416e
As indicated above, it is expected that container 2455348995 does not exist:
nutanix@CVM:~$ ncli ctr ls | grep 2455348995
Also note that the iscsi_target_name has a non-identifiable/incomplete initiator, suggesting that no VM clients are connecting to it. This can be verified by the absence of output when searching for it in vdisk_config_printer:
nutanix@CVM:~$ vdisk_config_printer -iscsi_target_name=svcmetasql01-vg01-900c9995-0f6f-4287-b1e6-e589e023416e
Following all these verification checks, confirm with the customer that this VG is no longer in use and that no VMs are actually using the disks in this VG. Should there be any discrepancies in any of the above findings, or any uncertainty, do not proceed with the next steps; engage an EE or STL via a TH. The next step deletes the VG, so it is paramount that there are no uncertainties, as otherwise this becomes a data loss scenario.
Once fully confirmed, ask the customer to delete the VG from Prism; this should fail with "Volume Group with uuid 'xxxx.xxxx.xxx' was not found". This is a final verification check before proceeding to delete the VG via acli:
nutanix@CVM:~$ acli vg.delete SVCMETASQL01
KB6257
SATADOM 3IE3 bricks during an upgrade involving AC power cycle due to table corruption issue
SATADOM 3IE3 models with FW S560301N may brick due to a table corruption issue. Upgrade the FW to S670330N/S671115N.
SATADOM 3IE3 v2 bricks during an upgrade involving an AC power cycle due to a table corruption issue.
Versions affected: SATADOM 3IE3 v2 model with S560301N firmware.
Workflows impacted: Hypervisor upgrade, SATADOM FW upgrade, BIOS upgrade, BMC upgrade, Phoenix-based upgrade
Symptoms seen on AHV upgrade
If you are rebooting the host, you may see the symptoms below:
After the AHV reboot, the host fails to boot and the following error is displayed. Error 18: Selected cylinder exceeds maximum supported by BIOS
After selecting the AHV version to load from GRUB, a disk read error is shown. Error 25: Disk read error
A subsequent reboot in either scenario produces a boot device failure. Reboot and Select proper Boot device or Insert Boot Media in selected Boot device and press a key
You may also see the host get stuck at the grub prompt post-reboot:
Symptoms seen on LCM-based Phoenix upgrade workflows (BMC/BIOS/SATADOM upgrade)
Unable to boot the node after the SATADOM FW upgrade - asking for "insert boot device".
Node gets stuck in Phoenix after the SATADOM FW upgrade and fails to reboot to the host due to the missing SATADOM. lcm_ops.out 2018-08-02 22:21:03 INFO actions.py:586 task_info: location_uuid: "bbc12607-89ae-4acd-847f-8e6cbdf83f0f"
What is the cause?
An AC power cycle of the node can brick the SATADOM 3IE3 v2 model due to table corruption. This table corruption is due to a logical error and a known bug in the S560301N firmware. SATADOM 3IE3 v2 models running firmware S670330N or S671115N are not impacted by the table corruption issue.
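To quickly identify hosts still running the affected firmware across a cluster, a minimal check like the one below can be used. This is a sketch assuming AHV hosts with the SATADOM presented as /dev/sda; adjust the device, or use the per-hypervisor commands in the solution section for ESXi/Hyper-V:
nutanix@cvm$ hostssh "smartctl -a /dev/sda | grep -i -e 'device model' -e 'firmware version'"
Any host reporting firmware S560301N should be scheduled for a firmware upgrade.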
What to do when you encounter the issue?
Power off the node, wait ~30 seconds, and power the node back on. Verify if the issue persists.
Collect the mandatory data indicated below and provide the link and details in the ENG ticket.
If you encounter this issue, replace the SATADOM and dispatch a new one. Once the new SATADOM is replaced and added to the node using the SATADOM replacement process https://portal.nutanix.com/#/page/docs/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-v59-Multinode-G3G4G5:Hypervisor-Boot-Drive-Replacement-Platform-v59-Multinode-G3G4G5, check the firmware version and update it if necessary. Request that the customer update any remaining affected SATADOMs in their environment.
ESXi nutanix@cvm$ hostssh "esxcli storage core device list| grep -A4 Path"
AHV nutanix@cvm$ ssh root@<AHV_host_IP> "smartctl -a /dev/sda " | head
Hyper-V nutanix@cvm$ winsh "cd \Program?Files ; cd Nutanix\Utils ; .\smartctl.exe -a /dev/sda " | head
If you observe bricking of a SATADOM on good FW (S671115N/S670330N): Mark it for failure analysis. The following data must be captured before the RMA.
Template data to place in the Jira comment (only for NX platforms): Customer Name, NX Hardware model, Block S/N, Cluster ID, Date of failure/Timestamp, Hypervisor OS/revision, AOS version, SATADOM model, SATADOM firmware version, Entities flagged for upgrade during the LCM module upgrade, Activities being performed during the failure (LCM SATADOM firmware upgrade? Host reboot? Other manual actions?), Data bundle location
Data bundle to capture and attach to the case: Screen captures of the boot drive failing to be detected or the hypervisor failing to load. If using LCM, the following data needs to be collected as well.
At the phoenix prompt, run lsscsi. This will show the correct device to run smart against and whether the SATADOM is seen: phoenix /phoenix # lsscsi
At the phoenix prompt, run "smartctl -a" against the SATADOM to get the log output: phoenix /phoenix # smartctl -a /dev/sdx - x is the device identified in lsscsi
At the phoenix prompt, run dmesg and gather the output: phoenix /phoenix # dmesg
At the phoenix prompt, gather the output of /proc/cmdline: phoenix /phoenix # cat /proc/cmdline
At the phoenix prompt, gather the output returned by the reboot_to_host.py command: phoenix /phoenix # python /phoenix/reboot_to_host.py
From one of the CVMs (Controller VMs) that are up, gather an ncc log bundle using the following plugin and component list for a 4-hour window around when the problem started. Have it start 1 hour prior to the start of the problem and continue for 3 hours after. For example: nutanix@cvm$ ncc log_collector --start_time=2018/11/03-02:00:00 --end_time=2018/11/03-06:00:00 --plugin_list=cvm_logs,cvm_config,hardware_log_collector --component_list="genesis,lcm,cluster_config,foundation"
KB14483
Alert - A160170 - FileServerCAShareConfigCheck
Investigating FileServerCAShareConfigCheck issues on a Nutanix cluster.
This Nutanix article provides the information required for troubleshooting the alert FileServerCAShareConfigCheck for your Nutanix Files cluster. Alert overview The FileServerCAShareConfigCheck alert is generated if Continuous Availability (CA) is enabled for SMB standard share or nested share. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "160170": "Check if CA is configured on SMB standard or nested share on a File Server.", "Check ID": "Description" }, { "160170": "Continous Availability is enabled for SMB standard share or nested share", "Check ID": "Cause of failure" }, { "160170": "Disable Continous Availability feature on SMB standard or nested shares", "Check ID": "Resolutions" }, { "160170": "Enabling Continous Availability on an SMB standard / nested share could result in performance issues.", "Check ID": "Impact" }, { "160170": "A160170", "Check ID": "Alert ID" }, { "160170": "File Server CA Share Config Check", "Check ID": "Alert Title" }, { "160170": "Misconfig of Continous Availability detected on a File Server SMB standard share / nested share.", "Check ID": "Alert Message" } ]
Use of Continuous Availability (CA) should be limited to distributed shares. If it is enabled on Standard/General shares, performance might be impacted.
Troubleshooting
Check whether any shares have CA enabled and whether they are Standard (General) shares. nutanix@FSVM$ afs share.list|grep 'Continuous\|Share type\|Share path\|Share name'
Resolving the issue
If you have any Standard shares with CA enabled, disable CA using: nutanix@FSVM$ afs share.edit <share-name> continuous_availability=false
If there are any concerns, or additional assistance is needed, contact Nutanix Support https://portal.nutanix.com/.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871. Collect Files-related logs. For more information on Logbay, see KB-3094 https://portal.nutanix.com/kb/3094.
CVM logs are stored in ~/data/logs/minerva_cvm*. NVM logs are stored within the NVM at ~/data/logs/minerva*. To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader, as issues have been seen when running it elsewhere. To get the Minerva leader IP on the AOS cluster, run: nutanix@CVM$ afs info.get_leader
Once you are on the Minerva leader CVM, run: nutanix@CVM$ ncc log_collector --file_server_name_list=<fs_name> --last_no_of_days=5 --minerva_collect_sysstats=True fileserver_logs
For example: nutanix@CVM$ ncli fs ls | grep -m1 Name
Attaching files to the case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
Requesting assistance
If you need assistance from Nutanix Support, add a comment in the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also press the Escalate button in the case and explain the urgency in the comment, and then Nutanix Support will be in contact.
Closing the case
If this KB resolves your issue and you want to close the case, click the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case.
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""curl -I website""
null
null
null
null
KB8573
Unable to open VM console due to SSL protocol filtering
Problem accessing the VM console in any web browser when PC or PE uses a self-signed certificate. However, PE and PC web UI access is not affected.
Users might face trouble accessing guest virtual machine consoles in any web browser when Prism Central (PC) or Prism Element (PE) is using a self-signed certificate. Following are examples of the error messages seen in different browsers: Chrome: There was an error connecting to the VM. This could be due to an invalid or untrusted certificate chain or the VM being powered off. Edge: Your PC doesn’t trust this website’s security certificate.
A possible cause for this issue is the presence of ESET (or any other) antivirus/protection software on the virtual machine. You can investigate further by performing this test on another virtual machine that does not have any antivirus/protection software installed. If you are able to launch the console for a VM that does not have any antivirus installed, proceed with the following plan of action: Validate whether ESET Security Management Center (or other software) is enforcing a policy that disables SSL protocol filtering. If yes, disable the above policy and test VM console access, or apply a CA-signed certificate in Prism. Contact Nutanix Support https://portal.nutanix.com/page/home for further investigation if the issue persists after following the action plan.
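To confirm whether the certificate chain presented by Prism is self-signed, it can be inspected from any machine with OpenSSL. This is a generic sketch assuming the default Prism port 9440; substitute your PC or PE virtual IP:
$ openssl s_client -connect <prism-ip>:9440 -showcerts </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates
If the subject and issuer are identical, the certificate is self-signed, and SSL protocol filtering software may block the console connection.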
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""netsh advfirewall set allprofiles state on""
null
null
null
null
""ISB-100-2019-05-30"": ""ISB-056-2017-07-21""
null
null
null
null
KB6031
Nutanix Files - The source file name(s) are larger than is supported by the file system
Working around a Microsoft Windows limitation that does not allow manipulating files and folders when the folder tree is too deep.
When trying to restore from Windows Previous Versions for Nutanix Files Share, you might run into this error: Error: The source file name(s) are larger than is supported by the file system. This is a Microsoft Windows limitation and typically happens with files that are buried in a series of subfolders that have long names. Whenever this happens, you might be unable to restore, delete, move, or rename the subfolders and/or files in the subfolder. In such a situation, attempting to copy-paste or delete the file or sub-folder would give the following error: Source Path Too Long
Workaround: Mount the share you are trying to restore from to a Windows client. Right-Click the Share and select Properties: Select the DFS tab on the Properties window and note the DFS referral path. In the screenshot below it is \\NTNX-ntnx-files-lab-1.domain.com\myshare Now Right-Click on the Share again, and under the "Previous Versions" tab, select the snapshot that you want to restore: Double-Click on it to open it in a new Windows Explorer instance, then highlight the path in the new Windows Previous Versions Windows Explorer and note the path. For example, Z:\@GMT-2020.04.01-04.00.00: Map a new network drive on the Windows client: For the path, use the DFS referral path to the folder you noted in Step 3 (\\NTNX-ntnx-files-lab-1.domain.com\myshare) and append Windows Previous Version snapshot path noted in Step 5 (@GMT-2020.04.01-04.00.00) See the screenshot below for an example, where the path is \\NTNX-ntnx-files-lab-1.domain.com\myshare\@GMT-2020.04.01-04.00.00 Now you should see a new network drive mapped to the Windows client that contains the Previous Version of data. For example, myshare is the share on Nutanix Files server ntnx-files-lab in the screenshot below: Browse the above Windows Previous Version folder and files to restore or copy. It is recommended to use Robocopy with appropriate switches to restore data. Refer to Microsoft documentation for more details on Robocopy https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy utility. Note: If you see this error when migrating your data from an external file server to Nutanix Files, refer to Nutanix Files User Guide: Share Migration https://portal.nutanix.com/page/documents/details?targetId=Files-v4_2:fil-files-share-migrate-c.html. This feature is first available in Nutanix Files 4.1.
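When restoring a deep folder tree from the mapped Previous Versions drive, Robocopy handles long paths better than an Explorer copy-paste. A minimal sketch using the example paths from the steps above (the share name, snapshot timestamp, subfolder, and destination are placeholders to adjust; the switches shown are common choices, not the only valid ones):
C:\> robocopy "\\NTNX-ntnx-files-lab-1.domain.com\myshare\@GMT-2020.04.01-04.00.00\data" "D:\restore\data" /E /COPY:DAT /R:2 /W:5
Here /E copies subdirectories including empty ones, /COPY:DAT preserves data, attributes, and timestamps, and /R and /W limit retries and wait time on locked files.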
KB10878
VMSA-2021-0002 / ESXi OpenSLP / Disable CIM Server
VMware ESXi is impacted by VMSA-2021-0002 / CVE-2021-21974
Multiple vulnerabilities in VMware ESXi and vSphere Client (HTML5) were reported by VMware in security advisory VMSA-2021-0002. Updates are available to remediate these vulnerabilities in affected VMware products: https://www.vmware.com/security/advisories/VMSA-2021-0002.html
Nutanix clusters running on ESXi have the hypervisor impacted by CVE-2021-21974. OpenSLP as used in ESXi has a heap-overflow vulnerability. VMware has evaluated the severity of this issue to be in the Important severity range https://www.vmware.com/support/policies/security_response.html with a maximum CVSSv3 base score of 8.8 https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H.
Workaround: Disable/Enable the SLP service on VMware ESXi, per VMware KB 76372: https://kb.vmware.com/s/article/76372
Service Location Protocol (SLP) is a standard protocol that provides a framework to allow networking applications to discover the existence, location, and configuration of networked services in networks. Customers can proceed with the workaround, which has no impact on Nutanix clusters, but we recommend performing the patch update: ESXi70U1c-17325551, ESXi670-202102401-SG, ESXi650-202102101-SG.
For ESXi custom images provided by other vendors, if CIM is enabled by default and cannot be disabled, contact the vendor for support.
The steps to upgrade your ESXi hypervisor host through the Prism web console (1-click upgrade) are documented here: https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_19:upg-hypervisor-upgrade-esxi-c.html
Nutanix software is not impacted by VMSA-2021-0002. For the latest information about security and vulnerabilities, check the following sites: Nutanix Security Advisories: https://portal.nutanix.com/page/documents/security-advisories/list and the Vulnerability List: https://portal.nutanix.com/page/documents/security-vulnerabilities/list?softwareType=AOS
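As a reference, the SLP workaround from VMware KB 76372 comes down to a few host-level commands. This is a sketch for illustration only; verify the exact steps against the VMware article before applying them, and re-enable the service after patching:
[root@esxi:~] /etc/init.d/slpd stop                                # stop the SLP service
[root@esxi:~] esxcli network firewall ruleset set -r CIMSLP -e 0   # block the CIMSLP firewall ruleset
[root@esxi:~] chkconfig slpd off                                   # keep slpd disabled across reboots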
KB16748
VM attached to a VG and restored from HYCU backup will fail to power on in AHV 20230302.x
After backing up a VM with a VG attached, if you restore that VM with the VG on AHV 20230302.x versions, it will fail to power on.
AHV VMs with volume groups (VGs) may fail to power on or live migrate if the volume group was restored from a backup.
Scenario 1. After upgrading to AHV 20230302.x, VMs with volume groups (VGs) previously restored from backup may fail to power on. During the restore, HYCU adds a string '-' followed by digits to the iscsi_target_name field: nutanix@cvm:~$ acli vg.get cf1ca33e-a91e-4274-840b-fdb34baa0b3f
The following log signatures are seen in /home/nutanix/data/logs/acropolis.out when powering on the restored VM with the VG attached: 2024-02-13 13:06:15,800Z INFO task_poller.py:140 VmSetPowerState: HypervisorError: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu 751b856a-d9be-49d4-be1b-f7ec2e81d1a2 prepare begin -) unexpected exit status 1:
The /var/log/libvirt/libvirtd.log on the AHV host: 2024-02-13 13:06:11.786+0000: 5442: error : virCommandWait:2752 : internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu 751b856a-d9be-49d4-be1b-f7ec2e81d1a2 prepare begin -) unexpected exit status 1: Failed to register VM 751b856a-d9be-49d4-be1b-f7ec2e81d1a2 with avm: invalid disk UUID in iscsi IQN 'iqn.2010-06.com.nutanix:hycu-clone-vg-b23dfc56-9988-4a99-b587-0b6ef172bc29-1707815472746'
The /var/log/libvirt/qemu/hook.log on the AHV host: 2024-02-13 13:06:11,767: root:DEBUG: Registering VM 751b856a-d9be-49d4-be1b-f7ec2e81d1a2 with avm...
Scenario 2. An AHV upgrade to AHV 20230302.x will fail if the cluster has VMs with an attached VG that was restored from a HYCU backup. A host entering maintenance mode will fail to evacuate VMs with such VGs. nutanix@cvm:~$ ecli task.get 3a97b2c4-0705-43c8-9dba-83d77fa3bbb1
In /home/nutanix/data/logs/acropolis.out: 2024-03-19 16:03:03,780Z INFO base_task.py:744 Task 15fab453-c388-428b-a372-82abc128cd5b(VmMigrate a986b5a1-2492-4028-8511-0558ceb44f0e) failed with message: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu a986b5a1-2492-4028-8511-0558ceb44f0e prepare begin -) unexpected exit status 1: Failed to register VM a986b5a1-2492-4028-8511-0558ceb44f0e with avm: invalid disk UUID in iscsi IQN 'iqn.2010-06.com.nutanix:hycu-clone-vg-8f2a9370-1b87-4f10-99e2-6c0edc33ca0b-1710839644544'
This issue is resolved in: AOS 6.8.X family (eSTS): AHV 20230302.100173, which is bundled with AOS 6.8. Upgrade both AOS and AHV to the versions specified above or newer.
Workaround
Clone the VG with acli: acli vg.clone <new VG_name> clone_from_vg=<VG_name>
Confirm the iscsi_target_name does not have '-' followed by digits: nutanix@cvm:~$ acli vg.get ceff467-59d5-4ef5-b3cd-6f9cf492d62e
Detach the old VG and attach the cloned VG to the VM. Power on the VM. In case of an upgrade failure, retry the upgrade.
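To find all potentially affected VGs before an upgrade, a loop like the one below can be used. This is a sketch, not an official tool: it reuses the vg.list idiom shown earlier in this document and flags any VG whose iscsi_target_name ends in '-' followed by a long run of digits, as in the HYCU examples above:
nutanix@cvm$ for vg in $(acli vg.list | awk '{print $2}' | grep -v Group); do acli vg.get $vg | grep -E 'iscsi_target_name.*-[0-9]{10,}' >/dev/null && echo "possibly affected VG: $vg"; done
Each VG reported should be cloned and re-attached as described in the workaround.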
KB1809
Acceptable limits for AcceptSucceededForAReplicaReturnedValue in Cassandra system logs
On configuring a syslog server, you may notice a high occurrence of the following in the syslog console: Failure reason:AcceptSucceededForAReplicaReturnedValue : 1 Error [3] This information is extracted from the following log on the CVM: nutanix@cvm$ ~/data/logs/cassandra/system.log In one instance, this error was observed about 7700 times per hour. The question was raised if this rate of occurrence is normal or if this indicates a problem on the CVM.
The AcceptSucceededForAReplicaReturnedValue message indicates a CAS (Compare and Swap) error occurred. Acceptable occurrence rates are 3 to 4 messages per second. This means that the maximum acceptable limit per hour is 60 * 60 * 4 = 14400 occurrences.
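To estimate the occurrence rate in your environment, count the entries in the current Cassandra system log and relate the count to the time span the log covers. A minimal sketch (run on each CVM; this counts the whole file, so check the first and last timestamps to normalize the count to a per-hour rate before comparing against the 14400/hour limit):
nutanix@cvm$ grep -c "AcceptSucceededForAReplicaReturnedValue" ~/data/logs/cassandra/system.log
nutanix@cvm$ head -1 ~/data/logs/cassandra/system.log; tail -1 ~/data/logs/cassandra/system.log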
KB14985
Erasure Coded (EC) egroups encounter kSliceChecksumMismatch error, causing disk removal to get stuck or FSVM reboot loop
Erasure Coded (EC) egroups encounter kSliceChecksumMismatch error, causing disk removal to get stuck or FSVM reboot loop
Identification:
1. Verify if the disk removal is stuck because of pending egroup replication. The Curator master info logs also log the egroup IDs for which the egroup replication is stuck and pending. allssh "grep -i 'Egroups for removable disk' data/logs/curator.INFO" Example: nutanix@CVM:~$ allssh "grep -i 'Egroups for removable disk' data/logs/curator.INFO"
2. Identify if the egroup that is pending replication is marked as a corrupt egroup due to kSliceChecksumMismatch using the Stargate links page. allssh "links -dump http:0:2009/corrupt_egroups" Sample output: nutanix@cvm:~$ allssh "links -dump http:0:2009/corrupt_egroups"
3. Verify if the affected corrupt egroup is an EC egroup by checking the output of medusa_printer. For an EC egroup, the control block section of the medusa_printer command shows the erasure_code_parity_egroup_ids field, which contains the parity egroup ID for this egroup. nutanix@cvm:~$ medusa_printer --lookup egid --egroup_id <egroup id> | head -n 50 Sample output: nutanix@cvm:~$ medusa_printer --lookup egid --egroup_id 2419468069 | head -n 50
4. Reviewing the Stargate logs with the corrupt egroup id, we can observe that erasure code decode ops fail with error kDataCorrupt and EC fixer ops fail with the kErasureDecodeTooManyFailures error. E20230330 10:23:18.294310Z 22431 erasure_code_base_op.cc:2222] EC op 83750106 failed to read chunk starting at slice index 120 for egroup 2419468069 error kDataCorrupt
5. Running the nutanix_fsck.py script for the affected corrupt egroup as per KB 3850 https://portal.nutanix.com/kb/3850 shows a checksum mismatch error, and the fsck operation eventually fails. 2023-03-30 11:15:01,722Z INFO nutanix_fsck.py:373 Copying 172.20.200.87:/home/nutanix/data/stargate-storage/disks/ZAD7DGP3/data/9/3/2419468069.egroup to /home/nutanix/tmp/nutanix_fsck_data/2419468069.data.replica.2302716050
It has been identified by Engineering that erasure code decode ops cannot auto-fix the kSliceChecksumMismatch condition. If EC egroups encounter kSliceChecksumMismatch, the auto-reconstruction of these egroups fails repeatedly. This issue has been identified as a software defect (ENG-555628 https://jira.nutanix.com/browse/ENG-555628) and requires engineering intervention to fix the affected corrupted egroups. Open an ONCALL with the guidance of an STL to fix this particular issue.
KB6741
Nutanix Files - Orphaned File Server VMs while Migrating Nutanix Files server between ESXi Nutanix clusters (MetroAvailability pair)
When performing a PD migrate, orphaned VMs are left behind in vCenter's inventory. On a MetroAvailability setup, the destination is also in the same cluster. This means vCenter cannot register the FSVMs with the intended name because the original name is taken by the orphaned VM record.
When migrating a Nutanix Files server cluster to the remote site, where the remote site is the other side of a MetroAvailability pair, you may see an issue during activation where the task to activate the file server hangs at 47% until it finally times out. Reviewing the subtasks, you see only one VM change power state task, instead of one for each FSVM. Reviewing the VMs in vSphere, you see two entries for each FSVM, one labeled "<FSVM_name> (Orphaned)" and another labeled "<FSVM_name> (1)". Example:
An orphaned VM in vSphere is an inventory object in vCenter for which the backing files can no longer be found. Because the VM files can no longer be seen but the VM was not removed from inventory, it remains as an "orphaned" VM.Before running the activate task for the migrated Nutanix Files server or after the previous activate task has failed, clean up the orphaned entries and the modified VM names. For each "<FSVM_name> (Orphaned)" entry in vSphere, right-click and remove from inventory. NOTE: Removing a VM from inventory does not delete the VM from disk. If you removed the wrong VM from inventory, you would need to browse the datastore, locate the vmx file, right-click it and choose "add to inventory" and select the appropriate host or folder location to bring it back into inventory. For each "<FSVM_name> (1)" right-click the VM, select rename, and remove the " (1)" from the end of the name. You should now have one entry per FSVM with the correct name.Having completed the workaround above, you should be able to run the Activate workflow for the migrated Files server without issue.
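If the vSphere inventory is large, orphaned entries can also be listed from PowerCLI instead of clicking through the UI. This is a generic sketch using standard PowerCLI cmdlets (the vCenter name is a placeholder):
PS> Connect-VIServer <vcenter-fqdn>
PS> Get-VM | Where-Object { $_.ExtensionData.Runtime.ConnectionState -eq "orphaned" } | Select-Object Name
Each FSVM returned here corresponds to an "(Orphaned)" entry that should be removed from inventory as described above.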
KB16030
NDB | DBServers getting marked as ERA_DAEMON_UNREACHABLE due to API slowness
Due to API slowness in high-load/stress setups, database servers are being marked as being in the ERA_DAEMON_UNREACHABLE state.
In setups where there is a high load, the APIs take a longer time to execute due to the high number of requests and because some of the application logic, like the auth token (prior to 2.5.2.1) and EraDBServerCache (prior to 2.5.3), blocks threads for a longer amount of time. This leads to the update calls not updating the client on time, and the ERADBServerHealthCheckEngine marks the database server as ERA_DAEMON_UNREACHABLE during its execution.
The proposed solution aims to reduce the frequency of database servers switching back and forth between the UP and ERA_DAEMON_UNREACHABLE states. This is achieved by adjusting the dbserver_unreachable_interval_threshold_min property from 1 minute to 10 minutes. The ERADBServerHealthCheckEngine evaluates this property and labels a database server as ERA_DAEMON_UNREACHABLE if the time difference between the current time and the last update time for the database server exceeds the configured threshold (1 minute by default).
From the NDB UI: Log in to the NDB UI using admin credentials. Navigate to the era config page by modifying the URL to https://<era_server_ip>/#/eraconfig. Search for the property dbserver_unreachable_interval_threshold_min. Change the value from 1 to 10. Click on Update.
Using the CLI: era-server -c "config era_config update name=dbserver_unreachable_interval_threshold_min value=10"
KB12731
Alert - A160148 - FileServerPartialNetworkState
This Nutanix article provides the information required for troubleshooting the alert A160148 - FileServerPartialNetworkState or your Nutanix Files cluster instance.
From Files version 4.1 onwards (minimum NCC version 4.6.0), configuring multiple subnets on the client-side network is supported. If multiple subnets are configured for the client network, the normal scale-out operation requires an additional step: a network patch operation to configure the newly added nodes with the remaining non-primary networks. Note: The normal scale-out operation configures the new nodes with only the primary network. To nudge the end user to perform this second network patch operation, the A160148-FileServerPartialNetworkState alert is raised to indicate the partial network state of the file server upon completion of the first scale-out operation. For an overview of alerts, including who is contacted when an alert case is raised, see KB-1959 https://portal.nutanix.com/kb/1959. Sample Alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Alert ID": "The file server is in a partial network state." }, { "Alert ID": "The scale-out operation was performed on a fileserver with a multi-VLAN configuration." }, { "Alert ID": "Refer to KB article 12731. Contact Nutanix support if the issue persists or if assistance is needed." }, { "Alert ID": "The file server is in a partial network state." }, { "Alert ID": "The file server is in a partial network state." }, { "Alert ID": "This includes the reasoning for the triggered Alert \"{message}\"" } ]
To resolve the alert, use the following afs CLI command on any FSVM to create a network patch task, providing the remaining non-primary network IPs for the new nodes. <afs> afs net.patch_external input-json-file
Note: This is a CLI-based feature; hence, this option is unavailable in the UI. This command expects a JSON file with IP and network UUID details. Follow the help command to view the JSON file format. <afs> help net.patch_external
In the above code snippet, two secondary external networks are configured; the primary network is not included because it is already configured by default on newly added nodes. The following points need to be considered: A managed network entry only requires the "UUID" field. An unmanaged network entry requires the "UUID" and "ip_list" fields. The invocation of the above CLI command will create a NetworkUpdateTask. If the task fails, consider engaging Nutanix Support https://portal.nutanix.com/.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC log bundle on the Minerva leader using the following command: nutanix@cvm$ ncc log_collector --file_server_name_list=<fs_name> --last_no_of_days=<x> fileserver_logs
Note: Execute the <afs> info.get_leader command from one of the CVMs (Controller VMs) to get the Minerva leader IP.
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the logs being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB-1294 https://portal.nutanix.com/kb/1294.
Since this is an external alert, the following logs are expected in /home/nutanix/data/logs/minerva_ha_dispatcher.log on FSVMs, for AddNvmTask. Multi-vlan config detected, marking new VGs as bad.
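Referring back to the net.patch_external command above: for illustration only, the input JSON might contain entries like the following, based on the field requirements described in this article. The exact wrapper structure must be taken from the help net.patch_external output; the UUIDs and IPs here are placeholders:
[
  { "uuid": "<managed-network-uuid>" },
  { "uuid": "<unmanaged-network-uuid>", "ip_list": ["10.10.10.11", "10.10.10.12", "10.10.10.13"] }
]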
KB10694
Alert - A130192 - Conflicting NGT policies
Troubleshooting alert A130192 Conflicting NGT policies.
This Nutanix article provides the information required for troubleshooting the alert Conflicting NGT policies for your Nutanix cluster. Alert OverviewThis alert is generated due to conflicting NGT (Nutanix Guest Tools) policies.Sample Alert Warning : Conflicting NGT policies found for the VM. Output Messaging [ { "130192": "This check is scheduled to run every 1 minute, by default.", "Check ID": "Schedule" }, { "130192": "There are conflicting NGT policies for the VM.", "Check ID": "Description" }, { "130192": "More than one NGT policy is applicable for this VM. Please resolve the conflicts.", "Check ID": "Cause of Failure" }, { "130192": "Check the logs for error information or contact Nutanix support.", "Check ID": "Resolutions" }, { "130192": "VM is not rebooted after NGT install/upgrade.", "Check ID": "Impact" }, { "130192": "A130192", "Check ID": "Alert ID" }, { "130192": "Conflicting NGT policies", "Check ID": "Alert Title" }, { "130192": "Conflicting NGT policies found for the VM.", "Check ID": "Alert Message" } ]
Check and ensure that there is no more than 1 NGT policy for each VM. And remove the conflicting policies via Prism Central. If this step does not help, contact Nutanix Support https://portal.nutanix.com/ as soon as possible to troubleshoot the problem fully. Collect additional information and attach them to the support case. Before opening a support case, check the recent Alerts in Prism.Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true Attaching Files to the CaseTo attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting AssistanceIf you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also click the Escalate button in the case and explain the urgency in the comment. Closing the CaseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case. You can also update the support case saying it is okay to close the case.
KB10831
File Analytics - Upgrade fails via LCM
This article helps in troubleshooting FA upgrade failure through LCM due to file analytics not being enabled in the file server
While trying to upgrade file analytics via LCM you may see the following error: Operation failed. Reason: Update of File Analytics failed on x.y.z.10 (environment cvm) at stage 1 with error: Below error signature is seen in lcm_ops.out on LCM leader: 2021-02-11 09:08:29,946 {"leader_ip": "10.240.1.202", "event": "Finished running filter AOSMinimumVersionFilter", "root_uuid": "ef6254c2-e463-4d60-93e5-8b1c19766354"} Following is seen in lcm_ops.out on LCM leader 2021-02-11 14:08:18,595Z ERROR helper.py:103 (10.240.1.206, update, d7a8e3fd-ce70-41b2-bca9-4fc80b235b58) EXCEPT:{"err_msg": "Upgrade failed: Traceback (most recent call last):\r\n File \"/opt/nutanix/upgrade/bin/run_upgrade_tasks.py\", line 10, in <module>\r\n import env\r\n File \"/opt/nutanix/upgrade/bin/env.py\", line 59, in <module>\r\n import gflags\r\nImportError: No module named gflags\r\nPre upgrade checks failed for file analytics\r\n", "name": "File Analytics", "stage": 1}
This issue was seen when File Analytics was not enabled. Validate whether File Analytics is enabled for the file server. The command should return something like the following if the FA was connected and enabled: <nuclei> partner_server.list
In cases where it is not enabled, it would show the following: <nuclei> partner_server.list
Steps for enabling File Analytics can be found here https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v2_1:ana-fs-analytics-enable-t.html. Contact Nutanix Support for further assistance.
KB3379
NCC Health Check: cluster_connectivity_status
The NCC health check cluster_connectivity_status only runs from Prism Central and generates an alert on Prism Central when a registered Prism Element cluster sync lags.
The NCC health check cluster_connectivity_status generates an alert on Prism Central when a registered Prism Element cluster Arithmos-IDF sync (i.e metric/stat sync status) lags and if the alert manager is down or in a crash loop. This check runs on Prism Central only. Running the NCC check You can run this check as part of a complete NCC health check: nutanix@PrismCentral$ ncc health_checks run_all You can also run this check separately: nutanix@PrismCentral$ ncc health_checks system_checks cluster_connectivity_status You can also run the checks from the Prism web console Health page: Select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 5 minutes, by default. This check will generate an alert after 3 consecutive failures across scheduled intervals. Sample output For Status: PASS Running : health_checks system_checks cluster_connectivity_status For status: FAIL Running : health_checks system_checks cluster_connectivity_status Output messaging [ { "Check ID": "Tests whether the cluster connectivity is fine" }, { "Check ID": "The connection between Prism Central and Prism Element is down or Services on Prism Central are crashing." }, { "Check ID": "Ensure that cluster network connectivity is fine and all the CVM services are up." }, { "Check ID": "Cluster data shown in the Prism Central is not up to date." }, { "Check ID": "A200000" }, { "Check ID": "Cluster Connectivity Status" }, { "Check ID": "Connectivity issues on cluster cluster_name" }, { "Check ID": "component data from cluster cluster_name is not up-to-date." } ]
Should NCC report a FAIL, verify the following:
If running Prism Central (PC) 5.17 or later, and NCC 3.9.5 or earlier, then upgrade NCC to version 3.10.0 or later, and rerun the check.
The Prism Element (PE) cluster is reachable through the network from the PC. Check if you can ping the cluster virtual IP address of the PE cluster from the PC VM. If not, fix any underlying network issues that might be present in your environment.
On the PE cluster, check if all services are stable. You can use one of the following commands to check the current status of services on the cluster: nutanix@cvm$ cluster status | grep -v UP nutanix@cvm$ allssh "genesis status | grep '\[\]'"
Check if any services have crashed recently by running the following command: nutanix@cvm$ allssh "ls -ltrhS ~/data/logs/*FATAL*"
If you see any recent .FATAL file generated on any CVM, check if the service is in a crash loop or if it was a one-time occurrence. To do this, you can run the following command to list the service PIDs (process IDs). If the PIDs keep changing, it means that the service is repeatedly crashing. nutanix@cvm$ watch -d genesis status
Note: The watch -d genesis status command is not a reliable way to confirm the stability of the "cluster_health" and "xtrim" services, since these services, for their normal functioning, temporarily spawn new process IDs. A new temporary process ID, or a change in the process ID count in the watch -d genesis status output, may give the impression that the "cluster_health" and "xtrim" services are crashing, but in reality, they are not. Rely on the NCC health check report or review the logs of the "cluster_health" and "xtrim" services to ascertain if they are crashing.
The check runs only on the PC. It checks that the last stat sync timestamp received from the PE is within 10 minutes. If the lag is more than 10 minutes, this check will fail, and in case of 3 consecutive failures, the alert will notify of an Arithmos-IDF sync issue. If the metrics are not visible in the PC for the PE cluster and there is no alert, ensure you are running at least NCC 4.0.0.
KB16018
NDB | Oracle DB Registration failed with error "Error occurred while copying era_priv_cmd.sh"
This article is used to cover a scenario the Oracle DB Registration failed with error "Error occurred while copying era_priv_cmd.sh"
Unable to initiate Oracle DB registration with NDB as it gives the following error: Error occurred while copying era_priv_cmd.sh: The vm with IP: x.x.x.26 is not reachable with the credentials provided
First, try to isolate and rule out issues related to credentials and connectivity between the NDB server and the DB server VM that could cause the failure in copying era_priv_cmd.sh.
This KB explains a specific situation where the registration operation does not start because it fails at the server layer in GenericUtil.java, where the era_priv_cmd.sh file is copied. The following traceback is reported in the server.log of the NDB server: 2023-11-07 06:23:36,841 [3-exec-50] INFO [ERADBServerServiceImpl] Register-dbserver request has failed, perform related cleanup
Note: The first few lines indicate the GenericUtil.java error.
Execute the following command on the NDB server to see if it throws any errors. In the IP_address field, the DB server VM IP needs to be added. /usr/bin/sshpass -p <password> scp -o StrictHostKeyChecking=no /opt/era_base/era_priv_cmd.sh <user>@<IP_address>:~/
If the above command reports the following failure message, then you are hitting this issue; proceed to the solution section. If not, continue troubleshooting further. Could not chdir to home directory /home/orabase/oracle: No such file or directory
This indicates that the Oracle home directory is missing or misconfigured.
Execute the following steps on the DB server VM:
1. Get the user home that is set in /etc/passwd. For example, in this case it is oraera:x:1010:1002: RIXXXXX: /ora12203/orabase/oracle:/bin/bash [oraera@XXXXXXX ~]$ cat /etc/passwd
2. Create the user home using "sudo mkdir -p <user_home_dir>". For example, in this case the command should be sudo mkdir -p /ora12203/orabase/oracle
3. Change ownership of the created directory to the user using "chown <user>:<user> <user_home_dir>". For example, in this case the command should be chown oraera:dba /ora12203/orabase/oracle
After that, retry the operation, and it should allow you to initiate the DB registration operation.
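As a quick verification sketch, using the example user and path from this article (substitute your own values), confirm that the home directory recorded in /etc/passwd now exists with the right ownership:
$ getent passwd oraera                 # confirm the home directory path recorded for the user
$ ls -ld /ora12203/orabase/oracle      # confirm the directory exists and is owned by oraera:dba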
KB15959
Nutanix Files - Veeam backup failing with 'NfsGetExports' error at Backup application for Nutanix Files NFS share
A Ganesha crash was experienced when initiating a Veeam backup with File Server 4.3.x. Consequently, the backup fails for all NFS shares of Nutanix Files, and the File Server enters a High Availability (HA) state.
Following the upgrade of Nutanix Files to version 4.3.x, Veeam backup encounters a failure with the 'NfsGetExports' error message within the backup application, specifically affecting NFS shares. failed: 10/19/2023 9:19:42 AM :: Failed to build object list for <FS-Name>: Error: Failed to call RPC function 'NfsGetExports': The network location cannot be reached. For information about network troubleshooting, see Windows Help. nfs41_readdir on / failed.
This behavior results in the NFS Ganesha service crashing, causing Nutanix Files to enter a High Availability (HA) state when the Veeam backup is initiated.
Here is how to validate the scenario:
1. Upon inspection of minerva_nvm.log, you may observe the "Failed to send RPC request" error message. nutanix@fsvm:grep -B10 "Failed to send RPC request" /home/nutanix/data/log/minerva_nvm.log
2. Multiple events indicating the restart of the minerva_ha service can be observed in both ha_monitor.log and minerva_ha.log. nutanix@NTNX-10-129-xx-xx-A-FSVM:$ allssh 'grep "became the leader for node" data/logs/minerva_ha.log*' nutanix@NTNX-10-129-xx-xx-A-FSVM:/home/log/ganesha$ allssh "grep crash /home/nutanix/data/logs/ha_monitor.log"
3. The occurrence of multiple "NFS server initialized" messages in ganesha.log is indicative of the Ganesha service crashing. nutanix@NTNX-10-129-xx-xx-A-FSVM:/home/log/ganesha/cores$ allssh 'sudo less /home/log/ganesha/ganesha.log | grep -i "NFS SERVER INITIALIZED" | wc -l'
4. Ganesha core files are generated while the backup process is initiated. nutanix@NTNX-10-129-xx-xx-A-FSVM:/home/log/ganesha$ allssh "ls -l /home/log/ganesha/cores/"
5. Inspect the Ganesha core; it shows the signature message below and indicates that the service is coring because of a segfault, since a NULL fh pointer is passed to minerva_is_remote_fh. nutanix@NTNX-10-129-xx-xx-A-FSVM: sudo gdb /home/log/ganesha/cores/<core filename>
Root cause: When a readdir is performed on the FS pseudo root "/", the Nutanix NFS server crashes when the NFSv4 client asks for an FS-level attribute like FATTR4_SPACE_AVAIL. When it does, there is a bug that leads to a crash while fetching the FS-level attribute, due to a mismatch between the child FSAL and the parent object handle. This issue does not happen for the Linux NFS client, as it does not ask for FS-level attributes during readdir. There are proprietary NFSv4 clients, like the one for Veeam Backup Manager, which ask for FS-level attributes during readdir so that a subsequent GETATTR can be avoided. A permanent fix is available in Nutanix Files version 4.4.x. This crash is caused by the Veeam client's use of readdir attributes. Ganesha does not expect FS-level attributes during a readdir, and the issue is tracked under ENG-584040 https://jira.nutanix.com/browse/ENG-584040.
Workaround: The permanent solution to the identified issue is to upgrade the File Server to version 4.4.x or a later release. This upgrade is recommended to address and resolve the problems associated with the current version. If an immediate upgrade is not feasible, an interim solution involves disabling NFSv4 on the file server. Following this, it is necessary to re-mount the shares on the client side to mitigate the issue. This temporary measure can help address the problem while planning for the eventual upgrade to a compatible version.
Instructions for deactivating NFSv4 for Nutanix Files via the GUI:
1) Launch the File Server console from PE
2) Go to the Configuration tab and select the Authentication option
3) Under Directory Service, select "Show NFS Advanced Options"
4) Uncheck the NFS 4 option
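After NFSv4 is disabled, clients need to re-mount the shares. As a generic example on a Linux client (the mount point, server, and share names are placeholders), forcing NFSv3 looks like this:
# umount /mnt/myshare
# mount -t nfs -o vers=3 <fileserver-fqdn>:/<share> /mnt/myshare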
KB13146
Flow Network Security may block traffic using virtual IP address or forwarded traffic
Nutanix Flow Network Security (FNS) controls traffic by IP address. If the protected VMs use virtual IP addresses that are not discovered by AOS, or forward traffic from other clients, then the traffic is blocked by FNS.
Flow Network Security (FNS) may inadvertently block traffic if the traffic is sent from a virtual IP address (VIP) or the traffic is forwarded from another client. You may observe this behavior even if the applied security policy allows the traffic. The traffic blocked in these scenarios does not appear as blocked traffic in the security policy view or the policy hit logs.
Several scenarios using a VIP or forwarding exist, as described below. Flow Network Security depends on the same mechanisms as AHV IPAM or ARP/DHCP snooping to discover VM IP addresses. Scenarios 1-3 are cases where the source IP address cannot be discovered. Flow Network Security treats traffic using undiscovered IP addresses as invalid and hence blocks it.
Scenario 1. Load balancing using masquerading (NAT) or a virtual router.
In this scenario, traffic handling follows the logic below: A load balancer (LB) receives the request packet from the client. The LB forwards the request packet to one of the real servers (backend, RS) by rewriting the source MAC address to the LB's MAC address and the destination IP address to the target RS, while keeping the source IP address (client IP address). The RS receives the request packet and sends a response to the LB. Generally, the default gateway of the RS is set to the LB in this configuration. The LB receives the response packet and forwards it to the client.
In a scenario where Flow Network Security protects an LB, Flow Network Security blocks the forwarded traffic from the LB regardless of the configured security policy. Conversely, where Flow Network Security protects an RS (and not an LB) and the policy allows the traffic, Flow Network Security honors the policy and allows the traffic, including VIP traffic.
Scenario 2. Load balancing with direct server return configuration (DSR)
In this scenario, VIP and traffic handling follows the logic below: A load balancer (LB) responds to ARP requests for the VIP. Prism does not show the VIP as an IP address of the real servers (backend, RS). The RS does not respond to ARP requests for the VIP but responds to IP packets destined for the VIP when a packet arrives. The LB initially receives the request packet from the client. The LB forwards the request packet to one of the RSs by rewriting the destination MAC address to the target RS while keeping the source IP address (client IP address) and destination IP address (VIP). The RS receives the request packet whose destination IP address is the VIP. The RS sends a response based on the received packet, that is, sends it back directly to the client identified by the source IP address of the request packet. The source address of the response packet is the VIP.
In a scenario where Flow Network Security protects either the LB or the RS (or both), Flow Network Security blocks the following traffic regardless of the configured security policy: LB->RS traffic with a source IP address of the client IP address; RS->Client traffic with a source IP address of the VIP.
Scenario 3. An active-active virtual IP configuration using Windows Network Load Balancer (Windows-NLB)
In this scenario, VIP and traffic handling follows the logic below: All members of the Windows-NLB respond to ARP requests for the VIP with a virtual MAC address. Prism does not show the VIP as an IP address of any member. Some members receive the request packet whose destination IP address is the VIP. The detailed behavior of delivering the request packet to members depends on the NLB operation mode. Depending on the source IP address, one member responds to the client. The source address of the response packet is the VIP.
In a scenario where Flow Network Security protects members of the Windows-NLB, Flow Network Security blocks the response packet from each member with the source IP address of the VIP, regardless of the configured security policy.
Scenario 4 (not affected). An active-passive virtual IP configuration (for example, a Linux HA cluster with Keepalived or Windows Server Failover Cluster (WSFC))
In this scenario, VIP and traffic are handled as follows: One of the cluster members becomes active and holds the VIP. The active member responds to ARP requests for the VIP with its own MAC address. Prism shows the VIP as an IP address of the active member. The active member receives the request packet from the client. The active member sends a response based on the received packet. The source address of the response packet is the VIP.
In a scenario where Flow Network Security protects the cluster members, Flow Network Security does not block traffic, including VIP-related traffic, if the configured security policy allows the traffic.
Note: There is a case where traffic for the VIP is blocked even in Scenario 4, depending on the network environment. See KB-14166 https://portal.nutanix.com/kb/14166.
How to check the IP addresses discovered by Flow Network Security
Prism Element GUI > Home > VM > select the VM, and move the cursor to the "IP address" column. It shows the discovered addresses in a pop-up overlay window.
The current workaround is to remove VMs that use undiscovered IP addresses from security policies. Nutanix Engineering is aware of the issue in Scenarios 2 and 3 and is working on a fix in a future release.
KB12965
Security Dashboard STIG Guidance Reference
Security Dashboard STIG Guidance Reference
STIG/SRG Overview
STIG (Security Technical Implementation Guide) - These are the guidelines that DISA (Defense Information Systems Agency) publishes for certain components of information technology and communication infrastructure used by the defense community that DISA supports.
SRG (Security Requirements Guide) - These are used as a sort of overarching security methodology for a type of technology, such as a web browser. They are full of general statements and recommendations on how to secure a type of technology, without calling out a specific flavor or vendor.
The primary difference is that STIGs mandate particular configuration settings and precise steps to confirm compliance, while SRGs provide a more general statement of a requirement. Most, if not all, STIGs are based on SRGs and take the general requirements and further define how a specific vendor's product should be configured to meet the SRG requirement. While a STIG would tell you specific settings to lock down, SRGs would provide a guideline for how to secure an operating system in general. So, in the absence of a relevant STIG, an SRG would be used. Updates to the STIGs can be published quarterly, so it is important to reference the most recently released STIG, though they are not released as frequently as CVEs. Updates to the SRGs can also be published quarterly.
Link to the STIGs download site: https://public.cyber.mil/stigs/downloads
Who uses STIGs? US DoD, many other US agencies, non-US governments, key infrastructure companies, and security-conscious customers around the world.
Audience
This article is intended for system administrators, Information Systems Security Officers (ISSOs), and auditors who are responsible for configuring, testing, or validating controls based on the required compliance framework.
Background
Nutanix has a goal of making security simple for our customers. We strive to do this in such a way that it is almost invisible. Part of achieving this goal is for us to have a foundation in inherently secure products. When we develop our products with security ingrained, we can then move to a position of maintaining, and when needed, improving the security in these products. Part of our overall security posture is conformance to STIG controls. Nutanix strives to adhere to and comply with DISA STIG requirements following their STIG/SRG guidance.
Software Requirements
The information contained in this document is verified on AOS.
Related Documentation
Refer to the Nutanix Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:Nutanix-Security-Guide for information about using Nutanix security features and hardening instructions.
STIG Compliance on Nutanix Products
Note: Refer to the Nutanix Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:Nutanix-Security-Guide for information about components that are not built into the Nutanix systems by default.
STIG Contents
Red Hat Enterprise Linux 7 STIG - Ver 3, Rel 6 https://download.nutanix.com/kbattachments/12965/Red%20Hat%20Enterprise%20Linux%207%20STIG%20-%20Ver%203%2C%20Rel%206.pdf
Red Hat Enterprise Linux 7 STIG - Ver 3, Rel 7 https://download.nutanix.com/kbattachments/12965/Red%20Hat%20Enterprise%20Linux%207%20STIG%20-%20Ver%203%2C%20Rel%207.pdf
Red Hat Enterprise Linux 7 STIG - Ver 3, Rel 8 https://download.nutanix.com/kbattachments/12965/Red%20Hat%20Enterprise%20Linux%207%20STIG%20-%20Ver%203%2C%20Rel%208-1.pdf
The STIG Guidance Reference provides information on how, in certain cases, Nutanix may meet certain RHEL STIG controls in ways that differ from how they are defined in the RHEL STIG. There are also some controls that do not apply to the Nutanix product at all. This document identifies these controls and provides an explanation of why the control does not apply, or are met in a different way than specified in the RHEL STIG. Nutanix uses SaltStack / SCMA (Security Configuration Management Automation) to maintain the security posture of the operating system and to self-correct the security baseline configuration of the AOS operating system to remain in compliance. If these components are found to be non-compliant, then the component is set back to the required security settings without needing intervention.
KB15015
NKE cluster deployment will fail when only HTTP proxy is selected in Prism Element
NKE cluster deployment will fail when only HTTP proxy is selected in Prism Element
Customers using a proxy in their environment may notice that new NKE cluster deployments fail with the following error in the karbon_core.out file: 2023-06-20T00:26:20.31Z etcd.go:601: [ERROR] [k8s_cluster=test] failed to deploy etcd profile on nodes ([653b1a3d-4f5b-437e-88f7-d191f1dcf53c 84467deb-aec8-4518-9c3e-364247ca6a85 076f6240-6b15-40b7-b917-ac3270b305bf]): Operation timed out: context deadline exceeded
Checking the etcd VM, you may see the below error in the etcd service logs: [root@test-7827a7-etcd-0 ~]# journalctl -u etcd --no-pager -f
Checking the proxy configured for the docker service, you will see only an HTTP proxy configured, like below (check the highlighted section): [root@etcd-0 ~]# systemctl cat docker
A docker pull with an HTTPS proxy set works fine, but with only HTTP_PROXY it fails. (NOTE: the below commands are run in the etcd VM; master and worker VMs have containerd, so the commands are different.) [root@etcd-0 ~]# docker pull nginx
In the command above, replace <proxy_ip> and <proxy_port> with the correct proxy server IP address and port number for your environment.
Trying the same with HTTPS_PROXY set will work fine. To test with HTTPS, run the following commands: [root@etcd-0 ~]# sed -i 's/HTTP_PROXY/HTTPS_PROXY/' /etc/systemd/system/docker.service.d/http-proxy.conf
Pulling the image works fine with HTTPS: [root@etcd-0 ~]# docker pull nginx
Checking the Prism Element -> Settings -> HTTP Proxy page, you will see only the HTTP option set.
NKE detects the proxy configured in the underlying PE and configures the proxy for all the NKE VMs. When the underlying PE has only the HTTP option selected as the proxy type, NKE deployments will fail since docker and containerd expect an HTTPS proxy to connect to https://quay.io, causing image pull failures. To resolve the issue, check the HTTPS proxy type in the Prism Element cluster where the NKE cluster will be deployed.
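To confirm which proxy types are configured on the PE cluster from the CLI, a quick check like the following can be used (a sketch; the exact output format may vary by AOS version):
nutanix@cvm$ ncli http-proxy ls
The output should list the configured proxy and its proxy types; ensure HTTPS is included before retrying the NKE deployment.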
KB2062
Supportability for Intel TXT
Supportability for Intel Trusted Execution Technology
Support for Intel TXT (Trusted Execution Technology) on the Nutanix platform: http://www.intel.com/content/www/us/en/architecture-and-technology/trusted-execution-technology/where-to-buy-isv-txt.html
Supermicro, as a vendor, supports Intel TXT.
As of now, Nutanix does not support TXT (Trusted Execution Technology). There are plans to integrate this feature; however, there are no timelines attached.
KB3751
NCC Health Check: nvme_status_check
The NCC health check nvme_status_check checks the status of NVMe drives on a node. It checks both standard and additional attributes in the smart data of an NVMe drive. It logs all the attribute values and checks the values of some attributes by comparing them with expected values or thresholds. If everything is OK, it returns pass; otherwise, it returns a warning with a warning message.
The NCC health check nvme_status_check checks the status of NVMe drives on a node. It checks both standard and additional attributes in the smart data of an NVMe drive. It logs all the attribute values and checks the values of some attributes by comparing them with expected values or thresholds. If everything is OK, it returns pass; otherwise, it returns a warning with a warning message.AOS 5.0 release brings in the support for NVMe drives on AHV and ESXi 6.0 hypervisors.AOS 5.10 release brings in the support for NVMe drives on Hyper-V.This check is scheduled to run every 30 mins. The alert is raised on the first failure.Running the NCC Check: You can run this check as part of the complete NCC Health Checks ncc health_checks run_all You can use the command below to run this check ncc health_checks hardware_checks disk_checks nvme_status_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. Sample Output: Check status: WARN Detailed information for nvme_status_check: Check Status: ERROR Detailed information for nvme_status_check: Output messaging Note : This hardware related check executes on the below hardware [ { "Check ID": "Check that host NVMe drive is functioning properly." }, { "Check ID": "NVMe Drive is damaged or worn out." }, { "Check ID": "Replace the problematic NVMe drive as soon as possible. data from NVMe drive may be harmed." }, { "Check ID": "NVMe Drive on ip_address has errors." }, { "Check ID": "NVMe Drive has errors." }, { "Check ID": "NVMe Drive on host ip_address has errors: alert_msg" }, { "Check ID": "This check is scheduled to run every 30 minutes , by default." }, { "Check ID": "This check will generate an alert after 1 failure." }, { "Check ID": "Nutanix NX" }, { "Check ID": "Dell XC" }, { "Check ID": "Fujitsu SR" }, { "Check ID": "PowerEdge" }, { "Check ID": "Intel" } ]
This check monitors a few SMART attributes to probe the status of the drive. For example, some of the things monitored by this check are: NVMe controller issues, drive temperature, spare capacity available on the drive, and media errors. To check the NVMe drive model number: nvme id-ctrl <drive device handle> | grep mn To list the NVMe drives on the node, use the command below: nutanix@cvm:~$ sudo nvme list If you see any warnings reported by this NCC check, consider engaging Nutanix Support at https://portal.nutanix.com/
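To inspect the underlying SMART data that this check evaluates (temperature, spare capacity, media errors, and so on), the nvme-cli smart-log command can be run directly. A minimal sketch, assuming the drive is enumerated as /dev/nvme0: nutanix@cvm:~$ sudo nvme smart-log /dev/nvme0 Compare attributes such as temperature, available_spare, and media_errors against the thresholds flagged by the check.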
KB15272
AHV 20230302.x - upgrade via reimaging troubleshooting guide
AHV 20230302.x - upgrade via reimaging troubleshooting guide
Starting with AHV 20230302.xxx and LCM 2.6.2, a new way of upgrading AHV is introduced: upgrade via reimaging. To date, all AHV upgrades were done ‘in place’. This works for incremental releases (based on the same underlying version of RHEL). Significant differences (new RHEL version, partition changes) are more difficult and riskier to implement via ‘in place’ upgrades. AHV 9 (20230302.xxx) is based on Rocky Linux and has a different partition layout. As a result, the first upgrade to 20230302.xxx will be done through reimaging. Upgrades between 20230302.xxx builds will continue to be done ‘in place’. Note: an upgrade via reimaging takes longer than an ‘in place’ upgrade because reimaging requires 2 host reboots instead of 1.
Reimaging overview
Upgrade via reimaging consists of the following steps:
1. Back up the host state
2. Install the host with the new version
3. Restore the host state
LCM 2.6.2 is a prerequisite for upgrades via reimaging.
Pros:
- Cannot be affected by local modifications performed on hosts.
- Unsupported modifications are removed.
- Guaranteed to have the same file system as a new install (except configuration).
Cons:
- Two reboots are required (into the reimager, then back to AHV).
- LCM cannot (yet) monitor progress in real time.
Preparation Phase
- LCM checks there is sufficient space to store artifacts (AHV ISO, Nvidia driver RPM).
- LCM copies artifacts to the host.
- LCM executes the precheck script, which checks that all expected host state configuration files are present.
- LCM executes the preparation script, which copies the reimager binary to /boot and adds a bootloader menu entry for the next boot.
- If successful, LCM reboots the host into the reimager.
If required configuration files are missing, LCM will fail with the following error: Operation Failed. Reason: LCM operation update failed on leader, ip: [x.x.x.x] due to Caught exception during upgrade: Reimage precheck failed, aborting. Other sample failure: Operation Failed. Reason: LCM operation update failed on leader, ip: [x.x.x.x] due to Caught exception during upgrade: Error mounting ahv installer iso: ret: -1, out: , err: [Errno 2] No such file or directory. Review /var/log/lcm_ahv_upgrade.log for more details about the failure.
Reimager Environment
The reimager is a stripped-down Linux appliance that uses the target AHV kernel. The environment is fully automated, and no user input is expected. No networking is available in the reimager. It has multiple protections to prevent infinite hangs:
- Failure in bootloader - falls back to booting into AHV after a timeout.
- Kernel hangs or panics - host reboots back to AHV.
- Userspace hangs - watchdog reboots back to AHV.
- Installer VM fails to start within 2 minutes - watchdog reboots back to AHV.
- Installer fails to complete within 60 minutes - watchdog reboots.
A sample LCM error: Operation Failed. Reason: LCM failed performing action ahv_reimage_success_check in phase PostActions on ip address x.x.x.x. Failed with error '/var/spool/ahv-upgrade-module/.reimage_incomplete marker file is present on the host; a reimage must have failed. Marking the upgrade as failed.'
State Save
A Python script on the AHV ISO collects a set of files into an archive. The list of files to be saved is defined by a JSON file. A sample file: https://sourcegraph.ntnxdpro.com/gerrit/ahv-metapackage/-/blob/etc/host-reimage-files.json An archive is stored initially in RAM and then copied to the AHV disk. Note: Only some logs are saved. See https://gerrit.eng.nutanix.com/c/ahv/+/760807/7/pkg/iso/prepare_reimage.sh for an example.
Potential failures:
- Unable to find the AHV root disk
- Unable to read config files
The failure log is written to /var/spool/ahv-upgrade-module/.reimage_incomplete. A sample LCM error: Operation Failed. Reason: LCM failed performing action ahv_reimage_success_check in phase PostActions on ip address x.x.x.x. Failed with error 'Reimaging error: <truncated> FileNotFoundError: etc/modprobe.d/pci-passthru.conf\nHost state backup failed
(Re)installation
Uses an identical approach to Phoenix - a QEMU-based VM booted off the AHV installation ISO. The workflow is driven by a JSON file in the metadata ISO. 
State save/restore performed as anaconda %pre and %post scripts.IPMI console will show the current reimaging operation like “Starting imaging” or “Performing post-install tasks”.Use the following command to connect to the host’s serial console to check the installation status: nutanix@cvm:~$ ipmitool -I lanplus -H <IPMI IP> -U <user> -P <password> sol activate Press Ctrl+B followed by “2” to get to a shell. State RestorePerformed after package installation but before running upgrade_config.sh.Failures towards the end of the installation process (i.e., after packages have been installed to the file system and when finalizing scripts are being run) are logged but don't cause the installation to halt. This is because the only way to be able to report them is to try and complete the reimage in the hope that LCM will be able to go in and recover the log file.The error is written to /var/spool/ahv-upgrade-module/.reimage_incomplete.Log filesMost logs are located in /var/log/installer.Some notable logs: State save: /var/log/installer/pre-scripts.logCopy state to AHV disk: /var/log/installer/pre-install-scripts.logState restore: /var/log/installer/post-scripts.logSaved state archive: /var/log/installer/host-state.dump. It can be extracted as any normal tar.gz file. Note: NCC 4.6.5.1 is required to collect logs from /var/log/installer/ folder.
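The saved state archive can be unpacked for inspection like any other tar.gz file. A sketch, assuming the archive exists on the host after a completed reimage: [root@AHV ~]# mkdir /tmp/host-state && tar -xzf /var/log/installer/host-state.dump -C /tmp/host-state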
KB16769
Nutanix Files - net.disable_ipv6 failing
IPv6 disable failing with OSError: [Errno 101] Network is unreachable
When using the AFS CLI to disable IPv6 on a Nutanix Files instance deployed via Prism Element or Prism Central, you may receive the following error: Prism Element Deployed <afs> net.get_ipv6 Prism Central Deployed <afs> net.get_ipv6
Running the command outside of the interactive AFS CLI will allow it to complete. nutanix@NTNX-A-FSVM:~$ afs net.enable_ipv6 If you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/. Collect additional information and attach it to the support case. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871. Collect Files related logs. For more information on Logbay, see KB-3094 https://portal.nutanix.com/kb/3094. CVM logs stored in ~/data/logs/minerva_cvm* NVM logs stored within the NVM at ~/data/logs/minerva* To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader, as issues have been seen otherwise. To get the Minerva leader IP on the AOS cluster, run: nutanix@cvm:~$ afs info.get_leader Once you are on the Minerva leader CVM, run: nutanix@CVM:~$ ncc log_collector --file_server_name_list= --last_no_of_days=5 --minerva_collect_sysstats=True fileserver_logs For example: nutanix@CVM:~$ ncli fs ls | grep -m1 Name Attaching files to the case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
KB1904
NCC Health Check: cvm_port_group_renamed_check
The NCC health check cvm_port_group_renamed_check determines if all the Ethernet ports in vmx files are present in the port group list.
The NCC health check cvm_port_group_renamed_check determines if the port group from the CVM (Controller VM) .vmx file matches the port group configured on the management vSwitch. This check only applies to ESXi hypervisors. This check does not apply to distributed vSwitches. Running the NCC Check You can run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check individually: nutanix@cvm$ ncc health_checks hypervisor_checks cvm_port_group_renamed_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day, by default. This check generates an alert after 1 failure. Sample output For Status: PASS Running /health_checks/hypervisor_checks/cvm_port_group_renamed_check on all nodes [ PASS ] For Status: FAIL Running /health_checks/hypervisor_checks/cvm_port_group_renamed_check on all nodes [ FAIL ] Output messaging [ { "Check ID": "Check that CVM port group name has not changed" }, { "Check ID": "CVM port group has been renamed on the ESXi hosts." }, { "Check ID": "Refer to KB 1904" }, { "Check ID": "AOS upgrade may fail." }, { "Check ID": "A106428" }, { "Check ID": "CVM Port Group Renamed" }, { "Check ID": "CVM port group has been renamed on the ESXi hosts" } ]
Verify that the CVM has the correct Port Group configured for its virtual network adapters. You can cross-check this in vSphere by clicking on the Host > Configuration Tab > Networking and CVM > Edit Settings > Network adapter1. The vSwitch Port Group used for management/external communication should match the Port Group in the CVM settings. Note: There is also a vSwitch named vSwitchNutanix, with a Port Group named svm-iscsi-pg, which is the internal connection between the CVM and the host (192.168.5.x/24); this vSwitch and Port Group should not be modified. If these are different from the default, you may need to rename or recreate the vSwitch/Port Group.
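The Port Groups present on the standard vSwitches can also be listed from the ESXi command line for comparison against the CVM configuration. A sketch, applicable to standard (non-distributed) vSwitches only: root@ESXi# esxcli network vswitch standard portgroup list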
KB16863
Prism Central login failure after every reboot
When name server order is incorrect in zeus_config_printer it can cause Prism Central login to fail immediately after a Prism Central reboot. The issue will resolve only after a mercury service restart.
Customers may notice that after a reboot of the Prism Central VM, login to the UI fails with a 403 Unauthorized error. The Prism Central cluster should have the following conditions met: 1. Prism Central has CMSP enabled. 2. After every reboot, the customer is not able to log in (403 error). The issue is resolved by restarting the mercury service but returns if the customer reboots the PC VM again. If the above conditions are met, verify the below signatures are seen. Mercury logs (mercury.INFO) will show Curl error: 2 trying to resolve the iam-proxy.ntnx-base URL I20240415 02:02:27.288221Z 77776 async_rest_client.cc:840] Initializing AsyncRestClient with a new CurlEventDriver After restart, genesis logs will show it is correcting the DNS order in /etc/resolv.conf 2024-04-15 02:01:09,919Z INFO 32954896 node_manager.py:7334 Dns search suffixes on the svm None are not in-sync with zeus's [u'prism-central.cluster.local'] In the zeus_config_printer output, you can see 127.0.0.1 is shown as the last entry. nutanix@NTNX-PCVM:~$ zeus_config_printer | grep name_server
When the Prism Central VM restarts, genesis checks whether the content of the /etc/resolv.conf file is the same as the name_server_ip_list in zeus_config_printer. This check verifies both the order of the DNS servers and the IP addresses. In this example, 127.0.0.1 was the last entry, causing /etc/resolv.conf to be overwritten with 127.0.0.1 as the last entry, as below, for a very short time: nutanix@NTNX-PCVM:~$ cat /etc/resolv.conf The issue flow is as below:
1. PC is restarted by the customer.
2. Genesis starts first and changes the /etc/resolv.conf file, pushing 127.0.0.1 to the last entry.
3. Mercury starts, reads /etc/resolv.conf, and sends all DNS resolution requests to the customer's DNS server since it is at the top of the list. The customer's DNS will not be able to resolve the IAM URLs, and resolution will fail.
4. The MSP controller starts a few minutes later, finds 127.0.0.1 is the last entry, and corrects /etc/resolv.conf, putting 127.0.0.1 as the first entry.
5. The customer, finding that logins are failing, restarts the mercury service. On restart, Mercury reads the corrected /etc/resolv.conf and is able to resolve the IAM URLs, fixing the login issue.
In CMSP-enabled clusters, 127.0.0.1 should be the first entry; if not, URLs like iam-proxy.ntnx-base cannot be resolved by services like Mercury running in the Prism Central VM. To resolve the issue, from the Prism Central UI under the Settings -> Name Servers page, remove the customer's DNS server IP entries one by one until 127.0.0.1 comes up as the first entry, and then re-add the removed customer DNS entries. Confirm from zeus_config_printer that the order shows correctly, as below: nutanix@NTNX-PCVM:~$ zeus_config_printer | grep name_server
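After re-adding the entries, the corrected order can be sanity-checked on each PCVM; 127.0.0.1 should be the first nameserver both in /etc/resolv.conf and in the zeus configuration. An illustrative check: nutanix@NTNX-PCVM:~$ grep nameserver /etc/resolv.conf nutanix@NTNX-PCVM:~$ zeus_config_printer | grep name_server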
KB10254
Steps to deploy Citrix ADM on Nutanix AHV
Steps to deploy Citrix ADM on Nutanix AHV
Citrix Application Delivery Management (ADM) formerly known as MAS is a centralized management solution which can be used to manage, monitor, and troubleshoot the entire global application delivery infrastructure from a single, unified console. Citrix ADM is also available as a Nutanix Ready solution which can be deployed on AHV. We have seen cases where customers face issues while deploying Citrix ADM on Nutanix AHV.
Below are the steps used to deploy Citrix ADM on Nutanix AHV:
1. Download the Citrix MAS qcow disk image from the Citrix site and upload it using the Image Configuration tab in Prism.
2. Create a VM using this qcow disk image (IDE only). Only 1 disk should be attached to the appliance, using the IDE bus.
3. After creating the VM, add a serial port to the VM from acli using the below command (a verification sketch follows these steps): nutanix@cvm:~$ acli vm.serial_port_create <VM Name> type=kServer index=0
4. Power on the VM and log in with username "nsrecover" and password "nsroot" from the VM console.
5. Update the network configuration through the "networkconfig" script once you are logged in from the console. Fill in the network details using the options and press 7 to save the details.
6. After updating the network, verify that the IP address is assigned using "ifconfig" and verify with ping that the gateway is reachable.
7. Once the network is verified, run the "deployment_type.py" script from the VM console and deploy standalone ADM by choosing option 1.
8. After selecting option 1, you will be asked to reboot the appliance; enter yes.
9. Once the VM is back up, you will be able to access the ADM GUI, which can be used to manage the application delivery infrastructure.
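The serial port added in step 3 can be verified before powering on the VM. An illustrative sketch using acli, where <VM Name> is a placeholder: nutanix@cvm:~$ acli vm.get <VM Name> The output should list the serial port under the VM's configuration.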
KB6490
Secure Log flooded with "sshd[xxx]: Postponed publickey for nutanix from xx.xx.xx.xx"
Secure log (/home/log/secure) is flooded with the following: sshd[xxx]: Postponed publickey for nutanix from xx.xx.xx.xx port xx ssh2 [preauth]
The following message is continuously logged in /home/log/secure: 2018-11-16T04:01:17.823968+00:00 NTNX-18Sxxx-B-CVM sshd[19359]: reprocess config line 67: Deprecated option RhostsRSAAuthentication
Recent AOS releases are based on CentOS 7.4 or higher. CentOS 7.4 made some security-related updates to sshd to tighten security. One of them is deprecating RSAAuthentication support, which is the reason for the log message. To fix the issue, do the following as "root": Edit /etc/ssh/sshd_config and comment out the line "RhostsRSAAuthentication no". After the edit, the sshd_config entry should look like: # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts Edit /etc/ssh/ssh_config and un-comment the "Host *" entry just above "RhostsRSAAuthentication". After the edit, the ssh_config entry should look like: Host * Restart the sshd service: root@NTNX-18SM6F1xxCVM~# service sshd restart This should fix the issue.
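If preferred, the sshd_config edit can be scripted instead of done in a text editor. A minimal sketch, assuming the default file location; review the file before restarting sshd: root@cvm# sed -i 's/^RhostsRSAAuthentication/#RhostsRSAAuthentication/' /etc/ssh/sshd_config root@cvm# service sshd restart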
KB4320
Error: "All required agent virtual machines are not currently deployed on host '.' or 'hostname'" when trying to power on CVM
If NSX is installed in your vSphere environment and the host on which you are trying to power on the CVM does not have the NSX agent VMs running, install those agent VMs on that host.
Description: This may be because you have NSX installed in your vSphere environment, and the host on which you are trying to power on the CVM does not have the NSX agent VMs running. These agent VMs are used by NSX to monitor VMs on the host via DRS.
Install those agent VMs on those hosts. Workaround: Disable vSphere DRS (not ideal). NSX piggy-backs on DRS to monitor VMs on the ESXi host. Restart the hostd processes on the affected ESXi host by running this command: root@ESXi# /etc/init.d/hostd restart If any task gets stuck in the vSphere client, the vCenter Agent processes on the affected ESXi host may also need to be restarted by running this command: root@ESXi# /etc/init.d/vpxa restart VMware KB 50121349 https://kb.vmware.com/s/article/50121349 reports a similar issue and suggests the previous steps as a resolution. The following steps can be used to power on the CVM using the CLI: Check the Vmid of the CVM (normally 1): root@ESXi# vim-cmd vmsvc/getallvms Note the Vmid of the CVM. Check the power status of the CVM: root@ESXi# vim-cmd vmsvc/power.getstate <cvm_vmid> Power on the CVM: root@ESXi# vim-cmd vmsvc/power.on <cvm_vmid>
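When many VMs are registered on the host, the inventory listing can be filtered to find the CVM entry quickly. An illustrative sketch (CVM names normally start with NTNX-): root@ESXi# vim-cmd vmsvc/getallvms | grep -i NTNX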
KB2559
IPMI event log - CATERR - Asserted or IERR
The following article describes how to troubleshoot when the error "CATERR - Asserted" appears in the ipmi event log.
When the host is unresponsive or has unexpectedly restarted, check if a CATERR or IERR has been logged at the time of the occurrence. These 2 symptoms will be evident for any CATERR or IERR assertion: A hypervisor system crash (PSOD/BSOD), lockup/freeze, or an unexpected restart occurred. A CATERR or IERR event is logged in the IPMI SEL event logs. If a system crash (PSOD/BSOD) is generated, take a screenshot of the system crash screen and verify the IPMI event logs. To check the event logs, do the following: Log in to the IPMI Web UI. Go to the Server Health tab. Select the Event Log or Health Event Log option from the left. After reviewing the output on the screen, click on the Save button to export the output to CSV. The IPMI event log may indicate whether there is a hardware error based on the memory or CPU. NOTE: Intel processors assert the CATERR (Catastrophic Error) / IERR (Processor Internal Error) state when a CPU is not able to correct a condition, but this may not always indicate failed hardware. Some causes are transient (a rare internal timeout to a device) and may never reoccur. Diagnosis requires isolation of a broad set of potential causes related to the node CPU or peripheral devices. Remediation may involve different hardware components, firmware, or BIOS (Intel microcode).
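Where in-band IPMI access is available from the hypervisor, the SEL can also be reviewed from the command line instead of the Web UI. A sketch from an AHV host, assuming the ipmitool utility is present: [root@AHV ~]# ipmitool sel elist | tail -20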
Known possible causes: DIMM Failure If there are multiple entries in the IPMI event log indicating Correctable Memory - Asserted leading up to an event entry of CATERR - Asserted, the memory DIMM has a problem. The logs highlight which DIMM slot has the faulty memory. 561 2015/03/30 10:20:01 OEM Memory Correctable Memory ECC @ DIMMB2(CPU1) - Asserted Similarly, if a DIMM error is observed immediately following a CATERR/IERR, that memory DIMM has a problem. 16 Critical 2019-08-03 21:35 Processor CATERR has occurred - Assertion The DIMM can be replaced if you observe either of the above 2 series of events. Please note that entries for successful hPPR or ePPR are not considered DIMM errors even though the "Sensor" title says "BIOS OEM (Memory Error)". Nutanix recommended BMC and BIOS releases may also resolve memory-related causes on the NX platform. Please review the release notes of BMC and BIOS releases https://portal.nutanix.com/page/documents/details/?targetId=Release-Notes-BMC-BIOS:Release-Notes-BMC-BIOS and make sure to always update to the latest firmware version. CPU Fault (for G4/G5 nodes with the recommended BMC/BIOS) If there are no memory errors in the event log, and the message CATERR - Asserted can be seen, there is a hardware issue with the CPU. 14 2015/06/22 08:45:02 OEM CPLD CATERR - Asserted LCM BIOS upgrade to P[X]70.002 (issue observed on G6/G7) Nutanix has identified an issue during BIOS upgrade leading to a CATERR assertion in the SEL. This issue might be seen in rare cases, leading to LCM failure on rebooting the host. Refer to KB-14619 http://portal.nutanix.com/kb/14619 to identify if you are hitting this issue, and proceed with the solution in the KB. CATERR or IERR causing unexpected reboot or crash of a node with Intel E-810 4-port 25/10GbE It has been observed that, in rare conditions, a node equipped with one or more Intel E810 chipset-based cards may unexpectedly restart or crash, with CATERR or IERR logged in the SEL. Refer to KB-14333 http://portal.nutanix.com/kb/14333 to identify if you are hitting this issue, and proceed with the solution in the KB. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/. Important: When the host is in a hung state (e.g. while the PSOD or BSOD is still displayed) and you have observed a CATERR or IERR, do not restart the server, if possible, until you have consulted with Nutanix Support, as additional troubleshooting data is most effectively collected before a restart. The BMC may also automatically save some troubleshooting data during an occurrence that may be collected during your engagement with Nutanix Support. Depending on the issue, an Out Of Band (OOB) script will be provided by Nutanix, to run from another system, that will capture further troubleshooting data needed to identify the root cause.
KB8042
Logbay task stuck at running due to an unexpected cluster event
If an unexpected cluster event occurs during a Logbay job execution, the collection stops or completes but task still appears in progress or incomplete in ergon/Prism Tasks.
Issue: If an unexpected cluster event occurs during a Logbay job execution, the log collection stops, but the cluster task still appears in progress or in an incomplete state in the ergon/Prism Tasks list. Cause: Logbay has a task execution timeout to clean up any long-running/abandoned tasks. This timeout is currently quite long, but if the end user simply waits, the task will be cleaned up by itself. Symptoms: The output of "ecli task.list include_completed=false" lists the stuck Logbay task as below: nutanix@cvm$ ecli task.list include_completed=false The output of "progress_monitor_cli --fetchall" may be empty.
Wait for the task to time out. If the task continues to show in a running state for at least 24 hours, contact Nutanix Support http://portal.nutanix.com for assistance. Engineering is aware of this issue and is working on a resolution.
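The state and creation time of the stuck task can be confirmed with ecli before deciding to wait. A sketch, where <task_uuid> comes from the earlier task list output: nutanix@cvm$ ecli task.get <task_uuid>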
KB8275
Alert - A130161 - Protection Rule Conflict Occurred
Investigating the ProtectionRuleConflicts alert on a Nutanix cluster.
This Nutanix article provides the information required for troubleshooting the alert A130161 - Protection Rule Conflict Occurred for your Nutanix cluster. Alert Overview The A130161 - Protection Rule Conflict Occurred alert occurs when a VM cannot be protected due to conflicting protection rules. This could be due to one or more protection rules protecting the same entity. Sample Alert Block Serial Number: 18SMXXXXXXXX Output Messaging [ { "130161": "Unable to protect VM due to conflicting protection rules.", "Check ID": "Description" }, { "130161": "One or more protection rules protect the same entity.", "Check ID": "Cause of failure" }, { "130161": "Resolve all the conflicts between the protection rules or modify the VM to ensure that it is protected under one protection rule.", "Check ID": "Resolutions" }, { "130161": "VM cannot be recovered if a disaster occurs.", "Check ID": "Impact" }, { "130161": "A130161", "Check ID": "Alert ID" }, { "130161": "Protection Rule Conflicts", "Check ID": "Alert title" }, { "130161": "Failed to apply the protection rule on the VM '{vm_name}', because of '{reason}'.", "Check ID": "Alert message" } ]
Troubleshooting The issue occurs when a VM is already protected under a protection policy and the VM is added to a new protection policy using the APIs. For additional information, review the Nutanix Disaster Recovery Guide Protection Policies https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-create-asynchronous-protectionpolicy-pc-t.html. Note: Nutanix Disaster Recovery was formerly known as Leap. Resolving the issue Modify the VM to ensure that it is unprotected before adding it to a new protection policy. Collecting Additional Information If you need further assistance or if the above steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Gather the following information and attach it to the support case. Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871. Attaching Files to the Case To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting Assistance If you need assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, you can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. Closing the Case If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.
KB13206
Nutanix Object store may become unavailable upon upgrading to PC 2022.4
Object Clusters deployed on AHV while using MSP 1.0.8 and below may become unavailable following the upgrade to Prism Central (PC) 2022.4.
Nutanix has identified a rare issue where Object stores on AHV may become unavailable following the upgrade to Prism Central (PC) 2022.4.x. Nutanix Objects clusters on AHV that were originally deployed with MSP version 1.0.8 or below, and that are currently on Prism Central (PC) 2022.4 or were at one point upgraded from PC 2022.4, are affected by this issue. Notes: MSP 2.4.1 was bundled with PC 2022.4. Customers who originally deployed Objects on AHV with an MSP version later than 1.0.8, or those who have not yet upgraded to MSP 2.4.1 (and hence are running earlier versions), are not vulnerable to this issue. Deployments of Objects on ESXi are not affected. To determine if your cluster is impacted, follow the steps below: Determine the version of MSP and Prism Central of the cluster: Log on to Prism Central. Click on Menu (top left) > Administration > LCM > Inventory Determine the version of MSP that Nutanix Objects was initially deployed against. Log in to the Prism Central VM the Nutanix Objects Cluster is registered with, and then run the below commands. Determine the name of the Objects Cluster: nutanix@PCVM:~$ mspctl cluster list Sample output: NAME UUID TYPE STATUS DESCRIPTION The Objects Cluster is of Type “service_msp”. Determine the MSP version the Objects Cluster was deployed with. Execute the below command against each Objects cluster reported in the above output ("cluster name" is the "name" of the Objects Cluster): nutanix@PCVM:~$ mspctl cluster get <cluster name> -o json The example below shows the command and the output: nutanix@PCVM:~$ mspctl cluster get smsp -o json Determine the upgrade history of the Prism Central cluster if the current version of Prism Central is greater than PC 2022.4. Note that the below command will report "No such file or directory" if PC has not been upgraded during its life cycle. SSH to the Prism Central VM and run the below command: nutanix@PCVM:~$ allssh cat ~/config/upgrade.history|grep -i 6.1-stable-26d7c18d22e894848d6edc5bb7b1f6c2185295c8 The example below shows the command and the output: nutanix@PCVM:~$ allssh cat ~/config/upgrade.history|grep -i 6.1-stable-26d7c18d22e894848d6edc5bb7b1f6c2185295c8 If you see an entry for "el7.3-release-fraser-6.1-stable-26d7c18d22e894848d6edc5bb7b1f6c2185295c8" in the output above, it indicates that the Prism Central cluster is either on PC 2022.4 or was upgraded from PC 2022.4. If the Objects Cluster was originally deployed with "Controller Version" 1.0.8 or below (Step 2 above), and Prism Central is on version PC 2022.4 or Prism Central was at one point upgraded from PC 2022.4 (Step 3 above), the cluster is exposed to this issue.
If the Objects Cluster is currently managed by Prism Central 2022.4, or Prism Central was upgraded from 2022.4, please contact Nutanix Support http://portal.nutanix.com/page/home. Field Advisory #105 https://download.nutanix.com/alerts/Field_Advisory_0105.pdf is issued for this issue.
KB13250
NC2 - Hibernate stuck at 0 percent at Draining vDisk oplog to the extent stores
This article describes an issue where hibernation is stuck at draining vDisk oplog to the extent store at 0 percent.
Starting with AOS version 6.0.1, NC2 on AWS offers the capability to hibernate/resume the cluster to/from an AWS S3 bucket. This particular issue is primarily observed when hibernation is started while the cluster's available space is already very low. Looking at the output of 'progress_monitor_cli --fetchall', we can see the step that hibernation is stuck on: progress_task_list { Due to the limited space in the cluster, the stargate.INFO log will look similar to the below, where it is unable to find space to allocate a replica: I20220519 23:26:38.808238Z 18324 replica_selector.cc:521] Unable to pick a suitable replica on tier SSD-PCIe
Due to the limited space available in the cluster, hibernation is not able to make any progress; therefore, some space must be freed up in the cluster to proceed with the hibernation. At this point, however, the UI is not available, and we are unable to manage the cluster to free up the space needed to continue with the operation. In order to manage the cluster and free up space, the cluster must first be placed back into a RUNNING state. Contact Nutanix Support https://portal.nutanix.com for assistance in reverting the hibernate task and placing the cluster back in a RUNNING state.
KB7186
Move - Steps to increase the disk space of Move Appliance
Move - Steps to increase the disk space of Move Appliance
It may be required to expand the disk size of the Move Appliance. Please note that, at the time of KB publishing, the procedure has not been through QA validation. If there are any questions or concerns, reach out to a senior SRE or the ENG team.
1. To validate the current disk size: admin@move on ~ $ rs root@move on ~ $ df -h root@move on ~ $ fdisk -l
2. Increase the disk size of the Move appliance from Prism to 100 GB.
3. After increasing the disk size of the VM from Prism, repartition the disk (see the interactive fdisk sketch below): root@move on ~ $ fdisk /dev/sda
4. Resize the partition's file system: root@move on ~ $ resize2fs /dev/sda2
5. Validate the current size: root@move on ~ $ fdisk -l
Alternatively, deploy the latest Move appliance, as later versions ship with bigger disks.
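Inside fdisk, the partition is grown by deleting and recreating it at the same start sector; the on-disk data is preserved as long as the start sector does not change. A sketch of the typical interactive sequence, assuming /dev/sda2 is the last partition on the disk: p - print the partition table and note the start sector of /dev/sda2; d, 2 - delete partition 2 (data on disk is not touched); n, p, 2 - recreate the partition at the same start sector, accepting the default end sector; w - write the table and exit, then run resize2fs as shown in step 4 above.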
KB15142
Nutanix Files - Share Migration tool troubleshooting
The following article provides the steps to troubleshoot Share migration tool issues
The Share migration tool lets you copy share data from a third-party source to a target Files share, and can also copy data from one share to another within the same cluster. Portal doc on Share Migration https://portal.nutanix.com/page/documents/details?targetId=Files-v4_2:fil-files-share-migrate-c.html Logs of interest for troubleshooting share migration issues: On each FSVM: /home/nutanix/data/logs/migrator.log* Check the migration plan status: afs migration.plan_status name=my_plan verbose=true Scenarios where the migration job fails with an error: Issue 1: "status: 0xC0000234. err: STATUS_ACCOUNT_LOCKED_OUT. The user account has been automatically locked because too many invalid logon attempts or password change attempts have been requested" ------------------------------------------------- Issue 2: Failed to open source share path[test1]. err: <SID> is not a valid domain SID. Retry migration with the option to allow copying entities with unknown owner/group SIDs in-order to copy the file/directory. -------------------------------------------------
Solution 1 (for issue 1) Check /home/log/samba/winbindd.log on the FSVM for any errors: 2023-07-13 01:02:28.905078Z 2, 225972, class=winbind, winbindd_pam.c:2044 winbind_dual_SamLogon From the above, it is clear the issue is due to a wrong password. Please check with the customer for any special characters in the password, or try using a different account: nutanix@fsvm$ afs migration.source_add source_fs_fqdn="filer1.child4.afs.minerva.com" source_alias=filer1 source_user='CHILD4\\backup_user' Solution 2 (for issue 2) Add the following option: allow_unknown_sids = Copy entities with an unknown owner or group SID. afs migration.plan_migrate name=my_plan allow_unknown_sids=true
KB7842
Xen | mixed NICs (type or number) in the same XenPool is not supported
null
It is not recommended by Citrix to have heterogeneous hardware among the hosts belonging to the same XenServer Pool. If any manual configurations are done to accommodate a different number/type of NICs in a XenPool, it might lead to problems in the future.
It is best practice to have identical hardware across the board in a pool. The reason is that the order of the NICs and host networks matters when all the NICs on a host are in use and the hosts have dissimilar numbers of NICs. When you add a new host to the pool (cluster expansion in our case) that has its 10GbE NICs at the same interfaces as the previous hosts in the pool (for example, eth0 and eth1), the two NICs will be bonded as they are moved into the pool; the other 1GbE NICs would not be touched. However, if the NICs do not have corresponding counterparts on the new host (for example, the 10GbE NICs in the new host are enumerated as eth2 and eth3, and the onboard 1GbE NICs are enumerated as eth0 and eth1), XenPool expansion (cluster expansion) would fail. NIC enumeration can be compared across hosts before a pool join; see the sketch below.
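As a pre-check before joining a new host to the pool, the NIC enumeration can be compared across hosts. An illustrative sketch using the xe CLI on each XenServer host: [root@xenserver ~]# xe pif-list params=device,MAC,host-name-label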
KB16946
Compliance status shown as loading in Prism Central for VMs with NVRAM or vTPM disks for the storage policy-based encryption feature.
VMs having NVRAM or vTPM disks attached will show the compliance status as "In progress".
Background of the issue When a storage policy is associated with any VM, a new entity of type "storage_vdisk" gets created in IDF corresponding to each disk on that VM. The compliance status of any VM gets updated after checking the state of each "storage_vdisk" entity for each disk, after the operation from the storage policy completes. Problem Description: It was found that storage policies are not applied to NVRAM and vTPM disks, and the "storage_vdisk" entity does not get created for these disk types. Since the compliance status for a VM/VG gets updated after checking the status of all corresponding "storage_vdisk" entities of the VM/VG entity, if the "storage_vdisk" entity for any disk of the VM/VG is not present, the compliance status continues to show "In Progress" indefinitely. Process to check the compliance status of the VM/VG in a storage policy: 1. Click on the storage policy option under the "compute and storage" tab: 2. You will get a summary of the storage policy as follows; click on entities > unrealized as shown: 3. The compliance status will be displayed for the VMs as shown: Identification: Note the vmdisk_uuid of the disk corresponding to the NVRAM/vTPM disk and the data disk. For example: Affected VM: DXXXXXK { If we check the vdisk config of the vmdisk corresponding to the normal data disk: "d2bb9b2d-03b4-40bb-b607-64b771a2dbd0", it has a storage policy associated with it. vdisk_id: 30030566062 While for the vmdisk uuid of the NVRAM disk: "516c27bb-00df-45f1-8c6f-412bde6e9408", the vdisk config does not have a storage policy uuid associated with it. vdisk_id: 30023820941
Upgrade AOS to the fix version 6.8 or above. The issue is fixed from AOS 6.8 onwards, where the storage policy will be applied to any newly created disks of type NVRAM and vTPM. However, after upgrading to AOS 6.8, the storage policy will not be applied to already existing NVRAM and vTPM disks; the storage policy needs to be manually reapplied by removing and re-adding the category to the storage policy. After a couple of Curator scans, the compliance status should get updated.
KB9901
Can't protect a VM using Sync Protection Policy due to stale stretch_params
Stale stretch parameters are not cleaned up when AZs are unpaired/re-paired.
Symptoms: When unpairing the AZ with SyncRep PP and entities in sync, stretch params are not cleaned up. Unpairing also marks the remote site with "to_remove: true" but as the stretch parameters are not removed, the remote site cannot be GC.It has been seen that some customers also unregister Prism Central (PC) and re-deploy a new PC leaving the stale stretch params on the Prism Element (PE) cluster.Setting up SyncRep while these stale entries exist will fail.This can be seen on either both source and target clusters (PC/PE), or only the target cluster. Events that lead to the issue: Availability Zone is disconnected.If enough time has elapsed after the AZ disconnect (maximum of 5 minutes), stretch is removed from source but not the target.PC is unregistered and redeployed from either source (if stretch has not been removed), target or both. Identification: On PE cluster (Source/Target/Both), Remote site has "to_remove: true" set. cerebro_cli list_remotes - Source Cluster remote_name: "remote_x.x.x.x" cerebro_cli list_remotes - Target Cluster remote_name: "remote_y.y.y.y" Check stretch_params_printer on source cluster and destination cluster to validate if there are any stale entries present. Source Cluster stretch_params_printer - The stale stretch will have the "replicate_remote_name" value set to the remote site, which has "to_remove: true" set. nutanix@SourceCVM$ stretch_params_printer Target Cluster stretch_params_printer - The stale stretch will have the "forward_remote_name:" value set to the remote site, which has "to_remove: true" set. nutanix@SourceCVM:~$ stretch_params_printer Other symptoms: Replication failed with "Finishing due to error kAborted. Error detail: Stretch operation aborted. Error detail: Failed to stretch disks due to cerebro error "You can see the error "Finishing due to error kAborted. Error detail: Stretch operation aborted. Error detail: Failed to stretch disks due to cerebro error " in the Cerebro logs as shown below: I0804 13:59:23.830045 15236 entity_stretch_add_disks_meta_op.cc:659] [entity_list=1fbd6c72-07f2-4f8f-8302-3c06c44a8ac0, pd_name=pd_1596560360859370_21, meta_opid: 4919878 parent meta_opid: 4919870 ] Releasing the snapshot resource lock In the Cerebro logs, "Remote remote_x.x.x.x not found in zeus config" and "Remote site remote_x.x.x.x is unhealthy and resume" will be observed. I0804 13:59:23.831430 15236 replicate_meta_op.cc:2674] PD: pd_1596560360859370_21, remote: remote_x.x.x.x, snapshot: (8413349076919602336, 1591010184840938, 4919884), remote meta_opid: -1, meta_opid: 4919891 parent meta_opid: 4919878 Skipping vdisk based reference resolution for metadata replication
Removing Stale Stretch Params: IMPORTANT: Consult a Senior SRE or member of your local DR Team before following this procedure.If you are not sure, consult with a Senior Support Engineer. Check PC on both source and target for any stale sync replication configurations using KB 9716 https://portal.nutanix.com/kb/9716. If they are confirmed to be stale, clear them using the KB.Recheck and clear any stale stretch parameters from both source and target PE that relate to the Remote Site, which is set to be removed (as above in the Identification section). Run stretch_params_printer on PE to recheck stale stretch params. nutanix@CVM$ stretch_params_printer Clear the stale stretch with "cerebro_cli entity_stretch_change_mode [entity_uuid] [cluster_id]:[cluster_incarnation_id]:[entity_id] [cluster_id]:[cluster_incarnation_id]:[entity_id + 1]". The values come from the output of stretch_params_printer: nutanix@CVM$ cerebro_cli entity_stretch_change_mode 1fbd6c72-07f2-4f8f-8302-3c06c44a8ac0 123456:123456789:11559 123456:123456789:11560 Repeat step 2 for all stale stretches and on both clusters.Check if stretch_params_printer no longer displays any stale stretches. If the stale stretch is not removed, open a TH/ONCALL for further assistance.Check that no remote sites are listed anymore with to_remove set: nutanix@CVM$ cerebro_cli list_remotes 2>/dev/null |grep to_remove Notes: If the stale stretch params are cleared the remote site no longer is set to be removed, the Entity/s should now be able to be stretched (SyncRep enabled).If there is still a problem with Entities configured for SyncRep, continue troubleshooting.
KB16695
VM power-on fails with "Failed to register VM UUID with avm" error due to AOS and AHV version mismatch
A VM cannot be powered on; Prism shows the error message "Operation failed: InternalException", and acropolis.out shows a "Failed to register VM <UUID> with avm" error.
An AHV virtual machine (VM) may fail to power on with the "Operation failed: internalException" error message.The following error message will be in the /home/nutanix/data/logs/acropolis.out log: Task <UUID>(VmSetPowerState <UUID> kPowerOn) failed with message: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu <UUID> prepare begin -) unexpected exit status 1: Failed to register VM <VM UUID> with avm: missing UUID (alias) for device: No such file or directory
A version mismatch between AOS and AHV causes this issue. The incorrect versions may have been selected during the Foundation process, resulting in compatibility issues. Review the Compatibility and Interoperability Matrix https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix and perform an upgrade to restore compatibility.
KB9219
Prism Central - Using AD and ADFS (SAML) logins concurrently
At the time of this article's writing, the Prism Central documentation does not provide information on the use of SAML-based authentication to be used with RBAC (role-based account control) or SSP (Self-Service Portal). This article aims to assist with adjusting parameters on the domain controller side to allow the use of both AD and ADFS (SAML) concurrently.
Prism Central supports three authentication options: Local user authentication. Users can authenticate with a local account configured in Prism Central (see Managing Local User Accounts https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_5:mul-user-manage-pc-t.html section of the Security Guide).Active Directory authentication. Users can authenticate using their Active Directory (or OpenLDAP) credentials when Active Directory support is enabled in Prism Central.SAML authentication. Users can authenticate through a qualified identity provider when SAML support is enabled for Prism Central. The Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between two parties, ADFS as the identity provider (IDP) and Prism Central as the service provider. Note: ADFS is the only supported IDP option for Single Sign-on. Additionally, Prism Central does not allow AD user accounts to authenticate concurrently with both Legacy AD (LDAP) and ADFS (SAML), as it causes conflict and errors.The following signature is observed in aplos.out (/home/nutanix/data/logs/aplos.out): 2019-06-24 18:56:53 ERROR authn_middleware.py:309 Failed to login with Saml.Error: {'api_version': '3.1', Known issue: SAML authentication using ADFS unable to login - KB 8151 https://portal.nutanix.com/page/documents/kbs/details/?targetId=kA00e000000CrRoCAK.
Until the functionality is integrated into the code, there is a viable workaround to get both AD and ADFS authentication to work with the same login accounts.Note: This guide assumes that you have configured the SAML Identity Provider in Prism Central and created your Relying Party Trust in ADFS. Refer to Configuring Authentication https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v6_5:mul-security-authentication-pc-t.html section of Prism Central Guide for more details on setting up authentication.Active Directory and SAML user logins can be used concurrently in Prism Central by modifying the ADFS Relying Party trust to use the Display Name AD user attribute rather than the UPN attribute. To achieve this, separate account mappings will need to be created within Prism Central to point towards the AD account, one for the AD Directory and one for ADFS Provider. Prism Central will use these two mappings as separate entities, but they will share the same user account and password in AD. Any current AD user role mappings or assigned projects will need to be duplicated inside Prism Central settings for the new SAML logins. Any current SAML users in Prism Central will also need mappings and projects re-assigned if this new ADFS configuration is used. ADFS configuration Open ADFS console and edit the claim rules on the Prism Central Relying Party Trust.Inside the Edit Claim Rules dialog box, edit the claim rule to include these settings. Attribute store = Active Directory LDAP Attribute = Display-NameOutgoing Claim = NameID Save changes and apply the new rule. The ADFS login will now use the Display name AD attribute to map to the user’s Prism login. Now we have two different AD attributes mapping the Prism login ID: UPN for AD loginDisplay-Name for ADFS login. Prism Central configuration Login to Prism Central and go to Settings > Role Mapping.Click new mapping and choose your ADFS provider from the Directory or Provider drop-down list.Note: For SAML, only the LDAP type USER is supported; LDAP type GROUP is not yet supported.Create new mappings for your ADFS users and groups. Add the user name values without the domain suffix, for example, testuser1.Use the fully qualified domain username to log into both AD and ADFS. With these small changes, it should be possible to authenticate with Prism Central using ADFS and continue to use AD user accounts for RBAC/SSP Projects or CALM.
KB12580
Nutanix Response to CVE-2021-22045 / VMSA-2022-0001
Nutanix Response to CVE-2021-22045 / VMSA-2022-0001
VMware recently released a security advisory to address a heap-overflow vulnerability (CVE-2021-22045). More information is available on the VMware site: https://www.vmware.com/security/advisories/VMSA-2022-0001.html The security advisory references VMware KB 87249 https://kb.vmware.com/s/article/87249 which points to available patches for ESXi 6.5 and 6.7 and offers a workaround for the scenario where patches can’t be applied or are not available. The workaround VMware proposes requires that all CD-ROM devices are disabled/disconnected on all running virtual machines. This may cause adverse effects because the Nutanix CVM, PCVM, and Files instances all need to have an ISO image attached in order to start and operate correctly. Currently, VMware provides two methods in KB 87249 http://kb.vmware.com/s/article/87249 to disconnect the CD-ROM device from virtual machines: manually, one VM at a time, using the vSphere web client; and a PowerCLI command that automates the device removal across a set of queried VMs. Using either method without considering the impact to Nutanix infrastructure VMs can have adverse effects. Below is an overview of the results of removing the CD-ROM configuration from active Nutanix infrastructure VMs and leaving VMs without a CD-ROM device connected: Disconnecting the ISO from running Nutanix Infrastructure VMs: CVM: The disconnect operation will fail to complete. (It is not possible to disconnect the CD-ROM device from a running CVM.) FSVM: The VM will freeze until the task confirmation question is answered in vCenter UI. PCVM: The VM will freeze until the task confirmation question is answered in vCenter UI. Planned (OS/FW upgrades) or unplanned reboot of a running Nutanix Infrastructure VM without the CD-ROM device attached: CVM: The CVM will fail to boot. FSVM: The VM will boot but services will not start. PCVM: The VM will boot but services will not start.
Nutanix recommends remediating the VMware security advisory via patching ESXi wherever possible to prevent disruption to Nutanix Infrastructure since the Nutanix CVMs, Files FSVMs, and Prism Central PCVMs, all have operational dependencies on having a connected CD-ROM device. The Nutanix CVMs, Files FSVMs, and Prism Central PCVMs are not general-purpose VMs designed for direct user access. Access to these VMs should only be granted to a limited set of accounts using complex passwords known only to trusted users, minimizing their exposure to the VMware security advisory. If the alternative workaround to disconnect the CD-ROM device is absolutely required the modified steps below can be used with Files FSVMs and Prism Central PCVMs. These steps do not apply to the CVM. Exclude the Nutanix Infrastructure VMs from any automated bulk CD-ROM disconnect task. Manually disconnect the CD-ROM devices from the Nutanix VMs and be prepared to immediately answer any confirmation messages to reduce the time the VM will be inaccessible. Ensure the CD-ROM devices are reconnected prior to any planned maintenance that will reboot the VMs. Be prepared to reconnect the CD-ROM devices to allow for successful boot and service start-up of the Nutanix VMs in the event of any unplanned restarts. Do not run the PowerCLI command provided in the VMware KB without excluding the Nutanix Infrastructure VMs in the filtering criteria as it can cause a service disruption on the Nutanix VMs. Some of the symptoms observed include: The PC and FSVM VMs may become unresponsive to ping and may no longer be accessible via SSH or the web UI. The “Reconfigure virtual machine” task fails on the CVM with “Connection control operation failed for disk “ide0:0: Prism Central management services may become unavailable. Nutanix Files shares may not be accessible. Running the modified command below will list all VMs with a connected device but exclude the Nutanix CVM, PC, and Files instances: Get-VM | Where { $_.Name -notlike '*NTNX-*' -and $_.Notes -ne 'NutanixPrismCentral' } | Get-CDDrive | Where {$_.extensiondata.connectable.connected -eq $true} | Select Parent To remove and disconnect attached CD-ROM/DVD devices but exclude the Nutanix instances, run the command below: Get-Vm | Where { $_.Name -notlike '*NTNX-*' -and $_.Notes -ne 'NutanixPrismCentral' } | Get-CDDrive | Where {$_.extensiondata.connectable.connected -eq $true} | Set-CDDrive -NoMedia -confirm:$false This will attempt to disconnect the CD-ROM/DVD for all VM matching the above filter, excluding Nutanix infrastructure VMs. Multiple tasks will be started in the vCenter console, one for each VM.
KB13038
[Objects] Modifying HYCU parameters to increase performance when backing up to an object store
If HYCU backup performance to a Nutanix Objects object store target is poor, several parameters are available on HYCU that may improve performance.
If HYCU backup throughput to a Nutanix Objects object store is performing poorly, and the cause is not due to an unhealthy HYCU VM(s), PE cluster, network, or other environmental health factor, tuning of the HYCU configuration may be needed to increase performance. Generally, high concurrency to an object store may greatly improve performance as compared to low concurrency. Ideally, this is done by increasing the number of simultaneous jobs that HYCU is running against the object store; however, in cases where increasing the number of jobs is not possible or feasible, or if performance remains poor with a higher number of jobs, HYCU may need to be engaged to investigate whether adjusting configuration parameters of the HYCU VM may improve performance. Caution: increasing the number of simultaneous jobs HYCU performs may require more resources on the HYCU VM and/or negatively impact performance. Any changes made should be at the customer's discretion and in small increments. If in doubt, engage HYCU Support https://confluence.eng.nutanix.com:8443/display/SW/HYCU+Support+Workflow. This article lists a couple of parameters that have been observed to increase performance. This information is for reference only, as these parameters must be adjusted only by HYCU. Before tuning HYCU or object store parameters to improve performance, ensure the object store, PE cluster, and network are healthy, and that poor backup performance to the object store is not due to an underlying performance issue with the PE cluster or network. Perform general health checks of the environment, such as executing NCC health checks, and investigating cluster and object store alerts. See the following suggested KB articles: General sanity checks of the MSP cluster that runs Objects https://portal.nutanix.com/kb/8704 Objects - Manual health checks of the Objects Microservices https://portal.nutanix.com/kb/11672 Collecting a performance log bundle for an object store using the objects_collectperf utility https://portal.nutanix.com/kb/7805 Framing a performance troubleshooting case https://portal.nutanix.com/kb/1995 Third-Party Backups Slow or Failing https://portal.nutanix.com/kb/7150
Note: The HYCU parameters described here are for reference only. Nutanix Support should not modify any parameters under /hycudata/opt/grizzly/config.properties on the HYCU VM(s). As per a discussion with HYCU Engineering, these parameters should only be modified by HYCU after HYCU Support has performed troubleshooting and analysis and determined that adjusting these or other parameters is beneficial. Engage HYCU Support as described in the HYCU Support Workflow https://confluence.eng.nutanix.com:8443/display/SW/HYCU+Support+Workflow. In a future documentation update to best practices, HYCU will provide customer-facing guidance on modifying these parameters. [ { "Parameter": "changed.regions.range.multiplier or changed.regions.query.size.min.gb", "Description": "This parameter controls the amount of data that HYCU will scan on disk to determine what regions of data have been changed since the previous backup, and thus what data needs to be backed up during an incremental backup. Increasing this value will result in fewer API calls, and may greatly reduce incremental backup time. Per HYCU, adjusting this value may also improve the performance of full backups. Originally, HYCU looked for a property provided by the AOS Changed-Region Tracking API called next_offset to determine what data has changed. Due to an old software defect in AOS that prevented next_offset from working properly, HYCU modified their software to scan regions on disk as a workaround. A side-effect of this change is that backups may take longer to complete. The issue with next_offset has since been fixed in AOS, and HYCU will modify their software to once again use next_offset in a future HYCU release. See KB 12114 for more information on the changed.regions.range.multiplier or the changed.regions.query.size.min.gb property." } ]
KB14651
NDB - Configure hugepages settings on Postgres
This article describes how to configure hugepages settings on Postgres.
HugePages default settings are NOT applied as part of the new Postgres database provisioning workflow in Nutanix Database Service (NDB). HugePages are generally used to improve database performance. To learn more about the benefits of using huge pages with PostgreSQL, see the PostgreSQL documentation https://www.postgresql.org/docs/12/runtime-config-resource.html. Note: NDB was formerly known as Era. Impacted NDB Versions This issue has existed since the NDB 2.5 release. If the NDB server is on a release earlier than 2.5, then for provisioned DB servers with a VM RAM size greater than 1.5 GiB, NDB will configure HugePages automatically. The HugePages configuration can be verified using the command: [user@dbvm]$ grep HugePages_ /proc/meminfo It will have an output similar to the following. The values might be different, and HugePages_Total and HugePages_Free will have non-zero values. [user@dbvm]$ grep HugePages_ /proc/meminfo
This issue will be fixed in a future release, where NDB will configure HugePages automatically for provisioned DB servers with VM RAM greater than 1.5 GiB. Enable HugePages for PostgreSQL NDB does not recommend configuring HugePages if the VM RAM (memory) size is less than or equal to 1.5 GiB. In that case, there is no need to follow any of the steps below. If the VM RAM size is greater than 1.5 GiB, HugePages can be configured in two ways as a workaround: HugePages can be set during the database provisioning operation for VM RAM sizes of more than 4 GiB by setting the flag allocate_pg_hugepage to true. The flag allocate_pg_hugepage can be set using the Era API or CLI. To set the value of allocate_pg_hugepage, see the "Set allocate_pg_hugepage" section below.If you follow this step, skip the rest of the steps below.If the allocate_pg_hugepage flag is not set during the provision operation, or the VM RAM size is less than or equal to 4 GiB (irrespective of the allocate_pg_hugepage flag value), execute the shell script configure_hugepages.sh on the Postgres VMs.Before executing the script, consider the following: Run the script before starting to use the database.If the memory size is less than 4 GiB, do not modify the fraction value in the shell script.If the memory size is greater than 4 GiB, you may modify the value of the fraction, but keep it between 0.1 and 0.8. Set allocate_pg_hugepageThe flag allocate_pg_hugepage can be set in the provisioning operation in two ways: Era API:In the provisioning payload, set the value of allocate_pg_hugepage to true. { Era CLI: SSH into the Era server.Type the command era to enter Era CLI mode. [era@era]$ era Generate an output file (for example, provision.json) for the provision operation. Use the tab key for help with the different fields that are required for generating the output file. era > database provision engine=postgres_database provision_instance generate_input_file single_instance create_dbserver=true software="POSTGRES_10.4_OOB" compute="DEFAULT_OOB_COMPUTE" network="DEFAULT_OOB_POSTGRESQL_NETWORK" nx_cluster_name="EraCluster" db_parameter="DEFAULT_POSTGRES_PARAMS" output_file=provision.json Exit Era CLI mode by typing the command exit. era > exit Open the generated output file. [era@era]$ vi provision.json Set the value of allocate_pg_hugepage to true and save the file. Before saving the file, set the values for the other required fields. { Type the command era to enter Era CLI mode again.Provision using the updated output file, passing the required fields and values. Use TAB for help with this. era > database provision engine=postgres_database provision_instance name=xxx input_file=provision.json sla="NONE" The above step will provide a command to monitor the provision operation. Use that command to track the provisioning operation. How to execute the configure_hugepages.sh script Log in to the DB server VM as an OS user who has sudo privileges.Download the configure_hugepages.sh https://download.nutanix.com/kbattachments/14651/configure_hugepages.sh script and save it in the /tmp directory.Run the script: [user@dbvm]$ sh /tmp/configure_hugepages.sh The script output ends with the statement "Script execution Done". Repeat the above steps on all DB server VMs if it is a PostgreSQL HA provisioning. Validate the changes If configure_hugepages.sh was used to configure HugePages, follow the steps below to confirm that it has been applied successfully.
[user@dbvm]$ grep HugePages_ /proc/meminfo The output will be similar to the illustrative example shown in the description above; the values might differ, but HugePages_Total and HugePages_Free will be non-zero. If HugePages has not been configured, HugePages_Total and HugePages_Free will both show 0 in the output of the same command.
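For reference, a minimal sketch of where allocate_pg_hugepage sits in the Era API provisioning request body. The surrounding fields are omitted, and the structure shown (for example, the actionArguments list of name/value pairs) is an assumption to be verified against the NDB API reference for your version:

{
  "...": "other provisioning fields omitted",
  "actionArguments": [
    { "name": "allocate_pg_hugepage", "value": true }
  ]
}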
KB14632
NDB | Manual addition of disks to staging drive of a DBVM causes Time Machine operation failures
This KB addresses the TM issues faced when disks are manually added to staging drive of DBVMs from Prism
NDB TM functions such as Snapshot, Log Catchup, and Copy Logs encounter errors when disks are added manually to the staging drive of DBVMs through Prism. This is due to the failure to record the drive addition in NDB metadata, preventing NDB from recognizing the newly added disk. Additionally, any manual modifications to existing disk configurations on DBVMs or other entities owned by NDB can lead to the same problem. NOTE: Customers should refrain from independently adding disks to the staging drive of NDB-managed DBVMs, as the NDB metadata system will not register the new disk in such cases. Scenario: Disk(s) are added by the customer to the staging drive of the DBVM. The newly added disk(s) could be partitioned by the customer as well. IMPORTANT: If the staging drive is filling up, it will be auto-extended by NDB automatically without the need for manual intervention, based on the value of AUTO_DB_STAGE_SIZE_CAP_MB in the eraconfig (default = 512 GB). However, if after auto-extend the staging drive size has reached the set threshold value, or if auto-extend would cause the staging drive size to cross the threshold if carried out, the auto-extend task will fail. If this happens, customers might resort to adding disks by themselves. Symptoms: 1. Copy Log fails with (examples): Staging directory /home/era/staging_mount_base/xxxx...-...xxxx/<DBVM>/<ID> does not exist on the mounted staging snapshot Error while detaching staging snapshot. Details:'internal Error occured,failed to execute host level commands 2. Snapshot/Log Catchup operations fail with an error that either directly mentions that the staging directory does not exist, or an error that stems from the fact that NDB cannot see the newly added disk.
A possible solution is to fix the NDB metadata manually by updating the disk information. Note: If the auto-extend task fails because the threshold has been or would be reached, increasing the value of AUTO_DB_STAGE_SIZE_CAP_MB helps remediate the issue, and no manual additions or metadata alterations are required. NOTE: Pausing and resuming the TM might be effective only for the log catchup operation; Copy Logs may still fail. Another workaround is to revert the disk addition changes performed by the customer, which involves removing the PVs added to this VG/LVM to return to the original configuration. This needs to be carefully reviewed before being recommended, as removing a disk that contains data can lead to production impact and further implications. Refer to TH-10609 and ONCALL-14472 for reference. **CONSULT YOUR LOCAL STL/ENG AS CERTAIN STEPS ARE TO BE CARRIED OUT BY ENG/STL ONLY** Educate customers about the consequences of manually adding disks to DBVMs from Prism, warn against doing so, and communicate that this is an unsupported workflow. ENG-612409 https://jira.nutanix.com/browse/ENG-612409 has been filed to add this limitation to the NDB user guide. ERA-25448 https://jira.nutanix.com/browse/ERA-25448 has been filed as a future improvement to block manual changes on NDB-managed DBVMs.
KB7366
NCC Health Check: remote_site_latency_check
This check was introduced in NCC 3.9.0 and validates the latency between primary and target clusters.
The NCC health check remote_site_latency_check checks if the latency to the target cluster is less than the maximum value (5ms) allowed for AHV Synchronous Replication. If the latency is higher than 5ms, synchronous replication may fail. This check provides a summary of the latency to the target cluster(s). Running the NCC Check You can run this check as part of the complete NCC health checks ncc health_checks run_all Or you can run this check individually ncc health_checks data_protection_checks ahv_sync_rep_checks remote_site_latency_check This check is scheduled to run every 6 hours and only runs on Prism Central. An alert will be generated after a failure. Sample output For Status: PASS Running : health_checks data_protection_checks ahv_sync_rep_checks remote_site_latency_check For Status: FAIL Running : health_checks data_protection_checks ahv_sync_rep_checks remote_site_latency_check Running : health_checks data_protection_checks ahv_sync_rep_checks remote_site_latency_check Output messaging [ { "110022": "Check if the latency to target cluster is lesser than the maximum value allowed for AHV Sync Rep", "Check ID": "Description" }, { "110022": "Target cluster is unreachable or the connection to target cluster is not good", "Check ID": "Cause of failure" }, { "110022": "Ensure that the target cluster is reachable and latency is below 5ms or choose another target cluster", "Check ID": "Resolution" }, { "110022": "Synchronous Replication will be affected", "Check ID": "Impact" }, { "110022": "A110022", "Check ID": "Alert ID" }, { "110022": "Checking if latency to target cluster is lesser than the maximum value", "Check ID": "Alert Title" }, { "110022": "Latency to [remote site] is greater than 5ms", "Check ID": "Alert Message" } ]
If the check reports a FAIL status in regards to latency being too high, verify the latency between the sites. This can be done by running the ping command bi-directionally: From CVM on source cluster to CVM on remote cluster nutanix@cvm:~$ ping x.x.x.22 From CVM on remote cluster to CVM on source cluster nutanix@cvm:~$ ping x.x.x.209 You can also run the tracepath command bi-directionally to try to isolate where in the network path the latency increases or drops:From CVM on source cluster to CVM on remote cluster nutanix@cvm:~$ tracepath x.x.x.22 From CVM on remote cluster to CVM on source cluster nutanix@cvm:~$ tracepath x.x.x.209 If latency is showing above 5ms when running the commands, ensure your network environment is stable (physical switches, cabling, and device configurations) and consult with your networking team to resolve the latency issues. If latency for this particular remote site cannot be reduced, please choose another target cluster.To verify if the latency issue is intermittent or persistent, you can review the /home/data/logs/sysstats/ping_remotes.INFO logs on each cluster. Output within the ping_remotes.INFO log will be similar to the following (the IP in the log will be the remote cluster VIP): #TIMESTAMP 1567796889 : 09/06/2019 07:08:09 PM Review the log and see if you can find any patterns for high network latency observed every hour, and so on. If there is a pattern for high network latency, try to isolate what could be causing the latency spikes. If the check reports a FAIL status in regards to the latency not being found, this indicates that the connection to the remote site is unreachable. The remote site can become unreachable if the following occurs: 1. The remote cluster becomes unstable2. The remote PC VM becomes unstable3. The Availability Zone (AZ) gets disconnected To verify if the issue is with the PC VM or the remote cluster (target cluster), you can review the ping_remotes.INFO log on the source cluster. If the issue is with the remote cluster, messages similar to the following will be displayed within the log: #TIMESTAMP 1567800183 : 09/06/2019 08:03:03 PM If unreachable messages are seen in the log, please check on the status of the remote cluster and also verify that there are no connectivity issues with the remote cluster Virtual IP Address (VIP). The VIP can be viewed by logging into Prism Element on the remote cluster and clicking on the cluster name - this will display the 'Cluster Details' - verify that the IP listed for the 'Cluster Virtual IP Address' matches the IP address shown in the ping_remotes.INFO log of the source cluster. If there is a discrepancy with the IP addresses, consult with your network team to verify any recent networking changes. If the remote cluster VIP has recently changed, the Protection Policy for this remote cluster will need to be recreated. Steps for creating a Protection Policy can be found in the "Creating a Protection Policy" section of the Nutanix Disaster Recovery Guide: https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-create-asynchronous-protectionpolicy-pc-t.html https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-create-asynchronous-protectionpolicy-pc-t.htmlIf the log is not showing unreachable messages, then the issue is either with connectivity to the PC VM or the AZ has become disconnected. You can check the connectivity to the PC VM by running the ping command. 
If the ping command is unsuccessful, verify that the PC VM is powered on and check to see if there have been any recent networking changes. If the ping command is successful, check the status of the Availability Zone. Log into both the source and remote PC to make sure the AZ is configured and showing a status of reachable. You can access this information by typing "Availability Zones" into the search bar or by going to the menu and selecting Administration > Availability Zones. Steps for creating an AZ can be found in the "Pairing Availability Zones" section of the Nutanix Disaster Recovery Guide: https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-pair-availabilityzones-pc-t.html https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-pair-availabilityzones-pc-t.html In case the mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/ http://portal.nutanix.com/. Additionally, gather the following command output from the Prism Central VM and attach it to the support case: nutanix@cvm:~$ ncc health_checks run_all
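For reference, when measuring inter-site latency with ping as described above, the values to compare against the 5 ms threshold appear in the time= field. Illustrative output (the addresses and values are examples only):

nutanix@cvm:~$ ping -c 3 x.x.x.22
PING x.x.x.22 (x.x.x.22) 56(84) bytes of data.
64 bytes from x.x.x.22: icmp_seq=1 ttl=64 time=7.41 ms
64 bytes from x.x.x.22: icmp_seq=2 ttl=64 time=6.98 ms
64 bytes from x.x.x.22: icmp_seq=3 ttl=64 time=7.23 ms

In this example, every round trip exceeds 5 ms, so the check would fail for this remote site.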
KB13966
HPDX - HPE RM5 Firmware - HPD7
HPE has released new firmware to mitigate RM5’s false positives upon which a drive goes into write cache disable mode. In this state, drive performance is heavily impacted and is removed from Nutanix Storage Stack.
**HIGHLY CONFIDENTIAL - Only for customer handling. Don’t share any technical details with the Customer, Partner, or outside Nutanix**HPE has released new firmware to mitigate RM5’s false positives upon which a drive goes into write cache disable mode. In this state, drive performance is heavily impacted and is removed from Nutanix Storage Stack.To check if you are running HPDX RM5 drives, please check the drive model number is VO003840RWUFB.This particular issue is also tracked with the help of NCC check disk_status_check; to read more about NCC check, refer to KB-8094 http://portal.nutanix.com/kb/8094. Please be aware of the issue reported in ENG-502107 https://jira.nutanix.com/browse/ENG-502107, where this NCC check doesn't alert on recent NCC versions. To read more about the Kioxia RM5 SSD Premature failure condition, please refer to KB 11804 http://portal.nutanix.com/kb/11804.HPE released HPD6 in Aug '21 to mitigate these issues. However, despite these changes, we observed some rare re-occurrences of the issue on the firmware version HPD6.In Oct-Nov '22, HPE released a newer firmware version HPD7 for the Kioxia RM5 drive, which HPE qualified and integrated as a part of their Disk Firmware module.
HPE has a public customer advisory describing the issue and requesting their customers to move to HPD7: https://support.hpe.com/hpesc/public/docDisplay?docId=a00118276en_us&docLocale=en_US https://support.hpe.com/hpesc/public/docDisplay?docId=a00118276en_us&docLocale=en_USHPD7 firmware is qualified by Nutanix http://jira.nutanix.com/browse/HPDX-2195 to run on the Nutanix Platform and is officially supported. However, HPE and Nutanix mutually agreed not to place HPD7 on LCM. Nutanix would continue to provide HPD6 FW in LCM.HPD7 firmware will be available based on the recommendation from HPE support and would be obtained directly from HPE with upgrades performed using HPE's available tools: https://support.hpe.com/connect/s/softwaredetails?language=en_US&softwareId=MTX_39b703a23c0b47b28dc4143495&tab=revisionHistory https://support.hpe.com/connect/s/softwaredetails?language=en_US&softwareId=MTX_39b703a23c0b47b28dc4143495&tab=revisionHistoryWhile Nutanix supports HPD7 FW from HPE, the support policy on clusters with the latest firmware would follow the general guidelines. Any Firmware related issues, issues during manual FW upgrade, or any Hardware related issues will be handled by HPE support. Nutanix will only address software issues on the cluster.
KB2282
NCC Health Check: check_failover_cluster
The NCC health check check_failover_cluster detects if the Microsoft Failover Cluster has been configured with any of the host disk drives, which is an incorrect configuration.
The NCC health check check_failover_cluster detects if the Microsoft Failover Cluster has been configured with any of the host disk drives, which is an incorrect configuration. This condition may cause an outage at a future time, as it will prevent disks from being available to the Controller VMs (CVMs) for the Nutanix storage if the Controller VMs and hosts are rebooted. Therefore, this is a critical issue that needs to be resolved as soon as possible. Running the NCC Check It can be run as part of the complete NCC check by running the following command from a Controller VM (CVM) as the user nutanix: ncc health_checks run_all or individually as: ncc health_checks hypervisor_checks check_failover_cluster You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 5 minutes.This check will generate a critical alert A106457 after 1 concurrent failure across scheduled intervals. Sample output For Status: FAIL (Scenario 1) Node x.x.x.x: For Status: FAIL (Scenario 2) Node x.x.x.x: Related symptoms: There is an associated critical alert that is also observed in the Web Console (Prism) or in the following command output: nutanix@cvm$​ ncli alert ls Problem signature The Nutanix Hyper-V host lists Cluster Disk* resources from the following PowerShell command if these resources have been added. You must have no Disk entries. C:\> PowerShell Note: If the Controller VM and the host are rebooted, the Controller VM is unable to boot and displays the state in Hyper-V as Running, but the console shows Kernel panic and message checking for ./nutanix_active_svm_partition but does not finish booting because the Controller VM boot disk is being held. Disks must never be added to the cluster for Nutanix in the Microsoft Failover Cluster Manager. Disks are reported as Failed or Offline after trying to add them (with Information Clustered storage is not connected to the node). Output messaging [ { "Check ID": "Check if the Hyper-V failover cluster is properly configured" }, { "Check ID": "Hyper-V failover cluster is not configured with host disk drives." }, { "Check ID": "Refer to KB 2282." }, { "Check ID": "It can cause outage as it will prevent disks from being available to CVMs after CVM and Host reboot." }, { "Check ID": "A106457" }, { "Check ID": "Hyper-V failover cluster is not configured with host disk drives." }, { "Check ID": "Failover cluster creation failure" }, { "Check ID": "Hyper-V failover cluster is not configured with host disk drives." } ]
The Microsoft Failover Cluster contains the Nutanix Hyper-V hosts as Nodes, but should never have Storage (no Disks or Pools) added, even though the disks are listed in the Microsoft Failover Cluster Manager as available and selected to add by default. FAIL (Scenario 1): Do the following to fix the FAIL issue corresponding to your AOS version. Log in to portal.nutanix.com https://portal.nutanix.com.Refer to Creating a Failover Cluster for Hyper-V https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:hyp-hyperv-failover-cluster-t.html in the Hyper-V Administration for Acropolis https://portal.nutanix.com/page/documents/details?targetId=HyperV-Admin-AOS:HyperV-Admin-AOS guide to create the Microsoft Failover Cluster or to re-add the node(s) to the Failover cluster Note: this check may also fail if there is an issue with the Hyper-V host connecting to the domain controller for previously configured failover clusters. An indication of this may be receiving the following error when running the workflow to create a failover cluster: ERROR_NO_LOGON_SERVERS Further checks can be made from the command prompt on the Hyper-V host: C:\>nltest /server:dc-name.domain.com /query where dc-name.domain.com is a domain controller. Ensure connection to an Active Directory Domain Controller.If there is an issue with Windows RPC, this is seen in the Windows event viewer, or the PowerShell command Get-ADDomainController fails: PS C:\> Get-ADDomainController Look for output showing a Domain Controller or a message such as "RPC server unavailable" By resolving these issues, any issues relating to a non-configured failover cluster may also be resolved, since the hosts can then contact an Active Directory Domain Controller. FAIL (Scenario 2): Run the following PowerShell command from any one of the Nutanix Hyper-V hosts in the failover cluster to remove all the Cluster Disk* resources: PS C:\> Get-ClusterResource "Cluster Disk*" | Remove-ClusterResource If the FAIL status prevents one of the Controller VMs from booting successfully, cluster status displays that the Controller VM is Down. You may verify the FAIL state on the Hyper-V host by connecting to the Controller VM console. If you are using the RDP (mstsc) or IPMI remote console on the Hyper-V host, use the following PowerShell command to see the Controller VM console and to verify that the Controller VM has not booted. PS C:\> Connect-CVM If the condition prevented the Controller VM from booting, the screen displays a kernel panic, and you cannot log in. The Controller VM is also unreachable over SSH. Checking /dev/sdb1 for /.nutanix_active_svm_partition If the Controller VM is down, stop and restart it. PS C:\> Stop-VM *CVM Enter local computer credentials when asked. Run the following command on any Controller VM to confirm that all the CVM services are up. cluster status Run the following command on any CVM, and if any Controller VMs were down during this recovery, run it after the Controller VM has been up at least 5 minutes, to confirm that there are no other issues. ncc health_checks run_all Note: It is possible that the Nutanix Cassandra service detaches the Controller VM node temporarily from the Cassandra ring if the Controller VM was down for 120 minutes or more. You may see cassandra_status_check and cassandra_log_crash_check messages reported. Follow the KBs listed in the NCC details for the checks. 
If the issue is not resolved or if you have any queries, consider engaging Nutanix support at https://portal.nutanix.com https://portal.nutanix.com.
""ISB-100-2019-05-30"": ""Title""
null
null
null
null
KB8667
How to configure a customized http port for VMware vCenter
Use of non-default http port in vcenter
Most customers use the default HTTP port 80 for their vCenter. In some cases, a customer would like to change this port to a different port, which is not possible to achieve when registering vCenter in Prism. To support this integration, the vcenter_http_port gflag must be changed for the Uhura and Acropolis services.
To change the port from a Nutanix perspective, we need to change the two gflags for Uhura and Acropolis. Please make sure the cluster status is UP and data resiliency is OK. For changing gflags, we will follow KB-1071:"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)" Uhura Gflag Change: Step 1: change the gflag for Uhura (the parameter all_future_versions indicates that the change persists across upgrades) nutanix@cvm$ ~/serviceability/bin/edit-aos-gflags --service=uhura --all_future_versions Step 2: search for the attribute: vcenter_http_port vcenter_http_port : The vCenter Server HTTP port : int :: 80 Step 3: Change the gflag to the desired value vcenter_http_port : int : 80 :: 180 (In our case the default port 80 was changed to 180) Step 4: Restart Uhura (restarts are non-disruptive) nutanix@cvm$ allssh genesis stop uhura; cluster start Acropolis Gflag Change: Step 5: change the gflag for Acropolis (the parameter all_future_versions indicates that the change persists across upgrades) nutanix@cvm$ ~/serviceability/bin/edit-aos-gflags --service=acropolis --all_future_versions Step 6: search for the attribute: vcenter_http_port vcenter_http_port : The vCenter Server HTTP port : int :: 80 Step 7: Change the gflag to the desired value vcenter_http_port : int : 80 :: 180 (In our case the default port 80 was changed to 180) Step 8: Restart Acropolis. Perform the checks described in KB 12365 http://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis. nutanix@cvm$ allssh genesis stop acropolis; cluster start Verification: Step 9a: Verification of the gflag changes via the edit-aos-gflags script: nutanix@cvm$ ~/serviceability/bin/edit-aos-gflags edit-aos-gflags 2019-11-21 15:15:40 INFO zookeeper_session.py:131 edit-aos-gflags is attempting to connect to Zookeeper Step 9b: The NCC health check also shows the edited gflags. nutanix@cvm$ ncc health_checks system_checks gflags_diff_check Running : health_checks system_checks gflags_diff_check In NCC releases older than NCC-4.0.1, the below alert will still be raised, as the port was hardcoded in the check. ID : 6ee3c497-3684-4424-8d7a-1b8cb507d934
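To additionally verify that vCenter is reachable on the custom port from the cluster, a simple connectivity test can be run from a CVM. The hostname and port below are examples; substitute your vCenter FQDN and the configured port:

nutanix@cvm$ curl -v --max-time 5 http://vcenter.example.com:180/

A TCP connection being established (even if the HTTP response is a redirect or an error page) confirms the custom port is reachable from the CVM network.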
""Latest Firmware"": ""14.24.1000""
null
null
null
null
KB5916
X-Fit - What it is and why it is required for Capacity Planning
Introduction to X-Fit and features enabling.
What is X-Fit?This is a machine learning patent-pending technology developed at Nutanix. For more details, see Seasonal X-FIT: Seasonal Time series Analysis and Forecasting using Tournament Selection http://go.nutanix.com/rs/031-GVQ-112/images/nutanix-xfit.pdf.Where is it used?It is used in Capacity Planning for predictive analysis.Requirements for the X-Fit feature: A valid Prism Pro license with all features enabled. Follow the Licensing Guide https://portal.nutanix.com/page/documents/details?targetId=Licensing-Guide:lic-licensing-display-features-t.html to check if the required license features are enabled.Prism Central 5.5 onwards (where the feature was introduced).Capacity Planning with X-Fit requires 26 GB Prism Central VMs, which is the default for new deployments. For an upgrade, you need to increase the memory. Refer to the Prism Central Guide for instructions.
When I click the Planning tab in Prism Central, I see a message that X-Fit needs to be enabled. How do I enable it? Validate that the Prism Central Pro license is applied and valid. This could also be due to Prism Central not having enough memory; in that case, a banner appears at the top of the Prism page stating that X-Fit is disabled due to memory. Engage Nutanix Support for assistance with validating the memory requirements. Note: This banner may be hidden under other banners at the top of the page. You may need to close the other banners first to view this.
KB7896
File Analytics - Unable to connect to File Server or Prism
Error in the File Analytics user interface. This issue is resolved in File Analytics version 2.0.1 and above. Upgrade File Analytics to the latest version.
File Analytics (FA) UI fails with the error: Unable to connect to File Server Or: Unable to connect to Prism
Note: If CAC authentication is enabled on PE, File Analytics will not be usable due to DoDIN requirements that will disable the file_analytics local user.This issue is resolved in File Analytics version 2.0.1 and above. Upgrade File Analytics to the latest version. If the upgrade is not possible, use the below steps to work around the issue: If the error is observed for Prism Credentials "Unable to connect to Prism", then: Log in to Prism.Click on the Gear Icon at the top right corner to open the settings page.Scroll down to the Users and Roles section on the left pane, then click on the Local User Management settings.Verify that the user “file_analytics” exists. If it does not exist, use steps 5 - 8 ONLYIf it does exist, use steps 5 and 9 ONLY Do the following: Log in to the FA VM CLI using the below credentials: User: nutanixPassword: nutanix/4u Execute the following command to retrieve the password: # /opt/nutanix/analytics/bin/retrieve_config If the user does not exist: In the example above, the password for the file_analytics user is Nutanix/4u990.Now switch back to the Local User Management settings page on Prism from step 3.Create a new user with the below details: Username: file_analyticsFirst Name: file_analyticsLast Name: file_analyticsEmail: [email protected]: Nutanix/4u990 (as per the above example)Language: en-USRoles: User Admin, Cluster Admin If the user already exists: nutanix@CVM$ ncli user reset-password user-name=file_analytics password=<password retrieved from FA output> Example: nutanix@CVM$ ncli user reset-password user-name=file_analytics password=Nutanix/4u990 If the error is observed for File Server credentials, engage Nutanix Support https://portal.nutanix.com/.
KB15237
Curator scans failing due to underlying network issues
This KB describes a Curator scan failure due to underlying network issues
AOS clusters on ESXi might sometimes encounter Curator scans failing due to underlying network issues: netstat from the CVMs reports too many checksum errors. These errors are seen on all the CVMs. "esxcli network nic stats" does not report as many errors as seen in the CVM; the host is probably not recording all the errors. No DIMM errors are seen on the hosts. The rate of increments on all hosts points to an issue at the switch level. Why does Curator fail when it encounters a network issue? The Curator crash happens due to a bad checksum, which causes Curator to fail checksum validation for incoming MapReduce records from other nodes in the cluster. Curator map/reduce tasks emit data to be consumed by downstream reduce tasks that run on all nodes. On every emit, before persisting the records (key/value pairs) on local Curator disks, a checksum is computed; when a reducer reads input fetched from upstream tasks on all nodes, this checksum is validated, and Curator asserts when validation fails. We can get the checksum errors from the CVM: nutanix@CVM:~$ allssh 'netstat -s|grep sum' In the next 20 minutes: nutanix@CVM:~$ allssh 'netstat -s|grep sum' Curator crashes with the following signature on most of the CVMs: F1021 18:13:51.230080 18831 task_input_reader.cc:151] Check failed: MapReduceEmitterSpec::ValidateChecksum(emitter_type_, record) input_set_name_=ExtentGroupIdReduceTaskOidMetadata, input_pathname_=tasks/83992.r.27.ExtentGroupIdOwnerReduceTask/input/1.ExtentGroupIdReduceTaskOidMetadata/in Checksum validation fails on all CVMs: ================== x.x.x.x ================= From the Curator logs we can also notice an unsupported checksum type: F1017 09:22:10.849273 316 mapreduce_emitter_sec.h:170] Unsupported checksum type: 32769
Receive missed errors are found on different vmnic network cards of all hosts in the cluster: root@xxxxxx01A:~] esxcli network nic stats get -n vmnic0 | egrep "Total receive errors|Receive CRC errors|Receive missed errors" [root@xxxxxx01B:~] esxcli network nic stats get -n vmnic0 | egrep "Total receive errors|Receive CRC errors|Receive missed errors" [root@xxxxxx01C:~] esxcli network nic stats get -n vmnic0 | egrep "Total receive errors|Receive CRC errors|Receive missed errors" [root@xxxxxx01D:~] esxcli network nic stats get -n vmnic0 | egrep "Total receive errors|Receive CRC errors|Receive missed errors" [root@xxxxxx01E:~] esxcli network nic stats get -n vmnic0 | egrep "Total receive errors|Receive CRC errors|Receive missed errors" Resolve physical-layer issues: investigate the network for any drops and physical-layer problems. Some useful links: NCC Health Check: host_nic_error_check: http://portal.nutanix.com/kb/1381 NCC Health Check: host_rx_packets_drop: http://portal.nutanix.com/kb/2883 In this case, redundant NIC replacements had already been performed, and the upstream switch was replaced. After the customer replaced the upstream switch, Curator ran stably. (case # 1293611 https://nutanix.my.salesforce.com/5007V000028aWA7QAM)
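To confirm whether the checksum errors are still actively increasing (as done above over a 20-minute window), a simple sampling loop from any CVM can help; this is a minimal sketch using standard shell tools and the same netstat filter as above:

nutanix@CVM:~$ for i in 1 2 3 4 5; do date; allssh 'netstat -s|grep sum'; sleep 300; done

Each iteration prints a timestamp followed by the per-CVM checksum counters, five minutes apart; steadily growing counts indicate the network issue is ongoing.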
KB15812
AOS_CVE_PATCH entity is visible in LCM inventory after upgrading to LCM-2.7
This article describes the entity "AOS_CVE_PATCH" in Inventory after upgrading to LCM version 2.7.
After updating LCM to LCM-2.7 on Prism Element, we can observe a new entity AOS_CVE_PATCH in the Inventory UI page. Example: Note: The "AOS_CVE_PATCH" is only available in inventory UI, and there are no available updates for it.
Ignore the AOS_CVE_PATCH entity in the LCM inventory. This issue has been fixed in LCM 3.0. LCM 3.0 or above does not list this entity in the inventory.
KB5352
LCM: Firmware update fails with "FAILED: Input/output error and returncode 5"
This KB will help identify hardware failures from log signatures.
Hardware firmware update fails on LCM (Life Cycle Manager) with below error: Operation failed. Reason: Command (['/home/nutanix/cluster/bin/lcm/lcm_ops_by_cvm', '102', '301', '4f63d356-246e-4191-a210-e62aa9543423', '9e6b8972-1a83-4a1f-8d88-69a90875c612', '5301baaf-1c86-46ce-8995-7cc5e64d3704', 'c8a242d5-405f-402c-863a-7cef0ffcec49', 'd89dc5ab-4b34-4672-b33f-f3500eeed784']) returned 1. On the LCM leader, you will see the below signature in ~/data/logs/lcm_ops.out. NOTE: You can find the LCM leader by running the command lcm_leader from any CVM in the cluster. 2018-03-09 07:52:27 INFO lcm_ops_by_cvm:161 State [1002], Handler [_perform_operation_by_cvm], CVM - [x.x.x.x]
FAILED: Input/output error and returncode 5 The above message indicates this is a hardware issue. Confirm whether there are any obvious hardware failures in the environment. If you are unable to identify any hardware failures, contact the hardware vendor for diagnostics.
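As a first-pass hardware triage sketch (assuming ipmitool is available on the host; exact syntax and availability vary by platform and hypervisor), the IPMI System Event Log often surfaces failed components such as disks, DIMMs, or PSUs:

[root@host]# ipmitool sel elist

Review the most recent entries for hardware fault events around the time of the LCM failure.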
KB9393
ixgben modules parameters(RxDesc) are not getting loaded post ESXi Upgrade
The ixgben module parameters (RxDesc) are not loaded after an ESXi upgrade, and the parameter cannot be manually modified to zero.
Customers upgrading ESXi might notice that the NIC drivers fail to load post-upgrade. There are two scenarios a customer may hit.Scenario 1: A customer who upgraded from 6.7U3 or 7.0U2 (any version of 7.0) will see that networking to the host is lost. From the IPMI console, ESXi reports "No compatible network adapter found. Please consult the product's Hardware Compatibility Guide (HCG) for a list of supported adapters."Logging into the ESXi shell from the IPMI virtual console and listing the installed drivers on the upgraded host, we see only the ixgben driver present, and it is reported as not loaded [root@esxi]# esxcli system module list | egrep 'Name|ixgb' Attempts to load the driver module fail with the error "Cannot load module ixgben: Bad parameter"Checking /var/log/vmkernel.log, we see the error "Module parameter RxDesc is not valid for module ixgben." Checking the nodes that have not been upgraded, we see all the existing nodes run the ixgbe driver. This issue happens when, on the advice of Nutanix Support in the past, the customer set the RxDesc parameter of the ixgben driver to mitigate rx_missed error alerts. In newer versions of the ixgben driver, the parameter is retired, and from ESXi 7.0 the ixgbe driver is not present in the ESXi upgrade bundle; hence the driver module defaults to ixgben, but it cannot be loaded due to the RxDesc parameter set in the past.Scenario 2: We have seen an issue after an ESXi upgrade from ESXi 6.5.0 build-5969303 to ESXi 6.5.0 build-13932383. The ESXi host ixgben module parameters (RxDesc) are not loaded, and the parameter cannot be manually modified to zero. The ixgben driver was upgraded from 1.4.1-2vmw.650.1.26.5969303 to 1.7.1.15-1vmw.650.3.96.13932383 with the ESXi upgrade. The ixgben drivers are not loaded but enabled. [root@esxi]# esxcli system module list | egrep 'Name|ixgbe' The ixgben driver is enabled but not loaded, so there are network connectivity issues accessing the host over the network (only the IPMI console allows access to the host). Or the ixgben drivers are not loaded and not enabled [root@esxi]# esxcli system module list | egrep 'Name|ixgbe'
Solution 1: On the upgraded host 1. From the IPMI virtual console, log in to the ESXi shell and clear the custom parameters with the command below: [root@esxi]# esxcli system module parameters clear --module ixgben NOTE: The above command is only present on the upgraded host, not on the hosts that are yet to be upgraded. 2. Issue the following commands to enable and load the driver: [root@esxi]# esxcli system module set --enabled=true --module=ixgben 3. Confirm both drivers are loaded and enabled: [root@esxi]# esxcli system module list | egrep 'Name|ixgbe' Please Note: If the host is in Maintenance Mode, the changes may not take effect; exit Maintenance Mode in order for the changes to be reflected upon host reboot. vim-cmd /hostsvc/hostsummary | grep inMaintenanceMode vimsh -n -e /hostsvc/maintenance_mode_exit Once the parameter is cleared, reboot the ESXi host and confirm networking loads fine. On hosts waiting to be upgraded: On the hosts that have not yet been upgraded, ensure the current NICs are using the ixgbe driver, then run the following command to get the custom parameters set for the ixgben module. You will see output like below, showing RxDesc set to 4096 (in this example, but it can be any value). [root@esxi]# esxcfg-module -g -m ixgben Now clear the custom parameter with the following command [root@esxi]# esxcfg-module --set-options "" -m ixgben Now run the module parameter get command again to confirm that RxDesc is cleared, as seen in the output below: [root@esxi]# esxcfg-module -g -m ixgben Please Note: the -m option for esxcfg-module might fail in later ESXi releases and can be dropped: [root@esxi] esxcfg-module -g ixgben The upgrade should go through without any intervention now. Solution 2: Changing to use the ixgbe driver instead of the ixgben driver will resolve the issue. How to change the driver from ixgben to ixgbe 1. Check the current status: The ixgben drivers are not loaded but enabled. [root@esxi]# esxcli system module list | egrep 'Name|ixgbe' The ixgben drivers are not loaded and not enabled. [root@esxi]# esxcli system module list | egrep 'Name|ixgbe' 2. Disable the ixgben driver: [root@esxi]# esxcli system module set --enabled=false --module=ixgben 3. Enable the ixgbe driver: [root@esxi]# esxcli system module set --enabled=true --module=ixgbe 4. Restart the ESXi host to reflect the changes.
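After the reboot, the NIC and driver state can be confirmed with standard ESXi commands; the following is illustrative:

[root@esxi]# esxcli network nic list
[root@esxi]# esxcli system module list | egrep 'Name|ixgbe'

The nic list output should show the vmnic devices bound to the expected driver with a link status of Up.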
KB8977
LCM - Ergon Exception: Failed to send RPC request
This KB describes about the LCM task marking as failed due to Ergon Service issue.
During an LCM operation to update any firmware, BIOS, or BMC, the upgrade task is marked as failed. The host might get stuck in Phoenix on older LCM versions.The LCM upgrade task in Prism is marked as failed with the below message. Operation failed. Reason: LCM operation kLcmUpdateOperation failed on phoenix, When we look further into the lcm_ops.out log on the LCM leader, we see the following traceback. 2020-05-17 22:00:11,724 INFO ergon_utils.py:200 [kLcmUpdateOperation] [Phoenix] [xx.xx.xx.36] [eb6df9bb-0fbc-4a1a-9fd1-f9386a7fb28c] Task: Updating task eb6df9bb-0fbc-4a1a-9fd1-f9386a7fb28c with state LcmOpsByPhoenixStates.READY, message Finished to execute pre-actions on phoenix, status kRunning When we look further into the ergon.out log on the LCM leader, we see the below traceback. 2020-05-17 21:54:03 INFO cpdb_watch_client_adapter.py:137 Waiting for reset event A failed Ergon update caused an RPC request timeout, which led to the failure of the LCM task. Along with the above-mentioned symptoms, you will also see insights_server.FATAL on one or more nodes with the below signature. ================== X.X.X.X ================= "Watch dog fired" occurs because of a deadlock condition in the cluster, and this causes the task to fail.
The issue on the IDF side is fixed in AOS 5.15.2 and 5.10.11 - this is tracked under ENG-316218 https://jira.nutanix.com/browse/ENG-316218.Note: If you do not see "Watch dog fired" on any node but have the other symptoms, there might be some other issue. Please file a new ENG and make sure to collect the LCM log collector bundle ( KB-7288 https://nutanix.my.salesforce.com/kA00e0000009CUZ?srPos=1&srKp=ka0&lang=en_US) and Insights logs using Logbay for further analysis.If the node is stuck in Phoenix, follow the instructions in KB-9437 https://portal.nutanix.com/kb/9437 to restore the node from Phoenix and bring it back online in the cluster.Once the node is recovered, perform an LCM inventory and verify that the firmware was upgraded to the latest version. If not, re-attempt the upgrade.
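The logs referenced above (lcm_ops.out, ergon.out) reside on the LCM leader, which can be identified from any CVM:

nutanix@cvm$ lcm_leader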
KB7892
Cassandra in crash loop due to missing rack awareness information in zeus or expand cluster not working.
This KB describes the scenario of Cassandra being in a crash loop on a new node (new block) after being added into a cluster using ncli and rack awareness enabled or if rackable unit information is modified while a node is detached from cassandra
Background: After adding a new node to a cluster, Cassandra will be in a crash loop on the newly expanded node. Cassandra on other nodes is stable. In some cases, the Cassandra and dynamic ring services are stable.Prism might display an error saying “metadata dynamic ring change operation stuck" and will eventually display errors and become unresponsive.The cluster expansion task will show success, but the node will not be added to the metadata ring.This scenario can also be hit without a cluster expand operation being involved if the following conditions are met: Customer has racks configured in the rack configuration pageNode is detached from the Cassandra metadata ring (for example due to maintenance or parts replacement)/etc/nutanix/factory_config.json is updated to change the rackable unit serial number (due to a node or chassis replacement)Cassandra is restarted (for example, due to a CVM reboot)This correlates with scenario 1 from this KB and the corresponding workaround can be followed. OR Expand Cluster fails to add new nodes NOTE: In TH-6292 it was noted that Cassandra FATALed with the signature Check failed: Configuration::CheckIfDomainConfigIsComplete(config_proto) logical_timestamp: 64010 even if there was no rack awareness configured or auxiliary_config.json file in the system. This can happen during a node removal / adding process due to a race condition where the infra part is not populating Zeus correctly. ENG-392050 https://jira.nutanix.com/browse/ENG-392050 has been opened to request additional logging as this has not been fully RCA'd yet. Ensure to rule out all the possible scenarios in this KB before concluding this race condition has been hit, as it is very rare. More details can be found in TH-6292 http://jira.nutanix.com/browse/TH-6292. Identification: The Cassandra process will be in a crash loop and will only list 3 PIDs (vs. the expected 5 when the service is stable): nutanix@CVM:~$ cluster status Cassandra_monitor logs will display the following: ~/data/logs/cassandra_monitor.INFO F0723 19:26:31.218788 31968 cassandra_zeus_util.cc:1408] Check failed: Configuration::CheckIfDomainConfigIsComplete(config_proto) logical_timestamp: 64010     or W20220129 06:16:45.608163Z 7449 cassandra_zeus_util.cc:1399] Can't find cassandra status: kNormalMode for node with svm id: 269918163 in its history since the last kNormalMode status. Either node never went into this state or its state change was not part of history On the affected CVM, review dynamic_ring_changer.INFO; in this case, Cassandra is not necessarily crashing. dynamic_ring_changer.INFO:I20210501 17:16:32.385597Z 21887 dynamic_ring_change.cc:1646] Domain configuration is not complete, skipping adding the node Prism might display loading errors, and refreshing the page will not load Prism back (this does not happen in all instances, but it is a possible side effect). rack_id and rack_uuid lines will be missing from the rackable_unit_list of the Zeus configuration. See below for the different scenarios on this. 
So far, three different scenarios have been observed in the field: Scenario 1: Only the newly added block of the cluster has the rack_id and rack_uuid entries missing nutanix@CVM:~$ zeus_config_printer | egrep -i "rackable_unit|rack_" Scenario 2: All the existing blocks of the cluster have the rack_id and rack_uuid information missing, however it is present for the newly added node: nutanix@CVM:~$ zeus_config_printer | egrep -i "rackable_unit|rack_" Scenario 3: The new node has default rack configuration while the cluster does not have rack awareness configured and does not have the /etc/nutanix/auxiliary_config.json file. New node nutanix@CVM:~$ cat /etc/nutanix/auxiliary_config.json Cluster nutanix@CVM:~$ allssh cat /etc/nutanix/auxiliary_config.json
Solution: Most of the problems (as listed below) related to expanding a cluster are fixed in 5.15.6, 5.19.2, or later. Recommend customers upgrade their clusters to 5.15.6, 5.19.2, or later. They can consider 5.20.x or 6.0.x as well.Recommend customers expand their clusters using Prism only. Root Cause: There are currently several tickets addressing this situation, but the basic root cause is a disconnect between some nodes having rack_id / rack_uuid in Zeus and others not. Once cassandra_monitor starts, this disconnect is noticed by cassandra_monitor, causing the crash loop. Engineering is working on improving the logic to avoid this situation. The scenarios noted in the Description section of the KB can be hit under the following conditions: - Scenario 1: The customer has rack awareness configured and expands the cluster via ncli, which causes the new node to miss the rack_id and rack_uuid fields. Expanding via ncli does not show the rack configuration GUI where the customer is forced to insert a node into a rack. This scenario can also be hit without a cluster expand operation being involved, if the following conditions are met. Attach your case to ENG-404275 https://jira.nutanix.com/browse/ENG-404275 if you hit the problem after node or chassis replacement/relocation. Customer has racks configured in the rack configuration pageNode is detached from the Cassandra metadata ring (for example due to maintenance or parts replacement)/etc/nutanix/factory_config.json is updated to change the rackable unit serial number (due to a node or chassis replacement)Cassandra is restarted (for example, due to a CVM reboot)This correlates with scenario 1 from this KB and the corresponding workaround can be followed. Refer to ONCALL-10355 http://jira.nutanix.com/browse/ONCALL-10355 for more information. - Scenario 2: The customer does not have rack awareness configured, but the newly added node has the following file (/etc/nutanix/auxiliary_config.json) with populated rack data, and the node is expanded either via ncli or Prism. While processing this file, the rack_id and rack_uuid fields are added in Zeus while the cluster does not have rack awareness enabled. /etc/nutanix/auxiliary_config.json is a helper file that is consumed by genesis during cluster expansion to load rack placement configuration into Zeus. When it is consumed, it leaves an empty dictionary inside the file. Non-consumed file: nutanix@CVM:~$ cat /etc/nutanix/auxiliary_config.json Consumed file: {} - Scenario 3: The cluster does not have rack awareness enabled, but the new node has the following file (/etc/nutanix/auxiliary_config.json) with populated rack data, and the node is expanded via Prism. /etc/nutanix/auxiliary_config.json in the new node. new node: nutanix@CVM:~$ cat /etc/nutanix/auxiliary_config.json On cluster: nutanix@CVM:~$ allssh "cat /etc/nutanix/auxiliary_config.json"
KB12747
Failed To Snapshot Entities Alert due to stale cached vm information
It has been seen that due to stale cached vm esxi information added to a PD CG, when the vm is deleted we end up with an empty CG and Alert - A130088 - FailedToSnapshotEntities as the files in the CG no longer exist
Alert "Failed to Snapshot Entities" may be seen for VMware log files which do not exist in the location specified: ID : 5b8fc42a-9937-4b82-abc8-583a3fc25e58 Checking the files noted in the alert: If we check for the files noted in the alert, they are not found: nutanix@CVM:~$ ssh [email protected] "ls -al /vmfs/volumes/metro_01/vm01/vmware-1.log" Check the Cerebro logs to confirm that the error was generated due to a non-existent file using the below command. nutanix@CVM:~$ allssh 'zgrep "Encountered nonexistent file [filename from the alert]" ~/data/logs/cerebro.*WARN*'. Eg: nutanix@CVM:~$ allssh 'zgrep "Encountered nonexistent file /metro_01/vm01/vmware-1.log" ~/data/logs/cerebro.*WARN*' Repeat steps 1 and 2 for each file noted in the alert.
Within 24 hours, the CG which only contains invalid files is cleaned up and removed, and the alert will stop. However, this can happen for other files on other VMs; for example, in an ESXi VDI cluster, VMs are removed and deployed as needed. While the alert is valid in that files which do not exist cannot be snapshotted, if these are only VMware log files the alert can be safely ignored.Engineering is working to prevent this issue from occurring.If the alert is for any other files, or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com. Gather the following information and attach it to the support case. Collecting Additional Information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log or output from the NCC check run from Prism. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect a Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@CVM:~$ logbay collect --aggregate=true Attaching Files to the Case Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
KB5234
Citrix MCS catalog creation process skips the image preparation step resulting in VDIs that are unable to boot
Citrix MCS catalog creation process skips the image preparation step resulting in VDIs that are unable to boot due to no identity disk found.
Citrix MCS catalog creation process skips the image preparation step, resulting in VDIs that are unable to boot because no identity disk is found. Cause Sometimes while troubleshooting these scenarios, a restrictive setting that skips the image preparation step may have been configured. This can be verified by running the following PowerShell commands from any Citrix Delivery Controller: asnp citrix* This will give you the following output: imagemanagementprep_doimagepreparation=false This entry skips the image preparation step, resulting in only cloning the master image when creating the VDIs. Hence, the VDIs created are just clones and no identity disks are created, which results in unbootable VDIs.
Remove the entry above by running the following command: Remove-ProvServiceConfigurationData -name imagemanagementprep_doimagepreparation After removing, re-deploy the catalog and it should now deploy successfully.
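To confirm the override is gone before re-deploying, the current provisioning service configuration can be listed with the matching Get- cmdlet from the same Citrix PowerShell SDK (a quick sanity check; output format varies by Citrix version):

PS C:\> Get-ProvServiceConfigurationData

The imagemanagementprep_doimagepreparation entry should no longer appear in the output.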
KB7237
Failed to upgrade Foundation manually by using the Foundation java applet
null
If the node is not part of a Nutanix cluster yet, you may use the Foundation Java applet to upgrade the Foundation version on the CVM (Controller VM). UPGRADE CVM FOUNDATION BY USING THE FOUNDATION JAVA APPLET https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-v4_2:v42-foundation-upgrade-with-java-applet-t.html https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-v4_2:v42-foundation-upgrade-with-java-applet-t.html The Foundation Java applet includes an option to upgrade or downgrade Foundation on a discovered node. The node must not be configured already. If the node is configured, do not use the Java applet. Instead, update Foundation by using the Prism web console (see Cluster Management > Software and Firmware Upgrades > Upgrading Foundation in the Prism Web Console Guide). However, sometimes upgrading Foundation manually by using the Foundation Java applet may fail. Errors are like the below: Uploading the tarball failed:
Mac and Windows have had problems with default Java keys. The problem is that the Java component here does not have the ciphers that the CVM needs. We do not have a better solution yet. There are a few workaround options: Upgrade Foundation from Prism after adding the nodes to the cluster.Try it with a Linux client instead.Upload the Foundation upgrade bundle manually and upgrade from the command line ( KB 4588: Manually upgrade Foundation on a single node https://portal.nutanix.com/kb/4588).
KB15225
Prism Element (PE)- LCM inventory stuck at 0% - Index of Issues
Indexes of known issues resulting in LCM inventory stuck at 0%/LCM framework update stuck in Prism Element (PE)
This KB is designed as an index to other KBs. It should have minimal information when new scenarios are added here. SREs are encouraged to: Link your case to this KB and to the actual KB that provides the solutionWrite additional KBs on failure scenarios and update this KB going forward. Updates to this KB should focus on the required condition to hit the failure, keywords, and applicable LCM versions. Workarounds and ENG tickets should be left to the actual solution KB.
KB articles describing LCM inventory stuck at 0% and/or LCM framework update stuck on PE: [ { "KB article:": "In AOS 5.20.2.x and 6.0.2.x LCM inventory or framework upgrade gets stuck due to the Hades services not releasing open files/connections." }, { "KB article:": "In AOS 6.5.x there is a new issue with Hades services not releasing open files/connections." }, { "KB article:": "Run into an issue where an LCM inventory stalls at 0%, due to \"Exception: lcm_metric_operation_v1 (kEntityTypeNotRegistered)\" in LCM leader genesis.out" }, { "KB article:": "When performing inventory in AOS 5.10.x or older versions, LCM UI displays \"LCM framework is not yet started,\" and the inventory task stops at 0% after performing inventory. This is because older AOS versions do not have the modules for recent LCM to work" }, { "KB article:": "LCM leader getting returned \"None\" faced in LCM 2.4.1.3 and AOS 6.0 or pc.2021.5 configuration, where we observe the LCM inventory is stuck at 0% and \"Waiting for LCM framework to start\" in Prism LCM UI" }, { "KB article:": "LCM Inventory tasks are getting stuck due to empty LCM root task" }, { "KB article:": "Possible LCM framework file corruption caused the LCM framework update stuck and the LCM root task stuck at 0%" }, { "KB article:": "LCM framework update stuck due to the LCM version being inconsistent across the CVMs" }, { "KB article:": "LCM Framework update hung with genesis.out on downrev CVM reporting that the \"module for LCM Version X not found in catalog\"" }, { "KB article:": "LCM is stuck due to a corrupted LCM config and commands \"configure_lcm -p\" and \"lcm_upgrade_status\" fail with ERROR: Failed to configure LCM. error: 'enable_https'" }, { "KB article:": "Missing the rim_config.json and version.json files, complaining in genesis.out, cause LCM Framework update stuck" }, { "KB article:": "Cluster gets stuck for LCM framework update due to it is trying to downgrade the LCM framework to a lower version and blocks other LCM tasks" } ]
KB3541
Hyper-V - Overview of NutanixDiskMonitor service on Windows Server 2012 R2
This article provides an overview of NutanixDiskMonitor service that runs on Windows Server 2012 R2 Nutanix Hyper-V hosts.
NutanixDiskMonitor service is present on all Windows Server 2012 R2 Hyper-V hosts and is used for the following. Enumerating disks on a host.Refreshing CVM configuration in regard to disk configuration (only if CVM is not running).Starting CVM (as by default CVM is not supposed to be started when Hyper-V host is started). Note: The CVM must have Automatic Start Action set as None, otherwise NutanixDiskMonitor will not be able to reconfigure. NutanixDiskMonitor is not present on Windows Server 2016 Hyper-V hosts. SCSI controller passthrough is used like on ESXi and AHV. Automatic start action is set to Always start this virtual machine automatically for CVM.
NutanixDiskMonitor logs status messages to the 'Application' Event Log. It uses NutanixHostAgent as the Source but does not specify any event ID numbers (the Event ID is always 0). The following are sample events from a Hyper-V host showing a successful CVM startup.
OS starts (this event is from the System log). Log Name: System
NutanixHostAgent starts. Log Name: Application
NutanixDiskMonitor service starts. Log Name: Application
NutanixDiskMonitor detects physical disks. Log Name: Application
Checking CVM status. The configuration is only updated when the CVM is not running. Log Name: Application
In this case, the CVM did not have any disks attached. Log Name: Application
NutanixDiskMonitor detaches existing disks from the CVM. Log Name: Application
NutanixDiskMonitor attaches new disks. Log Name: Application
CVM has started. Log Name: Application
Verify the event from NutanixCvmConsole to see how the CVM boots. Log Name: Application
To get all events logged by NutanixDiskMonitor in the 'Application' Event Log from all nodes, run the following command on a CVM:
allssh 'winsh "Get-EventLog -LogName application -Source nutanixdiskmonitor|ft -wrap"'
To see only the X newest events, add the '-Newest X' parameter to the 'Get-EventLog' cmdlet. Example:
allssh 'winsh "Get-EventLog -LogName application -Source nutanixdiskmonitor -newest 5|ft -wrap"'
KB1747
SCVMM - slow deployment of template to VM
SCVMM VM deployment from the library may fail over from Fast File Copy to BITS over HTTPS.
Deploying VMs through SCVMM may take longer than expected (nearly 1 hour for 120 GB VMs). If you check the SCVMM VM deployment job, you can see that SCVMM uses BITS (Background Intelligent Transfer Service) over HTTPS instead of Fast File Copy (as seen below). This takes a significant amount of time, depending on the size of the VHD.
SCVMM may fail over to BITS over HTTPS for various reasons. One of these reasons is that SCVMM checks whether the share it is deploying VMs to is accessible and whether the SCVMM Run As account has write permissions on that share. The error message in the SCVMM VM Deployment job should look similar to the one below: Error (20552) This error is likely caused by an issue with the Nutanix SMI-S storage provider reported in ENG-131553: it returns an incorrect access control list (ACL) for SMB shares (Nutanix containers) to SCVMM.
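To verify share write access independently of SMI-S, a simple write/delete test can be run from the Hyper-V host against the container share. A minimal sketch, where cluster-name and ctr1 are placeholders for the Nutanix SMB cluster name and container share; ideally run it in a PowerShell session started as the SCVMM Run As account so the same credentials are exercised:
PS> Add-Content -Path \\cluster-name\ctr1\scvmm-acl-test.txt -Value "test"
PS> Remove-Item -Path \\cluster-name\ctr1\scvmm-acl-test.txt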
KB4600
NCC Health Check: vg_space_usage_check
The NCC health check vg_space_usage_check verifies that volume groups running on the Nutanix cluster are below the usage threshold limit and raise an alert if they are above the [WARN] (75%) or [CRITICAL] (90%) values.
The NCC Health Check vg_space_usage_check verifies that volume groups running on the Nutanix cluster are below the usage threshold limit, raising an alert if these volume groups are above the [WARN] (75%) or [CRITICAL] (90%) values. This check was introduced in NCC version 3.1.0 Running the NCC Check Run this check as part of the complete NCC Health Checks. nutanix@cvm$ ncc health_checks run_all Or run this check individually: nutanix@cvm$ ncc health_checks hardware_checks disk_checks vg_space_usage_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. Sample output For Status: PASS Running /health_checks/hardware_checks/disk_checks/vg_space_usage_check on all nodes [ PASS ] For Status: WARN Running /health_checks/hardware_checks/disk_checks/vg_space_usage_check on all nodes [ WARN ] For Status: ERROR Running : health_checks hardware_checks disk_checks vg_space_usage_check For Status: FAIL Running /health_checks/hardware_checks/disk_checks/vg_space_usage_check on all nodes [ FAIL ] Output messaging [ { "Check ID": "Check high space usage on volume groups" }, { "Check ID": "Excessive space usage in the volume group." }, { "Check ID": "Add more capacity to the volume group or delete some data to free storage space. The Guest OS and/or the application using the volume group should be configured to send the list of freed blocks to AOS using SCSI UNMAP command. This can be done as part of in-band or out of band TRIM operation(s). Refer to 'Thin Provisioning' section in the Nutanix Volumes guide." }, { "Check ID": "Volume Group will run out of space, and the cluster may become unable to service I/O requests to that volume group." }, { "Check ID": "A101056" }, { "Check ID": "Volume Group Space Usage Exceeded" }, { "Check ID": "Volume Group space usage for volume_group_name is at usage_pct%." }, { "Check ID": "This check is scheduled to run every hour, by default." }, { "Check ID": "This check does not generate an alert, by default." } ]
If the check reports a WARN or a FAIL status:
Clean up data from the Volume Group to regain space below the threshold limit.
The Guest OS and/or the application using the Volume Group should be configured to send the list of freed blocks to AOS using the SCSI UNMAP command. This can be done as part of in-band or out-of-band TRIM operation(s). Refer to the 'Thin Provisioning https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2049-Nutanix-Volumes:top-thin-provisioning-.html' section in the Nutanix Volumes guide https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2049-Nutanix-Volumes%3ABP-2049-Nutanix-Volumes.
You may also expand the Volume Group as described in KB 3638 https://portal.nutanix.com/kb/3638.
If the check reports an ERROR status, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.
If the check reports an ERROR status due to the presence of a stale VG in AOS version 6.6 or later, refer to KB 13889 http://portal.nutanix.com/kb/13889 to resolve the issue.
NOTE: Backup vendors (e.g. Veeam, Rubrik, etc.) that integrate with Nutanix often create volume groups to mount snapshots and copy off their backups. It is normal for these volume groups to temporarily trigger this alert. The space usage for the VG depends on the size and usage of the VM disks attached to the VM. The names of these volume groups are often preceded by the name of the backup vendor, such as "Veeam-xxxxxxxxx-xxxxx-xxxx-xxxx-xxxxxxxxxxxxx". If you are seeing this alert for this type of VG, you can disregard this check; the check should stop failing after the backup is complete and the volume group is removed.
NOTE: If your cluster runs a Karbon version older than 2.2.1, there is no xtrim service on worker VMs, which might eventually lead to vg_space_usage_check false positives. For more information about xtrim with Karbon, see KB 10516 https://portal.nutanix.com/kb/10516. For more information on Karbon, see the Nutanix Karbon Guide https://portal.nutanix.com/page/documents/details?targetId=Karbon-v2_2:Karbon-v2_2.
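For Linux guests, an out-of-band TRIM pass can be issued manually so freed blocks are returned to AOS, as described above. A minimal sketch, assuming the volume group disk is mounted at /data inside the guest and the filesystem supports discard:
user@guest$ sudo fstrim -v /data
user@guest$ sudo systemctl enable --now fstrim.timer    # optional: periodic TRIM on systemd-based distributions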
KB3005
NCC Health Check: dns_server_check
The NCC health check dns_server_check verifies the configuration and reachability of the DNS server on the Controller VMs and the hosts.
This check has undergone several improvements over time. Ensure you are running the latest NCC version for the most complete coverage. The NCC health check dns_server_check verifies that the Name Server configuration is correct and functional on CVMs (Controller VMs), hypervisor hosts, and Prism Central (PC):
Check if Name Servers are actually configured on the cluster.
For each Name Server which is configured, from each CVM/PCVM:
Ping the Name Server from each CVM/PCVM.
Query the Name Server for the local address (127.0.0.1) as a way to prove the DNS server is responsive to the CVM/PCVM.
Confirm the Name Server in use on the CVM matches the stored Name Server(s) in the cluster-wide config (Zeus) to ensure the config is uniform across all CVMs/PCVMs.
For each Name Server which is configured, from each Host:
Ping the Name Server from each Host.
Confirm the Name Server configured on each CVM matches the stored Name Server(s) in the cluster-wide config to ensure the config is uniform across all nodes.
From each Host, query the Name Server for the local address (127.0.0.1) as a way to prove the DNS server is responsive to the Host.
If Metro/stretch cluster is enabled, perform a rudimentary check of the configured Name Servers for the potential of not being redundant in the event of an active site failover.
Periodically check and measure Name Server response times to expose online-but-unhealthy DNS server resolution issues that may cause downstream service issues.
Running the NCC Check
Run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all
Or run this check separately: nutanix@cvm$ ncc health_checks system_checks dns_server_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 12 hours, by default. This check will not generate an alert.
Sample Output
For status: WARN Running /health_checks/system_checks/dns_server_check on the node [ WARN ]
For status: FAIL Running /health_checks/system_checks/dns_server_check [ FAIL ]
Output messaging
Note: This check may report INFO for Nutanix Clusters running on AWS. You can ignore this INFO result, and it is not necessary to take any action at this time. This does not affect the functionality of the cluster. It is known that DNS servers for Nutanix Clusters on AWS may not be reachable by ping. Nutanix Engineering is aware of this issue and is working on a fix in a future release. [ { "Check ID": "Check for DNS Server configuration" }, { "Check ID": "A name server is not configured on CVMs and hypervisor hosts or is not able to resolve queries." }, { "Check ID": "Verify if the Name server is configured on CVMs and on hypervisor hosts and if it can resolve queries. Review KB 3005 for more details." }, { "Check ID": "Name resolution will not work." } ]
For status: INFO For Status: WARN For Status: FAIL In NCC 3.5.2, the check was introduced to Prism Central as well. Prism Central For Status: INFO For Status: WARN For Status: FAIL In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Gather the NCC Health Check Bundle and attach it to the support case: nutanix@CVM:~$ ncc health_checks run_all
[ { "Message": "No name servers configured in Zeus", "Cause": "No name servers are configured in the cluster.", "Impact": "DNS services, if required, will not work.", "Solution": "Configure the name server on the cluster.\n\t\t\tUsing Prism:\n\t\t\t\tLog in to Prism.Click on the gear icon on the top right, then select Name Servers.\n\t\t\t\tUsing SSH to any CVM:\n\t\t\t\tnutanix@cvm$ ncli cluster add-to-name-servers servers=NAME_SERVER\n\t\t\t\tReplace NAME_SERVER with a comma-separated list of IP addresses to be included in the name servers list." }, { "Message": "Only one name server is configured", "Cause": "Only one name server is configured.", "Impact": "DNS services will not work if the configured name server is not reachable.", "Solution": "It is recommended but not mandatory to configure multiple name servers for redundancy in case a name server is not reachable." }, { "Message": "Unable to ping DNS server x.x.x.x from CVM", "Cause": "Ping failing on the configured DNS server from CVM.", "Impact": "DNS services may not work.", "Solution": "Check if the configured DNS server is correct.Check network connectivity between the CVM and the DNS server.Check if any firewall is blocking traffic from CVM to the DNS server.\n\n\t\t\tNote: Some local environment firewall rules/ACLs external to the CVM may deny ICMP echo/ping traffic but still allow DNS lookups (UDP-TCP/53), which will show failed pings for a working DNS server.\n\n\t\t\tIn some cases, if Symantec Antivirus services are set as Automatic, they may initialize before DNS services and may cause an issue with port 53 to be blocked.\t\t\tIn that case, specific GPO Policies need to be set on the DNS server for the Antivirus Service to Automatic-Delayed Start." }, { "Message": "Unable to ping DNS server x.x.x.x from Hypervisor", "Cause": "Ping failing on the configured DNS server from Hypervisor.", "Impact": "DNS services may not work.", "Solution": "Check if the configured DNS server is correct.Check network connectivity between the hypervisor and the DNS server.Check if any firewall is blocking traffic from hypervisor to the DNS server."
}, { "Message": "DNS Server Response Time of x exceeds a threshold of x milliseconds with DNS Server x.x.x.x", "Cause": "\"test.dnscheck.ncc\" is a known bad, non-existent FQDN which the NCC health check throws at a DNS server to measure how the server responds to negative result lookup.Upstream components in AOS/AHV can be negatively affected if a DNS server does not handle 'not found' lookups.The check throws the bad request at the server and times whether it responds at all, within 3 sec, or +15sec, and raises an alert accordingly.A healthy DNS server should respond < 3sec to say \"that FQDN does not exist\", if it does not, then it means any component/application in AOS which relies on a timely DNS response from the configured name servers will be delayed and its own timeouts might occur and cause other downstream errors.", "Impact": "Slow DNS responses can negatively impact cluster operations.", "Solution": "If using BIND, review the configuration to send back NXDOMAIN (non-existing domain) when the response is not known.Check the resolution of non-existent FQDN, such as \"test.dnscheck.ncc\", if the time-out takes too long there may be negative impacts on the cluster. Described in cause column.Check if the configured DNS server is correct.Confirm the DNS server's health and responsiveness.Check the network path between the cluster and the DNS server for performance/latency/stability issues. Fix the issue outside the cluster or remove the affected DNS server and add a more reliable one in its place." }, { "Message": "Message", "Cause": "Cause", "Impact": "Impact", "Solution": "Solution" }, { "Message": "Nameservers configured on CVM: ['x.x.x.x'] do not match the Nameservers configured in Zeus config: ['x.x.x.x']", "Cause": "Name servers configured in Zeus config do not match name servers in /etc/resolv.conf on CVM.", "Impact": "DNS services may not work as expected.", "Solution": "Configure DNS servers on CVM from Prism or ncli only.Remove unwanted entries from /etc/resolv.conf." }, { "Message": "Nameservers configured in Zeus config are not configured on the host", "Cause": "If you use the Prism web console to add the DNS servers to ESXi or Hyper-V clusters, the DNS servers are not added to the hosts.Different DNS servers configured on CVMs and Hypervisor.", "Impact": "DNS services may not work as expected.", "Solution": "It is recommended to have the same DNS servers on CVMs and hosts.Configure DNS servers on the hypervisor hosts manually on ESXi and Hyper-V clusters.No need to configure DNS servers on AHV manually." 
}, { "Message": "DNS Server Response Time of x exceeds a threshold of x milliseconds with DNS Server x.x.x.x", "Cause": "\"test.dnscheck.ncc\" is a known bad, non-existent FQDN which the NCC health check throws at a DNS server to measure how the server responds to negative result lookup.Upstream components in AOS/AHV can be negatively affected if a DNS server does not handle 'not found' lookups.The check throws the bad request at the server and times whether it responds at all, within 3 sec, or +15sec, and raises an alert accordingly.A healthy DNS server should respond < 3sec to say \"that FQDN does not exist\", if it does not, then it means any component/application in AOS which relies on a timely DNS response from the configured name servers will be delayed and its own timeouts might occur and cause other downstream errors.", "Impact": "Slow DNS responses can negatively impact cluster operations.", "Solution": "If using BIND, review the configuration to send back NXDOMAIN (non-existing domain) when the response is not known.Check the resolution of non-existent FQDN, such as \"test.dnscheck.ncc\", if the time-out takes too long there may be negative impacts on the cluster. Described in cause column.Check if the configured DNS server is correct.Confirm the DNS server's health and responsiveness.Check the network path between the cluster and the DNS server for performance/latency/stability issues. Fix the issue outside the cluster or remove the affected DNS server and add a more reliable one in its place." }, { "Message": "Message", "Cause": "Cause", "Impact": "Impact", "Solution": "Solution" }, { "Message": "DNS Server x.x.x.x is not reachable", "Cause": "If a query made to the DNS server by using the local IP address of the Controller VM fails, the check reports a FAIL status. For example, the check runs the command \"nslookup 127.0.0.1\".", "Impact": "DNS services will not work.", "Solution": "Check the network connectivity between the cluster and the DNS server.Check if the DNS server configured is correct.Check if any firewall is blocking communication between the cluster and the DNS server.Verify settings on the DNS server.To test if queries are successful and DNS are reachable, use SSH to log in to any CVM and use the dig utility to gauge responsiveness.\n\t\t\t\tnutanix@cvm$ time dig A_RECORD @NAME_SERVER\n\t\t\t\tReplace A_RECORD with the FQDN and NAME_SERVER with the DNS server name." }, { "Message": "DNS Server x.x.x.x response time untraceable.", "Cause": "\"test.dnscheck.ncc\" is a known bad, non-existent FQDN which the DNS server NCC health check throws at a DNS server to measure how the server responds to negative result lookup. Upstream components in AOS/AHV can be negatively affected if a DNS server does not handle 'not found' lookups. The check throws the bad request at the server and times whether it responds at all, within 3 sec, or +15sec, and raises an alert accordingly. A healthy DNS server should respond <3sec to say \"that FQDN does not exist\", if it does not, then it means any component/application in AOS which relies on a timely DNS response from the configured name servers will be delayed and its own timeouts might occur and cause other downstream errors.", "Impact": "DNS check will report as FAIL", "Solution": "NCC DNS check expects an ERROR from the DNS server for a test address (test.dnscheck.ncc). 
Depending on the configuration on the DNS server, the response might not be sent back.If using BIND, review the configuration to send back NXDOMAIN (non-existing domain) when the response is not known." }, { "Message": "DNS server on x.x.x.x is blocked on port 53.", "Cause": "TCP is blocked on port 53 and only UDP is allowed. The check added in NCC 3.10.0.1 ensures that port 53 connectivity is available between the CVM and the DNS server. If TCP is blocked, this alert will be generated. \t\t\t\t\t\tNote: DNS primarily uses the User Datagram Protocol (UDP) on port number 53 to serve requests. DNS queries consist of a single UDP request from the client followed by a single UDP reply from the server. When the length of the answer exceeds 512 bytes and both client and server support EDNS, larger UDP packets are used. Otherwise, the query is sent again using the Transmission Control Protocol (TCP). TCP is also used for tasks such as zone transfers. Some resolver implementations use TCP for all queries.", "Impact": "DNS check will report as FAIL", "Solution": "To test if DNS is reachable, use SSH to log in to any CVM and use the netcat command.\n\t\t\t\tnutanix@cvm$ nc -zv NAME_SERVER 53\n\t\t\t\tIf the DNS Server is not reachable and the connection times out, check with other name servers to ensure connectivity over port 53:\n\t\t\t\tnutanix@cvm$ nc -zv google.com 53\n\t\t\t\tAllow TCP on port 53." }, { "Message": "Unable to resolve 127.0.0.1 on DNS Server configured on host x.x.x.x: x.x.x.x from CVM, DNS Server may not be running.", "Cause": "The DNS server may not have an A record for \"localhost\" configured, a PTR record for \"127.0.0.1\" configured, or the zone file to serve these records may be improperly configured.", "Impact": "DNS check will report as FAIL", "Solution": "To test if the DNS server is able to serve records for these lookups, nslookup can be utilized to test the DNS server.\n\t\t\tnutanix@cvm$ nslookup 127.0.0.1 NAME_SERVER\nnutanix@cvm$ nslookup localhost NAME_SERVER\n\t\t\tIn the event the lookup does not succeed, the DNS server configuration should be reviewed to validate that records for localhost and 127.0.0.1 are properly configured and lookups are served back to hosts making these lookups." }, { "Message": "Message", "Cause": "Cause", "Impact": "Impact", "Solution": "Solution" }, { "Message": "No name servers configured in Zeus", "Cause": "No name servers are configured in the cluster.", "Impact": "DNS services, if required, will not work.", "Solution": "Configure the name server on the cluster.\n\t\t\tUsing Prism:\n\t\t\t\tLog in to Prism.Click on the gear icon on the top right, then select Name Servers.\n\t\t\t\tUsing SSH to Prism Central VM:\n\t\t\t\tnutanix@pcvm$ ncli cluster add-to-name-servers servers=NAME_SERVER\n\t\t\t\tReplace NAME_SERVER with a comma-separated list of IP addresses to be included in the name servers list." }, { "Message": "Only one name server is configured", "Cause": "Only one name server is configured.", "Impact": "DNS services will not work if the configured name server is not reachable.", "Solution": "It is recommended but not mandatory to configure multiple name servers for redundancy in case a name server is not reachable."
}, { "Message": "Unable to ping DNS server x.x.x.x from CVM", "Cause": "Ping failing on the configured DNS server from Prism Central VM.", "Impact": "DNS services may not work.", "Solution": "Check if the configured DNS server is correct.Check network connectivity between the Prism Central VM and the DNS server.Check if any firewall is blocking traffic from Prism Central VM to the DNS server." }, { "Message": "Message", "Cause": "Cause", "Impact": "Impact", "Solution": "Solution" }, { "Message": "Nameservers configured on CVM: ['x.x.x.x'] do not match the Nameservers configured in Zeus config: ['x.x.x.x']", "Cause": "Name servers configured in Zeus config do not match name servers in /etc/resolv.conf on Prism Central VM.", "Impact": "DNS services may not work as expected.", "Solution": "Configure DNS servers from Prism Central or ncli only.Remove unwanted entries from /etc/resolv.conf." }, { "Message": "Message", "Cause": "Cause", "Impact": "Impact", "Solution": "Solution" }, { "Message": "DNS Server x.x.x.x is not reachable", "Cause": "If a query made to the DNS server by using the local IP address of the Prism Central VM fails, the check reports a FAIL status. For example, the check runs the command \"nslookup 127.0.0.1\".", "Impact": "DNS services will not work.", "Solution": "Check the network connectivity between Prism Central VM and the DNS server.Check if the DNS server configured is correct.Check if any firewall is blocking communication between Prism Central VM and the DNS server.Verify settings on the DNS server.To test if queries are successful and DNS are reachable, use SSH to log in to Prism Central VM and use the dig utility to gauge responsiveness.\n\t\t\t\tnutanix@pcvm$ time dig A_RECORD @NAME_SERVER\n\t\t\t\tReplace A_RECORD with the FQDN and NAME_SERVER with the DNS server name" } ]
KB10718
NCC Health Check: node_storage_tier_skew_check
The NCC health check plugin node_storage_tier_skew_check reports disk capacity skew above 15% within SSD storage tier in each individual node in a Nutanix cluster.
The NCC health check plugin node_storage_tier_skew_check reports disk capacity skew above 15% within the SSD storage tier in each node in a Nutanix cluster.
This check runs on AOS versions 6.0 and later if the NCC version on the cluster is >= NCC v4.1.0 and < NCC v4.5.0.
This check runs on AOS version 5.20 and later if the NCC version on the cluster is NCC v4.5.0 or higher.
Prior to NCC version 4.6.2, this check also flagged HDD storage tier skew, which was discontinued.
Running the NCC Check It can be run as part of the complete NCC check by running nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks stargate_checks node_storage_tier_skew_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every 24 hours by default. This check will not generate an alert.
Sample output For status: PASS Running : health_checks stargate_checks node_storage_tier_skew_check For status: WARN Running : health_checks stargate_checks node_storage_tier_skew_check
Output messaging Note: Starting with NCC 4.6.2, this NCC health check does not monitor the disk capacity discrepancy in the HDD tier.[ { "Check ID": "Checks for skew in storage tiers of the node" }, { "Check ID": "Disks in the tier are not the same size." }, { "Check ID": "Replace disk with the same size as the other disks in the tier on the node." }, { "Check ID": "Unsupported configuration, impacting cluster performance" } ]
Whenever a disk in a node is replaced for RMA https://portal.nutanix.com/page/documents/details?targetId=Drive-Replacement-G8:Physically%20Replacing%20a%20Drive (faulty disk) or for Capacity Upgrade https://portal.nutanix.com/page/documents/details?targetId=Drive-Replacement-G8:bre-drive-upgrade-t.html purposes, as well as in cases where a previously empty disk slot gets populated, there is the potential to introduce skew in a storage tier in a node. If the capacity of a new disk differs from the rest of the drives in the corresponding storage tier (NVMe, SSD, HDD) by more than 15%, such a difference becomes big enough to impact data replica placement (RF2 / RF3), create hot-spots, and impact performance - and is thus unsupported. AOS version 6.0 introduces a feature where dissimilar disk capacities are detected at disk replacement time. In the case of an RMA with a bigger disk, extra_disk_reservation is added to make the Extent Store part of the disk equal to the rest of the disks in the tier in that node. This remediates potential issues.
Resolving the issue
To resolve the issue, replace the disk with one of the same size as the other disks in the tier on the node, or logically remove/re-add outlier disks one at a time as described below.
To check disk sizes in a node, navigate to Hardware > Table view > Disk in Prism Element. Then click the table header row to sort the table by the 'Host Name' or 'Hypervisor IP' column. The 'Disk Usage' column displays the Used and Total capacity of disks, while the Tier column shows the storage tier. The NCC check output highlights which storage tier is found to be skewed on which node. Note: Starting with NCC 4.6.2, this NCC health check does not monitor the disk capacity discrepancy in the HDD tier.
Alternatively, connect to the corresponding CVM over SSH and run the list_disks command: nutanix@CVM:x.y.z.76:~$ list_disks Note: In the above example, there is no skew. The first two disks are SSDs, and the next four are HDDs in a hybrid node. In the case of skew, the bigger disk(s) that introduce the skew need to be replaced with disks of the same size as the rest in the same tier in this specific node. Alternatively, the disk can be logically removed from the cluster configuration and then re-added using the "Replace Faulty Disk" option of the Repartition and Add dialog, allowing the AOS 6.0 feature to add extra_disk_reservation to align the usable (Extent Store) capacity of this disk with the other disks in the same tier.
For clusters running AOS 5.20.x, refer to KB 10683 https://portal.nutanix.com/kb/10683 and consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
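As a supplement to the per-node list_disks output above, disk capacities across every node can be collected in a single pass from any CVM, which makes tier skew easy to spot. A minimal sketch:
nutanix@cvm$ allssh list_disks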
KB11516
LCM: Minimum Required Versions for Dell 13G Platforms environment
This article lists the minimum versions required in a Dell 13G platform environment for LCM.
Note: This KB article is only applicable to Dell 13G platforms. In order to work with Life Cycle Manager (LCM), your system must have the following minimum versions of Dell components installed. For the highest supported versions for PTAgent and iSM, see the Release Notes for LCM: https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:rim-lcm-dell_1.9-r.html If your environment does not meet the above minimum required versions, the component upgrade will be disabled in LCM. Another reason for a component upgrade to be disabled is if the installed version is already higher than the payload version. If either of these conditions is true, you will see one of the following messages: PTAgent version installed on the host is either lower than <minimum version> or higher than <payload version>. Please refer KB 11516. iSM version installed on the host is either lower than <minimum version> or higher than <payload version>. Please refer KB 11516. Examples of Disabled Reasons: PTAgent version installed on the host is either lower than 1.9.0-371 or higher than 2.4.0-43. Please refer KB 11516. iSM version installed on the host is either lower than 3.3.0-1290 or higher than 3.4.0-1471. Please refer KB 11516.[ { "PTAgent": "iSM (el7)", "1.9.0-371": "3.4.0-1471" }, { "PTAgent": "iSM (others)", "1.9.0-371": "3.4.0" } ]
If you have earlier versions installed, update them manually using the available Dell tools, or remove those versions and let LCM re-install them after inventory to at least the versions listed in the table above. Reach out to Dell Support for manual upgrade assistance or any queries.
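On ESXi-based Dell 13G nodes, the currently installed PTAgent and iSM VIB versions can be confirmed from a CVM before engaging Dell Support. A minimal sketch; the grep pattern is intentionally broad since exact VIB names may vary by release:
nutanix@cvm$ hostssh 'esxcli software vib list | grep -i -E "pta|ism"'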
KB8149
NCC Health Check: check_dvs_esxi_version_compatibility
NCC 3.9.2. The NCC health check check_dvs_esxi_version_compatibility checks for the presence of a DVS and an affected ESX version.
The NCC health check check_dvs_esxi_version_compatibility checks for the presence of a DVS and an affected ESXi version. The affected ESXi versions are 6.0GA, 6.0U1-U3, and 6.5GA. This check was introduced in NCC 3.9.2. Running the NCC Check You can run this check as part of the complete NCC health checks: nutanix@cvm$ ncc health_checks run_all Or run the check separately: nutanix@cvm$ ncc health_checks network_checks check_dvs_esxi_version_compatibility You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is not scheduled to run on an interval. This check will not generate an alert. Sample Output: For Status: PASS Running : health_checks network_checks check_dvs_esxi_version_compatibility For Status: FAIL Running : health_checks network_checks check_dvs_esxi_version_compatibility Output messaging [ { "Check ID": "Check if DVS is being used on an affected ESXi version - ESXi 6.0GA,U1,U2,U3 and 6.5GA." }, { "Check ID": "ESXi on the cluster is not of the recommended version - ESXi 6.0GA,U1,U2,U3 and 6.5GA." }, { "Check ID": "Upgrade ESXi on the cluster to ESXi 6.0 p07 Build 9239799 or ESXi 6.5 U1 or later.\t\t\tPlease follow steps from KB 8149 to resolve this issue." }, { "Check ID": "CVMs might be disconnected after reboot and risk a hung upgrade." } ]
Upgrade ESXi to a non-affected version. Note: If you are upgrading ESXi from an affected version, there is a chance that the CVM (Controller VM) may be disconnected after rebooting during the upgrade workflow. During an AOS upgrade on a cluster utilizing distributed virtual switches, a CVM may not reconnect to the network following a reboot. There is a bug in ESXi 6.0GA, U1, U2, U3 and 6.5GA which affects VMs connected to distributed virtual switch ports. VMware describes the issue as: "When you power off a virtual machine, other virtual machines that are in the same VLAN on the ESXi host lose network connectivity." The issue can be confirmed by examining the vmware.log file in the directory /vmfs/volumes/NTNX-local-ds-xxxxxxxx-A/ServiceVM_Centos/ on the ESXi host of the non-booting CVM: 2018-08-07T13:03:53.803Z| vcpu-0| I120: VMXNET3 user: failed to connect Ethernet0 to DV Port 2986. To restore connectivity to the CVM, confirm that the "Connected" checkbox is selected in the CVM VM properties in vCenter. If the box IS selected, deselect and re-select it to force the NIC to reconnect. This defect is documented by VMware under bug ID 2007036 and is marked as resolved in ESXi 6.0 p07 Build 9239799 and ESXi 6.5 U1 or later. The following VMware KB articles provide reference to this issue: https://kb.vmware.com/s/article/53629 https://kb.vmware.com/s/article/53627 In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/. Additionally, gather the following command output and attach it to the support case: nutanix@cvm$ ncc health_checks run_all
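Where vCenter access is available, the same reconnect can be scripted with PowerCLI instead of toggling the checkbox in the UI. A minimal sketch, assuming PowerCLI is installed and using placeholder names for the vCenter server and CVM:
PS> Connect-VIServer vcenter.example.com
PS> Get-VM "NTNX-node-A-CVM" | Get-NetworkAdapter | Set-NetworkAdapter -Connected:$true -Confirm:$false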
KB15368
NDB - SQL Server Log Catchups continue to fail with error Driver exceeded the timeout value of 70 minutes for the execution of the operation
SQL Server Log Catchups continue to fail with the error "Driver exceeded the timeout value of 70 minutes for the execution of the operation" because the staging disk does not automatically extend its size.
SQL Server Log Catchup continues to fail with the below error: Log Catch Up - 60% The database log drive continues to fill up because transaction logs are not truncated while Log Catchups are failing to take SQL log backups. The failed operation log shows that the extend_staging_drive operation is called to increase the staging disk size; however, it does not extend the disk: [2023-08-07 10:09:52,068] [15580] [INFO ] [0000-NOPID],Start extend_staging_drive After a couple of retries of the above extend_staging_drive operation, a Log Backup is attempted, and it fails with a SQL error due to not enough space on the staging disk: [2023-08-07 10:48:21,689] [15580] [WARNING ] [0000-NOPID],Function: perform_log_backup, Exception: '[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Write on "C:\\NTNX\\ERA_BASE\\dblogs\\2023_08_07_10_09_55#DBNAME\\DBNAME_2023_08_07_10_09_55#DBNAME.trn" failed: 112(There is not enough space on the disk.) (3202) (SQLExecDirectW)'
If the log drive for the database is critically low on free space, use the NDB Scale operation to increase the log drive space. Then proceed to manually increase the staging disk to accommodate the data in the database log drives by following the steps outlined in KB 14510 https://portal.nutanix.com/kb/14510. If this is a SQL AAG, make sure to extend the staging disk on all SQL Server nodes that are part of the AAG.
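After the staging vdisk has been grown per KB 14510, the NTFS volume inside the DB server VM may also need to be extended to use the new space. A minimal sketch using the built-in Windows Storage cmdlets, assuming the staging disk is drive E: (substitute the actual drive letter; defer to the KB 14510 procedure for the authoritative steps):
PS> $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
PS> Resize-Partition -DriveLetter E -Size $max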
KB12344
Nutanix Guest Agent Service is not able to get certificate expiry
The Nutanix Guest Agent service in a Windows VM is not able to get the certificate expiry.
In Windows Guest VMs with a non-U.S. English locale, the guest_agent_service.ERROR log shows: 2021-10-29 07:38:55,205Z ERROR C:\Program Files\Nutanix\python\bin\os_utils_windows.py:355 Could not get certificate expiry: list index out of range The VM has a valid certificate: C:\Windows\system32>certutil -dump "C:\Program Files\Nutanix\config\client-cert.pem" The Nutanix Guest Agent service is in a running state.
In the above example, the Guest Agent Service is not able to parse the certificate because it expects the NotBefore and NotAfter dates to be in 'MM/DD/YYYY HH:MM AM/PM' format. Check and confirm that the communication link is active, as the above issue should not prevent NGT communication: nutanix@NTNX-9MT1TC3-A-CVM:10.156.74.60:~$ ncli ngt list vm-names=VMNName
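To confirm that a locale-formatted date is what the parser fails on, compare the guest's current culture with how certutil renders the validity dates. A minimal sketch from a PowerShell prompt inside the guest:
PS> Get-Culture | Select-Object Name, DisplayName
PS> certutil -dump "C:\Program Files\Nutanix\config\client-cert.pem" | Select-String "NotBefore|NotAfter"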
KB4494
NCC Health Check: metadata_mounted_check
The NCC health check metadata_mounted_check ensures the metadata directory in use by Cassandra is correctly mounted.
The NCC health check metadata_mounted_check ensures the metadata directory in use by Cassandra is correctly mounted. This check returns a FAIL if any of the metadata disks are not mounted correctly. Running the NCC Check It can be run as part of the complete NCC check by running: nutanix@cvm:~$ ncc health_checks run_all Or individually as: nutanix@cvm:~$ ncc health_checks hardware_checks disk_checks metadata_mounted_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every hour, by default. This check will generate a Critical alert after 1 failure.Starting with NCC version 3.9.4 this check will generate a Warning alert after 1 failure and Critical alert after 2 failures. Sample output For Status: PASS Running : health_checks hardware_checks disk_checks metadata_mounted_check For Status: FAIL Detailed information for metadata_mounted_check Output messaging [ { "Check ID": "Check that all metadata disks are mounted." }, { "Check ID": "Metadata disk(s) not mounted on CVM." }, { "Check ID": "Contact Nutanix Support to mount the metadata disk." }, { "Check ID": "Cluster performance may be significantly degraded. If this condition continues, there is a potential for data corruption and/or loss." }, { "Check ID": "A101055" }, { "Check ID": "Metadata disk(s) not mounted on CVM" }, { "Check ID": "Metadata disk(s) on Controller VM svm_ip_address are not mounted: unmounted_disks" } ]
Check the status of the disk by following the Troubleshooting section of KB 4158 https://portal.nutanix.com/kb/4158. If the disk status is normal or if this alert was due to scheduled maintenance, run the command "list_disks" and check if the disk mentioned in the FAIL message above (for example, BTHC633207LF800NGN) is listed in the output. Example: nutanix@cvm:~$ list_disks Run the command "df -h" and check if the disk is also listed in the output. Example: nutanix@cvm:~$ df -h Run the command below to list all metadata disks: nutanix@cvm:~$ sudo find /home/nutanix/data/stargate-storage/disks -maxdepth 2 -name metadata -exec dirname {} \; Example: nutanix@cvm:~$ sudo find /home/nutanix/data/stargate-storage/disks -maxdepth 2 -name metadata -exec dirname {} \; Run the command below to check if all metadata disks are correctly mounted: nutanix@cvm$ sudo find /home/nutanix/data/stargate-storage/disks -maxdepth 2 -name metadata -exec dirname {} \; | xargs -I{} /bin/mountpoint {} Example: nutanix@cvm:~$ sudo find /home/nutanix/data/stargate-storage/disks -maxdepth 2 -name metadata -exec dirname {} \;| xargs -I{} /bin/mountpoint {} A metadata disk is correctly mounted if the /bin/mountpoint command indicates it "is a mount point". If you get an output saying a disk "is not a mountpoint", engage Nutanix Support https://portal.nutanix.com. Note: For situations where you determine that the FAIL is a false positive, Nutanix recommends upgrading to NCC-3.9.4.1 or later. A known issue with the metadata_mounted_check has been fixed as indicated in the Resolved Issues section of the NCC-3.9.4 Release Notes https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-NCC-v3_9:rel-Release-Notes-NCC-v3_9_4.html.
KB9040
Migrated VM failed to open an IIS application pool in 32-bit mode
After successfully moving a VM from Hyper-V to AHV, there may be issues starting an IIS application pool in 32-bit mode.
After successfully using Move to migrate IIS-based web server VMs, customers may see issues trying to start IIS application pools in 32-bit mode. Although the application pool initially starts, browsing the IIS landing page results in an HTTP 503 service error. Reviewing the event logs for this issue shows: Error value: 00000000 Exception code: 0xc000001d Exception code 0xc000001d translates to 'Illegal Instruction: An attempt was made to execute an illegal instruction.' Trying the same application pool in 64-bit mode works without issue. Reinstalling ASP.NET does not resolve the issue.
The faulting module is aspnetcorev2.dll. Location: C:\Program Files (x86)\IIS\Asp.Net Core Module\V2\aspnetcorev2.dll After some digging and a discussion of this issue with Microsoft, we found the following: Installing ASP.NET Core Hosting Bundle 3.1.0 crashes 32-bit application pools https://github.com/dotnet/aspnetcore/issues/18019 Microsoft confirmed this is an issue with their .NET code. The issue should be resolved in .NET 3.1.3, which was scheduled to release in March 2020.
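To check whether the fixed hosting bundle is present on an affected web server, the installed ASP.NET Core runtimes and the module's file version can be listed. A minimal sketch from a PowerShell prompt on the IIS server:
PS> dotnet --list-runtimes
PS> (Get-Item "C:\Program Files (x86)\IIS\Asp.Net Core Module\V2\aspnetcorev2.dll").VersionInfo.FileVersion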