KB3528
How to translate NUTANIX-MIB SNMP MIB objects and OIDs
This article describes how to translate NUTANIX-MIB SNMP MIB objects and OIDs.
Nutanix publishes its MIB as NUTANIX-MIB. It can be downloaded from Prism --> SNMP --> Download MIB. However, the SNMP MIB OIDs are not easily available on the Nutanix Portal, which makes a simple snmpwalk difficult if the NUTANIX-MIB is not loaded on the NMS system or the CVM (Controller VM).
If you want to identify NUTANIX-MIB OIDs, the external site https://www.marcuscom.com/snmptrans can help translate NUTANIX-MIB SNMP MIB objects and OIDs. Simply enter the MIB object or OID and the site translates the entry. Here is an example for controllerStatusTable: .1.3.6.1.4.1.41263.11. So, the OID is .1.3.6.1.4.1.41263.11. This is very useful when doing an snmpwalk without loading the NUTANIX-MIB, since the NMS tool or the CVM then only works with OIDs. The same tool can also translate from OIDs back to MIB objects. Another easy way to check all the Nutanix-specific OIDs is to use a graphical MIB browser. There are several free MIB browsers available (the iReasoning MIB browser for Mac works well). Load the Nutanix MIB into the MIB browser to get a graphical tree displaying all the available OIDs. The Nutanix-specific OIDs are located in the "private -> enterprises -> nutanix" container. Note: Not all OIDs available in the MIB are necessarily populated by the cluster, such as the ones with a value of "obsolete" in the Status field, which were removed from newer AOS releases.
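For example, once the OID is known, an snmpwalk can be run directly against the numeric OID with no MIB loaded. A minimal sketch, assuming the cluster's SNMP agent accepts v2c queries with a community string of "public" (both are placeholders; many Nutanix clusters are configured for SNMPv3, in which case substitute the v3 user and credentials):
user@nms$ snmpwalk -v 2c -c public <cluster_virtual_ip> .1.3.6.1.4.1.41263.11
If the downloaded NUTANIX-MIB file is copied into the local net-snmp MIB directory, snmptranslate can perform the same object-to-OID translation offline:
user@nms$ snmptranslate -m +NUTANIX-MIB -IR -On controllerStatusTable
.1.3.6.1.4.1.41263.11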
KB4108
Configuring and modifying LDAPS via nCLI
Configuring and modifying LDAPS via nCLI
The Prism UI may not allow you to specify an LDAPS URL in the Authentication Configuration, for example: ldap_url: ldaps://test-dc.test.company.local:636. If you need to add or modify such an LDAP configuration, you can do it via nCLI using the steps below.
To add LDAP through nCLI, run the following on any CVM: nutanix@cvm$ ncli authconfig add-directory connection-type=LDAP directory-type=ACTIVE_DIRECTORY directory-url=ldaps://test-dc.test.company.local:636 domain=company.local name=test-ssl Check and modify the current LDAP configuration with the following nCLI commands. To check the current authentication configuration: cvm$ ncli authconfig ls To check the configuration of LDAP: cvm$ ncli authconfig ls-directory To edit the current LDAP configuration: cvm$ ncli authconfig edit-directory connection-type=LDAP directory-type=ACTIVE_DIRECTORY directory-url=ldaps://test-dc.test.company.local:389 domain=company.local name=test-ssl You can also type 'ncli' to enter the tool and then use the 'TAB' key to display current syntax options.
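Before or after adding the directory, basic LDAPS connectivity from a CVM can be confirmed with openssl. A hedged example using the hostname and port from the article (replace with your actual domain controller); the returned server certificate chain should be valid and trusted:
nutanix@cvm$ openssl s_client -connect test-dc.test.company.local:636 -showcerts < /dev/null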
KB15176
List_disks command is not showing all disks on Fujitsu hardware due to unsupported expander
While running list_disks on CVMs, not all disks are shown in the list, while lsscsi shows all attached disks. This is due to the Broadcom / LSI SAS3408 Fusion-MPT Tri-Mode I/O Controller Chip (IOC), which is unsupported on Fujitsu hardware.
- Some CVMs are reported as not listing all the attached disks on Fujitsu hardware, although the disks are attached and detected by the lsscsi command. Running the list_disks command shows some empty slots: nutanix@NTNX-EWAK028918-A-CVM:~$ list_disks The lsscsi command shows all the attached disks: nutanix@NTNX-EWAK028918-A-CVM:~$ lsscsi So the disks in slots 1, 3, 6, 7 and 8 are missing. What we found here is that the slot-to-phy mapping is different. Looking at the physical mapping: nutanix@NTNX-EWAK028918-A-CVM:10.130.233.135:~$ ls /sys/class/sas_phy/phy-*/device/port/end_device-*/target*/*/block Here we can see the slot-to-phy mapping is different. As an example, for sda we expect it to be in 0:25, but according to hardware_config.json it should be in {
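As a hedged illustration of the checks above (standard CVM paths; the exact field names inside hardware_config.json vary by platform), the live phy-to-device mapping can be listed and compared against the expected slot layout:
nutanix@cvm$ ls /sys/class/sas_phy/phy-*/device/port/end_device-*/target*/*/block
nutanix@cvm$ less /etc/nutanix/hardware_config.json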
KB14167
AHV upgrade is blocked in LCM 2.5.0.4 or newer if running vGPU VMs are found
AHV upgrade is blocked in LCM 2.5.0.4 or newer if running vGPU VMs are found
The following text may be shown starting from LCM 2.5.0.4 if running vGPU VMs are detected: Upgrades from AHV {source} to {target} are not supported while vGPU VMs are running. Either shut down all vGPU VMs and re-run inventory, or upgrade in multiple steps: {source} -> 20201105.30100-30417 -> {target}. See KB-14167 for more details.
Due to a code defect, the direct upgrade path is blocked if the following conditions are detected: the AHV upgrade is performed from a version older than 20201105.30100 to 20220304.x, and running vGPU VMs are detected.
Workaround 1 - Perform a 2-step upgrade:
1. Perform an LCM inventory scan.
2. Upgrade AHV to any version in the following range: 20201105.30100 - 20201105.30417.
3. Upload the NVIDIA host driver bundle to LCM compatible with the target AHV version, see Updating the NVIDIA GRID Driver with LCM https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_5:top-lcm-ngd-update-t.html for details.
4. Perform an LCM inventory scan.
5. Upgrade AHV to the desired version in the 20220304.x family.
Workaround 2:
1. Shut down all vGPU VMs (a command-line sketch follows this list). Note: You can get a list of vGPU-enabled VMs via Prism Central: VMs > Modify Filters > GPU Type > vGPU.
2. Perform an LCM inventory scan.
3. Upload the NVIDIA host driver bundle to LCM compatible with the target AHV version, see Updating the NVIDIA GRID Driver with LCM https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_5:top-lcm-ngd-update-t.html for details.
4. Upgrade AHV to the desired version in the 20220304.x family.
Refer to the Updating the NVIDIA GRID Driver with LCM https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:top-lcm-ngd-update-t.html chapter of the "Life Cycle Manager Dark Site Guide" for more details about upgrading AHV on clusters with GPU.
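If Workaround 2 is used, the vGPU VMs identified via the Prism Central filter can also be shut down from any CVM with acli. A minimal sketch, assuming the VM name placeholder is replaced with each vGPU VM reported by the filter; the VMs can be powered back on with acli vm.on once the upgrade completes:
nutanix@cvm$ acli vm.shutdown <vgpu_vm_name>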
KB2493
NCC Health Check: ca_certificate_expiry_check
The NCC health check ca_certificate_expiry_check verifies certificates used with the SED drives.
The NCC health check ca_certificate_expiry_check verifies if certificates used with the SED drives are valid. Certificates have defined expiration dates. If the cluster has an expired certificate, the cluster cannot be used to verify trust of server certificates. Running the NCC Check You may run the NCC check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or run the ca_certificate_expiry_check check separately: nutanix@cvm$ ncc health_checks key_manager_checks run_all You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every day. This check will generate an alert after 1 failure. Sample output For Status: PASS Running : health_checks key_manager_checks ca_certificate_expiry_check For Status: FAIL Running : health_checks key_manager_checks ca_certificate_expiry_check Output messaging [ { "Description": "Check if CA certificates are about to expire." }, { "Causes of failure": "Certificates have defined expiration dates." }, { "Resolutions": "Get a new signed certificate." }, { "Impact": "If the cluster's certificate expires, it will be unable to verify trust of server certificates." }, { "Alert ID": "A1115" }, { "Alert Title": "Cluster Certificate Expiring" }, { "Alert Message": "Cluster CA certificate certificate_name expires on expiration_date." } ]
If this check reports a FAIL status, get a newly signed certificate from your certificate authority provider or entity. Add the certificate or configuration as per the instructions in Configuring Data at Rest Encryption on Prism Web Console https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_19:wc-security-data-encryption-wc-t.html. After adding the new certificate, delete the expired certificate. If the error still occurs, contact Nutanix Support at https://portal.nutanix.com.
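Before uploading the replacement, the validity dates of a certificate file can be confirmed manually with openssl. A sketch, assuming the CA certificate is available as a PEM file (the file name is a placeholder):
nutanix@cvm$ openssl x509 -in ca_certificate.pem -noout -subject -enddate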
KB13700
Metadata repair improvements in case of SSTable corruptions
This article notes improvements made to the metadata repair functionality of Cassandra.
Metadata repair is an operation performed by Cassandra to self-heal in the case of health warnings or too many restarts. It has 2 main phases: first, scans are performed to ensure all the other replicas have the correct updated data; second, the data on the acting node in the procedure is completely removed. This worked well in previous AOS versions since metadata sizes were limited to a few tens of GBs, but now, with clusters containing TBs worth of metadata, Cassandra repair operations take much longer. In order to improve this process and time-to-repair, improvements to the SSTable corruption workflows are required.
Metadata repair workflow in case of SSTable corruptions (pre AOS 6.6):
1. Local monitor receives corrupt SSTable health warnings from the daemon.
2. Local monitor sets the status to kMetadataRepairRequired and restarts.
3. DRC (local / remote) starts the ring change scans for all column families and sets the status to kMetadataRepairDone upon completion.
4. Local monitor detects the state change kMetadataRepairDone, deletes all the Cassandra data and changes state to kNormalMode.
Problems with this approach:
- Even if only one SSTable of a particular column family is corrupted, we end up scanning and deleting the data belonging to all column families. This is an unnecessary effort and causes high repair times in clusters with large volumes of data.
- On PC VMs, which typically have huge amounts of metric data, corruption of a metric column family SSTable leads to hours and even days of repairing the data. Instead of trying to repair the data here, we can delete the corrupt file, since a small amount of data loss is acceptable in metric data.
Noted below are the improvements made to the SSTable corruption workflows.
Terms: CF = Column Family, MR = Metadata Repair, SSTable = Sorted String Table.
Improvement 1 - For Paxos CFs and non-Paxos non-metric CFs, perform MR only for the column families to which the corrupted SSTables belong ( ENG-460739 https://jira.nutanix.com/browse/ENG-460739). Updated metadata repair workflow (AOS 6.6):
1. Local monitor receives corrupt SSTable health warnings from the daemon, which contain the column family of the corrupt file.
2. Local monitor sets the status to kMetadataRepairRequired, populates the corrupt keyspace and column family name into the ‘CassandraMetadataRepairCF’ field in Zeus, and restarts.
3. DRC (local / remote) sees the status change to kMetadataRepairRequired and, since there are CFs in ‘CassandraMetadataRepairCF’, populates the same into ‘CassandraMetadataRepairCF’ in the dyn_ring_change_info proto and triggers the ring change scans only for the column families present in ‘CassandraMetadataRepairCF’. Upon completion, it changes the status to kMetadataRepairDone.
4. Local monitor detects the state change kMetadataRepairDone and deletes only the Cassandra data belonging to the CFs for which the repair was triggered. Then the state is changed to kNormalMode and ‘CassandraMetadataRepairCF’ is cleared from the Zeus config.
Improvement 2 - For metric CFs: delete the corrupted SSTable for metric data instead of performing metadata repair ( ENG-460746 https://jira.nutanix.com/browse/ENG-460746). Updated metadata repair workflow (AOS 6.6):
1. Local monitor receives corrupt SSTable health warnings from the daemon, which contain the column family of the corrupt file.
2. If this is a metric column family, then instead of marking the node for repair, the status is changed to kForwardingMode and the monitor restarts.
3. Daemon, on start, sees that there is a corruption in a metric CF SSTable and schedules a CassandraHealthWarningFixer call to fix it.
4. CassandraHealthWarningFixer tries to delete the corrupt file and, if it succeeds, clears the health warning. Monitor, upon not receiving any health warnings in the next heartbeat, changes the status back to kNormalMode.
NOTE: Both these improvements are currently only enabled on PC. The AOS implementation will be determined after these improvements are released with PC (PC 2022.9 is the associated 6.6 release).
GFlags:
Enabling/Disabling improvement 1: --perform_metadata_repair_only_for_corrupted_sstable_cf_pc=true/false
Enabling/Disabling improvement 2: --delete_corrupt_metric_sstable_pc=true/false
Future plans: Engineering plans to extend improvement 1 further by repairing only the affected range of the column family during SSTable corruption ( ENG-474823 https://jira.nutanix.com/browse/ENG-474823).
KB6279
Seagate drives of 2TB, 4TB, 6TB and 8TB are marked for removal after CVM reboot
Seagate drives of 2TB, 4TB, 6TB, and 8TB may be marked for removal due to a smartctl test started by Hades timing out after a CVM reboots due to maintenance or other reasons.
Seagate drives of 2TB, 4TB, 6TB and 8TB are marked for removal due to a smartctl test started by Hades timing out. From the Hades logs, we can see that the drive was marked offline by Stargate for a SCSI abort. 2018-07-20 13:16:13 INFO disk_manager.py:2017 Writing ASUP data for disk_serial ZAD0BKW7 for reason scsi abort After the SCSI abort event, Hades starts the smartctl short test for the drive, but the test gets stuck at 90 percent, never completes, and the disk is eventually marked for removal. 2018-07-20 13:16:14 INFO disk_diagnostics.py:302 Hades test smartctl-short status Self-test routine in progress percent 10 for disk ZAD0BKW7 Hades marks the disk for removal. 2018-07-20 13:49:22 INFO disk_manager.py:3318 Attempting to update to_remove flag for disks with ids [45L] in zeus config with operation set This behavior is seen after a CVM reboot, which can be due to any reason (AOS upgrade, hypervisor upgrade, etc.)
There are 2 scenarios, and both need to be handled differently. In both scenarios, the drives are actually not bad, but Hades marks them for removal because it could not complete the smartctl test. Scenario 1 - Disk firmware version is less than SN05 (6TB and 8TB) or TN05 (2TB and 4TB) nutanix@NTNX-17SM37020297-B-CVM:~$ lsscsi First, we need to bring the disk back online. To bring the disk back online, we need to modify Hades to remove the offline timestamps for the disk and restart the Hades service. The procedure to remove the offline timestamp is described in KB 4369 https://portal.nutanix.com/kb/4369. The Hades service can be restarted using the command: sudo /usr/local/nutanix/bootstrap/bin/hades restart Once the drive is online and added back to the cluster, work with the customer to upgrade the firmware of all the Seagate disks to SN05, irrespective of whether the issue is seen on the disk or not. You can upgrade the disk firmware from LCM; this has been enabled from the LCM 1.4.3759 / 2.1.3766 release. To update the disk firmware for the drives manually, the firmware file can be downloaded from the below-mentioned link. For step-by-step instructions for a manual firmware upgrade, refer to the documentation: Manually Updating Data Drive Firmware https://portal.nutanix.com/page/documents/details?targetId=Hardware-Admin-Guide:bre-data-drive-update-manual-t.html Scenario 2 - Disk firmware version is SN05 (6TB and 8TB) or TN05 (2TB and 4TB) There are currently no reports of this issue happening with SN05 firmware. In case it happens, please follow the below process: nutanix@NTNX-17SM37020553-B-CVM:~$ lsscsi Before we recover from the situation and add the disk back to the cluster, we need to collect diagnostic data to further understand why smartctl (DST) is stuck. Please follow the steps mentioned below to collect Seagate diagnostic data. iostat output when DST is stuck, to understand the workload on the drive. The iostat information captured in the sysstats logs should be good enough. Seagate drive diagnostic logs using the Seagate utility. The Seagate utility can be downloaded from the below confluence link, and directions to use it are also available there: https://confluence.eng.nutanix.com:8443/display/HA/Pull+Debug+Logs+from+Seagate+HDDs Note: Engineering has confirmed that we can run this utility on the affected disk while the cluster is running normal operations. To bring the disk back online, you can follow the same steps described in Scenario 1. [ { "Drive Capacity": "2TB", "Drive Model - Firmware": "ST2000NM0055", "md5sum": "008f274ad70f3d50fde8ed33c795331d" }, { "Drive Capacity": "4TB", "Drive Model - Firmware": "ST4000NM0035", "md5sum": "008f274ad70f3d50fde8ed33c795331d" }, { "Drive Capacity": "6TB", "Drive Model - Firmware": "ST6000NM0115", "md5sum": "3f17b17a182daf863ed37d9f515e225e" }, { "Drive Capacity": "8TB", "Drive Model - Firmware": "ST8000NM0055", "md5sum": "4040cd53aea092e2e55194f29f26a7fc" } ]
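To determine which scenario applies, the installed firmware revision of each Seagate drive can be checked from the CVM with smartctl. A sketch; replace /dev/sdX with each device reported by lsscsi:
nutanix@cvm$ sudo smartctl -i /dev/sdX | egrep -i 'device model|firmware'
A result of SN05/TN05 or later indicates Scenario 2; anything older indicates Scenario 1.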
KB13900
Prism Central reports return empty intermittently
On Prism Central, reports may intermittently return empty, with no data inside, whereas the next run of the same report shows data is available
Prism Central reports may intermittently appear empty, with no data inside. The graphs in the report may show the following message: "Data is either not found, or it is inaccessible due to restrictions". Running the report for a 1 or 2-week timespan works correctly, but anything longer than that results in empty reports showing "Data is either not found, or it is inaccessible due to restrictions". If the same report is re-run sometime later, it may show data is available and return the expected results. SYMPTOM 1: The Prism Central log file ~/data/logs/vulcan.out contains the following error (the label value varies depending on what the report is run for): E0902 10:21:20.570008Z 9010 widgetDataCollector.go:575] Error while parsing response for label:"Storage Usage (%)" property:"storage.usage_ppm" aggregation_operator:kAvg insights_server logs at ~/data/logs/insights_server.ntnx-XX-XX-XX-XX.nutanix.log.INFO may show the request taking much longer than for successful runs. Good case scenario, in which the request takes around 11 seconds: I20220902 10:23:30.610551Z 37797 insights_rpc_svc.cc:3481] Received RPC GetEntitiesWithMetrics from 127.0.0.1:43428. Request id: query_1662114210610506_23816490_127.0.0.1. Argument: query_name: stats_gateway_reportingMetrics entity_list { entity_type_name: "disk" } start_time_usecs: 1630578164000000 end_time_usecs: 1662114210000000 where_clause { lhs { lhs { comparison_expr { lhs { le Bad case scenario, with the request taking about 171 seconds to complete: I20220902 10:20:50.563081Z 37784 insights_rpc_svc.cc:3481] Received RPC GetEntitiesWithMetrics from 127.0.0.1:37216. Request id: query_1662114050563008_23 SYMPTOM 2: For the report instances with an empty report, we also see insights_server hitting the RSS threshold of 80 percent. We can see that the RSS value for insights_server is always equal to or less than 4.3GB, which is less than 80% of 5.5G (the default cgroup limit). I20230316 11:25:15.626199Z 19171 insights_server.cc:564] CheckRSSExceeds: Exceeded RSS threshold pct of 80. Usage in pct = 80 SYMPTOM 3: Check the stats_gateway logs in /home/nutanix/data/logs/stats_gateway and note if it is not able to fetch data from IDF with the following error: 'context deadline exceeded'. E0406 07:11:21.261570Z 90657 protobuf_rpc_client.go:388] RPC request error: Post "http://127.0.0.1:2027/rpc/nutanix.insights.interface.InsightsRpcSvc/": context deadline exceeded type: *url.Error
WARNING: Support, SEs, and Partners should never make Zeus, Zookeeper, or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit. Refer to internal KB 1071 https://portal.nutanix.com/kb/1071 for further instructions on how to work with Gflags. Apply Solution 1 for symptoms that don't include RSS/OOM signatures. SOLUTION 1: By default, Vulcan waits for 60 seconds to get results from the insights query. If the query takes more than 60 seconds, depending on the time range selected for the report and the number of entities involved, the insights query from Vulcan can fail and generate an empty report. To mitigate this, the below gflags need to be set for the vulcan and stats_gateway services on Prism Central (a command-line sketch follows this section): Stop StatsGateway and Vulcan on Prism Central: nutanix@PCVM: genesis stop stats_gateway For StatsGateway, in /home/nutanix/config/stats_gateway.gflags (create the file if it does not exist), add the line: --db_timeout_secs=300 For Vulcan, in /home/nutanix/config/vulcan.ini (create the file if it does not exist), add the line: QueryTimeoutInSecs=300 Start StatsGateway and Vulcan: nutanix@PCVM: cluster start SOLUTION 2: If Solution 1 does not help and we see "Exceeded RSS threshold" logging in the insights_server logs, then increase the IDF RSS and cgroup limit (given that the PC has enough memory) by following the solution section of KB-11122 https://nutanix.my.salesforce.com/kA00e000000LTvQ?srPos=0&srKp=ka0&lang=en_US. SOLUTION 3: If Solution 1 and Solution 2 do not help, and the signatures match Symptom 3, then apply the following insights_server gflag by updating the insights_server gflag file or using the edit-aos script following KB-1071 https://nutanix.my.salesforce.com/kA0600000008SsH?srPos=0&srKp=ka0&lang=en_US --insights_rss_low_watermark_pct=25 After making the relevant changes, re-run the report (multiple times if needed) to confirm it returns data after the above change.
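As a sketch of Solution 1 from the PCVM command line (file paths and values are the ones referenced above; on a scale-out Prism Central, repeat on every PCVM):
nutanix@PCVM:~$ genesis stop stats_gateway
nutanix@PCVM:~$ genesis stop vulcan
nutanix@PCVM:~$ echo "--db_timeout_secs=300" >> /home/nutanix/config/stats_gateway.gflags
nutanix@PCVM:~$ echo "QueryTimeoutInSecs=300" >> /home/nutanix/config/vulcan.ini
nutanix@PCVM:~$ cluster start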
KB11863
This cluster needs additional licenses to account for the newly added capacity.
When an additional node is added to a cluster, a message appears advising that more licensing is required.
When a new node is added, or a cluster is expanded, the error below is returned: The cluster needs additional licenses to account for the newly added capacity. The alert can also present the following details. Summary:
The cluster's licenses must be updated to resolve this alert. Navigate to and select "Manage Licenses". Example: Select the tile which will be used to license the cluster and select "Edit Selection". Example: Select the related cluster, select "Actions", then select "Advanced Licensing". Please note that if there are multiple clusters which need updating, you will need to do this step cluster-by-cluster. Example: You will be presented with the quantity required on screen. Choose "Select Licenses" for each presented tile. Example: If the particular license ID in use also has available licenses, unselect it and then select it again so that it takes the required license. If you have other license IDs available or wish to use other license IDs, you may do so and click on "SAVE". Example: Once the updated quantity appears on the screen, select "Next": Example: Select "OK", select "Next", and then select "Confirm". The license file will be generated and can then be applied on the cluster. Example:
KB16649
How to increase the memory limits of the kommander-cm pod
How to increase the memory limits of the kommander-cm pod
You may encounter a situation in DKP 2.X in which the kommander-cm pod is crash looping, and the reason given in the "kubectl describe pod" output shows that it is due to exit code 137, or oom-kill. This is usually a result of a management cluster that is dealing with more traffic than the default resource limits were configured to address. To increase the amount of memory for this pod, you can configure a ConfigMap override for the Kommander appdeployment. First, create a ConfigMap containing a higher memory limit than the default of 512Mi. Save the following as a file called kommander-cm-override.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kommander
  name: kommander-cm-override
data:
  values.yaml: |
    controller:
      containers:
        manager:
          resources:
            limits:
              memory: 1024Mi
Next, apply the ConfigMap to your cluster: kubectl apply -f kommander-cm-override.yaml Now edit the kommander appdeployment to use this ConfigMap: kubectl edit appdeployment -n kommander kommander Edit the "spec:" section to add a configOverrides value. Do not touch the appRef section:
spec:
  configOverrides:
    name: kommander-cm-override
  appRef:
    ...
Save your changes, and within a few minutes the kommander-cm pod should restart with the new memory limit.
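Once the pod has restarted, the effective limit can be verified. A hedged sketch (the pod name is a placeholder; list the pods first to find the exact name):
kubectl -n kommander get pods | grep kommander-cm
kubectl -n kommander describe pod <kommander-cm-pod-name> | grep -A3 'Limits'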
KB13412
Services not starting on Prism Central due to docker daemon start being delayed with thousands of 'Removing stale sandbox' messages
On Prism Central, the genesis service may be stuck waiting for docker to be available for many hours. The docker daemon will be stuck in the "activating (start)" phase for many hours, busy with stale sandbox cleanup.
On Prism Central, the genesis service may be stuck in the services start routine for many hours, waiting for the docker daemon to become available. The docker daemon will be stuck in the "activating (start)" phase for many hours, busy with stale sandbox cleanup. Identification: 1. Services are not starting, and there are no FATAL signatures in data/logs/genesis.out 2. The genesis service is waiting for docker to start; this can be verified by checking for the 'Error checking status of docker daemon' signature; the status will be 'activating (start)' and there will be multiple repeating 'Removing stale sandbox' log lines: nutanix@PCVM:~$ less data/logs/genesis.out 3. The docker status checked manually shows it stuck in the 'activating (start)' state for many hours: nutanix@PCVM:~$ systemctl status docker 4. The docker logs are filled with thousands of 'Removing stale sandbox' lines with unique ids: nutanix@PCVM:~$ sudo journalctl -u docker | grep "Removing stale sandbox" | wc -l 5. The docker key-value database file could be bloated to hundreds of megabytes in size; the standard healthy size is hundreds of kilobytes: nutanix@PCVM:~$ sudo ls -lh /home/docker/docker-latest/network/files/local-kv.db
The safe way is to let the sandbox cleanup finish. If the sandbox count is significant, cleanup is expected to take time; after the bloat cleanup is completed, subsequent docker restarts will be quick. Once cleanup completes, dockerd becomes available and genesis proceeds normally with services start. Cleanup rate observed in the field: cleanup of ~250k records took ~24h. Currently, the exact cause of the database bloat is unknown. Technically, it could be caused by a docker container spinning in a crash loop for a long time.
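Progress of the cleanup can be tracked from the PCVM by sampling the docker journal and service state a few minutes apart; the sandbox count should keep increasing until dockerd finally reports active (running). A sketch:
nutanix@PCVM:~$ sudo journalctl -u docker --since "10 minutes ago" | grep -c "Removing stale sandbox"
nutanix@PCVM:~$ systemctl status docker | grep Active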
KB8830
Zookeeper resiliency turns critical when one of the servers is not able to join quorum due to different round and/or epoch ids
In very rare cases, Zookeeper resiliency becomes critical when a Zookeeper node comes out of the quorum. The Zookeeper node becomes unable to join back into the quorum because the other Zookeeper servers possess different epoch and round ID values, resulting in Zookeeper resiliency becoming compromised. Any additional failures in this state can cause Zookeeper services to become inactive/unavailable and lead to cluster-wide impact. This KB describes the issue and the workaround to resolve it.
In very rare cases, Zookeeper resiliency becomes critical when a Zookeeper node comes out of the quorum. The Zookeeper node becomes unable to join back into the quorum because the other Zookeeper servers possess different epoch and round ID values, resulting in Zookeeper resiliency becoming compromised. Any additional failures in this state can cause Zookeeper services to become inactive/unavailable and lead to cluster-wide impact. Precheck: ***ATTENTION: Verify that there are no duplicate ZK processes running in any of the CVMs. This can be verified by looking for the following error in the Zookeeper logs: "Couldn't bind to port java.net.BindException: Address already in use (Bind failed):" If the duplicate Zookeeper process issue is seen, please refer to KB 9523 https://portal.nutanix.com/kb/9523 for additional guidance. Do not proceed with any further steps from this KB. 2020-02-23 11:54:39,314 - ERROR [QuorumPeer[myid=2]0.0.0.0/0.0.0.0:9876:Leader@169] - Couldn't bind to port 2888 Additionally, the top output on the CVM shows both duplicate processes: nutanix@CVM$ ps -ef |grep zookeeper The following command will give the number of actual zookeeper processes running. The expected output is 0 on all non-Zookeeper CVMs, and 1 for the 3/5 Zookeeper CVMs (depending on the FT of the cluster); seeing anything higher than 1, or more than 3/5 1s, is an indication that something is amiss: nutanix@CVM$ allssh 'pgrep -fc "^bash /usr/.*zkServer.sh"' If the duplicate Zookeeper process issue is seen, please refer to KB 9523 https://portal.nutanix.com/kb/9523 for additional guidance. Do not proceed with any further steps from this KB. If the duplicate Zookeeper process issue is discerned, then the issue is different from the scenarios mentioned below. Engage a Sr. SRE/STL. Symptoms: NCC Plugin zkinfo_check_plugin reports that one of the Zookeeper servers in the cluster is inactive. Detailed information for zkinfo_check_plugin: When this issue is encountered during an AOS upgrade, the upgrade will get stuck, as the CVM that just got upgraded will be unable to join the quorum. The output of the following command would show that one of the servers is not active. nutanix@CVM$ for i in $(sed -ne "s/#.*//; s/zk. //p" /etc/hosts) ; do echo -n "$i: ZK " ; ssh $i "source /etc/profile ; zkServer.sh status" 2>&1 | grep -viE "nut|config|fips|jmx" ; done For example: nutanix@CVM$ for i in $(sed -ne "s/#.*//; s/zk. //p" /etc/hosts) ; do echo -n "$i: ZK " ; ssh $i "source /etc/profile ; zkServer.sh status" 2>&1 | grep -viE "nut|config|fips|jmx" ; done /etc/hosts, /home/nutanix/data/zookeeper/myid, and zeus_config_printer all show correct ZK nodes and myid values. After reviewing all the symptoms above and double checking that no duplicate Zookeeper processes are running on the CVM, identify which (if any) of the 2 scenarios mentioned below match the cluster being troubleshot: Scenario #1: The following log signatures would be present on the ZK server when trying to join the quorum; it would be unable to do so due to differing election round ID and peer epoch values. This issue will not be resolved by restarting genesis or the zookeeper service on the problematic ZK node.
2019-12-21 17:36:31,479 - INFO [WorkerReceiver[myid=1]:FastLeaderElection@720] - Notification: 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state), 0x1 (my round) Explanation of the snippet above: PeerEpoch values: ZK1: 0x0 < Trying to join the quorum ZK2: 0x18 < Leader ZK3: 0x19 < Follower Round IDs: ZK1: 0x1 < Trying to join the quorum ZK2: 0x5 < Leader ZK3: 0x6 < Follower Due to the differing "PeerEpoch" and round ID values, the ZK1 server would be unable to join the quorum. Note that if a ZK server is down for more than 30 min, a periodic Zookeeper migration will be triggered between other non-Zookeeper nodes in the cluster. Scenario #2: The ZK server trying to join the quorum has a different "currentEpoch" value compared to the other ZK servers. nutanix@CVM$ allssh 'cat data/zookeeper/version-2/currentEpoch; echo' Here, the problematic ZK node has the "currentEpoch" value set to "43", while the rest of the servers in the ZK ensemble have the value set to "48".
Resolution Please upgrade to AOS 5.15.3 or AOS 5.18.1, as the issue is fixed in these versions per ENG-160764 https://jira.nutanix.com/browse/ENG-160764. Workaround ***Caution: Do not proceed with these steps without the supervision of a Support Tech Lead (STL). Please engage with a Sr./Staff SRE or an EE before restarting the Zookeeper service. If the steps in this KB do not help, consider engaging engineering via an ONCALL. Before proceeding with the workaround, double check the precheck mentioned in the description and ensure there are no duplicate Zookeeper processes running on any of the CVMs; failing to do so may lead to complications including extended downtime and data consistency issues. If the duplicate Zookeeper process issue is seen, please refer to KB 9523 https://portal.nutanix.com/kb/9523 for additional guidance. Do not proceed with any further steps from this KB. Scenario #1: If all the symptoms match, the workaround to fix this issue is to restart the Zookeeper leader node (which is not the ZK node having issues). This should be done during a maintenance window, as cluster instability for some time is expected, because clients connected to the Zookeeper leader will have to re-connect to the new leader after the election is complete. Any unforeseen leader election problems at this point may cause additional downtime for user VMs. Verify there are no duplicate Zookeeper services running on any of the CVMs as explained in the precheck in the description. If there are duplicate services found, stop here and refer to KB 9523 https://portal.nutanix.com/kb/9523 for additional guidance. Do not proceed with any further steps from this KB. Example where duplicate services are running: nutanix@CVM$ ps -ef |grep zookeeper Verify there is no current Zookeeper migration in the cluster - the command should return Operation failed: nutanix@CVM$ edit-zkmigration-state print_zkmigration_proto Verify that the working zookeepers are in sync, with the latest zxid. The problematic ZK will not return any zxid. Note: Sometimes, you may notice the other two working zookeeper nodes are not in sync, which could just be temporary, as the leader is usually a little ahead. It is recommended to repeat the below "echo srvr" command a few times or after a few seconds. If it is working correctly, the zxid will sometimes become the same for the ZK leader and follower: nutanix@CVM$ for i in zk{1..3}; do echo $i; echo srvr | nc $i 9876 | grep Zxid; done Verify that there is no mismatch in the current and accepted epoch values (the values from non-current ZK servers can be ignored): nutanix@CVM$ allssh 'cat ~/data/zookeeper/version-2/currentEpoch; echo' nutanix@CVM$ allssh 'cat ~/data/zookeeper/version-2/acceptedEpoch; echo' Note: If the epoch values are different ONLY on the problematic node, refer to Scenario #2 below. If the zxids on the other ZK nodes are not in sync, or if the epoch values are different on two or more nodes, do not proceed. Please engage engineering via ONCALL. If everything is in order, find the Zookeeper leader: nutanix@CVM$ for i in $(sed -ne "s/#.*//; s/zk. //p" /etc/hosts) ; do echo -n "$i: ZK " ; ssh $i "source /etc/profile ; zkServer.sh status" 2>&1 | grep -viE "nut|config|fips|jmx" ; done 7. SSH into the leader and restart the Zookeeper service: nutanix@CVM$ genesis stop zookeeper && genesis restart 8. Verify the Zookeeper election is successful and all the servers are back in the quorum. 9.
If there are any issues at this point and the Zookeeper service is still not starting as expected, consider engaging engineering via an ONCALL. Scenario #2: If the difference in "current" and/or "accepted epoch" is present only on the problematic follower ZK node, follow the below procedure. If the difference is present on multiple nodes, do not proceed with the below steps and engage engineering via ONCALL. 1. Verify there are no duplicate Zookeeper services running in any of the CVMs as explained in the precheck in the description. If there are duplicate services found, stop here and follow the resolution mentioned under Section - Duplicate ZK processes. Example where duplicate services are running: nutanix@CVM$ ps -ef |grep zookeeper 2. Stop the Zookeeper and genesis services on the problematic CVM: nutanix@CVM$ genesis stop zookeeper && genesis stop genesis 3. Update the value to what is present on the working ZK nodes and verify that the value has been changed. Update only the files that are mismatched. Accepted Epoch: nutanix@CVM$ echo -n "<VALUE>" > data/zookeeper/version-2/acceptedEpoch Current Epoch: nutanix@CVM$ echo -n "<VALUE>" > data/zookeeper/version-2/currentEpoch - Replace <VALUE> with the accepted/current Epoch values present on the working ZK nodes, respectively. Example: For updating the Current Epoch from 43 to 48. nutanix@CVM$ cat data/zookeeper/version-2/currentEpoch; echo For Accepted Epoch: nutanix@CVM$ cat data/zookeeper/version-2/acceptedEpoch; echo 4. Start the Zookeeper and genesis services: nutanix@CVM$ genesis restart If this does not help, please consider engaging engineering via an ONCALL.
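In either scenario, once services are restarted, quorum membership can be re-verified by querying the mode of each Zookeeper server, similar to the srvr queries above. A sketch using the standard zk1-zk3 aliases and port 9876 from /etc/hosts (adjust for 5-node Zookeeper ensembles); one server should report leader and the others follower:
nutanix@CVM$ for i in zk1 zk2 zk3; do echo -n "$i: "; echo srvr | nc $i 9876 | grep Mode; done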
KB16339
Uhura service crashes on Prism Central VM during security / port scans
Uhura service crashes on Prism Central VM during security / port scans
With the latest Prism Central pc.2023.x versions, the "Uhura" service on the Prism Central VM can crash and stop working unexpectedly if the Prism Central VM is scanned by security port scanner software such as Nessus. Identification The "Uhura" service on the Prism Central VM will be in "DOWN" status: nutanix@pcvm:~$ cs | grep -v UP The /home/nutanix/data/logs/uhura.out log file on the Prism Central VM will be stuck with a signature similar to the following, and no new logs will be appended. You may find "Child <PID> exited with status 0" at the end of this log file. INFO 20315 /src/bigtop/infra/infra_server/cluster/service_monitor/service_monitor.c:97 StartServiceMonitor: Child 20317 exited with status: 0 The /home/nutanix/data/logs/genesis.out log file on the Prism Central VM will show unusual connections with "code 400" on different service ports from an IP that belongs to security/port scanner software. These unusual connections from this IP correlate with the "Uhura" service stops and crashes. ERROR 45034000 http_server.py:200 ::ffff:x.x.x.x:59502 - code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x02\x00\x01\x00\x01\xfc\x03\x03\x193\xe6\xe79\xe5\xaeJJX_uoX\xaf\xf0\x11I\xf0\x15E\xff\xe4\x14\x80m\x18\xf94<\x93\xa1') dmesg logs on the Prism Central VM may also show logs similar to the ones below: the "SRC" IP will belong to the security/port scanner software, and the "DST" IP will be the Prism Central VM IP. nutanix@pcvm:~$ sudo dmesg -T | grep portscan
This issue is resolved in pc.2023.4.0.2, pc.2024.1. Please upgrade Prism Central to the specified version or newer. Workaround: "Uhura" service on Prism Central VM needs to be started manually with the following command: nutanix@pcvm:~$ cluster start As a workaround, exclude the Prism Central VM from the port scanner security software until a Prism Central with a fix for this issue is available.
KB16994
Security Central 2.x not processing data from Prism Central with non-default "Cluster Name"
Security Central 2.x not processing data from Prism Central with non-default "Cluster Name"
Security Central 2.x may get stuck when processing data from a Prism Central instance where the "Cluster Name" was changed from the default "Unnamed". The Security Central GUI may show this status even after a few days: "Account's data update is in progress. Please check in after a few hours."
The workaround for the current version is to rename the PC cluster to its default name, "Unnamed". Once the SCVM is upgraded to 2.1.1, you can change the PC name to any name again. This known issue will be addressed with Nutanix Security Central 2.1.1.
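A hedged sketch of renaming the Prism Central cluster back to the default from any PCVM (verify the current name first; the rename can also be done from the Prism Central settings UI):
nutanix@PCVM:~$ ncli cluster info | grep -i 'cluster name'
nutanix@PCVM:~$ ncli cluster edit-params new-name=Unnamed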
KB5851
Nutanix Files - Deleting internal snapshots (SSR and CFT snapshots)
This article documents the procedure to calculate reclaimable space and remove internal snapshots (i.e. Self Service Restore (SSR) and third party backup snapshots (CFT)) using afs command line utility.
Nutanix Files takes two types of snapshots: 1. External Snapshots - Created through a protection domain schedule or via a user-initiated one-time snapshot of the PD. This is a snapshot that can be used to recover the File Server cluster. The space occupied by these is visible on the File Server Container (Prism > Storage > Storage Container). 2. Internal Snapshots - Exposed to the user for file-level recovery. On an SMB share, these are exposed to the client as Windows Previous Versions. The space occupied by these internal snapshots is accounted for within the space occupied by the share and can be seen on the File Server Dashboard. This article outlines the procedure to calculate how much space can be reclaimed and remove the Internal Snapshots (i.e. Self Service Restore (SSR) and third-party backup snapshots (CFT)) using the afs command-line utility. Additional Notes: If you delete any internal snapshot from within the file server, the space reclaimed will be reflected ONLY on the File Server Dashboard, not the File Server Container. Existing Protection Domain snapshots include the internal snapshots present from the time the snapshot was created, thereby locking in the space occupied by the internal snapshot. Therefore, if we are trying to reclaim space in the AOS cluster as the end goal, both PD and Internal Snapshots need to be deleted, and then we need to let at least two Curator full scans complete before we can see the space reclaimed. Curator scans are system generated and automatically run every 6 hours on the cluster. If Erasure Coding is enabled on the file server container, we would require three full scans to reclaim space. CFT snapshots have a limit of 20 snapshots per share as per Nutanix Configuration Maximums https://portal.nutanix.com/page/documents/configuration-maximum/list?software=Nutanix%20Files&version=4.4.0. Once the limit is reached, third-party backups will fail with an error showing 'Create snapshot function failed with error [Max snapshot limit reached]'. In this situation, deleting older CFT snapshots will allow the creation of new CFT snapshots. Snapshots are read-only, and it is not possible to delete individual files inside of the snapshots themselves.
This procedure is applicable from File Server version 3.5.3 and later. 1. Log in to one of the File Server VMs (FSVM) via SSH. Use the Virtual IP address via the nutanix user (Username: nutanix). Authentication should be automatic: nutanix@CVM$ afs info.nvmips nutanix@CVM$ ssh nutanix@<Virtual_IP> 2. List the snapshots for a share and capture the UUID of the snapshot: nutanix@FSVM$ afs snapshot.list share_name=<share_name> Example: nutanix@FSVM$ afs snapshot.list share_name=<share_name> Note: The snapshot list is sorted according to creation time (oldest at the top and newest at the bottom). Capture the UUID of the snapshots you want to delete. If you want to delete multiple snapshots, then break up the snapshots into groups of 5. 3. Run the below command to calculate the reclaimable space after snapshot deletion. The range cannot go past 5 snapshots: nutanix@FSVM$ afs snapshot.reclaimable_space <uuid_start>:<uuid_end> Example: nutanix@FSVM$ afs snapshot.reclaimable_space 05e65a80-36df-45a7-9032-87fa7296ef9a:dc325d5d-5499-46d1-93df-80476be2f961 Note: The base snapshot (uuid_start) should be created before the current snapshot (uuid_end). 4. Delete the snapshots using the afs CLI command: a. To delete a single snapshot: nutanix@FSVM$ afs snapshot.remove snapshot_uuid_list=<snapshot_uuid> b. To delete multiple snapshots, starting with oldest to newest: nutanix@FSVM$ afs snapshot.remove snapshot_uuid_list=<snapshot_uuid1>,<snapshot_uuid2> Note: <snapshot_uuid>: Use the snapshot UUID obtained in step 2. The max number of snapshots that can be deleted at a time is 5. Example: nutanix@FSVM$ afs snapshot.remove snapshot_uuid_list=c072d2fd-8204-4a31-9d40-a0d7efb1b00d,34b4f7bc-5db0-4f1b-82ae-1321ea96834c Where the file server version is lower than 3.5.3, upgrade the file server to the latest available version and delete the snapshots. In the event that you cannot upgrade the file server and wish to delete the File Server internal snapshots, please contact Nutanix Support https://portal.nutanix.com/.
KB5231
Troubleshooting Single SSD Repair Features
A guide to using and troubleshooting single SSD repair features.
This article describes usage and troubleshooting for the single_ssd_repair command-line script, which was introduced in AOS 4.6 ( FEAT-1258 https://jira.nutanix.com/browse/FEAT-1258) to assist in recovery from disk failures on Nutanix platforms that have only one SSD drive. The same functionality was added to the Prism Web Console in AOS 4.7 ( FEAT-1935 https://jira.nutanix.com/browse/FEAT-1935), but you can still use the command-line version in this and later AOS releases. Unlike other platforms that have redundant boot partitions (RAID 1) across two physical drives, SSD failures on single-SSD platforms often result in the local CVM failing to boot. Depending on how the SSD fails, there may be no alert or clear indication as to whether the CVM went offline due to a boot drive failure or for some other reason. Opening a console to the CVM, you will usually see Ext4 filesystem errors or a message to the effect of "FATAL: Module scsi_wait_scan not found." The latter can occur for a number of reasons, but most commonly due to a corrupt or missing svmboot.iso (on ESXi, ServiceVM_Centos.iso). What is the single_ssd_repair script? The single_ssd_repair script combines two other existing scripts along with some additional logic and safeguards: svmrescue.iso option "Rescue Nutanix Controller VM" or Phoenix option "Repair CVM" -- These options format and re-install the first three partitions on a CVM boot drive. These include Boot 1, Boot 2, and the /home directory. Partition 4, which contains the customer data, is not touched. Check out this guide https://confluence.eng.nutanix.com:8443/display/STK/Repairing+or+Reimaging+Nutanix+CVM for more details on svmrescue and Phoenix. boot_disk_replace -- Necessary for restoring the configuration on a CVM after it is rescued or re-imaged. When should I use the single_ssd_repair script? When you are trying to recover from a corrupted CVM boot partition or a failed SSD on a single-SSD hardware platform running AOS 4.6 or later. Dual-SSD clusters running AOS 5.8, 5.5.3, 5.6.2 or later versions can also use this utility to recover from CVM corruption, but both SSD boot drives must be logically removed in advance of executing the script. When should I NOT use the script? When the cluster is in the middle of an AOS upgrade. If you think you may need to use this script, please consult a DevEx Engineer before doing so. Do not use the script when Network Segmentation 2.0 is enabled; only the host boot disk repair is supported in this scenario. How do I know if my single-SSD CVM is down due to a boot drive failure? Check for the presence of a disk failure alert in Prism indicating a serial number that matches the SSD on the CVM which is down in the Prism > Hardware > Diagram page. If you see such an alert, then the SSD boot drive has likely failed and you should proceed to the Solution section for further instruction. If there is no such alert, proceed to Step 2. Try to boot the affected CVM using a "vanilla" svmboot.iso obtained from a working CVM's ~/data/installer/e*/images/ directory. See the "Steps to Recover" section of KB-4220 https://nutanix.my.salesforce.com/articles/Knowledge_Base/svmboot-iso-and-ServiceVM-Centos-iso-Usage-and-Troubleshooting for full details. If the CVM boots, then your issue was not a failed disk, and you should perform a full NCC health check and collect logs for RCA. If after a handful of attempts the CVM still does not boot with this new ISO, proceed to Step 3.
Create an svmrescue.iso following this guide https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Create+and+Mount and mount it to your CVM instead of the current ISO being used. Boot up the CVM and enter the Rescue Shell. From here, you should perform some hardware debugging to determine the scope and most likely cause of the problem. Consult the Disk Debugging Guide https://confluence.eng.nutanix.com:8443/display/STK/WF%3A+Disk+Debugging if you are unfamiliar with the syntax of the following commands, or ask a Senior SRE for help. Make sure to log the results of these tests in your case notes as these can be critical in cases where cause of the failure was misdiagnosed. Run the "lsscsi" command to see if the SSD appears to the LSI controller. If the SSD does not appear, most likely it has failed completely and you should proceed to the Solution section of this article for steps to resolve. If the SSD does appear, proceed with debugging at Step b. Test the device health using "smartctl". If the SSD Passes the test, it may be that the CVM boot files were corrupted and the SSD hardware is not at fault. Proceed to the Solution section, but skip replacing the actual device. If the SSD Fails the test, proceed with debugging at Step c. Check the link speed and error counters for the interface for the SSD using "lsiutil". Note your findings in the case notes and proceed to the Solution section.
Please note the passwords for the nutanix and admin users before using the single_ssd_repair script and confirm they are in place afterwards. ENG-495806 is open for an issue where these passwords can revert to defaults. NOTE: Always consult the Hardware Replacement Documentation https://portal.nutanix.com/#/page/docs/list?type=hardware on the Nutanix Support Portal https://portal.nutanix.com for your AOS version and hardware model to find the most up-to-date instructions for recovering from these and other types of hardware failures. Using Single SSD Repair Logically remove the failed SSD from the cluster configuration. If the SSD disk appears in red in the "Prism > Hardware > Diagram" view, select the drive and then click "Remove Disk". Monitor the disk removal task in Prism and do not proceed to replace the drive until the task is completed 100% and you no longer see the drive serial number listed in the "ncli disk list" output. If the disk does not appear in red and the CVM is offline, you will see a banner similar to the one below and will have to remove the disk via the command line. To remove the disk via the command line, first obtain the Disk ID from the output of "ncli disk list". It will be listed in the Id field after the two colons "::" nutanix@CVM:~$ ncli disk list include-deleted=true | grep -B8 -A6 <serial_number_of_disk> Start the disk removal with the following command: nutanix@CVM:~$ ncli disk rm-start id=<insert_disk_id> force=true Monitor the disk removal task in Prism and do not proceed to replace the drive until the task is completed 100% and you no longer see the drive serial number listed in the "ncli disk list" output. Once the original SSD Disk ID is completely removed from the configuration, replace the physical drive (skip this step if no hardware fault was found and only software corruption is suspected). Power down the affected CVM, and take the node out of Maintenance Mode in the case of the ESXi hypervisor. If you are not in the middle of an AOS upgrade... initiate Single SSD Repair! If an AOS upgrade is ongoing, consult a DevEx Engineer before proceeding. If you were able to initiate the disk removal in Prism, select the same disk in Prism > Hardware > Diagram and click on "Repair Disk". NOTE: Be careful not to confuse this with the "Repair Host Boot Device" option, which is reserved for recovering from SATADOM (hypervisor boot device) failures. If you initiated the disk removal via the command line, you will have to start the SSD Repair by running the following command from a working CVM: nutanix@CVM:~$ single_ssd_repair -s <ip_address_of_affected_cvm> In some instances the single_ssd_repair -s <ip_address_of_affected_cvm> command may only function properly when executed from the Curator Master. Monitoring Single SSD Repair Prism Tasks CVM Command Line nutanix@CVM:~$ ssd_repair_status Example: nutanix@NTNX-1xFMxxxxxxx-A-CVM:10.1.1.3:~$ ssd_repair_status Logs nutanix@CVM:~$ allssh grep -i ssd /home/nutanix/data/logs/genesis.out Example: nutanix@NTNX-1xFMxxxxxxx-A-CVM:10.1.1.3:~$ allssh grep -i ssd data/logs/genesis.out Another log that will be useful during the last stage of the SSD Repair process is the Boot Disk Replace log. Output the log's contents with the following command and research any errors you find within: nutanix@CVM:~$ allssh cat /home/nutanix/data/logs/boot_disk_replace.log If the CVM is not reachable on the network, try connecting through the VM console (if ESXi or Hyper-V) and capture the state of the CVM from there.
AHV users can use VNC Viewer to view the CVM console as described in KB-5156 https://nutanix.my.salesforce.com/articles/Knowledge_Base/Access-VNC-console-for-CVM. Troubleshooting Help All KB's related to Single SSD Repair https://docs.google.com/document/d/1h4ZMAwRznqOJ_U90h_4XQWDLg4-fQR7RQgpxQMm5ZTY/edit KB-4427 https://nutanix.my.salesforce.com/articles/Knowledge_Base/Troubleshooting-CVM-Down: Troubleshooting CVM Down KB-7247 https://portal.nutanix.com/kb/7247: PROBLEM: ssd_repair_status shows "Unable to transfer svmrescue image to host x.x.x.x" Known Issues & Improvements ENG-121366 https://jira.nutanix.com/browse/ENG-121366: single_ssd_repair workflow should ideally handle both single or dual SSD workflow. (ONCALL-2929) ENG-98927 https://jira.nutanix.com/browse/ENG-98927: Cassandra should not try to create directory if in forwarding mode ENG-91487 https://jira.nutanix.com/browse/ENG-91487: Validate Prior SSD removal for Single SSD Repair
KB10371
Logbay Collection 'collection_result.JSON' feature
This article describes a new feature added to Logbay with NCC 4.0.
This KB article describes in detail the scenario where Logbay might have issues collecting all the logs in the cluster. There can be multiple reasons for Logbay being unable to collect a log file. A few of them are listed below: 1. The log file was not available during the timeframe of execution. 2. Log collection of the desired log file generated an ERR during collection. 3. The log collection request using Logbay timed out during collection.
With the release of NCC-4.0.0, the NCC team has introduced a new utility inside Logbay: `collection_result.JSON`. This file explains which logs were captured in the collected Logbay bundle. The file `collection_result.json` is stored inside the Logbay bundle under /home/nutanix/data/logbay/bundles/. Typically, the content of collection_result.JSON looks as below: Total Failed Items: 4 <- number of failed items Total Items: 573 <- number of collected items TotalBytesWritten: 2.7GB <- combined size of the bundle FailedItems - Shows the list of log bundles which failed collection. Example: ############## ww.xx.yy.zz ##############
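Because the file is plain JSON, it can be pretty-printed for review once the bundle is extracted, for example (the bundle path is the default noted above):
nutanix@cvm$ cd /home/nutanix/data/logbay/bundles/
nutanix@cvm$ python -m json.tool < collection_result.json | less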
KB10402
Nutanix Files - Not able to re-mount NFS share with Kerberos authentication
Mounting issues in Linux and environments using Kerberos authentication.
NFS shares that use Kerberos authentication (krb5, krb5p, or krb5i) can automount using fstab, but cannot be re-mounted after un-mounting. This can be resolved by disabling rDNS on the Linux client. If rdns is set to false, it prevents the use of reverse DNS resolution when translating hostnames into service principal names. The default value is true. Setting this flag to false is more secure but may force users to use fully qualified domain names when authenticating to services.
Verify that the client has the correct A and PTR DNS records. The below command can be run on the FSVMs to determine what needs to be configured in DNS. nutanix@NTNX-FSVM:~$ afs dns.list If the DNS records are correct (the FSVMs resolve via IP and name, and the File Server resolves via name), add the below line to the client's /etc/krb5.conf file: rdns = false Test mounting and unmounting the NFS share on the client: user@linux:~$ umount <mounted share path>
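A manual mount/unmount cycle with the Kerberos security flavor can then confirm the fix. A sketch; share path, mount point, and NFS options are examples, and the file server should be referenced by its fully qualified domain name:
user@linux:~$ sudo mount -t nfs -o vers=4,sec=krb5 fileserver.domain.com:/share01 /mnt/share01
user@linux:~$ sudo umount /mnt/share01
user@linux:~$ sudo mount -t nfs -o vers=4,sec=krb5 fileserver.domain.com:/share01 /mnt/share01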
KB6719
Script for data collection - G6/G7 node hangs due to 3-Strike Timeout processor error
This KB focuses on the tordump.pyc script, which gathers additional information for RCA of IERR/CATERR/PSOD/BSOD or host hang events.
We have seen multiple cases in which G6/G7 nodes in the cluster may hang randomly due to an Intel 3-Strike Timeout processor error. Symptoms: The node may be stuck in a hung state with PSOD(ESX), BSOD(Hyper-V), or hang/freeze with a blank screen. Sample PSOD on ESXi This is a sample PSOD. The actual type of PSOD/BSOD may vary. VMware ESXi 6.7.0 [Releasebuild-10302608 x86_64] Sample from AHV Kernel panic - not syncing: Timeout: Not all CPUs entered broadcast exception handler The node may reboot unexpectedly (auto-reboot by AHV or by BMC).CATERR (Catastrophic Error) or IERR (Processor Internal Error) may be found in IPMI SEL Note: CATERR/IERR may not always be logged in IPMI SEL even in case of 3-Strike. Sample IERR Sample CATERR Note : IERR / CATERR may be logged for various reasons.To identify the 3-Strike issue, follow the steps in ISB-103 https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=58235312.Check KB-2559 http://portal.nutanix.com/kb/2559 for other possible reasons for IERR / CATERR.
Note: We should NOT REPLACE NODES that encounter the 3-strike issue once, since it is a very low probability issue that will not be solved by replacing hardware. This is a 3-strike processor error. Per engineering, the recommendation is to upgrade BMC/BIOS to 7.10/43.103 and attach your case with logs to ENG-197569 https://jira.nutanix.com/browse/ENG-197569. The 3-strike processor error comes from the CPU, which can be due to a PCI device timing out, and we need the additional debugging capability to understand the root cause. All the relevant information is summarized in ISB-103 https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=58235312; please refer to the ISB doc for further actions and the latest updates. Pre-requisites: The data collection with the new script should be performed while the host is in a hung state. Disabling the "System auto reset" option will prevent the node from being automatically reset by the BMC following a CATERR/IERR. Detailed information and configuration are explained in ISB-103 https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=58235312. Please collect the OOB logs mentioned in KB-2893 http://portal.nutanix.com/kb/2893 for data collection. Estimated time to run: Without a 3-strike error: the script will only collect basic MCE logs and should finish in < 2 minutes. With a 3-strike error: this may vary depending on network speed/traffic, which can potentially cause time-out problems. Based on testing, the typical execution time is < 15 minutes and the maximum < 30 minutes. Post-Log Collection Gather lspci output AHV lspci -vvv cat /proc/iomem ESXi lspci -e > lspci-binary As mentioned in ISB-103 https://confluence.eng.nutanix.com:8443/pages/viewpage.action?pageId=58235312, add your case to the spreadsheet https://docs.google.com/spreadsheets/d/1vgpXXrmwuoPIyD4PwzMDG6j5sX9g5xY7LQCX7MuNSw0/edit#gid=0 with the required information. Upload the file generated by the script to Diamond. Leave a comment with a summary of the issue along with the lspci output in https://jira.nutanix.com/browse/ENG-197569 and specify the file name. Please immediately notify any regional Escalation Engineer of confirmed 3-strike instances identified by tordump.pyc.
KB13971
Create PE-PC Remote Connection Task fails with ACCESS_DENIED and ENTITY_NOT_FOUND when IAMv2 is enabled on PC
Unable to create PE-PC remote connection if IAMv2 is enabled due to PE local user not present in IAMv2 Database
Identification After a PE-PC remote connection reset, we might not be able to create the RC between PE and PC. From the PE tasks page and aplos.out, we might notice the following error reason: "response": { There will not be any remote connection entity present on the PE: nutanix@CVM:~$ nuclei remote_connection.list_all In aplos_engine.out on the respective PC, you may notice ACCESS_DENIED for the RC entity and ENTITY_NOT_FOUND for the PE user: 2022-11-16 11:13:30,397Z ERROR intentengine_app.py:1257 Encountered exception {'api_version': '3.1', 2022-11-16 11:38:43,198Z ERROR client_util.py:99 Failed to call API https://iam-proxy.ntnx-base:8445/api/iam/v4.0.a1/authn/users/xxxxxxxx-xxxx-xxxx-xxxx-ac1f6bcd14ab:[GET], code 404, reason NOT FOUND Explanation: A remote connection is successful only if PE is able to authenticate on PC with its user. PC has a hidden user with the PE cluster's UUID as the username, which is used to authenticate. In scenarios where IAMv2 is not enabled, i.e. IAMv1 is used, this user information is authenticated from the following zknode: nutanix@PCVM:~$ zkcat /appliance/physical/userrepository The data in this zknode can be cryptic; to view it in a well-formatted way, save the script get_users_info_zk_repo.py https://download.nutanix.com/kbattachments/13971/get_users_info_zk_repo.py in the ~/tmp directory on the PCVM and confirm the md5sum matches. nutanix@PCVM:~$ cd ~/tmp nutanix@PCVM:~/tmp$ wget https://download.nutanix.com/kbattachments/13971/get_users_info_zk_repo.py nutanix@PCVM:~/tmp$ md5sum get_users_info_zk_repo.py 625731557a2dad7045e5779d7081a1be get_users_info_zk_repo.py Run the script using: nutanix@PCVM:~/tmp$ python get_users_info_zk_repo.py The local PE user with the PE cluster's UUID as the username will be present here. When IAMv2 is enabled, user information is stored in the iam_user table of iam-user-authn. When this local user is not present in the IAMv2 database, we will see the following response in aplos_engine.out on the PC: 2022-11-16 11:38:43,198Z ERROR client_util.py:99 Failed to call API https://iam-proxy.ntnx-base:8445/api/iam/v4.0.a1/authn/users/xxxxxxxx-xxxx-xxxx-xxxx-ac1f6bcd14ab:[GET], code 404, reason NOT FOUND Because of this, PE is not able to authenticate on PC, and hence remote connection creation is not successful.
If the user is present in the zknode but not in the IAMv2 DB, you can execute the iam-bootstrap job again as described below, and it will copy the details from the ZK node to the DB. Note: This action may render the PC inaccessible for some time; inform the customer in advance or schedule a maintenance window for this activity if required. Note: This KB is applicable only in scenarios where we observe ENTITY_NOT_FOUND errors for the user, coming from the authn pod to aplos_engine. WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. To execute the IAM-bootstrap job: List the zknodes present in iam_flags: nutanix@PCVM:~$ zkls /appliance/logical/iam_flags Remove the following zknodes if they exist: nutanix@PCVM:~$ zkrm /appliance/logical/iam_flags/iam_v2_enabled Get the CMSP cluster UUID: nutanix@PCVM:~$ mspctl cluster list Delete the existing bootstrap job (replace <MSP-CLUSTER-UUID> with the UUID found in the previous step): nutanix@PCVM:~$ mspctl application -u <MSP-CLUSTER-UUID> delete iam-bootstrap -f /home/docker/msp_controller/bootstrap/services/IAMv2/iam-bootstrap.yaml Re-apply the bootstrap YAML: nutanix@PCVM:~$ mspctl application -u <MSP-CLUSTER-UUID> apply iam-bootstrap -f /home/docker/msp_controller/bootstrap/services/IAMv2/iam-bootstrap.yaml Once this completes, reset the PE-PC remote connection from a CVM of the PE cluster: nutanix@CVM:~$ nuclei remote_connection.reset_pe_pc_remoteconnection After this, the remote connection should be created successfully and PE entities should be manageable from PC. After resetting this remote connection successfully, the Prism Central UI may become inaccessible with a 403 Access Denied error, stating "No permission to access this page. User is not authorized to access this page". If this occurs, review the internal comments section of KB 12082 to verify the dnsmasq configuration.
KB11986
v3 API clusters/list may return stale values after software upgrade on a remote cluster
v3 API clusters/list may return stale software_map after software upgrade on a remote cluster
The v3 API clusters/list may return stale values for software entities (NCC, AOS, AHV, etc.) if they were recently upgraded. For example, the UI displays the correct values (AHV version 20201105.2030), while in a REST API response sent to the clusters/list endpoint, a stale value is seen in the "hypervisor_server_list" table: ...
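For reference, a minimal sketch of querying the endpoint directly is shown below; the Prism Central IP and credentials are placeholders:
nutanix@PCVM:~$ curl -k -u admin -X POST "https://<PC-IP>:9440/api/nutanix/v3/clusters/list" -H "Content-Type: application/json" -d '{"kind": "cluster"}'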
This API queries information from the intent_spec IDF table, which in this case is not being updated properly. The issue is being tracked in ENG-414459 https://jira.nutanix.com/browse/ENG-414459.
KB14385
Nutanix Database Service - Configure automatic deletion of archived WAL files for Postgres database
This article explains how to configure automatic deletion of archived WAL files in NDB for Postgres HA database.
Note: Nutanix Database Service (NDB) was formerly known as Era. Versions Affected: 2.x The automatic deletion of WAL files from the archive directory in NDB is disabled by default. If it is not enabled, the archive directory will fill up over time and the Postgres logs will show errors such as the following: rsync: write failed on "/pgsql/pg-clstr-00/archive/pg-clstr-00/00000002000000000000000E": No space left on device (28)
The automatic deletion of the WAL files in NDB can be enabled using the NDB API. The below steps can be followed: Log in to the NDB API server (leader node in case of NDB HA) via SSH as user "era". The leader node can be found from the NDB Administration page. Type "era" and press Enter to launch the era CLI. Run the following command to get the database ID of the database for which the automatic deletion needs to be enabled: [era@ndb]$ era -c "database list" | grep <DB_Instance_Name> For example: [era@ndb]$ era -c "database list" | grep clstrdb The first column in the output is the database ID. On the same NDB API server, run the following command to check the current value of the automatic deletion policy: [era@ndb]$ curl -u admin -k --location --request GET 'https://<NDB_server_IP>/era/v0.9/databases/<DB_ID>/properties' | tr "," "\n"| grep -A1 archive_wal_expire_days Enter the admin password when prompted and it should display the property value currently set. For example: [era@ndb]$ curl -u admin -k --location --request GET 'https://x.x.x.x:443/era/v0.9/databases/8c96d5b9-de8f-48d8-afe2-5236b00aa87d/properties' | tr "," "\n" | grep -A1 archive_wal_expire_days A value of -1 means the automatic deletion is disabled. You can set it to the desired number of days for which you want to keep the WAL files. Any WAL files older than that number of days will be deleted. On the same NDB API server, run the following command to set the value for the automatic deletion policy: [era@ndb]$ curl -k -u admin --location --request POST \ Replace "value" with the desired number of days. In the example above, it is set to 30. Enter the admin password when prompted. Repeat step 4 to validate that the value was updated as needed. Now, with every log-catchup operation, the time machine will also remove the archived WAL files as per the defined retention days.
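For reference, before enabling the policy, the current size of the archive directory can be checked on the database server; the path below is taken from the error example above and may differ per deployment:
[user@dbserver]$ du -sh /pgsql/pg-clstr-00/archive/
[user@dbserver]$ df -h /pgsql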
KB6703
Zookeeper unresponsive due to empty CVM ZOOKEEPER_HOST_PORT_LIST environment entry
This KB explains an issue in which the ZOOKEEPER_HOST_PORT_LIST environment entry is not populated in one or more CVMs leading to connectivity issues with Zookeeper.
This KB explains an issue in which the ZOOKEEPER_HOST_PORT_LIST environment entry may be incomplete on one or more CVMs, leading to connectivity issues with Zookeeper. The following symptoms may be observed when this issue is encountered: The zeus_config_printer command reports the following FATAL: nutanix@cvm$ zeus_config_printer zkls does not return and freezes: ^CTraceback (most recent call last): Any functionality depending on connectivity with Zookeeper, for instance, the single_ssd_repair script or boot_disk_replacement, may fail as they cannot connect to Zookeeper: QFATAL Timed out waiting for Zookeeper session establishment Zookeeper ruok still reports connectivity to the servers fine as it does not rely on the Zookeeper environment variable. Restarting the Zookeeper service does not resolve the issue since it does not repopulate the environment variable. Services may be stable as long as they were started before the Zookeeper variable got reset. If the CVM is restarted in this state, Zookeeper will fail to initialize, leading to all services failing too due to Zookeeper being unavailable. Verification The Zookeeper environment has to be checked to identify this issue by running the following command: nutanix@cvm$ allssh "env|grep ZOOKEEPER_HOST_PORT_LIST" A healthy environment will list all 3 (or 5 on an FT2 cluster) zk servers. For example: nutanix@cvm$ allssh "env|grep ZOOKEEPER_HOST_PORT_LIST" However, CVMs hitting this issue will not return all zk servers. For example, in the output below, only zk1 is displayed: nutanix@cvm$ allssh "env|grep ZOOKEEPER_HOST_PORT_LIST" The reason for this is that the file that sets this environment variable, zookeeper_env.sh, is empty. When this file is empty, the code just adds zk1 to the environment variable. The commands below will confirm whether zookeeper_env.sh is empty on the CVMs: nutanix@cvm$ allssh cat /etc/profile.d/zookeeper_env.sh
This issue is triggered when Genesis updates or creates the zookeeper_env.sh file. In this process, the temporary file that holds the contents is not properly synced to disk, so if the CVM reboots at this point, the file ends up being empty. This issue is more likely to be triggered during upgrades or an unclean CVM shutdown but may go unnoticed for a while until the services restart. ENG-128286 http://jira.nutanix.com/browse/ENG-128286 has a fix for the issue in AOS 5.5.7 and 5.6 and later. All steps below must be performed only after approval from a Senior SRE or the DevEx Team. First, verify the fault tolerance status of the cluster. In case FT is 0, DO NOT PROCEED as the cluster is at risk; raise an ONCALL for DevEx assistance. If FT is not 0, proceed with the following steps: Back up the current zeus_config_printer output with the command below: ZOOKEEPER_HOST_PORT_LIST=zk3:9876,zk2:9876,zk1:9876 zeus_config_printer > tmp/zcp.0 Run the command below to set the Zookeeper environment value: export ZOOKEEPER_HOST_PORT_LIST=zk3:9876,zk2:9876,zk1:9876 Modify /etc/profile.d/zookeeper_env.sh. Include the following content: # THIS FILE IS GENERATED AUTOMATICALLY - DO NOT EDIT You may need to log out from the CVM and log in again. Ensure that Zookeeper is stable and you can access zeus_config_printer and zkls. Upgrade AOS to a version with the fix as soon as possible. Additional Scenario: In cases such as a cluster shutdown (maintenance) or a power outage, genesis may not start the Zookeeper process. The symptoms seen will be similar to the ones mentioned in this KB. The issue was due to ENG-186252 https://jira.nutanix.com/browse/ENG-186252 and ENG-186176 https://jira.nutanix.com/browse/ENG-186176, where LCM creates a deadlock between genesis and the LCM framework; this issue has been resolved in releases 5.5.8 and onwards. One can confirm this by running kill -SIGUSR1 <genesis-PID> Now check genesis.out; you should see a stack of the LCM catalog interface waiting for a ZK connection: <greenlet.greenlet object at 0x7f1c89ef5a50> Verify whether the AOS release is between 5.5 and 5.5.8; if it matches, Zookeeper has to be started manually to get the cluster started. Check /etc/hosts to see which CVMs are the ZK nodes and start "zkServer.sh start-foreground" on those nodes simultaneously; after that, all the services should come up without issue.
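For reference, a healthy /etc/profile.d/zookeeper_env.sh is expected to export the full zk server list. The sketch below is illustrative for a 3-node (FT1) Zookeeper ensemble; the exact generated content may differ:
# THIS FILE IS GENERATED AUTOMATICALLY - DO NOT EDIT
export ZOOKEEPER_HOST_PORT_LIST=zk3:9876,zk2:9876,zk1:9876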
KB12797
Data-In-Transit Encryption guidelines and limitations
Details on the data-in-transit feature introduced in AOS 6.1.1
By default, intra-cluster traffic is not encrypted over the network. Introduced in AOS 6.1.1 Data-In-Transit encryption can now be used to encrypt service level traffic between cluster nodes. This includes only Stargate level encryption in 6.1.1, with subsequent releases introducing additional services that can have their traffic encrypted. AOS 6.7 introduced Zookeeper service Data-In-Transit encryption. Prerequisites:In order to use Data-In-Transit encryption, the following prerequisites must be met: AOS 6.1.1 or later.PC 2022.3 or later.Same licensing requirements as Software Data at Rest Encryption. Additional Details: Communication will be encrypted using TLS v1.2, AES 128 bit key will be used by default.Complete cipher spec is ECDHE-RSA-AES128-GCM-SHA256.Stargate Port 2009 will be used for both secure and plain TCP connections LimitationsPhase 1 of Data-In-Transit encryption has the following limitations: Only intra-cluster traffic between Stargate processes is encrypted.Other services like Cassandra or Zookeeper are not encrypted in phase 1.RDMA traffic will not be encrypted in phase 1.When a Controller VM (CVM) goes down, UVM to remote CVM traffic will not be encrypted in Phase 1.UVMs connected to Volume Groups will not be encrypted when the target disk is on a remote CVM in Phase 1.
ConfigurationConfiguration of this feature can be performed via Prism or NCLI: see Data-in-Transit Encryption chapter https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:mul-data-in-transit-encryption-overview-c.html in the Nutanix Security Guide. Prism - Hardware > Clusters > Actions > Enable Data-In-Transit Encryption: NCLI: ncli cluster edit-params enable-intransit-encryption=(True or False) TroubleshootingLogging for this feature is contained within the Stargate subset of logs. For example, ~/data/logs/stargate.INFO will note logging related to enabling/disabling the feature. Stargate.INFO (stargate_net handles enabling/disabling): nutanix@cvm$ grep stargate.net stargate.INFO Stargate.INFO (secure_tcp_connection shows encrypted sessions): nutanix@cvm$ grep secure_tcp_connection stargate.INFO
KB15791
Prism Central - Kanon Service goes in crash loop if VM name has non-ascii characters
If VM has non-ASCII characters and is protected using any Protection policy then the Kanon service goes in a crash loop.
The Kanon service goes into a crash loop when any VM with a non-ASCII character in its name is present and protected using a Protection Policy (Leap). With the Kanon service in a crash loop, the VM protect/unprotect workflow will not proceed. The below traceback signature is observed in the Kanon log, ~/data/logs/kanon.out: nutanix@PCVM:~/data/logs$ less ~/data/logs/kanon.out Identify the VM that has the non-ASCII character using the command below. Note: Modify the count value based on the number of VMs ("count" == "Total Entities"). nutanix@PCVM:~/data/logs$ nuclei vm.list count=100 In the above example, the VM Nikhil_tesöt has the non-ASCII character "ö" in the VM name, and the VM is protected using a Protection policy. nutanix@PCVM:~/data/logs$ nuclei vm.get Nikhil_tesöt |grep -A10 protection_policy_state
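To narrow down which VM names contain non-ASCII characters without reviewing the list manually, the nuclei output can be filtered as shown below; this assumes the grep on the PCVM supports -P (Perl-style regex), and the count value should again match the total number of VMs:
nutanix@PCVM:~$ nuclei vm.list count=100 | grep -P '[^\x00-\x7F]'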
This issue is being tracked through ENG-564744 https://jira.nutanix.com/browse/ENG-564744. Workaround: To stop the Kanon service from crashing, rename the VM that has the non-ASCII character so that its name contains only ASCII characters.
KB9236
Stargate in crash loop due to divergent tentative update between AESdb and Disk WAL
Stargate in crash loop due to divergent tentative update between AESdb and Disk WAL
This KB describes a scenario where Stargate will go into a crash-loop due to a software defect related to Autonomous Extent Store (AESdb). Note that this software defect only impacts clusters with AESdb enabled. zeus_config_printer can be invoked to review on which containers AES is enabled. nutanix@CVM:~$ zeus_config_printer | egrep -w 'container_id|aes_enabled' For more information regarding Autonomous Extent Store (AESdb), review the following confluence link: Stargate in crash loop due to divergent tentative update between AESdb and Disk WAL https://confluence.eng.nutanix.com:8443/x/hPPlBg Identification Stargate will be in a crash-loop with one of the following signatures in /home/nutanix/data/logs/stargate.FATAL: F0402 10:50:25.952833 30143 extent_group_state.cc:2883] Check failed: largest_seen_intent_sequence() >= control_block->latest_intent_sequence() (2 vs. 3) 1122410631 F1130 15:49:07.732064 30582 extent_group_state.cc:3285] Check failed: global_metadata_intent_sequence() >= control_block->global_metadata_intent_sequence() (16 vs. 17) 751476814 F0512 16:59:19.858719 28815 extent_group_state.cc:2924] Check failed: largest_seen_intent_sequence() >= control_block->latest_intent_sequence() (313 vs. 314) 869933461 ControlBlock F0512 16:59:20.874625 28817 extent_group_state.cc:2924] Check failed: largest_seen_intent_sequence() >= control_block->latest_intent_sequence() (5103 vs. 5108) 634594479 ControlBlock IMPORTANT: Engage Engineering via ONCALL if any of the following conditions are met: Not enough free space in the cluster to perform disk removals. After following the workaround section of the KB, Stargate is still in a crash-loop. Root Cause ENG-250482 http://jira.nutanix.com/browse/ENG-250482 and similarly ENG-309207 https://jira.nutanix.com/browse/ENG-309207 describe a situation where the tentative update (the mechanism used to track which disk operations have been completed in case of failure) written into the Disk WAL will be divergent from the tentative update written on AESdb. This can happen if there is an unplanned Stargate/CVM failure when the tentative ops are being written. In the example above, the tentative updates are depicted in the line: (16 vs. 17)
ENG-250482 https://jira.nutanix.com/browse/ENG-250482 and ENG-309207 https://jira.nutanix.com/browse/ENG-309207 correct this software defect. 5.10.11 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+5.10.11, 5.15.2 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+5.15.2, 5.17.1 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+5.17.1 and 5.18 https://jira.nutanix.com/issues/?jql=project+%3D+ENG+AND+fixVersion+%3D+5.18, include the fixes for both defects. Advise customers to upgrade to any of these versions or later for a permanent fix. Workaround NOTE: Consider engaging an STL or a Senior resource in case assistance is needed following the instructions below. Identify which disk(s) is holding the AESdb. Get the egroup id(s) depicted in the FATAL log signature in ~/data/logs/stargate.INFO. Note that there might be more than one fatal pointing at different egroup ids. F0402 10:50:25.952833 30143 extent_group_state.cc:2883] Check failed: largest_seen_intent_sequence() >= control_block->latest_intent_sequence() (2 vs. 3) 1122410631 F0402 10:50:25.956422 30143 extent_group_state.cc:2883] Check failed: largest_seen_intent_sequence() >= control_block->latest_intent_sequence() (2 vs. 3) 869933461 Run a medusa_printer lookup from any CVM to locate the physical drive where the egroups are stored. The lookup needs to be run once per egroup id reported in the FATAL log signatures. The lookup might return two different outputs. nutanix@CVM:~$ medusa_printer -lookup egid_physical -egroup_id 1122410631 If the output returned is a negative value: nutanix@CVM:~$ medusa_printer -lookup egid_physical -egroup_id 1122410631 A negative value means that the egroup does not exist anymore in Cassandra. In order to proceed, it is required to query the local AESdb to obtain the disk where it was stored. The following lookup will identify on which physical SSD and AESdb the relevant egroup information is. (There is one local AESdb per node.) Run the one liner command below from the CVM where Stargate is crash-looping. The disk will be a local SSD on the CVM where Stargate is crash-looping. nutanix@CVM:~$ for f in `ls /home/nutanix/data/stargate-storage/disks/` ; do echo "Disk Serial Number:" $f ; echo "Medusa lookup output:"; medusa_printer --lookup egid_physical --egroup_id 1122410631 --physical_state_metadata_db_path=/home/nutanix/data/stargate-storage/disks/$f/metadata/aesdb | head -30 ; echo "" ; done If the output returned is not a negative value as shown below: nutanix@cvm:~$ medusa_printer -lookup egid_physical -egroup_id 869933461 The lookup above returns the number of replicas available (depending if RF2 or RF3) and the location of the physical SSD and CVM where the AESdb holds the relevant information. Run this additional lookup to ensure there is a healthy replica for the egroup. Ensure there are no kMissing or kCorrupt replicas. nutanix@cvm:~$ medusa_printer -lookup egid -egroup_id 869933461 | grep replicas -C20 Take note of the serial number of the disk of the CVM where Stargate is crash-looping. Assuming .56 is the CVM with Stargate crash-looping, note the serial BTTV324201CB400HGN. --physical_state_metadata_db_path=/home/nutanix/data/stargate-storage/disks/BTTV324301ZB400HGN/metadata/aesdb on SVM: 10287734692 IP: xx.xx.xx.57 Ensure to run the lookups for all egroups reported in the Stargate FATAL logs. Take note of in which disk the egroup noted in the stargate FATAL is stored. 
Example: egroup id 1120108547 ==> disk BTTV324301ZB400HGN Once all the egroups and their respective physical disks have been identified, ensure there is enough free space to perform a disk removal from the cluster. If there is not enough space available, engage Engineering via ONCALL and do not attempt to remove the drives. Set the CVM where Stargate is FATALing in maintenance mode: nutanix@CVM:~$ ncli host ls Initiate a disk removal of the disk that is holding the egroup as per the mapping built in Step 2. NOTE: Remove disks one by one. Wait until the removal of one disk completes before removing the next one. nutanix@CVM:~$ ncli disk ls | grep "BTTV324301ZB400HGN" -B8 Once the disk removal(s) are complete, take the CVM out of maintenance mode. Stargate should be stable and not crash-loop anymore. Removing the disks removes the AESdb stored on the disk as well, clearing the condition that causes Stargate to crash-loop. nutanix@CVM:~$ ncli host edit id=20 enable-maintenance-mode=false At this point, it is safe to re-partition and add the disk back from Prism https://portal.nutanix.com/page/documents/details/?targetId=Hardware-Admin-Ref-AOS-v5_17%3Ahar-drive-add-t.html. (This re-creates the AESdb.)
KB15247
Cloud Connect CVM can get Stargate in crash loop if a lot of data had been deleted from it
If a large amount of data is deleted from a Cloud Connect CVM, Stargate may go into a crash loop
On a Cloud Connect CVM, Stargate can go into a crash loop with the following Fatal message: F20230623 15:55:31.665747Z 3108 extent_store.cc:842] Stargate initialization not completed for 900 secs The Stargate INFO-level logs will show that it is waiting for "current_usage_" to initialize, and this will keep repeating for 15 minutes (900 secs) until the initialization timeout. I20230623 15:40:42.206697Z 3120 data_WAL.cc:1072] Waiting for current_usage_ to initialize This can happen if there was a lot of backup data stored on the Cloud Connect CVM and then it was all deleted at once. This can be verified by checking the storage pool usage: nutanix@NTNX-A-CVM:~$ ncli sp ls-stats It shows that it has 20+ TiB of used space. However, looking at the S3 data disk, it is empty: nutanix@cvm:~$ du -hs /home/nutanix/data/stargate-storage/disks/539757849130232697-7338896213879639274-8/* During initialization, Stargate tries to delete the stale episodes but is unable to do so within 15 minutes and fails with a timeout.
In such a case, the Stargate initialization timeout can be increased with a gflag. This solution is only for Cloud Connect CVMs on AWS or Azure; do not apply it to on-prem CVMs. WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB 1071 https://portal.nutanix.com/kb/1071. --stargate_max_initialization_time_secs=2700
KB14549
Space Accounting - Storage Space Issue Warning and Alerts
This article outlines various ways to monitor if the cluster or some other component is running out of space. 
For other Space Accounting issues or topics not covered in this article, see KB-14475 https://portal.nutanix.com/kb/14475. Monitoring Storage Pool or Container usage is imperative. Crossing a certain threshold will either put the cluster into a critical state by compromising data rebuild during a node failure, or put the cluster into a Read-Only state, impacting incoming I/O and causing production to go down. The cluster cannot rebuild the data during a node failure if the storage pool usage has crossed the Resilient Capacity (it is unique for every cluster). This will impact Data Availability and Resiliency. The cluster will go into a Read-Only state if the storage pool or container usage crosses the 95% threshold. This will impact production as incoming writes are stopped. This article outlines various ways to monitor whether the cluster or some other component is running out of space.
On Prism Element, Data Resiliency Widget Click on Widget and refer to the Free Space component. If the Failure Tolerable is 0, the cluster does not have enough free space to rebuild data if any node goes down. It becomes 0 once the cluster crosses its resilient capacity. The Data Resiliency will get Critical at that time. On Prism Element, Storage Summary Widget On Home Page, this widget shows both Logical and Physical Usage on the cluster. It also shows the threshold past which the cluster cannot rebuild the failed node. The threshold can be Auto or Manually configured using the setting icon of the widget. The color of the usage bar will become Yellow if the usage is approaching the threshold and will become red once the threshold is crossed. Click the View Details section to show more granular information: On Prism Element, Analysis Dashboard On the Analysis Dashboard https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-analysis-dashboard-wc-r.html, Nutanix Administrator can create an "Entity" chart for entity types such as Storage Pool or Storage Container and use the Metric "Logical Usage" or "Physical Usage". Creating such a chart will help understand if the usage is due to a sudden or gradual increase. Alerts Prism Element generates alerts if space usage crosses a certain threshold. Corresponding KB articles can help resolve the issue causing the alert. On Prism Central, Capacity Runway View Use the Capacity Runway tab in the Planning dashboard to view summary resource runway information for the registered Nutanix and Non-Nutanix clusters and to access detailed runway information about each cluster. Storage Runway displays the cluster runway for storage usage, i.e., the number of days before the storage runs out. Refer to this link for more information. https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-resource-planning-runway-view-pc-r.html Refer to the Resource Planning Page https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-resource-planning-pc-c.html on the Prism Central Guide to create scenario and get recommendations. Prism Central can analyze resource usage over time and provide tools to monitor resource consumption, identify abnormal behavior, and guide resource planning. https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-behavioral-learning-pc-c.html On Nutanix Insights, Storage Overview: Displays details on the storage space utilization (in GiB or TiB) and, if enabled, the resilient capacity of the cluster. Resilient Capacity is the capacity of the cluster that is able to self-heal from a failure beyond the configured threshold. Link to read more about it. https://portal.nutanix.com/page/documents/details?targetId=Portal-Help:sup-insights-cluster-details-view-sp-r.html#:~:text=Storage%20Overview%20Tab Cluster Wide Storage Usage Pattern can be seen for 24 Hours and 7 Days. This information is useful to understand whether there is an abnormal or gradual growth. Storage Health will show an alert related to Storage usage. 
[ { "Name": "Volume Group Space Usage Exceeded", "ID": "A101056", "Impact Type": "System Indicator", "Entity Type": "Volume Group", "KB Article(s)": "KB 4600" }, { "Name": "Storage Pool Space Usage Exceeded the Configured Threshold", "ID": "A1128", "Impact Type": "Capacity", "Entity Type": "Storage Pool", "KB Article(s)": "KB 11118, KB 2475" }, { "Name": "Cluster does not have enough free space to tolerate a node failure", "ID": "A101047", "Impact Type": "System Indicator", "Entity Type": "Storage Pool", "KB Article(s)": "KB 1863" }, { "Name": "Data Disk Space Usage High", "ID": "A1005", "Impact Type": "Capacity", "Entity Type": "Node", "KB Article(s)": "KB 3787" }, { "Name": "File Server Space Usage High", "ID": "A160000", "Impact Type": "Capacity", "Entity Type": "File Server", "KB Article(s)": "KB 8475" }, { "Name": "File Server Space Usage Critical.", "ID": "A160001", "Impact Type": "Capacity", "Entity Type": "File Server", "KB Article(s)": "KB 8475" }, { "Name": "Storage Container Space Usage Exceeded", "ID": "A1130", "Impact Type": "Capacity", "Entity Type": "Container", "KB Article(s)": "KB 2476" } ]
KB14005
Prism Central - Category or category value must start with a letter
When creating a category or category value in Prism Central, the name must start with a letter and then letters, numbers, underscore, dot or dash.
When trying to create a category or category value in Prism Central that begins with a number, the following error message is displayed in the UI: This value must start with a letter and then letters, numbers, underscore, dot or dash.
If a customer has a requirement to create a category or category value that begins with a number, the nuclei command line can be leveraged to create the entity.The category can be created with the following command: <nuclei> category.create_key name=1234 The category value can be created with the following command: <nuclei> category.create_value name=CostCenter value=1234 ENG-514304 has been filed to address this limitation and at the time of writing is targeted to be fixed in pc.2023.3.
KB15544
[NKE] Use an existing Volume Group to create a PV and PVC with CSI in NKE
The KB describes the procedure to use existing volume group and create PV and PVC to be consumed by a pod
This KB describes how to use an existing VG and create a PV and PVC in NKE to be used by a pod in Kubernetes. If you have an existing Volume Group that needs to be attached to a pod running in an NKE cluster, it can be used to manually create a Persistent Volume and Persistent Volume Claim that can then be managed via CSI automatically. This covers two use cases: When the VG has existing data that needs to be used by a pod. When the PV/PVC were deleted while the ReclaimPolicy was set to Retain, the VG with data is still present and can be used by another PV/PVC.
Steps:1. Find the Volume Group details from the Prism Element CVM cli nutanix@CVM~$ acli vg.get test-import 2. Create a PV and PVC definition. apiVersion: v1 Make sure to add the volumeName field in PVC definition and use the same name for the PV creation. apiVersion: v1 The name of PV must match the volumeName field in PVCObtain the nutanix secret name from kube-system namespace in the cluster kubectl get secret -n kube-system Change the fsType to xfs or ext4. Configure chapAuth ENABLED or DISABLED as configured in the storageclass definition.Add description if needed <storage class name>, PVC: <PVC-name>, NS: <namespace>Copy the VG iqn obtained from the acli vg.get command and add "-tgt0" to it.In the volumeHandle field, add the VG UUID to "NutanixVolumes-" 3. Create the PVC and PV. kubectl apply -f pv-pvc.yaml The PVC should be in bound state with the PV and can be used by a pod.
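For reference, the values that the PV definition needs can be derived from the VG shown in step 1; the field names below reflect typical acli vg.get output, and the full PV/PVC YAML layout should be taken from the Nutanix CSI documentation for pre-provisioned volumes:
nutanix@CVM:~$ acli vg.get test-import | egrep 'uuid|iscsi_target_name'
# volumeHandle in the PV spec -> "NutanixVolumes-<uuid value from the output above>"
# target IQN used by the PV   -> "<iscsi_target_name value from the output above>-tgt0"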
KB16200
Nutanix Files - Files on NC2 Azure - FSVM Scaleout Failing With Container Does Not Exist Error
Files on NC2 Azure, FSVM scaleout is failing with "File /NTNX_azrntnx-files01_ctr/4.4.0.1 does not exist" error
Files on NC2 Azure: FSVM scaleout is failing with an "AFS image does not exist on the files container" error. This KB includes troubleshooting steps and the workaround. Note: Check out KB 13309 http://portal.nutanix.com/kb/13309/ for high-level information on NC2 on Azure and tunnel access. You can also reference KB 15194 https://portal.nutanix.com/kbs/15194/ for the different remote tunnel options for Azure and AWS. ACCESSING THE CLUSTER via SSH The customers are completely blocked from accessing the Cluster/CVMs via CLI. This includes SSH, opening the CVM console from PE, opening the Nutanix Remote Support Tunnel, accessing the AHV hosts via SSH, SSH from Prism Central (PC) to CVMs, etc. Any and all external sources are blocked; therefore, do not make requests to the customer that require them to access the CLI. The only method to access Azure clusters' CVMs/Hosts is via the MCM tunnel (Teleport). https://confluence.eng.nutanix.com:8443/display/SW/SSH+Azure+Clusters+-+Teleport The customer can still access the PC VMs via SSH to do any troubleshooting needed via PC. Troubleshooting Steps: Confirm that the Files Manager, File Server and FSM are at 4.4.0.1 or later. Note: In case of a Scale-out Prism Central, you need to determine the Files_Manager leader and investigate the logs accordingly: nutanix@PCVM:~$ files_manager_cli get_leader From the Prism Central CLI, the files_manager_service.out log shows a task failed error as shown in the following example: nutanix@PCVM:~$ ~/data/logs/files_manager_service.out The FSVM scaleout task details via "ecli task.get <task uuid>" from the PCVM CLI show that the VmDiskCreate path failed with an "InvalidArgument: File /<<container name>>/<<image>> does not exist: 6" error because the 4.4.0.1 raw image is missing in the container, as shown in the following example: <ergon> task.get 0454b5e4-c54b-4085-694f-66a625e3c8e0
Workaround: Manually download the 4.4.0.1 image. Steps to download 4.4.0.1 to the Nutanix Files container: 1. From the CVM CLI, verify AFS 4.4.0.1 software availability as shown in the following example: <ncli> software list software-type=afs name=4.4.0.1 2. Download the AFS 4.4.0.1 software as shown in the following example: <ncli> software download software-type=afs name=4.4.0.1 3. Once the AFS 4.4.0.1 software download is completed, it should be available for upgrade as shown in the following example: <ncli> software list software-type=afs name=4.4.0.1 At this point, you can proceed to perform the FSVM scaleout operation.
KB12921
AHV VLAN (VM Network) may not be visible in Projects on Prism Central (PC) UI
One or more AHV VM Networks, a.k.a. VLANs, may not be visible for Projects in PC UI
One or more VLANs may not be visible when creating or updating Projects in Prism Central (PC) UI. Rule out any client-side browser issues by doing a 'Force Refresh' of the page and/or clearing the browser cache and reloading PC UI.Confirm the VLAN(s) (VM Network) is correctly configured as per the AHV Administration Guide https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v5_20:AHV-Admin-Guide-v5_20 and visible on the AHV PE cluster.Confirm the missing VLAN is visible elsewhere in PC UI and only missing within Projects in PC: For example, check PC > Entities Menu > Network & Security > Subnets.Confirm there is no PE-PC communication issue. Refer to KB-6970 https://portal.nutanix.com/KB/6970 for further details.Confirm that the total number of VLANs spanning all clusters registered to the PC is <=500.
If client-side browser and PE configuration show no issues, the VLANs missing in PC Projects are visible in PC Subnets page, and there is no confirmed PE-PC communication issue: Collect the following information Project information (via PC CLI): nutanix@PCVM:~$ nuclei project.list VLAN info from both PC CLI: nutanix@PCVM:~$ nuclei subnet.list sort=name and PE CLI: nutanix@CVM$ acli net.list Collect a Logbay bundle from the PC, and a bundle from a PE where the VLAN(VM Network) is configured. Refer to KB6691 - NCC - Logbay Quickstart Guide https://portal.nutanix.com/kb/6691Contact Nutanix Support https://portal.nutanix.com/ and provide the information collected in steps 1 and 2. To confirm the total number of VLANs across all clusters registered to the PC: nutanix@PCVM:~$ nuclei subnet.list | grep Total Example output: nutanix@PCVM:~$ nuclei subnet.list | grep Total
KB14879
Clusters with Prism Central Disaster Recovery (PCDR) configured could experience PC-PE sync issues on upgrading the target PCDR cluster to AOS 6.5.3
Clusters with Prism Central Disaster Recovery (PCDR) configured could experience PC-PE sync issues on upgrading the target PCDR cluster to AOS 6.5.3
Clusters with Prism Central Disaster Recovery https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-cluster-pcdr-introduction-pc-c.html (PCDR) configured could experience PC-PE sync issues on upgrading the target PCDR cluster (Prism Element) to AOS 6.5.3.Note: AOS versions 6.5.0, 6.5.1.x, 6.5.2.x, 6.5.3.1, 5.20.x, and 6.6.x are not affected by this issue.Identification: Prism Element is upgraded to AOS 6.5.3Prism Element running AOS 6.5.3 is selected as a target for Prism Central Disaster Recovery http://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-protection-pcdr-pc-t.html feature. To check this: PCVM Settings > Prism Central Management page to verify if PE is selected as the PCDR target.Multiple tasks are stuck on Prism Central in ecli task.list command output. Stuck tasks may include tasks related to LCM, catalog, anduril, acropolis. nutanix@PCVM:~$ ecli task.list include_completed=false PE-PC communication is working fine. Refer to KB-5582 https://portal.nutanix.com/kb/5582to verify PE-PC communication.Insights_server log on PCVM shows "Sync barrier is not satisfied yet for entity" errors. Below command can be used for that: nutanix@PCVM:~$ allssh 'egrep -i "Sync barrier is not satisfied yet for entity" ~/data/logs/insights_server.INFO*' Tasks may accumulate and not be completed on PC ("kRunning" state) but checking at PE Cluster, the tasks could be completed ("Succeeded" state): nutanix@PCVM:~$ ecli task.get 20f60fbe-d8f2-420b-af72-3afac6d811e7 nutanix@CVM:~$ ecli task.get 7627251e-e26f-4d80-bcdf-693867103620 Note: To identify the PE cluster, get the "Cluster_UUID" from the Task at the PC VM and look for it in the output of the following command: nutanix@PCVM:~$ ncli multicluster get-cluster-state In addition to the aforementioned symptoms, if the customer attempts a PCDR recovery on the PE cluster with AOS 6.5.3 , it could result in failure as illustrated below : PC recovery task will fail in PE due to timeout: ecli task.get 48b28801-39d6-4bb6-5ddc-d90180f3dbd7 The recovery tasks are running/queued on the PC: Task UUID Parent Task UUID Component Sequence-id Type Status Prism Central UI login will fail. Not all the services on Prism Central will be running: nutanix@NTNX-A-PCVM:~$ genesis status The logs from ~/adonis/logs/prism-service.log will show that the recovery task RestoreIDF is stuck at SUBTASK 2, STEP 2: 2023-08-04 09:05:18,162Z INFO [RestorePCScheduler1] [,] RestoreIdfDataSubtask:execute:72 SUBTASK 2: Starting restore IDF data from replica PE subtask.
If you are experiencing the above symptoms on the clusters running the affected AOS version, contact Nutanix Support http://portal.nutanix.com.This issue is fixed in AOS 6.5.3.1.Note: If you disabled PCDR as part of the workaround for AOS 6.5.3 clusters, upgrade to AOS 6.5.3.1 before re-enabling PCDR.
KB5046
New SSD may not get added to the CVM due to residual RAID superblock
This issue occurs if the SSD was previously used as a boot SSD in another CVM and wasn’t cleaned before being installed on another node.   Such SSDs may have leftover software RAID superblock.
Symptoms: New SSD fails to get added to the cluster. Hades fails to partition the disk. hades.out shows that partitioning the disk failed after creating the first partition: 2017-11-29 20:52:19 INFO disk_manager.py:5140 Setting operation user_disk_repartition_add_zeus on disk /dev/sdb with serial BTHC606302SD1P2OGN hades.out may have the "unsupported linux_raid_member" error as well: 2019-04-02 13:59:20 ERROR disk_manager.py:1682 Disk /dev/sdk has a partition /dev/sdk1 with unsupported linux_raid_member file system type /proc/mdstat shows the new SSD is added to different software RAID device. ~$ cat /proc/mdstat
WARNING: Improper usage of the disk_operator command can result in data loss. If in doubt, consult with a Senior SRE or an STL. Cause: This issue occurs if the SSD was previously used as a boot SSD in another CVM and wasn't cleaned before being installed on another node. Such SSDs may have a leftover software RAID superblock. When Hades tries to partition the disk, it fails after creating the first partition as soon as the software RAID detects the superblock. The RAID superblock is a data structure on the disk stored at the beginning of the standard locations that Hades will create partitions at - so if it has not been cleaned when the SSD was prepared to be used as a replacement, the old software RAID metadata is detected when the new partition table entries are created. (More information regarding this superblock type is discussed here: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats) Solution: Stop the inactive md127 device that was reported in /proc/mdstat: nutanix@cvm$ sudo mdadm --stop /dev/md127 Check /proc/mdstat and/or the hades.out logs to ensure the md127 device has been removed. 2017-11-29 22:09:17 INFO disk_manager.py:4945 No disk update is required in hades proto Stop the Hades service. nutanix@cvm: sudo /usr/local/nutanix/bootstrap/bin/hades stop Before wiping the superblock in the next step, make sure the disk device for the new SSD has not unexpectedly been mounted (e.g. /dev/sdb). This would be abnormal but is an important precaution. nutanix@cvm: df -h | grep /dev/sdb << if the appropriate device for the new SSD is /dev/sdb - then /dev/sdb1 should not be listed at this point as a mounted device Run the following command to wipe out the RAID superblock: nutanix@cvm: sudo mdadm --zero-superblock /dev/sdbX << Replace sdbX with the appropriate device partition for the SSD in your case. If there are still any existing partitions on the disk, delete the partitions. Any existing partitions may prevent the above "zero-superblock" command from succeeding. nutanix@cvm: sudo parted -s /dev/sd[Letter] --list Verify there are no partitions on the disk. Example: nutanix@cvm: sudo parted /dev/sdb p Reboot the CVM (after confirming the cluster is fault tolerant). nutanix@cvm$ cvm_shutdown -r now Check whether the new SSD is detected and mounted by running the df command on the CVM after the reboot. If the new SSD is not added, add the SSD by clicking the 'Repartition and Add' button in the Prism UI. If the disk does not get added to the cluster, root cause why the disk was not added to the cluster from Prism. If the new SSD is not added via the Prism UI (wait a few minutes to see the result), then run the following command to repartition the appropriate SSD device. nutanix@cvm$ disk_operator repartition_disk /dev/sdX boot Example: nutanix@cvm$ disk_operator repartition_disk /dev/sdb boot If you had to use the disk_operator command, then you will also need to restart the hades and genesis services in order for the new SSD data partition to be used by Stargate. Wait for hades and genesis to restart, and then re-check that the list of disks being used by Stargate now includes the new SSD mounted path (e.g. below, the new SSD SN is BTHC60xxxxxxP2OGN). nutanix@cvm$ sudo /usr/local/nutanix/bootstrap/bin/hades restart
KB6388
Nutanix Files - Preupgrade Check failure “Upgrade failed due to internal error”
The following KB explains the reasons for errors when pre-upgrade checks are executed at the time of a Nutanix Files upgrade and also basic troubleshooting to help to resolve issues yourself before contacting Nutanix Support.
Nutanix Files Pre-Upgrade checks fail stating following error message: Failed to reach FileServer Or: Upgrade failed due to internal error: upgrade You might see a MinervaException or internal error which might occur because of an unreachable FileServerVm from Nutanix Controller VM. The issue might occur because of the following. Network issues (generic)Unstable services within the file serverNutanix files node (fsvms) might not be up or accessible.Network Segmentation enabled (upgrade or new install scenarios) The alert section displays File server unreachable warning message similar to the following alerts: Title: File server unreachable check Title: File Server upgrade failed
Troubleshooting Nutanix recommends executing the file server health check. ncc health_checks fileserver_checks fileserver_cvm_checks file_server_status_check If the check fails, refer to KB 3691 https://portal.nutanix.com/kb/3691. Log on to one of the Nutanix Files FSVMs using the internal IP address and verify the status of the services by using the cluster status command. FSVM:~$ cluster status If any services are marked as down, execute cluster start to bring those services online. FSVM:~$ cluster start If the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/.
KB3320
USB passthrough support on Nutanix platforms
Does Nutanix support USB passthrough?
Does Nutanix support USB passthrough?
Nutanix does not support USB passthrough on any platform or hypervisor. There are no plans to support USB passthrough on Nutanix platforms due to security concerns and because it affects features such as live migration, HA, 1-Click upgrades and more. Alternative If you need a guest VM to access USB devices, using AnywhereUSB is a valid solution. This solution uses a USB hub accessible over TCP/IP. A driver is installed on the VM, and it emulates a USB device connected to the hub as if it were directly attached to the VM. This method is well known to work and does not affect the features mentioned above. See the Digi site for further details: www.digi.com/products/usb-and-serial-connectivity/usb-over-ip-hubs/anywhereusb http://www.digi.com/products/usb-and-serial-connectivity/usb-over-ip-hubs/anywhereusb
KB13736
Nutanix Files - Files SMB fs admin role is not dynamically updated
Updates to the FS admin role in Prism are not dynamically reflected unless the user is logged off or SMB session is closed
Updates to the FS admin role in Prism are not dynamically reflected unless the user is logged off or the SMB session is closed. FS admin roles can be added or removed in Prism; see the portal documentation https://portal.nutanix.com/page/documents/details?targetId=Files-v4_3:fil-file-server-admin-permissions-t.html. One such example is where the share is still accessible even after removing the domain user from the Files admin role. Example: 1) User PS1 is an FS admin. 2) The ACL was updated on an SMB share via MMC and set to no access. 3) Then PS1 was removed from FS admin. However, the share was still accessible, and the PS1 user was able to read/write files on the share as long as the SMB connection remained active. /home/log/samba/clients-x.log: 2022-09-01 10:15:34.523865Z 2, 197826, uid.c:283 check_user_ok To resolve the issue, it is recommended to log off the affected user or close the SMB session.
Workaround Log off the current user session and log in again. OR Run the below command from an FSVM to kill the SMB session (see KB-7491 http://portal.nutanix.com/kb/7491): fsvm$ afs smb.close_user_sessions "domain\username"
KB13244
Nutanix Files - Partially Unjoined state
Nutanix Files - Partially Unjoined state
Nutanix Files might get stuck in a Partially Unjoined state when the Files server is unjoined from the domain and rejoined again as part of maintenance or based on customer requirements. The options in the File Server UI are greyed out, leaving no option to leave and join the domain again. Symptoms: 1. The Files server will report an Undetermined error when testing the join (testjoin): nutanix@FSVM:~$ sudo net ads testjoin -P 2. The health check reports that the Files server is not joined to an AD domain: <afs> smb.health_check 3. Checking nameservice_proto will not report any AD domain information: nutanix@FSVM:~$ afs 4. *** Only if all the below conditions match, proceed to follow the workaround: i) The file server will be in a down state; however, all the FSVMs will be up and running with all the services. The following error will be received when running the minerva ha check: ERROR: Failed to determine HA state ii) The zpool status check will show "no pools available". Running the command "allssh sudo zfs list" will show the below output: no datasets available iii) "afs vg.show_status" will show all the zpool statuses as good. None of the FSVMs will show the eth interface which should have the data services IP; however, the IP will be reachable from the FSVMs. iv) Restarting the minerva service will correct the issue for some time, but it will go into the same state again. v) minerva_nvm.log will show the below error: 2023-07-05 09:44:36,611Z ERROR 39314288 quota_manager_util.py:49 Winbindd is not ready: vi) allssh 'zkls /appliance/logical/minerva/' will show the below error: Zookeeper error: no node vii) minerva_ha_monitor.log will show the ha service in a crash loop: 2023-07-05 10:17:21.005609 HA service crashed. Interval = 109 start_time = 1688552131 end_time = 1688552241
Workaround: This issue will be fixed in Nutanix Files 4.3. Follow the below steps as a workaround. DO NOT use Force Unjoin unless it matches the exact symptoms where the customer has already tried rejoining the domain and it is stuck in a partial state. Engage Engineering through an ONCALL if the symptoms do not match. 1. Force unjoin the domain from the command line: nutanix@NTNX-***-**-***-**-A-FSVM:~$ afs 2. For symptom 4: even if the above process ends in error, it will still clear the stuck task. Proceed to restart the ha service. This step should be skipped for the other symptoms. allssh 'genesis stop minerva_ha; cluster start' 3. Join the domain by selecting the "overwrite existing computer accounts" option from the Nutanix Files UI.
KB8174
NX-1175S-G6:- Foundation stalls due to disabled NIC
This article explains the limitations and issues noticed specific to NX-1175S-G6 hardware platforms.
Issues specific to NX-1175S-G6: Foundation stalls on the NX-1175S-G6 platform; Ethernet device not found. Scenario 1: Foundation stalls on the NX-1175S-G6 platform. Foundation on the NX-1175S-G6 platform using the available 1Gb ports may stall because the onboard NICs do not seem to be working. This NIC is disabled by design. The Foundation web interface shows no progress for the task "Mounting installer media": The host's IPMI console reads the following when performing Foundation: Starting interface eth0 AND ixgbe 000:b3:00.0 eth0: NIC Link is Down Scenario 2: Ethernet device not found. For the NX-1175S-G6 nodes, the onboard NICs do not seem to be working. You cannot see the eth2 and eth3 interfaces when running the command below (in the case of AHV): root@nutanix-network-crashcart# ovs-vsctl show If you manually try running "ifup" for these interfaces, it errors out with "eth2: Device not found".
Scenario 1: Foundation stalls on the NX-1175S-G6 platform. Only the 10Gb optical ports can be used to perform Foundation on the NX-1175S-G6 platform. The 1Gb metallic ports were intentionally capped with a plastic cover so that the customer is not able to use them. Refer to the official NX-1175S-G6 hardware documentation for more details. Note the labels "Disabled Ports" and "Do not use" on the picture from our official documentation. Scenario 2: Ethernet device not found. There are two 10GBaseT LOM ports present on the NX-1175S-G6 platform, but the Nutanix BIOS disables both of these ports because of driver/software support limitations. To access the BMC, the only option is to use the IPMI port. root@AHV # lspci | grep -Ei 'nic|network|ethernet' Note: For more about the system specification of the NX-1175S-G6 platform, refer to the G6 System Specification documentation available on the portal here https://portal.nutanix.com/#/page/docs/details?targetId=System-Specs-G6-Single-Node:har-node-naming-nx1175sg6-r.html.
KB15342
After incorrect shutdown of Prism Central during upgrade the error message "Failed to reach node where Genesis is up" gets displayed
Due to customer-initiated incorrect shutdown of the PCVM during upgrade, Genesis gets into a crash loop indicating an issue with the file /home/nutanix/.genesis_restart_required_path being invalid.
This affects both single/scale-out instances of PC where after an upgrade, you may see error messages such as "Ping request failed.', 'reason': 'REQUESTS_CONNECTION_ERROR". Or if you find that a Genesis restart is not working properly, resulting in an output such as: 2024-06-19 00:06:09,336Z rolled over log file The issue is similar to the one described in KB-13235 https://portal.nutanix.com/kb/13235, but any attempt to run a ZK command locks the session with no result.You can encounter this scenario after a customer resets/reboots a PCVM instance during an upgrade.
DO NOT PROCEED FURTHER WITHOUT COLLECTING THE BELOW-MENTIONED LOGS This issue was observed on ONCALL-12963 and ONCALL-8897, where the logs were inconclusive. Please collect the below logs from the active partition and attach them to ENG-482650 for further RCA: /tmp/* The condition is caused by a version mismatch between /home/nutanix/.genesis_restart_required_path and /appliance/logical/genesis/release_versionYou can resolve this by moving the ephemeral genesis_restart_required_path file and letting the PCVM recreate it. mv /home/nutanix/.genesis_restart_required_path /home/nutanix/.genesis_restart_required_path.bak Once the mismatch is resolved, you can restart Genesis as usual. nutanix@PCVM:~$ genesis restart Important: Be sure to remove /home/nutanix/.genesis_restart_required_path.bak after you are sure that services are up and working as normal This will continue the upgrade process and reboot the PCVM itself, thus completing the upgrade task on Prism Central. The upgrade process will cleanup and remove the newly created /home/nutanix/.genesis_restart_required_path. This is expected. In some scenarios, not all services can come up due to docker daemon not running as seen below when checking cluster status: CVM: 10.xx.xx.xx Up, ZeusLeader Verify the status of the docker.service with the following command: sudo service docker status If the docker.service is Active: failed or Active: dead please attempt to start the docker.service with the following command and verify if it is Active: active (running) sudo service docker start Once the docker.service is actively running you can perform cluster start to bring up the remaining services and the upgrade process will continue as normal. cluster start
KB3842
Host hung issues as per Internal Support Bulletins
null
Internal Support Bulletins or ISBs are a means of providing timely information on various noteworthy support issues. These bulletins are not to be distributed outside of Nutanix, although the information within them will be of benefit to Nutanix Support while handling customer cases.
Nutanix G5 nodes based on Intel Broadwell architecture might crash unexpectedly due to an Intel microcode bug: From FA#43 http://download.nutanix.com/alerts/Nutanix-Field-Advisory_0043.pdf or ISB-018 https://docs.google.com/document/d/1Opt_CmY0B4r0qBWjSCZOVitctVsFkDWKhUEYOk0GRL8/edit. List of KBs below will help with remediation steps: KB-1262 http://portal.nutanix.com/kb/1262 for checking BMC/BIOS version KB-2896 http://portal.nutanix.com/kb/2896 for manually upgrading BMC KB-2905 http://portal.nutanix.com/kb/2905 for manually upgrading BIOS KB-3692 http://portal.nutanix.com/kb/3692 for 1-click BMC/BIOS upgrade support Nutanix nodes based on Haswell architecture experience host lockups. From FA#34 http://download.nutanix.com/alerts/Nutanix-Field-Advisory_0034-v3.pdf: Resolution: Upgrade BMC & BIOS KB-1262 http://portal.nutanix.com/kb/1262 for checking BMC/BIOS version KB-2896 http://portal.nutanix.com/kb/2896 for manually upgrading BMC KB-2905 http://portal.nutanix.com/kb/2905 for manually upgrading BIOS KB-3692 http://portal.nutanix.com/kb/3692 for 1-click upgrade BMC and BIOS Release Note https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS:Release-Notes-BMC-BIOS ESXi Host Is Unresponsive - G4 Haswell Specific Related doc including resolution: KB-2593 http://portal.nutanix.com/kb/2593 Host lockups issue caused by degraded 3L-3ME satadom Described in: ISB-026 https://docs.google.com/document/d/1Vo24kcu9cdUY7lFd1TOGXT19FS271i3P9hR4nL9rF4Q/edit, ENG-69318 http://jira.nutanix.com/browse/ENG-69318, ENG-47799 http://jira.nutanix.com/browse/ENG-47799 Use below way to determine satadom model in ESXi host: [root@esxi]# esxcli storage core device list | grep -A 3 Path You can also execute this within any CVM to get SATADOM info from all ESXi host in Nutanix cluster: nutanix@cvm$ hostssh "esxcli storage core device list | grep -A 3 Path"
KB9848
Nutanix Self-Service - Project creation failed error "user_capability mandatory field: username is missing"
Project creation failed with users from OpenLDAP added during project creation.
Nutanix Self-Service (NSS) is formerly known as Calm. While trying to create a project using NSS, a user is getting listed, however when adding and saving a project, it fails with the below error: user_capability mandatory field: username is missing Checked and confirmed that the configured authentication type is OpenLDAP. nutanix@cvm$ ncli authconfig ls-directory
Currently, NSS only supports Active Directory. Nutanix is working to support other LDAP directory types in future releases.
KB6495
Nutanix VirtIO drivers for Windows Server 2019
The Nutanix VirtIO 1.1.5 driver package contains Windows Hardware Quality Lab (WHQL) certified driver for Windows Server 2019.
Nutanix cluster administrators deploying Windows Server 2019 as a guest operating system in AHV may not find signed drivers for that operating system in the Nutanix Guest Tools (NGT) or Nutanix VirtIO driver packages.
Starting from Nutanix VirtIO 1.1.5, the driver package contains a Windows Hardware Quality Lab (WHQL) certified driver for Windows Server 2019. Download the latest driver package from the VirtIO downloads https://portal.nutanix.com/page/downloads?product=ahv&bit=VirtIO page of the Nutanix support portal. The AHV Administration Guide https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:vm-vm-virtio-ahv-c.html describes how to install the Nutanix VirtIO package.
KB13828
SPP/NIC upgrade fail by LCM 2.5 node stuck in the Phoenix
HPE node stuck in Phoenix while performing an SPP upgrade from LCM.
SPP/NIC firmware upgrade fails with the node stuck in Phoenix. Since the high-level symptoms of a node stuck in Phoenix can match several conditions, ensure that ALL observations match, as this issue is specific to a customer's network environment with different VLANs in two different bonds. The following logs were collected while the node was stuck in Phoenix during an SPP upgrade from LCM. genesis.out 2022-09-26 09:10:58,371Z INFO 65952752 ha_service.py:345 Discovering whether storage traffic is currently being forwarded on host 192.168.50.103 lcm_ops.out 2022-09-26 12:28:07,148Z INFO lcm_ops_by_phoenix:2054 (192.168.50.14, kLcmUpdateOperation, 49ef3966-3135-479b-6d41-4d80af7251ae, upgrade stage [1/2]) Upgrade_status for release.hpe.dx.firmware_entities failed with (255, u'', u'FIPS mode initialized\r\nssh: connect to host 192.168.50.14 port 22: No route to host\r\n') In the sequence of events, the upgrade starts normally and proceeds to boot the node into Phoenix, where there is connectivity back to the leader, in this example the lcm_leader .13: Then, with the node booted into Phoenix, after approximately 5 minutes the firmware upgrade starts, as shown on the screen: In the screenshot below, the firmware upgrade completed successfully, with initial iLO version 2.55, and after completion iLO shows the target version 2.65. However, at that point the node is stuck in Phoenix. The node needs to be manually rebooted back to the hypervisor with: python /phoenix/reboot_to_host.py After a new LCM inventory is run, SPP upgrades can still be visible for the nodes: Confirm this sequence of events in the logs, where the lcm_leader can talk to the node in Phoenix at the start of the process: 2022-09-29 10:56:27,351Z CP Server Thread-8 log.log:104 INFO: phoenix_callback: greetings from phoenix
In this customer environment, CVMs are configured with br0 containing eth0 + eth1 and br1 containing eth2 + eth3, which are on different VLANs. All interfaces have link status up. After the iLO reset, eth0 and eth1 flap for about 2 seconds, which can be seen in dmesg: [Thu Sep 29 11:00:36 2022] i40e 0000:14:00.0 eth0: NIC Link is Down This makes eth2 the new active interface, which cannot connect to the CVM leader as it is cabled to a physically different subnet. Even when eth0/1 come back online, they are not reconfigured as the active interfaces at that stage, since eth2 already has LINK UP. The NIC flap on eth0/1 during the iLO reset causes traffic to be handled by another interface that has link status up, which in this case cannot talk to the lcm_leader. After that, foundation/phoenix is not able to correctly detect which interface to use for traffic, irrespective of the link status. If the LCM task is still running, follow these steps to complete the LCM upgrade: match the MAC addresses of the NICs connected to the CVM network and bring down all the other interfaces (which do not have any connectivity to the CVMs) in Phoenix, as shown in the sketch below. This will restore connectivity and the upgrade will complete successfully if LCM has not timed out. If the upgrade has timed out or failed, use the lcm_node_recovery script to bring the node back up. The SPP upgrade will complete successfully, but the version will not change to the updated version because LCM had timed out. In this case, use KB-11979 https://portal.nutanix.com/kb/11979 to change the SPP version manually. Note: After changing the network configuration and VLAN, using the same br0 and br1 configuration, the customer will be able to upgrade the SPP on all the nodes.
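To identify which interfaces to keep and which to bring down while the node is booted into Phoenix, a minimal sketch using standard Linux tooling (the interface names eth2/eth3 are only examples; match them against the MAC addresses recorded for the CVM-facing NICs):
[root@phoenix ~]# ip -br link show          # list interfaces with MAC addresses and link state
[root@phoenix ~]# ip link set eth2 down     # bring down an interface with no path to the CVM network
[root@phoenix ~]# ip link set eth3 down
Only bring down interfaces that are confirmed not to carry CVM/LCM-leader traffic; once connectivity to the leader is restored, the upgrade can proceed.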
KB12570
Nutanix Files - Simultaneous access to same files in Multi-protocol share
This article helps in enabling simultaneous access to same files from both protocols in a multi-protocol share
Nutanix Files multi-protocol shares support both SMB and NFS clients with read and write access to the same share. On enabling "Simultaneous access to same files from both protocols" on a multi-protocol share, the following conditions apply: When an SMB client is performing a Read operation on a file inside a share, both NFS and SMB clients can simultaneously read the same file. When an NFS client is performing a Read operation on a share, both NFS and SMB clients can simultaneously read the same file. When an SMB client is performing a Write operation on a share, an NFS client cannot perform a simultaneous Read or Write operation. When an NFS client is performing a Write operation on a share, an SMB client cannot perform a simultaneous Read or Write operation. Having simultaneous Write access from both protocols on a share is not supported. However, when one protocol has a Write lock over a file, simultaneous Read access can be enabled for the second protocol by editing the share settings.
To enable simultaneous Read access from the second protocol for a share while the first protocol is performing a Write, follow the steps below: 1. SSH to any Controller VM and run the below command to list the File Server VM internal IPs: nutanix@NTNX-CVM:~$ afs info.nvmips 2. SSH to any one of the internal IPs and run the command to set simultaneous Read access for the share (for example, for a share named “smb-nfs-mp”). Once done, the second protocol should be able to read the file while the first protocol is performing a write operation. 3. To validate the change, use the following command: nutanix@NTNX-FSVM:~$ afs share.list sharename=smb-nfs-mp
KB4869
NCC sata_dom_wearout_check might fail if the SATADOM device smart monitoring status is set to disabled in BIOS
null
NCC may trigger a false positive for wear level on 3IE3 SATADOM models. Before replacing the SATADOM based on smartctl data from the device, please verify if there are any vsish errors and SMART errors. Below is an example for "vsish error" and "smart error" where the ESXi smart utility is unable to read the disk (SATADOM) details: [root@nhmlesxen02:~] ./iSMART -d t10.ATA_____SATADOM2DSL_3IE3_V2______________________BCA11608020150103___ -v 2 From the commands folder in ESXi host (vm-support bundle) Failed to get SMART stats for t10.ATA_____SATADOM2DSL_3IE3_V2______________________BCA11608020150103___ error CANNOT open device Sample output of SATADOM health status: nutanix@NTNX-CVM:~$ for i in `hostips`; do echo esx $i ; ssh root@$i 'esxcli storage core device smart get -d ` ls /dev/disks/|egrep -v ":|vml|naa"` | grep -i health' ; done
Nodes shipped from Nutanix generally have the SMART monitoring feature enabled in BIOS. If you see symptoms like the example in the problem description above, it means the BIOS settings for the SATADOM device have been changed from the factory-shipped defaults. While SMART monitoring can simply be re-enabled, we would still not know what else has been changed from the defaults in the BIOS/BMC of the affected node. As such, reset both the BIOS and BMC to the factory defaults, which will re-enable SMART monitoring for the SATADOM device. The steps below can be used to reset the BIOS and BMC. 1. Reset the BMC via command line (this resets the management console without rebooting the BMC) or from the IPMI page: 2. Reset the BIOS to defaults: (a) Monitor the power cycle operation from the remote console screen launched in Step 10. Once the Nutanix logo appears, press the Delete key from the remote console window continuously until Entering Setup is displayed at the bottom of the screen. The node now starts in the BIOS. Once in the BIOS, use the right-arrow key to go to Save & Exit. Use the down-arrow key to go to Restore Optimized Defaults and then press Enter. Select Yes and press Enter to Load Optimized Defaults. Press F4 to select Save & Exit and then press Enter to select Yes in the Save & Exit Setup dialog box. The node now restarts, and after the node restarts, the BIOS update is complete. At this point, you should be able to boot back into the node and confirm that the NCC checks now pass correctly.
KB16134
NDB Time Machine Operations Getting Abandoned
NDB Time Machine Operations Getting Abandoned
The driver-cli process, part of the Nutanix Database Service (NDB) agent, is responsible for fetching the work to be executed in the DB server. Upon receiving the work, driver-cli informs the NDB server that work is being processed, after which a separate process is started to perform the task itself. In NDB deployments 2.5.3 and above, the NDB agent is a long-running process. And if the NDB agent processes are not restarted cleanly (see RCA for ERA-33115 https://jira.nutanix.com/browse/ERA-33115), then the driver-cli will get killed every minute and a new driver-cli will be started in its place. And in certain situations (purely dependent on timing), it is possible the driver-cli is killed just before a separate process (async-driver-cli) is started for the task obtained from the NDB server. In this situation, we have the following status in the NDB repository: Once the NDB repository reaches this state, any new operation of the same category for the NDB VM gets abandoned. This will be mitigated automatically by the NDB operation monitor after the operation timeout. Problem symptom: The end-user will observe that Time machine operations - copy_logs - initiated by the NDB scheduler will not be executed. The Time machine SLA for persisting database logs in the log-drive will be breached. Details: The operation status in the NDB metadata will indicate the operation to be ABANDONED (status = 2). Furthermore, you may also find that there is at least one operation of the same type with status UNASSIGNED (status = 8), but, the corresponding status of the work (era_works_depot table) shows RUNNING (status = 1). This mismatch in the status between operation and work-depot means the issue is being hit. Problem identification: Connect to the NDB-Server. $ ssh era@<NDB-Server> Connect to NDB metadata. ndb-server$ sudo su - postgres Enter the following query to show if there are mismatches between work and operation status. era_repos=# SELECT * If there are mismatches, proceed to the Solution section. If the select statement returns no output, that means there is no issue.
Terms: NDB-Server => The NDB server. NDB-VM => The VM where the operations are supposed to run. Steps: This problem can be mitigated by following the steps below. Restart the NDB agent on the NDB-VM using KB 16133 http://portal.nutanix.com/kb/16133. Connect to the NDB-Server. $ ssh era@<NDB-Server> Connect to the NDB metadata. ndb-server$ sudo su - postgres Identify the operation ID that has a mismatched status as shown in the diagram above. era_repos=# SELECT ew.operation_id OPERATION_ID, For each OPERATION_ID listed in the above query, fail the operation. era_repos=# UPDATE era_operations SET status=4 where id=<OPERATION_ID1> Exit the NDB metadata shell. All new TM operations submitted against the NDB-VM should NOT get abandoned. era_repos=# \q
KB16383
My Nutanix Sign-up Issue for Cisco Users
Cisco users with the domain (Cisco.com) are encountering difficulties signing up for My Nutanix. The issue stems from the authentication process, which involves federation via Cisco's OKTA system. To address this, a phased approach is being implemented, granting users access to specific features gradually. Currently, Phase 1 provides access to Sizer, Collector, and Nutanix University, with Phase 2 focusing on expanding support for Deal Registration and Community Edition. Efforts are underway to facilitate smooth onboarding for Cisco users to My Nutanix.
Cisco users are encountering difficulties signing up for My Nutanix. These users, identified by their cisco.com domain, are intended to be onboarded to My Nutanix via federation. Authentication for these users is facilitated through Cisco's OKTA system. To address this issue, a phased approach is being implemented to grant Cisco users access to specific features within the My Nutanix dashboard.
If you encounter users with a Cisco domain who are experiencing sign-up issues for My Nutanix, please advise them as follows: Join Webex Teams Spaces for Assistance: Encourage users to join Webex Teams spaces dedicated to providing assistance. These spaces serve as platforms for real-time communication and support. Users can seek guidance and resolve any issues they encounter during the sign-up process. Access Sales Connect Site: Direct users to visit the following link on Cisco's Sales Connect site: https://salesconnect.cisco.com/DataCenter/s/products-and-solutions/compute-hyperconverged https://salesconnect.cisco.com/DataCenter/s/products-and-solutions/compute-hyperconverged On this site, users will find valuable resources related to compute hyperconverged solutions. Specifically, they can access links to join relevant Webex spaces and internal mailers. These resources are designed to provide additional assistance and support to Cisco users navigating the My Nutanix sign-up process.
KB12163
Nutanix Files: Unable to create a data protection policy for Nutanix Files SmartDR
Policy create is failing during remote share add.
Symptoms: Policy creation fails during the remote share add. The remote share add gRPC is able to establish a gRPC connection, but the replicator server fails to send a response back to the gRPC client. Destination traffic is not reaching the source, resulting in TCP retransmissions. From the pcap, we could see that the gRPC traffic is not going through, so the customer's networking team needs to be involved. nutanix@NTNX-A-FSVM:~/minerva/bin$ ./repl_cli get_remote_fs --target_fs_ip=10.xx.xx.xx --target_fs_uuid=ef964e79-8872-4ef2-8dd8-4467aae9bc6a NOTE: If you don’t see Failed: rpc error: code = Unavailable desc = transport is closing in any of the command responses, you may be hitting a different issue. Debugging so far: 1. Policy creation fails during the remote share add. 2. The remote share add gRPC is able to establish a gRPC connection, but the replicator server fails to send a response back to the gRPC client. 3. The connection between the source and target FS has ~200 ms latency, but we are unable to reproduce the issue with latency alone. Completed reproduction steps: 1. Create a golang CLI which attempts to send the same PreShareAdd gRPC from source to client. 2. Create a download link for the customer. 3. Verify the download link internally and verify the CLI commands work. 4. Download to the customer's FSVM and test in their environment over the tunnel.
Steps for the customer: 1. Log in to a CVM of the target PE cluster. 2. Log in to the 10.xx.xx.xx FSVM. 3. Execute repl_cli: ./minerva/bin/repl_cli get_remote_fs --target_fs_ip=10.xx.xx.xx --target_fs_uuid=ef964e79-8872-4ef2-8dd8-4467aae9bc6a If you see Failed: rpc error: code = Unavailable desc = transport is closing, the command has failed and the network is not letting through the packets required for gRPC communication. nutanix@NTNX-A-FSVM:~$ ./minerva/bin/repl_cli get_remote_fs --target_fs_ip=10.xx.xx.xx --target_fs_uuid=ef964e79-8872-4ef2-8dd8-4467aae9bc6a The following output is from a successful gRPC request & response: 0903 21:35:17.353451 203836 repl_cli.go:56] initializng insights
KB12381
VM update via Prism Central gives the error "spec_hash must be provided"
Updating a VM throws "spec_hash must be provided". This article discusses how to diagnose and fix such issues.
VM update task from PC gives the following error in Prism Central UI. Error processing your request Symptoms can include failure of any VM update tasks like adding VM to a category, changing name of the VM etc. Failure message in PC UI will be as mentioned above. The following trace will be seen in aplos.out (~/data/logs/aplos.out) on Prism Central VM when a VM is tried to add to a category. 2021-10-25 10:57:57,752Z INFO resource_intentful.py:775 Checks for non-auth kinds The issue can be verified by doing a blank update on any VM. It will give the following error signature. nutanix@PCVM:~$ nuclei vm.update <VM UUID> name=<Name-of-the-VM> Null value can be seen in the VM intent spec for the fields "task_uuid" and "dirty" nutanix@PCVM:~$ nuclei diag.get_specs kind_uuid=<VM UUID> spec_json=true
The issue is caused by a corrupt intent spec. This can happen if ergon tasks for a VM update were manually interrupted (for example, by force-stopping VM update tasks). Because the issue is in the intent spec of the VM, any update attempt against the intent spec, such as changing the VM name, adding a disk, or adding the VM to a category, will be affected. Recreate the intent spec to fix the issue. Steps: Execute the following command to clean up the spec. nutanix@PCVM:~$ nuclei diag.cleanup_aplos_metadata kind_id=<Affected VM UUID> Execute the required VM update task. Now the VM update tasks (like adding a VM to categories) will work fine.
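To confirm that the cleanup took effect, one option (a sketch reusing the commands already shown in this article) is to re-inspect the intent spec and then attempt a blank update:
nutanix@PCVM:~$ nuclei diag.get_specs kind_uuid=<VM UUID> spec_json=true    # re-check the task_uuid and dirty fields shown in the description
nutanix@PCVM:~$ nuclei vm.update <VM UUID> name=<Name-of-the-VM>            # a no-op update should now succeed without the spec_hash error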
KB11260
Foundation: UCSM Maintenance Policy should be 'immediate'
If Cisco UCS Manager has its default maintenance policy incorrectly set, Foundation may fail to mount the Phoenix ISO to each node and subsequently fail to image the nodes.
Incorrect settings in the UCSM default maintenance policy can prevent Foundation from successfully mounting the Phoenix ISO, causing the install to fail. Below is the trace recorded in the Foundation ~/log/node.log for the affected node(s): 20200707 08:39:35 INFO Created vmedia policy fdtnWMP2411004X Error message displayed in UCSM, associated with the failed attempt to create the virtual CD drive: Policy reference vmediaPolicyName fdtnWMP24110057 does not resolve to named policy The reasons for the behaviour: Foundation creates and associates an initial service profile for each node. Foundation powers down the server and attaches a new vMedia policy to each node, which allows the use of a virtual CD drive. The service profile is updated to reflect the vMedia policy change, but the update is not applied if the default policy is set to 'User Ack'. Hence, the virtual drive for the Phoenix ISO is never created. Foundation checks if the ISO is mounted. Since the ISO is not mounted, the above-mentioned error is returned.
This can be resolved with a quick settings change in UCSM: Log into UCSM. Click the 'Servers' option in the menu to the left. Select 'Policies', then expand 'root' in the menu. Expand 'Maintenance Policies', then select 'default' to open the settings we need (the default maintenance policy for the root node, which will be applied to the nodes). Make sure the Reboot and Storage Config. Deployment Policy are both set to 'Immediate'. The 'Servers' menu can be found near the left side of the screen: The default maintenance policy should be set to 'Immediate': NOTE: After finishing with Foundation, the policy can be switched back to UserAck.
KB14653
Flow Network Security Troubleshooting: Reading ovs tables
Overview of gathering information from ovs tables, parsing microseg output, and finding rule UUID in multicardinality situations
Services involved in Flow Network Security (FNS)

Prism Element
Microsegmentation Service: Only available in NOS > 6.x. Responsible for configuring OpenFlow rules in OVS, starting and stopping conntrack stats, and memory reservations. Handles host addition/removal, supports ROBO use cases, and uses Acropolis RPCs to program the OVS rules.
Flow Service: Service is always up. Involved in enablement pre-checks and AHV memory reservation.
Acropolis Service: Prior to 6.x NOS, did the job of the microsegmentation service. Uses an SSH channel to program OVS rules on AHV hosts.
Insights Data Fabric Service (IDF): The microseg service, or Acropolis if NOS is pre-6.x, uses watches to get the updates of VM and vNIC. Supports ROBO use cases; IDF events are leveraged.

AHV
Conntrack stats collector: Supports policy hit logging and visualization, exports flow information of configured policies to Cadmus via Kafka, and exports flow information to registered IPFIX_Collectors and FSC. No separate rule programming to support IPFIX.
IPFIX exporter: From 6.x NOS with PC support, exports IPFIX records to FSC and IPFIX collectors. Uses a separate OpenFlow configuration in br.dmx to commit all the flows in the connection tracking table. Managed by the networking team, not the Flow team.
Acropolis Modules

Prism Central
Microseg Service: Started in the microseg enablement workflow. Involves network security rule config and enforcement, manages policies with VMs that span multiple PE clusters, and sends policy updates only to relevant clusters. Owns network_security_rule_info.
Flow Service: Enablement prechecks, categorizing VDI VMs, configuring service/address groups, policy configs and RBAC, memory reservation in AHV, config of the IPFIX exporter, and handling cluster addition/deletion from PC. Owns service_group and address_group.
Cadmus Service: Used for visualization; communicates with Kafka to get flow info and store it in IDF. A websocket is used to communicate with Cadmus to query traffic.
IDF Service: The microseg service uses watches to get updates on VMs and categories.
Kafka Service: Used for visualization and runs as a docker container. Serves as the queue/bus between the conntrack stats collector (AHV) and Cadmus.

Advanced Network Controller (ANC)
Currently only applies to Flow Virtual Networking (FVN). A set of services that runs as part of CMSP inside PC: mysql, hermes, Open Virtual Networking (OVN). OVN has a northbound db (NB), a southbound db (SB), and northd. NB exposes entities as traditional networking elements: routers, switches, firewalls, etc. SB exposes traditional network elements as lower-level constructs: datapath, logical flow, etc. northd is the service that converts NB to SB.
In an environment with VPC, the following atlas commands may be useful:
atlas_cli network_controller.list
atlas_cli network_controller.get <uuid from list>
atlas_cli config.get
atlas_cli network.list
Note: additional information about Atlas may be found on the Nutanix SharePoint ( Flow Networking https://nutanixinc-my.sharepoint.com/:p:/g/personal/priyankar_jain_nutanix_com/Ed8BQB8NXjJOsB8p7xEIme8BWnEH6eeDyFnM43g_LVRxag)

OVS Architecture
vSwitchd: User space daemon, accessed via ovs-appctl and ovs-ofctl. Core component, supports packet classification using various lookups. OpenFlow rules have a pattern match and an action. Ex: ip,nw_src=x.x.x.x,nw_dst=y.y.y.y action drop
ovsdb-server: User space daemon that acts as the database, accessed via ovs-vsctl.
kernel module: Kernel space, acts as the forwarding engine, accessed via ovs-dpctl.

Important Tables
Application Policy
Table 40: match src_ip with Target Group IP, action=load (rule_uuid to registers)
Table 41: match registers with rule_uuid, dst_ip with Outbound (and ports), action=redirect to table 60
Table 42: match dst_ip with Target Group IP, action=load (rule_uuid to registers)
Table 43: match registers with rule_uuid, src_ip with Inbound (and ports), action=redirect to table 60
Table 60: allow all packets
Table 100: drop all packets
Isolation Policy
Table 20: uses conjunctions to match src_ip and dest_ip of two entities
AD Policy
Table 50: match src_ip with Target Group IP, action=load (metadata first bit)
Table 51: match metadata bit, dst_ip with Target Group IP, action=redirect to table 52; if mismatch, redirect to table 55
Table 52: match the intra-tier traffic, redirect to table 60, else redirect to table 53
Table 53: match the inter-tier traffic, redirect to table 60, else redirect to table 55
Table 55: match the allowed components, redirect to table 60, else redirect to table 58
Table 58: check policy mode, forward to either table 60 or table 100
Quarantine Policy
Tables 10 and 11: match traffic generated from a quarantined VM
Tables 12 and 13: match traffic towards a quarantined VM

Using the Tables
Run the following command from the AHV host: ovs-ofctl dump-flows br.microseg | grep <ip address>
[root@Sonic-3 ~]# ovs-ofctl dump-flows br.microseg| grep XX.XX.XX.210
- Each line will start with "cookie:0x" followed by a hexadecimal number, which is how the flows are tagged. Each flow will be broken up over several lines, depending on where it is in the internal process
- We can trace the flow by checking the "Table" field in this output, to see how this flow is being handled, as well as the "Actions" section to see where the flow is being sent
- Here, we can see that the flow is in Table 40, and the actions are to load the UUID to the registers and resubmit to Table 41. Checking the Table list above, we can see this is an application policy, as Tables 40 and 41 only handle application policies and only look at source IPs
- From there, we see the flow is in Table 41 for the next three lines, and then the action is to commit the flow to Table 60. Checking above, we see that Table 60 is "allow all," so this flow should be allowed based on source IPs
- On line 5, we see the flow hit Table 42, which loads the flow UUID to the registers and resubmits it to Table 43
- We see the last three lines showing the flow being processed in Table 43 and again submitted to Table 60, which still allows all traffic
- Based on this, all sources are allowed and all destinations are allowed; this traffic should not be being blocked
The output from this command can be challenging to read, and while reading the tables does allow an understanding of how the traffic is being processed, it is often more convenient to know which rule is being applied to it. For that command, see the CVM Rule UUID Retrieval section below. In contrast, a blocked rule would have an output similar to the following:
[root@castlevania-3 ~]# ovs-ofctl dump-flows br.microseg
PC To get all policies: nutanix@PCVM:~ $ nuclei network_security_rule.list To get rules, including actions, VMs, and AppType nutanix@PCVM:~ $ nuclei network_security_rule.get <rule_uuid> AHV Host To get flows on a bridge, from host, run ovs-ofctl dump-flows <bridge name> | grep <IP address> [root@Sonic-3 ~]# ovs-ofctl dump-flows br.microseg | grep XX.XX.XX.210 This output will have the table each flow is hitting, and the rule UUID is also present, but broken up into 4 pieces after Reg0, Reg1, Reg2, Reg3, which is challenging to interpret. CVM Rule UUID Retrieval If a VM is referenced more than once in a single policy (multicardinality), the traditional nuclei command for getting the rule uuid of that flow will return "list index out of range." To find the UUIDs of current flows, use the following command: nutanix@CVM:X.X.X.18:~$ hostssh "ovs-ofctl dump-flows br.microseg | grep load | sed -e 's/^.*ip,//' | sed -e 's/actions=/UUID=,/' | sed -e 's/-.*0\[\]//' | sed -e 's/-.*1\[\]//' | sed -e 's/-.*2\[\]//' | sed -e 's/-.*$//' | sed -e 's/,load:0x//g' | grep -e '^nw' 2>/dev/null" To add an IP search filter, add : grep <x.x.x.x> immediately after "grep load " and add a pipe before the sed command that follows, ex: nutanix@CVM:X.X.X.18:~$ hostssh "ovs-ofctl dump-flows br.microseg | grep load | grep xx.xx.xx.157 | sed -e 's/^.*ip,//' | sed -e 's/actions=/UUID=,/' | sed -e 's/-.*0\[\]//' | sed -e 's/-.*1\[\]//' | sed -e 's/-.*2\[\]//' | sed -e 's/-.*$//' | sed -e 's/,load:0x//g' | grep -e '^nw' 2>/dev/null" Use that UUID to confirm which rule is being applied to that flow, and then use KB 13866 http://portal.nutanix.com/kb/13866 to continue to troubleshoot that rule.This command is a work in progress, if it fails, the UUID is available in the full output in the format of the hex number assigned to Reg0-3, such that Reg0 is the first 8 bits, Reg1 is the second 8 bits, etc.The JIRA ticket created to get the one-liner as a user command is NET-13266 https://jira.nutanix.com/browse/NET-13266.
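When the full dump-flows output is overwhelming, it can also be narrowed to a single table from the list above. A minimal sketch, run on the AHV host (the table number and IP are only examples):
[root@ahv ~]# ovs-ofctl dump-flows br.microseg table=40 | grep <ip address>    # only application-policy source-IP rules
[root@ahv ~]# ovs-ofctl dump-flows br.microseg table=100                       # only the drop rules
This makes it easier to confirm whether a given flow ever reaches the allow table (60) or is instead matched by the drop table (100).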
KB11367
Nutanix Self-Service - App/VM deployment fails with ERROR - "The resource 'value' is in use."
App/VM deployment fails with ERROR - "The resource '<value>' is in use."
Nutanix Self-Service is formerly known as Calm. The VM deployment from a Calm blueprint fails with Application status: ERROR and the reason The resource 'value' is in use. Note: the value could be anything that corresponds to the referenced resource. In the following example, the resource '152' indicates the port number of the vDS (VMware virtual Distributed Switch) port. Upon further investigation, expanding the "PV" (Provisioning VM) section shows a snippet as in the example below:
Network with id dvportgroup-119 not found in standard port groups, searching in DVS port groups
Network vds-network1 with id dvportgroup-100 found in DVS port groups
Using Datastore test-ds as the default datastore for disks
Added new disk to devices in VM config spec
Cloning template WIN2019-Template to create VM test-vm10
Waiting for VM clone to complete
an error occurred while cloning the VM from the parent
The resource '152' is in use.
Further, in vCenter under the Networking view, checking the ports column for the vDS in question shows that port ID 152 belongs to the VM template from which the VM is being deployed.
To fix the issue, do the following: Launch the App deployment from the blueprint in question. Under the VM configuration section, expand the "TEMPLATE NETWORK ADAPTERS" section and select the checkbox "Exclude from VM config" as shown below: Expand the NETWORK ADAPTERS details and add a vNIC with the desired Adapter Type and associated Network as shown below. The deployment of the VM should complete successfully and the status should indicate RUNNING as shown below.
KB13326
NDB | Provision MSSQL DB from .bak file fails at "Distribute Database Data" step
This article introduces a scenario in which provisioning an MSSQL DB from a .bak file fails in NDB.
This scenario can be reproduced with the steps below: 1. Provision a DB with NDB ==> Export this DB backup within SSMS ==> Provision a new DB from the DB backup file on NDB. 2. The DB provision operation fails with the error 'Failed to import database on backup on primary database server VM. "\'<DB Name>\'"' at the "Distribute Database Data" step. 3. Provisioning a new DB from the DB backup file means the "Build Database from Existing Backup" option on the Provision SQL Server DB wizard. A screenshot is attached for reference purposes: 4. Notice the "restore location is the same as the source" message in the DB server VM rollback operation log (the rollback operation logs are in the Operation Diagnostic bundle, which is a .zip file ==> unzip it to get more logs): [2022-06-30 18:38:22,114] [5552] [INFO ] [0000-NOPID], 5. On the DB Provision wizard, provide a different name than the original DB name ("aladdindb" in the above example) in the “Database Name on VM” field (a screenshot is attached for reference purposes). The provisioning then completes successfully.
The above provisioning scenario is only supported by NDB 2.5 and later. Here is a temporary workaround for versions before NDB 2.5: ==================== 1. Clone a new DB from the existing DB ("aladdindb" in the above example) Time Machine. 2. Delete the cloned DB from NDB (select the "Unregister the database from Era" option only; do NOT select "Delete the database from the VM"). A screenshot is attached for reference purposes: 3. Register the DB that we just unregistered. 4. NDB can then manage two DBs with the same name on the DB server VMs. The disadvantages of the above workaround: ==================== 1. The "Delete the database from the VM" option is not available when removing this DB from NDB. We can unregister the DB from NDB, then manually clean up the DB and data files within the VM, or refer to the steps in Internal Comments to make the option available. 2. The above workaround does NOT apply to a multiple-PE-cluster configuration.
""cluster status\t\t\tcluster start\t\t\tcluster stop"": ""for i in `svmips` ; do ssh $i \""sudo iptables -t filter -A \\WORLDLIST -p tcp -m tcp --dport 2100 -j ACCEPT\"" ; done""
null
null
null
null
KB13813
Objects - Troubleshooting IAM User Replication Issues
Troubleshooting IAM User Replication Issues
IAM Synchronization: Objects streaming replication enables automatic and asynchronous copying of source buckets to target buckets in different Objects instances. For Objects replication across different Prism Central instances, IAM Synchronization is required between the source PC and its corresponding remote PC. This process involves replicating IAM users on the source Objects instance to the destination Objects instance belonging to a different PC with the same access key and secret key pair. The following Portal document provides steps for configuring IAM synchronization: Setting up IAM Synchronization with a Different Prism Central https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_1:v31-replication-iam-sync-t.html
Useful Commands and Logs for IAM Replication Issues The list of API users on Source PC can be viewed by following this Portal Document - Viewing API Users https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_1:v31-replication-iam-sync-t.html Once IAM Pairing is complete, the list from the Source PC can be compared with the Target PC to confirm that all users have been replicated successfully. If PC UI access is not available, the following command can be used to retrieve this list of IAM users curl -X GET https://iam:5553/iam/v1/users\?offset=0\&length=20\&sort_attribute=username\&sort_order=ascending --cert /home/certs/MspControllerService/MspControllerService.crt \ This returns a list of users as seen below which can be used for comparison between Source and Target PCs { Note: The following options can be used to get the Objects Cluster Service IP Option 1 (With UI Access) Login to Prism Central, select Services and go to the Objects pageThe Objects Cluster Service IP is listed under the Objects Public IPs column in the Object Stores tab. Option 2 (Without UI access) From aoss_service_manager.out log file on PCVM nutanix@PCVM:~$ allssh "tail data/logs/aoss_service_manager.out| grep 'Service IP'" For any unsuccessful IAM Sync operations, check aoss_service_manager.out logs on both the Source PCVM and Target PCVM for details on the failure. One such example of a failure is when an IAM user is not present in the configured AD directory Source PCVM time="2022-08-24 17:00:18Z" level=info msg="Sending request to remote service manager" file="pc_client.go:44" url="https://Target_PC:9440/oss/api/nutanix/v3/iam_replicator/ Target PCVM time="2022-08-24 17:00:19Z" level=info msg="IAM endpoint from MSP: xx.xx.xx.xx:5553" file="poseidon_utils.go:142" This issue can be resolved by deleting the stale user from the list of API users configured for Objects access on the Source PC and retrying the IAM replication from Source to Target PC.
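When comparing the API user lists retrieved with the curl call above from the source and the target PC, a quick plain-text diff can help spot users that were not replicated. A minimal sketch, assuming the two JSON responses were saved to the hypothetical files source_users.json and target_users.json on the PCVM, and that the user objects expose a username field as suggested by the sort_attribute parameter above:
nutanix@PCVM:~$ grep -o '"username": *"[^"]*"' source_users.json | sort > /tmp/src_users.txt
nutanix@PCVM:~$ grep -o '"username": *"[^"]*"' target_users.json | sort > /tmp/dst_users.txt
nutanix@PCVM:~$ diff /tmp/src_users.txt /tmp/dst_users.txt
Any username present only in the source list is a candidate for the stale/missing user scenario described above.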
KB11505
Nutanix Kubernetes Engine - Kubernetes 1.19.8 Airgap deployment fails at the CSI driver deployment step with Karbon 2.2.2
K8s cluster deployment with Karbon 2.2.2 in Airgap environment fails to fetch the CSI driver snapshot images as it points to the Internet repo quay.io instead of the local airgap repo.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services. With Karbon 2.2.2, the Airgap deployment of a new NKE cluster running 1.19.8 (which adds the new CSI Snapshot feature) fails to verify the CSI deployment, causing the new cluster deployment to fail. In ~/data/logs/karbon_core.out on the Prism Central VM(s) (PCVM): 2021-06-04T11:10:09.177Z csi_driver.go:480: [ERROR] [k8s_cluster=test] Failed to verify CSI driver deployment: Operation timed out: Waiting for at least one available replica for CSI provisioner This happens due to the following incorrect CSI driver definition, which does not have the correct Airgap image location updated: "CSIDriverConfig": { The snapshot images incorrectly point to the quay.io repo instead of the local Karbon airgap-0 repo, causing the image pull from the Internet to fail in Airgap environments.
This issue is resolved in Karbon 2.2.3, 2.3 and later. Workaround: The following steps need to be applied to each of the PCVMs: 1. Edit the karbon_core_config file using: nutanix@PCVM:~$ sudo vi /home/docker/karbon_core/karbon_core_config.json 2. Add the two flags below under the entry_point directive, which should look like the following: "image": "karbon-core:v2.2.2", 3. Restart the karbon_core service using: nutanix@PCVM:~$ genesis stop karbon_core; cluster start Note: Do not restart the service during an active deployment or other activities. Afterward, new deployments will point to the correct CSI snapshot image location and the deployment should proceed.
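After the restart, one quick way to confirm that the service came back up before retrying the deployment (a sketch using standard PCVM tooling):
nutanix@PCVM:~$ genesis status | grep karbon_core      # the service should be listed with PIDs
nutanix@PCVM:~$ docker ps | grep karbon-core           # (may require sudo) the karbon-core container should be running
If either check shows the service/container missing, review ~/data/logs/karbon_core.out before proceeding.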
KB14780
Nutanix Kubernetes Engine - How to SCP a file from an NKE Kubernetes VM
This article explains how to SCP a file from an NKE Kubernetes node to the Prism Central VM without the PCVM's password. This may be especially useful when collecting logs via the remote support tunnel.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.Performing SSH into an NKE Kubernetes node requires use of the SSH script downloaded or copied from the NKE UI, or the karbonctl cluster ssh command. This command includes functionality that generates a private key file and a certificate file which are used for SSH connectivity. If a file on a Kubernetes node needs to be copied to the Prism Central VM, the scp command is available on the NKE Kubernetes nodes; however, performing SCP from these nodes to the Prism Central VM (PCVM) will prompt for the PCVM's password. When a user is not available to enter the PCVM password, such as when needing to upload files to a support case via the remote support tunnel, the private key file generated by karbonctl cluster ssh script may be used to SCP files from a Kubernetes node to the PCVM.
To copy a file from the PC VM to a Karbon VM or vice versa, follow the steps below. Step 1: Log in to karbonctl with the below command: nutanix@PCVM:~$ ~/karbon/karbonctl cluster login --pc-username <username> In the first command, replace <username> with either admin or any AD user who has at least Cluster Viewer permission. In the second command, replace <karbon_cluster_name> with the name of the Karbon cluster where you want to copy files. Step 2: Generate the SSH cluster script with the below command: nutanix@PCVM:~$ ~/karbon/karbonctl cluster ssh script --cluster-name ${CLUSTER_NAME} > ~/tmp/${CLUSTER_NAME}.sh Step 3: Dump the private key file to use for scp: nutanix@PCVM:~$ sh ${CLUSTER_NAME}.sh -s The above command will show two files like in the example below: nutanix@NTNX-PCVM:~$ sh ${CLUSTER_NAME}.sh -s Copy the private key file location. Step 4: Perform the secure copy with the below command. To copy files from the PC VM to the Karbon VM: nutanix@NTNX-PCVM:~$ scp -i <output_from_step3> <filename_in_PC> nutanix@<karbon_vm_ip>:<destination_file_location> In the above command, replace the following: <output_from_step3> - private key file location from Step 3 <karbon_vm_ip> - IP of the Karbon VM where the file should be copied to <filename_in_PC> - location of the file that needs to be copied from the PC VM <destination_file_location> - destination location on the Karbon VM An example of copying a file from the PC VM location /home/nutanix/test.txt to the Karbon VM /home/nutanix/ directory is below: nutanix@NTNX-PCVM:~$ scp -i /tmp/KARBON_user_ffd9d6e6-cb86-48b5-5da1-2d0456ba77b4 ~/test.txt nutanix@<karbon_vm_ip>:/home/nutanix/ To copy a file from the Karbon VM to the Prism Central VM: nutanix@NTNX-PCVM:~$ scp -i /tmp/KARBON_user_ffd9d6e6-cb86-48b5-5da1-2d0456ba77b4 nutanix@<karbon_vm_ip>:/home/nutanix/<file> /home/nutanix/tmp/
KB1590
NCC Health Check: ipmi_sel_check
The NCC health check ipmi_sel_check checks if any IPMI SEL assertions or power failures (NX platforms only) were logged in the last 24 hours.
The NCC health check ipmi_sel_check checks if any IPMI SEL assertions(All Platforms) or power failures (NX platforms only) were logged in the last 24 hours. This check displays a report of FAIL if there are any critical assertions present in the IPMI SEL records. It also returns an INFO message with the latest five assertions that have occurred in the previous 24 hours.(For All Platforms) This check displays a report of ERR if there are any power supply problems detected (NX platforms only). If no assertions or power failures have occurred in the past 24 hours, and no critical assertions have ever occurred, this check will result in a PASS. Running the NCC Check You can run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check separately: nutanix@cvm$ ncc health_checks hardware_checks ipmi_checks ipmi_sel_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. The power failure check is scheduled to run every 5 minutes, by default, and will generate an alert after a failure. Sample Outputs For Status: PASS Running : health_checks hardware_checks ipmi_checks ipmi_sel_check For Status: INFO Detailed information for ipmi_sel_check: For Status: FAIL Running : health_checks hardware_checks ipmi_checks ipmi_sel_check Another sample for Status: FAIL Running : health_checks hardware_checks ipmi_checks ipmi_sel_check The following is an example of how the IPMI SEL records look: 30 | 04/16/2014 | 17:00:56 | System Event #0xff | PEF Action | Asserted Output messaging Note: This hardware-related check executes on the below hardware: IPMI SEL assertions check: All platforms except SYS and InspurPower failures Check: Nutanix NX [ { "Check ID": "Check for IPMI SEL assertions and critical event logs in the past 24 hours" }, { "Check ID": "IPMI events logged in the last 24 hours." }, { "Check ID": "Refer to KB 1590." }, { "Check ID": "Host might have restarted." }, { "Check ID": "This check is scheduled to run every hour, by default." }, { "Check ID": "This check will generate an alert after 1 failure." }, { "Check ID": "15030" }, { "Check ID": "Check for IPMI SEL Power failure in the past 24 hours" }, { "Check ID": "The power supply has detected an Over-Current condition for this block and its protection mechanism has activated." }, { "Check ID": "Do not replace or reseat the power supply. Immediately call Nutanix technical support for assistance." }, { "Check ID": "Block will automatically shut down as part of the Over-Current protection mechanism." }, { "Check ID": "IPMI SEL Log Power Failure" }, { "Check ID": "IPMI SEL Log Power Failure on ip_address" }, { "Check ID": "Power failure Over-Current-Protection engaged - host ip_address: {alert_msg} do not remove and add back power - Immediately Contact Nutanix Support." }, { "Check ID": "This check is scheduled to run every 5 minutes, by default." }, { "Check ID": "This check will generate an alert after 1 failure." } ]
Obtain the SEL from the IPMI Web Interface or collect it from the CLI with the following commands. AHV: [root@ahv ~]# /ipmicfg -sel list ESXi: [root@esxi:~] /ipmicfg -sel list Hyper-V: nutanix@cvm$ winsh 'C:/Progra~1/Nutanix/ipmicfg/IPMICFG-Win.exe -pminfo' Upgrade to the latest version of NCC and re-run this check to validate results. In case of PSU Failure condition detected together with below OCP alert, contact Nutanix Support immediately. Message : Power failure Over-Current-Protection engaged - host <IP>: Power 0xc5 has 1 OCP power supply failures.(in the past 24 hours) Note: Correctable ECCs events could be a false positive if BMC/BIOS is not at the recommended versions and NCC version is not up to date. To check if the BMC/BIOS are at the recommended versions run: nutanix@cvm$ ncc health_checks system_checks bmc_bios_version_check Use the following check to specifically validate CECC: nutanix@cvm$ ncc health_checks system_checks ipmi_sel_cecc_check If you require Nutanix Support assistance, provide the output of the above commands, as well as the following NCC command on any CVM (Controller VM) as an attachment to the case. nutanix@cvm$ ncc hardware_info show_hardware_info If assertions are displayed, review for unexplained events and, if necessary, contact Nutanix Support for further assistance. For critical assertions (for example, Uncorrectable ECC Errors, Processor IERR), contact Nutanix Support to investigate the hardware and take necessary actions. If you do not see any assertion, there may be a problem with running the ipmitool on the host. Contact Nutanix Support https://portal.nutanix.com to investigate further.
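When reviewing a long SEL, it can help to filter for the event types called out above before engaging Nutanix Support. A minimal sketch on an AHV host (the grep pattern is only an example and can be adjusted):
[root@ahv ~]# /ipmicfg -sel list | grep -i -E "ecc|ierr|power"
The same filter can be appended to the ESXi variant of the command shown above.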
KB13306
Curator crash or scan failure due to multiple egroupID references
This article documents a rare issue where curator can be impacted when multiple vblocks are pointing to the same egroupID in AOS 6.x.
Clusters running AOS 6.x are susceptible to a rare issue that results in persistent Curator scan failures, which, if left unresolved, can cause data unavailability. Specifically, due to a rare scenario, an extent and its associated vblock can be updated to point to different egroups, causing Curator scans to fail when this data is processed. When this issue occurs, you may see any of the following impact: Alerts indicating Curator partial/full scan failure. Alerts indicating the Curator service has recently crashed. Free space in the cluster not being reclaimed over time. When an extent/vblock which is pointing to multiple egroups is scanned by Curator, the following Curator crash may occur: F20220401 17:00:00.651966Z 14550 vdisk_block_map_task.cc:902] Check failed: extent_region_info->egroup_id == extent_group_id (896869209 vs. 896869205) vdisk_id=896836523, vdisk_block=32152; { "locs": [ Curator scan failures will generate a cluster alert like the below: ID : a46d3457-700b-42cc-9547-ab124cb64e0d See KB-3786 https://portal.nutanix.com/kb/3786 for more information on the Curator scan failure alert. To confirm the cluster is impacted by this issue, you can look for the below failure messages in the curator.INFO logs on all CVMs with the following command: nutanix@cvm$ allssh 'zgrep "Check failed: extent_region_info->egroup_id" /home/nutanix/data/logs/curator*INFO* |tail -2'; date -u
For clusters running AOS version 6.x that encounter the errors noted, and/or maintenance operations cannot be completed because of this, contact Nutanix Support http://portal.nutanix.com for a workaround that will help stabilize the cluster.
""Title"": ""ReFS file system on VMs running windows operating systems""
null
null
null
null
KB11320
Missing boot drive VMFS partition on ESXi 7.0 during Foundation version 4.5.4 or later
ESXi 7.0 nodes provisioned with Foundation 4.5.4 or later end up with a 20GB partition for the CVM while missing the remaining 75GB available on the hypervisor boot drive.
As of ESXi version 7.0.x, the ESXi partition scheme has changed. When using the default installation, the ESXi installer now consumes 99% of the remaining free space on a 64GB SATADOM, or 128GB of space on boot drives larger than 128GB, for the OSDATA partition. To address this change, Foundation version 4.5.4 and later carves out 20GB of space for the CVM local datastore to accommodate the CVM and its files. While this works for a boot drive less than 128GB in size, it results in some unused space if the drive size is more than 128GB. This unused space is the extra partition seen on the ESXi host once ESXi is installed. For example, if the boot drive size is 223GB, the OSDATA partition will be 128GB and the CVM local datastore VMFS partition will be 20GB, which results in an unused 75GB VMFS partition on the ESXi host. [root@localhost:~] df -h [root@NTNX-19SM6K520137-C:~] ls -ltrh /vmfs/devices/disks/ [root@NTNX-19SM6K520137-C:~] esxcfg-volume -l
Workaround: If you need to utilize the unused VMFS partition, mount it on the ESXi host, which will create a new datastore on it, as shown in the sketch below. esxcli storage vmfs snapshot mount -u <partition uuid> Solution: The issue is fixed in Foundation version 5.0.2. Foundation will still carve out 20GB of free space from the boot drive if the boot drive size is less than 128GB; however, if the boot drive size is more than 128GB, instead of carving out separate space for the CVM local datastore, it will use all the remaining available space beyond 128GB.
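To find the UUID to pass to the mount command, one option (a sketch using the same esxcli namespace) is to first list the unmounted VMFS snapshot volumes on the host:
[root@esxi:~] esxcli storage vmfs snapshot list      # lists VMFS volumes that can be mounted, with their UUIDs
[root@esxi:~] esxcli storage vmfs snapshot mount -u <partition uuid>
The UUID reported by esxcfg-volume -l (shown in the description above) can also be used directly.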
KB14548
NDB | Cleanup database group operation fails with the error ‘Instance name missing in app info’.
Cleanup database group operation fails with the error ‘Instance name missing in app info’.
Cleanup database group operation fails with the error ‘Instance name missing in app info’. Operation ID logs will log the below error [ERROR ] [0000-NOPID],Traceback (most recent call last): The clone_group deletion failed as below Failed to mark the clone (id:dbdff6d0-f860-489b-b0ba-51ce789073e4, name:Zestimate) belonging to clone group (id:d11fb02c-46a0-4b07-a7df-4f863ae1007e, name:ntxsql003_dat004) as 'DELETION_FAILED'
Manually drop the DBs on the MSSQL server instance, then re-run the cleanup Database Group operation in NDB.
KB15430
LCM Framework upgrade from 2.4.5 or before using LCM-2.7-H
In Dark site environment, if you are upgrading from LCM-2.4.5 (or before) to LCM-2.7 or later, you need to use the LCM-2.7-H bundle.
For LCM-2.7 and later, two bundles are released on the Nutanix portal download page https://portal.nutanix.com/page/downloads?product=lcm The original version: e.g. LCM Framework Bundle (version: 2.7) The version with -H: e.g. LCM Framework Bundle (Version: 2.7.1.44719-H) If you are a dark site customer running LCM version 2.4.5 or earlier, first upgrade using the -H version, e.g. LCM Framework Bundle (Version: 2.7.1.44719-H), found in the "Other Versions" section. Then you can upgrade to the latest version of LCM using the original version in the future. If you try to upgrade from LCM 2.4.5 or earlier using the non-H version, e.g. LCM Framework Bundle (version: 2.7) instead of 2.7.1.44719-H, you will observe the following error signatures. With Local web server: When you try to save the LCM configuration specifying the local web server, you get the following error message in LCM. Failed to update LCM config, error: Failed to set url to http://<local web server IP>/release in the Config. error: URL 'http://<local web server IP>/release' is either incorrect or not reachable In /home/nutanix/data/logs/genesis.out, the log shows that the master_manifest.tgz.sign file is not found on the local web server. 2023-12-18 10:55:54,240Z INFO 00942832 download_utils.py:908 Testing connectivity with url http://<local web server IP>/release With Direct Upload: Direct Upload displays the following error message in LCM. Operation failed. Reason: NfsError('NFS: Lookup of /bundle_upload/3355ffbc-2d00-4aaf-63d8-e20fdf780d70/master_manifest.tgz.sign failed with NFS3ERR_NOENT(-2)',)
FAQs: Q1. What does H stand for in the 2.7.1.44719-H bundle? A: It stands for Hop version; for older LCM versions, 2.7.1.44719-H is a hop version on the way to the latest version. Q2. If I am running a connected site, will I have to do something? A: No extra steps are needed — simply perform an inventory. It will take care of upgrading the LCM framework twice (2.4.5 -> 2.7 -> latest) in the backend. Q3. If I am running LCM-2.4.5 or before and want to upgrade to LCM-2.7, can I use LCM Framework Bundle (version: 2.7)? A: No, the inventory may fail. You need to use LCM Framework Bundle (Version: 2.7.1.44719-H). Q4. Why do we need a hop version to go to the latest LCM framework version? A: Because certain files are not backward compatible beyond LCM-2.4.5. Hence, you are requested to upgrade through the hop version. Q5. Where can I find the LCM Framework Bundle (Version: 2.7.1.44719-H)? A: Nutanix portal LCM download page - Other Versions https://portal.nutanix.com/page/downloads?product=lcm
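When using a local web server, a quick way to confirm that the hop bundle was extracted correctly before re-running the inventory (a sketch, run from any CVM; the IP and path are placeholders for your dark-site web server):
nutanix@cvm$ curl -I http://<local web server IP>/release/master_manifest.tgz.sign
A 200 OK response indicates that the signed manifest LCM looks for (per the genesis.out message above) is reachable; a 404 means the bundle contents were not extracted into the /release directory as expected.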
KB10632
"NAT (masquerading) IP displayed on Prism Central multicluster instead of VIP"
NAT and PC
This issue was seen with a customer that had configured a NAT IP (masquerading IP) on Prism Central and did not use the complete procedure to remove it. Note: NAT on PC is not supported as of yet. The behavior was observed when trying to unregister Prism Central (PC) from Prism Element (PE). The previously set masquerading IP is shown as the External or Masquerading IP: nutanix@CVM:~$ ncli multicluster get-cluster-state You run the commands to remove it: ncli multicluster delete-cluster-state cluster-id=ClusterID nutanix@CVM~$ ncli multicluster delete-cluster-state cluster-id=3711e7af-1047-45fe-b9c8-391b73327d00 You verify it has been removed with ncli multicluster get-cluster-state: nutanix@CVM:~$ ncli multicluster get-cluster-state Then you register using the correct IP as seen in this CLI: ncli multicluster add-to-multicluster external-ip-address-or-svm-ips=X.220.X.15 username=admin password=######## nutanix@CVM:~$ ncli multicluster add-to-multicluster external-ip-address-or-svm-ips=X.220.X.15 username=admin password=###### You verify using ncli multicluster get-cluster-state, and it shows the old IP again instead of the IP you used in the CLI to register the cluster: nutanix@CVM:~$ ncli multicluster get-cluster-state
Root Cause / Resolution: The masquerading IP, just like the VIP, is stored in Zeus. To remove it, simply run on the PCVM: ncli cluster edit-info masquerading-ip-address=- masquerading-port=- 1. Confirm the current masquerading_ip. On the Prism Central (PC) VM, run the command zeus_config_printer | grep -i masquerading nutanix@PCVM:~$ zeus_config_printer | grep -i masquerading You can see that there is a masquerading_ip set. 2. To clear this, on the Prism Central (PC) VM run the following command: ncli cluster edit-info masquerading-ip-address=- masquerading-port=- Example of output: nutanix@PCVM:~$ ncli cluster edit-info masquerading-ip-address=- masquerading-port=- 3. On the Prism Central (PC) VM, run the command zeus_config_printer | grep -i masquerading to confirm: nutanix@cvm:~$ zeus_config_printer | grep -i masquerading 4. Register again using the correct IP: ncli multicluster add-to-multicluster external-ip-address-or-svm-ips=xxx.xxx.xxx.10 username=admin password=######## nutanix@CVM:~$ ncli multicluster add-to-multicluster external-ip-address-or-svm-ips=xxx.xxx.xxx.10 username=admin password=######## 5. Verify again: nutanix@CVM:~$ ncli multicluster get-cluster-state
KB6301
Nutanix DRaaS - Entity Sync/Replication Troubleshooting
Entity Sync/Replication Troubleshooting with Nutanix DRaaS.
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap.Availability zones synchronize disaster recovery configuration entities when they are paired with each other. Paired availability zones synchronize the following disaster recovery configuration entities and if there is any issue you need to understand the workflow along with the troubleshooting steps. Protection policies - A protection policy is synchronized whenever you create, update, or delete the protection policy. Recovery plans - A recovery plan is synchronized whenever you create, update, or delete the recovery plan. The list of availability zones to which Xi DRaaS must synchronize a recovery plan is derived from the entities that are included in the recovery plan. The entities used to derive the availability zone list are categories and explicitly added VMs.
If there are any issues/alerts for the replication with the below error after you set up the Protection policy & Recovery plan: Troubleshooting Steps: Communication issue with Xi PC: Check aplos - restart the aplos and aplos_engine services. Check network communication. Entity Sync is failing: Check the Magneto logs on the on-prem end: nutanix@PCVM$ less ~/data/logs/magneto.out Check the ergon tasks from the on-prem end via the Ergon page: http://<xi PC IP>:2090 Cannot access the local PE cluster to replicate to Xi: Check PC logs - nutanix@PCVM$ less ~/data/logs/aplos.out Check communication between Xi PC and Xi PEs. Check communication between Xi PC and on-prem PE. Snapshots are not being replicated to Xi AZ: Check communication between the Xi PE cluster and the on-prem PE cluster. Check that the Cerebro and Stargate ports are open (if they get blocked at a firewall): 2009 - stargate, 2020 - cerebro
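If a firewall between the sites is suspected, a quick reachability probe for the Cerebro and Stargate ports can be done from a CVM with plain bash (a sketch; replace the placeholder with the relevant remote CVM or load-balancer address):
nutanix@cvm$ timeout 3 bash -c '</dev/tcp/<remote IP>/2009' && echo "stargate port open" || echo "stargate port blocked"
nutanix@cvm$ timeout 3 bash -c '</dev/tcp/<remote IP>/2020' && echo "cerebro port open" || echo "cerebro port blocked"
A blocked port here points to the firewall rules rather than the Nutanix services.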
KB2725
NCC Health Check: compression_disabled_check
The NCC health check compression_disabled_check determines if compression is disabled due to metadata usage exceeding the maximum percent or capacity usage limits.
The NCC health check compression_disabled_check determines if compression is disabled due to metadata exceeding the maximum percent or capacity usage limits. The value of these limits is different depending on the particular AOS version being used.

Running the NCC Check

It can be run as part of the complete NCC check by running:

nutanix@cvm$ ncc health_checks run_all

or individually as:

nutanix@cvm$ ncc health_checks stargate_checks compression_disabled_check

You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.

This check is not scheduled to run on an interval. This check will not generate an alert.

Sample output

For Status: INFO

Running : health_checks stargate_checks compression_disabled_check
[==================================================] 100%
/health_checks/stargate_checks/compression_disabled_check                  [ INFO ]
-----------------------------------------------------------------------------------------------+
Detailed information for compression_disabled_check:
Node x.x.x.x:
INFO: Compression is disabled.
+---------------+
| State | Count |
+---------------+
| Info  | 1     |
| Total | 1     |
+---------------+
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log

For Status: PASS

Running : health_checks stargate_checks compression_disabled_check
[==================================================] 100%
/health_checks/stargate_checks/compression_disabled_check                  [ PASS ]
------------------------------------------------------------------------------------------------+

Output messaging [ { "Check ID": "Check whether compression is automatically disabled." }, { "Check ID": "Metadata usage has exceeded the 65 percent default threshold." }, { "Check ID": "Contact Nutanix Support to investigate the cause and manually reenable compression." }, { "Check ID": "Compression is disabled" } ]
Manual intervention is required to investigate high metadata usage and re-enable compression on the cluster. Consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.

Collecting Additional Information

Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.

nutanix@cvm$ logbay collect --aggregate=true

Attaching Files to the Case

When viewing the support case on the support portal, use the Reply option and upload the files from there. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
KB14865
Alert - A160180 - PC is not AZ paired
Troubleshooting and resolving alert "Remote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pair".
This Nutanix article provides the information required for troubleshooting the alert A160180 - PC is not AZ paired for your Nutanix Files cluster.

Alert overview

The PC-is-not-AZ-paired alert is generated when a newly registered file server is part of an existing data protection policy whose remote file server is registered to a Prism Central that is not in an Availability Zone pair with the local Prism Central.

Sample alert

Block Serial Number: 23SMXXXXXXXX

Output messaging [ { "160180": "Newly Registered File Server is a Participant in an Existing Data Protection Policy with a Remote File Server Registered to a Prism Central, which isn't in an Availability Zone Pair with the Local Prism Central", "Check ID": "Description" }, { "160180": "The newly registered file server is part of an existing data protection policy with a remote file server registered to Prism Central, which isn't in an Availability Zone Pair with the Local Prism Central", "Check ID": "Cause of failure" }, { "160180": "Add an Availability Zone pair between local Prism Central and Prism Central of the remote file server. Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server. If you have any issues, please refer to KB article 14865.", "Check ID": "Resolutions" }, { "160180": "The remote file server is not visible on the local Prism Central. This means that the data protection policy cannot be managed appropriately. Also, DR workflow cannot be accomplished.", "Check ID": "Impact" }, { "160180": "A160180", "Check ID": "Alert ID" }, { "160180": "Remote file server of data protection policy is registered to a Prism Central which is not in an Availability Zone pair", "Check ID": "Alert Title" }, { "160180": "Remote file server {remote_fs_name} for data protection policy {policy_uuid} with file server {fs_name} is registered to Prism Central {remote_pc_name} ({remote_pc_uuid}), which is not in an Availability Zone pair with the local Prism Central", "Check ID": "Alert Message" } ]
Resolving the issueThis alert ensures that Availability Zones are maintained through File Server Migrations. If a File Server, or a File Server Remote DR site, moves to a new Prism Central instance, and the new Prism Central is not included in the Availability Zones, this alert will trigger. Add an Availability Zone pair between local Prism Central and Prism Central of the remote file server. Alternatively, you can unregister the newly registered Prism Element and register it to another Prism Central in an Availability Zone pair with the Prism Central of the Remote File Server. For information about Availability Zones and their management, refer to the Nutanix Disaster Recovery Guide https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:Disaster-Recovery-DRaaS-Guide. If you need assistance or if the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com./. Collect additional information and attach them to the support case. Collecting additional information Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871.Collect Files related logs. For more information on Logbay, see KB-3094 https://portal.nutanix.com/kb/3094. CVM logs stored in ~/data/logs/minerva_cvm*NVM logs stored within the NVM at ~/data/logs/minerva*To collect the file server logs, run the following command from the CVM, ideally run this on the Minerva leader as there were issues seen otherwise, to get the Minerva leader IP, on the AOS cluster, run nutanix@cvm:~$ afs info.get_leader Once you are on the Minerva leader CVM, run: nutanix@CVM:~$ ncc log_collector --file_server_name_list=<fs_name> --last_no_of_days=5 --minerva_collect_sysstats=True fileserver_logs For example: nutanix@CVM:~$ ncli fs ls | grep -m1 Name Attaching files to the caseTo attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
KB12904
Nutanix Move | Cannot inventory VMs with special characters
Nutanix Move may show zero VMs for a Hyper-V cluster or standalone host if there are virtual machines that contain special or non-English characters.
When adding a Hyper-V cluster or standalone host to Nutanix Move, there may not be any VMs shown for migration. The following error can be seen from the UI:

Failed to get inventory for source 'Hyper-V_x.x.x.x. [DisplayMessage='Failed to read VM's OS info", Location="/hermes/go/src/hypervisor/hyperv/utils.go:353"] Move HyperV agent internal error. (error=0x8000)

Checking the logs from the Move VM, the following can be seen in /opt/xtract-vm/logs/srcagent.log:

I0321 02:07:40.880975 9 errcode.go:40] [x.x.x.x] (*HyperVSrcAgentApi).GetInventory entered
Identify and rename any VMs containing special or non-English characters. Refresh the cluster or host from the Move UI and proceed with the migration plan.
KB10242
Alert - A130149 - Guest Power Operation Failed
This Nutanix article provides the information required for troubleshooting the alert VM Guest Power Op Failed for your Nutanix cluster.
Alert overview The VM Guest Power Op Failed alert can be generated if power operation failed in the guest operating system. Sample alert Block Serial Number: 16SMXXXXXXXX Output messaging [ { "Check ID": "Guest Power Operation Failed" }, { "Check ID": "Power operation failed in the guest operating system." }, { "Check ID": "(A) Manually shut down the VM by logging into the VM and running the appropriate shutdown command.\t\t\t(B) Resolve the issue of the failure. If you cannot resolve the issue, contact Nutanix support." }, { "Check ID": "Desired power operation may not be completed." }, { "Check ID": "A130149" }, { "Check ID": "VM Guest Power Op Failed" }, { "Check ID": "Failed to perform operation '{operation}' on VM '{vm_name}'. {reason}." } ]
Perform the Guest Power Operations manually from the VM console or remote desktop access.To resolve the alert, perform NGT (Nutanix Guest Tools) update/Installation https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-ngt-upgrade-r.html and reboot the Guest VM. NGT will allow performing the Guest operations from Prism. If the above does not resolve the issue, engage Nutanix Support https://portal.nutanix.com/ to check further. Collecting additional information Before collecting additional information, upgrade NCC. For information about upgrading NCC, see KB-2871 https://portal.nutanix.com/kb/2871.Collect the NCC health_checks output file ncc-output-latest.log. For information about collecting the output file, see KB-2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information about Logbay, see KB-6691 https://portal.nutanix.com/kb/6691. nutanix@cvm$ logbay collect --from=YYYY/MM/DD-HH:MM:SS --duration=+XhYm Attaching files to the caseTo attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. Requesting assistanceIf you need further assistance from Nutanix Support, comment in the case on the support portal, asking Nutanix Support to contact you. If you need urgent assistance, contact the support team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also click the Escalate button from the View Case page on the Support portal and explain the urgency in the comment. Closing the caseIf this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. You can also update the support case, saying it is okay to close it.
KB14388
NDB UI reports successful Rollback after a failed Oracle Database Server Patching operation even though rollback did not completely bring the database online
NDB UI reports successful Rollback after a failed Oracle Database Server Patching operation even though rollback did not completely bring the database online
Database Server Patching fails for an Oracle Database Server due to an Oracle patch installation failure. NDB initiates a Rollback for the Database Server, and the NDB UI shows the Rollback as Success. However, logs collected from the DB Server show that the Rollback did not complete (operationid_SCRIPTS.log):

Executing script rollback_patch_database.sh as user oracle
This is currently a known issue. Engage Nutanix Support for further assistance.
KB16945
Upgrading AHV from version 20160925.x or 20170830.x using LCM 3.0 fails with the "zero length field name in format" error
Upgrading AHV from version 20160925.x or 20170830.x using LCM 3.0 fails with the "zero length field name in format" error
Nutanix has identified an issue where upgrading AHV from version 20160925.x or 20170830.x using LCM 3.0 fails with the following error:

Operation Failed. Reason: LCM operation update failed on leader, ip: [10.x.x.40] due to LCM Operation encountered exception 'NoneType' object has no attribute 'get'..

The following exception can be found in the /home/nutanix/data/logs/lcm_ops.out log on the CVM:

2024-05-23 11:41:48 ERROR 93955904 __init__.py:198 Traceback (most recent call last):

A similar error can be found in /var/log/lcm_ahv_upgrade.log on the AHV host:

2024-05-23 10:54:46,758 Gathered bundle information: {'AHV': 'AHV-DVD-x86_64-el7.nutanix.20201105.2244.iso'}
This issue has been fixed in LCM-3.0.0.1. Upgrade to LCM-3.0.0.1 or above.

Workaround if LCM version < LCM-3.0.0.1: Perform the upgrade manually using the Host Boot Disk Repair https://portal.nutanix.com/page/documents/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-NX3155GG5:Hypervisor-Boot-Drive-Replacement-Platform-NX3155GG5 workflow (the section about hardware replacement is not needed in this case and should be skipped).
KB15939
Categories' v4 API used with 'expand' parameter do not return results
When using v4 API 'Categories' together with the 'expand' parameter, an error is returned when the filter does not return results.
Scenario: Categories v4 API: https://developers.nutanix.com/api-reference?namespace=prism&version=v4.0.a2#tag/Categories APIs of concern: https://developers.nutanix.com/api-reference?namespace=prism&version=v4.0.a2#tag/Categories/operation/getAllCategories https://developers.nutanix.com/api-reference?namespace=prism&version=v4.0.a2#tag/Categories/operation/getCategoryByExtId Affected API versions: v4.0.a2, v4.0.b1 Affected Prism Central (PC) release versions: pc.2024.1 Issue: When the above APIs are used with one of the following query parameters: $expand=associations($filter=<filter>) $expand=detailedAssociations($filter=<filter>) And <filter> does not match any results available, then instead of returning the category or categories without the expanded results, an error code CTGRS-50024 and HTTP code 500 are returned. See the example below: {
Nutanix Engineering is aware of the issue and is working on a fix in a future release.
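To illustrate the behaviour, a request similar to the following can be sent to the affected Prism Central. The endpoint path below is an assumption for illustration only; consult the API reference linked in the description for the exact URI in your release.

# Hypothetical example - the endpoint path is an assumption, adjust per the API reference
curl -k -u admin 'https://<pc_ip>:9440/api/prism/v4.0.b1/config/categories?$expand=associations($filter=<filter-with-no-matches>)'
# Observed (buggy) behaviour on pc.2024.1: HTTP 500 with error code CTGRS-50024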
KB14027
Nutanix DRaaS | Stargate ports blocked by Xi
When configuring DRaaS, it may be seen that only the stargate ports are not responding to netcat.
When performing network connectivity tests with a newly provisioned customer from their onprem to Xi, it may be seen that roughly half of the ports are failing despite customer firewall config allowing all ports: nutanix@PCVM:~$ for i in {1024..1034}; do nc -zvw3 206.80.158.76 $i; done Looking at the failures, we see ports 1024, 1026, 1028, and 1030 are timing out. We can check the XAT port mappings from the upstream Xi PC 2070 page:From this page, we see that ports 1024, 1026, 1028, and 1030 all correspond to the stargate ports (2009) of the individual Xi CVMs (plus the VIP). We can then check the iptables rules for the Xi CVMs and see that there is no rule to allow 2009: nutanix@CVM:~$ sudo iptables -nL | grep 2009 For reference, on a working tenant: nutanix@CVM:~$ sudo iptables -nL | grep 2009 Note that the ACCEPT for 10.0.0.0/8 is missing from the problem Xi CVMs.
Please collect a full log bundle on the Xi Prism Central and Xi CVMs after applying this fix, and open a Xi Oncall for RCA.To resolve this issue, we can add the following rule to all of the Xi CVMs: nutanix@CVM~:$ allssh "modify_firewall -f -r 10.0.0.0/255.0.0.0 -p 2009 -i eth0" There are no service restarts required after applying this change. After applying this firewall rule, we can double check the iptables once more to be sure 10.0.0.0/8 traffic is now being accepted: nutanix@CVM:~$ allssh "modify_firewall -f -r 10.0.0.0/255.0.0.0 -p 2009 -i eth0" Running the connectivity test on customer's onprem PCVM/CVMs should now fully succeed as well: nutanix@PCVM:~$ for i in {1024..1034}; do nc -zvw3 206.80.158.76 $i; done
KB6387
SMTP test email works but alert emails fails
Alert emails are not getting triggered in the cluster; however, test emails sent from the SMTP configuration in Prism are working.
Symptom 1: Sending a test email from Prism web page works as expected however triggering an alert email manually will throw below error: nutanix@CVM:~/data/logs$ ~/serviceability/bin/email-alerts --to_address="[email protected]" --subject="test `ncli cluster get-params`" This is because of an incompatible authentication ie., (AUTH_CRAM_MD5)configured on SMTP server with CVM being in FIPS mode. /usr/lib64/python2.6/smtplib.py SocketLabs SMTP Server service > SocketLabs SMTP Server service cannot be used in Nutanix due to compatibility issues.> SocketLabs requires Nutanix CVM to use an authentication method in the default implementation. The username and password are shared by SocketLabs to the customer.> The incompatibility is because SocketLabs uses CRAM-MD5 and Auth Login types of authentication methods. Nutanix CVMs support Auth Login. On the Nutanix CVMs FIPS is enabled by default and FIPS mode is incompatible with the CRAM-MD5 authentication method. If in the SMTP settings, no username and password are added when using SocketLabs as the SMTP server then in ncli cluster get-smtp-server we will see the following error: SMTP failure in sending email: {u'<username>@<domain_name>': (501, '5.7.0 Authentication required. Please authenticate with CRAM MD5 or AUTH Login.\n5.7.0 see http://support.socketlabs.com/kb/81 for more information.')} And if the username and password are supplied then in ncli cluster get-smtp-server, we will see the following error. Generic error in sending email: kSmptError or Send email socket exception: 1. [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:618) The below command will not show any error message on the screen. Refer TH-11120 https://jira.nutanix.com/browse/TH-11120 for more details. ncli cluster send-test-email SMTP2GO server service > Similar error is seen if the customer is using SMTP2GO server service.> The incompatibility is because SMTP2GO also uses CRAM-MD5 and Auth Login types of authentication methods. Initially if the customer is not using appropriate port number and security protocol as specified in the Prism Central Guide, we see the below signature in the logs. ( In this case the configuration was > Port - 587 and Security protocol - SSL) $ ncli cluster get-smtp-server After we change the port number and security protocol as suggested in the document above, we see the below signature in the logs. $ ncli cluster get-smtp-server While checking further logs, below signature is seen in "send-email.log", which indicates the incompatibility between the authentication method used on SMTP server and FIPS enabled on Nutanix CVM - "grep ERROR ~/data/logs/send-email.log" Symptom 2: Sending a test email from both Prism web UI and CLI works ( KB-2773 https://portal.nutanix.com/kb/2773). However, cluster alert email cannot be sent. Refer to TH-5975 https://jira.nutanix.com/browse/TH-5975. Find the alert_manager leader nutanix@CVM$ alert_tool Check alert_manager.out in alert manager leader CVM with the following error: E0225 18:14:28.873324 22357 alert_policy.cc:657] Sending email for alert (uuid: b2387b8c-bd8b-46ac-98d6-225486341ba1) failed with error, 9 send-email.log - 0 emails are sent 2021-02-25 18:14:03 INFO send-email:228 Sending email via SMTP server xx.xx.xx.xx:25, SMTP Host Type: User Configured to: '[email protected]' Check /home/log/messages, alert_manager failed to invoke email-alert script due to out of memory issue. 
2021-02-25T18:14:28.468512+09:00 NTNX-18SMXXXXXXXX-A-CVM kernel: [23652002.760822] python2.7 invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=100 Check Heap size in alert_manager.INFO, you may see the memory usage is reaching 256M nutanix@CVM$ tail ~/data/logs/alert_manager.INFO | grep Heap Alert_manager simply calls email-alerts script using system(2), but it is failing with error 9 (EBADF) because alert_manager is hitting its memory limit (256M) and invoked oom-killer for email-alerts process.
Symptom 1:
The solution is based on the FIPS mode on the SMTP server. Since the CVM has FIPS mode enabled, use the authentication mode AUTH_PLAIN or AUTH_LOGIN (plaintext) in the SMTP configuration, and run ncc health_checks system_checks auto_support_check to make sure the check passes.

NOTE: SocketLabs or any SMTP server using the CRAM-MD5 authentication method is not compatible with Nutanix CVMs.

Configuration that works on the SMTP2GO server - IP Authentication – no username/password is required when sending emails from the specified IPs.
Configuration on the Nutanix end that worked after the above changes - Port - 2525

Symptom 2:
It is still not clear why alert_manager uses so much memory under some conditions. If you see this symptom, before providing the workaround, please collect the log bundles and the following information from the alert_manager leader CVM:

nutanix@CVM$ uptime

Collect files ~/tmp/growth.out, ~/tmp/heapstats.out and ~/tmp/stack.out. Additionally, collect the following details with respect to the cluster:

- Collect top sysstats from CVMs when the issue occurs
- AOS version
- NCC version
- Number of alerts
- Number of nodes and VMs
- What operations were being performed around the time of the OOM error?

Now you can provide the workaround to restart the alert_manager service:

nutanix@CVM$ allssh genesis stop alert_manager
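A minimal restart sequence for the workaround above, assuming standard CVM service management (the stopped service is brought back with cluster start):

nutanix@CVM$ allssh genesis stop alert_manager
nutanix@CVM$ cluster start
nutanix@CVM$ cluster status | grep -i alert_manager   # confirm the service is UP on all CVMs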
KB7075
Foundation- StandardError: This node is expected to have exactly 2 LSI SAS3008. But phoenix could find only 1 device(s)
This issue occurs in events when phoenix is unable to detect the LSI card.
Foundation fails to image the node while creating the cluster, and the node will be stuck in Phoenix. The following trace is seen in the logs:

20190307 06:12:34 INFO Node with ip 10.16.70.12 is in phoenix. Generating hardware_config.json

Check the model name of the node and execute lspci while the node is in Phoenix to check if there is any trace of the LSI card. Depending upon the node type, there will be either 1 or 2 LSI cards per node.

Example for a node with 2x LSI:

nutanix@CVM:~$ lspci | grep -i "serial attached"

Example for a node with 1x LSI:

nutanix@CVM:~$ lspci | grep -i "serial attached"
Rescan the PCI bus by running:

echo 1 > /sys/bus/pci/rescan

Verify that the LSI card(s) are now being detected. If detected, initiate Phoenix again by running:

[root@phoenix ]# /phoenix/phoenix

If the LSI controller is not detected even after rescanning the PCI bus, try to reseat the LSI controller, referring to the HW Replacement Documentation https://portal.nutanix.com/page/documents/list?type=hardware. If it is still not detected, dispatch a new LSI card.
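For reference, the rescan and verification steps above can be run together from the Phoenix prompt (a minimal sketch; only re-run imaging once the controller is visible):

# From the Phoenix prompt on the affected node
[root@phoenix ~]# echo 1 > /sys/bus/pci/rescan
[root@phoenix ~]# lspci | grep -i "serial attached"   # the LSI SAS3008 controller(s) should now be listed
[root@phoenix ~]# /phoenix/phoenix                    # re-initiate imaging only if the controller(s) are detected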
KB5375
vCenter 6.5 Datastore Web Browser : "Cannot connect to host" while trying to copy files between datastores
While the Datastore Web Browser File Copy feature can be leveraged in vCenter 6.0 to copy files between datastores from different clusters, this is not true for vCenter 6.5.

If the destination datastore is mounted on several containers, the copy will randomly fail with the following error message: "Cannot connect to host". If the destination datastore is mounted on a single container, the copy will work just fine.
This is a VMware vCenter 6.5 bug. In the unlikely case there is a need for copying files manually between clusters, an alternative is to use scp.
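A minimal example of the scp alternative, assuming SSH is enabled on the ESXi hosts; all paths and hostnames below are placeholders:

# Run from the source ESXi host shell
scp /vmfs/volumes/<source_datastore>/<vm_folder>/<file> root@<destination_esxi_host>:/vmfs/volumes/<destination_datastore>/<vm_folder>/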
KB13977
LCM: NIC upgrades fail with "Unable to stage module pem"
LCM NIC firmware upgrades fail with "LcmActionsError: Unable to stage module pem: err:Unable to download the environment module files" when using LCM Dark Site Webserver.
LCM NIC Firmware upgrades fail when using a Dark Site Webserver. All other upgrades work without issue. lcm_ops.out shows that we are unable to boot into phoenix. DEBUG: [2022-11-14 21:08:30.325322] Unable to boot cvm [X.X.X.X] into phoenix with error: Failed to get pem image version from tar file path /home/nutanix/tmp/lcm_staging/nutanix/env_modules/pem/phoenix-ivu-v5.0.4-x86_64.iso with error 'NoneType' object has no attribute 'groups' lcm_ops.out will also state that we are Unable to download the environment module files 2022-11-14 21:12:55,433Z ERROR env_utils.py:535 (X.X.X.X, kLcmUpdateOperation, ce768ac7-e799-4383-7cf1-0140313be7b9) Unable to download the file {u'status': u'recommended', u'entity_class': u'Environment', u'shasum': u'88996f9f76669bf8bc5ca3dff007812282d36acc3ebf00a14b4f4835e3af7bf5', u'name': u'phoenix-ivu-v5.0.4-x86_64.iso', u'url': u' http://X.X.X.X/lcm/release/builds/env_modules/ivu/5.0.4/phoenix-ivu-v5.0.4-x86_64.iso', u'image': u'phoenix-ivu-v5.0.4-x86_64.iso', u'flag_list': [u'smoke', u'x86_64', u'ahv', u'esx', u'hyperv'], u'entity_model': u'IVU', u'version': u'5.0.4', u'tag_list': [], u'update_library_list': []} Manual attempts to download via wget on the lcm_leader are successful. nutanix@NTNX-CVM:~$ wget http://X.X.X.X/lcm/release/builds/env_modules/ivu/5.0.4/phoenix-ivu-v5.0.4-x86_64.iso
This issue is resolved in LCM-2.7. Please upgrade to LCM-2.7 or higher version - Release Notes | Life Cycle Manager Version 2.7 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:Release-Notes-LCMIf you are using LCM for the upgrade at a dark site or a location without Internet access, please upgrade to the latest LCM build (LCM-2.7 or higher) using Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide:Life-Cycle-Manager-Dark-Site-Guide
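After upgrading LCM, the reachability of the IVU environment module on the dark site webserver can be re-confirmed from the LCM leader CVM using the same wget test shown in the description. The URL path and version below are from this example and may differ in your environment.

nutanix@CVM:~$ wget http://<darksite_webserver>/lcm/release/builds/env_modules/ivu/5.0.4/phoenix-ivu-v5.0.4-x86_64.iso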
KB6065
Nutanix Files - SMB signing impact
SMB signing will cause a significant performance hit on a Nutanix Files share.
SMB signing is set to auto-negotiate on Nutanix Files, meaning we will honor what the client requests. Negotiation occurs between the SMB client and the SMB server to decide whether signing will be used. The following table shows the effective behavior for SMBv3 and SMBv2. Please refer to Microsoft network client: Digitally sign communications (always) https://docs.microsoft.com/en-us/windows/security/threat-protection/security-policy-settings/microsoft-network-client-digitally-sign-communications-always for more details.

| Server – required | Server – not required |
| Signed *          | Not signed **         |

* Default for domain controller SMB traffic
** Default for all other SMB traffic

Performance impact of SMB signing as tested by engineering:
All SMB communications to and from clients experience a performance impact (increased CPU usage) on both client and server when SMB signing is enabled. A significant impact was observed with large file copies (see results below). For workloads such as user profiles and home directories, the impact is minimal (less than 5%). No performance impact is observed on metadata workloads when SMB signing is enabled. If the customer is using SMB3 (Windows 10 and up), along with SMB3 signing (Windows 10 and up), they may see a performance improvement compared to SMB2 signing (Windows 7).
Please see the following links on how to disable signing for the client.

For Mac clients: https://support.apple.com/en-us/HT205926 https://support.apple.com/en-us/HT205926
For Windows clients:
https://blogs.technet.microsoft.com/josebda/2010/12/01/the-basics-of-smb-signing-covering-both-smb1-and-smb2/ https://blogs.technet.microsoft.com/josebda/2010/12/01/the-basics-of-smb-signing-covering-both-smb1-and-smb2/
http://mctexpert.blogspot.com/2011/02/disable-smb-signing.html http://mctexpert.blogspot.com/2011/02/disable-smb-signing.html
https://techcommunity.microsoft.com/t5/storage-at-microsoft/configure-smb-signing-with-confidence/ba-p/2418102 https://techcommunity.microsoft.com/t5/storage-at-microsoft/configure-smb-signing-with-confidence/ba-p/2418102

Run the below on an FSVM to force Nutanix Files to enforce SMB signing for all clients.

1) Verify server signing: afs fs.info | grep ^SMB
Example:
FSVM:~$ afs fs.info | grep ^SMB

2) Update the server signing to require clients to have signing enabled: afs fs.edit smb_signing_required=true
Example:
FSVM:~$ afs fs.edit smb_signing_required=true

3) Verify that the server signing has been updated to ON: afs fs.info | grep ^SMB
Example:
FSVM:~$ afs fs.info | grep ^SMB

Performance test results (CPU difference in % with signing enabled):

SMB 2
| Seq Write(MBps) | Seq Read(MBps) |
| -73             | -77            |
| -59             | -66            |

SMB 3
| Seq Write(MBps) | Seq Read(MBps) |
| -61             | -46            |
| -30             | -25            |
KB6019
Snapshot fails with error Failed to snapshot NFS files for consistency group cg_name in protection domain pd_name, error: kTimeout.
Customers might receive this alert during a snapshot operation. This KB explains one scenario where this is may be seen and the steps to diagnose and resolve the problem
Customers might receive an alert for snapshot failure due to kTimeout. This error can appear on both Metro Availability and Async-DR setups. Symptoms: There is a condition reported in the field on Metro Cluster setups where the automatic reference snapshots (checkpoints) which are executed by default every 4 hours will fail to complete, generating an alert in Prism.Reference snapshots or checkpoints are vStore level snapshots taken automatically on a Metro relationship in order to have a recent restore point in case Metro relationship breaks and a resynchronization is required. With this reference snapshots, only the delta changes from the time of the last snapshot are transferred. Essentially, these reference snapshots are regular async-DR snapshots of the whole contents of the container protected by Metro but only metadata is copied across clusters. Data transfer is not required as it is already present at both sides of the metro relationship. The snapshot operation requires iterating all the files inside the container and doing a quiesce on the namespace in order to perform the snapshot creation.If the required actions to take the snapshot cannot be completed in the alloted time, Prism will display a kTimeout error and the snapshot creation will fail. This means that it will not be possible to restore data from this snapshot and the system will rely on the latest healthy checkpoint to perform the re-synchronization if Metro relationship breaks.Note that this condition can also be hit on clusters which are only using Async-DR Protection Domains as they follow a very similar workflow as the one utilized for creating Metro Availability reference snapshots. ncli alert ls ID : 1520952567494513:4016394661:0005229c-541e-130d-1eb7-a0369f746524 Message : Protection domain CT-PA3-1 snapshot (2213413898339771684, 1445429705052941, 537990501) failed because Failed to snapshot NFS files for consistency group CT-PA3-1_1504188155928169 in protection domain CT-PA3-1, error: kTimeout. Severity : kCritical Title : Protection Domain Snapshot Failure Created On : Tue Mar 13 15:49:27 CET 2018 Acknowledged : false Acknowledged By : Cerebro master logs will display a kTimeout error as can be seen in ~/data/logs/cerebro.INFO W0313 12:51:18.811934 8082 stargate_interface.cc:394] RPC to 172.16.38.26:2009 method NfsSnapshotGroup returned error kTimeoutW0313 12:51:18.812175 8082 snapshot_consistency_group_sub_op.cc:3602] Failed to snapshot NFS files for consistency group CT-PA3-1_1516663811292860 in protection domain CT-PA3-1, error: kTimeout Stargate logs on the same node as the Cerebro Master will show how Stargate was unable to complete the operation in time: ~/data/logs/stargate.INFO W0510 09:34:27.814880 7438 stargate_interface.cc:394] RPC to 172.16.38.28:2009 method NfsSnapshotGroup returned error kTimeoutI0510 09:34:27.815364 7438 snapshot_consistency_group_sub_op.cc:3550] Snapshotting files completed with error: kTimeout, NFS error: -1W0510 09:34:27.815991 7438 snapshot_consistency_group_sub_op.cc:3602] Failed to snapshot NFS files for consistency group CT-PA3-5_1517412704118498 in protection domain CT-PA3-5, error: kTimeout How to identify:There are two ways to identify this issue:1. Stargate logs of the node where the RPC for method NFsSnapshotGroup was sent to. Note that there will be no retries (no lines containing kRetry) for the SnapshotVdisk Op. The errors will be similar to: I1005 17:00:18.698127 23347 rpc_client.cc:442] RPC timed out: rpc_id=4522116524215571622 peer=local method=SnapshotVDisk 2. 
Enable Stargate activity traces and wait until the kTimeout alert is triggered (Consult with a senior resource if you have any doubts about enabling a gflag on a customer's system). This needs to be checked on a Cerebro master from one side where the Protection Domains are marked as Active. a. Verify if the activity traces are already enabled: nutanix@NTNX-13SM66430021-A-CVM:10.64.34.100:~/data/logs$ allssh "curl http://0:2009/h/gflags 2>/dev/null | grep activity_tracer_default_sampling_frequency"================== 10.64.34.100 =================--activity_tracer_default_sampling_frequency0================== 10.64.34.101 =================--activity_tracer_default_sampling_frequency=0================== 10.64.34.102 =================--activity_tracer_default_sampling_frequency=0================== 10.64.34.103 =================--activity_tracer_default_sampling_frequency=0 b. If they are not, enable them on the live stargate process. This is a safe operation, but the setting must be reverted after the troubleshooting is completed as this adds overhead to the stargate process: nutanix@NTNX-13SM66430021-A-CVM:10.64.34.100:~/data/logs$ allssh curl http://0:2009/h/gflags?activity_tracer_default_sampling_frequency=1 2>/dev/null...nutanix@NTNX-13SM66430021-A-CVM:10.64.34.90:~/data/logs$ allssh "curl http://0:2009/h/gflags 2>/dev/null | grep activity_tracer_default_sampling_frequency"================== 10.64.34.100 =================--activity_tracer_default_sampling_frequency=1 (default 0)================== 10.64.34.101 =================--activity_tracer_default_sampling_frequency=1 (default 0)================== 10.64.34.102 =================--activity_tracer_default_sampling_frequency=1 (default 0)================== 10.64.34.103 =================--activity_tracer_default_sampling_frequency=1 (default 0) 2.1. After the alert is triggered, look for the following log signatures: a. Identify which CVM is holding the Cerebro master nutanix@NTNX-14SX35060017-A-CVM:10.64.34.100:~$ allssh links -dump http://0:2020 | grep "Master Handle"|Master Handle [1]10.64.34.103:2020| b. Ssh to the CVM holding the cerebro master and use links to navigate to the activity traces page. Open cerebro_master component and select the “Error” bucket. Expand the entries: nutanix@NTNX-14SX35060017-A-CVM:10.64.34.100:~$ ssh 10.64.34.103FIPS mode initializedNutanix Controller VMLast login: Mon Aug 20 12:58:10 2018 from 10.64.34.100nutanix@NTNX-14SX35060017-D-CVM:10.64.34.103:~$ links http://0:2020/h/traces c. Expand Error bucket and look for the following signature: |2018/05/10-09:34:07.942769| 19.874125|CerebroMasterSnapshotConsistencyGroupSubOp || | |opid_refresh=09:34:18 snapshot_handle=(2213413898339771684, 1445429705052941, 544004091) consistency_group=CT-PA3-5_1517412704118498 opid=7877157 || 09:34:07.942769| . 0| Start || 09:34:07.942793| . 24| Starting snapshot consistency group sub op for pd: CT-PA3-5 and cg: CT-PA3-5_1517412704118498 || 09:34:07.945971| . 3178| Snapshotting 16 files || 09:34:27.815376| 19.869405| Snapshotting files completed with error: 1|| 09:34:27.816884| . 1508| Finishing with cerebro error kRetry || 09:34:27.816894| . 10| Finished with error d. 
Exit links and locate the NFS (Stargate) master: nutanix@NTNX-14SX35060017-A-CVM:10.64.34.100:~$ allssh 'links -dump http://0:2009 | grep master'================== 10.64.34.100 ================= NFS master handle: [6]10.64.34.102:2009================== 10.64.34.101 ================= NFS master handle: [6]10.64.34.102:2009================== 10.64.34.102 ================= e. Navigate to the activity traces of the NFS master, selecting component Admission Control (admctl): nutanix@NTNX-14SX35060017-A-CVM:10.64.34.100:~$ ssh 10.64.34.102FIPS mode initializedNutanix Controller VMLast login: Mon Aug 20 13:27:35 2018 from 10.64.34.100nutanix@NTNX-14SX35060017-C-CVM:10.64.34.102:~$ links http://0:2009/h/traces f. Expand the error tab as with the Cerebro Master traces, look for the following error signature: |2018/05/11-13:35:36.002068| 15.181393|AdmctlSnapshotVDiskOp || | |priority=kRead vdisk_name=NFS:11:4:35498 opid=93910082886 rpcid=1935965554258976630 || 13:35:36.002068| . 0| Start || 13:35:36.002074| . 6| pushed to qos queue || 13:35:36.002083| . 9| Executing || 13:35:36.002083| . 0| Admitted || 13:35:36.002114| . 31| Locks on 1 vdisks acquired. || 13:35:36.002115| . 1| Vdisk id lock acquired || 13:35:51.183344| 15.181229| Waiting for oplog to be flushed || 13:35:51.183352| . 8| Flushing operation log || 13:35:51.183394| . 42| Oplog flushed for the vdisk. || 13:35:51.183444| . 50| Aborting snapshot since RPC has timed out || 13:35:51.183461| . 17| Finished with error g. A correct snapshot operation should look like the following:
Background on Oplog sharing implementation:

- Old Oplog implementation
Before Oplog sharing was introduced, when the oplog on the parent contained unflushed data at the time of taking a snapshot of vdisk P, P was marked immutable and an extra snapshot was created, P', which inherits all the properties of P but remains mutable, leaving the nameless parent and two children (one of which inherits the old parent's name). This way of taking snapshots could also cause outages during customer maintenance windows as per ISB-054 ( https://docs.google.com/document/d/11J9aWFZVeRzhJORVUiWvzPHjkKiSmteb5c_jnFbMAVI/edit https://docs.google.com/document/d/11J9aWFZVeRzhJORVUiWvzPHjkKiSmteb5c_jnFbMAVI/edit). If the system was not able to drain the oplog in time, two copies of the oplog were generated and drained, doubling the work. With Oplog sharing, as in the diagram below, episodes are not duplicated. This allows saving the resources consumed while draining the episodes.

- Oplog sharing (AOS >= 5.1)

Root cause:
Oplog sharing introduced a problem described in ENG-89640. With Oplog sharing, we require all the vdisks that are part of the Oplog Sharing chain to be hosted on the same node. Before the fix introduced in ENG-89640:
1. There was no retry logic to re-attempt the snapshot operation if it failed (for reasons such as a very busy oplog).
2. The lookup logic to identify the vdisks that are part of the same oplog sharing chain had a flaw. The logic was only checking the node where the leaf vdisk of the chain was hosted. If it was not hosted by any stargate yet, it was re-hosted to the same node where the snapshot was being taken. With this operation, ancestors of the leaf vdisk which might have their oplog still pending draining could be pulled also, disturbing the oplog draining process and causing it to fail with the errors described above.

ENG-89640 solves the two above conditions by:
1. 3 retry operations to complete the snapshot, per vdisk.
2. Always forwarding the request to snapshot a vdisk to the node where the oplog sharing ancestor's root vdisk is hosted. If there is no node hosting it, host the entire oplog sharing chain on the local node where the snapshot request was received.

Solution:
Verify that the customer is following all the Best Practices regarding the number of VMs per Protection Domain, as this has influence on the scenario described in this KB.
Request the customer to upgrade to a version with the fix (5.5.3, 5.6.1 or later). Retries will be done more frequently to ensure the snapshot is taken successfully.

Notes:
IMPORTANT: Remove the activity_traces gflag after the troubleshooting is completed.
If analyzing a collected log bundle (and assuming the activity_traces from stargate are enabled), the traces can be parsed to look for the log signatures under <bundle_location>/cvm_logs/activity_tracer.
This scenario pertains to AOS 5.1, where the oplog sharing was introduced, and to versions where the fix was not introduced. There are some instances where customers could hit this on AOS versions previous to 5.1 due to an extremely busy stargate, but these would be very rare instances.
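To revert the activity tracer setting mentioned in the notes above, the same live-gflag method used to enable it can be used to set it back to the default of 0 (a minimal sketch; verify the value afterwards):

nutanix@CVM:~$ allssh 'curl http://0:2009/h/gflags?activity_tracer_default_sampling_frequency=0' 2>/dev/null
nutanix@CVM:~$ allssh "curl http://0:2009/h/gflags 2>/dev/null | grep activity_tracer_default_sampling_frequency"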
KB11876
NGT Setup Fails due to Carbonite Software modifying Drivers
NGT Setup Fails due to Carbonite Software modifying Drivers.
NGT Setup Fails due to Carbonite Software modifying Drivers. Update to the latest version of Carbonite and then attempt to reinstall NGT tools. If you are still facing the same issue on the latest version of Carbonite after following the workaround below, consider opening a support case with Carbonite and Nutanix Support for further investigation. Error "Setup Failed 0x80070643 - Fatal error during installation" is seen during VM Mobility Driver installation. Nutanix oem*.inf files are altered by Carbonite Move software. For example, oem7.inf first lines failing balloon driver as seen below. Carbonite Move by Double-Take Software Inc: ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; From pnputil.exe output, we can see that oem7.inf file belongs to Nutanix Balloon driver: pnputil.exe -e or pnputil.exe /enum-drivers
Prior to resolving this issue, gather NGT logs from the following location on the affected VM and attach them to Nutanix support case. C:\Program Files\Nutanix\logsC:\windows\inf\ Devices that need to have drivers reinstalled: Nutanix VirtIO Ethernet adapter (in Network adapters)Nutanix VirtIO Balloon driver (in System devices)Nutanix VirtIO SCSI pass-through controller Reinstall the drivers and update drivers' store: Mount VirtIO 1.1.5 ISO to the VM.Using the command pnputil get the list of the drivers: pnputil.exe -e From the list, find viorng.inf and vioser.inf drivers and delete them using pnputil command. Example syntax (replace the oemx.inf with the necessary filename): pnputil.exe /delete-driver <oemX.inf> /force In Device Manager, update drivers for the devices listed above.Find the device in Device Manager.Right-click on the device and choose ‘Update driver’.Click “Browse my computer for driver software” and specify the directory where drivers are located on virtual CD-ROM, for example, D:\Windows Server 2008R2\amd64, then click next.If the update of all drivers is successful, then run “Nutanix-VirtIO-1.1.5-64bit.msi” installer located on the CD-ROM.If the installation of MSI is successful, attempt to reinstall or upgrade NGT.
KB6296
Entity Sync from On-prem PC to Nutanix DRaaS PC fails
After creation of protection policy, noticed that initial sync between Onprem and Nutanix DRaaS PC failed with message “Syncing entities from Availability Zone; US-EAST-1B”.
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap. After creation of a protection policy, it was noticed that the initial sync between the on-prem and Xi PC failed with the message "Syncing entities from Availability Zone; US-EAST-1B".
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)" Steps for SRE:Open Xi-ONCALL to fix the gflag on Xi side PC and then force syncFrom previous case:Devyani Kanada and Nikhil Loya engaged as this is known issue from ENG-173543 and ENG-160453, where the problem was the Xi metering/Billing URL changed but did not reflect on current version of AOS EA-5.9.Added the gflag in magneto.gflags, aplos.gflags and aplos_engine.gflags under the ~/config directory on the Xi side to change the billing URL and then did the ForceSync on Onprem PC VM to fix issue. XRE should be able to assist with this request.Gflag: allssh echo "--magneto_xi_metering_url=\"https://xi.nutanix.com/api/v1/prices\"" >> ~/config/aplos.gflags
KB3708
NCC Health Check: virtual_ip_check
The NCC health check virtual_ip_check checks if the cluster virtual IP is configured and reachable.
The NCC health check virtual_ip_check checks if the cluster virtual IP is configured and reachable. Running the NCC Check You can run this check as part of the complete NCC health checks. nutanix@cvm$ ncc health_checks run_all Or you can run this check individually. nutanix@cvm$ ncc health_checks system_checks virtual_ip_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every hour by default. This check will generate an alert after 1 failure. Sample Outputs For Status: WARN Running : health_checks system_checks virtual_ip_check For Status: PASS Running : health_checks system_checks virtual_ip_check Output messaging [ { "Check ID": "Check if virtual IP is configured and reachable" }, { "Check ID": "Cluster virtual IP is not configured.\t\t\tCluster services are down, or the cluster is not started yet." }, { "Check ID": "Configure a valid virtual IP for the cluster and verify that all cluster services are up." }, { "Check ID": "Nutanix features that use virtual IP addresses might be adversely affected." }, { "Check ID": "vm_type Virtual IP Check" }, { "Check ID": "vm_type Virtual IP is configured but unreachable." } ]
If the virtual_ip_check does not pass, check that a cluster Virtual IP is configured, confirm it is configured in the same subnet as the CVMs (Controller VMs) or PCVMs (Prism Central VMs) external interfaces, that it does not conflict with any other devices on the same network configured with the same IP, and that it is reachable using ping from the CVMs/PCVMs.Note:1. There is a known issue with AOS 6.0.x where the check could still fail despite Cluster Virtual IP being configured. Refer to KB-11475 https://portal.nutanix.com/kb/11475 for more details.2. Prism Virtual IP is configured but unreachable alert and VIP becomes permanently unreachable observed on AOS 6.5.1.x. Refer to KB-13870 http://portal.nutanix.com/kb/13870for more details.Checking the cluster virtual IP in the Prism web console Click the cluster name in the main menu of the Prism web console dashboard.In the Cluster Details pop-up window, check that a Cluster Virtual IP is configured and is correct. Verifying the cluster Virtual IP by using the CLI Log on to any CVM, or PCVM for a PC cluster, in the cluster through SSH.View if a Cluster Virtual IP is configured. nutanix@cvm$ ncli cluster info Configure the Virtual IP if the cluster Virtual IP is not configured or incorrect for Prism Element. Log on to any CVM in the cluster through SSH. nutanix@cvm$ ncli cluster set-external-ip-address external-ip-address=x.x.x.x Note: The cluster Virtual IP might not be accessible if some of the cluster services are down or if the cluster is not started yet. Verify that all cluster services are up by connecting to any CVM in the cluster through ssh. nutanix@cvm$ cluster status Verify the VIP is reachable from CVMs/PCVMs in the same cluster nutanix@cvm$ ping -c 2 <VIP> If virtual_ip_check PASSes but you are unable to launch Prism UI page using the VIP, verify whether the VIP configured is in the same subnet as that of the CVMs or PCVM IPs. Setting, changing, or removing the Virtual IP by using the CLI for PE and PC (required for Prism Central). Set or change the PC or PE cluster Virtual IP nutanix@CVM/PCVM$ ncli cluster edit-params external-ip-address=x.x.x.x Or remove the PC or PE cluster Virtual IP nutanix@CVM/PCVM$ ncli cluster edit-params external-ip-address=- Note: If the virtual_ip_check is firing for a PC cluster then make sure you are running the above commands from the Prism Central VM. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com.
KB10158
NGT installation fails with the following error message: The system cannot find the file specified. cmd.exe /c net start "Nutanix Self Service Restore Gateway"
NGT installation fails with the following error message: The system cannot find the file specified. cmd.exe /c net start "Nutanix Self Service Restore Gateway".
NGT installation fails with the following error message: The system cannot find the file specified. cmd.exe /c net start "Nutanix Self Service Restore Gateway" Scenario 1In Application event log the below error is logged every time when you try to install/upgrade NGT(the drive letter may be different): File "C:\Program Files\Nutanix\python\lib\../pkg_resources\__init__.py", line 1227, in get_cache_path From the logs we can see that NGT is trying to extract a python egg (an archive with python libraries) to I: drive. "Nutanix Self Service Restore Gateway" runs in the context of a local system account and it should have access to the drive where it extracts the python egg. By default, the file is extracted to the directory defined in %APPDATA% environment variable of the system user.Verify the AppData path for the system user from the Windows registry: Registry key: "HKEY_USERS\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"Registry value: AppDataFor example: Scenario 2 In this scenario, the user VM has a custom registry entry for Nutanix Guest Tools at the following path: Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonService\2.7. For some versions of NGT, having this value causes the installation to fail, as described with the error message 'The system cannot find the file specified. cmd.exe /c net start "Nutanix Self Service Restore Gateway".' In these cases, we have also seen the following event in the Windows Event logs (Event ID 3 with Nutanix SSR as the source): The instance's SvcRun() method failed Scenario 3In the Nutanix_Guest_Tools_<date&time>_NutanixSSRPackage.log file, which is located under %TEMP% directory, the following can be observed: ExecuteSilently: Command = cmd.exe /c net start "Nutanix Self Service Restore Gateway" If you manually try to start the service by running the following command (before closing the NGT installer): cmd.exe /c net start "Nutanix Self Service Restore Gateway" It will immediately fail without any error: C:\> cmd.exe /c net start "Nutanix Self Service Restore Gateway" Based on the presented information, we can see that the installation is having problems starting the Nutanix Self-Service Restore Gateway service. Scenario 4 In this scenario in the Application event log, the below error is logged whenever you try to install/upgrade NGT: Traceback (most recent call last): In the Nutanix_Guest_Tools_<date&time>_NutanixSSRPackage.log file, we can see the following error log: ExecuteSilently: CreateProcess for command: cmd.exe /c net start "Nutanix Self Service Restore Gateway" file succeeded. In the Nutanix_Guest_Tools_<date&time>.log, the following log is shown: [0158:1F60][2022-04-12T08:58:21]i301: Applying execute package: NutanixSSRPackage, action: Install, path: C:\ProgramData\Package Cache\{A8C0BA8F-DA00-499A-86E1-9F0D46A0C01B}v2.1.3.0\NutanixSSRUI.msi, arguments: ' ALLUSERS="1" ARPSYSTEMCOMPONENT="1" MSIFASTINSTALL="7" ROLLBACKSSRUI=""' Referring to the given information, we see that the Nutanix Self Service Restore Gateway service is not starting due to some Python cache not properly loading. 
Scenario 5Reinstalling NGT may rollback after showing similar "The system cannot find the file specified" errors about "Nutanix Self Service Restore Gateway" service and "Nutanix Guest Agent" service if the "Nutanix Self Service Restore" package has already been installed, but some of Python files of this package are missing.This issue happens if an NGT installation is cancelled but does not finish (system heavy load, system restart, etc). After Windows is restarted during the cancellation attempt, NGT is removed by the "Programs and Features" control panel app. The "Nutanix Self-Service Restore" package remains in the system partially, and so when the NGT installation is attempted, the installation process skips installing the "Nutanix Self-Service Restore" package, since the installation process has determined that the package is already installed. The installation process fails to start the "Nutanix Self-Service Restore Gateway" service and the "Nutanix Guest Agent" service because some of the Python files are missing from the "Nutanix Self Service Restore" package; thus, the installation process rolls back the NGT setup.You may see in the "Application" Windows event log, "ModuleNotFoundError" when the two services were being started by the installation process: Traceback (most recent call last): And in this example case, "systems_utils.py" file did not exist in "C:\Program Files\Nutanix\ssr\ssr_utils" folder. This file belongs to "Nutanix Self-Service Restore" package. CMD> tree /f /a "C:\Program Files\Nutanix" You may also see "Get-Package" PowerShell command showing NGT-related packages even while "Programs and Features" control panel application does not list "Nutanix Guest Tools". (In the following example, "Nutanix Guest Tools Infrastructure Components Package 1" and "Nutanix Self Service Restore" packages existed even though "Nutanix Guest Tools" was not listed by "Programs and Features" control panel application. PS C:\Windows\system32> get-package | ft -AutoSize -Wrap
Scenario 1 Revert the AppData registry key at “HKEY_USERS\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders” on the affected virtual machine to a default value: Name: AppData Reboot the virtual machine.After reboot verify the following: The data in AppData registry value is not changed%AppData% environment variable for local system user is set to the expected value (path) Start command prompt in the context of local system user: PS C:\> psexec -i -s cmd.exe Note: psexec.exe is Microsoft tool available at here https://docs.microsoft.com/en-us/sysinternals/downloads/psexecMake sure that you are in the context of local system user: whoami Check AppData environment variable: set appdata If all verifications are successful, then try NGT installation. If step 3a fails, find the group policy updating the variable. Scenario 2Assuming that the installation is failing as described, the workaround for this issue is to temporarily delete the registry key on the user VM, install NGT, and then re-add the registry key as required. It is recommended to do this during a window when you can safely perform the steps. Removing this registry key may impact the application which is using this key and so this should not be done without consideration or prior planning. You can follow the general steps below to work around the issue Back up this VM to ensure you can revert back if needed. Find a maintenance window when you can afford to remove the registry key, understanding that this process has the potential to impact the application using this registry keyUsing the Windows Registry Editor, find the Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonService\2.7 registry keyExport the registry key for easy reapplication. Save this .reg file somewhere you will remember it (such as the Desktop)Delete the exported registry key Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonService\2.7Retry the Nutanix Guest Tools installation and reboot as needed. Re-apply the registry key using the exported .reg file This issue was identified in version NGT 1.7.5, although it may affect other versions as well. As a troubleshooting measure, you can try cloning the VM to a test VM and trying these steps on the test VM. This may help identify if this is actually the cause or not, without impacting the original VM. A fix for this issue is being worked on for a future release of NGT, to avoid having to follow these steps.Scenario 3The behaviour seen in this scenario can be explained by the Windows Application Event log being not writable. Possible causes are: The Application log is corrupted or full.Windows Event Log service is not running.A group policy "Configure log access" is enabled for UVMs in the domain, specifying the security descriptor for accessing log files. When enabled, only those users whose security descriptor matches the configured specified value can access the log. You can check if the policy is enabled using the following command: GET-GPO (Information about the command and its options can be found Here https://learn.microsoft.com/en-us/powershell/module/grouppolicy/get-gpo?view=windowsserver2022-ps) In case the Application log is corrupted or full, clear the entries to restore the functionality of the Application log. Open Event Viewer on the VM in question and select Windows log and then Application log Select “Clear Log” on the right pane and click "Clear": At this point, entries in the Application log should start appearing. 
In case the Windows Event Log service is not running, check the status of the service and, if needed, start it: If the service is not running, right-click on it and choose “Start”: In case "Configure log access" is enabled, disable the below through the Group Policy Management interface: Folder Id: Software\Policies\Microsoft\Windows\EventLog\Application\ChannelAccess Note: Consult with your Active Directory or Security Administrator about disabling this GPO. Scenario 4: Looking at the Application event logs, we can see the gencache.py script is trying to read and load the content of the cached data from the <systemdrive>\windows\temp\gen_py directory. This operation fails due to an invalid load key: '\x00'. This issue might occur due to corruption in the Python cached data from a previous installation. As a workaround for this issue, remove/rename the temporary Python gen_py directory by following the steps below: Open the Services snap-in and locate the Windows Event Log service. Cancel the ongoing installation process of the NGT. Navigate to the <systemdrive>:\windows\temp directory and rename the gen_py directory to gen_py.old. Start the installation of the NGT from the beginning; this process will create a new gen_py directory with the correct files and values. After the installation completes, check the status of the Nutanix Self Service Restore Gateway service. Scenario 5: Remove the partially installed NGT-related packages manually with the "Uninstall-Package" PowerShell command. For example, if you need to uninstall the "Nutanix Self Service Restore" and "Nutanix Guest Tools Infrastructure Components Package 1" packages to clean up NGT completely: PS> Get-Package -Name "Nutanix Self Service Restore" | Uninstall-Package After you have cleaned up the partially installed NGT-related packages, retry the NGT setup.
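If you are not sure which NGT-related packages remain on the guest, you can first list everything with a Nutanix name and then remove only the leftovers. The following is a minimal sketch; the "Nutanix*" wildcard is an assumption and will match every Nutanix-named package, so review the listing before uninstalling anything:

PS> Get-Package -Name "Nutanix*" | Select-Object Name, Version
PS> Get-Package -Name "Nutanix Self Service Restore", "Nutanix Guest Tools Infrastructure Components Package 1" | Uninstall-Package

Once Get-Package no longer returns any NGT-related packages, the next NGT installation attempt should proceed as a clean install.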
KB14113
G5 EOL and AOS compatibility case handling
There are 14 NX G5 platforms launched in the field. 10 of the 14 platforms will be EOL by the end of October 2023 and the remaining 4 platforms NX-9030-G5, NX-3155G-G5, NX-1175S-G5 and NX-8150-G5 will be EOL by the end of February, July and September 2024. The cross-functional team have agreed to support AOS 6.5.x as the last AOS version for all G5 platforms. This means customers will not be able to upgrade to AOS 6.6 and above on any of the NX G5 platforms. The precheck will fail during the upgrade process.
There are 14 NX G5 platforms launched in the field. 10 of the 14 platforms will be EOL by the end of October 2023, and the remaining 4 platforms NX-9030-G5, NX-3155G-G5, NX-1175S-G5 and NX-8150-G5 will be EOL by the end of February, July and September 2024. More details can be found on the Portal EOL Information page: https://portal.nutanix.com/page/documents/eol/list?type=platform. According to the general support policy, “Nutanix will qualify all Short Term Support (STS) releases of AOS leading up to and including the first Long Term Support (LTS) release that is made after the hardware end of maintenance date”. The policy is written this way to cover the period needed to support the hardware platform with an AOS release family through to EOL and potentially the extended support period (if applicable). However, as an exception, the cross-functional team (Engineering, Support, PM) has agreed to support AOS 6.5.x as the last AOS version for all G5 platforms. This means customers will not be able to upgrade to AOS 6.6 and above on any of the NX G5 platforms. The precheck will fail during the upgrade process (ENG-474340 and ENG-502668).
Nutanix may provide extended support contracts to customers running on the 10 NX G5 platforms that will be EOL in October 2023. Nutanix will not provide any extended support contracts to the NX G5 customers running on NX-9030-G5, NX-3155G-G5, NX-1175S-G5 and NX-8150-G5 as the last AOS release family supported on these platforms does not have enough support life to cover an extension. In case of any escalations for extended support, please start an internal email for further discussion and CC the following attendees: Your manager, Yuantao Jin & Chad Singleton: [email protected] Mulchand: [email protected] Miller & Swapna Utterker: [email protected] If the customer escalations are related to AOS compatibility and support, please start an internal email for the discussion and CC the following attendees: Your manager, Yuantao Jin & Chad Singleton: [email protected] Mulchand: [email protected] Vedanabhatla: [email protected] Nanda Kumar: [email protected]
KB13526
Objects VMs send traffic to centos.pool.ntp.org
Objects VMs send traffic to centos.pool.ntp.org
Customers may report that Objects VMs are trying to synchronize time not with the existing, configured NTP server on PC/PE but with some other, unrelated, external one. While checking the aoss-ui-gateway pod configuration, we see that there is an /etc/ntp.conf file that contains X.centos.pool.ntp.org servers in its configuration. Log in to the Objects cluster and run: [nutanix@object-prod-eee9b1-default-0 ~]$ kubectl exec -i -t aoss-ui-gateway-0 -- grep centos /etc/ntp.conf KB-8170 https://portal.nutanix.com/kb/KB-8170 describes the login procedure if it is not familiar. tcpdump (if installed) will show that this pod indeed sends traffic towards these NTP servers: [nutanix@aoss-ui-gateway-0 /]$ sudo tcpdump -i eth0 port 123
Engineering is working on a long-term fix in the scope of ENG-491960 https://jira.nutanix.com/browse/ENG-491960. We have not seen any cases where this affects running workloads. Should it happen, please inform the engineering team.
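To compare the pod configuration against what is actually configured for the cluster, you can list the NTP servers set on the hosting PE cluster (or PC) from a CVM. This is only a quick comparison step using standard ncli; output will differ per environment:

nutanix@cvm$ ncli cluster get-ntp-servers

If the pod's /etc/ntp.conf still lists the centos.pool.ntp.org servers instead of these, the behaviour described in this article applies.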
KB11782
Projects page on Prism Central versions Web UI may hang on load
This article provides a workaround to an issue where the Projects page on Prism Central versions 2021.5.0.1, 2021.7, 2021.9 web UI may hang during loading.
On Prism Central (PC) versions 2021.5.x, 2021.7.x, 2021.9.x the Projects page may hang on load: The problem is observed when the cumulative VM count in Projects loaded on the web UI is more than 500. Consider the following examples: If project "prj1" contains 300 VMs and project "prj2" contains 250 VMs, then the problem will be triggered as "prj1" and "prj2" cumulative VM count is more than 500.If project "prj3" contains 1000 VMs, then the problem will be triggered as the VM count in "prj3" alone is more than 500. By default, Prism Central UI will try to load all available Projects.
SOLUTION: This problem was resolved in Prism Central version pc.2022.1. Upgrade to this release to fix the issue. WORKAROUND: If an upgrade to pc.2022.1 is not possible, use the following workaround. On pc.2021.5.x, pc.2021.7.x, pc.2021.9.x, reduce the cumulative VM count by using filters. Apply filters on the Projects page to limit the number of Projects loaded on the web UI. Considering the example scenario above, filter by name "prj1" so only one project will be loaded to view. The maximum count of VMs will be 300, and the page should load successfully: You can obtain a list of project names to apply in filters using the nuclei command on the Prism Central VM: nutanix@PCVM$ nuclei project.list count=999 NOTE: In case there is a single project with more than 500 VMs, the above workaround with filters on pc.2021.5.x, pc.2021.7.x, pc.2021.9.x may not work. Upgrade to pc.2022.1 or contact Nutanix Support https://portal.nutanix.com/ in this case.
""ISB-100-2019-05-30"": ""ISB-029-2016-12-01""
null
null
null
null
KB14295
NDB Postgres DB Server provisioning fails with "Failed to configure storage for database" with 1Gb memory profile
Provisioning a Postgres DB Server fails with the error "Failed to configure storage for database" if 1Gb memory compute profile is selected
Provisioning of a Postgres DB Server fails with the following error when a custom compute profile with 1Gb memory is selected: 'Failed to configure storage for database.Check log file /tmp/5b6cc4c9-65d8-45f5-933b-4c03da448328/5b6cc4c9-65d8-45f5-933b-4c03da448328_SCRIPTS.log on xxx.xxx.xxx.xxx for further details' The log file referred to above will not contain any useful information and will only say that services do not exist.
Nutanix Engineering is working on resolving the issue in future NDB versions. Workaround: Use a compute profile with a minimum of 2Gb of memory.
KB12034
LCM BIOS Update failed while uploading BIOS via Redfish
LCM BIOS updates may fail while uploading the BIOS file via Redfish; the connection gets dropped with "Connection aborted: Broken pipe", resulting in a failed LCM update.
On platforms running G6/G7 and BMC 7.10 with Redfish enabled, it was noted that a BIOS update via LCM may fail, and the error message presented by LCM in the Prism UI is: Operation failed. Reason: Update of release.smc.redpool.bios.update failed on xxx.xxx.xxx.105 (environment hypervisor) with error: [Update failed with error: [Failed to upload BIOS binary. Status: None, Response: {}]] Logs have been collected and are available to download on xxx.xxx.xxx.115 When looking at the lcm_ops.out file on the LCM leader, the following can be found, indicating successful Redfish communication all the way through until the POST method is called to upload the BIOS binary: lcm_ops.out: 2021-09-06 22:37:46,193Z INFO helper.py:113 (xxx.xxx.xxx.105, update, 8f55a915-1525-4605-b038-d88cc0e9381b, upgrade stage [1/2]) [2021-09-06 22:37:14.193112] Sleeping for 10 seconds
To resolve this, reset the IPMI to factory defaults while preserving the user configuration: Open the IPMI web UI. Go to Maintenance -> Factory Default. Select the option "Remove current settings but preserve User configurations". Click "Restore". Retry the BIOS update via LCM. Alternatively, a screenshot guide can be found in KB-10833 https://portal.nutanix.com/kb/10833.
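Before retrying the update, you can optionally confirm that the BMC's Redfish service responds again after the factory default. This is just a quick sanity check and not part of the official procedure; replace <ipmi_ip> with the IPMI address of the affected node (the Redfish service root normally answers without authentication):

nutanix@cvm$ curl -sk https://<ipmi_ip>/redfish/v1/

A JSON service-root document in the response indicates Redfish is reachable and the LCM BIOS update can be retried.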
KB13599
Objects cluster becomes unmanageable if hosting PE cluster is unregistered from PC
Objects cluster becomes unmanageable if hosting PE cluster is unregistered from PC
Objects cluster upgrade may fail with following error: Upgrade of entity PC CORE CLUSTER(Objects Service:myobjectsclustername), on host () to version [3.5.1] started on 2022-09-06-02:02:03, finished on 2022-09-06-02:03:07 with error message: [Update failed with error: [Upgrade of Objects service myobjectsclustername to 3.5.1 failed with error: CVM IPs are not found]] API call to get the PE cluster details where the Objects cluster myobjectsclustername is residing fails as UUID cannot be found: nutanix@PCVM:~$ less data/logs/aoss_service_manager.out.20220906-084329Z The same in Aplos: nutanix@PCVM:~$ less data/logs/aplos_engine.out The cluster UUID cannot be found in the list of currently registered PEs nutanix@PCVM:~$ nuclei cluster.list | grep 0005c69c-66e4-5bb0-5840-3cecef5aee2e Objects cluster's VMs cannot be found with Objects cluster name nutanix@PCVM:~$ ncli vm ls | grep myobjectsclustername This means PE cluster (UUID 0005c69c-66e4-5bb0-5840-3cecef5aee2e) on which Objects cluster was initially deployed, was later unregistered from PC.
Re-register the PE cluster back to the PC from which the Objects cluster was deployed and retry the upgrade.
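After re-registration, you can confirm from the PCVM that the PE cluster UUID from the earlier error is known again and that the Objects cluster VMs resolve by name before retrying the upgrade. These are the same checks used in the description above (substitute your own cluster UUID and Objects cluster name):

nutanix@PCVM:~$ nuclei cluster.list | grep 0005c69c-66e4-5bb0-5840-3cecef5aee2e
nutanix@PCVM:~$ ncli vm ls | grep myobjectsclustername

If both commands now return results, the upgrade should no longer fail with "CVM IPs are not found".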
KB9257
Deleted VM(s) show up when getting a VM list using GET API
Deleted VM(s) show up when getting a VM list using GET API
Users may see an issue where deleted VM(s) on the cluster are still returned when calling the VM GET API - instead of returning a 404, the API returns an empty status. Metadata still exists in IDF for the stale VM(s) in the form of the entity capability. The issue does not affect the VM showing up in the UI; it only affects API calls. If you run an API GET for the VM within PC, you may still be able to retrieve it even though it has been deleted. For example, a VM GET response for https://10.x.x.x:9440/api/nutanix/v3/vms/c00811f8-7979-44e1-ace6-00d49b940dc5 One scenario when this issue may occur is when VMs are created using POST APIs on PE and deleted from the PC UI. DIAGNOSIS: Looking at the /home/data/logs/aplos.out logs, you see the following errors: 20200224-114934:2020-02-26 00:43:02 WARNING intent_spec.py:231 Could not create entity capability object for the entity where the deleted VM in question has UUID = "c00811f8-7979-44e1-ace6-00d49b940dc5" The above error means that when the VM was created, the "entity capability" was never created. In a working state, we do not see stale VMs as they are normally taken care of by the "ssp_cleaner". However, "ssp_cleaner" only deletes when the "entity capability" exists. In this particular case, the "entity capability" does not exist/was never created. "ssp_cleaner" in Prism Element will not clean up the "intent_spec". Subsequently, "ssp_cleaner" in PC will not clean up the "intent_spec" because it still exists on the PE.
PLEASE CONSULT A SR. SRE OR STAFF SRE TO CONFIRM THE VMs ARE DELETED, TO AVOID INADVERTENT CONSEQUENCES, BEFORE USING THE WORKAROUND BELOW. WORKAROUND: The workaround is to delete the "intent_specs" for the deleted VM. Go to nuclei on PE and run vm.list. First, identify the VM UUID: <nuclei> vm.list Run "diag.get_specs kind_uuid=<VM_UUID>" to get the associated "Spec UUID" for the "Kind UUID": <nuclei> diag.get_specs kind_uuid=c00811f8-7979-44e1-ace6-00d49b940dc5 Run "diag.delete_specs uuids=<Spec UUID>" to delete the "intent_specs" for the VM in question. Use the Spec UUID from the above output, which is "db457c4e-ed01-54b1-ba71-bc1b6650889f" in this case. <nuclei> diag.delete_specs uuids=db457c4e-ed01-54b1-ba71-bc1b6650889f
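To verify the cleanup, call the same v3 VMs endpoint again for the deleted VM's UUID; it should now return a 404 instead of an empty status. A quick check with curl is shown below as an example only (substitute your Prism Central/Prism Element IP, credentials and the VM UUID):

$ curl -k -u admin -X GET https://10.x.x.x:9440/api/nutanix/v3/vms/c00811f8-7979-44e1-ace6-00d49b940dc5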
KB13414
Unable to Login to Prism Central as Postgres Master Pod's Disk Runs Out of Space if Replica is Down
The Postgres master pod's disk runs out of space if the replica is down, which can further lead to IAM pods crash-looping, and users will be unable to log in to Prism Central.
IAM pods fail to start with status CrashLoopBackOff as they cannot connect to the Postgres database: e="2023-01-31T21:45:11Z" level=info msg="Using memory for cache" Check if Postgres is up or if the cape pod (not the cape-backrest) only has 1 running pod (Ready will show 1/2). For issues seen with the cape-backrest pods, please see KB-16254 https://portal.nutanix.com/kb/16254 nutanix@PCVM:~$ sudo kubectl get pods -n ntnx-base The status of the “pgo-deploy” pod should be Completed and for “postgres-operator”, it should be Running 4/4. Pods with your cluster name (here cape) should be running. The PostgreSQL Patroni cluster is in a crash loop due to no space left in /pgdata on the pg leader: nutanix@PCVM:~$ sudo kubectl logs -n ntnx-base cape-747ffcbcff-5zj5p database | grep FATAL |tail Confirm if the disk is out of space: nutanix@PCVM:~$ sudo kubectl exec -n ntnx-base cape-747ffcbcff-5zj5p -c database -- df -h Here, leader cape-747ffcbcff-5zj5p accumulated WAL logs endlessly as it was unable to replicate with replica cape-ofhj-586b67fdb7-lr2zr (this could be due to the CoreDNS issue mentioned in KB-13003 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V000000LZGcSAO). Verify by how much the replica pod is lagging: nutanix@PCVM:~$ sudo kubectl exec -n ntnx-base cape-747ffcbcff-5zj5p -c database -- patronictl list
If the issue is seen before pc.2022.6, this is a known issue - ENG-450908 https://jira.nutanix.com/browse/ENG-450908. This issue is resolved in pc.2022.6 and beyond. The issue causes the disk on the primary Postgres pod to fill up because the master and replica are not able to connect with each other. If the issue is seen on pc.2023.3 and beyond, please attach your case to ENG-611958 https://jira.nutanix.com/browse/ENG-611958. This issue has been fixed in pc.2023.4.0.2/pc.2024.1. Workaround: Please engage STL/Devex/Engineering to apply the workaround as this includes manual cleanup of the /pgdata partition. The workaround and scenario are documented here (for reference only): https://docs.google.com/document/d/1AFpTi-eGxsiAFhnUn9ueVVMrF5J0KyFBrGgdAkTHxRE/edit#
KB9157
Getting an error saying 'Checkpoints not found' when trying to import a VM back to Hyper-V manager (Windows Server 2012 R2)
This KB talks about the method to import a VM back to Hyper-v manager when its Snapshot XML file is deleted
You may get 'Checkpoints not found' when trying to import a VM to a Hyper-V host. While the exact conditions leading to this situation are unclear, there can be scenarios where the VM's disks, configuration XML files and snapshot disks are present but the snapshot XML file is removed; or a backed-up VM has its configuration XML file in place along with the disks, but the snapshot XML file is missing. The actual snapshot itself, which has the data, is present in both these scenarios, but importing such a VM without the snapshot XML file will result in the following error: We can confirm if the problem is due to the missing snapshot XML file by various methods. The easiest method is to check the VM's folder in the storage and look for the XML file under the Snapshots directory. In the problem case, the snapshot bin and VSV files will be present but not the XML file. Another method can be used to double-confirm the problem --- turn on extra debug Hyper-V VMMS logging in Event Viewer when importing the VM: Open Event Viewer on the host where you are trying to import the VM back; Enable Analytic and Debug logs to be shown by Event Viewer: View -> Show Analytic and Debug Logs; Enable VMMS analytical logging: Application and Service Logs -> Microsoft -> Windows -> Hyper-V VMMS -> Analytic -> Enable log; Now try to import the VM and wait till the operation fails; Disable VMMS analytical logging: Application and Service Logs -> Microsoft -> Windows -> Hyper-V VMMS -> Analytic -> Disable log; Check the Warning level message with Event ID 1101; Go to details and there will be an error message similar to the below (in Friendly View under UserData): VmmsPlannedVirtualMachine::ImportSnapshots: Snapshot config file \\NTNX-CLSTR-01.hdma.net\NTNX-CNT-02\SV-Finance\Snapshots\900398F3-E91F-453E-96AF-688D2CC90CF1.xml could not be found. Skipping snapshot import.
At this point, we can try a couple of methods to restore the VM without its snapshots. Note: The data from the snapshots would be lost. In the Import Virtual Machine wizard, the 'Locate Checkpoints' option will automatically appear after the 'Choose Import Type' selection if the snapshot XML cannot be found automatically. Use 'Remove checkpoints' at this point to import the VM without snapshots. This is the safer method. Only if the above fails, the VM's configuration XML file needs to be edited to remove the snapshot references prior to importing the VM again. Make sure you back up the original XML file prior to any modifications: Go to the VM folder and open the VM's configuration XML file using a text editor; Find the snapshots section (starts with <snapshots> and ends with </snapshots>); When there are snapshots, the snapshots section would look like the below: <snapshots> Edit the snapshots section to make it look like the one without snapshots and save the file (remove the section between <node0> and </node0>, and change the size value to 0). After the change, the snapshots section should look as below: <snapshots> Save the changes and try to import the VM.
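Before re-attempting the import, you can optionally confirm that no snapshot node references remain in the edited configuration file. A simple PowerShell check is sketched below; the path is a placeholder, so point it at the actual VM configuration XML you edited:

PS> Select-String -Path "<path to the VM configuration>.xml" -Pattern "<node0>"

If the command returns no matches, the snapshot references have been removed and the import can be retried.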
KB14047
A160161 - A160162 - File Server Promote or Demote Failed
Addressing Alerts A160161 and A160162, File Server Promote Failed or Demote Failed, respectively.
This Nutanix article provides the information required for troubleshooting the alert FilesServerPromoteFailed or FileServerDemoteFailed for your Nutanix cluster.Addressing Alerts A160161 and A160162, File Server Promote Failed or Demote Failed, respectively. Sample Alert Block Serial Number: 23SMXXXXXXXX Output messaging [ { "Check ID": "File Server Promote Failed" }, { "Check ID": "Check alert message for details" }, { "Check ID": "Try to promote the file server again, and if the failure persists, then contact Nutanix support." }, { "Check ID": "File Server may not be usable." }, { "Check ID": "A160161" }, { "Check ID": "File server promote failed." }, { "Check ID": "File server {file_server_name} promote failed due to {reason}." }, { "Check ID": "160162" }, { "Check ID": "File Server Demote Failed" }, { "Check ID": "Check alert message for details" }, { "Check ID": "Try to demote the file server again, and if the failure persists, then contact Nutanix support." }, { "Check ID": "File Server may not be usable." }, { "Check ID": "A160162" }, { "Check ID": "File server demote failed." }, { "Check ID": "File server {file_server_name} demote failed due to {reason}." } ]
Troubleshooting: When promoting or demoting a File Server with Metro Availability, there is a chance of failure. In this case, please refer to the Metro documentation https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_6:wc-protection-domain-configuration-ma-wc-c.html to verify that all conditions are met. If the Metro cluster is on ESXi hosts, a host reboot can lead to issues with datastores, which will then lead to failure of promotions and demotions. Refer to KB-8670 http://portal.nutanix.com/kb/8570 for additional information. Once the issue from that KB is addressed, File Server promotions and demotions will be possible immediately. Ensure that the following ports remain open for all the Controller VM IP addresses in Cluster A and Cluster B: TCP 2009 and TCP 2020. Communication between the two metro clusters happens over these ports and, therefore, requires that these ports remain open. Should there continue to be an issue, please contact Nutanix Support for further assistance. Collecting Additional Information: If you need further assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support http://portal.nutanix.com. Gather the following information and attach it to the support case. Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 http://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 http://portal.nutanix.com/kb/2871. Collect the Logbay bundle from the Minerva leader using the following commands. For more information on Logbay, see KB 6691 http://portal.nutanix.com/kb/6691. Using the File Server VM name: nutanix@cvm$ logbay collect -t file_server_logs -O file_server_name_list=<FS name> Using the File Server VM IP: nutanix@cvm$ logbay collect -t file_server_logs -O file_server_vm_list=<FSVM IP> If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), or the generated log bundle is under 10 MB in size, collect the NCC log bundle instead using the following command on a CVM: nutanix@cvm$ ncc log_collector --file_server_name_list=<FS Name> fileserver_logs Note: Execute the "afs info.get_leader" command from one of the CVMs (Controller VMs) to get the Minerva leader IP. Attaching Files to the Case: Attach the files at the bottom of the support case on the Nutanix Support Portal. If the size of the logs being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 http://portal.nutanix.com/kb/1294.
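To quickly confirm that TCP 2009 and 2020 are open between the two clusters, you can test connectivity from a CVM on one cluster to a CVM IP on the other. This is an illustrative check only, assuming nc (ncat) is available on the CVM; replace <remote_cvm_ip> with a Controller VM IP on the remote cluster:

nutanix@cvm$ for port in 2009 2020; do nc -zv <remote_cvm_ip> $port; done

Repeat the test in the opposite direction, and for each CVM IP participating in the Metro relationship, if a firewall issue is suspected.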
KB1952
Hyperint - setting up log levels
null
By default, hyperint gathers and displays INFO-level log messages. This article describes how to enable DEBUG-level messages.
In the file /home/nutanix/config/hyperint/log4j.xml, locate the following logger definition and change its log level to DEBUG: <logger name="com.nutanix.hyperint" additivity="false"> The hyperint process then has to be restarted: allssh genesis stop hyperint Notes: This modification should be temporary only. Once the needed log collection is done, set the log level back to the default INFO value. This will enable DEBUG logs only on the CVM where the file was modified. If needed, you will have to repeat the steps above on multiple/additional CVMs.
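For reference, in a standard log4j configuration the logger level is set by a child <level> element, so after the change the hyperint logger would look similar to the sketch below. The exact contents of log4j.xml on your CVM (for example, any appender-ref entries) may differ; only the level value should be changed:

<logger name="com.nutanix.hyperint" additivity="false">
  <level value="DEBUG"/>
  <!-- keep any existing child elements, such as appender-ref, unchanged -->
</logger>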
KB4687
BMC firmware and supported TLS versions
null
This KB lists the BMC firmware version and the SSL/TLS version(s) supported on them.
Notes: BMC firmware version 3.41 and 3.48 are still going through QA and not available for download.​​FOR X9 SYSTEMS (NONE-G4/ G5): Uploading the IPMI configuration .bin file after the BMC upgrade (flash and factory restore) will remove the patch and allow communication with unsupported TLS versions. [ { "Model": "NX-1065-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1065S-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1155-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3060-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3060-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3155G-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3175-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-6035C-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-6155-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-8150-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "SX-1065-G5", "BMC Firmware Version": "3.28, 3.35, 3.41", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1020", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1050", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1065S", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3050", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-3060", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-6020", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-6035C", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-6050", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-7110", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-9040", "BMC Firmware Version": "3.48", "Supported TLS Version": "TLS 1.2" }, { "Model": "NX-1065-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-3060-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-3060-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-3155G-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-3155G-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-6035-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-8150-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-9060-G4", "BMC Firmware Version": "1.92, 1.97", "Supported TLS Version": "TLS 1.0, TLS 1.1, TLS 1.2" }, { "Model": "NX-1020", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-1050", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": 
"SSLv3, TLS 1.0" }, { "Model": "NX-1065S", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-3050", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-3060", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-6020", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-6035C", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-6050", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-7110", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" }, { "Model": "NX-9040", "BMC Firmware Version": "3.24, 3.4", "Supported TLS Version": "SSLv3, TLS 1.0" } ]