KB10087
High CVM CPU usage reported due to 3rd party software
All or most of the CVMs report high CVM CPU usage due to 3rd party software running scans - "cpu utilization (100 %) above threshold (90 %)".
All or most of the CVMs report high CVM CPU usage due to 3rd party software running scans - "cpu utilization (100 %) above threshold (90 %)". This happens every time NCC is run, and the alert still triggers even after increasing the CPU allocated to the CVM. No performance impact was reported by the customer, but this can change depending on the environment.
1. top on the CVM shows normal usage.
2. No issues with the load average on the CVM.
3. CPU %idle is 20-30% all the time.
4. Arithmos reports high CPU stats because the value comes from the hypervisor.
5. The Prism UI shows high CPU for those CVMs because the value comes from the hypervisor, and the hypervisor reports high CPU.
6. There is a high number of IOPS and high bandwidth utilization cluster-wide in this case.
7. Even after increasing the CPU on the CVM, the reported CVM CPU usage remains high.
NCC Alert: Detailed information for vm_checks.
The output of the esxtop command, after expanding with the GID of the CVM, shows %USED for all vCPUs above 100%:
8:44:15am up 223 days 7:44, 1562 worlds, 43 VMs, 146 vCPUs; CPU load average: 0.47, 0.47, 0.49
The output of the top command on the CVM does not closely match the esxtop data. The two are not expected to match exactly, but there is still a difference of at least 30-40% in CPU idle:
02:11:15 PM CPU %user %nice %system %iowait %steal %idle
On further investigation, we noticed that top inside the CVM reports lower CPU usage while the host reports very high usage, so we wanted to understand where the gap was coming from. The customer was running Endpoint Detection and Response (EDR) software - Sophos - which was generating a very high number of IOPS and consuming a lot of bandwidth. When the service was stopped on the VMs, we saw a large dip in IOPS and bandwidth consumption. Furthermore, the customer has to involve the vendor to investigate why such high IOPS is being generated by this 3rd party software, causing high CVM CPU usage.
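A hedged way to compare the two views side by side (command options and output formats vary by AOS and ESXi version, so treat this only as an illustration):
nutanix@CVM$ top -bn1 | head -5          (CVM-internal view of CPU usage)
root@ESXi# esxtop                        (in the CPU panel, press 'e' and enter the CVM's GID to expand per-vCPU %USED)
If the hypervisor-side %USED is consistently far above what top reports inside the CVM, the extra load is coming from the I/O the CVM is servicing, not from processes inside the CVM itself.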
KB12979
Nutanix DRaaS | Prism Central has High number of tasks generated for VPN Connection Update which are failing continuously
This KB details an issue observed in Prism Central. Due to a bug introduced in pc.2022.1, customers may notice a large number of failing "VPN connection update" tasks on their on-premises Prism Central. While this does not impact the DR data path, it floods the task database and fires off alerts.
Nutanix Disaster Recovery as a Service (DRaaS) is formerly known as Xi Leap.
An issue has been observed on Xi Leap-connected systems following a Prism Central upgrade to version pc.2022.1, where tasks are generated every 30 seconds for "VPN Connection Update" and fail continuously. Due to ENG-457880 https://jira.nutanix.com/browse/ENG-457880, introduced in pc.2022.1, customers may notice a large number of failing "VPN connection update" tasks on their on-premises Prism Central. While this does not impact the DR data path, it floods the task database and fires off alerts.
Diagnosis:
1. In Prism Central, tasks are generated every 30 seconds and fail for "Vpn connection update".
2. SSH to the on-prem Prism Central VM and look at /home/nutanix/data/logs/atlas.out, where the following trace will be present:
2022-03-07 18:29:15,933Z INFO base_task.py:637 Task c2f6bd4b-e6cc-43af-9a2f-134dfa7424fb(VpnConnectionUpdate) failed with message: HTTPSConnectionPool(host='192.168.236.30', port=8888): Max retries exceeded with url: /v1/gateway/ipsec-conn (Caused by ResponseError('too many 500 error responses',))
3. At this point, find the on-prem VPN VM IP address to be able to log in and debug the problem further. Steps can be found in How to connect VPN VM https://confluence.eng.nutanix.com:8443/display/STK/WF%3A+VPN+IPSEC+troubleshooting
4. Once connected to the VPN VM, check the logs, where an error regarding host resolution will be noticed:
cd /var/log/ocx-vpn
The following signature will be present:
2022-03-07 18:35:42,706 - execformat.executor - DEBUG - Executed command "/opt/vyatta/sbin/my_set vpn ipsec ike-group 7901eb6f-126b-474d-8673-bd6944fecfd1 proposal 5 hash sha256" return code: ('', '', 0)
To fix the issue and prevent new tasks:
1. Log in to the Xi Leap PC for the tenant via AGS (AGS access is required; ask a Xi Leap SME https://confluence.eng.nutanix.com:8443/display/STK/Xi+Leap if needed).
2. Using nuclei, run the following to get the UUID for the VPN connection:
<nuclei> vpn_connection.list
3. Run the following command to explicitly set the default QoS (Quality of Service) and DPD (Dead Peer Detection) parameters for the VPN connection:
<nuclei> vpn_connection.update <vpn connection uuid> interval_secs=30 timeout_secs=120
Run the above command for each VPN connection created for the tenant.
4. After running the above steps, the task will succeed, and the repeated VPN connection update tasks should stop. Check all the VPN connections again by using the following command:
<nuclei> vpn_connection.list
KB5120
Commvault backups may fail after backup job reconfiguration
Commvault IntelliSnap backup job configuration can be changed to use Rest API v3 from v2.
Error message in Commvault IntelliSnap:
2412 101c 01/01 21:00:22 16208 CVLibCurl::CVLibCurlSendHttpReqLog() - failed to send http request
Observations:
- A REST API v3 call is created by Commvault, instead of v2 as before the Commvault upgrade or reconfiguration.
- A Nutanix cluster local user was used and backups were working previously (with API v2).
- No directory services were configured on the Nutanix cluster.
- REST API v3 requires directory services and SSP to be configured.
- Required privileges for the user on Nutanix: Cluster Admin and SSP Admin.
1. Configure the Nutanix cluster with directory services (AD/LDAP), if not already done.
2. Create a Commvault service account in the directory (AD/LDAP).
3. Grant the service account the required privileges: Cluster Admin and SSP Admin.
4. Test the user by logging in to Prism and SSP with the UPN (user@domain).
5. The customer or Commvault Support needs to update the backup job with the directory user.
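A hedged way to confirm the directory account works against the v3 API before updating the Commvault job (the account name, password, and cluster VIP below are placeholders; /users/me simply returns the calling user when authentication succeeds):
curl -k -u 'svc_commvault@domain.com:<password>' https://<cluster-vip>:9440/api/nutanix/v3/users/me
A 200 response with the user details indicates the v3 authentication path that Commvault will use is functional.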
KB14076
Prism Central – In Microservice infrastructure enabled Prism Centrals, /home directory space can be exhausted due to docker folders
In PC clusters with CMSP enabled in pre-pc.2022.9 releases, the size of /home/docker/docker-latest folder keeps growing over time. This leads to the eventual exhaustion of /home directory space.
Note: Starting with pc.2023.1.0.1, the migration of docker folders to the MSP disk is handled automatically through the upgrade, eliminating the need to follow this article's instructions.
In Prism Central (PC) clusters with microservice infrastructure enabled in pre-pc.2022.9 releases, the size of the /home/docker/docker-latest folder keeps growing. This leads to the eventual exhaustion of /home directory space.
Scenario 1: New deployment of Prism Central with version > pc.2022.9. The deployment will automatically place the docker folder on Prism Central's new MSP disk. No user intervention is needed.
Scenario 2: PC is running <= pc.2022.6.0.x, and microservice infrastructure IS NOT enabled on Prism Central.
Scenario 3: PC is running <= pc.2022.6, and microservice infrastructure IS enabled on Prism Central.
- An upgrade to pc.2023.1.0.x/pc.2023.3 will automatically migrate the docker folder to the new MSP disk on Prism Central. No user intervention is needed.
- However, if the upgrade is not possible due to insufficient space on the /home disk, consider running the manual steps below.
Nutanix Engineering has developed a script to migrate the docker folder to the new MSP disk on versions pc.2022.6 and pc.2022.9. This scenario will be handled as part of an upgrade in a future version.
Note: This script is approved for use only on pc.2022.6 and pc.2022.9. If your PC cluster is running a version lower than pc.2022.6 (for example, pc.2022.4), upgrade to at least pc.2022.6.x if possible. If an upgrade is not possible, engage Nutanix Support http://portal.nutanix.com.
Essential conditions to be checked before running the script:
CMSP Enabled Check:
Check the version of the PC and ensure it is less than pc.2022.9. Confirm CMSP is enabled by performing the following:
- If the login screen shows the new IAMv2 screen, then microservice infrastructure is enabled.
- Log in to Prism Central > Settings > Prism Central Management and check the section "Prism Central on Microservices Infrastructure". If the section says "Enable Now", the microservice infrastructure is not enabled. If there are details such as domain name, DNS, or private IP details, then microservice infrastructure is enabled.
- Check if the following services are enabled: CMSP, viz, envoy, fluent, keepalived, registry, CoreDNS (in the example below, envoy and CoreDNS are enabled, as the symlink is present only for those services). If they are not, then microservice infrastructure is not enabled.
nutanix@PCVM:/etc/systemd/system/multi-user.target.wants$ ls
/etc/fstab Check:
Examine the contents of the /etc/fstab file by running the 'cat' command. If you see the output below with "/home/docker/docker-latest", do not proceed with the manual migration:
/home/nutanix/data/sys-storage/NFS_2_0_270_f136e388_01c3_4d38_971a_827e95861cfd/docker/docker-latest/ /home/docker/docker-latest none nosuid,defaults,bind 0 0
To proceed, make sure there is NO entry for /home/docker/docker-latest.
Disk Space Check:
Check the available disk space on the /home partition:
nutanix@PCVM:~$ df -h
Check the /home partition usage amongst all users. Verify if the "docker" user is using the most space:
nutanix@PCVM:/home$ sudo du -skx * | sort -rn
Verify if docker overlay2 is taking up the most storage:
nutanix@PCVM:/home$ sudo du -cmh --max-depth=1 /home/docker/docker-latest | sort -h
If you confirm that the docker files use the most space on /home, it is necessary to migrate them to the MSP partition. Run the NCC health check (KB-5228 http://portal.nutanix.com/kb/5228) to verify if PC VM disk usage is exceeding thresholds:
nutanix@PCVM:~$ ncc health_checks system_checks pcvm_disk_usage_check
DO NOT UPGRADE PRISM CENTRAL TO ANOTHER pc.2022.6.x version if the workaround has already been implemented on a Prism Central cluster running version pc.2022.6.x. Only upgrade to pc.2022.9 or higher in that scenario. An upgrade to another pc.2022.6.x version would result in the PC upgrade getting stuck. This issue is resolved in pc.2022.9 and higher.
If all the conditions are met, engage Nutanix Support http://portal.nutanix.com for further assistance. If the /home usage IS NOT related to docker files, refer to KB-5228 http://portal.nutanix.com/kb/5228 to troubleshoot the issue. ONLY use this KB if the /home usage is high due to docker files.
KB15482
Project Users Are Unable to Manage VM Templates in PC
Project Users are Unable to Manage VM Templates in PC irrespective of permissions granted to the user
Project users cannot manage VM templates even if explicit permissions are granted to the user. The Templates page would display the following error:
Failed to fetch Templates, please make sure all Prism Central services are running.
If the user tries to "Create VM From Template", the below error is observed:
Unable to fetch VM Templates
Identification:
To troubleshoot the cause of failure, we will need to track the VM templates API call as mentioned below.
On the Prism leader, /home/apache/ikat_access_logs/prism_proxy_access_log.out shows that the v4 API call is being forwarded to the Mercury service on PCVM .25 and receiving a 500 error code response:
[2023-08-22T10:58:22.327Z] "GET /api/vmm/v4.0.a1/templates?vmSpec=true HTTP/2" 500 - 0 260 34 34 "x.x.x.x" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "24e49fb2-f19f-4660-9e3a-9a47bf616644" "x.x.x.x" "x.x.x.25:9444"
Checking the Mercury logs on PCVM .25 further, we observe a 500 (INTERNAL ERROR) response from the downstream service handling the API call:
I20230822 10:58:22.327766Z 119008 request_processor_handle_v3_api_op.cc:168] <HandleApiOp: op_id: 696527> Received Api request with message ID: 7, URI: /api/vmm/v4.0.a1/templates?vmSpec=true, base-path: /api/vmm/v4.0.a1/templates, type: GET, XFF: 'x.x.x.126', X-Request-Id: 24e49fb2-f19f-4660-9e3a-9a47bf616644, is_internal_request: false, is_privileged_user: false, is_mtls_authenticated: false, is_chunked_request: false, body-size: 0
As this is a v4 API call, the above API request is forwarded from Mercury to Adonis. The below response can be observed in the /home/nutanix/adonis/logs/access/access_log.log file, where the API call gets a 500 error code response:
127.0.0.1 - - [22/Aug/2023:03:58:22 -0700] "GET /api/vmm/v4.0.a1/templates?vmSpec=true HTTP/1.1" 500 260
At the same time, /home/nutanix/adonis/logs/prism-service.log will have the following entry indicating that there was an error response from Catalog for the API call:
2023-08-22 10:58:22,355Z WARN [XNIO-2 task-2] [69cbecbc09add223,69cbecbc09add223] RpcClient:doHandleRpcResponse:454 Transport error reported by bottom half while trying to send RPC with rpcId=6635168871055622275, reason=Error response code: 500
Checking the Catalog master logs at this time, the following traceback will be seen in the catalog.out log file:
Traceback (most recent call last):
Catalog forwards a filter request to IAM, and in the case of project users, the filter response includes project_uuid. The above exception is raised because project_uuid is currently an unknown IAM filter that the Catalog service is unable to handle. Because Catalog cannot handle the IAM filter response, it leads to the UI errors "Unable to fetch VM Templates" or "Failed to fetch Templates".
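As a hedged shortcut for the identification steps above (the first two log paths are those quoted in this article; the catalog.out path is an assumption, as it typically lives under ~/data/logs/), the failing call can be traced with:
nutanix@PCVM$ grep 'vmm/v4.0.a1/templates' /home/apache/ikat_access_logs/prism_proxy_access_log.out
nutanix@PCVM$ grep 'vmm/v4.0.a1/templates' /home/nutanix/adonis/logs/access/access_log.log
nutanix@PCVM$ grep -A 20 'Traceback' ~/data/logs/catalog.out | tail -40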
This issue is resolved in pc.2023.4. Upgrade Prism Central to that version or newer to prevent the issue. Until then, customers can still manage VM templates using local users or non-project AD users.
Foundation may crash during AOS download for large scale installs/add-node operations
KB9476
CVM: How to recover the CVM in emergency mode
This KB describes a workaround for recovering a CVM stuck in emergency mode on a dual-SSD platform, caused by the CVM being rebooted after one of the SSDs failed.
On a dual-SSD platform, if one of the SSDs has failed and the CVM is rebooted, the CVM enters emergency mode with the following warning:
Warning: /dev/disk/by-id/md-uuid<uuid> does not exist
This happens because the RAID device (md0) on the healthy disk is in the "inactive" state. Smartctl checks may pass for the drive. Check dmesg and the messages log to confirm whether the drive is faulty.
messages log from the CVM:
2020-06-30T12:20:36.287021+09:00 NTNX-xxx-A-CVM kernel: [ 2.742163] scsi 2:0:3:0: qdepth(254), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
2020-06-30T15:55:22.698725+09:00 NTNX-xxx-A-CVM systemd-udevd[4538]: error opening ATTR{/sys/devices/pci0000:00/0000:00:06.0/host0/port-0:0/expander-0:0/port-0:0:1/end_device-0:0:1/target0:0:1/0:0:1:0/block/sdb/sdb2/bdi/read_ahead_kb} for writing: No such file or directory
dmesg log:
[ 2.872633] sd 0:0:1:0: [sdb] tag#6008 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Note: The issue of the RAID device on a healthy disk going into the "inactive" state is fixed in AOS 5.10.10.1 and above, so it is recommended to upgrade AOS to the latest version after the workaround is implemented.
1. Check the md RAID devices in /proc/mdstat.
2. Set the RAID device on the healthy disk as active:
# mdadm --manage /dev/md0 --run
It will show the RAID device on the failed disk as "removed".
3. If the logs show that the disk is bad, bring the CVM up and then remove the SSD. Use KB9043 https://portal.nutanix.com/kb/000009043 to remove the SSD from Prism/ncli.
4. Once the faulty SSD has been replaced, repartition and add the new disk from the Prism -> Hardware tab (KB9043 https://portal.nutanix.com/kb/000009043).
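As a hedged illustration of steps 1 and 2 (md device names are examples; confirm which device shows as inactive in /proc/mdstat before running anything):
nutanix@CVM$ cat /proc/mdstat
nutanix@CVM$ sudo mdadm --detail /dev/md0
nutanix@CVM$ sudo mdadm --manage /dev/md0 --run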
KB11557
Expand cluster pre-check : test2_2_network_validation
Expand cluster pre-check - test2_2_network_validation
Expand cluster pre-check test2_2_network_validation validates the provided network parameters of new nodes against the current network state. The pre-check fails with the following errors:
Scenario 1
Failed to validate and allocate backplane ips for new nodes
Or:
Service segmentation validation failed with error: <error>
Scenario 2
Failed to validate DVS on the new node. Reason: <error>
Scenario 3
Failed to validate host network <host_network_name> presence on host <host_ip>. Please create Host Network <host_network_name> on appropriate switch if not already present and retry
Scenario 4
If an ESXi cluster has the management network (vmk0) and the CVM management interface (eth0) configured on NSX DVS segments or port groups, and the incoming node is not configured with NSX, the following error will be seen:
Failure in pre expand-cluster tests. Errors: Precheck: test2_2_network_validation failed. Networks [u'xxxx', u'yyyy'] are not found to be present on host xx.xx.xx.xx as nsx dvs portgroups. Please refer to KB-11557.
Scenario 5
Failure in pre expand-cluster tests. Errors: Precheck: test2_2_network_validation failed. Cluster has ESXi nodes with mixed network configuration. Nodes <list of CVM IPs> have DVS enabled whereas nodes <list of CVM IPs> does not have DVS enabled. Please ensure all nodes to be of single type.
Scenario 6
When expanding a cluster with specific backplane IPs in a network-segmentation environment, the following error may be seen:
ERROR 47684560 backplane_utils.py:188 IP xx.xx.xx.xx does not belong to backplane ip pool xxxx
Scenario 1
If network segmentation is enabled, validate the IP pool and the allocated IPs.
Scenario 2
If the cluster is on DVS and network segmentation is enabled, verify that the new node is part of the vCenter cluster and is added to the required DVS.
Scenario 3
If the cluster contains ESXi compute-only (CO) nodes with AHV storage-only nodes, the CO node to be added must have a host network <host_network_name> on the appropriate switch. The pre-check will fail if this network is not present. Create the host network and retry the expansion to fix the issue.
Scenario 4
Configure NSX on the incoming (new) node. The network settings must be identical to those of the nodes in the cluster. Retry the cluster expansion after this.
Scenario 5
This check verifies whether the CVM external network interface is on a distributed switch. Configure the virtual network interface of the CVMs with IP addresses in the second set (in the error message) to be on the same distributed virtual switch (DVS) used by the remaining nodes within the cluster.
Scenario 6
Check the network_segment_status and verify the backplane pool name:
nutanix@CVM:~$ network_segment_status
Example output:
Network Segmentation is currently enabled for backplane
In Prism UI -> Settings -> Network Configuration -> Internal Interfaces, click "Create New Interface", select the existing IP pool (backplane_ip_pool_xxxxxx), and click Edit.
In the IP Pool window, click "+ Add an IP range" to add the required IP addresses for the new CVM/host.
Retry the cluster expansion.
If the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support http://portal.nutanix.com.
KB14644
NDB | Cloning of MYSQL fails due to special character in root password
This KB addresses an issue where Cloning of MYSQL DB fails if special characters are used in the MYSQL root password
A clone operation on a MYSQL DB fails because one or both of the DB root passwords ([root@%] and [root@localhost]) contain one or more special characters (colon (:), semicolon (;), etc.).
Symptoms:
1. Cloning of the MYSQL DB fails with:
mysqladmin: [Warning] Using a password on the command line interface can be insecure.
2. In the <operation_ID>.log file, the rollback starts immediately after this error line is seen:
TASK [set_fact] ****************************************************************
Verify with the customer whether the password contains any special characters. If yes, consider changing the MYSQL password for both root accounts so that it does not include special characters, update the passwords in NDB, and retry the clone operation using the steps outlined in the following document: Changing the MYSQL root password on NDB https://confluence.eng.nutanix.com:8443/display/ED/Change+Mysql+and+MariaDB+root+password+in+NDB
Planned improvements are being tracked in ERA-25528 https://jira.nutanix.com/browse/ERA-25528 and ERA-25529 https://jira.nutanix.com/browse/ERA-25529 - feasibility is being evaluated and the fix is targeted for NDB 2.5.3.
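A generic MySQL sketch of resetting both root accounts to a password without special characters (the password shown is a placeholder; the linked internal document is authoritative for the NDB-specific steps, including updating the stored password in NDB afterwards):
mysql> ALTER USER 'root'@'%' IDENTIFIED BY 'NewPassw0rdNoSpecials';
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassw0rdNoSpecials';
mysql> FLUSH PRIVILEGES;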
KB6864
LCM Firmware Upgrade Fails on Dell Hyper-V Nodes Imaged with Phoenix 3.2.1 and older
The purpose of this KB is to assist in creating a Staging partition on Dell Hyper-V hosts that were imaged with RASR Phoenix version 3.2.1 and earlier. The lack of the Staging partition will cause both LCM Phoenix-based firmware upgrades and 1-Click 2012 R2 > 2016 hypervisor upgrades to fail.
This KB is only applicable to Hyper-V 2012 R2 clusters that were imaged with the Dell RASR tool leveraging a Phoenix version lower than 3.2.1. On such nodes, LCM firmware upgrades and 1-Click hypervisor upgrades from 2012 R2 to 2016 will fail. This KB provides the steps for remediating LCM-based failures. For 1-Click hypervisor upgrades, an ONCALL should be opened to recover, as additional steps may be required.
To check the Phoenix version used at the time the node was imaged, run the following from any CVM in the cluster:
cvm$ allssh 'cat /etc/nutanix/phoenix_version'
Note that clusters may have some nodes that were added or imaged with a Phoenix version newer than 3.2.1. For those nodes, no remediation is needed.
During a failed LCM firmware upgrade, you will get an error in Prism similar to:
lcm_ops_by_phoenix failed while running _execute_pre_actions
These older Phoenix-imaged nodes do not have a staging partition, so the upgrade process fails with an error in the Foundation logs:
20181012 10:54:55 ERROR Failed to execute command (ls C:\/hyperv_first_boot.ps1) on HyperV host with error:
or
20181012 10:54:56 ERROR Failed to execute command (ls C:\/.foundation_staging_partition) on HyperV host with error:
Using 'diskpart' from the Hyper-V host command prompt, issue 'list volume'. An example of a bad partition layout shows output with only the OS and Recovery volumes, as seen below. Note: drive letters can vary from node to node.
DISKPART> list volume
The primary reason for the upgrade failure is the missing staging partition and the missing marker file required for the upgrade. This section provides the steps to cancel any pending LCM tasks and remediate the missing staging partition and marker file.
First, cancel any pending LCM task. Follow the steps in KB-4872 https://portal.nutanix.com/#/page/kbs/details?targetId=kA00e000000XeM1CAK for cancelling LCM tasks with the "lcm_task_cleanup.py" script. Only proceed to the next steps when all LCM tasks have been verified as Failed or Aborted.
If a node is still stuck at the Phoenix prompt, reboot the host back to Windows. From the iDRAC Virtual Console, issue:
python /phoenix/reboot_to_host.py
Next, create a "staging" folder on the OS drive (C:\), shrink the Recovery partition (NTFS) on the Dell node by 200 MB, and create a new Staging partition (FAT32) from the 200 MB of now unallocated space on all hosts in the cluster.
Run from only one CVM per cluster:
hostssh 'mkdir C:\staging'
Run on each Hyper-V host: in an RDP session, open a Command Prompt (cmd) as Administrator:
C:\Users\administrator.xyz> diskpart (Open DiskPart)
Create a '.foundation_staging_partition' marker file on the new partition:
C:\Users\administrator> cd C:\staging
Complete example below. Note: Output will be similar, but not exact.
DISKPART> list disk
After creating the Staging partition and '.foundation_staging_partition' marker file on all Hyper-V hosts, re-initiate the LCM upgrade from Prism.
NOTE: In rare instances, you may see an error about multiple active partitions when trying to reboot the host. The error will state:
Booting from Hard drive C:
In the event this occurs, attach a Windows 2012 R2 ISO to the iDRAC and boot into Recovery > Troubleshoot > Command Prompt mode. This allows you to access 'diskpart' and set the non-OS partitions as inactive. To do this:
diskpart> list disk
Repeat this on all partitions that are not the OS partition, and ensure the OS partition is marked active:
diskpart> select partition #
Select "Turn off your PC" from the Windows menu and detach the virtual media from the iDRAC console. Once the media is detached, use the iDRAC to power the host back on for a normal boot.
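As a hedged sketch only (volume numbers, sizes, and the drive letter are placeholders that must be confirmed with 'list volume' on each node; the article's own sequence remains authoritative), the shrink-and-create flow in diskpart generally looks like:
DISKPART> list volume
DISKPART> select volume <recovery_volume_number>
DISKPART> shrink desired=200
DISKPART> create partition primary size=200
DISKPART> format fs=fat32 quick label=Staging
DISKPART> assign letter=V
DISKPART> exit
Then, from the same elevated prompt, create the marker file on the new partition:
C:\Users\administrator> type nul > V:\.foundation_staging_partition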
KB7469
Rolling restart tasks stuck due to intermittent ergon issues
Rolling restart tasks stuck due to intermittent ergon issues
As a result of a deliberate rolling reboot task, or a rolling reboot initiated by a component upgrade in the cluster (e.g. CVM memory), the corresponding ergon tasks may get stuck if the ergon service is not healthy. There are 2 scenarios:
Scenario 1: Only the first 2 rolling reboot tasks are created and remain stuck for a while:
b37b2472-6eaf-4d83-5302-a4cb942f3e28 64042e76-5d85-4bac-73ad-dc3f8e1f0e69 Genesis 20 hypervisor rolling reboot checks kRunning
The task will have percentage_complete at 28%:
nutanix@NTNX-J7016B5Z-A-CVM:10.1.2.54:~$ ecli task.get 64042e76-5d85-4bac-73ad-dc3f8e1f0e69
The ergon service becomes unavailable a few minutes after the task is created:
I0126 15:36:41.534014Z 26247 common.go:394] TRANSITION:TU=b37b2472-6eaf-4d83-5302-a4cb942f3e28:PTU=64042e76-5d85-4bac-73ad-dc3f8e1f0e69:RTU=64042e76-5d85-4bac-73ad-dc3f8e1f0e69:Genesis:20:hypervisor rolling reboot checks:kQueued/kRunning2024/01/26 15:37:17 Recv loop terminated: err=read tcp 10.1.2.54:33442->10.1.2.60:9876: i/o timeout
Scenario 2: The rolling restart made more progress and has one or more subtasks stuck in the running state:
Task UUID Parent Task UUID Component Sequence-id Type Status
In the ergon service logs, you will find intermittent issues during the time when the rolling tasks were running.
Scenario 1: Abort the 2 ongoing ergon tasks.
Warning: This method is approved by engineering for specific workflows only. Using the ergon_update workflow for non-approved tasks can cause issues such as a genesis crash loop.
nutanix@cvm:~$ ~/bin/ergon_update_task --task_uuid=task_uuid --task_status=aborted
Scenario 2: Collect logs and open a TH.
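To identify the stuck tasks before aborting them, a hedged example using ecli (the task UUID is a placeholder):
nutanix@cvm:~$ ecli task.list include_completed=false
nutanix@cvm:~$ ecli task.get <task_uuid>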
KB9093
Issue with Athena can cause Prism API calls to fail and Prism to load very slowly
This article describes a few different ways Athena may impact Prism's ability to load.
There is a known issue due to which Prism API calls fail. The issue manifests in various ways, some of which are as follows:
1. Third-party backup failing
2. Move migration
3. File Server dashboard not loading
4. Citrix components behaving slowly
1. Third-party backup failing
Backup jobs for user VMs with third-party backup software, such as Veeam, may fail a few minutes after the backup starts. For example, the error messages on the Veeam Appliance console are as below:
"Preparing to backup"
Errors can be found in "nxbackupagent.log" on the Veeam Appliance server as below. There is always a NutanixAPIv3 error a few minutes after starting the backup job:
[2020-02-24] [10:04:05.700] [145985] [Info] Backup: Starting. VmUuid = ' 27778cb-...-208', JobUuid = ' 7c7d58c-...-1ae28',
The aplos.out log shows the following stack traces, indicating API call failure:
2019-12-14 03:31:42 UWSGI - WSGI app 0 (mountpoint='') ready in 10 seconds on interpreter 0x86aac0 pid: 30093 (default app)
The athena.log.INFO shows unexpected errors such as the following:
WARN 2020-02-14 06:43:19,571 main-SendThread(172.16.118.145:9876) apache.zookeeper.ClientCnxn.run:1102
2. Move migration
While performing a Move migration using PE as a target, we might encounter this issue. The following errors appear in the Move logs:
E0325 15:11:19.466322 6 v3_ahv.go:96] [HostIPAddrOrFQDN="x.x.x.x", Location="/hermes/go/src/common/restclient/restclient.go:201", Response="400 BAD REQUEST:{
3. File Server dashboard not loading
The File Server dashboard is slow to load and the "ncli fs ls" command takes very long to respond. The total time to respond can be determined using the following command:
nutanix@CVM$ time ncli fs list-shares uuid=<File Server UUID>
However, if there is more than one file server, one or more of them might be affected. Use the following command to identify the impacted file servers:
nutanix@CVM:~ for i in `afs info.fileservers | egrep -v "info|UUID" | awk '{print $1}'`; do echo "File Server UUID: $i"; time ncli fs list-shares uuid=$i > /dev/null; echo "----------";done
Once the affected Nutanix Files server has been identified, check the FSVMs' prism_gateway.log for the below:
ERROR 2021-02-24 14:53:26,738 http-nio-0.0.0.0-9081-exec-42 [] web.sessions.AthenaBasedSessionManagement.getAthenaToken:189 Athena exception com.nutanix.prism.services.athena.AthenaRpcClient.signToken(AthenaRpcClient.java:150)
Check the FSVMs' athena.log.INFO for the below traceback:
WARN 2021-02-19 02:11:54,599 main-SendThread(zk1:9876) apache.zookeeper.ClientCnxn.run:1102 Session 0x2765d462154ec81 for server null, unexpected error, closing socket connection and attempting reconnect
Check the FSVMs' aplos.out for the below traceback:
2021-02-19 01:08:46 ERROR auth.py:94 Traceback (most recent call last):
4. Citrix components behaving slowly
API calls from Citrix would be slow, or the API would time out with a 504 error. This makes certain Citrix components load slowly. The highlighted value shows the time taken for the API. This is observed in prism_proxy_access_log.out on the Prism leader. Grep for the Citrix DDC or Connector IP (will be provided by the customer).
In the snippet below, the DDC or Connector IP is 172.xx.xx.109:
prism_proxy_access_log.out:[2022-08-24T07:39:59.666Z] "GET /PrismGateway/services/rest/v1/cluster/ HTTP/1.1" 504 - 0 77 295003 295003 "172.xx.237.109" "-" "50a3034b-31c2-469b-a624-5b52505340a3" "172.xx.237.176" "172.xx.237.176:9444"
In TH-9257, it was observed that PE Prism became slow because an API call from PE towards Prism Central was failing. The following was noted in the Prism gateway logs:
WARN 2022-08-24 07:39:55,739Z http-nio-127.0.0.1-9081-exec-220 [] background.multicluster.GetClusterIpAddress.isReachableAddress:188 IP address 172.xx.xx.250 is not reachable:
prism_gateway.log is filled with the above errors, causing the other API calls from the Prism gateway to fail. Check the aplos logs on the PCVM for errors related to the Athena service being unavailable.
According to ENG-276686 https://jira.nutanix.com/browse/ENG-276686, this is caused by a connection issue between Athena and Zookeeper, which eventually causes an API failure in Aplos, so the API call for the snapshot is unable to return successfully to the third-party backup software. This was reportedly fixed in 5.10.11, 5.17.1, 5.15.1 and 5.17.0.3 with ENG-249633. If you are encountering this issue on a more recent AOS release, note the following: the 'ZOOKEEPER_HOST_PORT_LIST' system environment variable is used to connect to Zookeeper. This variable is the connection string 'zk1,zk2,zk3', which is resolved to IPs by host DNS. However, once the IP of a Zookeeper node changes, the underlying Zookeeper library still uses the old IP to connect to Zookeeper, which results in connection failures.
Workaround for scenarios 1 and 2: restart the Athena service on the CVMs. This is a non-disruptive restart.
nutanix@CVM~$ allssh 'genesis stop athena'; cluster start
Workaround for scenario 3: restart the Athena service on the FSVMs.
nutanix@FSVM~$ allssh 'genesis stop athena'; cluster start
Workaround for scenario 4: restart the Athena service on the PCVM(s). It was observed in the PC Aplos logs that the Athena service on one of the PCVMs was unresponsive, and the Athena service had to be restarted on that PCVM. Please note: an unresponsive Athena service on PE can also fail the API or slow it down.
nutanix@PCVM~$ allssh 'genesis stop athena'; cluster start
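After the restart, a quick hedged verification that Athena came back on every node (the log file name is the one referenced earlier in this article; output formats can vary by AOS version):
nutanix@CVM~$ allssh 'genesis status | grep -w athena'
nutanix@CVM~$ allssh 'tail -n 5 ~/data/logs/athena.log.INFO'
New PIDs for the athena service on every node and quiet recent log lines indicate the restart took effect.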
KB6441
Pre-Upgrade Check: test_pinned_config
This is a pre-upgrade check that blocks upgrades from pre-5.0 versions to 5.0 or higher if the cluster has pinned vdisks.
Note: This pre-upgrade check runs on Prism Element (AOS) clusters and on Prism Central during upgrades.
Failure message seen in the UI: Failed to determine pinned vdisk config, RPC to Arithmos failed [Error]
Details: Software is unable to query and retrieve the pinned vdisk config.
Action to be taken: Collect an NCC log bundle and reach out to Nutanix Support.

Failure message seen in the UI: Upgrade not allowed to this version since the cluster has pinned virtual disks
Details: If the cluster has pinned vdisks, the upgrade from pre-5.0 to 5.0 or a higher version is blocked.
Action to be taken: Remove vdisk pinning. To view which vdisks have pinning enabled:
ncli virtual-disk list | grep 'Pinning Enabled'
If assistance is required to remove the pinning, or if the above command shows that no vdisks are pinned and the pre-upgrade check still fails, reach out to Nutanix Support.
KB6579
Foundation - Hardware Compatibility Checker (hfcl_checker)
This article describes how to use the HCL Checker available in Foundation 4.3
The HCL checker was first introduced in Foundation 4.3 as a basic tool to determine whether a node's hardware has been qualified by Nutanix. In Foundation 4.3, the tool is disabled by default. Pending any issues, it will be enabled by default in Foundation 4.4.
To run the tool in Foundation 4.3, perform the following steps:
1. Enable the "hardware_qualification" feature in Foundation. Create or modify the features.json file on the Foundation server:
vi foundation/config/features.json
Add the {"hardware_qualification": true} flag in the opened features.json file:
{"hardware_qualification": true}
2. After creating the file, restart the foundation_service on the Foundation server:
sudo service foundation_service restart
3. Install the hfcl_checker in Phoenix by building the Phoenix ISO using foundation/bin/generate_iso. More details about using the generate_iso script can be found in KB 3523:
/home/nutanix/foundation/bin/generate_iso phoenix --aos-package=<nos_package_location> --temp-dir=/home/nutanix/foundation/tmp
4. Once the ISO has been generated, scp it off the node to your local desktop.
5. Modify the features.json file on the Foundation server to remove the previously added {"hardware_qualification": true} flag.
6. After modifying the file, restart the foundation_service on the Foundation server:
sudo service foundation_service restart
7. To use the hfcl_checker, attach the Phoenix ISO via IPMI and boot the node into it.
To use the hfcl_checker, review the available options:
hfcl_checker -h (returns help with all available options)
After running the tool with the hfcl_checker command, the output will be similar to the following:
$ hfcl_checker
Notes:
- The process described in this KB is only to check hardware compatibility on the node. Do not use the Phoenix ISO built with the hfcl_checker to image the node.
- All logs are also written to /var/log/aurora/hfcl_checker.log. By default the log_level is ERROR; for more verbose logs, pass in the argument --log-level INFO/DEBUG.
- The node's hardware is qualified against the hcl.json that is packaged with Phoenix today. In the future, this will pull from an upcoming HFCL database.
KB9334
Third party backup may fail intermittently due to Athena Service restarts (OOM)
3rd party backups fail intermittently due to Athena service being killed by OOM as the service reaches the memory limit defined in the Cgroup introduced in 5.11.1.
3rd party backups fail intermittently as REST API calls start to fail because the Athena service, which is responsible for authentication, is killed by OOM when it reaches the 256 MB memory limit defined for the service in 5.11.1 and later via ENG-203508.
Symptoms
This was noticed with larger VMs (20+ disks) and backup jobs with multiple VMs included, but this may not be an exclusive list of prerequisites to hit the Athena memory limit.
Below is an example of Veeam backup logs where the REST API calls fail with error 502 - communication error:
[2020-04-22] [20:16:33.240] [104247] [Debug] [RestCli] Response data (502 Proxy Error):
The following can be noticed in ~/data/logs/cerebro.INFO, where a full snapshot is created and then removed once the backup job fails:
0422 20:00:10.272500 27322 protection_domain.cc:15351] <protection_domain = 'ef1318f6-2454-47ff-b602-165c570a349d'> Top level meta op completed, Meta opid = 100167, Opcode = SnapshotProtectionDomain, Creation time = 20200422-20:00:08-GMT+0200, Duration (secs) = 1, Attributes = snapshot=(5586271321242699278, 1569233109179081, 100168) take_full=true force_legacy_snapshot=true, Aborted = No, Aborted detail = -, Error = kNoError, Protection Domain = ef1318f6-2454-47ff-b602-165c570a349d
The API 502 error can be seen in /home/log/httpd24/ssl_access_log, which is then shown in the 3rd party backup log:
192.168.197.5 49020 - - [22/Apr/2020:20:16:32 +0200] 25453 "POST /api/nutanix/v3/data/changed_regions HTTP/1.1" 200 78 "-" "route2" "-" "-" 73 + 305139 675 2764
Aplos is also unable to reach Athena (responsible for authentication starting with AOS 5.10 and later) and fails with the below traceback and status 500 (Internal Error) for the "changed_regions" call:
Traceback (most recent call last):
Athena service FATALs can be seen - as usual for Python services, the FATAL records the time the service exits:
nutanix@CVM:~$ allssh cat data/logs/athena.FATAL
Finally, the reason for the service restarting is found in /var/log/dmesg, which shows an OOM for Athena as it reaches the 256 MB cgroup limit (introduced in ENG-203508):
2020-04-22T21:01:29.410163+02:00 NTNX-6JDH6Z2-A-CVM kernel: [97805.105253] C2 CompilerThre invoked oom-killer: gfp_mask=0xd0, order=0, oom_score_adj=100
Workaround
As a workaround, the Athena memory limit can be increased to 512 MB to avoid further crashes by following this procedure. This limit is defined by the genesis service when it creates a cgroup for Athena at start time.
IMPORTANT NOTE: LTS AOS versions 5.15.x up to 5.15.3 are affected by defect ENG-127735 https://jira.nutanix.com/browse/ENG-127735, which prevents genesis gflags from being overwritten, so the steps below do not work. AOS must be upgraded to 5.15.4 first for the procedure below to work. There is no workaround to apply the gflag on prior AOS versions.
1. Use edit-aos-gflag to configure the genesis gflag athena_memory_limit_mb to increase the Athena service memory limit to 512 MB:
--athena_memory_limit_mb=512
Note: for AOS older than 5.15.1 and 5.17.1, edit-aos-gflag does not work and a gflag file has to be used instead - ENG-266031 https://jira.nutanix.com/browse/ENG-266031. See also the note above regarding genesis gflags.
2. Restart the genesis service across the cluster to ensure the gflag is read, which happens during initialization:
nutanix@CVM:~$ cluster restart_genesis
3. Once genesis is fully initialized, restart the Athena service so that genesis starts it again with the new memory limit applied:
nutanix@CVM:~$ allssh genesis stop athena; cluster start
4. Verify the new cgroup memory limit has been applied by looking for the "Setting memory limit for cgroup athena to" log message:
nutanix@CVM:~$ allssh "grep 'cgroup athena' data/logs/genesis.out"
Note: in some instances it has been observed that the old limit was still shown after the restart and a second restart was needed for the new value to be applied. Also consider the important note above regarding ENG-127735 https://jira.nutanix.com/browse/ENG-127735.
Resolution
Veeam
The root cause of the excessive memory utilization of the Athena service has been identified as the way Veeam performs authentication. Veeam sends client authentication credentials with every API v3 request instead of using cookies once the first authentication passes. Nutanix engineering has requested Veeam to modify the authentication method to be token-based, and Veeam is working on a fix on their side, which can be tracked in NIRVANA-213 https://jira.nutanix.com/browse/NIRVANA-213. There is no ETA as of Nov 2020.
Commvault
Commvault is also likely to cause excessive Athena memory utilization, as basic authentication is the default authentication method used for API calls to Nutanix. This OOM condition is more likely to occur on large backup subclients where CBT is enabled. Backups will appear stalled in the CommCell console. Commvault V11 SP24 October Maintenance Release includes a change from basic authentication to session-based authentication for API calls, which is now the preferred method. For large deployments, ensure the customer is up to date with Commvault Maintenance Releases or ask them for confirmation from Commvault Support that DIAG-2722 is deployed in their environment.
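A hedged sketch of applying the gflag in step 1 (the edit-aos-gflags path below is an assumption and may differ by AOS version; follow your standard gflag-editing procedure if it does):
nutanix@CVM:~$ ~/serviceability/bin/edit-aos-gflags --service=genesis
In the editor that opens, set athena_memory_limit_mb to 512, save, and then continue with steps 2-4 above.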
KB15996
Metropolis service crashing on Prism Central with the "interface {} is nil, not string" error
Metropolis service crashing on Prism Central with the "interface {} is nil, not string" error
The Metropolis service crashes on Prism Central with the "interface {} is nil, not string" error. The crash is due to the 'entity_version' sent as part of the RPC payload being NULL. The following trace can be found in the /home/nutanix/data/logs/metropolis.out log on Prism Central:
I0914 00:10:52.051018Z 154836 vm_task.go:928] [66329138-d262-4ed4-bc79-9c6099b7d47a] Calling PE via remote anduril RPC calls
Engage Nutanix Support at https://portal.nutanix.com/ to recover the cluster. This issue is resolved in pc.2024.1. Upgrade Prism Central to that version or newer.
KB12380
HPDX: NVME disk might not be visible on Prism
NVMe disks might not be shown in the Prism UI if the node is running certain iLO versions.
Nutanix has identified an issue where the Prism UI > Hardware > Disks view does not show an NVMe disk populated when iLO 2.33, 2.40 to 2.46 is installed. The issue applies to the following hardware platforms: DX360-10SFF Gen 10 ALL NVME [No vmd enabled]DX560-24SFF Gen10DX2600-XL170R-Gen10 You can confirm if you are impacted by this issue by running the command list_disks from any Controller VM (CVM) as the "nutanix" user: nutanix@CVM$ list_disks Check the iLO version running on the nodes impacted by logging into the host as root user and running: [root@AHV]# ipmitool mc info | grep -i "Firmware Revision"
Nutanix has qualified iLO 2.33, available up to LCM (Life Cycle Manager) 2.4.3.2, and has qualified iLO 2.48 in LCM 2.4.3.3 and later. HPE iLO versions 2.40 to 2.46 have not been qualified by Nutanix and are not available through LCM.
To resolve the issue, either reach out to HPE to downgrade the iLO to 2.33, or use LCM 2.4.3.3 or later to upgrade to iLO 2.48 or later. Upgrade to the latest HPE SPP only using Nutanix LCM. Do not upgrade firmware outside of LCM without Nutanix Support confirmation. Reach out to Nutanix Support http://portal.nutanix.com/ with any further queries.
KB4484
List of required vCenter permissions
This article lists required vCenter permissions to register vCenter Server with Prism.
To register vCenter Server with Prism, it is recommended to have administrative privileges. However, you can also use an administrative account with fewer privileges. Following are the requirements for an account to get the necessary access to the vCenter Server.
Requirement (registration): vCenter Server users should have the privilege to create an extension.
Use case: For ESXi-based Nutanix cluster authentication.
Requirement (VM management operations): The user must be granted VM management-related privileges (create, update, delete) in the vCenter Server.
Use case: This enables VM management through the vCenter Server APIs, so you can leverage other vCenter Server features, such as DRS, HA, Network, etc., via the VM management service provided by Nutanix.
Once the vCenter Server registration is successful, this service exposes only the following VM management APIs:
- VM create
- VM get
- VM update
- VM delete
- VM power operations
- VM launch console
- VM disk add/delete
- VM NIC add/delete
- Mount guest tools
If the cluster is not registered to the vCenter Server, only VM get, VM power, and VM launch console operations will work.
To create a role with fewer privileges, perform the following steps. Create a new role with the required vCenter Server privileges by navigating to Manage > Permissions. Assign this role to a local user that you create, or to an Active Directory user, at the vCenter level. For all Prism operations after registration, we use the extension created during registration. We log in using this extension, which is an administrator user.
KB14283
NDB | Oracle Database registration fails with error: "Sorry, user era_user is not allowed to execute /bin/sh..."
Registration of database VMs fails at the Discovering Database Layout stage.
During an Oracle registration, the operation may fail at the Discovering Database Layout step. This issue is seen when era_user does not have the right permissions in the sudoers file. The NDB Pre-requirements Validation Report passes because the NDB Drive user has sudo rights, but the granular permissions for the user in the sudoers file may be incorrect.
Symptoms:
The Oracle database registration runs and the operation fails at the "Discovering Database layout" step.
The NDB Pre-requirements Validation Report passes:
[era_user@eravm ~]$ ./era_linux_prechecks.sh --database_type oracle_database -d
However, the following error is seen in the logs. This error typically means that era_user (called the NDB Drive user in the NDB docs) does not have proper permissions in the /etc/sudoers file:
Unable to register a new Oracle database. Getting error: "Sorry, user era_user is not allowed to execute '/bin/sh -c echo BECOME-SUCCESS-kfcmmahsqykhvhcdjinunkyzasgdqvmn ; ERA_BASE=/opt/era_base LD_LIBRARY_PATH=/u01/app/oracle/product/19.0.0/dbhome_1/lib ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1 ORACLE_SID=tstendur OPERATION=DISCOVER DB_PASSWORD='' /opt/era_base/era_engine/stack/linux/python/bin/python3.6' as oracle on oracleserver.domain.com.\n"
This output is an example of incorrect permissions for the era_user account:
[era_user@eravm ~]$ sudo -l
The era_user privileges are limited, and the user needs to leverage the sudo command to execute with the necessary permissions. The incorrect configuration above prevents era_user from executing sudo and, in turn, prevents the NDB database registration from executing the required commands.
To fix this, the era_user entry in the /etc/sudoers file should look like this:
User era_user may run the following commands on oracleserver:
Follow the Nutanix Database Service Oracle Database Management Guide's Database Registration Prerequisites https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-Oracle-Database-Management-Guide-v2_5:top-prerequisite-oracle-database-registration-r.html section to apply the correct permissions for the NDB Drive user.
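For illustration only (the linked prerequisites document is authoritative, and the unrestricted entry below is an assumption that many environments will deliberately narrow), a sudoers configuration that lets era_user escalate generally resembles the following; edit it via visudo and then re-check as era_user:
era_user ALL=(ALL) NOPASSWD: ALL
[era_user@dbvm ~]$ sudo -l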
KB3639
Hyper-V: Shared Nothing Live Migration in Hyper-V Might Fail if it Takes More Than 10 hours to Complete
You may need to move virtual machines to or from the Nutanix Cluster. This article describes a situation where the live migration fails when the process takes more than 10 hours.
Consider the following situation:
- You are running Hyper-V and are in the process of migrating VMs.
- You are either migrating VMs to the Nutanix SMB share or migrating VMs from Nutanix to an external cluster or stand-alone Hyper-V host.
- You are moving a large VM that needs 10 or more hours to transfer the VHD or VHDX files.
Migration starts successfully, but it fails after 10 or more hours with an error message similar to the following:
No credentials are available in the security package (0x8009030E).
In this case, the error message implies that the Kerberos tickets are no longer valid.
You can resolve the issue by performing one of the following.
Modify the lifetime of the Kerberos ticket
Nutanix recommends that you temporarily increase the lifetime of the Kerberos ticket by modifying the group policies "Maximum lifetime for service ticket" and "Maximum lifetime for user ticket".
1. Find the Organizational Unit (OU) where the computer accounts reside and check which policies apply.
2. Set the duration of the tickets to a longer value, for example 1440 minutes (24 hours), as described in the following Microsoft documentation: https://technet.microsoft.com/en-us/library/dd277401.aspx
3. Run the following on both the source and destination:
gpupdate /force
klist purge
Change Live Migration settings
Change the Live Migration settings to use CredSSP. To change the Live Migration settings, log on to both computers before the live migration and change the settings.
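As a hedged illustration of the CredSSP option (generic Hyper-V PowerShell, not steps prescribed verbatim by this article), the migration authentication type can be switched and verified on both hosts as follows; the same setting is also available in Hyper-V Manager under Hyper-V Settings > Live Migrations > Advanced Features:
PS C:\> Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP
PS C:\> Get-VMHost | Select-Object VirtualMachineMigrationAuthenticationType
Note that CredSSP requires an interactive logon on the source host before starting the move, which is why the article says to log on to both computers first.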
KB15920
LCM - Direct upload compatibility does not show available upgrade.
This article describes a Direct Upload behavior in which the LCM framework bundle overwrites the compatibility file, causing older compatibility data to be shown.
You may not see the most recent available updates or dependencies when uploading the LCM Framework Bundle and the Nutanix Compatibility Bundle together using LCM Direct Upload, because this overwrites the content of the pm_file object.
The LCM Framework Bundle includes the nutanix_compatibility.tgz file. Nutanix also provides the "Nutanix Compatibility Bundle" separately, downloadable from the Nutanix Portal https://portal.nutanix.com/page/downloads?product=lcm. It is possible that the compatibility requirements change after the release of the LCM Framework Bundle due to another component release, so Nutanix recommends uploading the latest "Nutanix Compatibility Bundle" from the Nutanix Portal. When uploaded separately, the latest "Nutanix Compatibility Bundle" overwrites the framework-bundled nutanix_compatibility.tgz file.
The problem described here occurs when you upload the LCM Framework Bundle and the Nutanix Compatibility Bundle together. The nutanix_compatibility.tgz file that is part of the LCM Framework Bundle overwrites the latest "Nutanix Compatibility Bundle", preventing the most recent software versions from being available for upgrade.
Example:
- The LCM Framework Bundle (version 2.7) is released on Nov 16th 2023. Therefore, the nutanix_compatibility.tgz bundled with it is updated on Nov 16th 2023 and does not contain any data point after that date.
- Nutanix releases NCC 4.6.6.1 on Dec 14th 2023.
- The "Nutanix Compatibility Bundle" is updated on Dec 14th 2023 to include the NCC 4.6.6.1 dependency requirements, and it can be separately downloaded from the Nutanix Portal https://portal.nutanix.com/page/downloads?product=lcm.
- If you do not upload the "Nutanix Compatibility Bundle", or upload it together with the LCM Framework Bundle, you will not see NCC 4.6.6.1 as an available upgrade option, as the effective nutanix_compatibility.tgz file dates from Nov 16th 2023.
The recommended upload order is:
1. LCM Framework Bundle
2. Nutanix Compatibility Bundle
3. Individual product LCM bundle
If you do not see the required available updates or dependencies in LCM, re-upload the latest Nutanix Compatibility Bundle from the Nutanix Portal https://portal.nutanix.com/page/downloads?product=lcm.
KB11889
Nutanix Volumes iSCSI Data Services IP (DSIP) address disconnected or unreachable after stargate NFS master change
This KB describes an issue where the iSCSI data services IP address (DSIP) becomes unavailable, causing an outage for a Nutanix Files cluster. The same impact applies to any feature, VM, or physical host that uses Nutanix Volumes.
Nutanix Volumes is enterprise-class, software-defined storage that exposes storage resources directly to virtualized guest operating systems or physical hosts using the iSCSI protocol. It is also used by other Nutanix features, including Calm, Leap, Karbon, Objects, and Files.
Nutanix Volumes exposes an iSCSI data services IP address (DSIP) to clients for target discovery, which simplifies external iSCSI configuration on clients. This iSCSI data services IP address acts as an iSCSI target discovery portal and initial connection point.
This KB describes an issue where the iSCSI data services IP address (DSIP) becomes unavailable, which causes an outage for a Nutanix Files cluster. The same impact applies to any feature, VM, or physical host that uses Nutanix Volumes.
For Nutanix Files, the issue shows the following symptoms:
An alert is triggered about "Discovery of iSCSI targets failed. [160025] [A160025]" in the Prism Web UI.
=====================
Nutanix Files shows as "Not Reachable" when you check Prism Web UI > Home > File Server or when you run the following command:
nutanix@cvm$ ncli fs ls
Example
=====================
On the Nutanix Files FSVMs, the volume groups are not mounted, or "no pools available" is shown:
nutanix@NTNX-x-x-x-236-A-FSVM:~$ allssh "sudo zpool list"
The iSCSI data services IP address (DSIP) is not reachable or pingable from all the CVMs in the cluster and from all the Nutanix Files FSVMs.
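A quick, hedged way to confirm the reachability symptom from the CVMs (<DSIP> is a placeholder for the data services IP returned by the first command):
nutanix@cvm$ ncli cluster info | grep 'External Data Services'
nutanix@cvm$ allssh 'ping -c 3 <DSIP>'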
When the Stargate NFS master unexpectedly dies or restarts, the iSCSI data services IP address (DSIP) is re-hosted on a new CVM, which takes over the Stargate NFS master role. As part of the DSIP failover, the DSIP is assigned to the virtual network interface "eth0:2". The issue happens when the ifconfig command on interface eth0:2, kicked off as part of the failover procedure, never finishes: the process gets stuck or hung even though it is expected to complete within roughly 30 seconds at most. Follow the procedures below to diagnose the symptoms and remediate the issue.
1. Identify the DSIP in the cluster. It can be obtained using NCLI as below:
nutanix@cvm$ ncli cluster info | grep 'External Data Services'
2. Identify the current NFS master. The Stargate page provides this info:
nutanix@cvm$ links --dump http:0:2009 | grep 'NFS master handle'
3. Confirm that interface eth0:2 is up on the new NFS master CVM. Also make sure the interface eth0:2 has been assigned the correct DSIP (in this example, x.x.x.5):
nutanix@cvm$ allssh "ifconfig | grep -A1 eth0:2"
Example:
nutanix@NTNX-serial-A-CVM:x.x.x.7:~$ allssh "ifconfig | grep -A1 eth0:2"
4. Confirm the iSCSI data services IP (DSIP) is hosted by the NFS master CVM. The example below demonstrates the data services IP x.x.x.5 being hosted on the NFS master IP x.x.x.7:
nutanix@cvm$ allssh "grep <DSIP address> /home/nutanix/data/logs/sysstats/tcp_socket*"
Example:
nutanix@NTNX-serial-A-CVM:x.x.x7:~$ allssh "grep x.x.x.5 /home/nutanix/data/logs/sysstats/tcp_socket*"
nutanix@cvm$ allssh "egrep 'TIMESTAMP|<DSIP address>' /home/nutanix/data/logs/sysstats/netstat* | grep -C2 <DSIP address>"
Example:
nutanix@NTNX-serial-A-CVM:x.x.x.7:~$ allssh "egrep 'TIMESTAMP|x.x.x.5' /home/nutanix/data/logs/sysstats/netstat* | grep -C2 x.x.x.5 | tail -n 10"
5. Check for a recent Stargate restart or crash as well as an NFS master change:
nutanix@cvm$ allssh "ls -ltra /home/nutanix/data/logs/stargate.FATAL"
nutanix@cvm$ allssh "ls -ltra /home/nutanix/data/cores | grep stargate"
nutanix@cvm$ allssh "grep 'NFS namespace master changed' /home/nutanix/data/logs/stargate.*"
nutanix@cvm$ allssh "grep 'Starting Stargate' /home/nutanix/data/logs/stargate.*"
If any of the above steps show abnormal behaviour, check whether an ifconfig process has remained running for a long time, either in the top.INFO file or in real time with the top command. This needs to be checked on the new Stargate NFS master CVM. The process is expected to complete within 30 seconds; if it runs longer than that, the ifconfig process is considered to be in a hung state. The example below demonstrates an ifconfig command that has not finished for nearly 2 hours, and its PID "13432" has not changed at all for the same duration, which is the problem that needs to be addressed.
Log from CVM: /home/nutanix/data/logs/sysstats/top.INFO
top.INFO.20210723-130811:#TIMESTAMP 1627009696 : 07/23/2021 01:08:16 PM
As a workaround, this hung state needs to be cleared by manually killing the process or rebooting the CVM. To check in real time whether the "ifconfig" process run by Stargate as root is stuck, get the PID, USER, and TIME; if the time is longer than 30 seconds, proceed with killing the process:
nutanix@cvm$ top -c -n 31 | egrep "TIME+|ifconfig"
To kill the process, replace <PID> with the Process ID (PID) obtained from the command above:
nutanix@cvm$ kill -9 <PID>
KB5490
NCC Health Check: rest_connection_checks
The NCC health check rest_connection_checks checks for healthy connectivity back to Nutanix Pulse Insights servers when Pulse is enabled on the cluster and reports an INFO or WARN status if any issues are detected.
The NCC health check rest_connection_checks checks for healthy connectivity back to Nutanix Pulse Insights servers when Pulse is enabled on the cluster and reports an INFO or WARN status if any issues are detected. The Pulse feature gives diagnostic system data to Nutanix customer support teams so the teams can deliver proactive, context-aware support. The Nutanix cluster automatically and unobtrusively collects the information of the diagnostic system with no effect on system performance. Pulse does not collect customer data but instead shares basic system-level information necessary for monitoring the health and status of a Nutanix cluster. For more information on Pulse, see Configuring Pulse https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-pulse-configure-pc-t.html on the Prism Web Console Guide. To ensure that the cluster has a healthy connection to Nutanix Insights servers, the CFS service on the cluster is polled by NCC for a successful connection to the REST endpoints used by Pulse. If an issue occurs with either the CFS service on the CVMs (Controller VMs), or connectivity from the CVMs outbound back to the Nutanix Pulse Insights service, then this check reports a WARN status. Running the NCC check This check can be run as part of the complete NCC check by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks pulse_checks rest_connection_checks You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. The check is scheduled to run every 30 minutes, but an alert is only generated when the condition exists for more than 24hrs. Sample output If Pulse is enabled but a problem is detected for inter-CVM communication with the CFS service or any unexpected CFS service problem that prevents the connectivity status from being polled by NCC, the following output appears. INFO: Unable to determine Pulse connectivity status to REST endpoint If Pulse is enabled but the cluster does not have access to https://insights.nutanix.com, the following output appears. Running : health_checks pulse_checks rest_connection_checks ------------------------------------------------------------------------+ You may also see the following alert in Prism. Pulse cannot connect to REST server endpoint Note: If the underlying condition is resolved, then this check auto-resolves the alert. If Pulse is not enabled in the Gear icon > Pulse window of the Prism web console, this check displays the following INFO message. INFO: Data sending for Pulse is disabled. Output messaging [ { "Check ID": "This check is scheduled to run every 30 minutes, by default." }, { "Check ID": "Check if Pulse can connect to REST server endpoint." }, { "Check ID": "Pulse cannot connect to REST server endpoint." }, { "Check ID": "Ensure that the REST server endpoint is reachable from Pulse." }, { "Check ID": "Data-driven serviceability and customer support cannot be performed." }, { "Check ID": "This check will generate an alert after 48 consecutive failures across scheduled intervals.." }, { "Check ID": "Pulse cannot connect to REST server endpoint" }, { "Check ID": "Pulse cannot connect to REST server endpoint. Connection Status: connection_status, Pulse Enabled: enable, Error Message: message" } ]
Determine if there are any connectivity issues between your cluster and the Pulse Insights connection point. Run the following command from any CVM in the affected cluster to poll the Nutanix Insights server. nutanix@cvm$ allssh "curl -L -k https://insights.nutanix.com" Or (if a proxy server is configured - no authentication): nutanix@cvm$ allssh "curl -L -x [protocol://]proxy_server[:port] -k https://insights.nutanix.com" Or (if a proxy server is configured - authentication): nutanix@cvm$ allssh "curl -L -x [protocol://][user:password@]proxy_server[:port] -k https://insights.nutanix.com" Each CVM must return the following output. "I-AM-ALIVE" If any CVM reports a connection timeout or the URL is not reachable, there may be an upstream network connectivity issue that requires resolution. Review your DNS, routing, and firewall or ACLs for your network. Your CVMs require direct outbound Internet access to the external FQDN insights.nutanix.com on destination port tcp/443 (for HTTPS). Insights leverages your configured HTTP Proxy if entered in Gear icon > HTTP Proxy. Note: If you are using an HTTP or HTTPS proxy to grant Internet access to your CVMs, then review your proxy logs for permission or connectivity issues to https://insights.nutanix.com from any or all of the external IP addresses of the CVMs. Determine if the CFS service on one or more CVMs is experiencing issues. Run the following command. nutanix@cvm$ curl -s -k 'http://127.0.0.1:2042/h/connectivity_status?force=false&nodes=localhost' Example output: {"https://insights.nutanix.com:443":{"cluster_level_summary":{"configured_proxies":[],"connection_status":"success","connection_tested_time_usecs":1524143114014750,"enabled":"true","message":"","proxy_used":""},"node_level_status": Example output with the above curl output piped into "| python -mjson.tool" to "beautify" the JSON output: nutanix@cvm$ curl -s -k 'http://127.0.0.1:2042/h/connectivity_status?force=false&nodes=localhost' | python -mjson.tool If the command gives no response or the output is not similar to the preceding example that includes the snippet "connection_status":"success", or if connection_status shows failed, then stop the Cluster Health service and then restart the service by using the following command. This procedure is safe and does not impact storage availability. nutanix@cvm$ allssh "genesis stop cluster_health" After the Cluster Health service restarts, the CFS plugin might respawn in several minutes. Wait for more than five minutes before you recheck the CFS service response on all CVMs using the curl command. nutanix@cvm$ allssh "curl -s -k 'http://127.0.0.1:2042/h/connectivity_status?force=false&nodes=localhost'" If Prism Element (PE) is registered with Prism Central (PC), each PE will use the PC as the proxy. In these instances, ensure the above curl check(s) is successful when run from PC.
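If the curl commands fail, name resolution is a common underlying cause. As a minimal sketch (assuming the standard Linux name-resolution tools present on the CVMs), DNS resolution of the Insights endpoint can be verified on every CVM before re-checking connectivity: nutanix@cvm$ allssh "getent hosts insights.nutanix.com" If no IP address is returned, review the DNS servers configured on the cluster before troubleshooting the CFS service further.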
KB16936
LCM inventory task may stuck due to past LCM framework startup failure
The LCM inventory task may get stuck for a long time, keeping many kLcmInventoryTask tasks in kQueued state. This may be due to an LCM framework startup failure on some CVM.
The LCM inventory task may get stuck for a long time, keeping many kLcmInventoryTask tasks in kQueued state. nutanix@cvm:~$ ecli task.list include_completed=false; In this case, the LCM framework may not be running on a non-LCM-leader CVM. Validation Affected Node Identification: We do NOT see the "operation successful" logs in lcm_ops.out on the affected CVM. nutanix@cvm:~$ allssh 'grep "LCM operation.*is successful" ~/data/logs/lcm_ops.out | tail -3' Affected Node Identification (alternative): the tasks in queued state are assigned to the affected node. Example: for task 03427bfe-92b6-4efd-6bfe-b3d13f6c9780 in ecli task.list, we see the following log in genesis.out of the LCM leader. nutanix@cvm:~$ lcm_leader LCM framework status: We do NOT see the following log just after the last genesis restart. nutanix@cvm:~$ grep -e 'GENESIS START' -e 'Calling LCM framework to start' ~/data/logs/genesis.out*
There are several reasons why the last genesis restart might not start the LCM framework. One example is that the service start procedure was interrupted due to memory shortage on the CVM, but the causes are not limited to this. ## genesis.out on affected CVM (these are logs after restarting only genesis) After resolving the root cause, restart genesis on the affected CVM to start the LCM framework again. nutanix@cvm:~$ genesis restart It is expected that the stuck inventory task fails once the LCM framework starts properly. Run the inventory again and confirm that it succeeds.
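To confirm that the restart actually brought the LCM framework up on every CVM (a sketch based on the same genesis.out messages referenced above), check the startup markers cluster-wide after the genesis restart: nutanix@cvm:~$ allssh "grep -e 'GENESIS START' -e 'Calling LCM framework to start' ~/data/logs/genesis.out | tail -2" Each CVM should now show a 'Calling LCM framework to start' entry following its most recent 'GENESIS START' entry.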
KB6736
Server Imaging using Phoenix ISO
This article describes how to image a server using a thin Phoenix ISO.
NOTE: This workflow is specifically written for GSK ( HPE-335 https://jira.nutanix.com/browse/HPE-335). For all other customers, refer to the Phoenix Guide on the Portal. IMPORTANT: This KB needs to stay internal only. This article describes how to image a server using a thin Phoenix ISO.
Perform the following procedure to image a server using a thin Phoenix ISO image. Establish an SSH connection to the Foundation VM. Enter the following commands: $ cd /home/nutanix/foundation/lib/phoenix/ Copy the squashfs.img file to an HTTP server of your choice. $ scp ./squashfs.img <HTTP_server> Enter the following command. There is no need to keep a local copy of squashfs.img. $ rm -f ./squashfs.img Edit the isolinux.cfg file: $ vim ./boot/isolinux/isolinux.cfg Change the menu type value of default to Installer. For example: default Installer Scroll down to the Installer menu section. For example: label Installer Go to the end of the existing append line and add the following parameters, leaving the existing parameters intact. The append line must be a single line. LIVEFS_URL=<url>. The complete URL of the path where you staged the squashfs.img file. PHOENIX_IP=<ip_address>. Enter a static IP for Phoenix to use while imaging. Do not add this parameter if you want Phoenix to use DHCP instead. MASK=<subnet_mask>. Enter a subnet mask for Phoenix to use while imaging. Do not add this parameter if you want Phoenix to use DHCP instead. GATEWAY=<default_gateway>. Enter the default gateway for Phoenix to use while imaging. Do not add this parameter if you want Phoenix to use DHCP instead. FOUND_IP=<ip_address>. This is usually the IP address of the Foundation VM. Since GSK does not use a Foundation VM, and this parameter cannot be empty, put any IP address here that is pingable, for example, the default gateway. AZ_CONF_URL=<url>. The complete URL of the path where you have staged your answer file (JSON format). For example: label Installer Save your changes to the isolinux.cfg file. To create a customized Phoenix ISO, enter the following commands: $ cd /home/nutanix/foundation/lib/phoenix/phx_temp The newly created custom_phoenix.iso will be located at /home/nutanix/foundation/lib/phoenix/custom_phoenix.iso. Create a JSON answer file and stage it on an HTTP server. After staging the answer file and other payload files on the HTTP server, plug your Phoenix ISO into the system using virtual media APIs as per the platform vendor's instructions or APIs. Instruct the server to perform a one-time boot to CD-ROM and reboot the server. You can set up the one-time boot to CD-ROM and reboot the server using a vendor-agnostic Linux utility called ipmitool, as shown in the example after this procedure. After Phoenix boots, the CVM (Controller VM) or AOS bits and the hypervisor content are installed: either the hypervisor ISO is installed along with the hypervisor first-boot scripts, or only the first-boot scripts are installed when a hypervisor was already installed in advance. After the installation is completed (unless exceptions are encountered), Phoenix will gracefully reboot the system into the hypervisor to register and set up the Nutanix Controller VM. Note: Ensure that your boot settings are set to M.2 with legacy boot. Configure an HTTP server of your choice to receive POST messages that Phoenix posts back to provide log updates and progress. This allows callbacks to automate post-installation operations such as cluster creation. In the answer file, provide the monitoring_url_root and ssh_key to receive callbacks and use the desired SSH private key for keyless SSH access to any CVM and execute the cluster create command.
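As an illustration of the one-time boot step (a hedged sketch; the BMC IP, credentials, and interface type are placeholders and your platform vendor's tooling may differ), the one-time boot device and reboot can be set with ipmitool as follows: $ ipmitool -I lanplus -H <bmc_ip> -U <bmc_user> -P <bmc_password> chassis bootdev cdrom $ ipmitool -I lanplus -H <bmc_ip> -U <bmc_user> -P <bmc_password> chassis power cycle Consult the vendor documentation if the node does not boot from the virtual CD-ROM after these commands.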
After this command executes successfully, the SSH key is destroyed and the CVM is locked down. For later SSH sessions needed for CVM or cluster access, log in to Prism and add your key via the Prism interface provided in the Cluster Lockdown menu.
KB11835
1-click Hypervisor upgrade from ESXi 6.7 U3 or later to 7.0 U2a is not installing the correct i40en driver
1-click Hypervisor upgrade from ESXi 6.7 U3 or later to 7.0 U2a is not installing the correct i40en driver. The article provides steps to update the driver to the correct version.
1-click Hypervisor upgrade from ESXi 6.7 U3 or later to 7.0 U2a is not installing the correct i40en driver. This article applies to clusters upgrading to ESX7.0 U2a and NX HW models only.Symptoms ESXi hypervisor has been upgraded from ESXi 6.7 U3 or later to ESXi 7.0 U2a.You have an active "Intel(R) Ethernet Controller" NIC connected to ESXi hosts with a driver name starting with i40. [root@ESXi01:~] esxcli network nic list The network driver name changed from i40en to i40enu(problematic driver) after the ESXi upgrade, as shown below. The driver name change may lead to Network connectivity issues on the respective SPF+ uplink ports. Before upgrade: [root@ESXi01:~] esxcli network nic list After upgrade: [root@ESXi01:~] esxcfg-nics -l Affected ESXi upgrade paths. [ { "Source ESXi version": "ESXi 6.7 U3", "Network driver(Name: Version) Before upgrade": "i40en: 1.9.5-1OEM.670.0.0.8169922", "Target ESXi version": "ESXi 7.0U2a", "Network driver(Name: Version) After upgrade": "i40enu: 1.8.1.136-1vmw.702.0.0.17867351" }, { "Source ESXi version": "ESXi 7.0", "Network driver(Name: Version) Before upgrade": "i40en: 1.10.6-1OEM.670.0.0.8169922", "Target ESXi version": "ESXi 7.0U2a", "Network driver(Name: Version) After upgrade": "i40enu: 1.8.1.136-1vmw.702.0.0.17867351" }, { "Source ESXi version": "ESXi 7.0U2", "Network driver(Name: Version) Before upgrade": "i40en: 1.10.6-1OEM.670.0.0.8169922", "Target ESXi version": "ESXi 7.0U2a", "Network driver(Name: Version) After upgrade": "i40enu: 1.8.1.136-1vmw.702.0.0.17867351" } ]
Solution: The problem is fixed in AOS 6.0.2, 5.20.2, or later. To avoid the problem, upgrade your cluster to AOS 6.0.2, 5.20.2, or later before upgrading ESXi to 7.0 U2a. Workaround: To work around the problem, the i40enu driver needs to be replaced with the i40en driver as per the following steps. Note: the workaround does not apply to non-Nutanix platform clusters, such as Dell XC series clusters, because the VIB in the latest Foundation directory location (under AOS 5.20.3.5, for example) lists a 2.x driver, which is not qualified yet. Copy the latest i40en VIB from a CVM to the ESXi hosts with the problematic NIC driver (i40enu) (see the staging example after this section): nutanix@CVM:~$ ls -al /home/nutanix/foundation/lib/driver/esx/vibs/*/i40en-1.10.6* Install the i40en VIB on the ESXi hosts: [root@ESXi01:~] esxcli software vib install -v /vmfs/volumes/NTNX-local-ds-*/i40en-1.10.6-1OEM.670.0.0.8169922.x86_64.vib Restart the ESXi hosts. Refer to Restarting a Nutanix Node https://portal.nutanix.com/page/documents/details?targetId=vSphere-Admin6-AOS-v6_5:vsp-node-restart-vsphere6-t.html for instructions. If you need to restart multiple hosts, verify that the rebooted node is back online in the cluster before rebooting the next node. After the reboot, verify that the i40en driver was installed properly: [root@ESXi01:~] esxcli software vib list | grep -i i40en Contact Nutanix Support http://portal.nutanix.com if you need any assistance.
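For reference, a hedged example of staging the VIB on a host (assuming root SSH access is enabled on the ESXi host; the host IP and local datastore name are placeholders, and the source path should match the Foundation version present on your cluster): nutanix@CVM:~$ scp /home/nutanix/foundation/lib/driver/esx/vibs/*/i40en-1.10.6-1OEM.670.0.0.8169922.x86_64.vib root@<esxi_host_ip>:/vmfs/volumes/<NTNX-local-datastore>/ Repeat for each affected host, then continue with the esxcli install command shown above.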
KB1252
Root Cause Analysis of CVM Reboots
This article describes how to troubleshoot and perform root cause analysis when a CVM (Controller VM) suddenly reboots.
This article describes how to troubleshoot and perform root cause analysis when a CVM (Controller VM) suddenly reboots. Logs to look for inside the CVM: dmesg Logs to look for on AHV host: /tmp/NTNX.serial.out.0 Logs to look for inside the ESXi: /vmfs/volumes/NTNX-local-ds-<serial>-<pos>/ServiceVM_Centos/ServiceVM_Centos.0.out To review memory/cpu usage/disk latency of the CVM at the time of the reboot, the sysstats under /home/nutanix/data/logs/sysstats logs can be reviewed. Note that the logs are in UTC timestamp. /home/nutanix/data/logs/sysstats/meminfo.INFO
Examples CVM command last reboot: nutanix@cvm$ last reboot Logs on the CVM, /var/log/messages and kern.log: Dec 23 09:40:06 NTNX-CVM-A kernel: fioinf Waiting for /dev/fct0 to be created ESXi logs /vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/ServiceVM*/vmware.log: 2013-12-23T17:35:25.959Z| vcpu-0| I120: CPU reset: soft (mode 1) "Restart Guest OS" on the CVM initiated from vCenter results in the following signature in the CVM's vmware.log (note that this entry does not occur in the vmware.log if the CVM has been gracefully restarted from within the Nutanix cluster via AOS upgrade or the cvm_shutdown command): 2022-03-01T23:24:30.638Z| vmx| I125: Tools: sending 'OS_Reboot' (state = 2) state change request "Shutdown Guest OS" on the CVM initiated from vCenter results in the following signature in the CVM's vmware.log (note that this entry does not occur in the vmware.log if the CVM has been gracefully shut down from within the Nutanix cluster via AOS upgrade or the cvm_shutdown command): 2022-03-02T00:22:15.448Z| vmx| I125: Tools: sending 'OS_Halt' (state = 1) state change request Another example of vmware.log (based on VMware bug nr. 676321): 2013-07-17T22:35:53.907Z| vcpu-0| W110: MONITOR PANIC: vcpu-7:ASSERT vmcore/exts/hv/vt/hv-vt.c:1933 bugNr=676321 Another vmware.log (EPT misconfiguration - VMware KB 1036775 https://kb.vmware.com/s/article/1036775): 2013-05-03T17:27:43.262Z| vcpu-1| MONITOR PANIC: vcpu-0:EPT misconfiguration: PA b49b405b0 ESXi logs /vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/ServiceVM*/ServiceVM.out.0 show a jbd2/fio driver issue in this example: last sysfs file: /sys/devices/pci0000:00/0000:00:10.0/host2/target2:0:2/2:0:2:0/block/sdb/queue/scheduler For any recent hard drive failure, check the hades.out log. If the SSD is the metadata drive, AOS will force a CVM reboot. Also, if AOS has trouble removing an HDD and a forced removal is triggered by Hades, the CVM will reboot. The output of ServiceVM.out.0 (Bug 735768 https://bugzilla.redhat.com/show_bug.cgi?id=735768): kernel BUG at fs/jbd2/commit.c:353! ESXi vmksummary to see if the ESXi host rebooted: [root@esxi]# grep -i bootstop /var/log/vmksummary.log AHV: System boot logs from the audit logs on the CVM: nutanix@cvm$ sudo grep -i "kmsg started" /home/log/messages Scroll a few lines above to have more information: nutanix@cvm$ sudo grep -i -B 5 "kmsg started" /home/log/messages For newer versions of the CVM, you may have to grep for "rsyslogd.*start" rather than "kmsg started": nutanix@cvm$ sudo grep -i "rsyslogd.*start" /var/log/messages
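As a quick first pass when building the timeline (a minimal sketch using standard tooling already present on the CVMs), the reboot history and current uptime can be compared across all CVMs at once: nutanix@cvm$ allssh "uptime; last reboot | head -3" CVMs whose uptime or last reboot time differs noticeably from the rest of the cluster are the ones to focus the log review on.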
KB7586
NCC Health Check: coreoff_check
NCC 3.9.0. The NCC health check coreoff_check checks if the core saltstate is set to coreoff on Prism Central (PC) or CVMs.
The NCC health check coreoff_check checks if the core saltstate is set to coreoff on Prism Central (PC) or CVMs. Core dumps are useful for isolated troubleshooting but leaving enabled may unnecessarily fill the node /home system partition, and this can affect node stability and availability. Running the NCC check It can be run as part of the complete NCC health checks by running: nutanix@cvm$ ncc health_checks run_all Or individually as: nutanix@cvm$ ncc health_checks system_checks coreoff_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. [Currently available only in Prism Element] Sample output For status: PASS Running : health_checks system_checks coreoff_check For status: FAIL Running : health_checks system_checks coreoff_check Output messaging [ { "Check ID": "Checks if saltstate is correctly set to default coreoff state to ensure services do not produce core dumps unnecessarily." }, { "Check ID": "Core dumps are enabled for services running on CVM or PCVM." }, { "Check ID": "Follow KB7586 to validate and fix coreoff saltstate. Resolution may involve graceful node reboot, so ensure the cluster is otherwise healthy and resiliency status is OK prior to this action." }, { "Check ID": "Core dumps are useful for isolated troubleshooting but leaving enabled may unnecessarily fill the node /home system partition, and this can affect node stability and availability." }, { "Check ID": "A101069" }, { "Check ID": "Core dumps are enabled on this CVM or PCVM." }, { "Check ID": "Core dumps are enabled on the CVM or PCVM with IP [CVM or PCVM]." }, { "Check ID": "This check is scheduled to run every day, by default." }, { "Check ID": "This check does not generate an alert." } ]
Should the check report a FAIL, follow the procedure below to set the saltstate to coreoff: SSH to the affected Prism Central VM [PCVM] or Prism Element VM [CVM]. Run the below command: nutanix@cvm$ ncli cluster edit-cvm-security-params enable-core=false Reboot the PCVMs or CVMs in a rolling fashion. Follow the Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to make sure that the cluster can tolerate a node being down. Do not proceed if the cluster cannot tolerate the failure of at least 1 node. SSH to the CVM / PCVM which you are going to reboot and run the below command: nutanix@cvm$ cvm_shutdown -r now Monitor the CVM / PCVM status via the VM console. Once the CVM / PCVM is back online, wait for the services to start automatically. Follow step 1 before proceeding with the reboot of the next CVM / PCVM. There is a known issue in the NCC-3.10.x series, which will be fixed in a later NCC version: ERR : Failed to execute command ncli cluster edit-cvm-security-params on cvm. ERROR: If you get the above ERR message, check whether the command ncli cluster edit-cvm-security-params executes on the CVM. If it executes, ignore the NCC ERR message. If the command does not execute, engage Nutanix Support. If you require further assistance, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com/.
KB14547
NDB | How to revert database clone state to READY from Refreshing.
NDB | How to revert database clone state to READY from Refreshing.
A database clone can become stuck in the REFRESHING state, which prevents clone removal or any other operation. This KB provides steps to manually revert the clone to the READY state so operations can continue.
The following are the steps to bring a database clone back to the READY state. 1. SSH to the Era server. 2. Using the era CLI, get the ID of the DB or clone that is in the REFRESHING state. era > clone list 3. Update the database or clone status to READY. psql -U postgres era_repos 4. Check that the clone status is back to READY. era > clone list
KB15186
Metro impact due to Prism VIP assignment delays
Metro can be impacted during cluster maintenance or planned cvm reboots where the VIP assignment is delayed.
Due to a timing and timeout issue, it has been found that in some cases LCM upgrades, 1-click upgrades, and planned CVM reboots can result in the VIP of the cluster not being assigned to the new Prism leader in a timely manner. If the cluster is part of a Metro cluster pair, this delay can impact Metro Availability. Note: Clusters using DR Network Segmentation are not affected and do not require the suggested workaround. The issue can surface when the below set of requirements are met: Metro Availability is configured between 2 clusters. The CVM being restarted for maintenance or upgrades is the current Prism leader and Cerebro leader. Rare timing where the CVM is still on the network (CVM is still powered on) while the VIP is being moved to the newly elected Prism leader. Impact: When the Cerebro leader on a Metro cluster changes, the remote cluster uses the VIP to find the newly elected leader. The state of the Metro PD is noted as Remote Unreachable. It is thus not being promoted nor being disabled on the active or the standby site. All I/O on the container used for the Metro PD is stalled, and the VMs hosted on this container will not be able to process any I/O to their vDisks. The VIP is configured on the destination cluster, yet not reachable. Identification: If the above symptoms are seen, the issue can be identified as follows: SSH into a CVM in the cluster which was undergoing maintenance and use the below command to check if there was a delay in assigning the VIP. nutanix@CVM:~$ allssh "grep -B1 'Error creating SSH client:' ~/data/logs/hera.out*" Example of output when a delay in assigning the VIP has occurred; in the example below there is a delay of 1m18s: nutanix@CVM:~$ allssh "grep 'Error creating SSH client: ' ~/data/logs/hera.out*" Example of output when the cluster was not impacted by the issue: nutanix@CVM:~$ allssh "grep 'Error creating SSH client: ' ~/data/logs/hera.out*"
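To see in real time which CVM currently hosts the Prism VIP during the maintenance window (a sketch assuming the default configuration where the cluster external/virtual IP is bound to sub-interface eth0:1 on the Prism leader), run: nutanix@CVM:~$ allssh "ifconfig eth0:1 2>/dev/null | grep 'inet '" If no CVM reports the VIP for an extended period after the Prism leader change, the cluster is in the delayed-assignment window described above.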
Workaround: To prevent the delayed VIP assignment from temporarily disrupting the Metro Protection Domains, all the remote CVM IPs, including the VIP, should be added to the remote site configuration - Remote Site Configuration https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-remote-site-any-configuration-c.html Below is an example of such a configuration: Solution: The delayed VIP assignment is resolved in AOS 6.5.5 and higher. Upgrade to a release which contains the fix. Note: The workaround should be used when upgrading from a release which does not have the fix to a release which does include the fix.
KB10427
Identifying CVE and CESA patches in Nutanix Products
Customers should review the release notes or use the Nutanix Vulnerability Database (NXVD) to review released or pending CVEs.
Customers should review the release notes or use the Nutanix Vulnerability Database (NXVD) to review released or pending CVEs. Note that in some instances, CVEs or other customer-identified vulnerabilities do not apply to a given Nutanix product, and a communication or discussion about vulnerability scanner false positives may be advisable. Customers are advised to reach out to their Nutanix Systems Engineer or Nutanix Support https://portal.nutanix.com for additional assistance. For vulnerability queries, there are often scenarios where additional research is needed: For questions involving the historical mapping of CVEs to Nutanix product releases, customers can access the NXVD (Nutanix Vulnerability Database) application located at this URL: https://portal.nutanix.com/page/documents/security-vulnerabilities/list https://portal.nutanix.com/page/documents/security-vulnerabilities/list. You must log in to the Support Portal in order to use this tool. Note that NXVD does not provide forward-looking guidance regarding CVE scheduling into upcoming releases. The Nutanix patching policy, including target timeframes for releases, is found in KB-4110 http://portal.nutanix.com/kb/4110.
How to research security vulnerability questions: Scenario: You know the CVE Search the NXVD application for a given CVE, such as CVE-2018-18584. The NXVD is updated nightly at this location: https://portal.nutanix.com/page/documents/security-vulnerabilities/list https://portal.nutanix.com/page/documents/security-vulnerabilities/list. You will typically see the CVE alongside a number of other CVEs associated with a CESA. Scenario: You know the CESA number Search the NXVD application for a given CESA. The NXVD is updated nightly at this location: https://portal.nutanix.com/page/documents/security-vulnerabilities/list https://portal.nutanix.com/page/documents/security-vulnerabilities/list. The list of CVEs associated with a given CESA, alongside fix-integrated release(s), will be listed in the NXVD. Scenario: You have not been able to find all of the CVEs, or have other security-related questions. Engage your Nutanix Systems Engineer or Nutanix Support https://portal.nutanix.com. Scenario: Your security scan reports on a vulnerability that should already be patched in their version of a given Nutanix product. It is always possible for Nessus (or other vulnerability scanners) to report a false positive due to the way Red Hat backports fixes into their packages. Engage your Nutanix Systems Engineer or Nutanix Support https://portal.nutanix.com for clarification on false positives.
KB8136
Self Service Restore (SSR) fails due to FNS security policy blocking NGT traffic
Self Service Restore (SSR) fails due to Flow Network Security (FNS) security policy blocking Nutanix Guest Tools (NGT) traffic.
The following error may be seen when opening the Self Service Restore (SSR) page or running the ngtcli command in the guest VM when traffic from the guest VM to the Prism Virtual IP on tcp/2074 is blocked. Error: error executing command; unable connect to data protection service When running the following command: nutanix@cvm:~: ncli ngt list vm-names=XXXX You see that the communication link is inactive. VM Name : XXXXXX Error in nutanix_guest_tools.out: nutanix@CVM$ cat ~/data/logs/nutanix_guest_tools.out | grep -i http Running "telnet <CVM IP> 2074" or "ping <CVM IP>" from the guest VM fails. Verify if the Flow Network Security (FNS) microsegmentation feature is in use, and if so, check any existing 'enforced' security policies for 'discovered' traffic between the guest VM and the Nutanix CVMs that may be blocked inadvertently. Refer to the following documentation for information on the ports and protocols used by NGT and how to discover blocked traffic in FNS. Portal / Software Type : Ports and Protocols https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Nutanix%20Guest%20Tools%20%28NGT%29 Note the "Guest VM to Controller VM communication | tcp/2074" Portal / Flow Network Security https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Network%20Security%20Next-Gen / Allowing Discovered Traffic https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Network-Security-VLAN-Guide:mul-discovered-traffic-allow-grid-view-t.html
If the matching NGT traffic type is detected in an FNS security policy's 'discovered' traffic, then there are a couple of options to consider;Option 1) Put the security policy, where the NGT traffic is shown as block, into 'Monitor' mode. Note: Doing this will effectively disable any blocking actions of the entire security policy which may leave other VMs unprotected until the policy is put back into 'enforce' mode. Only do this option if you understand the implications of moving a policy from enforce to monitor mode.Confirm that the VM can now access SSR snapshots. Use the "ncli ngt list vm-names=XXXX" command from above to confirm if the Communication Link becomes activeIf SSR snapshots/comm link looks OK, reconfigure or add an FNS security policy to allow the VM traffic for NGT and move the policy back to 'enforce' mode. Option 2) Create, or add to an existing, FNS security policy to target the affected VM(s) with the NGT SSR communication issuesAllow the related NGT traffic as per the above "Port and Protocols" documentation for the Guest VMApply the security policyConfirm that the VM can now access SSR snapshots. Use the "ncli ngt list vm-names=XXXX" command from above to confirm if the Communication Link becomes active
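After adjusting the policy, connectivity on the NGT port can be re-verified from the guest itself. A hedged example for a Windows guest (assuming a PowerShell version where Test-NetConnection is available; substitute the cluster Virtual IP or a CVM IP): PS C:\> Test-NetConnection <cluster_virtual_ip> -Port 2074 TcpTestSucceeded should report True once the Flow Network Security policy allows the traffic; then re-run "ncli ngt list vm-names=XXXX" to confirm the communication link is active.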
KB14681
Simultaneous VM migration or HA VM restart on the same host fails for vNUMA enabled VMs
HA fails to restart vNUMA enabled VMs
Issue 1: When a High-Availability event occurs, the VMs restart on one of the other available hosts. Some VMs may attempt to restart on the same host. For vNUMA-enabled VMs restarted on the same host, the operation may fail. Issue 2: When vNUMA-enabled VMs are migrated simultaneously to the same host, the operation may fail. Symptoms: Acropolis logs show an attempt to power on the VMs on the available host performing the vNUMA calculation. 2023-02-12 06:28:17,190Z INFO set_power_state_task.py:913 Powering on VM c1729c1b-08f1-448f-80ae-5969520f2941 on node: 6b0d284e-efed-4ccf-9fd8-c97db39d8338 Eventually it fails with the following log signature: 2023-02-12 06:29:01,433Z WARNING set_power_state_task.py:846 Unable to honor vNUMA pinning, retrying VmSetPowerState c1729c1b-08f1-448f-80ae-5969520f2941 kPowerOn with splatter. Looking at the QEMU logs on the host, the VMs show the same error: 2023-02-12T06:29:00.001203Z qemu-kvm: -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu/c1729c1b-08f1-448f-80ae-5969520f2941,share=yes,size=1181116006400,host-nodes=0,policy=bind: os_mem_prealloc: Insufficient free host memory pages available to allocate guest RAM
This issue is resolved in: AOS 6.5.X family (LTS): AOS 6.5.6; AOS 6.7.X family (STS): AOS 6.7. Upgrade AOS to the versions specified above or newer. The issue happens because AHV attempts to start both VMs on pNUMA node 0 while pNUMA node 3 is empty: 2023-02-12 06:28:18,137Z INFO vnuma_pinning_solver_mixin.py:132 Attempt pinning (strict: True distinct pNUMA: True) of vNUMA VM with 2 vNUMA nodes to pNUMA. Result {0: 0, 1: 1} Those VMs should not be using the same pNUMA node.
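To identify which VMs are vNUMA enabled before planning migrations or maintenance (a hedged sketch assuming the AHV vNUMA node count is exposed through acli output as num_vnuma_nodes), the configuration of a given VM can be checked from any CVM: nutanix@CVM:~$ acli vm.get <vm_name> | grep -i numa A non-zero vNUMA node count indicates the VM is subject to the pinning behaviour described above, so on affected AOS versions avoid migrating several such VMs to the same host simultaneously.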
KB8401
Manage IPMI Configuration Alerts through SMCIPMITOOL
null
To receive an alert for an IPMI event, alert destinations can be configured. Due to an SMC issue, the Alert No. 1 configuration would get reset after an IPMI unit/factory reset. To avoid this issue, skip Alert No. 1 and use Alert No. 2-10 for configuring alert destinations. When a cluster has multiple nodes, it is tedious to log in to the IPMI of every node and update these settings. It can instead be performed using SMCIPMITool from a CVM with the steps below:
When we select an alert and click on Modify, the options below are available: Change directory: nutanix@CVM:~$ cd /home/nutanix/foundation/lib/bin/smcipmitool Command to set the Destination IP: # Syntax: ./SMCIPMITool <ipmi IP> <IPMI username> <IPMI password> ipmi oem x10cfg alert ip <alert No> <IP Address> Command to set Event Severity: There are five levels of Event Severity that can be configured - Disable All, Informational, Warning, Critical, Non-recoverable. # Event Severity Levels: # Syntax: ./SMCIPMITool <ipmi IP> <IPMI username> <IPMI password> ipmi oem x10cfg alert level <alert No> <Event Severity Level> Command to set Email Address: # Syntax: ./SMCIPMITool <ipmi IP> <IPMI username> <IPMI password> ipmi oem x10cfg alert mail <alert No> <Email Address> Command to configure a subject and message: # Syntax: ./SMCIPMITool <ipmi IP> <IPMI username> <IPMI password> ipmi oem x10cfg alert subject <alert No> <Subject>
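Putting the syntax above together, a hedged end-to-end example for one node (the IPMI IP, credentials, destination IP, severity level, and e-mail address are placeholders) that configures Alert No. 2 would look like: ./SMCIPMITool <ipmi_ip> <ipmi_user> <ipmi_password> ipmi oem x10cfg alert ip 2 <alert_destination_ip> ./SMCIPMITool <ipmi_ip> <ipmi_user> <ipmi_password> ipmi oem x10cfg alert level 2 <severity_level> ./SMCIPMITool <ipmi_ip> <ipmi_user> <ipmi_password> ipmi oem x10cfg alert mail 2 <email_address> The same commands can then be repeated against each node's IPMI IP address.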
KB13931
Fixing Inconsistent CVM and PCVM filesystem errors
Internal KB for fixing inconsistent file system errors on the CVM and the PCVM.
http://portal.nutanix.com/kb/5156 http://portal.nutanix.com/kb/5156 Since 5.20, these faults are nearly always nuisance orphan inodes and not actual inconsistencies or corruption. When the check was implemented, we were addressing legitimate kernel issues that caused FS corruption in AOS 5.10 and early 5.15. As of NCC 4.5 we changed the behavior of the check so that it constantly catches old errors, which has increased the number of alerts and cases significantly. ( ENG-470465 https://jira.nutanix.com/browse/ENG-470465) Due to this behavior we have updated the guidance and are investigating the best change to revert/alter the NCC behavior and temper the field impact if possible. In an ideal system, Ext4 filesystem errors should not exist. Make sure to check if the underlying hardware is healthy before proceeding. Use KB-1113 https://nutanix.my.salesforce.com/kA0600000008USr or WF: Disk Debugging https://confluence.eng.nutanix.com:8443/display/STK/WF%3A+Disk+Debugging when checking disk health. We should be making a concerted effort to RCA what caused the error on the filesystem and to gather the data needed to identify any field trends. All of the data points should be in your case comments and be attached to the case. You should have: the partition it occurred on, the current tune2fs output, the error in dmesg at the time of the IO error, any OOMs noticed, and finally any FATALs. Please also include: the hypervisor version, any down-rev disk or HBA firmware, disk health states that were checked, controller stats, and data from the out-of-band management GUI. Specifically, from the IPMI GUI, record entries regarding the health of the disks/controllers. Please also note the uptime, as well as any errors in the CVM serial log, any ungraceful actions like a power outage or someone turning off the CVM from the hypervisor, any DIMM or CPU errors logged in the mcelog, kernel logs or BMC event logs, the output of any fsck runs, whether the fire alarms in the datacenter went off, and whether there was an earthquake around that time. These cases are being looked at; if you need assistance please engage your escalation paths. In AOS 5.20.3 and 6.0.3, code was implemented to set the boot partition to automatically check on every mount, and the Stargate EXT4 partitions already did this when mounted by Hades if there is an error that needs to be cleared. This was also implemented in PC 2022.1 and above. Determine if the errors seen on the CVM are for the Stargate partitions or the non-Stargate partitions (i.e. /, /tmp, /home). Capture dmesg, serial log, tune2fs output, uptime and errors_count for the affected CVM and devices. Also make sure a log bundle was gathered. nutanix@cvm:~ $ uptime Follow the correct resolution guidance in the solution tables below for either the CVM or PCVM. ** This is an internal solution article for the fs_inconsistency_check due to the limit on internal comments size. Please reference KB 8514 http://portal.nutanix.com/kb/8514 for the fs_inconsistency_check descriptions and customer facing guidance.
Troubleshooting GuideThis section applies to EXT4/EXT2 CVM partitions only: Mnt Cnt: Abbreviation for the "Maximum mount count" from tune2fs for the partition.Mnted: Abbreviation for if the partition is a mounted partition on the CVM.SG: Abbreviation for if the partition is a Stargate partition.Last error: The last error indicated in the tune2fs -l output for the partition.7 days refers to the time since the "Last error time" logged in the tune2fs output​​​​ Note: Do not modify the table below. Please contact Brian Lyon for any suggestions related to this table or the Action Plan listing.*** If at any point in the below options it appears that there is ongoing instability that needs to be corrected first.*** If there are ongoing filesystem errors and cleared errors re-occur shortly after or re-manifest then Action Plan 4 will likely be necessary. Please engage STL assistance if you believe this to be the case. For PCVM inconsistent filesystem resolution: First step is always to identify when and why the error occurred if at all possible. This starts with the when which is easiest to determine by referencing the tune2fs output "Last error time" and potentially the first error counter as well. After that thoroughly check the PCVM messages, dmesg, Fatal logs, iostat and the PE cluster logs during the timeframe for problems that might explain the issue. Also specifically document any ongoing oom errors just prior, during or after the logged error.Note: If the file system errors are related to the CMSP pods and their volumes then the below process will not work. Refer to KB- https://portal.nutanix.com/kb/14476 14476 https://portal.nutanix.com/kb/14476 for details on fixing file system errors on CMSP volumes.NOTE: The rolling reboot for scaleout PC must be executed one PCVM at a time to prevent impact to PC availability. If a single PC instance AND customer is using FVN (Flow Virtual Networking), please schedule an outage window to execute the reboot. See KB-16489 http://portal.nutanix.com/kb/16489 for additional details. PC Ver(s): An abbreviation for Prism Central VM version which can be seen with ncli cluster info.Mnt Cnt: Abbreviation for the "Maximum mount count" from tune2fs for the partition.Mnted: Abbreviation for if the partition is a mounted partition on the CVM.CAS: Is an abbreviation for Cassandra disk. These are disks that are mounted beyond the stargate-storage path. Example: Issue: Disk replaced without allowing the disk removal to complete If a disk is replaced without allowing the logical drive removal to complete, the disk may still be mounted and the kernel may still have open handles on the partition.If these errors are seen during or on a node with a drive replacement, please confirm if the errors occurred on the old drive or after the new drive was in use before engaging escalation assistance.In case the old disk that has already been replaced still has a stale mount point as seen with "df -kh", you may need to manually validate the current status [using commands like: "smartctl -x /dev/sd[X], list_disks, ncli disk list-tombstone-entries, edit-hades -p" etc.] and unmount appropriately Verify the tombstone entry has the removed disk serial number intended, to avoid marking the disk online again and remove it if necessary.If unmounting reports that the disk is busy and cannot be unmounted, then find the process that is keeping the dead disk's mount point busy. Investigate further into this as the disk has not been replaced yet or the replacement is incomplete. 
Issue: NCC complains docker container partition has EXT4 fs errors Node x.x.x.187: Identify who is the consumer of the docker container. Below, the example lists this as /dev/sdi, however this can be any device id such as sdj or sdm: PCVM:~$ mount | grep sdi Shutdown the consumer service; in this example the service is calm: (Note: stop the msp_controller service if the consumer is MSP) PCVM:~$ genesis stop nucalm epsilon Unmount the device PCVM:~$ sudo umount /home/docker/docker-latest/plugins/965d0b336b942c1c8b93477c9ae52f8f0b574f0ded6d667c7a4b8fa15711e9d1/propagated-mount/calm-42507f58 Verify there are no consumers of device before running fsck PCVM:~$ mount |grep sdi Discover iSCSI target name using the PE DSIP PCVM:~$ sudo iscsiadm -m discoverydb -t st -p <pe_dsip>:3260 --discover | grep calm-42507f58 Use iscsiadm command to perform iscsi login with the iSCSI target name from previous step PCVM:~$ sudo iscsiadm --mode node --targetname <target_name_from_output_above> -p <pe_dsip>:3260 --login Run fsck on the device PCVM:~$ sudo fsck -y -f /dev/sdi Logout from iscsi session: PCVM:~$ sudo iscsiadm --mode node --targetname <target_name_from_login> -p <pe_dsip>:3260 --logout Start the services back up PCVM:~$ cluster start Wait for a few minutes for the services to be started. Use docker ps -a to verify both are up and running and healthyRepeat the process with other PCVMs, as needed Issue: If the docker container partition has EXT4 fs errors and is a registry mount point Node X.X.X.101: Identify who is the consumer of the docker container. Below, the example listing sdj, it is related to a Registry mount point (vg): PCVM:~$ mount | grep sdj Take a backup of the registry vg backup from related PE (vg-name is e.g. registry-3b9b3de1-5d12-4413-7794-3ebc93b9d485 ): CVM:~$ acli vg.clone <vg-name>-backup clone_from_vg=<vg-uuid> Verify the annotation fields and note the iscsi_target_name field: CVM:~$ acli vg.get <vg-name> On PCVM - Disable docker plugin, stop nucalm/epsilon service, registry service, and msp_controller across all PCVMs: PCVM:~$ allssh docker plugin disable -f nutanix:latest Unmount the device: PCVM:~$ sudo umount <device path> Discover which PCVM is using the Registry VG iscsi target name: PCVM:~$ allssh "sudo iscsiadm -m discoverydb -t st -p <pe_dsip>:3260 --discover | grep <VG_iscsi_target_name>" On that PCVM - use iscsiadm command to perform iscsi login with the iSCSI target name from previous step: PCVM:~$ sudo iscsiadm --mode node --targetname <target_name_from_output_above> -p <pe_dsip>:3260 --login Get the device name associated with this iSCSI target LUN. (as device name may have been changed.) PCVM:~$ ls -l /dev/disk/by-path/ | grep iqn.2010-06.com.nutanix:9bf72f83028952744045c71d7beb29740519b2bd1b3b77aa8b4bee8e74917b02:nutanix-docker-volume-plugin-tgt0 Verify there are no consumers of device before running fsck (e.g. sdj): PCVM:~$ ls -l /dev/disk/by-path/ | grep sdj Run fsck on the affected device on device name obtained from above. PCVM:~$ sudo fsck -y -f /dev/sdj Logout from iscsi session: PCVM:~$ sudo iscsiadm --mode node --targetname <target_name_from_login> -p <pe_dsip>:3260 --logout Start docker plugin, nucalm/epsilon service, registry service and msp_controller across all PCVMs: PCVM:~$ docker plugin enable nutanix:latest Wait for a few minutes for the services to be started. 
Use docker ps -a to verify plugin is running and healthyCan also remove cloned backup if successful checked there are no other EXT issues.[ { "AOS Ver(s)": "6.0.1 to\t\t\t6.0.1.6", "Mnt Cnt": "NA", "Mnted": "NA", "SG": "N", "Actions": "- If the error is less than 7 days old, run through Action Plan 7 and after cluster health is confirmed, proceed with Action Plan 5.\t\t\t- If the error is older than 7 days, follow Action Plan 5 below." }, { "AOS Ver(s)": "5.20+\t\t\t6.0.1.7+", "Mnt Cnt": "Not 1", "Mnted": "Y", "SG": "N", "Actions": "- If the error is less than 7 days old, run through Action Plan 7 and after cluster health is confirmed, proceed with Action Plan 2.\t\t\t- If the error is older than 7 days, follow Action Plan 2 below." }, { "AOS Ver(s)": "5.20+\t\t\t6.0.1.7+", "Mnt Cnt": "Not 1", "Mnted": "N", "SG": "N", "Actions": "- If the error is less than 7 days old, run through Action Plan 7 and after cluster health is confirmed, proceed with Action Plan 3.\t\t\t- If the error is older than 7 days, follow Action Plan 3 below." }, { "AOS Ver(s)": "5.20+\t\t\t6.0.1.7+", "Mnt Cnt": "1", "Mnted": "N", "SG": "N", "Actions": "- If the error is less than 7 days old, run through Action Plan 7 and after cluster health is confirmed, proceed with Action Plan 3.\t\t\t- If the error is older than 7 days, follow Action Plan 3 below." }, { "AOS Ver(s)": "ANY AOS", "Mnt Cnt": "NA", "Mnted": "Y", "SG": "Y", "Actions": "- Regardless of the error age, follow Action Plan 4 below." }, { "AOS Ver(s)": "ANY AOS", "Mnt Cnt": "NA", "Mnted": "N", "SG": "Y", "Actions": "- Regardless of the error age, follow Action Plan 4 below." }, { "AOS Ver(s)": "5.10.11\t\t\t5.10.11.1\t\t\t5.15.2 to 5.15.7\t\t\t5.18.x\t\t\t5.19.x\t\t\t6.0.1 to 6.0.1.6", "Mnt Cnt": "NA", "Mnted": "NA", "SG": "N", "Actions": "- Regardless of the error age, follow Action Plan 5 below." }, { "AOS Ver(s)": "Vers not listed above for:\t\t\t5.10.x\t\t\t5.11.x\t\t\t5.12.x\t\t\t5.15.x\t\t\t5.16.x\t\t\t5.17.x", "Mnt Cnt": "NA", "Mnted": "NA", "SG": "N", "Actions": "- Regardless of the error age, follow Action Plan 6 below." }, { "AOS Ver(s)": "Action Plan 1", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Reboot the CVM to allow the partition to check on mount and the fault to be cleared.\t\t\t3. Confirm the error was cleared and collect the CVM serial log and dmesg." }, { "AOS Ver(s)": "Action Plan 2", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Stop services on the CVM with the affected partition and migrate VMs off the node just to be safe.\n\t\t\tPlace CVM and host in maintenance mode. Ref: KB 4639\n\t\t\tNote: ESXi based hosts will not go into maintenance mode if it has running VMs, this includes the CVM. In this case we can disable DRS and manually migrate the UVMs and then proceed with step 2.b\t\t\t \t\t\t b. genesis stop all\t\t\t\t\t\t3. Confirm this is not a Stargate partition or a partition inside the mirror.\n\t\t\tStargate partitions should be self-evident. If you are unclear how to tell please reach out to an STL.​​​Use lsblk to check on the cvm if the partition is part of a Multi Disk mirror (md).\n\n\t\t\texample:\nsda 8:0 0 447.1G 0 disk\n├─sda1 8:1 0 10G 0 part\n│ └─md0 9:0 0 10G 0 raid\n\t\t\t4. Set the partition flag to check on the next reboot\n\n\t\t\tsudo tune2fs -c 1 /dev/xxx\n\t\t\t5. Reboot the CVM\t\t\t6. 
Confirm that the errors were cleared by looking at the errors_count for the device and the filesystem state in tune2fs\n\t\t\tgrep . /sys/fs/ext4/*/errors_countfor i in `sudo blkid | awk -F : '!/xfs|iso9660/{print $1}'`;do echo $i;sudo tune2fs -l $i | egrep 'Filesystem state|Last checked|Maximum|error|orphan';done\n\t\t\t7. Take the CVM and host out of maintenance mode. Ref: KB 4639" }, { "AOS Ver(s)": "Action Plan 3", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Stop services on the CVM with the affected partition and migrate VMs off the node just to be safe.\n\t\t\tPlace CVM and host in maintenance mode. Ref: KB 4639\n\t\t\tNote: ESXi based hosts will not go into maintenance mode if it has running VMs, this includes the CVM. In this case we can disable DRS and manually migrate the UVMs and then proceed with step 2.b\t\t\t \t\t\t b. genesis stop all\t\t\t\t\t\t3. Confirm this is not a Stargate partition or a partition inside the mirror.\n\t\t\tStargate partitions should be self-evident. If you are unclear how to tell please reach out to an STL.Is lsblk to check on the cvm if the partition is part of a Multi Disk mirror (md).\n\n\t\t\texample:\nsda 8:0 0 447.1G 0 disk\n├─sda1 8:1 0 10G 0 part\n│ └─md0 9:0 0 10G 0 raid1\n\t\t\t4. Set the partition flag to check on the next mount\n\n\t\t\tsudo tune2fs -c 1 /dev/xxx\n\t\t\t5. Check the partition manually with fsck. Make sure to save the output of the check and attach it to your case.\n\n\t\t\tBoot the affected CVM to Phoenix. Please follow KB-3523 to generate a phoenix iso from the cluster. You can mount the iso and make the CVM boot from it by using the process here: https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=STK&title=SVM+Rescue%3A+Create+and+MountPlease note, for a single node cluster, a VNC viewer is needed for this process. Refer to KB-5156 on how to connect to the CVM using a VNC viewer.Once the CVM boots to Phoenix, use lsblk to check the partition number\n\n\t\t\troot@phoenix:~$lsblk \n\n\t\t\t d. After identifying the partition number, run fsck on the affected partition\n\n\t\t\troot@phoenix:~$fsck -TMVfy /dev/md[partition number]\n\t\t\t e. Once the fix has been run boot the CVM out of Phoenix\n\n\t\t\t 6. Confirm that the errors were cleared by looking at the errors_count for the device and the filesystem state in tune2fs\n\n\t\t\tgrep . /sys/fs/ext4/*/errors_countfor i in `sudo blkid | awk -F : '!/xfs|iso9660/{print $1}'`;do echo $i;sudo tune2fs -l $i | egrep 'Filesystem state|Last checked|Maximum|error|orphan';done\n\n\t\t\t7. Reboot the CVM\t\t\t8. Take the CVM and host out of maintenance mode. Ref: KB 4639\n\n\t\t\t**There are instances where you may see the error counts still showing. Please ensure to boot the CVM and re-run the validation commands above to verify errors were cleared.**" }, { "AOS Ver(s)": "Action Plan 4", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Confirm the disk is healthy and stable with smartctl and checking for device errors in dmesg. Check the controller stats for instability. If no hardware issues proceed.\t\t\t3. Go to prism and logically remove the drive from the cluster.\t\t\t4. After the disk successfully removes click repartition and add in prism to create a new partition which will not have an error and add it back to the cluster.\t\t\t5. 
Confirm that the errors were cleared by looking at the errors_count for the device and the filesystem state in tune2fs\n\t\t\tgrep . /sys/fs/ext4/*/errors_countfor i in `sudo blkid | awk -F : '!/xfs|iso9660/{print $1}'`;do echo $i;sudo tune2fs -l $i | egrep 'Filesystem state|Last checked|Maximum|error|orphan';done" }, { "AOS Ver(s)": "Action Plan 5", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Confirm the disk is healthy and stable with smartctl and checking for device errors in dmesg. Check the controller stats for instability. If no hardware issues proceed.\t\t\t3. Recreate the CVM using the single_ssd_repair script. You can find tips on monitoring and troubleshooting the process in KB-5231.\t\t\t4. Upgrade AOS to preferably the latest LTS.\t\t\t\t\t\tIMPORTANT: Please note the passwords for nutanix and admin users before using the single_ssd_repair script and confirm they are in place afterwards. ENG-495806 is open for an issue where these passwords can revert to defaults.\t\t\tIMPORTANT: The ext4 filesystem errors reported by this NCC check on clusters running AOS 6.0 to 6.0.1.7 could be due to Hades/fsck killed due to OOM as explained in FA-99. Please check KB 12204 to rule out this issue." }, { "AOS Ver(s)": "Action Plan 6", "Mnt Cnt": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Confirm the disk is healthy and stable with smartctl and checking for device errors in dmesg. Check the controller stats for instability. If no hardware issues proceed. Perform SVM Rescue followed by the boot_disk_replace script for the affected CVM. This procedure is documented here\n\t\t\tThis script can be run from any working CVM in the same cluster.Shutdown the CVM that is alerting for EXT4-fs errors before running the scripts below.\n\n\t\t\tSingle-SSD nodes (EXT4 errors on /dev/sda1-4) use this syntax to start the script:\n\n\t\t\tnutanix@HEALTHY-CVM:~$ single_ssd_repair --ignore_disk_removal --svm_ip [IP_OF_CVM_WITH_ERRORS]\n\n\t\t\tMulti-SSD nodes (EXT4 errors on /dev/md0-2) use this syntax to start the script:\n\n\t\t\tnutanix@HEALTHY-CVM:~$ single_ssd_repair --ignore_disk_removal --allow_dual_ssd --svm_ip [IP_OF_CVM_WITH_ERRORS]\n\t\t\tCAUTION: On AOS less than 5.20.4 and 6.0.2, DO NOT use the CVM rescue method on a 2-node cluster due to issues with the boot_replace_disk script. REF: ENG-258198." }, { "AOS Ver(s)": "Action Plan 7", "Mnt Cnt": "Check the health of the system, collect data and logs to legitimately attempt to RCA the cause/origination of the error. Please be diligent in documenting, collecting the data, and summarizing the logs checked/ruled out and errors seen so we can isolate field causes. Please also note any OOM events that occur just prior/during the error. 
If you need assistance RCAing or isolating a cause please leverage your escalation paths.\n\t\t\tThoroughly check the CVM messages, dmesg, (both found in /var/log directory) Fatal logs, smartctl, HBA stats, iostat and CVM serial log for anomalies.Check the host for cores, logs and data logged prior/during/after the fs error is reported in tune2fs and/or dmesg from the CVM.\n\t\t\t\tAHV: Check the /var/log/ directory for cores/crash, mcelog, dmesg, CVM serial log, CVM qemu log, messages logsESXi: Check the /var/log and /var/run/ directories for cores/zdump, CVM serial log, CVMs vmware.log, vmkernel, syslog, hostd, vobd\n\t\t\t\t\n\t\t\tReferences for assistance: ESXi log locations, KB 1887; Disk/HBA issues: KB 1113, KB 1203; AHV Dumps: KB 4866, KB 13345" }, { "AOS Ver(s)": "Any Ver", "Mnt Cnt": "Y", "Mnted": "Y", "SG": "N", "Actions": "Ask that they reboot the CVM. Upon reboot the partition should be checked and the fault cleared." }, { "AOS Ver(s)": "Any Ver", "Mnt Cnt": "NA", "Mnted": "Y", "SG": "N", "Actions": "Ask that they reboot the CVM. Upon reboot the partition should be checked and the fault cleared." }, { "AOS Ver(s)": "Any Ver", "Mnt Cnt": "Not 1", "Mnted": "Y", "SG": "N", "Actions": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Stop services on the PCVM with the affected partition.\n\t\t\tgenesis stop all\n\t\t\t3. Confirm this is not a Cassandra partition.\t\t\t4. Set the partition flag to check on the next reboot\n\t\t\tsudo tune2fs -c 1 /dev/xxx\n\t\t\t5. Reboot the PCVM\t\t\t6. Confirm that the errors were cleared by looking at the errors_count for the device and the filesystem state in tune2fs\n\t\t\tgrep . /sys/fs/ext4/*/errors_countfor i in `sudo blkid | awk -F : '!/xfs|iso9660/{print $1}'`;do echo $i;sudo tune2fs -l $i | egrep 'Filesystem state|Last checked|Maximum|error|orphan';done" }, { "AOS Ver(s)": "Any Ver", "Mnt Cnt": "N", "Mnted": "N", "SG": "N", "Actions": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Stop services on the PCVM with the affected partition.\n\t\t\tgenesis stop all\n\t\t\t3. Confirm this is not a Cassandra partition.\t\t\t4. Set the partition flag to check on the next mount.\t\t\tsudo tune2fs -c 1 /dev/xxx\t\t\t5. Check the partition manually with fsck and save the output of the check to attach to your case.\n\t\t\tsudo fsck -TMVfy /dev/sda2\n\t\t\t6. Confirm that the errors were cleared by looking at the errors_count for the device and the filesystem state in tune2fs.\n\n\t\t\tgrep . /sys/fs/ext4/*/errors_countfor i in `sudo blkid | awk -F : '!/xfs|iso9660/{print $1}'`;do echo $i;sudo tune2fs -l $i | egrep 'Filesystem state|Last checked|Maximum|error|orphan';done\n\t\t\t7. Reboot the PCVM" }, { "AOS Ver(s)": "Any Ver", "Mnt Cnt": "Any", "Mnted": "Any", "SG": "Y", "Actions": "1. Confirm the cluster is healthy and no other issues exist on the cluster.\t\t\t2. Stop services on the PCVM with the affected partition\n\n\t\t\tgenesis stop all\n\n\t\t\t3. Run fsck -n /dev/ and capture the output. The -n flag performs a read-only check.\n\n\t\t\t4. Capture the output of the read-only fsck and open a TH to engage an STL.\n\n\t\t\tNote: There must not be any disk IO for fsck -n to return valid results: genesis stop all is not optional." } ]
KB10936
Duplicate scheduled reports generated on PE/PC after Daylight Savings Time (DST) change
Large number of report instances are triggered and corresponding number of e-mails are sent for a scheduled report after switching to daylight savings time (DST).
Prism Central: After the Daylight Saving Time (DST) change, an unexpectedly large number of report instances is triggered by Prism Central. The corresponding number of emails is sent after the DST change where report delivery via email is configured. There is also a mechanism in reporting to retain only a certain number of reports – the report retention policy. In addition to generating reports, you may see multiple deleted report instance tasks. These tasks are removing reports that are beyond the configured retention count. A large number of these tasks causes degraded performance of Prism Central. AOS/Prism Element: On Prism Element (PE), after the Daylight Saving Time change in North America, NCC health checks scheduled in Prism Element will run repeatedly for one hour, producing dozens of duplicate emails. Evidence of this behavior can be observed in the log file ~/data/logs/delphi.out, with repeated messages concerning "Recheduling task xxx name: ncc_schedule_task". These will begin at the scheduled time of the report and repeat for one hour - for example: 2021-03-24 21:00:48,130Z INFO base_task.py:51 Timeout for ncc_schedule_task is 1800
The issue has been fixed on pc.2021.3 for Prism Central and on 5.20 for AOS/Prism Element. For older versions please follow the workaround below: Prism Central: The problem occurs due to code that converts to DST in one place but uses epoch time in another. Currently, there are 3 workarounds for this issue: Change the timezone of Prism Central to the standard time (non-DST time). Remove the Schedule: Edit the report. In the upper left corner click ‘Edit Schedule’. At the bottom of the popup click the option to remove the schedule. Save the changes. This will stop the report generation from continuing. Set up a Schedule with Playbooks (Note: Playbooks require the same license as reports, Prism Pro in this case): Go to Operations > Playbooks from the hamburger menu. Select Create Playbook. Set a trigger - in this case, set a Time Trigger. From the time trigger customization screen, change the radio button to configure a schedule at a recurring interval. Fill in the rest of the details to match the desired schedule frequency. Now you need to add the actions. Click Add action on the left-hand side. Search for Generate Report and click the Select button to add the Generate Report action. Choose the report config you would like to configure the schedule for and fill out any of the details you would like for how this report is generated. Now add one more action and search for the Email action. Fill in the details of the email action and be sure to add the report into the attachment field (click the dropdown and you should see the option to add it). Save and close. Click the enabled toggle to enable the Playbook. AOS/Prism Element: The problem occurs due to code that converts to DST in one place but uses epoch time in another. Currently, there are 2 workarounds for this issue: Remove the Schedule for NCC. Change the timezone of the cluster to the standard time (non-DST time) and do a rolling restart of the cluster.
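To confirm on Prism Element that a cluster is caught in this rescheduling loop before applying a workaround, the delphi log referenced above can be checked directly. The following is a minimal sketch run from any CVM; the grep string matches the log message quoted in the description.
# Count how many times delphi rescheduled the NCC task on each CVM.
# A burst of matches around the scheduled report time indicates the DST loop.
allssh 'grep -c "Recheduling task" ~/data/logs/delphi.out 2>/dev/null'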
KB5439
Windows File Copy Performance
The following article addresses common misconceptions about Windows file-copy operations and highlights ways to increase copy throughput in Windows.
Copying files with Windows Explorer will not depict true storage fabric throughput. NOTE: This article is intended to provide a general overview of the limitations of file-copy operations in MS Windows Explorer. The copy operation is single-threaded, which limits the maximum throughput available to the guest OS. This limitation applies to moving data between disks within a VM or copying data from one VM to another using a network file share. Moving data within the directory structure of a single disk incurs the same penalty. What does single-threaded mean? Being single-threaded means that an application (Windows Explorer in this case) can only do one job at a time. For example, with a 4 GB file, the application cannot copy the entire amount of data into memory and then write to the destination. As a workaround, the application reads the source file in chunks and copies those chunks to the destination. This avoids consuming excess system memory; however, the application now has to perform multiple operations serially to move the data. With newer versions of Windows, e.g. Windows 2016 (build 1607), Microsoft improved how data is handled by the FCM (File Cache Manager) when clearing dirty pages and draining the IO to the disk subsystem. Keep in mind that this only happens when the default disk removal policy "Best Performance" is in use. With the default disk removal policy, "Avg Disk Bytes/Write" can reach a maximum of 32 MB, so every write from Windows to the disk subsystem is optimally up to 32 MB in size. Depending on the hypervisor, this will be cut into smaller chunks at the SCSI level. For AHV we will see a max IO size of 256 KB, while with ESXi we see either 1 MB (LSI Logic SAS) or 512 KB (PVSCI). As a result, more IO is issued in parallel, which should increase throughput. The behavior also differs depending on what hardware is in use, such as All Flash vs. Hybrid or all NVMe. There is a known issue with Windows 2019 and Windows 2022 before the June 2022 Patchday Cumulative Update fix. Please refer to KB-13014 https://portal.nutanix.com/kb/13014 for more information. For more information on measuring Windows File Copy performance, see the following. Using File Copy to Measure Storage Performance https://blogs.technet.microsoft.com/josebda/2014/08/18/using-file-copy-to-measure-storage-performance-why-its-not-a-good-idea-and-what-you-should-do-instead Multi-Threaded Operations - Benefits and Caveats If the application uses multiple threads, the operation can be divided up among the threads to move data in parallel. Parallelism works effectively when you have multiple files to copy (speaking solely of file copy operations). If you only have a single large file to copy, the copy operation will only use one thread unless the application used to copy the file is able to split it into multiple pieces. Other types of applications, such as databases, handle concurrency and parallel IO differently than file copy operations, which in turn yields better IO performance. One caveat to using multi-threaded operations is to not use more threads than your system can support. When this occurs, the system reaches a point of diminishing returns where performance suffers due to thread contention. A guideline for maximum threads is to use no more than 4 times the number of CPU cores assigned to the VM. 
User experience with this parameter may vary depending on physical hardware and hypervisor. Below are a couple of scenarios that can be encountered when copying files in Windows machines and some possible ways to address them (it is implied File Explorer is being used for the operation). Copying multiple files simultaneously is perceived as slow: If you need to copy a large number of files within a Windows machine and you are using Windows Explorer, depending on the number of files and their size the copy operation might be perceived as suboptimal. As already stated above, since Windows Explorer is single-threaded, you are limited to a single thread per copy operation, which will have an impact on the operation. Copying large files between disks delivers low throughput: There can be multiple reasons why this can occur; below are some of the most common ones: If the cluster is a hybrid system, due to ILM it is possible the data being read is not on SSD anymore and was down-migrated to HDD; since the latter is slower media, it is expected for throughput to be lower. This can be checked by going to Prism Element, selecting the VM being worked on, then IO Metrics. Here is an example of a VM where around 42.4% of its data is being read from HDD. Oplog limitations Oplog is similar to a filesystem journal and is built to handle bursty writes (random writes as well as sequential with < 1.5 Outstanding IO), coalesce them, and then sequentially drain the data to the extent store. The majority of workloads do not need any additional tuning related to Oplog. However, workloads that are write-intensive and sustained, like the file copy of bigger files, can potentially fill up Oplog up to its 6GB limit. When Oplog becomes full, data has to be drained in parallel with new data coming in, which could lead to the "Oplog cliff", reducing the throughput of the incoming writes. Oplog has changed over recent releases and, depending on whether All Flash or Hybrid is used, it might behave differently. Please reach out to support for further analysis. Below is an example of what to expect when throughput drops due to Oplog draining. Note: The throughput numbers and the occurrence timeframe of the drop will differ depending on multiple factors (i.e. All Flash vs Hybrid, file sizes being copied, etc.), but in each case, there will be a drop in throughput once Oplog starts draining. NTFS Compression within Windows is enabled: NTFS can compress files using the LZNT1 algorithm (a variant of LZ77). Files are compressed in 16-cluster chunks. With 4 KB clusters, files are compressed in 64 KB chunks. If the compression reduces 64 KB of data to 60 KB or less, NTFS treats the unneeded 4 KB pages like empty sparse file clusters - they are not written. This keeps random-access times reasonable. However, large compressible files become highly fragmented as every 64 KB chunk becomes a smaller fragment. Compression is not recommended by Microsoft for files exceeding 30 MB because of the performance hit. 
For information on how compression is handled in NTFS you can visit the following Microsoft documentation: How NTFS Works https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc781134(v=ws.10)?redirectedfrom=MSDN Although read-write access to compressed files is often, but not always, transparent, Microsoft recommends avoiding compression on server systems and/or network shares holding roaming profiles because it puts a considerable load on the processor. When a large file is being read from disk, large sequential IO sizes (256 KB or larger) and a number of outstanding IOPs are expected. Since compression works at a 64 KB granularity, this will significantly reduce read throughput, as this workload is highly dependent on IO size (throughput = IO size * number of IOPs, meaning the smaller the IO size, the lower the throughput obtained; see the illustrative calculation below). This translates into poor read performance, as throughput is significantly reduced due to the smaller IO size and the IO pattern being random due to file fragmentation. Nutanix storage containers have compression enabled by default, which should provide enough storage savings in the cluster, so compressing already compressed data will not translate into more storage savings. We recommend not enabling compression at the NTFS level on Windows VMs, as there are no real benefits to it since data will be compressed in the storage container, and doing so can have a significant impact on throughput-oriented workloads. Windows File Explorer does not display true read/write speed: When a file is transferred using Windows File Explorer, the graph displayed can be misleading. The graph often initially shows a high transfer rate due to the system caching to memory. In reality, the true transfer rate is much lower and more consistent. Using Performance Monitor and watching Disk Read/Write Bytes/sec is a more accurate means of measuring the true transfer speed. For example, Explorer shows the file has been copied at 1.42 GB/s. However, PerfMon shows a consistent rate of 216 MBps. When the memory cache is full, Explorer shows the write speed decreasing and leveling off. PerfMon shows that the rate has actually not changed. Once the cache is full, the graph begins to represent the true transfer speed, more in line with PerfMon.
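As an illustration of the throughput formula above, the short sketch below computes throughput for a few IO sizes at the same arbitrary, assumed IOPS figure, showing why the 64 KB granularity imposed by NTFS compression lowers read throughput compared to 1 MB IOs.
# Illustrative arithmetic only: throughput = IO size x number of IOPs.
# The IOPS value is an arbitrary example, not a measured number.
iops=2000
for io_kb in 64 256 1024; do
  echo "IO size ${io_kb} KB @ ${iops} IOPS -> $(( io_kb * iops / 1024 )) MB/s"
done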
1. Use Robocopy with multi-threading In addition to Windows Explorer File Copy, Microsoft includes the robocopy application to allow for multi-threaded file copy or move operations. For more information on Robocopy, see the Microsoft Document https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/robocopy. By using the /MT:(x) parameter, the administrator can specify the number of threads to assign to the operation. C:\> robocopy <source> <destination> /MT:{x} /r:5 /w:5 /log:<path_to_log> /tee As mentioned above, if large files that need the best performance are being copied, make sure to use robocopy with the "/j" parameter to copy data using unbuffered IO (skipping the file being cached in memory and writing directly to disk), for example: C:\> robocopy <source> <destination> /j /r:5 /w:5 /log:<path_to_log> /tee NOTE: If you have Files 4.1, you can take advantage of the migration tool built into this version. 2. Enable Flash mode to prevent data from being migrated to HDD Depending on your application, if it is latency-sensitive or throughput-intensive, you might want to consider enabling flash mode on the affected vdisk/VM to prevent ILM from migrating data to HDD. However, before doing so, make sure you have enough SSD space available and that this workload indeed needs to be in the hot tier. Consider the following scenario: copying some files every 1 or 2 months from a particular vdisk used for archiving purposes where data is not often read - does it really matter in this case if the copy operation takes a little longer? Before any decision is made, make sure to read the documentation for Flash mode so you get familiar with all the ins and outs of using it and decide if you indeed need it: Flash Mode https://portal.nutanix.com/page/documents/solutions/details?targetId=BP-2031-Flash-Mode:BP-2031-Flash-Mode 3. Disable NTFS compression on the underlying virtual disk You can disable compression in Windows by opening the properties of the disk and unchecking the "Compress this drive to save disk space" option. Keep in mind this might take a while depending on the amount of data on the disk. Additionally, this might also increase CPU consumption within the VM, so you might want to monitor that closely depending on the number of vCPUs assigned to the VM.
KB8139
LCM inventory failure "The requested URL returned error: 403 Forbidden" due to http proxy caching
LCM inventory task fails. Check if caching is enabled on the proxy. Check Linux environment variables.
Log in to a CVM and type the below command to get the LCM leader: nutanix@CVM:~$ lcm_leader SSH into the LCM leader CVM and verify the below ERROR in ~/data/logs/genesis.out : 2019-09-04 12:05:27 ERROR exception.py:77 LCM Exception [LcmDownloadError]: Failed while downloading LCM Module 'release.smc.gen10.hba_LSISAS3008_AOC.hba_LSISAS3008_AOC_update' from ' Confirm a proxy is configured on the cluster. As per the above log snippet, LCM is unable to download a module, and hence we see "ERROR 403: Forbidden": nutanix@cvm:~$ ncli http-proxy ls The manual download of the LCM master manifest file is successful, but the version reported in the JSON file is not the latest LCM available as per the LCM release notes https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-LCM:Release-Notes-LCM on the Nutanix portal. nutanix@cvm:~$ export http_proxy='http://proxy-xxxx:8080' The LCM page does not show the LCM framework updating to the latest version. From a working system (a system which doesn't use the same proxy), download the master_manifest file again and check the latest LCM version available. nutanix@cvm:~$ curl http://download.nutanix.com/lcm/2.0/master_manifest.tgz -O Because the modules LCM is trying to download have been replaced with newer versions on the Nutanix LCM servers, the download fails with "Could not access the URL: The requested URL returned error: 403 Forbidden"
The customer's proxy is providing an old version of the master_manifest.tgz file because caching is enabled on the proxy. More information on proxy caching can be found here https://docs.trafficserver.apache.org/en/latest/admin-guide/configuration/cache-basics.en.html. Eventually, the issue will be resolved on its own after some time when the proxy detects it has an old version of the master_manifest.tgz file, but we would recommend checking with the firewall/network team to disable caching or add an exception on the proxy for files from 'http://download.nutanix.com/lcm/*'. Nutanix does not recommend configuring the HTTP_PROXY and HTTPS_PROXY environment values for the nutanix user. These environment values may affect LCM behavior. If you have set up these environment variables for the nutanix user, please revert. To check if a proxy is set within the Linux environment variables: Log into a CVM via SSH. Execute the commands below to check the parameters on the entire cluster: nutanix@CVM:~$ allssh "env | grep -i proxy" nutanix@CVM:~$ allssh "cat /etc/environment" Example of a configured proxy: nutanix@CVM:~$ allssh "env | grep -i proxy" Nutanix recommends clearing the setting by running the command below (this will remove the proxy configuration): nutanix@CVM:~$ unset HTTP_PROXY To confirm the issue is resolved, download the manifest file again and check if the version reported in the JSON is the latest. Run a new LCM inventory. This upgrades the LCM framework and the inventory operations should now complete successfully. As a temporary solution, you can also swap between http/https options in LCM settings to bypass the issue. To change the protocol used for downloading LCM, on the LCM page navigate to the Inventory page, select the Settings button, and check/uncheck the "Enable HTTPS" option. The proxy will then download a fresh copy since the URL has changed.
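To demonstrate to the firewall/network team that the proxy is serving a stale copy, the manifest fetched through the proxy can be compared with one fetched directly. The sketch below is only an illustration: the proxy address is a placeholder, and the presence of a "version" field in the extracted manifest is an assumption.
# Fetch the manifest through the proxy and directly, then compare the reported versions.
curl -s -x http://proxy-xxxx:8080 http://download.nutanix.com/lcm/2.0/master_manifest.tgz -o /tmp/manifest_proxy.tgz
curl -s http://download.nutanix.com/lcm/2.0/master_manifest.tgz -o /tmp/manifest_direct.tgz
for f in /tmp/manifest_proxy.tgz /tmp/manifest_direct.tgz; do
  echo "$f"
  tar -xzOf "$f" | grep -o '"version"[^,]*' | head -1   # field name is an assumption
done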
KB14272
Prism login for Admin or Domain users may fail with "Server Unreachable" or sometimes get stuck at the login screen due to hung cluster_sync issue
Prism Login failures have been observed when the cluster_sync service responsible for syncing passwords between CVMs/PCVMs encounters a problem. In such cases, a running task for cluster_sync may be visible in the Ergon task list.
Customers might experience login failure on Prism web console. After entering the admin credentials or domain credentials, the login might get stuck at the login screen OR sometimes it might say "Server Unreachable". NOTE: When encountering this issue, you can SSH to Prism Central IP addresses or CVM addresses using the username "admin" with no issues. In certain cases, we see Prism Login failure because of a cluster_sync related issue. Check ergon tasks for cluster_sync and follow KB-8294 http://portal.nutanix.com/kb/8294 to clear the task if you find one running. nutanix@cvm$ ecli task.list include_completed=no If you don't see any Ergon tasks as above, please proceed in checking for the following signatures.Signatures1. See If you find “401 failure” AND “unknown error” from athena in aplos logs (/home/nutanix/data/logs/aplos.out), Please check if you see "v3/users/info" as returning "401", as in the example below. 2023-01-19 23:32:31,047Z DEBUG athena_auth.py:230 authenticate response authentication_token { NOTE: If you don't see debug logs, you might need to enable debug for aplos and aplos_engine using the commands below. Aplos uses port 9500 and aplos_engine uses 9447. Make sure to set these gflags back to "False" once you are done gathering the logs you need. allssh 'curl http://0:9500/h/gflags?debug=True' 2. You might see a 503 failure as well (aplos.out) 2023-01-19 23:33:02,326Z ERROR athena_auth.py:186 Error: Traceback (most recent call last): 3. Look into athena.debug.log and see if you find any PAM-related failure. DEBUG 2023-01-20T21:25:55,224Z Thread-1 authentication_connectors.basic_authenticators.PAMAuthenticator.authenticate:67 Authenticating against PAM module for admin 4. By default, our PE/PC used PAM authentication. PAM (Pluggable Authentication Modules) is the system under GNU/Linux that allows many applications or services to authenticate users in a centralized fashion. PAM relies on cluster_sync; check to see if you find a similar signature on cluster_sync.out logs as shown below. In this instance, the cluster_sync service did not have any FATALs but was failing silently. 2023-01-20 21:25:25,545Z INFO cluster_sync_monitor.py:250 Status: Cluster sync monitor is running [2897, 2970, 2971, 2972]. File monitor is running. Node monitor is running.
If you are seeing this behavior, please collect logs with debug enabled as described above and attach to ENG-661932 https://jira.nutanix.com/browse/ENG-661932.Restarting cluster_sync across all the CVMs or PCVMs will restore the ability to login to Prism. nutanix@CVM:~$ allssh "sudo /usr/local/nutanix/cluster/bin/cluster_sync restart" NOTE: Restarting cluster_sync on only one CVM/PCVM will not fix the issue.
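Before and after the restart, it can help to confirm that the cluster_sync processes are actually running on every CVM or PCVM. A minimal check, using the standard allssh helper:
# List cluster_sync processes on all nodes; every node should return at least one line.
allssh 'ps -ef | grep [c]luster_sync'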
KB9586
Nutanix Files - FSVM crashing due to slow IO
This KB can assist you in determining the RCA if the FSVM crashed or kernel panicked due to slow I/O on the AOS/storage side of the cluster.
Symptoms of slow backend when the system is overwhelmed by writes. Search the following log files for similar error messages. /home/log/messages/home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core/vmcore-dmesg.txt Some things to look for. Note that with time /home/log/messages rolls over into /home/log/messages.x. "[FS]", "Data flush" (actual numbers might be different) [177057.402707] WARNING: [FS]: Data flush [338048KB] took 354 sec for zpool-NTNX-testnfs111583be-e38a9aa9-e151-436f-976f-d460cd91ed4d-6c4f360c-9455-4f61-b70f-08a126e7a7a2[txg: 18160] You should also always see a "hung task panic" in a slow backend issue. NOTE: the vmcore-dmesg.txt is owned by root, you may need to copy the file and change the permissions. Additionally you may just use the sudo less command to review the file. Example: nutanix@FSVM~:$ sudo less /home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core/vmcore-dmesg.txt
1. High SSD utilization on the AOS cluster. Check the iostat output from the sysstats directory or run the on iostat command on the FSVM. Example iostat: avg-cpu: %user %nice %system %iowait %steal %idle << We can see that most of wait is on incoming I/O If you are troubleshooting this live you can use the curator links page (links http:0:2010) to view the SSD utilization (Links to All Pages -> Misc -> Tier Usage ->SSD-SATA ) as seen from curator. Example log messages: nutanix@FSVM~:$ sudo less /home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core/vmcore-dmesg.txt We may have to ask the customer to reduce the workload on the cluster or do a deep inspection on the AOS side of the cluster to understand the situation on why the cluster is in the current state. One possibility is the situation described in KB-9057 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000Cw8MCAS when block awareness and EC artificially bloat the SSDs. 2. CPU contention on the hosts owning the FSVMs/bully VMs stealing cpu cycles Check the iostat and top logs in sysstats directory on the FSVM, if troubleshooting live you can use the uptime command to view the load average as well. Some examples of the log files: nutanix@FSVM:~$ less /var/log/sysstats/ nutanix@FSVM:~$ less /var/log/sysstats/ #TIMESTAMP 1590064300 : 05/21/2020 05:31:40 AM Additionally, you can see disconnects from the iscsi service from stargate. nutanix@CVM~:$ grep <ip_of_fsvm> stargate.* I0521 08:32:04.662266 10075 tcp_connection.cc:254] Could not read: Connection timed out [110] You will have kernel panics for hung pids/processes. You may more than one core file. Example log messages: nutanix@FSVM:~$ sudo less /home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core/vmcore-dmesg.txt dmesg logs from the FSVM will also show similar messages: nutanix@FSVM~:$ dmesg One solution would be to evalulate the vCPU use on the hosts and cluster. We have a pending engineering ticket that will automatically set the CPU shares and reservations on the FSVM similar to how the CVM is provisioned. We have seen in other customer environments that the default vCPU and memory configurations for the FSVM could be bullied by other guest VMs on the host. Default configuration is 4 vCPU and 12GB RAM. nutanix@FSVM:~$ free -mh nutanix@NFSVM:~$ nproc Example CPU configuration for CVM and FSVM.CVM: sched.cpu.min = "10000" FSVM: vm_type = "kGuestVM" Example configuration on a ESXI environment: 3. Hardware/firmware issues on the hosts causing storage performance issues on the AOS side of the cluster, impacting the FSVM. Going through the logs we see several FSVM crashes relating to Cassandra or other critical services on the FSVM. Example log messages: nutanix@FSVM~:$ sudo less /home/nutanix/data/cores/127.0.0.1.<time_stamp_of_core/vmcore-dmesg.txt Cassandra Fatals from FSVM have the following format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg F0404 11:26:19.541208 35523 wal_cassandra_backend.cc:531] Failure when doing Cassandra describe_ring Use the following knowledgebase articles to troubleshoot and find known issues with LSI/HBA firmware KB-7060 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000PVRiCAO and KB-8896 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CuzvCAC
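To quickly narrow down which FSVMs logged the slow-flush signature and whether cores were generated, the log locations listed in this article can be swept from any FSVM. This is a minimal sketch and assumes the allssh helper is available on the FSVMs.
# Find FSVMs whose messages files contain the slow zpool flush warning.
allssh 'sudo grep -l "Data flush" /home/log/messages* 2>/dev/null'
# Check for recently generated core/vmcore files on each FSVM.
allssh 'ls -l /home/nutanix/data/cores/ 2>/dev/null | tail -5'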
KB17164
The 13.11 and 16.6 version NVIDIA driver bundles contain old driver versions
Bundles for asynchronous drivers 13.11 and 16.6 were posted on the portal for various AHV releases. However, these bundles contain incorrect driver versions, 13.9 and 16.5, respectively.
Bundles for asynchronous drivers 13.11 and 16.6 were posted on the portal for various AHV releases. However, these bundles contain incorrect driver versions, 13.9 and 16.5, respectively. This discrepancy may cause a driver downgrade if a specific AHV upgrade is performed (e.g., upgrading from 20220304.488 with driver 13.10 to 20220304.504 with driver 13.11). It will not cause any issues if only the driver is upgraded without upgrading the AHV version.
Impacted releases: There are two workflows through which drivers can be changed: Driver upgrade without an AHV upgrade; Driver upgrade as part of an AHV upgrade. In the first workflow, if you upgrade the driver (thinking you are upgrading from version 13.9 to 13.11 or from 16.5 to 16.6), you will not observe any change in the driver version. While this should not have any operational impact, there will be a discrepancy between the expected and actual driver versions. In the second workflow, you might inadvertently downgrade the driver in a specific upgrade path. For example, if you were using AHV version 20220304.488 with a 13.10 driver and noticed that driver version 13.11 is available for AHV version 20220304.504, you might attempt to upgrade from 20220304.488 (with the 13.10 driver) to 20220304.504 (expecting the 13.11 driver). However, after the upgrade, the driver version would be 13.9, which is a downgrade from 13.10. We have removed the bundles with the incorrect driver versions from the portal and updated the compatibility matrix https://portal.nutanix.com/page/documents/compatibility-interoperability-matrix to reflect these changes.[ { "AHV version": "20220304.504", "Already supported driver version": "13.9", "Incorrect driver version in the posted async bundles": "13.9", "Expected driver version in the latest async release": "13.11" }, { "AHV version": "", "Already supported driver version": "16.5", "Incorrect driver version in the posted async bundles": "16.5", "Expected driver version in the latest async release": "16.6" }, { "AHV version": "", "Already supported driver version": "17.0", "Incorrect driver version in the posted async bundles": "N/A", "Expected driver version in the latest async release": "N/A" }, { "AHV version": "20230302.100187", "Already supported driver version": "13.10", "Incorrect driver version in the posted async bundles": "13.10", "Expected driver version in the latest async release": "13.11" }, { "AHV version": "", "Already supported driver version": "16.5", "Incorrect driver version in the posted async bundles": "16.5", "Expected driver version in the latest async release": "16.6" }, { "AHV version": "", "Already supported driver version": "17.1", "Incorrect driver version in the posted async bundles": "N/A", "Expected driver version in the latest async release": "N/A" }, { "AHV version": "20230302.3002", "Already supported driver version": "13.10", "Incorrect driver version in the posted async bundles": "13.10", "Expected driver version in the latest async release": "13.11" }, { "AHV version": "", "Already supported driver version": "16.5", "Incorrect driver version in the posted async bundles": "16.5", "Expected driver version in the latest async release": "16.6" }, { "AHV version": "", "Already supported driver version": "17.1", "Incorrect driver version in the posted async bundles": "N/A", "Expected driver version in the latest async release": "N/A" } ]
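To verify which NVIDIA host driver version is actually running on each AHV host before and after an upgrade, the hosts can be queried from a CVM. This is a sketch and assumes the NVIDIA host driver (and therefore nvidia-smi) is installed on the GPU hosts.
# Report the loaded NVIDIA driver version on every host via the hostssh helper.
hostssh 'nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -1'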
KB17023
Upgrading File Analytics 2.0.x using LCM fails in the pre-checks phase
Upgrading File Analytics 2.0.x using LCM fails in the pre-checks phase
In File Analytics 2.0, the Python packages are not located in the directory /mnt/container/libs/py. The env file imported by the Python executables copied over (from LCM code) and run on the FAVM adds packages present in /mnt/containers/libs/py to the sys.path list. Since elasticsearch is not present in /mnt/container/libs/py, it threw an import exception. On the LCM leader CVM, the following entry is logged in lcm_ops.out: 2024-05-22 16:13:24,181Z ERROR 43772304 helper.py:145 (172.16.x.x, update, 27e3e1d1-abcd-1234-7d7c-38ea0b1edfd0) EXCEPT:{"err_msg": "Upgrade failed: Traceback (most recent call last):\r\n File \"/opt/nutanix/upgrade/bin/run_upgrade_tasks.py\", line 22, in <module>\r\n from upgrade.utils import utils\r\n File \"/opt/nutanix/upgrade/bin/../py/upgrade/utils/utils.py\", line 18, in <module>\r\n from elasticsearch import Elasticsearch, exceptions as es_exceptions\r\nImportError: No module named elasticsearch\r\nPre upgrade checks failed for file analytics\r\n", "name": "File Analytics", "stage": 1}
Creating symlinks in /mnt/container/libs/py for all packages present in /opt/nutanix/analytics/avm_management/build/libs makes the upgrade work. Use the following steps to create the symlinks. 1. Create the "upgrade_from_200.sh" file on the FA VM at the "/home/nutanix" path. Copy the below script into it: src_dir="/opt/nutanix/analytics/avm_management/lib/thirdparty" 2. Run the script with the command: sh upgrade_from_200.sh. For symlinks that already exist, errors might be seen during execution, which can be ignored. Sample script execution: [nutanix@FAVM ~]$ sh upgrade_from_220.sh 3. Ensure symlinks are created at /mnt/containers/libs/py/: [nutanix@FAVM ~]$ ll /mnt/containers/libs/py/ total 1472 4. Trigger the FA upgrade to the latest build from PE. 5. Post-upgrade, delete the script file created at the /home/nutanix location on the FA VM. 6. Perform some audit operations and ensure you are able to view them on the FA UI.
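For reference, the script referred to in step 1 essentially links every third-party package from the source directory into /mnt/containers/libs/py. The snippet below is only an illustrative sketch of that approach, with an assumed destination path; use the actual script content provided above or by Nutanix Support on real systems.
# Illustrative sketch only - not the official script.
src_dir="/opt/nutanix/analytics/avm_management/lib/thirdparty"
dst_dir="/mnt/containers/libs/py"
for pkg in "${src_dir}"/*; do
  # Create a symlink for each package; errors for existing links can be ignored.
  sudo ln -s "${pkg}" "${dst_dir}/$(basename "${pkg}")"
done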
KB15717
Nutanix Files Locking database bloat, causing share latency
SMB share access sluggishness and smbd crashes generating smbd cores
Customers may report performance issues accessing SMB shares after upgrading to Nutanix Files 4.4.x. Validate the problem in the customer's environment as follows. 1) The locking.tdb file is nearing or above its 4 GB threshold. On one of the FSVMs run the following command. nutanix@NTNX-FSVM: allssh sudo ls -lah /home/samba/lock/locking.tdb nutanix@NTNX-FSVM:/home/log/samba$ allssh sudo ls -lah /home/samba/lock/locking.tdb 2) A large number of file locks is seen in the following command output. nutanix@NTNX-A-FSVM:/home/log/samba$ sudo smbstatus -L --fast | wc -l 3) The smbd core dump shows NT_STATUS_UNSUCCESSFUL [email protected]:~/tmp$ sudo gdb -ex "thread apply all bt" $(which smbd) smbd-cbe4.core.98636.6.20231025-172229Z #3 0x00007fad07588166 in smb_panic_s3 (why=0x7facfd2d2040 "could not store share mode entry: NT_STATUS_UNSUCCESSFUL") at ../source3/lib/util.c:881 4) A snippet of /home/log/samba/smbd.log shows "Buffer Size Error" and "unclean exit" 2023-10-23 13:19:10.882522Z 2, 29234, rlimit.c:466 smbd_rlimit_report
If the problem is ongoing: 1. This step needs to be done on only one FSVM (any FSVM). There is no need to repeat it on other FSVMs: afs smb.set_conf 'smbd locking tdb hash size' 10007 section=global 2. The following steps need to be repeated on each FSVM, one at a time, ensuring that SMBD is UP and healthy before moving on to the next FSVM. 3. Move the corrupted locking.tdb file: mv /home/samba/lock/locking.tdb /home/samba/lock/locking.tdb.corrupt 4. Terminate smbd: sudo killall smbd ; sleep 30 5. Run a cluster-wide SMB health check: afs smb.health_check The output of this command should indicate that all health checks are PASSED. Example: <afs> smb.health_check 6. Run an FSVM-specific SMB health check: <afs> smb.health_check fsvm_ip=<FSVMIP> Cluster Health Status: Note: If the problem is not ongoing but you would like to proactively implement the workaround, there is no need to do the step mv /home/samba/lock/locking.tdb /home/samba/lock/locking.tdb.corrupt
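After applying the change, a simple monitoring loop on an FSVM can confirm that locking.tdb stays well below the 4 GB threshold and that the lock count remains reasonable. A minimal sketch using the same commands as the identification steps above:
# Record locking.tdb size and open lock count every 10 minutes on the local FSVM.
while true; do
  date
  sudo ls -lah /home/samba/lock/locking.tdb
  sudo smbstatus -L --fast | wc -l
  sleep 600
done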
KB11415
ESXi host failed to boot with Fatal error: 11 (Volume Corrupted)
When booting into the ESXi customer encountered Fatal error: 11 (Volume Corrupted)
When booting the ESXi host, the following error message is displayed during the boot process. It appears that the volume/SATADOM is corrupt: Error loading /sb.v00 Even on reboot, the same error is seen.
Validate the health of the SATADOM by booting the node from a Phoenix ISO and running smartctl, then perform Install and Configure Hypervisor.
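A minimal sketch of the validation from Phoenix, assuming the SATADOM enumerates as /dev/sdX (confirm the actual device name with lsscsi first):
# Locate the SATADOM device, then review its SMART health and error counters.
lsscsi
smartctl -a /dev/sdX | egrep -i "model|firmware|health|error"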
KB6872
NCC Health Check: recovery_plan_incompatible_target_availability_zone_check
NCC 3.7.1. The NCC health check recovery_plan_incompatible_target_availability_zone_check raises an error if the Recovery Plan contains VMware VMs as part of the recovery plan associated with a remote Availability Zone that has Nutanix Prism Central version below 5.11.
Note: This health check is retired in NCC 5.0.0 and later. Ensure you are running the latest version of NCC before running the NCC health checks. The NCC health check recovery_plan_incompatible_target_availability_zone_check raises an error if the Recovery Plan contains VMware VMs as part of the recovery plan associated with a remote Availability Zone that has Nutanix Prism Central version below 5.11. This NCC check/Health Check is available only from Prism Central. Running the NCC check This check can be run as part of the complete NCC check from Prism Central by running: nutanix@PCVM$ ncc health_checks run_all Or individually as: nutanix@PCVM$ ncc health_checks draas_checks recovery_plan_checks recovery_plan_incompatible_target_availability_zone_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is scheduled to run every hour, by default. This check will generate an alert after 1 failure. The Prism Central Web console will report a failure if it finds VMs in the recovery plan that are replicated to a remote site that is PC version < 5.11. Once the issue is resolved, the alert will auto-resolve in 48 hours. Example failure output when the NCC check is run from Prism Central CLI: nutanix@PCVM$ ncc health_checks draas_checks recovery_plan_checks recovery_plan_incompatible_target_availability_zone_check Output messaging [ { "Check ID": "Checks if Recovery Plan contains VMware VMs and snapshots for these VMs are replicated to a recovery Availability Zone that doesn't support recovery of VMware VMs." }, { "Check ID": "Target Availability Zone is running Prism Central version less than AOS 5.11 version and hence it does not support the recovery of VMware VMs." }, { "Check ID": "Upgrade the Target Availability Zone to 5.11 or later version, or Remove the entity mentioned in the description of the alert from the Recovery plan." }, { "Check ID": "The VM recovery will fail." }, { "Check ID": "Incompatible Recovery Availability Zones for VMs in the Recovery Plan" }, { "Check ID": "Incompatible Recovery Availability Zones for VMs in the Recovery Plan recovery_plan_name" }, { "Check ID": "Incompatible Recovery Availability Zones for Recovery Plan recovery_plan_name. Recovery of VMs incompatibles_vms on Recovery Availability Zones incompatbile_target_availability_zone_names will fail." } ]
There are 2 possible solutions: Upgrade the Remote Availability Zone to version 5.11 and above. Refer to the upgrade guide on the Nutanix portal https://portal.nutanix.com.If you do not plan to upgrade, you can follow the steps below and remove the entity being reported in the alert from the Recovery plan. Identify the Entity that is being reported in the Alert or the NCC check failure messageUpdate the Recovery Plan listed in the alert message and remove the entity identified in step 1 from this Recovery plan.Note: You cannot have an empty recovery plan. If all entities of the recovery plan need to be removed, delete the recovery plan. This alert, if seen from the Prism Central Web Console, should auto-resolve after you have updated the recovery plan and removed the entity from the recovery plan. If you raise a Nutanix Support case, gather the following command output and attach it to the case: nutanix@PCVM$ ncc health_checks run_all
KB16557
UNAUTHORIZED: project not found error while pushing images to Harbor Registry
UNAUTHORIZED: project not found error while pushing images to Harbor Registry
null
When utilizing a local docker registry as part of your DKP deployment, there is a step where you can utilize the DKP binary to seed images to your local registry: ./dkp push image-bundle --image-bundle ./dkp-v2.5.1/container-images/konvoy-image-bundle-v2.5.1.tar --to-registry $DOCKER_REGISTRY_ADDRESS While seeding you may see error messages such as the following: 2023/09/12 20:01:18 retrying without mount: POST https://harbor-registry.daclusta/v2/harbor-registry/mesosphere/kube-proxy/blobs/uploads/?from=mesosphere%2Fkube-proxy&amp;mount=sha256%3A9fd5070b83085808ed850ff84acc98a116e839cd5dcfefa12f2906b7d9c6e50d&amp;origin=REDACTED: UNAUTHORIZED: project not found, name: mesosphere: project not found, name: mesosphere This appears to indicate that the image was not successfully pushed to your Harbor docker registry, but it is a false positive error message. This does not affect any other Local Registry solution such as Nexus or Artifactory. You can safely ignore these error messages. To determine if you are using a DKP binary affected by this issue, run the ./dkp version command: [tony@rockyadmin cluster-a-ag]$ ./dkp version diagnose: v0.6.3 dkp: v2.5.1 kommander: v2.5.1 konvoy: v2.5.1 mindthegap: v1.6.1 This problem affects DKP 2.5.0, 2.5.1, and 2.6.0. It is fixed in DKP 2.6.1.
KB4175
HW: Procedure to upgrade SL-3ME SATADOM firmware to S170119 - Restores SATADOM performance
Excessive Shift Register Retry activity on the SL-3ME SATADOM can result in degraded SATADOM performance, eventually leading to a host instability/lock-up. This article provides steps to restore the SATADOM’s performance by applying a firmware patch.
Excessive Shift Register Retry activity on the SL-3ME SATADOM may result in host stability issues and degraded SATADOM performance. This article focuses on the remediation steps that restore the performance of SATADOM. Hardware affected: SL-3ME SATADOM model running pre-S170119 firmware.Hypervisors affected: ESXi. ProblemOlder firmware used on the SL-3ME may get into a condition where the firmware needs to utilize retry mechanisms to facilitate reads for different memory cell regions. The ESXi host may then have difficulty communicating with its boot and logging disk, which leads to instability of host services and a potential lockup when ESXi considers the SATADOM unresponsive. The S170119 fix for the SL-3ME prevents the mechanism referred to as a Shift Register Retry from occurring. Confirm if the SATADOM is an SL-3ME ModelUse the following Controller VM command to query the local hypervisor to identify the SATADOM model and the firmware version (abbreviated to S140 on ESXi). ESXi: nutanix@cvm$ ssh [email protected] "esxcli storage core device list | grep -A 4 Path" Or run the following command on all nodes in the whole cluster: nutanix@cvm$ allssh 'ssh [email protected] "esxcli storage core device list | grep -A 4 Path"' Example output: Devfs Path: /vmfs/devices/disks/t10.ATA_____SATADOM2DSL_3ME__________________________20150318AA8111534072 Note: Apply the S170119 firmware settings if the version is S140 or S150.
Perform the SATADOM firmware upgrade manually using an ISO, or if you are running AOS version 4.7.5, the one-click SATADOM Firmware Upgrade can be leveraged. This article guides through an upgrade of the SATADOM firmware using the two methods. Note: One-click and LCM SATADOM firmware upgrade features are qualified for both NX and DELL platforms. The approximate time to upgrade SATADOM firmware is 30 minutes for each node. Steps to update the SATADOM SL-3ME to install S170119 firmware settings using One-Click SATADOM Firmware Upgrade methodThe following update procedure requires SATADOM firmware binary file (S170119.bin) and the metadata file (satadom-sl3me.json). Contact Nutanix Support https://portal.nutanix.com to access the files. Once the binary and metadata files are downloaded, follow the instructions from the Nutanix documentation http://portal.nutanix.com/#/page/docs/details?targetId=Web_Console_Guide-Prism_v4_7:wc_cluster_satadom_firmware_upgrade_wc_t.html. AOS version must be 4.7.5 to use One-Click SATADOM Firmware Upgrade procedure. Note: If a Nutanix cluster has a mix of SL-3ME and non-SL-3ME nodes, all the hosts are restarted into the phoenix mode irrespective of the SATADOM model during One-Click SATADOM Firmware Upgrade workflow. If the SATADOM model is SL-3ME, the firmware is upgraded. Otherwise, the host boots back to the hypervisor without upgrading the firmware. Steps to update the SATADOM SL-3ME to install S170119 firmware settings using the manual ISO method Note: The manual upgrade procedure is only recommended when LCM is unavailable. Contact Nutanix Support for guidance if unsure. The update procedure requires the host to be booted from an ISO file. Contact Nutanix Support https://portal.nutanix.com to access to the ISO file. Before performing the procedure, identify the host IP address, CVM IP address, and IPMI IP address for all nodes in the cluster with CVM: nutanix@cvm$ ncli host ls Check the cluster. Ensure that the cluster tolerates a node shutdown before you proceed. Perform the following procedure on one node at a time. Update the SATADOM firmware on the nodes that require the firmware update. Use the standard preparation steps to ensure the host is restarted after it no longer runs any VMs and the cluster has been confirmed to tolerate one node being down. Steps used to confirm the cluster can tolerate a node being down Follow Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to ensure the cluster can tolerate the node being down. Do not proceed if the cluster cannot tolerate failure of at least 1 node. Performing the SATADOM update to firmware version S170119 Follow the steps noted in Steps used to confirm the cluster can tolerate a node being down #node_being_down.Migrate guest VMs off the node by using the method appropriate for the hypervisor while the CVM is still running. For ESXi, place the host in maintenance mode.Shut down the CVM when that CVM is the only VM running on the host: Put the CVM in maintenance mode. Note: This is not the same as ESXi maintenance mode. This is to prevent the cluster from unnecessarily removing the node out of the cluster and rebuilding group replicas during maintenance. You can lessen the time required for Data Resiliency to be restored when the maintenance is completed. Get the host ID: nutanix@cvm ncli host ls Use the relevant host ID to enable maintenance mode. 
nutanix@cvm ncli host edit id=host-id enable-maintenance-mode=true From the CVM on that node, use: nutanix@cvm cvm_shutdown -P now Gracefully shut down the hypervisor host after verifying the following: On the node host, before shutting down, no VMs must be running. (Wait for VMs to power off. The process might take a few minutes). Verify that the IPMI Web Interface works for the node by entering the IPMI IP address into a browser. The default login for the IPMI web interface is ADMIN:ADMIN. Use a web browser to connect to the IPMI IP address of the node from a system where you have already downloaded the ISO file (the ISO file is provided by Nutanix Support). Open Remote Control > Console Redirection to launch the remote console. Use the Virtual Media > Virtual Storage menu item from that console viewer and select Logical Drive type: ISO File. Select Open Image to browse and open the ISO file. Use Plug-In to mount the ISO on the node. The file name of the ISO you downloaded depends on the link Nutanix Support provided to you and might differ slightly. The node can now be powered on to boot from the ISO. If the Live CD boots and then kernel panics, disable USB 3.0 support in the BIOS. If the Live CD boots and then hangs indefinitely and you have GPU cards in the node, disable the GPU slots through the BIOS. After the node boots up from the mounted ISO, log in as root (no password required). Verify that the SATADOM is visible and present: root# lsscsi The output of lsscsi lists a device that has description SATADOM-SL 3ME. Enter the following commands to apply the fix. root# cd /usr/local/bin Confirm which /dev/XXX is the SATADOM device. Run the following: root# ./FW_3ME.sh /dev/XXX <FW bin file ex:S170119.bin> -b 3600 Note: The FW bin file argument must only contain the name of the binary. No absolute or relative paths must be specified, and /dev/XXX is the SATADOM device. Following is a sample command: root# ./FW_3ME.sh /dev/sda S170119.bin -b 3600 Wait for 20 minutes for the preceding command to complete execution. If you run into a condition where the message "Wrong Model name Generic FCR" appears, power down the node and power it back up. The adapter needs to flush power. Reboot the node. root# reboot Open the Virtual Media menu for the IPMI remote console and click Plug Out to unmount the ISO from the node. The host must complete booting normally into the hypervisor. For ESXi, the host must be taken out of maintenance mode after the host starts to allow the CVM to start. Once the CVM is in the UP state, take the CVM out of maintenance mode. nutanix@cvm ncli host edit id=host-id enable-maintenance-mode=false Caution: Ensure that the CVM is taken out of maintenance mode before putting the next into maintenance mode. A CVM in maintenance mode is equivalent to a CVM being down, and any further CVM going offline might impact storage availability for the entire cluster. Repeat the step for any additional nodes that need to be updated. Only perform the steps on one node at a time and ensure Data Resiliency = OK before proceeding. When finished, perform the same commands referenced in Check 2 #check_2 to confirm that the cluster status is healthy after all activities are finished. After applying S170119 firmware settings on all the SL-3ME SATADOMs in the cluster, perform the following to ensure the newer firmware settings are applied on the SL-3ME SATADOMs. 
Identifying the SATA DOM details Use the following CVM command to query the local hypervisor to identify the SATA DOM model and the firmware version (abbreviated to S170 on ESXi). ESXi: nutanix@cvm$ ssh [email protected] "esxcli storage core device list | grep -A 4 Path" Or to run the following command on all nodes in the whole cluster: nutanix@cvm$ allssh 'ssh [email protected] "esxcli storage core device list | grep -A 4 Path"' Example output: Devfs Path: /vmfs/devices/disks/t10.ATA_____SATADOM2DSL_3ME__________________________20150318AA8111534072 Note: After upgrading SATADOM firmware on ESXi 5.5 hosts, the name of the SATADOM device has been changed breaking the ESXi core dump. This may result in a vCenter error: No coredump target has been configured. Host core dumps cannot be saved. As of ESXi hosts in 5.5 u3b (build 4179633), the core dump partition is automatically picked after a host reboot, so the following procedure is not needed. Before performing the following procedure, check if the core dump correctly points to the new SATADOM device with esxcli system coredump partition get. SATADOM device name from esxcli storage core device list before and after SATADOM firmware upgrades Before Upgrade Devfs Path: /vmfs/devices/disks/t10.ATA_____SATADOM2DSL_3ME__________________________20150318AA81115340C9 After Upgrade Devfs Path: /vmfs/devices/disks/t10.ATA_____SATADOM2DSL_3ME__00_______________________20150318AA81115340C9 Run the following commands in sequence on the ESXi host to resolve the coredump issue: [root@esxi]# esxcli system coredump partition set --unconfigure
KB7581
Ubuntu Desktop 18.04 GUI not loading (fails to boot)
An Ubuntu 18.04 desktop installation may complete booting but show error messages on the screen and the GUI will never appear
On AHV, an Ubuntu Desktop 18.04 install may complete without issues. When using LVM, an error like the one below may be shown after the first reboot: WARNING: Failed to connect to lvmetad. Falling back to internal scanning. If not using LVM, the screen may show messages that appear as: [ OK ] Started session for c174 of user gdm.
Attempt to change which terminal you are using from graphical to text. The shortcut key is alt-f2.The following workarounds may provide assistance: Change the graphics mode to text in the /boot/grub.cfg file by setting "linux_gfx_mode=text", or use the grub command "set gfxmode = text".Remove "xserver-xorg-video-fbdev" with "sudo apt-get remove xserver-xorg-video-fbdev*" Versions of Ubuntu after 19.04 should have this issue resolved. Based on work done by Nutanix to root cause this issue the fix should be provided upstream instead of implementing a workaround. There is more on the suspected bug report on Ubuntu's Launchpad https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1795857.
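One way to persist the text-mode workaround, assuming GRUB defaults are managed through /etc/default/grub on the guest, is sketched below; it sets the Linux graphics payload to text and regenerates the GRUB configuration.
# Set GRUB's Linux graphics payload to text mode and rebuild grub.cfg.
sudo sed -i 's/^#\?GRUB_GFXPAYLOAD_LINUX=.*/GRUB_GFXPAYLOAD_LINUX=text/' /etc/default/grub
grep -q '^GRUB_GFXPAYLOAD_LINUX=' /etc/default/grub || echo 'GRUB_GFXPAYLOAD_LINUX=text' | sudo tee -a /etc/default/grub
sudo update-grub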
If you have a log bundle, the history can be found in the nu_hardware_change.log inside the hardware_info folder. In the example below, the command is run from the root of the logbay bundle where the individual CVM folders are found. In the first event, a drive is registered as "removed". In the second event, we see evidence of a firmware upgrade on one of the NVME drives.
""Title"": ""While performing an NCC upgrade via the Prism UI
NCC may not get upgraded on all or some of the nodes in the cluster\n\n\t\t\tAs part of the upgrade
the NCC tar bundle is copied from/to peer nodes (upgrade checks if the path /home/nutanix/data/ncc/installer/ncc- exists). In rare cases
command line upgrades are not affected.""
the NCC upgrade tar bundle may be attempted to be copied from a node which doesn’t yet have it causing the following issues:\n\n\t\t\t\n\t\t\t\tSilent NCC upgrade failure for the node.\n\n\t\t\t\t\n\t\t\t\t\tPrism does not show a problem for the affected node.\n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\tcluster_health service will be down on the node.\n\n\t\t\t\t\n\t\t\t\t\tThe affected node will have no health checks performed while the cluster_health is not running. This may prevent alert notifications generated by cluster health from occurring.\n\t\t\t\t\t\n\t\t\t\t\n\t\t\t\tNCC will no longer be able to run on the node.\n\t\t\t\t\n\n\t\t\tNote:\n\n\t\t\tLarger clusters have higher probability of hitting this issue.\n\t\t\tIssue only occurs when performing the NCC upgrade from Prism
""Title"": ""Guest VMs might be temporarily unavailable during cluster maintenance on environments with DR features enabled""
KB7537
Genesis crashes with error "CRITICAL ipv4config.py"
Genesis crashes with error "CRITICAL ipv4config.py:332 no, Stack:"
After modifying ifcfg-eth0, genesis crashes with this error in genesis log. 2019-06-04 03:01:44 ERROR sudo.py:25 Failed to load file /var/run/dhclient-eth1.pid, ret 1, stdout , stderr cat: /var/run/dhclient-eth1.pid: No such file or directory This normally means there is a typo in the /etc/sysconfig/network-scripts/ifcfg-eth0 config file. Here is an example of the ifcfg-eth0. GATEWAY="10.241.182.1"
Check ifcfg-eth0 for typos or format issues and correct them. If you cannot see any typo, move the ifcfg-eth0 file to a temp directory and create a new one. Then restart genesis: $ genesis restart
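For reference, a correctly formatted static ifcfg-eth0 typically looks like the hypothetical example below (all values are placeholders; use the addresses appropriate for the CVM being repaired):
DEVICE="eth0"
TYPE="Ethernet"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="10.241.182.10"
NETMASK="255.255.255.0"
GATEWAY="10.241.182.1"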
KB9180
SSH process causes error "Main memory usage in Controller VM or Prism Central VM is high"
Some clusters will get the alert "Main memory usage in Controller VM or Prism Central VM x.x.x.x is high" due to a runaway ssh process.
A memory leak in openSSH can cause high memory alerts and Acropolis instability on any AOS version prior to 5.15.7, 5.20, or 6.0. The CVM memory alert may manifest as the example below: nutanix@cvm$ ncli alert ls Acropolis can also be impacted with similar "Cannot allocate memory" entries in the acropolis.out log files. 2020-08-28 07:05:28 INFO base_task.py:586 Task 0a48c583-bc95-4684-9472-dd4ce2d9089b(VmSetPowerState fbdb28f7-8fff-4e01-a6c6-7cf08d01d411 kPowerOn) failed with message: [Errno 12] Cannot allocate memory To identify the issue top.INFO can be consulted. In the example below, ssh is using 6.5% memory. This is unusual memory usage for ssh. Normally, ssh should be around 10MB "RES" value in top, a very low percentage of total memory. If it is above 1m, it is likely this is the reason for it. Note: By default, the top RES column is measured in kilobytes, unless it is appended by a letter like g (gigabytes) or m (megabytes). Therefore an RES value of 5752 is actually 5752 kilobytes. 15127 nutanix 20 0 2215136 2.0g 3240 S 0.0 6.5 653:20.80 ssh -q -o CheckHostIp=no -o ConnectTimeout=15 -o StrictHostKeyChecking=no -o TCPKeepAlive=yes -o UserKnownHostsFile=/dev/null -o ControlPath=/home/nutanix/.ssh/controlmasters/tmpWJTmz7 -o PreferredAuthentications=publickey -o ControlMaster=yes -N -n [email protected] Note in the above line from top.INFO, the 6th column is the RES value, and 10th column is the %MEM column.On a live cluster, the following command works well which checks for values above 1G: nutanix@cvm$ allssh "grep ssh data/logs/sysstats/top.INFO* | sed 's/^[^:]*://' | awk '\$10>1.0 {print \$1\" \"\$9\" \"\$10}'" These commands do not give the times it happened, but they do show that it happened, so they are great for monitoring. They are also a great way to look for the issue without waiting for the alert to appear.
Resolution The root cause of this issue has been identified in ENG-251443 https://jira.nutanix.com/browse/ENG-251443. There is a small memory leak in the openSSH client which increases memory utilization over time while the SSH session is connected, which is the case for Acropolis connections to the hosts. The rate of the leak varies depending on the cluster load. In lab experiments it has been found to be in the range of a few MBs per day. As of 5.15.7, 5.20, and 6.0 the memory leak has been fixed within the openSSH client. If a cluster is impacted by the issue, proceed with the workaround before upgrading to a fixed version. Workaround Warning: Perform the checks described in KB-12365 https://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis. The workaround if you face the issue described above is to restart the acropolis service on the affected CVM. If several CVMs are affected, always start restarting acropolis on the CVM followers. If the Acropolis leader is also affected, it needs to be restarted last. To obtain the current Acropolis master: nutanix@cvm$ links -dump http:0:2030 | grep Master Restarting acropolis triggers the restart of the ssh process and frees the memory: nutanix@CVM$ genesis stop acropolis; cluster start Note this workaround is temporary in nature as the leak will start again right after the restart of the process. In some instances it has been observed that arithmos can also be impacted by acropolis instability. VM management may not work as expected after restarting acropolis. In these instances arithmos will need to be restarted on the affected node. nutanix@CVM$ genesis stop arithmos; cluster start
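To check whether the leak is building up again after the workaround, the resident memory of the long-lived ssh client processes can be inspected live on every CVM. A minimal sketch (RSS is reported in KB):
# Show the largest ssh processes by resident memory on each CVM.
allssh "ps -o pid,rss,etime,args -C ssh --sort=-rss | head -5"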
KB16755
Category update for a cluster in PC fails with INVALID_OWNER_REFERENCE
Category update for a cluster in PC fails with INVALID_OWNER_REFERENCE
+ Category update tasks for a cluster in PC fail with the below error: + In ~/data/logs/aplos_engine.out logs for the aplos service leader, the user xxx@xxx is not found. 2024-03-19 13:06:06,137Z INFO category.py:184 Looking up category by (name_uuid: 3907810f-9ef4-4f69-9ce6-95d8d457ae10, value: Windows) + In nuclei, when performing a GET call for the impacted cluster, it is observed that the owner_reference field has this user. Please keep a note of the user UUID here. nutanix@PCVM$ nuclei cluster.get <cluster_name> + To check whether the user noted in the error message of the aplos_engine logs is currently present on the PC, use the following command: nutanix@PCVM$ ncli user ls | grep de5dc0a0-21f6-58b3-88fb-8ba6677e2abb
Currently, the owner_reference for the cluster is pointing to the non-existent user, which causes this issue: nutanix@PCVM$ nuclei cluster.get <cluster_name> Verify that the UUID for admin is 00000000-0000-0000-0000-000000000000: nutanix@PCVM$ nuclei user.list | grep admin First, try updating the owner_reference field with the UUID of admin (00000000-0000-0000-0000-000000000000) through the nuclei CLI; if this fails with an error, follow the steps below: nutanix@PCVM$ nuclei cluster.update <cluster_name> metadata.owner_reference.kind=user metadata.owner_reference.uuid=00000000-0000-0000-0000-000000000000 If this user was added or updated recently, collect the logs for this issue and attach the case to ENG-653885 https://jira.nutanix.com/browse/ENG-653885 so that the cause can be investigated. To resolve this issue, use the PUT API call to update the owner_reference field, as described in the steps below. Note that this should be done under the guidance of an STL or DevEx. 1. Perform an update of the owner_reference UUID field using the PUT API call to update the cluster details. The following command opens a spec for the cluster entity in a vi-like editor so the owner_reference UUID field can be updated: nutanix@PCVM$ nuclei cluster.put <cluster_name> edit_spec=true 2. Update the owner_reference UUID to the UUID of admin, i.e. 00000000-0000-0000-0000-000000000000, and save it (wq!): owner_reference: {kind: user, uuid: 00000000-0000-0000-0000-000000000000} <------ 3. The following tasks will be generated: nutanix@PCVM$ ecli task.list 4. Once the above tasks complete, the owner_reference field will be updated to admin and the category update tasks will start working as expected: nutanix@PCVM$ nuclei cluster.get <cluster_name>
KB16117
vDisk Migration Across Storage Containers Stuck
vDisk Migration across storage containers might get stuck for the VMs which have UEFI boot enabled if the field container_id is missing from their nvram config. 
vDisk Migration https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-vdisk-migration-c.html across storage containers might get stuck for VMs that have UEFI boot enabled if the field container_id is missing from their nvram config.
Symptoms
The vDisk migration tasks are stuck: nutanix@CVM~$ ecli task.list limit=5000 |egrep -i "container" The Anduril service log (~/data/logs/anduril.out) reports a watchdog for the task being stuck for some time: 2023-12-23 01:34:51,248Z WARNING task_slot_mixin.py:73 Watchdog fired for task 87b86704-e804-4f34-855d-49660a387e5b (VmChangeDiskContainerTask) (no progress for 870785.19 seconds) The Cerebro service might enter a crash loop with the below signature in cerebro*.INFO*: F20231223 01:29:28.926795Z 32169 extendedtypes.cc:49] Check failed: uuid_parse(uuid_string.c_str(), id) == 0 (-1 vs. 0)
Identification
Get the VM UUID and the task percentage using the ecli task.get <task_UUID> command for the parent task: nutanix@CVM:$ ecli task.get 87b86704-e804-4f34-855d-49660a387e5b | egrep "entity_id|percentage" From the entity_id fetched in the previous command, get the VM details using acli vm.get <VM_UUID>. Here we observe that the VM has UEFI boot enabled; however, there is no container ID field in the nvram config. nutanix@CVM$ acli vm.get fa4ebdfc-abc8-418b-b420-96d2c981b89d Ideal VM configuration for a UEFI boot VM: boot {
This issue is resolved in: AOS 6.6.X family (STS): AOS 6.6. Upgrade AOS to the version specified above or newer.
Workaround
NOTE: Raise an ONCALL to engage engineering to abort the problematic metaops and related Morphos tasks in Ergon and stabilize Cerebro, after verifying that the data migration has not yet started, i.e. the VM disks are still in the source container. Once there are no stuck tasks pending and the Cerebro service is stable, the following workaround can be used to migrate such VMs (a scripted sketch follows these steps):
- Disable the UEFI boot option to delete the nvram disk config. This operation requires the VM to be powered off: acli vm.update "<VM_Name>" uefi_boot=false
- Now, perform the migration of the vDisk https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_7:ahv-vdisk-migration-c.html to another container. You may power on the VM or not, depending on your use case.
- Re-enable UEFI boot. As before, this can be performed on a powered-off VM only: acli vm.update "<VM_Name>" uefi_boot=true
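For clusters with several affected UEFI VMs, the same three steps can be scripted. The sketch below is illustrative only: the VM name and destination container are hypothetical placeholders, the VM is assumed to be powered off, and it assumes the AOS version in use provides acli vm.update_container for vDisk migration - validate each command manually on one VM before scripting it.

#!/bin/bash
# Minimal sketch - run on a CVM after stuck tasks are cleared and Cerebro is stable.
VM="MyUefiVM"            # hypothetical VM name
TARGET_CTR="target-ctr"  # hypothetical destination container

acli vm.update "${VM}" uefi_boot=false                       # drop the nvram disk config
acli vm.update_container "${VM}" container="${TARGET_CTR}"   # migrate the vDisks
acli vm.update "${VM}" uefi_boot=true                        # re-enable UEFI boot (VM still powered off)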
KB16169
G8 Slow BMC reset after upgrade to 08.01.09 or higher leads to LCM upgrade task failure
G8 BMC reset after upgrade to version 08.01.09 or higher may take longer than expected resulting in LCM upgrade failure. LCM currently cannot automatically recover in this scenario, resulting in node likely being detached from cluster
During a BMC upgrade to 08.01.09 or higher on Nutanix G8 hardware, LCM may report that the BMC upgrade has failed or timed out. The upgrading node may be left in maintenance mode, and additional LCM upgrade attempts will fail during the LCM pre-check phase. LCM logs will likely show signatures similar to the following in lcm_ops.out. The BMC reports a successful upgrade and proceeds with the BMC reset: 2023-09-18 23:40:47,834Z INFO 47882352 helper.py:145 (XX.XX.XX.XX, update, 9e1d3b9b-ef18-4830-493b-c3e0cef9f063) [2023-09-18 23:29:26.602038] BMC updated successfully ipmicfg commands report no errors when queried with raw commands, but IPMI commands continue to fail: 2023-09-18 23:40:47,837Z INFO 47882352 helper.py:145 (XX.XX.XX.XX, update, 9e1d3b9b-ef18-4830-493b-c3e0cef9f063) [2023-09-18 23:32:46.881449] Failed to query BMC status. Error: IPMI command not completed normally. Completion Code=D5h LCM continues to query the BMC via IPMI commands and retries a total of 22 times with a 30-second delay (11 minutes of wait time in total) until the upgrade is reported as failed: 2023-09-18 23:40:47,841Z INFO 47882352 helper.py:145 (XX.XX.XX.XX, update, 9e1d3b9b-ef18-4830-493b-c3e0cef9f063) [2023-09-18 23:40:17.516450] BMC is not up
Nutanix Engineering is aware of the issue and requires additional information (where possible) to triage it. The BMC version will likely show the newer upgraded version 08.01.09 when reviewing the BMC version via the Web GUI or IPMI. See KB-1262 https://portal.nutanix.com/kb/1262 for IPMI commands to check BMC/BIOS versions. Based on the behaviour seen in the field, Nutanix Engineering believes the BMC reset takes longer than the LCM timeout allows before the BMC starts responding to IPMI commands again (a polling sketch to confirm the BMC is back follows below). Gather the following information before removing the host/CVM from maintenance mode to recover cluster resiliency. LCM currently cannot automatically revert maintenance mode operations in this failure scenario. Gather ipmicfg in-band output from the host where the BMC upgrade was reported as failed: root@AHV# ./ipmicfg -summary Gather out-of-band ipmitool output from the host's CVM or any other CVM in the cluster: nutanix@cvm:~$ ipmitool -I lanplus -H <IP_ADDR> -U <USER> -P <PASSWD> chassis power status Ensure the IPMI Web GUI is operational and accessible. Confirm the current BMC version via the GUI. Gather the output of lsmod on the host where the upgrade timed out: lsmod | grep -i ipmi Confirm IPMI license details with sumtool. Additional sumtool details/usage are available in KB-2444 https://portal.nutanix.com/kb/2444: nutanix@cvm:~$ ./sum -i <IPMI_IP_ADDR> -u <USER> -p <PASSWD> -c QueryProductKey Collect System Event Logs (SEL) and Maintenance Event Logs (MEL) manually from the BMC Web GUI.
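Before taking the host and CVM out of maintenance mode, it can be useful to confirm that the BMC has completed its reset and is answering IPMI again. A simple polling sketch (the IPMI address and credentials are placeholders):

nutanix@cvm:~$ until ipmitool -I lanplus -H <IPMI_IP_ADDR> -U <USER> -P <PASSWD> mc info >/dev/null 2>&1; do echo "$(date) BMC not responding yet, retrying in 30s"; sleep 30; done; echo "BMC is responding"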
KB9300
Move : Add source environment fails with "bash: -c: line 0: unexpected EOF"
null
Attempts to add a source cluster fail with "bash: -c: line 0: unexpected EOF". Corresponding log output: I0402 16:36:41.213222 6 hyperv_agent_impl.go:1771] [us-labhyp02.ndlab.local] Fetching the agent's Move IP address...
The operation fails when the password of the account used to add the source cluster to Move contains a single or double quotation mark. Do either of the following to work around this: Use an account whose password does not contain single or double quotation marks. Change the password of the account to one that does not contain single or double quotation marks. This is planned to be addressed in a future version.
KB1695
VEEAM reports: [smb-share] does not support shadow copies
Veeam may report that the SMB share does not support shadow copies. This article gives some suggestions how to rectify this.
A customer is able to successfully install VEEAM and add the Nutanix cluster to the VEEAM console.When they select a VM and run a backup, it fails with the following error message: [12.08.2014 14:17:38] <11> Warning [RTS] [ALICE] Unable to create snapshot (Fileshare Provider) (mode: Crash consistent). At the same time, the stargate.error logs will display error messages that look like: E0812 11:47:15.799341 3210 smb_create_op.cc:243] SmbCreateOp: Pipe wkssvc not supported. Messages have also been seen that only show the "DceRpc: Unable to authenticate".
Verify that the following computer object exists in AD: NX-CLSTR-01-SMB (referenced in the SMB path to the VM). This name will be different for every customer. Regular access to the SMB share with VMs is allowed through the whitelist entries, with no Kerberos authentication being used. However, in order to create a VSS snapshot, Kerberos authentication is required. (Note: explicitly enabling Kerberos from the Prism GUI is NOT required.) The source code of dcerpc.h shows the following: 244 enum class DceRpcAuthService : uint8 { and dcerpc.cc: 370 if (req_auth->type != DceRpcAuthService::kSpnego) { The error message "DceRpc: Unable to authenticate - cannot handle auth type 10" indicates the host is trying to authenticate using NTLM, which can mean 3 things: There is no computer object representing the Nutanix storage in AD. The Nutanix cluster does not think it is joined to the domain. Kerberos is not enabled. Scenario 1: KB-3633 https://portal.nutanix.com/kb/3633 : VEEAM backup fails due to a duplicate UserPrincipleName on the Nutanix Storage Cluster computer object. Scenario 2: Another common reason for this error is the CVM time being more than 5 minutes ahead of the real AD time. This makes Kerberos tickets fail and results in the same error. Correct the time on the CVMs by following KB-4519 https://portal.nutanix.com/kb/4519 (a quick skew check is sketched at the end of this section). Scenario 3: Check whether the Nutanix Storage AD object is enabled. If the Nutanix Storage object is disabled, enable it. Scenario 4: If it still fails, this may be due to stale Kerberos tickets on the Hyper-V hosts. To clear these, use the 'klist' command documented here: http://technet.microsoft.com/en-us/library/hh134826.aspx http://technet.microsoft.com/en-us/library/hh134826.aspx Run this command from one of the CVMs to purge the stale tickets from all hosts in the cluster: allssh "source /etc/profile > /dev/null 2>&1; winsh 'klist -li 0x3e7 purge; klist -li 0x3e4 purge'" Try the backup job again and it should succeed. The DceRpc errors in the stargate.error log should no longer show up. Scenario 5: Make sure the VM configuration version is upgraded to 8.x or later on Hyper-V Server 2016 and later. More details https://helpcenter.veeam.com/docs/backup/hyperv/online_backup.html?ver=110 To check the VM version from Hyper-V PowerShell: Get-VM * | Format-Table Name, Version To update the VM configuration version: Update-VMVersion <vmname> <version number e.g. 6.0, 7.0, etc>
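For Scenario 2, the clock skew can be checked quickly before correcting NTP. A minimal sketch, assuming the CVMs are configured to use the AD/domain time source as an NTP server (compare the reported offsets against the 5-minute Kerberos tolerance):

nutanix@cvm$ allssh date
nutanix@cvm$ ntpq -pn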
KB15890
Cleartext passwords are transmitted via HTTP port 9444 between CVMs and seen on splunk/corelight
Local user and directory service login passwords appear in cleartext on logging servers (Splunk/Corelight) because Mercury load balancers on CVMs communicate over unencrypted traffic. This KB documents the workaround for AOS versions that cannot be upgraded to a fixed version (6.7 and later).
A security audit found cleartext usernames and passwords being transmitted between CVMs over port 9444 via HTTP. When logging in to Prism, local and directory service user passwords appear in cleartext on logging servers (Splunk/Corelight) because the Mercury load balancers on CVMs communicate over unencrypted traffic. The issue is tracked under ENG-490285 https://jira.nutanix.com/browse/ENG-490285 and is fixed in AOS 6.7 or later; customers on AOS 6.5.4+ will need to wait for release 6.7.1. Back-porting this to 6.5.x would be a considerable effort and is currently not available. This KB documents the workaround for AOS versions 6.5.4+ that cannot be upgraded to the fixed version 6.7. The issue is also documented under Resolved Issues in the AOS 6.7 release notes here https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-AOS-v6_7:Release-Notes-AOS-v6_7
This workaround is documented in TH-12669 https://jira.nutanix.com/browse/TH-12669 and ONCALL-16461 https://jira.nutanix.com/browse/ONCALL-16461. In short, the load balancer is disabled for port 9444 via the YAML file so that requests are handled by the local Mercury only; that way, requests never leave CVM boundaries. The caveat is that there will be no load balancing for these APIs, so spurious failures are possible, although unlikely on a healthy system. Check with the customer that they are comfortable with this. The steps below need to be performed on all CVMs.
Steps to perform on CVMs: Open the /home/nutanix/config/ikat_control_plane/control-plane-pe.yaml file. Use the vi editor or any other editor to open the file. Look for the following section in that file: non_load_balanced_ports: Add port 9444 to this list; it should look like the following: non_load_balanced_ports: Save and exit the editor. Perform steps 1 through 3 on ALL CVM nodes (a one-line check to confirm the edit on every node is sketched at the end of this section). Once steps 1 through 3 are applied on all nodes, stop ikat_control_plane on each node (a rolling restart is not required) and perform a cluster start: allssh 'genesis stop ikat_control_plane' To verify, run the following on a CVM node: curl http://0:9462/clusters | grep "port_9444" In the displayed list, the only IP that should be visible with port 9444 is 127.0.0.1. Note that the values may differ for the cluster. ....
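Before restarting the service, the YAML edit can be confirmed on every node in one pass. A minimal check using the same path as in the steps above:

nutanix@cvm$ allssh 'grep -A3 non_load_balanced_ports /home/nutanix/config/ikat_control_plane/control-plane-pe.yaml'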
KB7064
Move VM Migration fails with Error "Invalid VM credentials. AllowUVMOps is set but credentials are not provided."
null
This article describes a scenario where the migration of an AWS instance to AHV using Nutanix Move fails with the error message "Invalid VM credentials. AllowUVMOps is set but credentials are not provided". This issue has only been seen with Windows user VMs and not with VMs running a Linux OS. You can successfully add the source and discover the VM for migration, but the issue occurs when you start the migration of the UVM. Below is the error message you might see in the UI.
As a workaround for this problem in Xtract 2.0.3, follow the steps below: Delete the current migration plan. Create a new migration plan. On the Credentials page, provide the common credentials for Windows. Also provide some dummy credentials for Linux. Try the migration again, and it should succeed. This is a known issue with Xtract 2.0.3 and will be fixed in Move 3.0.
KB2405
Physical disk capacity in Prism
Explanation of the calculations for physical disk capacity as displayed in Prism.
This article answers questions about the disk capacity shown in Prism. It explains the apparent discrepancy between raw physical disk capacity and the corresponding disk capacity displayed in Prism. Key points: Two types of partition structures are used for the drives, and the partition size depends on the drive size. Each disk may show a different disk size in Prism. Each drive has multiple factors of reserved space, and the rest of the capacity is shown as the disk size in Prism (Hardware dashboard > Disk tab). The disk size in Prism comes from "disk_size", as described in the Solution section. Some reservation factors may fluctuate dynamically with the workload and affect "disk_size" as well.
Here we use the NX-1050 to explain the raw physical disk capacity, the corresponding capacity displayed in Prism, and the discrepancy between the two. By default, each node in an NX-1050 has one SSD and four HDDs. The SSD has 400 GB, and each HDD has 1 TB.
SSD capacity
The screenshot below shows the SSD disk properties in Prism. One may ask why Prism only shows 166.98 GiB of capacity for a 400 GB SSD. Several factors contribute to the discrepancy: A disk sold as 400 GB uses base10 sizing; the capacity of the disk in base2 is 372.61 GiB. Prism presents disk capacity in base2. The 166.98 GiB (or whatever is shown in Prism) is the capacity that AOS can provide to VMs after deducting the disk space used by AOS itself. The following diagram illustrates what uses up disk space on SSDs for AOS operations and how the capacity shown in Prism is calculated. Further information on disk size and utilization can be found using the CVM CLI. In the case of the NX-1050, the SSD has 400 GB of raw disk space. Physical disk information can be obtained using the "fdisk" command: nutanix@CVM:~$ fdisk -l /dev/sda In this case, the fdisk command shows the SSD has a capacity of 400.1 GB. Again, this is base10 sizing, which means it is 372.61 GiB in base2 sizing. The partition layout can be seen using the "df -h" command: nutanix@CVM:~$ df -h The CVM itself takes up about 60 GiB of boot disk space.
Note: Starting with AOS 5.19, if the CVM boot SSD is larger than 2 TB, additional space is reserved (as gaps) between the root/home partitions so that they can be increased in a future AOS version. This means that on SSDs larger than 2 TiB, the disk has 3 partitions plus 3 gaps before the 4th (Stargate) partition: 10G(sda1)+20G(gap1)+10G(sda2)+20G(gap2)+40G(sda3)+40G(gap3) = 140 GiB. Here is an example showing the difference in partitioning: the left side is the new logic and the right side is the old logic applied to the same size of SSD (look at the start/end sector numbers for the partitions; partition #4 starts from sector 293603328, which with 512-byte sectors is 150,324,903,936 bytes = 140 GiB). So, on large SSDs formatted with recent AOS versions, the CVM itself can take 140 GiB from the beginning of the disk instead of the 60 GiB currently used by the partitions.
The CVM boot SSD is made up of the following partitions: /dev/sda1 and /dev/sda2 are 2 boot partitions of 10 GiB each. One boot partition is active (/dev/sda2 in this case), and the other is hidden and inactive. /dev/sda3 is the /home partition, which is about 40 GiB. /dev/sda4 is used for AOS operations. In this case, /dev/sda4 has 308 GiB of raw disk space available. However, the ext4 file system uses and reserves some disk space, leaving the available disk capacity at 304.60 GiB. This can be confirmed by running the "zeus_config_printer" command for the disk in question and looking at the statfs_disk_size parameter. nutanix@CVM:~$ zeus_config_printer 2>/dev/null | grep -A11 -B3 "disks/BTTV32430233400HGN" So "statfs_disk_size" is the disk capacity that AOS sees and can use for various AOS operations and VMs. The actual disk capacity shown in Prism is "disk_size" in the above "zeus_config_printer" output. We can also check stargate.INFO, where "statfs_reserved" is equal to "statfs_disk_size" from zeus_config_printer, "oplog_reserved" is equal to "oplog_disk_size", "estore_reserved" is equal to "disk_size", and "ccache_reserved" is equal to "ccache_disk_size".
nutanix@CVM:~$ grep "Metadata disk id=40" ~/data/logs/stargate.INFO|tail -n 1 |sed 's/ /\n/g'|grep -E 'statfs_reserved|oplog_reserved|estore_reserved|ccache_reserved|lws_store_reserved|metadata_reserved|curator_reservation_bytes'|sed 's/,//g'|awk -F'=' '{printf "%s: %s\n", $1,$2}'
AOS reserves 5% of capacity in case there is a "disk full" situation: metadata_reserved = (statfs_disk_size - (estore_reserved + oplog_reserved + ccache_reserved + lws_store_reserved + curator_reservation_bytes)) * 95 / 100
So the "disk_size" is calculated using the following formula: disk_size = estore_reserved = statfs_reserved - oplog_reserved - ccache_reserved - lws_store_reserved - curator_reservation_bytes - (metadata_reserved * 100 / 95)
So the calculation is: disk_size = 327039254528 - 88448099942 - 21474836480 - 0 - 0 - (35935459247 * 100 / 95) = 179289518899
HDD capacity
In the NX-1050, there are 4 HDDs in each node. HDDs used for Curator operations show 782.2 GiB of capacity, while the other HDDs show 862.2 GiB. The screenshot below shows the HDD disk properties in Prism. The following diagram illustrates what uses up disk space on HDDs for AOS operations and how the capacity shown in Prism is calculated. Customers ask why they purchased a 1 TB HDD but Prism only shows 862.2 GiB. The 862.2 GiB (or whatever is shown in Prism) is the capacity that AOS can provide to VMs after deducting the disk space used by AOS itself. Using 9XG4ZQJG as an example, which shows 782.2 GiB in Prism: this HDD has 1 TB of raw disk space. This information can be obtained using the "fdisk" command: nutanix@CVM:~$ fdisk -l /dev/sdd When HDD manufacturers label an HDD as a 1 TB disk, they use base10 sizing. However, the vSphere client, recent Nutanix AOS Prism versions, Windows and Linux use base2 sizing, so a 1 TB (1000204886016 bytes) disk has 931.51 GiB of capacity (base2 sizing). When partitioned, formatted and mounted in a CVM, the disk capacity is reduced from 931.51 GiB to 917 GiB: nutanix@CVM:~$ df -h Natively, the ext4 file system uses and reserves some space, so what is available to AOS is further reduced; its size is displayed as "statfs_disk_size" in the output of "zeus_config_printer". In this case, "statfs_disk_size" is 974504964096 bytes, which is 907.58 GiB. nutanix@CVM:~$ zeus_config_printer 2>/dev/null | grep -A9 -B3 "disks/9XG4ZQJG" Out of "statfs_disk_size", AOS reserves 5% in case of a disk full situation. The disk capacity shown in Prism is calculated as: disk_size = (statfs_disk_size * 95 / 100) - curator_disk_reservation_bytes In this case, disk_size = 974504964096 * 95 / 100 = 925779715891 bytes, which is approximately 862.2 GiB. That is the disk_size for most HDDs. However, when an HDD is used for Curator, a further 80 GiB of disk space is reserved for Curator tasks. In this case, 9XG4ZQJG is used for Curator tasks, and therefore its disk_size is: disk_size = ((974504964096 * 95 / 100) - 85899345920) / 1024 / 1024 / 1024 = 782.20 GiB
Note: Storage Capacity indicates the total amount of "disk_size" (extent store size). "disk_size" is what remains of "statfs_disk_size" after allocating space for the other factors. In a working cluster, certain parameters such as "oplog_disk_size" may fluctuate based on the workload over time, so it is expected that "disk_size" fluctuates in Prism.
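The arithmetic above can be reproduced directly from the zeus values. The following shell sketch repeats the SSD and HDD examples from this article using integer math (results in GiB):

nutanix@cvm$ echo $(( (327039254528 - 88448099942 - 21474836480 - 35935459247*100/95) / 1024/1024/1024 ))   # SSD example, ~166 GiB
nutanix@cvm$ echo $(( (974504964096*95/100) / 1024/1024/1024 ))                                             # regular HDD, ~862 GiB
nutanix@cvm$ echo $(( (974504964096*95/100 - 85899345920) / 1024/1024/1024 ))                               # Curator HDD, ~782 GiB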
KB15324
ESXi Host upgrade fails with "Exception caught in checking UVM migration status"
ESXi host upgrades might fail with error "Exception caught in checking UVM migration status
During an ESXi host upgrade, the process might fail with the following error messages (depending on whether the upgrade is initiated via 1-click or LCM). Customers might observe that the process is stuck in vCenter on the "Enter Maintenance Mode" task. lcm_ops.out: data/logs/lcm_ops.out:2023-06-08 13:25:56,105Z ERROR 12952624 esx_upgrade_helper.py:1924 (xx.xx.xx.217, update, 0e642149-a1d6-xxxx-xxxx-ce8bbd6930eb, upgrade stage [2/2]) Exception caught in checking UVM migration status: 'info.progress' host_upgrade logs: 2022-08-09 17:36:53,102Z INFO MainThread esx_upgrade_helper.py:2189 VMD is not in use by the node.
The root cause is described in ENG-491192 https://jira.nutanix.com/browse/ENG-491192, where the property collector might not populate all the properties for a given object. The fix is on the AOS side and is included in AOS 6.5.4 or higher. The workaround is to retry the upgrade; the host and the CVM need to be taken out of maintenance mode before re-attempting the upgrade (see the sketch below). The stuck task will eventually time out, so there is no need to cancel it beforehand.
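As a sketch of the usual recovery sequence (host ID and hostnames must be confirmed first; follow the standard maintenance-mode procedures if unsure):

nutanix@cvm$ ncli host list                                              # note the Id of the affected host
nutanix@cvm$ ncli host edit id=<host-id> enable-maintenance-mode=false   # take the CVM out of maintenance mode
root@esxi# esxcli system maintenanceMode set --enable false              # on the affected host, exit maintenance mode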
KB8722
Enabling and Disabling LCM API
This internal KB shares details about the LCM API, and which customers are using the LCM API while it is in early access.
Enabling and Disabling the LCM API
Note: LCM has moved to API v4 in LCM-2.5. The link that can be shared with customers is: https://developers.nutanix.com/api-reference?namespace=lcm&version=v4.0.a1 https://developers.nutanix.com/api-reference?namespace=lcm&version=v4.0.a1
To enable the LCM API, turn on the feature flag. SSH to the CVM, change directories to /home/nutanix/cluster/bin/lcm, and enter the command $./configure_lcm --enable_api After you have turned on the feature flag, restart Genesis. $ allssh genesis restart To disable the LCM API, turn off the feature flag. SSH to the CVM, change directories to /home/nutanix/cluster/bin/lcm, and enter the command $./configure_lcm --disable_api After you have turned off the feature flag, restart Genesis. $ allssh genesis restart
API Explorer
The live LCM API documentation and REST API explorer are exposed at the cluster path https://<cluster_ip>:9440/lcm/api_explorer/index.html. This lets customers visualize and interact with the LCM API resources.
Understanding the LCM API
The LCM API is constructed according to the following format: lcm v1.r0.b1 resources|operations APIs have two categories: resources and operations. Use resource APIs to view LCM static resources, such as configuration, entities, and tasks. Use operations APIs to issue LCM operations, such as inventory, prechecks, and updates.
Authentication
The LCM API requires authentication. The LCM API allows users with admin roles, which includes the local user "admin" and the following AD users: PE AD users configured through role mapping with the User_Admin or Cluster_Admin roles. PC AD users configured through role mapping as User_Admin or Cluster_Admin. AD users added by the SSP admin as Super Admin or Prism Admin.
null
KB6403
Protection Domain Snapshot Failed - No Entities Snapshotted
This article describes an issue where snapshots are failing for Protection Domain(s) whenever there are active VM tasks.
Snapshot failures are occurring with an alert similar to the following: Protection domain DR snapshot '(599168001385765702, 1520020039059443, 57282695)' failed. No entities snapshotted, skipped 9. When reviewing the cerebro.ERROR and cerebro.WARNING logs on the Cerebro leader, you may see errors similar to the following: ~/data/logs/cerebro.ERROR log: E0923 23:35:41.123571 6992 master_base_op.cc:1306] Finishing with cerebro error kVmActiveTaskInProgress error detail 'VM has active tasks in progress.500383a9-e4f2-97ee-97e7-311ad6ea6f33 (testVM)' ~/data/logs/cerebro.WARNING log: W0924 07:36:16.413017 6992 snapshot_consistency_group_sub_op.cc:2084] <parent meta_opid: 57282694, CG: DR>: VM has active tasks in progress.528ec213-a858-8122-c6ed-3510570ecc79 (testVM) Note: To find the Cerebro leader, run the following command: nutanix@cvm$ service=/appliance/logical/leaders/cerebro_master; echo $(zkcat $service/`zkls $service| head -1`)| awk '{print $2}'
Within the Conditions for Implementing Asynchronous Replication https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide%3Asto-pd-guidelines-r.html, in the Data Protection and Recovery Guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:sto-data-protection-pe-c.html, it is documented that snapshots will fail if there are other ongoing tasks currently in progress for a VM. "The snapshot operation fails after six retries of the protection domain that has the VM on which some other ongoing tasks are currently in progress. The snapshot operation succeeds only if within these six retries the ongoing tasks on the VM get completed." See what VM tasks are running and either wait for them to complete or cancel them and then try the snapshot again. If this is a scheduled snapshot that fails every time because it runs with other scheduled VM tasks, and the VM tasks cannot be rescheduled, then reschedule the snapshot to have limited or no overlap with the other scheduled VM tasks.
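To see which tasks are currently running against the VM named in the alert, the running task list can be filtered on the cluster. A minimal example (the VM name or UUID comes from the alert text):

nutanix@cvm$ ecli task.list include_completed=false
nutanix@cvm$ ecli task.list include_completed=false | grep -i <vm_name_or_uuid>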
KB8439
Hyper-V: LSI Firmware Upgrade for Hyper-V 2012R2
LCM does not support updates for data drives or HBA controllers on clusters running Hyper-V 2012 R2, so these updates must be performed manually.
For Clusters Running Hyper-V 2012R2, LCM does not support updates for data drives or HBA controllers. The NCC health check lsi_firmware_rev_check verifies the LSI firmware compatibility. Running the NCC check Run the NCC check as part of the complete NCC health checks nutanix@cvm$ ncc health_checks run_all Or run the lsi_firmware_rev_check check individually nutanix@cvm$ ncc health_checks hardware_checks disk_checks lsi_firmware_rev_check Sample output For Status: PASS Running : health_checks hardware_checks disk_checks lsi_firmware_rev_check For Status: FAIL Running : health_checks hardware_checks disk_checks lsi_firmware_rev_check Since LCM upgrades are not supported, the firmware updates will need to be performed manually. Refer to the LCM Release Notes https://portal.nutanix.com/#/page/docs/details?targetId=Release-Notes-LCM:Release-Notes-LCM for more details.
Please proceed with the manual firmware updates via the steps provided in KB 6937 https://portal.nutanix.com/kb/6937
KB16738
Nutanix Files: Date modified is not updated for files or folders
Date modified is not updated for files or folders on Nutanix Files shares if clients have Windows Defender RTP enabled
Any file that is modified on a share in Windows does not have its "date modified" updated, even though the data in the file itself has been updated and saved.For example, the file below is created at 11:13 AM PST/2:13 PM EST, modified at 11:14 AM PST/2:14 PM EST and 11:17 AM PST/2:17 PM EST. However, Windows only shows the date it was created, and not the date and time the file is modified: If a file is created and modified from the FSVM itself, the date modified is updated properly. For example, created a new_test.txt file from the FSVM in the share path: nutanix@NTNX-A-FSVM:/zroot/shares/<share path/folder>$ vi new_test.txt Added more data to our file from the FSVM, after which date modified is updated: nutanix@NTNX-A-FSVM:/zroot/shares/<share path/folder>$ cat new_test.txt In the same folder Another Test File.txt is created and modified from a Windows machine, we see the date created is updated but the date modified is not: nutanix@NTNX-A-FSVM:/zroot/shares/<share path/folder>$ cat Another\ Test\ File.txt SMB client logs show an SACL denied error when attempting to reproduce the issue: mb2_validate_sequence_number: clearing id 3388 (position 3388) from bitmap We also see similar errors in the packet captures between the FSVM owner for the folder and the windows client: smb2.sec_info.infolevel == 0x00
The date modified is not updated properly if Windows Defender Real-Time Protection (RTP) is enabled, due to issues updating the Mtime. If RTP is disabled on the client, the date modified is updated as expected. Please upgrade to Nutanix Files 5.0 or above to resolve this issue.
KB3291
Phoenix fails with 'IndexError: list index out of range'
If the node position is not set in the Phoenix menu, the imaging process fails and Phoenix throws a traceback.
- Phoenix fails with the error 'IndexError: list index out of range' while performing Configure Hypervisor/Install CVM/Repair CVM. - We also see "EXT4-fs (sdh1): VFS: Can't find ext4 filesystem" in the Phoenix menu. - Below is the screenshot:
- The issue happens when the node position is not set (blank) in the Phoenix GUI. - Set the node position to A/B/C/D by using the right arrow key on the keyboard. - Restart the Phoenix process using the following command and the installation will complete successfully: phoenix # python /phoenix/phoenix
KB7454
Logical timestamp mismatch between Two-Node cluster and witness VM causes cluster to not transition modes properly and potentially impact the storage availability
During a node failover event in a 2 node cluster, the cluster state may go down and the witness VM may not be able to communicate with the nodes as expected.
A Nutanix Two-Node cluster relies on communication with a Witness VM in order to allow the cluster to transition between modes. A logical timestamp is kept updated between cluster and witness to track of the current cluster mode and state. Cluster Modes overview During a cluster mode change, a network partition or communication problem between the Two-Node cluster and the Witness VM might cause the logical timestamp values to fall out of sync. As a consequence, the Two-Node cluster will not be able to further record cluster mode transitions. A Two-Node cluster that is unable to record cluster mode transitions can suffer potential storage unavailability issues. Once the communication between the Two-Node cluster and the Witness VM is restored, due to a software defect, the logical timestamp values are not automatically corrected.WARNING: If the Two-Node cluster is impacted by the scenario described in this article, do not attempt to perform a failover or any other planned maintenance as that will impact the VMs running on the cluster. How to identify: Scenario 1: 1. Run the following commands from a CVM of the two-node cluster to verify current cluster configuration. The following output notes that the current cluster mode is KNormal and the leader CVM is id: 4 (kNormal with Leader mode). Logical timestamp is 1 for the Two-Node cluster. nutanix@CVM:~$ zeus_config_printer | sed -n '/^cluster_operation_mode/p;/^witness_state/,/^}/p' cluster_operation_mode: kNormalwitness_state { leader_svm_id: 4 cluster_transition_status: 17 witness_state_history { cluster_status: 0 transition_reason: kAuto last_transition_timestamp: 1554319496 leader_svm_id: 4 cluster_operation_mode: kStandAlone cluster_stopped_since_kStandAlone_for_upgrade: false }---- truncated ---- } logical_timestamp: 1 witness_object_uuid: "e5d90c16-bec5-4f4c-abe0-38ea0e3e6a73" 2. Run the following command the CVM to display the witness VM configuration. Logical timestamp (entity_version) is 0. In this case, there is a mismatch between the Two-Node cluster and the Witness VM, as the value is higher for the Two-Node cluster. nutaninx@CVM:~$ edit-witness-state --editor=cat ---- truncated ---- "entity_version": 0, "leader_uuid_list": [ "0dea6033-6590-467e-adf3-ccd690215811" 3. Refer to the Scenario 1 in the solution section of this KB and follow the steps to correct the mismatch. Scenario 2: 1. The mismatch can also happen the other way around, whereas the Witness VM has a higher logical timestamp than the Two-Node cluster: nutanix@CVM:~$ zeus_config_printer | sed -n '/^cluster_operation_mode/p;/^witness_state/,/^}/p' ---- truncated ----logical_timestamp: 10witness_object_uuid: "85bcff6f-85bb-4df4-aa0e-1af10c29aa30"cluster_operation_mode_logical_timestamp: 22cluster_stopped_since_kStandAlone_for_upgrade: false} nutaninx@CVM:~$ edit-witness-state --editor=cat ---- truncated ---- "entity_version": 11, "leader_uuid_list": [ "00000000-0000-0000-0000-000000000000" ] 2. Refer to the Scenario 2 in the solution section of this KB and follow the steps to correct the mismatch. The following log signatures will be written in Genesis logs in the Two-Node cluster: /home/nutanix/data/logs/genesis.out 2019-01-11 11:18:18 INFO witness_manager.py:576 Attempting to clear leader at witness object, using current timestamp: 32019-01-11 11:18:18 ERROR witness_client.py:357 Witness <witness_IP> failed to process request. 
Status: 403, Message: CAS Error, Reason: cas_error2019-01-11 11:18:18 INFO witness_manager.py:586 Unable to update witness, cannot clear leader information On higher AOS versions, the genesis.out (/home/nutanix/data/logs/genesis.out) will show the following logs: 2020-07-09 13:30:52 INFO witness_manager.py:675 The entity version is X which is smaller than the cluster entity version Y, needs to be synced up Here X = 10 and Y = 11.The following log signatures will be written in Genesis logs in the Witness VM: /home/nutanix/data/logs/aplos.out nutanix@CVM:~$ ssh <witness_vm_ip> grep -i kIncorrectCasValue -A20 /home/nutanix/data/logs/aplos.outFIPS mode initializedNutanix Controller VMnutanix@CVM's password: ---- truncated ----2020-02-27 01:22:18 INFO cpdb.py:444 Exception: Mark Update In Progress Failed (kIncorrectCasValue)2020-02-27 01:22:18 ERROR resource.py:165 Traceback (most recent call last): File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/api/resource.py", line 163, in dispatch_request File "/usr/local/nutanix/lib/py/Flask_RESTful-0.3.4-py2.7.egg/flask_restful/__init__.py", line 581, in dispatch_request resp = meth(*args, **kwargs) File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/api/resource.py", line 96, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/validators.py", line 121, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/api/resource.py", line 71, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/validators.py", line 112, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/lib/utils/log.py", line 135, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/lib/witness/log.py", line 24, in wrapper File "/home/jenkins/workspace/postcommit-jobs/nos/euphrates-5.10.3-stable/x86_64-aos-release-euphrates-5.10.3-stable/builds/build-euphrates-5.10.3-stable-release/python-tree/bdist.linux-x86_64/egg/aplos/intentgw/v3_witness/api/witness_uuid.py", line 102, in putApiError_5_0: {'api_version': '3.1','code': 403,'details': {'entity_version': 11, 'uuid': u'42d14deb-bc9c-4890-9862-8c254fe031e0'},'kind': 'witness','message': 'CAS Error','reason': 'cas_error','status': 'failure'} A logical timestamp mismatch will also cause AOS pre-upgrade checks to fail with the following error message in Prism: "Failure in pre-upgrade tests, errors Cannot upgrade two node cluster when cluster has a leader fixed. 
Current leader svm id: 4. Try again after some time , Please refer KB 6396" INFO: A Two-Node cluster operating normally will have the following configuration output. A -1 in leader_svm_id field means that the cluster mode is kNormal without Leader. nutanix@CVM:~$ zeus_config_printer | sed -n '/^cluster_operation_mode/p;/^witness_state/,/^}/p'cluster_operation_mode: kNormalwitness_state { leader_svm_id: -1 cluster_transition_status: 256 witness_state_history { cluster_status: 0 transition_reason: kAuto last_transition_timestamp: 1577469144 leader_svm_id: 4 cluster_operation_mode: kStandAlone cluster_stopped_since_kStandAlone_for_upgrade: false }[ { "Cluster Mode": "kSwitchToTwoNode", "State": "Node that was down is up and in process of joining the cluster", "Recommendation": "Wait until Data Resiliency reports OK in Prism." }, { "Cluster Mode": "kNormal with Leader", "State": "Cluster is under-replicated. Leader ensures that all data is replicated between nodes.", "Recommendation": "Wait until Data Resiliency reports OK in Prism." }, { "Cluster Mode": "kNormal without Leader", "State": "Cluster is UP and Normal.", "Recommendation": "Resiliency is OK. Cluster can tolerate a failure." } ]
This bug has been fixed in AOS 5.10.5 and above. This KB is specific to fixing a version mismatch between the Two-Node cluster and the Witness VM. For assistance in troubleshooting other state transitions, reference the following: KB-5771 - Two Node Cluster - How to monitor 2-node cluster Recovery Progress after a failover https://portal.nutanix.com/kb/5771 KB-6396 - Pre-Upgrade Check: test_two_node_cluster_checks https://portal.nutanix.com/kb/6396 WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit) Scenario 1. The Two-Node cluster logical timestamp is higher than the Witness VM logical timestamp - Edit the witness configuration and increase the value to match the Two-Node cluster value. Run the command from any CVM in the Two-Node cluster. As per Scenario 1 in the description section, the value should be increased from 0 to 1. Save the changes. nutanix@CVM:~$ edit-witness-state --editor=vim Scenario 2. The Witness VM logical timestamp is higher than the Two-Node cluster logical timestamp: - Edit the Zeus configuration in the Two-Node cluster and increase the value to match the Witness VM value. As per Scenario 2 in the description section, the value should be increased from 10 to 11. Save the changes. nutanix@CVM:~$ edit-zeus --editor=vim Common steps: - After matching the logical timestamps, verify that the witness state shows "leader_svm_id: -1" and that the cluster transition status is 256. nutanix@CVM:~$ zeus_config_printer | sed -n '/^cluster_operation_mode/p;/^witness_state/,/^}/p' - The Prism Witness VM page should show the last keepalive updated: - At this point, AOS upgrade pre-checks should pass and it is safe to perform an AOS upgrade.
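To compare the two values quickly before and after the edit, both can be printed together. A small sketch using the same commands shown earlier in this article:

nutanix@CVM:~$ zeus_config_printer | sed -n '/^witness_state/,/^}/p' | grep logical_timestamp
nutanix@CVM:~$ edit-witness-state --editor=cat | grep entity_version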
KB12630
Dead Curator worker stuck in SIGPROF loop
Dead Curator worker stuck in SIGPROF loop
Symptoms The Curator service page on the Curator master, reports Health status Down for one worker node (CVM.45 in the example) nutanix@CVM:~$ links --dump http:$(curator_cli get_master_location | grep Using | awk '{print $NF}') ZK node for curator Health Monitor doesn't list the CVM .45, further confirming it is Down on this node nutanix@CVM:~$ zkls /appliance/logical/health-monitor/curator However, genesis status on the CVM .45 shows that Curator is UP [email protected]:~$ gs | grep curator Curator processes are running on the CVM as per "top" but with excessively high CPU usage and could be in "D" or "R" state 'D' = UNINTERRUPTABLE_SLEEP'R' = RUNNING & RUNNABLE [email protected]:~$ top -bn1 -u nutanix | egrep "curator|CPU" Both Parent and Child processes are in SIGPROF loop [email protected]:~$ sudo strace -p 10201
Solution: Nutanix Engineering is aware of the issue and is working on a fix in a future AOS release. Workaround: Restart the Curator service on the affected CVM: nutanix@CVM:~$ genesis stop curator
KB4488
Converting wildcard pfx to private key and public certificate
When you upload an SSL certificate to Prism, you need the private key and certificate files rather than a .pfx file.
You are unable to upload a wildcard .pfx file directly.
Note: The .pfx file must be in one of the supported formats. The file cannot be SHA1 or SHA512. Download and install OpenSSL for Windows http://code.google.com/p/openssl-for-windows/. Run the following command to export the private key from the wildcard .pfx: openssl pkcs12 -in c:\certs\wc.pfx -nocerts -out c:\certs\wc.key -nodes Create a chain file of CA certificates in the following order, starting with the signer certificate (Signer.crt). The order is essential: the chain must begin with the certificate of the signer and end with the root CA certificate as the final entry. Import the private key, public certificate and chain file into Prism using the SSL Certificate option.
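For reference, the matching export of the public certificate from the same .pfx typically looks like the following (filenames mirror the example above); the resulting wc.crt, together with the chain file, is what gets imported into Prism:

openssl pkcs12 -in c:\certs\wc.pfx -clcerts -nokeys -out c:\certs\wc.crt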
KB14936
Unable to list/health check/reset remote connections on Prism Element cluster
Unable to list/health check/reset remote connections on Prism Element cluster
SREs may notice that they are unable to list, health check or reset a remote connection on a Prism Element cluster. When trying to list, health check or reset remote connections, an error "Not authorized to access remote_connection.list_all" or "Not authorized to access remote_connection.health_check_all" is returned. This issue may occur when permissions related to remote connections have been deleted from the IDF permission cache. To check whether the permissions are present in the cluster, run the following command: nutanix@NTNX-CVM:~$ idf_cli.py get-entities --guid permission |grep Remote_Connection You may see an output like the above; in that output, the following permissions are missing: Health_Check_All_Remote_Connection - Required for health_check_all on all the remote connections List_Remote_Connection - Required for listing remote connections Reset_Remote_Connection - Required for resetting a remote connection Health_Check_Remote_Connection - Required for health_check on a single remote connection
Ensure the description matches exactly before following the solution below. Step 1: Download the script from this link https://download.nutanix.com/kbattachments/14894/fix_rc_permission_v1.py to /home/nutanix/tmp on any CVM. The MD5SUM of the script is: bbb3137a6787fccaa6a3db9f8391742b Step 2: Execute the script with the following command: nutanix@NTNX-CVM:~$ python ~/tmp/fix_rc_permission_v1.py A successful execution with new permissions being added will look like the following: Permission : Health_Check_Remote_Connection created with uuid: d053ed84-f1a4-42c0-9567-138e78dc8748: Step 3: Use nuclei to confirm that listing/health checking remote connections works in the cluster (see the sketch below). nutanix@NTNX-CVM:10.134.86.163:~/tmp$ nuclei
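A quick verification sketch after the script completes; this assumes the remote_connection namespace is available in nuclei on the Prism Element cluster (the same operations named in the original error messages):

nutanix@NTNX-CVM:~$ nuclei remote_connection.list_all
nutanix@NTNX-CVM:~$ nuclei remote_connection.health_check_all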
KB4538
Self-Encrypting Drives (SED) with self-signed node certificates
How to enable Self-Encrypting Drive (SED) feature with self-signed certificates for the nodes instead of CA-signed certificates.
Prism UI is designed on the basis that a user who considers Self-Encrypting Drives (SEDs) using an external Key Management Server (KMS) will have a Certificate Authority (CA) they trust to sign the certificate using a generated Certificate Signing Request (CSR). The CA can be an internal one that is built on premises on or outside the Nutanix cluster, or an external one that the user will pay to get the certificate signed. Some KMS have a CA feature built in. Whilst it is highly recommended to use a certificate signed by a CA, it is technically possible to operate SEDs using a self-signed certificate signed by the Controller VM (CVM).
Follow the procedure Configuring Data-at-Rest Encryption https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:wc-security-data-encryption-wc-t.html from the Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:Nutanix-Security-Guide to generate a CSR. (This step can be omitted if a CSR has already been created in a previous scenario, for example, updating certificates.) Sample filenames: Private key: /home/nutanix/certs/svm.keyCSR: /home/nutanix/certs/svm.csr Sign the certificate using the generated private key and CSR on each CVM. Note: Each CVM will have its own private key and the certificate needs to be generated for all of them. Example: openssl x509 -req -sha256 -days 720 -in /home/nutanix/certs/svm.csr -signkey /home/nutanix/certs/svm.key -out /home/nutanix/certs/svm.crt Download the generated certificate (file specified with the -out option in step 2) from all CVMs and do the necessary operation to make KMS use them. Example of IBM KMS server: Register certs to the cluster by doing the following steps: Get UUID of nodes using below command on a CVM: ncli host ls Check the name of KMS using below command: ncli key-management-server ls Run the following command from each CVM: First time registration: ncli data-at-rest-encryption-certificate upload-cvm-certificates host-id=<UUID confirmed on step a> key-management-server-name=<KMS checked on step b> file-path=<file path specified on step 2> Updating registered information: ncli data-at-rest-encryption-certificate replace-cvm-certificate host-id=<UUID confirmed on step a> key-management-server-name=<KMS checked on step b> file-path=<file path specified on step 2> Test the setup: ncli data-at-rest-encryption test-configuration
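Because each CVM has its own private key and CSR, the signing in step 2 has to be repeated on every node. A minimal sketch that runs the same command cluster-wide, assuming the file paths from step 1 are used on all CVMs:

nutanix@cvm$ allssh 'openssl x509 -req -sha256 -days 720 -in /home/nutanix/certs/svm.csr -signkey /home/nutanix/certs/svm.key -out /home/nutanix/certs/svm.crt'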
KB1553
Hyper-V: Network PowerShell Cheat Sheet
This article provides a summary of useful commands and workflows that you need to perform during the Hyper-V installation and node replacement.
During the Hyper-V installation and node replacement, you often need to change the network configuration. This article provides a summary of useful commands and workflows such as recreating the NIC team, adding members to the team, recreating the switches, and changing the MTU. Updated for Windows Server 2012, 2016, and 2019 network configurations.
NIC Teaming: A NIC team is the method for bonding the Hyper-V host physical NICs for Windows 2012 R2/Windows 2016. The NIC team name is "NetAdapterTeam" for Nutanix Hyper-V hosts. NIC teams support LACP.
Switch Embedded Teaming (SET): SET is the method for bonding the Hyper-V host physical NICs for Windows 2016/Windows 2019. SET is supported for AOS 5.17+. SET does not support LACP. SET is the default for Windows Server 2019.
How to: PowerShell Command Syntax, Network Adapter Commands, Network Adapter Team Commands, Switch Embedded Team Commands, Network Troubleshooting Commands, Check for Jumbo Frames, Recreating the NIC Team, Configure External Switch, VLAN Tag for the External Network, Recreate the Internal Switch, Editing the Network Adapters in the NIC Team, SET: Convert NIC Team to SET, SET: Create Switch with SET
PowerShell Command Syntax
All the PowerShell commands follow the same structure: [Action]-[Object] [Flags] PowerShell command examples: To list all the network adapters, type: get-netadapter To set something on the network adapter, type: set-netadapter This structure of the Hyper-V commands works anywhere in Windows PowerShell. Remember this structure and you can use most of the Hyper-V commands. You can also use the Tab key to complete commands; the tab-completion tool not only completes commands, but also cycles through all possible completions, which makes it easy to figure out the flags. Use the Get-Help command to access the man pages. In a CMD window, type powershell to access Windows PowerShell. If you are in Windows PowerShell, your prompt string starts with PS. If you accidentally close all the CMD windows, press Ctrl+Alt+Del and go to the Task Manager to open a new CMD window. Following is a list of some useful Hyper-V commands.
Network Adapter Commands
get-netadapter
NetAdapter Team Commands
get-netlbfoteam
Switch Embedded Team (SET)
get-vmswitch
Network Troubleshooting Commands
get-netadapterstatus See the driver events. This is for the 10 GbE adapter. get-winevent -providername ixgbn Get the VMs on a host: get-vm Get the VMs from SCVMM: get-scvirtualmachine Note: For System Center, all the commands are in the [action]-sc[object] format. This is the same as using SSH in ESXi. enter-pssession
Checking and Setting the Jumbo Frames
Run the following commands to check and set the jumbo frames. get-NetAdapterAdvancedProperty -DisplayName "Jumbo Packet" | format-table Name, RegistryValue
Recreating the NIC Team
Run the following commands to recreate the NIC team. Get the current adapters in the system. get-NetAdapter Create a new team. new-NetLBFOTeam -Name NetAdapterTeam -TeamMembers @("Ethernet X", "Ethernet Y") Replace X and Y with the numbers you obtained from the previous command. The @() represents an array. Connect the team to the ExternalSwitch. set-VMSwitch -SwitchName ExternalSwitch -NetAdapterName NetAdapterTeam
VLAN Tag for the External Network
Run the following commands to tag the external network. Check the current settings: get-VMNetworkAdapterVlan Set the Hyper-V host to use the correct VLAN: set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ExternalSwitch" -Access -VlanID {xx} Replace {xx} with your VLAN ID. Set the Controller VM to use the correct VLAN: set-VMNetworkAdapterVlan -Access -VMNetworkAdapterName "External" -VMName {xxx} -VlanID {xx} Replace {xx} with your VLAN ID. Note: in case you need to undo a tagging, access via IPMI would allow the following command(s) to be entered: Host: set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "ExternalSwitch" -Untagged CVM: set-VMNetworkAdapterVlan -Access -VMNetworkAdapterName "External" -VMName {xxx} -Untagged
Recreating the External Switch
Run the following commands to recreate the ExternalSwitch. Replace {name from above} with the name of the interface obtained from the previous command (get-netadapter). Get a list of the network adapters: get-netadapter Create a new switch: new-vmswitch -Name ExternalSwitch -AllowManagementOS $true -NetAdapterName {name from above} Connect the Controller VM to the ExternalSwitch: Connect-VMNetworkAdapter -VMName NTNX* -SwitchName ExternalSwitch -Name External If required, tag the VLAN on the Hyper-V host and the Controller VM.
set-VMNetworkAdapterVlan -ManagementOS -Access -VlanID {xx}
Set the jumbo frames. Repeat this step for the external vEthernet and the other Ethernet interface: set-NetAdapterAdvancedProperty -Name "Ethernet X" -DisplayName "Jumbo Packet" -RegistryValue 9014 Use sconfig to configure the IP address. Select option 8 in the menu that appears to configure the IP address. Verify the settings. get-vmswitch
Recreating the Internal Switch
Run the following commands to recreate the InternalSwitch. Create the switch: New-VMSwitch -Name InternalSwitch -SwitchType Internal After creating the internal switch, you may have to reboot the host for the registry changes to take effect. If the SVM is unable to reach the host (check by running ping 192.168.5.1 from the SVM), remove the internal NIC from the SVM and then add it back. Run the following commands from the Hyper-V host: Remove-VMNetworkAdapter -VMName <cvm_name> -Name Internal Connect the internal adapter to the network: Connect-VMNetworkAdapter -VMName NTNX* -SwitchName InternalSwitch -Name Internal Use sconfig to configure the IP address. Select option 8 in the menu that appears to configure the IP address. Verify the settings: get-vmswitch
Editing Network Adapters in the NIC Team
Run the following commands to add and remove the NICs from the NIC team. List all the NICs attached to a NIC team and adapters in the system. get-NetLbfoTeamMember get-NetAdapter get-NetLbfoTeam Remove a member from the team. remove-NetLbfoTeamMember -Name "Ethernet X" -Team "TeamName" Add a member to the team. add-NetLbfoTeamMember -Name "Ethernet X" -Team "TeamName" Connect the internal adapter to the network: Connect-VMNetworkAdapter -VMName NTNX* -SwitchName InternalSwitch -Name Internal Configure the IP address on the VMSwitch. Option 1: Use sconfig to configure the IP address. Select option 8 in the menu that appears to configure the IP address. Option 2: Use PowerShell commands: new-NetIPAddress -InterfaceIndex <value> -IPAddress <value> -PrefixLength <value> -DefaultGateway <value> Verify the network settings: get-vmswitch
Converting NIC Team to SET
Recommended before starting: The CVM should be in maintenance mode and shut down. The host should be paused in Failover Cluster. The commands should be run from the IPMI console or scripted (advanced). Obtaining the current host networking settings: $hostIP = (Get-NetIPAddress -InterfaceAlias "vEthernet (ExternalSwitch)" -AddressFamily IPv4).IPAddress Removing the host networking settings: remove-NetIpaddress $hostIP -confirm:$false
Creating Switch with SET
New-VMSwitch -Name ExternalSwitch -NetadapterName @((Get-Netadapter | where LinkSpeed -eq '10 Gbps'| where name -notlike 'vEth*').name) -EnableEmbeddedTeaming $True Applying network settings to the new switch: $hostNetId = (get-netadapter | where name -eq 'vEthernet (ExternalSwitch)').ifIndex
KB2050
NCC Health Check: cvm_startup_dependency_check
Controller VM (CVM) may fail to start after the host is rebooted. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur upon the host reboot.
If an ESXi host is restarted, the Controller VM (CVM) may fail to start. The NCC health check cvm_startup_dependency_check determines whether any problems regarding CVM bootup are likely to occur when the host is restarted. This check runs on ESXi clusters only and verifies the presence of any startup dependencies that may result in a CVM not starting after the host reboot. The check examines the contents of /etc/rc.local.d/local.sh (on the ESXi host) and makes sure it is intact and not missing anything that might be important to start the CVM after the host restarts. Note: In ESXi 5.1+, the rc local script is /etc/rc.local.d/local.sh while in ESXi 5.0, it is /etc/rc.local. The cvm_startup_dependency_check consists of the following: PYNFS dependency checklocalcli check - checks if esxcli command is used in the rc local script. (Deprecated in NCC version 3.10.0.)vim command check - checks if "vim-cmd vmsvc/power.on" command is present in the rc local script.Autobackup check - checks if /sbin/auto-backup.sh has been run successfully.Network adapter setting check - checks if the network adapter is set to connect on power on.EOF check - checks if there is an "EOF" line at the end of the rc local script.RC local script exit statement present - checks if there is a top-level exit statement in the rc local script..dvsData directory in local datastore - checks if .dvsData directory is present on pynfs mounted local datastore and if it is persistent.Svmboot mount check - checks if svmboot.iso is present on mounted local datastore. Based on the outcome of the above checks, the result is either PASS, INFO, FAIL or ERR. Running the NCC CheckYou can run this check as part of the complete NCC Health Checks: nutanix@cvm$ ncc health_checks run_all Or you can run this check individually: nutanix@cvm$ ncc health_checks hypervisor_checks cvm_startup_dependency_check You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run. This check is not scheduled to run on an interval.This check will not generate an alert. 
Sample outputFor Status: PASS Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ PASS ] For Status: INFO Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ INFO ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ INFO ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ INFO ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ INFO ] For Status: FAIL Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Running /health_checks/hypervisor_checks/cvm_startup_dependency_check on all nodes [ FAIL ] Output messaging [ { "Check ID": "Check that /sbin/auto-backup.sh has run successfully" }, { "Check ID": "/sbin/auto-backup.sh has not run successfully." }, { "Check ID": "Make sure '/bootbank/state.tgz' has a newer timestamp." }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "106439" }, { "Check ID": "Check that .dvsData directory is present on pynfs mounted local datastore" }, { "Check ID": ".dvsData directory is not persistent yet." }, { "Check ID": "Check if .dvsData directory exists in the local datastore." }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "106437" }, { "Check ID": "Check that there is no line EOF at end of RC local script" }, { "Check ID": "EOF statement at end of RC local" }, { "Check ID": "Check that 'local.sh' does not have 'EOF'" }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "106438" }, { "Check ID": "Check that top level exit statement is not present in RC local script" }, { "Check ID": "Top-level exit statement present in script RC local preventing script lines from being run." }, { "Check ID": "Check if 'local.sh' has an 'exit' statement. Generate INFO if the exit statement is NOT within the if..fi statement" }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "103068" }, { "Check ID": "Check Network adapter setting" }, { "Check ID": "Network adapter is not set to not connect on power on" }, { "Check ID": "Check if ethernet0.startConnected =true is present in the CVM's .vmx file." }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "106431" }, { "Check ID": "Check PYNFS dependency" }, { "Check ID": "PYNFS is in use and is not present." }, { "Check ID": "Validate PYNFS configuration." }, { "Check ID": "CVM may not boot up after the host reboot." }, { "Check ID": "106433" }, { "Check ID": "Check that vim-cmd vmsvc/power.on command is present in the local script" }, { "Check ID": "vim-cmd vmsvc/power.on\\ command not present in local script." }, { "Check ID": "Check if 'vim-cmd vmsvc/power.on' entry is present in 'local.sh'." }, { "Check ID": "CVM may not boot up after the host reboot." 
}, { "Check ID": "106440" }, { "Check ID": "Checks if CVM ISO is on mounted local datastore" }, { "Check ID": "ServiceVM_Centos.iso is not on the mounted local datastore." }, { "Check ID": "Check if ServiceVM_Centos.iso exists in the local datastore." }, { "Check ID": "CVM may not start after the host reboot." } ]
Find your error message in the table below and perform the corresponding recommended actions. Do not restart the host or trigger any rolling reboot activity like CVM Memory upgrade or AOS upgrade until the check passes or Nutanix Support confirms that a restart will not cause any problem with the CVM bootup. In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/.[ { "Failed check": "vim command check", "Error messages": "\"\"vim-cmd vmsvc/power.on\" command not present in local script\"", "Recommended actions": "Make sure 'vim-cmd vmsvc/power.on' entry is present in /etc/rc.local.d/local.sh." }, { "Failed check": "Autobackup check", "Error messages": "\"/sbin/auto-backup.sh has not run successfully\"", "Recommended actions": "Check whether /sbin/auto-backup.sh has been run successfully by confirming that /bootbank/state.tgz is newer than /etc/rc.local.d/local.sh.\t\t\tThe NCC check fails if /bootbank/state.tgz is not newer than /etc/rc.local.d/local.sh.\t\t\tSample result for the failure scenario:\n\t\t\tnutanix@cvm$ hostssh ls -al /bootbank/state.tgz\n============= xx.xx.xxx.32 ============\n-rwx------ 1 root root 72235 Jun 8 06:01 /bootbank/state.tgz\n============= xx.xx.xxx.33 ============\n-rwx------ 1 root root 72360 Jun 8 06:01 /bootbank/state.tgz\n============= xx.xx.xxx.34 ============\n-rwx------ 1 root root 72256 Jun 8 06:01 /bootbank/state.tgz\n\n\t\t\tnutanix@cvm$ hostssh ls -al /etc/rc.local.d/local.sh\n============= xx.xx.xxx.32 ============\n-rwxr-xr-x 1 root root 5116 Jun 7 07:08 /etc/rc.local.d/local.sh\n============= xx.xx.xxx.33 ============\n-rwxr-xr-x 1 root root 5116 Jun 8 06:51 /etc/rc.local.d/local.sh\n============= xx.xx.xxx.34 ============\n-rwxr-xr-x 1 root root 5116 Jun 7 07:09 /etc/rc.local.d/local.sh" }, { "Failed check": "Network adapter setting check", "Error messages": "\"Network adapter is set to not connect on power on\"", "Recommended actions": "Make sure the CVM VMX file does not have the setting \"ethernet0.startConnected\". If it does, make sure it is set to \"true\"." }, { "Failed check": "EOF check", "Error messages": "\"\"EOF\" statement at end of rc local\"", "Recommended actions": "Make sure /etc/rc.local.d/local.sh does not have an 'EOF' line." }, { "Failed check": ".dvsData directory in local datastore", "Error messages": "\".dvsData directory is not persistent yet\"", "Recommended actions": "If .dvsData directory exists in the local datastore, make sure it is persistent." 
}, { "Failed check": "Svmboot mount check", "Error messages": "\"Failed to find ServiceVM*.vmx\"\t\t\t\"Invalid vmx configuration\"\t\t\t\"ServiceVM_Centos.iso is not mounted\"\t\t\t\"ServiceVM_Centos.iso missing in vmx configuration\"", "Recommended actions": "Make sure /vmfs/volumes/NTNX-local-ds*/*/ServiceVM*.vmx exists, is readable and its contents are valid.Make sure /vmfs/volumes/NTNX-local-ds-XXXXXXX-A/ServiceVM_Centos/ServiceVM_Centos.iso is present in the CVM's .vmx file and is mounted.Make sure ServiceVM_Centos.iso is checked to start connected.Make sure the local datastore name is not changed to a non-default from NTNX-local-ds*.(non-configurable vSphere components)Rename ESXi local datastore name to default format of NTNX-local-ds* from vCenter server and run the check again.Make sure references to files ide0:0.fileName = \"ServiceVM_Centos.iso\" and serial0.fileName=\"ServiceVM_Centos.0.out\" are not showing a full path like \t\t\t\tide0:0.fileName = \"/vmfs/volumes/5a3a73f2-59cb5028-8a34-0cc47a9bc41e/ServiceVM_Centos/ServiceVM_Centos.iso\" , only the files should appear.Make sure memsize and shed.mem.minsize are not using a capital S at the 'size' word." } ]
KB11857
Nutanix Self-Service - Trying to update existing Vmware provider for a CALM Project fails with error "Invalid owner reference specified, User not found"
Unable to Update existing Vmware provider for a CALM Project with error "Invalid owner reference specified. Referenced User [email protected] was not found."
Nutanix Self-Service (NSS) is formerly known as Calm.

1) Trying to update the VMware provider account for a CALM Project fails with the error "Invalid owner reference specified. Referenced User [email protected] was not found."

2) Below is the error message observed inside styx.log for the nucalm container under /home/docker/nucalm/log/ on the PCVM: ApiError: {'api_version': '3.0',

3) The user in the error message exists on PC and is in a COMPLETE state. The UPN is "[email protected]" and the userid is "[email protected]": nutanix@PCVM$ nuclei user.get 3a1eb665-96ad-5404-9960-826eb7623f91

4) The user "[email protected]" has Super Admin privileges and is the user that created the account "[email protected]" which we were trying to update for the affected CALM Project.

5) Below is the spec taken from the payload of the failing PUT API call, captured from the browser's Inspect network page for the CALM Project: {"spec":{"name":"JPR_New2","resources":{"type":"vmware","data":{"username":"[email protected]","password":{"attrs":{"is_secret_modified":true},"value":"XXXXXXX"},"port":"443","server":"10.X.X.X"}}},"api_version":"3.0","metadata":{"last_update_time":"1595585821126361","kind":"account",

Notice that for the account "[email protected]", "owner_reference":{"kind":"user","name":"[email protected]" points to the user "[email protected]", so the owner reference is set to that AD user. After the creation, the AD user ("[email protected]") left the organization, and any operation on the account ([email protected]) now fails with "user not found".
If you notice the above symptoms and error message while trying to update a Project in CALM, collect CALM logs on PC using Logbay and reach out to a local CALM SME, or, after confirmation, open a CALM ONCALL to engage the CALM engineering team. In this scenario, ONCALL-10477 https://jira.nutanix.com/browse/ONCALL-10477 was opened and, as a workaround, the CALM engineering team updated the owner reference for that particular account from "[email protected]" to admin by making a cURL request to the server. The cause of this error: a user ([email protected]) with Super Admin privileges created the particular account which we were trying to update, so the owner reference of the affected Project was set to that AD user ([email protected]). The CALM engineering team suspected that after the creation the AD user ([email protected]) left the organization, or the AD username credentials changed, and any operation on the account then failed with the "user not found" error. In our case, the owner_reference for the affected Project was the user "[email protected]", while for other working CALM projects the owner reference was the "admin" user. The CALM engineering team changed the owner_reference to the "admin" user in the PUT API call that happens when the account is updated; after changing the owner reference to the admin user, the project could be updated successfully.
KB12467
Volumes disconnect - logs needed for RCA
This KB covers the logs needed for VG disconnect events. Prioritizes collecting logs as quickly as possible after an event is reported.
When storage disconnects are observed within guest VMs backed by Nutanix Volumes the correct logs must be gathered as soon as possible. This KB covers what data to collect through Logbay, Smart Support Remote Diagnostics, or SRE Assist. In general when the size of the cluster permits a full log bundle that covers an hour prior and an hour after the event is sufficient. For larger clusters where the size of a full log bundle could be significant, there are a few specific components that are a higher priority to capture.
Logbay Tags For logbay tag syntax and other options review the logbay cheatsheet https://confluence.eng.nutanix.com:8443/display/STK/Logbay+Cheatsheet on confluence. The following tags are the minimum that should be collected. These are supplied to logbay as a comma separated list. acropolis,aesdb,alerts,cassandra,cluster_config,container_info,curator,cvm_config,cvm_kernel,degraded_node,disk_failure,disk_usage_info,dynamic_ring_changer,genesis,hades,hardware_info,stargate,sysstats,zeus,zookeeper As an example, the following logbay command would collect logs from the previously listed tags from 0300 - 0500. NTNX@CVM:~$ logbay collect -t acropolis,aesdb,alerts,cassandra,cluster_config,container_info,curator,cvm_config,cvm_kernel,degraded_node,disk_failure,disk_usage_info,dynamic_ring_changer,genesis,hades,hardware_info,stargate,sysstats,zeus,zookeeper --from=2021/12/13-03:00:00 --duration=+2h0m0s Remote Diagnostics / SRE Assist For an overview of Remote Diagnostics, review the Remote Diagnostics L1 confluence page https://confluence.eng.nutanix.com:8443/display/SW/Insights+%7C+Remote+Diagnostics+-+Level+1+-+Finished+-+RV. A collection that covers an hour before and an hour after the event was reported should be sufficient. The same logbay tags can be collected through the Remote Diagnostics tool. In addition to the logs, a Panacea report should be collected from the same time period along with NCC. Additional data collectionWhile the specific tags and log bundle is a good starting point this is not an exhaustive list of all the data that needs to be collected. KB 3474 https://portal.nutanix.com/kb/3474 has additional configuration details, log signatures, and data points to collect when reviewing a Volumes related issue. Make sure that these details are captured along with a log bundle.
KB5549
LCM Pre-check 'test_vlan_detection" failed
When performing LCM inventory, the pre-checks might fail with: pre-check 'test_vlan_detection' failed
When performing LCM (Life Cycle Manager) inventory, the pre-checks might fail with the below message on Prism : Operation failed. Reason : Pre-check 'test_vlan_detection' failed('Pre-check test_vlan_detection failed') Failed to get vlan for cvm x.x.x.x This check verifies if we can get VLAN information from all nodes using NodeManager.
Find the LCM leader:

nutanix@cvm$ lcm_leader

SSH to the LCM leader and check the genesis.out logs:

nutanix@cvm$ less ~/data/logs/genesis.out | grep -A10 -B10 vlan

If this pre-check fails, follow the steps below for your hypervisor to verify:

ESXi:
UI
Select each one of the CVMs (Controller VMs) in vCenter.
In the Summary tab, verify if the CVMs are connected to the same port groups in the VM Hardware pane.
Verify the VLAN Id on the port groups.
CLI
Run the below command to verify the VLANs for the CVM port group:

root@esxi# esxcfg-vswitch -l

Run the below command to verify the VLAN assigned to the host (vmk0):

root@esxi# esxcfg-vmknic -l

AHV:
UI
Click the drop-down menu on Home and select Network.
Select each of the CVMs and check if they all have the same VLAN Id.
CLI
Create a file with all ovs-vsctl info:

nutanix@cvm$ hostssh "ovs-vsctl show" > ovs_info

Use the below grep command to find if there is any VLAN (tag) associated with the CVM:

nutanix@cvm$ cat ovs_info | egrep '=============|vnet[0-1]' -A2

Use the below grep command to find if there is any VLAN (tag) associated with the host:

nutanix@cvm$ cat ovs_info | egrep '=============|Port "br0"' -A3

Hyper-V:
UI
In Hyper-V Manager, right click on the CVM and select Settings.
Under Hardware, select the network adapter and check the VLAN Id.
Make sure that all the CVMs have the same VLAN.
CLI
Run the below command to verify the VLAN for the CVM:

nutanix@cvm$ allssh "winsh get-VMNetworkAdapterVlan | grep CVM"

Run the below command to verify the VLAN for the host:

nutanix@cvm$ allssh "winsh get-VMNetworkAdapterVlan -ManagementOS"

Please note: This pre-check might also fail due to the presence of NSX-T in the cluster environment. You can validate this using the below log lines in genesis.out:

2019-12-10 08:30:59 ERROR esx_utils.py:970 Unable to get NSX-T manager credentials in the system error: No NSX-T manager info present on cluster

Refer to KB-8546 https://portal.nutanix.com/kb/8546 in such cases to provide the details of the NSX-T manager. In case the above steps do not resolve the issue, consider contacting Nutanix Support https://portal.nutanix.com.
KB15692
Live migration support for Credential guard enabled VMs (AOS 6.7.1 only)
Live migration support for Credential guard enabled VMs (AOS 6.7.1 only)
Credential guard support was introduced in AOS 5.19 under FEAT-8057, but it was delivered without Live Migration support. FEAT-13048 enables support for live migration of Credential Guard VMs (referred to as CG VMs) on AHV clusters. Usage of this feature in AOS 6.7.1 is only approved for a single customer: Kyndryl Nederland B.V. - MoD. Feature GA is planned for AOS 6.8.
Software requirementsTo use this feature, a customer must be on AOS 6.7.1 with the bundled AHV version 20230302.1011 (AHV 9.0.1). The feature gflag acropolis_enable_cg_vms_live_migration needs to be set to true.Hardware requirementsThe CPU family should be newer or the same as Skylake for customers with Intel nodes in their clusters. On AMD clusters, there is no such special requirement.This means that even if a cluster has all Skylake nodes but one Broadwell node, CG live migration would not be supported on the cluster.System workflowsGiven that the CG VMs are now migratable, they will be migrated during maintenance activities on the host (AHV upgrade/Node removal/Host maintenance mode).Further, for load balancing, Acropolis Dynamic Scheduler (ADS) may choose to migrate these VMs to resolve hotspots in the cluster.On clusters with high availability enabled to tolerate node failures, the CG VMs will have a guarantee of restart in case of a node failure.User WorkflowsUser-triggered migrations would start working for the CG VMs now using APIs or acli. However, the migrate button would still be greyed out in PE UI for 6.7.1 customers.CCLM This feature does not enable live migration of CG VMs across clusters. Node Add/Remove If an old generation CPU node is added to a cluster where CG live migration is supported, then the cluster would no longer support CG live migration. There is no alert to the customer on 6.7.1 in this case. To support CG live migration again - the customer would need to remove the corresponding node, power off, and then power on the existing CG VMs on the cluster.If an old generation node is removed, the existing CG VMs would require power off and then power on to support CG live migration. Enablement - ONLY PROVIDED FOR TROUBLESHOOTING REASONS - DO NOT SHARE WITH ANY CUSTOMERSIn 6.7.1, this feature is behind a gflag (acropolis_enable_cg_vms_live_migration) in the Acropolis service, which disables the feature by default. This flag should be enabled by using the following steps when upgrading the Cluster to the required versions. Perform AOS-only upgrade to version 6.7.1.Set the gflag acropolis_enable_cg_vms_live_migration in acropolis to true.Perform AHV upgrade to version 20230302.1011 (AHV 9.0.1).Power off all the CG VMsPower on all the CG VMs. If the high availability is enabled on the cluster, some VMs might fail to power on tightly packed clusters. This happens as in older releases, CG VMs were not HA-protected (meaning that additional memory was not reserved for them), but now we need to reserve additional memory to guarantee HA, which may not be available. Customers may need to power off some VMs or add capacity to the cluster.If "Kyndryl Nederland B.V. - MoD" reports any issues with live migration of CG VMs, collect logs and open an ONCALL to engage engineering.
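For reference, a user-triggered live migration of a CG VM can be exercised from any CVM with acli once the requirements above are met; the VM and host values below are placeholders:

nutanix@cvm$ acli vm.migrate <cg_vm_name> host=<target_host>

If the migration is rejected, re-check the CPU generation, AOS/AHV versions and gflag state described above.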
KB15093
Curator FATAL due to "Metadata inconsistency type: kVBlockExtentId; Dedup extent"
This article describes an issue where deduplicated extents can become inconsistent.
On AOS 6.6.x prior to 6.6.2.7, a rare issue exists where deduplicated extents can become inconsistent. This can cause Curator scans to fail but could also affect the data consistency of any entity that uses these extents (VMs, VGs, etc.). The following message appears in /home/nutanix/data/logs/curator.FATAL when this occurs:

F20230607 04:54:43.715281Z 425 audit_manager.cc:42] Metadata inconsistency type: kVBlockExtentId; Dedup extent oid=3002:sha1=01ce004bcc55b36fb6859b9d3cc9f8ad2c2566f5:sz=16384 has 1 references but no ExtentIdMap entry

To confirm that the cluster is impacted by this issue, look for the mentioned FATAL messages in the /home/nutanix/data/logs/curator.INFO logs on all CVMs with the following command:

nutanix@cvm$ allssh 'zgrep "Metadata inconsistency type: kVBlockExtentId" ~/data/logs/curator*INFO*|tail -2'; date -u

For this to occur, the following conditions must be met:
Cluster using AOS earlier than 6.6.2.7 / 6.7.
Deduplication is enabled on any cluster container.
Erasure coding is enabled on the same containers where deduplication is enabled.
This issue is resolved in AOS 6.6.2.7 and 6.7. Upgrade to the latest supported version to permanently resolve and prevent this issue. For AOS 6.6.x clusters actively impacted by this issue or where maintenance operations cannot be completed because of it, contact Nutanix Support http://portal.nutanix.com for assistance.
KB9608
[Hyper-V] Host upgrade stuck due to space in user password
This article describes an issue where host upgrade is stuck due to space in user password.
Symptoms This issue is specific to Hyper-V. The hypervisor upgrade fails and gets stuck with the below status: nutanix@CVM$ host_upgrade_status One of the CVMs is in maintenance mode: nutanix@CVM:~/$ cs | grep -v UP The Genesis service is crashing on all the nodes with the below traceback: Get-SourceDCForReplication -Hostname xx-xxxx -Action delete -Username domain\user-name -password <passwd-hidden> work at Nutanix. on cvm ip: x.x.x.11 Let us say the User password is "We work at Nutanix". Genesis above only captured "We" as a password and left the rest. Then, the command does not understand "work" as there is no such parameter/argument in the command. See the highlighted text below for reference: 2020-04-12 00:24:20 ERROR hyperv_utils.py:105 Error executing cmd: Get-SourceDCForReplication -Hostname xx-xxxx -Action delete -Username domain\user-name -password <passwd-hidden> work at Nutanix. on cvm ip: x.x.x.11 ret: -1, out: , err: A positional parameter cannot be found that accepts argument 'work'. Cause This is because of missing quotes in Get-SourceDCForReplication. This is being tracked in ENG-268992 https://jira.nutanix.com/browse/ENG-268992. The password is visible in plain text, which needs to be hidden. This is tracked via ENG-301501 https://jira.nutanix.com/browse/ENG-301501.
The user password needs to be changed. The upgrade will not proceed, as the password needs to be updated manually in the post_upgrade_params ZK node. The Engineering team is working on a script to fix this. Until then, engage the Engineering team, referring to ONCALL-8589 https://jira.nutanix.com/browse/ONCALL-8589, to perform the zkedit.
KB10605
Foundation failing during Provisioning Network due to duplicate IP configured in Discovery OS
null
Problem: When using Foundation Portable or a Foundation VM to image new nodes which are currently booted into Discovery OS, a particular set of circumstances may lead to duplicate IPv4 addresses being configured on the nodes in Discovery OS.

The steps which need to have occurred are:
1. The first Foundation attempt fails or is aborted after the Provisioning Network stage has configured an interface in Discovery OS with the CVM IP address.
2. The VLAN intended for the cluster is changed in the Foundation configuration.
3. The IP address is left the same.
4. The Foundation process is retried and fails with 'fatal: Provisioning network'.

If this sequence of events has occurred, open the IPMI console to one of the nodes and run the ifconfig command in Discovery OS. If the output indicates the CVM IP address is assigned to two interfaces, one without a VLAN tag and one with a VLAN tag, then the process is likely experiencing this issue.

Example: Consider the following Foundation configuration being applied against a node in Discovery OS:
CVM IP configured is: x.x.x.123
VLAN tag configured as 2

If steps 1-4 above were confirmed, and ifconfig on Discovery OS shows:
eth2 configured with x.x.x.123
eth3.2 configured with x.x.x.123

Any further IPv4 network communication to the node will then fail to succeed.
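A quick way to spot the duplicate from the IPMI console, assuming standard net-tools ifconfig output in Discovery OS (shown only as an illustration, not a required step):

ifconfig | grep -B1 "inet "    # prints each interface name followed by its IPv4 line; the same address under both ethX and ethX.<vlan> confirms the duplicate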
Reboot the node to clear the network configuration in Discovery OS and retry the Foundation process to work around the problem. This issue is fixed in Foundation 5.0.
KB15210
Nutanix Files: REST API user creation fails for AD user with "Request failed with status code 500"
Nutanix Files API user creation fails for AD users
Creating a Nutanix Files REST API user with an AD domain user fails if performed on a Prism Element session that was redirected through Prism Central.

Steps to reproduce the issue:
1. Log in to Prism Central and launch Prism Element from Cluster Quick Access for the File Server cluster.
2. Launch the File Server console from Prism Element.
3. In the Files Console, go to Configuration > Manage roles.
4. To add a REST API user, click + New User in the REST API access users section.
5. The below error is returned when creating the Nutanix Files REST API user for an AD domain user:

Request failed with status code 500

We are aware of this symptom as a known issue.
This operation works by accessing Prism Element directly without involving Prism Central.1. Log in to Prism Element directly and launch the Files console.2. Create the Nutanix Files REST API user without special characters by following the syntax for username "AD\username".
KB6512
X-Ray: Custom Scenario Upload fails
If a user attempts to create a new custom scenario and package is imported as a zip, no configuration files are found in X-Ray. Workaround is to add files separately. Solution is tracked by XRAY-1006.
While attempting to add a custom scenario by uploading configuration files as a zip file, no configuration files are found in X-Ray. Following message is received: Scenario is not valid: No configuration files found at '/tmp/....'
Workaround
Upload all the files for the scenario separately; or
Place the scenario files inside a folder within the zip file instead of just zipping them up in the root (see the sketch below).

Solution
The issue is tracked in XRAY-1006 https://jira.nutanix.com/browse/XRAY-1006. No fix is available as of Oct 2021.
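For the second workaround, the archive can be built so that the configuration files sit inside a directory rather than at the zip root; a minimal sketch, where the directory name is a placeholder that is assumed to contain the scenario configuration files:

$ zip -r my_scenario.zip my_scenario/    # my_scenario/ holds the scenario configuration files

Upload the resulting my_scenario.zip through the X-Ray custom scenario dialog.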
KB1887
Extracting VMware core files
Extracting VMware core files
Whenever an ESXi host experiences a PSOD (Purple Screen of Death), it will attempt to write a core file to the disk. Typically, this will be saved in the /var/core directory: root@esxi# ~ # ls -lh /var/core/ Note: /var/core is a symlink to /scratch/core, and /scratch in turn points to /vmfs/volumes/<UUID>, for example, /vmfs/volumes/535eab40-5cb59b3c-fea8-002590e171c4 root@esxi# ~ # ls -ld /var/core This scratch partition is only 4 GB in size, so depending on the number of core files present, we may run out of space when we try to extract them. root@esxi# ~ # df -h /vmfs/volumes/535eab40-5cb59b3c-fea8-002590e171c4
Copy the core files to a Nutanix container: root@esxi# ~# cd /vmfs/volumes/<container name> Use vmkdump_extract to extract the core file: root@esxi# /vmfs/volumes/<container name>/cores # vmkdump_extract -l vmkernel-zdump.1 Warning: Header expects 104858112 byte zdump file, but stat says it's 104857600 bytes. Use the less command to open the vmkernel-log.2 file and then use Shift+G to jump to the bottom of the file. Use the up-arrow key to scroll up a little bit and then the cause of the PSOD will be revealed: 2014-11-14T15:28:41.565Z cpu0:44778392)WARNING: NMI: 952: LINT1 motherboard interrupt
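As an alternative to paging through the extracted log with less, the likely panic lines can be located directly with grep; this is only a convenience sketch, and the exact keywords that appear depend on the nature of the PSOD:

root@esxi# grep -n -i -E -B5 "NMI|LINT1|Panic" vmkernel-log.2    # print matching lines with 5 lines of leading context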
KB8448
PC can't load: "Request could not be processed. badly formed hexadecimal UUID string: None"
PC cannot load because the cluster list request fails with a badly formed hexadecimal UUID string.
After logging in to PC, an error is shown in the UI. In prism_gateway.log, it shows the ApiError:

2019-10-28 08:54:03 ERROR resource.py:173 Traceback (most recent call last):

In aplos.out, around the same time, there is a traceback due to the intent spec with UUID 8c011d71-d58c-5324-b912-bd1cf16ecb3c:

2019-10-28 08:54:03 INFO intent_spec.py:111 [None] Found intent spec with UUID 8c011d71-d58c-5324-b912-bd1cf16ecb3c and cas_value 52

Inspect the intent spec which shows [None] in aplos.out:

<nuclei> diag.get_specs uuid=8c011d71-d58c-5324-b912-bd1cf16ecb3c

This confirms the Kind UUID for the intent spec is None.
Engineering is working on ENG-254816 to fix this issue. As a workaround, manually delete the intent spec with the None Kind UUID.

Note: Only use this command if all the conditions above are seen. If your conditions are not exactly the same, please consult with Sr. SREs or DevEx.

<nuclei> diag.delete_specs uuid=8c011d71-d58c-5324-b912-bd1cf16ecb3c

Refresh PC in the browser and log in. The GUI should now load.
KB14397
Cross-Cluster Live Migration (CCLM) iptables entries missing on new AHV node after cluster expansion
Expanding a new node into a cluster may not add all necessary iptables chains which are required for Cross-Cluster Live Migration (CCLM).
Expanding a new node into a cluster may not add all necessary iptables chains which are required for Cross-Cluster Live Migration (CCLM). As a result, if a VM is attempted to be migrated between clusters to this host, it fails with an error message like this in Prism Central: VM migration from the protected Availability Zone failed, error detail: Entity Migrate failed while notifying VM service due to error: Failed to run CCLM prepare : Anduril failed to handle request for VM 10155eb8-3aa0-44c0-9409-330f1a7279e9: error = 32, details = Acropolis failed to handle PrepareVmCrossClusterLiveMigrate request for VM 10155eb8-3aa0-44c0-9409-330f1a7279e9: error = 64, details = Maximum number of retries exceeded for adding a iptable rule for host port Looking at ~/data/logs/anduril.out on the Prism Element anduril leader (can be found by following KB-4355 https://portal.nutanix.com/kb/4355) we can see the following message: 2023-02-28 12:24:07,696Z ERROR ergon_base_task.py:473 Task PrepareVmCrossClusterLiveMigrateTask failed with message: Traceback (most recent call last): Dropping down to ~/data/logs/acropolis.out on the acropolis leader, we can confirm the iptables signature: 2023-02-28 12:23:49,313Z INFO host.py:2972 Retrying(1): iptables rule for opening qemu port 49250: 1, , iptables: No chain/target/match by that name.
This issue is resolved in: AOS 6.8.X family (eSTS): AOS 6.8 Upgrade AOS to versions specified above or newer.WorkaroundTo workaround this issue, perform the checks described in KB 12365 https://portal.nutanix.com/kb/12365 to make sure it is safe to stop Acropolis and then restart Acropolis leader (can be found by following KB 4355 https://portal.nutanix.com/kb/4355) with the following command: nutanix@cvm:~$ genesis stop acropolis && cluster start Once the acropolis leader has been migrated and all services are stable, validate that the CCLM chain has been properly installed: nutanix@cvm:~/data/logs$ hostssh 'iptables -L -n | grep CCLM-INPUT'
KB9339
How to update NGT version in the cluster to a newer version available in latest AOS release
null
This KB instructs how to upgrade NGT to a newer version than what is available in the current AOS installed in the cluster. When complete you will have a new folder in /home/ngt/installer/ that corresponds to the version number of NGT that you put on the AOS cluster. This procedure needs to be done to all nodes on an AOS cluster. You do not have to do it to all AOS clusters in an environment and different clusters could have different versions. The file /home/ngt/installer/latest is the file that has the current version of NGT that the cluster uses during its automated activities and the corresponding directory.
All the commands below are ran from the same CVM. The example in commands below is trying to install NGT available in 5.12 AOS to 5.11.x1. Download the AOS bundle that has the required ngt version. You can check the AOS version and NGT bundled with it from https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=NGT&title=AOS-NGT-Vm+Mobility+version+mapping 2. Run the following command to check the location nutanix core package in the bundle nutanix@NTNX-CVM:~$ tar tf <aos_targz_bundle> | grep nutanix-core Sample Output: nutanix@NTNX-CVM:~$ tar tf nutanix_installer_package-release-euphrates-5.12-stable-047fec91cabcb9da0178269749af1c6f47c4963c-x86_64.tar.gz | grep nutanix-core 3. Now extract the nutanix core package from AOS bundle by running the following command, for the file nutanix@NTNX-CVM:~$ tar xf <aos_targ_bundle> <output_from_step_2> Sample Output: nutanix@NTNX-CVM:~$ tar xf nutanix_installer_package-release-euphrates-5.12-stable-047fec91cabcb9da0178269749af1c6f47c4963c-x86_64.tar.gz install/pkg/nutanix-core-el7.3-release-euphrates-5.12-stable-047fec91cabcb9da0178269749af1c6f47c4963c.tar.gz 4. Navigate the extracted directory and extract the ngt folder from the core package nutanix@NTNX-CVM:~$ cd install/pkg/ Sample Output: nutanix@NTNX-CVM:~$ cd install/pkg/ 5. Now extract the ngt-installer.tar.gz file nutanix@NTNX-CVM:~/tmp/install/pkg$ cd ./ngt 6. Make directory for the ngt version and move the files by running. nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ mkdir $(cat ngt-installer-version) && mv info linux windows $_ 7. Copy the directory to all the CVM's home folder nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh scp -r $IP:$(pwd)/$(cat ngt-installer-version) ~ 8. Copy the directory from home folder of each CVM to ngt folder nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh "sudo sh -c \"cp -r /home/nutanix/$(cat ngt-installer-version) /home/ngt/installer/\"" 9. Change ownership of ngt folder nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh "sudo chown -R ngt:ngt /home/ngt/installer/$(cat ngt-installer-version)" 10. Confirm the ownership on the newly created folder is showing as ngt. nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh "sudo ls -l /home/ngt/installer | grep $(cat ngt-installer-version)" 11. Update the latest ngt version file in the ngt directory nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh "sudo sh -c \"echo -n $(cat ngt-installer-version) > /home/ngt/installer/latest \"" 12. Confirm correct version is reflected in the latest file nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh 'sudo cat /home/ngt/installer/latest;echo' 13. Restart ngt service nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ allssh genesis stop nutanix_guest_tools ; cluster start 14a. Confirming the updated version by referring to the NGT service page and also Prism Central is applicable A. On the AOS cluster Get the NGT master location: 14b. Prism Central will list the NGT cluster version for each VM's details you view example:15. Cleanup the extracted temporary directory from earlier nutanix@NTNX-CVM:~/tmp/install/pkg/ngt$ cd && rm -rf ~/tmp/install
KB1116
Stopping HA.py from creating autopath route on certain hosts
How to stop a particular host from being monitored by ha.py - possibly while troubleshooting internal vswitch bugs.
HA.py is the autopathing feature that Nutanix uses within the CVM to monitor ESXi hypervisors. Possible reasons to prevent ha.py from creating a route in the internal vswitch:

All the guest VMs are migrated to a different host.
Testing the internal vswitch. Run ssh 192.168.5.1 or ssh 192.168.5.2 to test these ports.
The vSwitch or DVS configuration might get misconfigured due to an ESXi bug (you may have to delete the CVM eth1 network adapter from the CVM configuration, or delete and re-add the vmkernel port).

In reality, ESXi's vSwitch feature is quite stable on commonly used versions in late 2016. It is not expected that this troubleshooting method will need to be used by Nutanix. Note that disabling autopath disables the cluster. Consider alternative troubleshooting methods such as moving VMs off the problem node.
Find the host id, ESXi IP address and determine if it is monitored by ha.py: nutanix@cvm$ ncli host ls | egrep "ID|Management Server|Monitored" Disable monitoring of hosts using the following command: nutanix@cvm$ ncli host set-monitoring enabled=false ids=<host_id> For example, to disable monitoring of host with ID 8: nutanix@cvm$ ncli host set-monitoring enabled=false ids=8 After this, you can do maintenance of the node and troubleshoot if the internal vswitch ports are working. Re-enable after the maintenance is done using the following command: nutanix@cvm$ ncli host set-monitoring enabled=true ids=<host_id> For example: nutanix@cvm$ ncli host set-monitoring enabled=true ids=8
KB5885
Hyper-V: Kerberos Constrained Delegation configuration issues
The article explains how to configure Kerberos Constrained Delegation on Nutanix Hyper-V clusters and some known issues.
In Nutanix Hyper-V 2016 environment, the SMB3 container requires Kerberos authentication to secure the storage container. During the failover cluster join workflow in Prism, the Hyper-V hosts computer objects in Active Directory are configured to enable constrained delegation for the hosts in the cluster. This allows live migration, virtual machine creation and other storage related management tasks to be delegated via Kerberos Constrained Delegation. If experiencing issues with any of the above management tasks using a remote computer, and they succeed directly on the host, it is likely there is an issue with constrained delegation configuration. When creating a new VM remotely using Failover Cluster Manager or Hyper-V Manager console, the error message is displayed: The server encountered an error while creating TestVM. Failed to create a new virtual Machine. The message can also be the following: The Virtual Machine Management Service encountered an error while configuring the hard disk on virtual machine TestVM. Failed to create the virtual hard disk. The system failed to create ‘\\NTNXCL01\CTR1\TestVM.vhdx’ In the VM properties, after going to the Virtual Hard Disk section and clicking "Inspect", the error message is displayed: An error occurred when attempting to retrieve the virtual hard disk “\\NTNXCL01\CTR01\TestVM\TestVM.vhdx” on server Host1. Getting the information for virtual disk ‘\\NTNXCL01\CTR01\TestVM\TestVM.vhdx’ failed. Changing the virtual hard disk properties (resizing it, or converting from dynamic to fixed size) shows an error: Error applying hard drive changes. ‘TestVM’ failed to modify device ‘Virtual Hard Disk’. (Virtual machine ID D0C3A421-59B7-40A9-9608-3061173FF5C1) Failed to open attachment `\\NTNXCL1\CTR1\TestVM\Virtual Hard Disks\TestVM.vhdx’. Error: ‘The user name or password is incorrect’ stargate.out log on target remote node may show following signature: E1201 12:00:28.971247 5440 gssapi_handler.cc:70] Major status = 65536 However, when logged on the the Hyper-V host directly (via RDP for example) – the VMs are created successfully.
These symptoms may indicate a lack of, or an incorrect, Kerberos Constrained Delegation configuration. To check the delegation settings, open the "Active Directory - Users and Computers" console, and check the properties of every cluster node's computer object. On the Delegation tab, the "Trust this computer for delegation to specified services only" option must be selected, and two services must be added for every other host. For example, if you have a cluster of three nodes: Jaws-1, Jaws-2 and Jaws-3 - here is the configuration example: If the delegation is disabled (the option "Do not trust this computer for delegation" is chosen), or you do not see some of the nodes here, you may either add them manually, or use the Enable-ConstrainedDelegation PowerShell cmdlet on any of the nodes. The cmdlet is a script written by Nutanix to automate the constrained delegation configuration in a cluster. The syntax is: Enable-ConstrainedDelegation -ClusterName <cluster name as per AD computer object> -Hosts <comma separated names of the hosts as per AD computer objects> The delegation may also be disabled at the user account level. If the computer objects' delegation settings are correct, check the user account in Active Directory - if the checkbox "Account is sensitive and cannot be delegated" is checked (like on the screenshot below), uncheck it. In case the Kerberos delegation stops working unexpectedly, you may be hitting a known issue that can break Kerberos authentication https://docs.microsoft.com/en-us/windows/release-health/resolved-issues-windows-8.1-and-windows-server-2012-r2#2748msgdesc from the November 9 Cumulative Update for Windows Server 2019 and below. As an example, you see errors "Username or password incorrect" when creating new VMs. Microsoft has released a hotfix that should be installed on all Active Directory Domain Controllers: for Windows Server 2012R2 https://support.microsoft.com/en-au/topic/kb5008604-authentication-fails-on-domain-controllers-in-certain-kerberos-scenarios-on-windows-server-2012-e49e9c75-6aff-4cc4-a750-01ea198dfe59, for Windows Server 2016 https://support.microsoft.com/en-us/topic/november-14-2021-kb5008601-os-build-14393-4771-out-of-band-c8cd33ce-3d40-4853-bee4-a7cc943582b9 and for Windows Server 2019 (KB5008602) https://support.microsoft.com/en-us/topic/november-14-2021-kb5008602-os-build-17763-2305-out-of-band-8583a8a3-ebed-4829-b285-356fb5aaacd7. Download and install the patch by following the respective link.
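As a usage illustration of the cmdlet syntax above (the cluster and host names below are the sample names used in this article and must be replaced with the AD computer object names of your failover cluster and hosts):

Enable-ConstrainedDelegation -ClusterName NTNXCL01 -Hosts "Jaws-1","Jaws-2","Jaws-3"

Re-check the Delegation tab of each host's computer object afterwards to confirm the service entries were added.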
KB9688
X-Ray Reset Local Secret Store Passphrase
null
This article describes how to reset the X-Ray local secret store passphrase.
Note: The targets will need to be deleted and re-added since their saved credentials will not be able to be decrypted. First, log in to X-Ray via SSH. Then, run the command below to back up the vault (password) file and restart the xrayserver service: nutanix@xray$ sudo mv /home/nutanix/data/xray/data/vault /home/nutanix/data/xray/data/vault.backup && sudo docker restart xrayserver After this is done, the next time you visit X-Ray in the browser, you will see a dialog box asking you to set the local secret store password. See sample screenshot below:
KB14489
NGT installation failure logs do not provide information about the failure
Logs for some NGT installation failures do not provide any information about the error. This KB describes how to capture the required error messages to debug further.
The NGT installer executes commands using the ExecuteSilently function. The NGT installer log files contain log lines from this function, but the function does not record the stdout and stderr of the executed command in the installer log file. In addition, the function records incorrect error messages by wrongly trying to convert the executed command's exit code to a corresponding error message. The error message conversion should be done with a Windows System Error code, not a command exit code.

Case 1
NGT installation fails with a generic error, for example: Incorrect function. In this case, the NutanixSSRPackage.log file does not output the actual error either:

MSI (s) (D8:38) [13:56:51:455]: Hello, I'm your 32bit Impersonated custom action server.

ExecuteSilently was unable to read the executed command output and did not print the command output.
ExecuteSilently translated the error code 0x1 to "Incorrect function" incorrectly. This does not indicate the reason cmd.exe exited with 0x1.

Case 2

MSI (s) (A8:A8) [12:00:08:587]: Executing op: CustomActionSchedule(Action=StartSSRUIServiceInNGA,ActionType=1089,Source=BinaryData,Target=ExecuteSilently,CustomActionData=cmd.exe /c net start "Nutanix Self Service Restore Gateway")

ExecuteSilently was unable to read the executed command output and did not print the command output.
ExecuteSilently translated the error code 0x2 to "The system cannot find the file specified" wrongly. This does not indicate the reason cmd.exe exited with 0x2.

Please note that the above log snippets are only an example, and the symptoms may vary depending on the actual error.
ENG-458776 https://jira.nutanix.com/browse/ENG-458776 has been filed to capture the actual error in the logs.

Case 1 Workaround
For now, without the fix, manually run the install command that failed to see the actual error:

Retry the NGT installation again and wait for it to fail.
Do not press OK at the error message pop-up window so that the installation does not roll back.
Open a command prompt as the Administrator and run the failed installation command manually. For example:

cmd.exe /C ""C:\Program Files\Nutanix\Python36\python.exe" -E "C:\Program Files\Nutanix\ssr\ssr_gateway\ssr_gateway_service.py" install"

This will fail and provide the failure message that can be used to debug further. Example:

C:\Windows\system32>cmd.exe /C ""C:\Program Files\Nutanix\Python36\python.exe" -E "C:\Program Files\Nutanix\ssr\ssr_gateway\ssr_gateway_service.py" install"

For other cases, similarly to Case 1, run the failed command manually to see why it failed.
KB9030
[NDB] Reset Web Console Admin Password and ERA CLI password
This article describes how to reset the admin password and the Era CLI password.
This article describes the steps to reset an ERA password for both the CLI and GUI if you still have CLI access. Note: You will need to be able to SSH to the ERA appliance.
How to reset the ERA GUI password?

SSH to the ERA appliance.
Execute the following command:

[era@localhost ~]$ era-server

Execute the following command:

era-server > security password reset

How to reset the ERA CLI password?

SSH to ERA.
Execute the following command:

[era@localhost ~]$ sudo passwd era

There might be circumstances where the reset might not work; in that case, contact Nutanix Support.
KB5877
AHV installation using Phoenix can be stuck at "Configuring AHV for EFI boot" state when BIOS mode is EFI
The Phoenix AHV installation could be stuck at the "Configuring AHV for EFI boot" state forever when a BIOS mode is EFI.
The Phoenix AHV installation could be stuck at the "Configuring AHV for EFI boot" state forever when a BIOS mode is EFI.
Please switch a BIOS mode from EFI to Legacy BIOS mode. For Lenovo Hardware, do the below: Go to BIOS > System Settings > Legacy BIOS > Enable Again go to BIOS > Boot Manager > Boot Modes In Boot Manager > Boot Modes > Select Boot Mode, choose Legacy Mode Reboot the system
KB13552
LCM operation failed on hypervisor with [Catalog failure: Received buffer is too short]
LCM AOS/AHV upgrade fails as catalog is restarted after AOS upgrade
LCM upgrade operation task may fail with the below error on Prism. lcm_ops.out on the LCM leader node shows the below error. 2022-05-10 02:31:16,238Z INFO foundation_client.py:70 Retrieving the Foundation Rest Client. catalog_service_proxy on the LCM leader node shows catalog was trying to establish a session with the leader, but it failed.[Note: As catalog_service_proxy is not included in LCM log bundle, logbay must be collected.] I20220510 02:31:09.250295Z 22093 service_proxy.cc:540] Starting ServiceProxy (build version el7.3-release-euphrates-5.20.3.5-stable-17c9e02d9832ad2f2d6e69ac6126b871eef3f72f) catalog.out on the catalog leader shows the below output. It indicates catalog service on the catalog leader was still unstable as it was restarted after AOS upgrade. 2022-05-10 02:31:10,690Z INFO rpc_counter.py:151 [catalog_outgoing_rpc_stats] Registered metrics: ['requests_sent', 'success_count', 'error_count', 'success_response_time_usec', 'error_response_time_usec']
Nutanix Engineering is aware of the issue. The fix is available in AOS 5.20.5 and AOS 6.1.1 and above. As a workaround, retry the LCM upgrade once the catalog service has stabilized (see the check below).
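Before retrying, it can help to confirm that the catalog service is no longer restarting on any CVM; this is a simple check using standard cluster tooling, not a required step from the original workaround:

nutanix@cvm$ allssh 'genesis status | grep -w catalog'    # the PIDs should be populated and stay the same across repeated runs

If the PIDs keep changing between runs, wait for the service to stabilize (or review the catalog logs) before resubmitting the LCM operation.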