id | title | summary | description | solution
---|---|---|---|---|
KB9954
|
HPEDX: IPMI IP change during Expand Cluster is not taking effect due to iLO requirement
| null |
During an Expand Cluster operation, if you specify a new IPMI IP address for HPE nodes, the IP configuration is updated in the cluster, but the iLO retains the old IP address until an iLO reset is performed.
|
Workaround:
Run the following commands on the hypervisor.
AHV:
[root@host]# ipmitool mc reset cold
ESX:
[root@host]# ./ipmitool mc reset cold
Hyper-V:
C:\> ipmiutil reset -k
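To confirm the iLO picked up the new address after the reset, the BMC LAN configuration can be printed; a minimal sketch, assuming channel 1 (the channel number varies by platform):
[root@host]# ipmitool lan print 1 | grep -i 'IP Address'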
|
KB7423
|
Installing Nutanix Guest Tools in Windows 2016 machines might fail with icacls error
|
Installing Nutanix Guest Tools in Windows 2016 machines might fail with an icacls error.
|
Installing Nutanix Guest Tools in Windows 2016 machines might fail with an icacls error. Error message: cmd.exe /c icacls "C:\Program Files\Nutanix" /reset
The NGT install logs and NGT_Verbose_Install_Log.txt provide us with the following error message.
MSI (s) (38:64) [09:51:40:250]: Hello, I'm your 32bit Elevated Non-remapped custom action server.
The Windows event logs show the following events with this error message:
Log Name: Application
|
Following are the steps to resolve the issue; resolution involves reinstalling NGT.
1. Uninstall Nutanix Guest Tools from "Programs and Features".
2. Ensure the Nutanix Guest Tools (NGT) service is removed from "Services".
3. Run the following PowerShell cmdlet to check installed packages:
Get-WmiObject -Class win32_product -Filter "Name like '%Nutanix'"
4. Ensure Nutanix Guest Tools is not listed.
5. Delete the "Nutanix" folder in C:\Program Files\.
|
KB5247
|
Mixed Use(MU) boot drives for 960G Configuration
|
This KB provides details about use of Mixed Use boot drives for 960G configuration.
1. All-flash nodes with mixed SSD drives produce the error "Node-has-fewer-than-2-non-PM863a-drives" because they were shipped with an incorrect combination of 3DWPD and 1DWPD drives on NX platforms.
2. Customer "Suntec" approved MU Drives as CVM Boot drives
|
Alert message on host:
Host machine <Host IP> has the following problems: Node has fewer than 2 non-PM863a drives.
Scenario 1: The above messages are seen in Prism alerts on an all-flash NX-3060-G5 node with mixed Samsung SSD drives.
The drive with the part number containing “KM” in its name is the boot drive. At least 2 boot drives are required per node to create RAID partitions.
The drive with the part number containing “LM” in its name is a low-endurance drive and is not qualified as the boot drive.
Look at the df -h output:
nutanix@cvm$ allssh df -h
In the above case, CVMs X.X.X.61 and X.X.X.63 do not have RAID partitions.
Check for all the disks in the cluster:
nutanix@CVM:~$ allssh list_disks
Check the RAID partitions using the cat /proc/mdstat command on all CVMs:
nutanix@CVM:~$ allssh "cat /proc/mdstat"
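To summarize array health beyond /proc/mdstat, mdadm can be queried; an illustrative sketch, assuming the boot RAID is /dev/md0 (device names can differ per CVM):
nutanix@CVM:~$ allssh "sudo mdadm --detail /dev/md0 | grep -E 'State :|Active Devices|Failed Devices'"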
Check to see if there are any FRU issues:
nutanix@CVM:~$ ipmitool -I lanplus -H X.X.X.46 -U ADMIN -P <IPMI password> fru
If the FRU information looks correct, RAID partitions were not created during Foundation and Phoenix. This happens because CVMs that do not have 2 KM boot drives cannot create the mirrored RAID partitions.
In such cases, running Phoenix on the node with the clean CVM option will not succeed in creating the RAID partitions, because the single KM boot drive cannot find a partner to create the other RAID partition.
Check to see if the node comes up after Phoenix with Genesis crashing.
Signature of the error in genesis.out:
2018-02-15 11:10:47 INFO hypervisor_ssh.py:32 Trying to access hypervisor with provided key...
Signature of error in hades.out:
2018-02-15 10:51:55 WARNING disk_manager.py:276 Failed to reach genesis. Skipping Hades configure
Look for any messages in /var/log/messages for RAID creation:
nutanix@:X.X.X.61:/home/log$ sudo grep '/dev/md' messages
To verify if the KM drives are in the hcl.json file:
On the node, that is out of the cluster:
nutanix@:X.X.X.61:~$ sudo smartctl -i /dev/sdb4 | grep Model
Skip the trailing numbers from the product name.
nutanix@:X.X.X.61:~$ grep MZ7KM960HMJP /etc/nutanix/hcl.json
Open the hcl.json file and go to the entry:
"manufacturer": "Samsung",
The boot field is set to true and that indicates that this is a boot drive.
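As a quick alternative to opening the file, the boot flag near the model entry can be grepped directly; a sketch only, since the distance between the model string and the "boot" field in hcl.json can vary by AOS version:
nutanix@:X.X.X.61:~$ sudo grep -A15 'MZ7KM960HMJP' /etc/nutanix/hcl.json | grep '"boot"'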
Check the same information for the LM drive:
nutanix@:X.X.X.61:~$ sudo smartctl -i /dev/sda1 | grep Model
Inside hcl.json:
"manufacturer": "Samsung",
The boot parameter here indicates that the LM drive is not a boot drive.
Because of this information, Phoenix does not detect the MZ7LM drive as a boot drive and fails to install the CVM.
CRITICAL svm_rescue:650 No suitable SVM boot disk found.
Scenario 2: This issue occurs on DELL R750xs nodes (qualified for customer "Suntec" only).
1. Confirm these are the R750xs nodes approved for "Suntec":
nutanix@cvm$ ipmitool fru
2. Confirm that more RI data drives are attached (KRM6VRUG960G, or 1 DWPD drives) and fewer boot drives (KRM6VVUG960G, or 3 DWPD drives):
nutanix@CVM:~$ allssh list_disks
You can also count the attached drives as follows:
nutanix@cvm$ allssh 'lsblk -S -o MODEL -n | sort|uniq -c'
Scenario 1 drive reference (NX, Samsung):

SSD Type | Drive Label | Product part number | S/N starts with
---|---|---|---
3DWPD / SM863a | C-SSD-960GB-2.5-C | MZ7KM | S3F3
1DWPD / PM863a | C-SSD-960GB-2.5-V | MZ7LM | S2TZ

Scenario 2 drive reference (Dell R750xs):

SSD Type | Product part number | Purpose
---|---|---
3DWPD / Mixed Use | KRM6VVUG960G | CVM Boot drive
1DWPD / Read Intensive | KRM6VRUG960G | Data Drive
|
Scenario 1: For NX-3060-G5
Find another node that has an extra KM drive in place of an LM drive.
Logically remove that disk from the node and swap it with the LM drive from the affected node.
Make sure that the disk is removed logically from the cluster so that all the data has been migrated off the disk to the other disks.
After swapping the disks, repartition and add the newly inserted disks in both nodes and make sure they are detected using the df -h and list_disks commands.
Phoenix the affected node using the clean CVM option to start the RAID creation workflow.
After the node is imaged and the RAID partition is created, add the node to the cluster.
Follow the same procedure for other similarly affected nodes.
Scenario 2: For R750xs
Ensure DELL replaces the correct drive model: the CVM boot drive needs to be replaced with a 3DWPD / Mixed Use drive, and the CVM data drive needs to be replaced with a 1DWPD / Read Intensive drive.
|
KB7931
|
Health checks might get aborted on large clusters
|
An issue has been observed on NCC 3.8.0.1 and 3.10.0.x in large clusters where health checks might get aborted.
|
An issue has been observed on NCC 3.8.0.1 and 3.10.0.x in large clusters where health checks might get aborted. To verify this scenario, follow the steps below.
Get a complete list of tasks running for ncc.
nutanix@cvm$ ecli task.list component_list=NCC
Check the parent task from above list.
nutanix@cvm $ ecli task.get <Parent_Task UUID>
Check the associated child tasks. You may encounter aborted tasks. To verify this, you will see the status as:
"status": "kAborted",
Look at the health server logs. You will see it is in a crash loop while checking for task queue initialization.
2019-07-30 00:17:29 INFO health_server.py:333 Waiting for task queue initialization to be completed.
If the tasks remain in the same state for more than 45 minutes, they get marked as aborted. This can be verified in the health_server logs.
2019-07-28 02:42:25 INFO health_server.py:333 Waiting for task queue initialization to be completed.
|
This issue has been fixed in NCC 3.9.0; upgrade NCC to the latest version. The problem occurs when the Insights server has crashed but health_server did not FATAL at the time. This is a race condition that happens when Insights is down while the health server is booting up.
You can verify this by checking the Fatal files or from the health_server logs.
2019-07-27 21:27:08 INFO arithmos_client.py:154 Attempting to update arithmos master after getting kNotMaster from previously known master.
To fix the problem, restart the cluster health service.
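A minimal sketch of restarting the service cluster-wide, using the standard genesis stop / cluster start pattern (run from any CVM):
nutanix@cvm$ allssh genesis stop cluster_health
nutanix@cvm$ cluster start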
|
KB13110
|
Nutanix Disaster Recovery (formerly Leap) - Re-enabling SyncRep fails for UEFI VMs after migrating the VM's vdisks to a new container
|
Stretch fails for UEFI VMs if they were previously protected with SyncRep and the VM was then migrated to a new container.
|
An issue has been identified where, for a VM configured with UEFI boot and added to a SyncRep Protection Policy, removing/disabling the VM from SyncRep does not remove the nvram disk on the target cluster from the live NFS namespace. Stretch parameters are removed from the child vdisk but not the parent vdisks. If the VM is then migrated to another container, re-enabling SyncRep for the VM will constantly fail.
Steps to identify the issue:
EntityProtect Ergon tasks on the Source cluster will be created every 5 minutes and will end up in an aborted status:
nutanix@CVM:~$ ecli task.list component_list=Cerebro operation_type_list=EntityProtectDisks
On the Target Cluster ReceiveSnapshot tasks will be aborted:
nutanix@CVM:~$ ecli task.list component_list=Cerebro operation_type_list=ReceiveSnapshot
Cerebro Master INFO logs on the source PE will report that the replicate metaop failed due to an "insurmountable issue":
nutanix@CVM:~$ grep 'insurmountable issue' ~/data/logs/cerebro.INFO
Checking stargate logs with the "work_id" we can find the remote stargate on the target cluster:
nutanix@CVM:~$ allssh 'grep work_id=[work_id_from_cerebro_log] ~/data/logs/stargate.*INFO* |grep "Fetched remote stargate"'
Example:
nutanix@CVM:~$ allssh 'grep work_id=2606711 ~/data/logs/stargate.*INFO* |grep "Fetched remote stargate"'
Checking the remote target Stargate logs (found from the logs above) for the same work id, it is noted that it failed to "add originating vdisk id to the vdisk of the target file":
nutanix@CVM:~$ grep "work_id=2606711" ~/data/logs/stargate.INFO |grep "Attempt to add originating vdisk.*failed"
Searching for the vdisk on the target cluster with nfs_ls, we see that it is found in both containers. As the vdisk creation is failing for the replicate, you will also see the vdisks in the same container as the source in the recycle_bin:
nutanix@CVM:~$ nfs_ls -liaR |egrep "^/|7d2d97fa-3027-4a5c-98bd-0dfbd427010a"
|
Workaround:
Removing the Recovery Points for the VM on the target cluster will clean up the vdisks that were preventing the stretch from being re-enabled.
This can be done by the customer, if they agree to, from the PC UI. Refer to documentation https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide-:ecd-ecdr-recoverypoints-ui-pc-r.html for instructions to delete the recovery points.
If removing the Recovery Points for the VM on the target cluster did not resolve the issue, then most likely a live vdisk is causing this symptom.
Once the live vdisk is created on remote site, we stamp it with the below parameters:
• stretch_params_id
If any other vdisk has identical values for any of the mandatory fields, then the Pithos update would fail, leading to replication getting aborted.
Here are the mandatory stretch fields :
stretch_params_id
From the stargate log, check for "Associating stretch params id" for the given work_id :
grep "work_id=2606711" ~/data/logs/stargate.INFO |grep "Associating stretch params id"
The output should show something like :
Associating stretch params id (cluster_id: 1059493956957935184 cluster_incarnation_id: 1694680819624960 entity_id: 134478),
The above message has all 5 mandatory fields. Check vdisk_config_printer for the existence of another vdisk with any of the 5 mandatory fields.
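One way to scan the vdisk configuration dump for other vdisks carrying the same stretch fields is a simple grep; an illustrative sketch (match against the values from your own log output):
nutanix@CVM:~$ vdisk_config_printer | grep -C5 stretch_params_id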
Once the vdisk is identified, run medusa_printer to look up the NFS namespace:
medusa_printer --lookup nfs --nfs_inode_id <nfs-inode> --full_path
If the output NFS path shows something like :
/SelfServiceContainer/.acropolis/vmdisk/7d2d97fa-3027-4a5c-98bd-0dfbd427010a
Notice that it is not in the ".snapshot" directory but is a live vdisk causing the symptoms.
Open a Tech-Help or ONCALL to engage a Support Tech Lead or a DevEx Engineer.
|
KB8152
|
Nutanix Kubernetes Engine - Prometheus Pod in Constant CrashLoopBackOff due to OOM Killed | Increase the Memory Limit
|
This article details the procedure to identify Prometheus pods restarting due to out-of-memory (OOM) conditions and increase the memory limit for the pods.
|
Note: If this KB does not apply, refer to KB 8154 http://portal.nutanix.com/kb/8154, which details other Prometheus crash issues.
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.
From the Kubernetes master VM or a workstation with kubectl installed and configured, confirm the following:
The Prometheus pod (prometheus-k8s-0) in Karbon is in a constant "CrashLoopBackOff" state:
[nutanix@masterVM ~]$ sudo kubectl get pods -n ntnx-system
Investigating via "kubectl describe" shows the reason as "OOM Killed":
[nutanix@masterVM ~]$ sudo kubectl describe pods prometheus-k8s-0 -n ntnx-system
This issue has been observed predominantly in clusters deployed on NKE versions prior to 1.0.3. In certain cases, the limit initially set (500Mi) was not enough. Starting with Karbon 1.0.3, the initial size was increased to 1Gi. However, in some cases, this limit is not enough and we still have to increase the memory limit to 2Gi to resolve the issue.
|
There is no permanent fix yet. KRBN-6327 https://jira.nutanix.com/browse/KRBN-6327 has been created to re-evaluate the current memory limit for Prometheus pods and adjust the sizing in future NKE releases.
As a workaround for the affected Kubernetes cluster, the memory limit may be increased to a reasonable value. The procedure to increase the limit is below.
Prometheus is deployed via the Prometheus Operator. The definition exists in the form of a CRD. First, save the current definition in a yaml file:
[nutanix@masterVM ~]$ sudo kubectl get prometheus -n ntnx-system k8s -o yaml > p8s.yaml
Create a backup of the above file in a different path or with a different name. Since the first file in step 1 will be modified, the backup created here can be used for recovery, if needed:
[nutanix@masterVM ~]$ sudo kubectl get prometheus -n ntnx-system k8s -o yaml > p8s_backup.yaml
Using your preferred editor, modify the p8s.yaml file and change the memory limit under the limits section (path: .spec.resources.limits.memory) to a conservative value. Sample content below:
apiVersion: monitoring.coreos.com/v1
Now, apply the new spec:
[nutanix@masterVM ~]$ sudo kubectl apply -f p8s.yaml
Note: if the above command fails to apply the updated .yaml when executed from a workstation with kubectl, try using the kubectl client on the master node/VM. It has been observed that trying to apply the .yaml with an older kubectl client failed with the following error: error: unable to decode "p8s.yaml": no kind "Prometheus" is registered for version "monitoring.coreos.com/v1"
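As an alternative to editing the saved yaml file, the same change can be applied with a merge patch; a sketch, assuming the Prometheus object is named k8s in the ntnx-system namespace as above (verify against your operator version before use):
[nutanix@masterVM ~]$ sudo kubectl patch prometheus k8s -n ntnx-system --type merge -p '{"spec":{"resources":{"limits":{"memory":"2Gi"}}}}'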
The Prometheus Operator will now restart the Prometheus pods, and when they are up, they should have the new memory limit. Use the commands below to verify the new limit and that the Prometheus pod is up and running:
[nutanix@masterVM ~]$ sudo kubectl get pods -n ntnx-system
[nutanix@masterVM ~]$ sudo kubectl describe pods -n ntnx-system prometheus-k8s-0
|
KB4432
|
SMB Mount Point
|
VMs are deployed on a Nutanix SMB object that is not local to the host, so the share is not visible in Explorer unless mounted as a network drive.
|
We deploy VMs on Nutanix SMB shares, which are not local to the host, so they are not visible in Explorer unless mounted as a network drive. This is the default behavior for an SMB share on the Windows platform. Once the share is mounted as a network drive, it creates a reparse point that can be used to deploy VMs without providing the UNC path. This can only be used locally, and by default only one path can be set; however, local mount points can be helpful.
|
For instance:
Set-VMHost -VirtualHardDiskPath "\\cluster-name\container-name\folder-name" -VirtualMachinePath "\\cluster-name\container-name\configfolder-name"
However, the easiest way to deploy a VM on another container locally is to create a mount point or network drive so the user does not have to remember the path.
CMD:
net use Z: "\\cluster-name\container-name\folder-name" /Persistent:Yes
PowerShell:
New-PSDrive -Name "Z" -PSProvider FileSystem -Root "\\cluster-name\container-name\folder-name" -Persist
In this way, we can always have easy access to containers on the Hyper-V host locally. The only thing to remember is that this will not work if you try to access the mount point over the network.
|
KB16259
|
Nutanix Files - List quota usage for files written by a user in a given share
|
You receive emails for exceeded quota. This article contains instructions to identify all files written to by a specific user on a share.
|
As a Files administrator, you may have assigned a subfolder within a share for a user to write files. You expect all data to be written by each user to their respective folder; however, the user may also write data to other subfolders of the same share. For example, when the user checks the properties of folder "20042636", the usage reported is 1.02 GB. However, the quota alert reports a much higher number, 28.8 GiB.
This is a system email for exceeded quota.
|
Use File Analytics reports to find the list of files created by a user. Refer to File Analytics Guide: Reports https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_3:ana-analytics-reports-c.html for more information on Reports.
1. Log in to File Analytics as admin.
2. Select Reports.
3. Select "Create a new report".
4. Choose the below attributes to generate a new report. Refer to File Analytics Guide: Creating a Custom Report https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_3:ana-analytics-report-custom-create-t.html for details.
5. Once the report is generated, export it as a CSV file.
6. While viewing the file, apply a filter to list the username. This will display the file locations in other subfolders outside of the user's assigned folder.
7. Navigate to each file path and check the folder usage to identify the total usage at the share level. The result should match the total usage mentioned by the quota alert.
|
KB11950
|
How to find shell idle timeout in CVM and AHV
|
How to find shell idle timeout configured in AHV and Controller VM.
|
This KB provides information on how to find the shell timeout configured in the Nutanix Controller VM and AHV host.
|
The Nutanix shell logs out users automatically after the session is idle for the configured timeout. The timeout is configured in the /etc/profile.d/os-security.sh file. By default, the timeout is set to 15 minutes. To check the current idle timeout value, run the following command from the AHV host or CVM.
NOTE: Nutanix does not support making manual modifications to the timeout value in the file. The settings in the file are enforced through Salt configuration and will be lost on the next Salt run.
For CVM:
nutanix@cvm: $ grep TMOUT /etc/profile.d/os-security.sh
For AHV host
root@ahv: $ grep TMOUT /etc/profile.d/os-security.sh
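With the default 15-minute setting, the command typically returns a line similar to the following (the exact format may vary by AOS/AHV version):
TMOUT=900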
|
KB10838
|
Rebuild of CVM boot raid fails with: mdadm: add new device failed for /dev/sdX# as 2: Invalid argument
| null |
The CVM boot RAID disk may become degraded or fail the NCC health checks:
/health_checks/hardware_checks/disk_checks/boot_raid_check [ FAIL ]
One or more partitions making up the RAID may be missing. Use mdadm to find the status:
nutanix@CVM:~$ sudo mdadm --detail /dev/md[012]
Here the RAID includes a partition on /dev/sda1, the RAID state is "clean, degraded", and the disk containing the second RAID partition is still present and active on the system.
In the "lsblk" output, the second RAID disk may be seen with its partition intact. The other two RAID partitions may be intact and in the "clean" state.
Thus it may be determined appropriate to add the partition back to the RAID. In this example, the missing RAID partition is determined to be /dev/sdb1:
nutanix@cvm$ sudo mdadm --add /dev/md0 /dev/sdb1
This may fail with the message:
mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument
Closer inspection of the disk is required. Use "smartctl":
$ sudo smartctl -x /dev/sdb -T permissive
Here, despite the health status showing "OK", the disk is also showing a number of uncorrected read errors.
Search the dmesg output for "blk_update":
nutanix@cvm$ dmesg | grep blk_update
Here we see a number of critical medium errors. Sector 2049 is the first sector of the first partition
nutanix@CVM$ sudo fdisk -l /dev/sdb
|
In this situation, the disk needs replacing, as regions of the first partition are no longer functioning.
Use the Prism console > Hardware menu. From the Disk view, select the disk (use the serial number of the disk from the smartctl output) and then "Remove Disk", as there may be other active partitions. Wait for the remove disk task to finish before physically removing the disk. Once the replacement disk is inserted, it should be automatically partitioned and added back to the CVM.
|
KB1113
|
HDD, SSD, and HBA troubleshooting
|
This article describes how to troubleshoot a failed disk and identify which component is causing the failure. If you have received an alert for a failed disk, you can troubleshoot a specific node instead of examining an entire cluster.
|
When a drive is experiencing recoverable errors, warnings, or a complete failure, the Stargate service marks the disk as offline. If the disk is detected to be offline 3 times within the hour, it is removed from the cluster automatically, and an alert is generated ( KB-4158 https://portal.nutanix.com/kb/4158 or KB-6287 https://portal.nutanix.com/kb/6287).
If an alert is generated in Prism, the disk must be replaced. Troubleshooting steps do not need to be performed.
NOTE: If a failed disk is encountered in Nutanix Clusters on AWS, once the disk is confirmed to have failed, proceed to condemn the respective node. Condemning the affected node will replace it with a new bare-metal instance of the same type.
|
Once the disk is replaced, an NCC health check should be performed to ensure optimal cluster health.
However, if an alert was not generated in the first place, or further analysis is required, the steps below can be used to troubleshoot further.
Before you begin troubleshooting, verify the type of HBA controller.
Caution: Using the SAS3IRCU command against an LSI 3408 or higher HBA can cause NMI events that could lead to storage unavailability. Confirm the HBA controller before using these commands. To determine what type of HBA is used, look for the controller name located in /etc/nutanix/hardware_config.json on the CVM.
Example of the output when SAS3008 is used. In this case, the command SAS3IRCU is the correct command to use. Note the "led_address": "sas3ircu:0,1:0" line:
"node": {
Example of the output when SAS3400/3800 (or newer) is used. In this case, using SAS3IRCU would be ill-advised. Use the storcli command instead. For information on StorCLI refer to KB-10951 https://portal.nutanix.com/kb/10951. Note "led_address": "storcli:0" line.
"storage_controllers_v2": [
Identify the problematic disks.
Check the Prism Web console for the failed disk. In the Diagram view, you can see red or grey for the missing disk.Check the Prism Web console for disk alerts, or use the following command to check for disks that generate the failure messages.
nutanix@cvm$ ncli alert ls
Check if any nodes are missing mounted disks. The two outputs should match numerically.
Check the disks that are mounted on the CVM (Controller VM).
nutanix@cvm$ allssh "df -h | grep -i stargate-storage | wc -l"
Check the disks that are physical in the CVM.
nutanix@cvm$ allssh "lsscsi | grep -v DVD-ROM | wc -l"
Check if the status of the disks is all Online and indicated as Normal.
nutanix@cvm$ ncli disk ls | egrep -i -E 'Online|Status'
Validate the expected number of disks in the cluster.
nutanix@cvm$ ncli disk ls | grep -i 'Status' | wc -l
The output of the command above should be the sum of the outputs of steps 1c.i and 1c.ii.
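For example, on a four-node cluster where each CVM mounts six Stargate disks, both the mounted-disk count and the physical-disk count should return 24.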
There are instances where the number can be higher or lower than expected. So, it is an important metric that can be compared to the disks listed in step 1b.
Look for extra or missing disks.
nutanix@cvm$ ncli disk ls
Check that all disks are indicated as mounted rw (read-write) and not ro (read-only).
nutanix@cvm$ sudo mount | grep -E 'stargate-storage.*rw'
Identify the problems with the disks/nodes
Orphaned disk ID
This is a disk ID that the systems no longer use but was not properly removed. Symptoms include seeing an extra disk ID listed in the output of ncli disk ls.
To fix the orphaned disk ID:
nutanix@cvm$ ncli disk rm-start id=<diskID> force=true
Ensure that you validate the disk serial number and that the device is not in the system. Also, ensure that all the disks are populating using lsscsi, mount, df -h, and counting the disks for the full-disk population.
Failed disk and/or missing disk
Check if the disk is visible to the controller as it is the device whose bus the disk resides on. The following commands can be used:
lspci - displays the PCI devices seen by the CVM.
NVME device - Non-Volatile memory controller: Intel Corporation PCIe Data Center SSD (rev 01).
SAS3008 controller - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02) - LSI.
SAS2308 controller (Dell) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05).
MegaRaid LSI 3108 (Dell) - RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02).
LSI SAS3108 (UCS) - Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3108 PCI-Express Fusion-MPT SAS-3 (rev 02).
lsiutil - displays the HBA (Host Bus Adapter) cards perspective of the ports and if the ports are in an UP state. If a port is not up, either the device did not respond, or the port or connection to the device is bad. The most likely problem is the device (disk).
nutanix@cvm$ sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil -a 12,0,0 20
lsscsi - lists the SCSI bus devices seen, including any HDD or SSD (except NVME, which does not pass through the SATA controller).
sas3ircu - reports slot position and disk state. It is useful for missing disks or verifying that disks are in the correct slot. (Do NOT run the following command on Lenovo HX hardware as it may lead to HBA lockups and resets.)
nutanix@cvm$ sudo /home/nutanix/cluster/lib/lsi-sas/sas3ircu 0 display
storcli - Reports drive errors similar to lsiutil. Also reports slot position and disk state.
sudo ~/cluster/lib/storcli/storcli64 /call/pall show phyerrorcounters|tail -n+6 # Show phy error counts in concise output
Check the CVM's dmesg for LSI mpt3sas messages. We should typically see one entry for each physical slot. (The below example shows SAS address "0x5000c5007286a3f5" is repeatedly checked due to a bad/failed disk. Note how the other addresses are detected once, and the suspect is repeatedly being polled.)
nutanix@cvm$ sudo dmesg | grep "detecting\: handle"
smartctl - Hades uses smartctl to check disk health; if a disk is sent for smartctl checks 3 times within an hour, it is automatically failed.
nutanix@cvm$ sudo smartctl -x /dev/sdX -T permissive
See KB-8094 https://portal.nutanix.com/kb/8094 for troubleshooting with smartctl.
Check for offline disks using NCC check disk_online_check.
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_online_check
See KB 1536 https://portal.nutanix.com/kb/1536 for further troubleshooting offline disks.
Confirm if disks are seen from the LSI Config Utility. This can be useful for ruling out potential driver or CVM/hypervisor configuration issues that could prevent you from detecting certain drives. The LSI Config Utility gives you an interface directly to the HBA firmware without relying on a software operating system. It can be used to do many of the same things that you can do with "lsiutil": (a) check if a disk is detected in a particular slot, (b) check a disk link speed, (c) activate an LED beacon on a particular drive.
On G6 & G7 platforms, the LSI Config Menu is disabled by default, so you have to enable it in the BIOS before you can use it. On G8 platforms, you must view the attached drives directly through the BIOS menu.
G8: View attached drives directly through the BIOS
1. Enter the BIOS menu by hitting the DEL key at the "Nutanix" splash screen while the node is booting up.
2. Go to the "Advanced" tab and select "SCC-B8SB80-B1 (PCISlot=0x8) Configuration". This is what the menu option is called on 3060-G8. It may be named slightly differently on other models.
3. If the "Device Properties" option is greyed out, select "Refresh Topology".
4. Select "Drive Properties" to see a list of the SATA drives visible to the host.
G6 & G7: How to enable and access LSI HBA OPROM
1. Enter the BIOS menu by hitting the DEL key at the "Nutanix" splash screen while the node is booting up.
2. Go to the "Advanced" tab and find "LSI HBA OPROM". Set this to "Enabled". Then hit "F4" to "Save & Exit" the BIOS menu. This will cause the node to reboot.
Note: After you have obtained the information you need, make sure to go back into the BIOS and DISABLE the OPROM. You can also press F3 to Load Optimized Defaults, which will bring the BIOS back to its original factory settings where the OPROM is disabled.
3. On the next boot-up, look for the screen titled "Avago Technologies MPT SAS3 BIOS" and hit CTRL+C to enter the "SAS Configuration Utility".
4. Once inside the Config Utility, select the HBA card you are interested in. Multi-node models (2U4N, 2U2N) will only have a maximum of one HBA card, while single-node platforms (2U1N) may have as many as three. In multi-HBA systems, each HBA will be serving a different subset of drives on each node.
5. On the next screen, select "SAS Topology" and then "Direct Attach Devices" to see information about the drives associated with that HBA. If the HBA you selected does not detect any drives at all, it will report "No devices to display."
There can be a case where the disk is DOWN in lsiutil, usually after a replacement or an upgrade of the disks. When all the above checks are carried out, and the disk is still not visible, compare the old and new disk "disk caddy or tray". Ensure the type is the same. There can be cases where an incorrect disk type is dispatched, and it does not seat properly in the disk bay hence not being detected by the controller.
Identify the node type or the problematic node
Run ncli host ls and find the matching node ID.
Specific node slot location, node serial, and node type is important information to document in case of recurring issues. It also helps to track the field issues with the HBA's, node locations, and node types.
Identify the failure occurrence.
Check the Stargate log.
The stargate.INFO log for the corresponding period indicates if Stargate saw an issue with a disk and sent it to the Disk Manager (Hades) to be checked or had other errors accessing the disk. Use the disk ID number and serial number to grep for in the Stargate log on the corresponding node the disk is in.
The Hades log contains information about the disks it sees and the health of the disks. It also checks which disk is the metadata or Curator disk and selects one if one did not already exist in the system or was removed/disappeared from the system. Check the Hades log.
Check df -h in /home/nutanix/data/logs/sysstats/df.INFO to see when the disk was last seen as mounted.
Check /home/nutanix/data/logs/sysstats/iostat.INFO to see when the device was last seen.
Check /home/log/messages for errors on the device, specifically using the device name, for example, sda or sdc.
Check dmesg for errors on the controller or device. Run dmesg | less for the current messages in the ring, or look at the logged dmesg output in /var/log.
Identify the reasons for disks failure.
Check when the CVM was last started if the disk's last usage data is not available. Again, reference the Stargate and Hades logs.
Check the Stargate log around the time of disk failure. Stargate sends a disk to Hades to check if it does not respond in a given time and ops time out against that disk. Different errors and versions represent it differently, so always search by disk ID and disk serial.
Check the count of disk failure.
If a drive failed more than once in this slot and the disk was replaced, it would indicate a potential chassis issue at that point.
Check if lsiutil is showing errors.
If lsiutil shows errors evenly on multiple slots, it can indicate a bad controller.
Check the drive FW and that there is not a known FW issue with the disk(s) with errors.
If this is a G8, check that the MCU version is 1.1A or higher and that the backplanes were upgraded as well. Ref: NX-G8: Nutanix Backplane CPLD, Motherboard CPLD, and Multinode EC firmware manual upgrade guide /articles/Knowledge_Base/NX-G8-Nutanix-Backplane-CPLD-Motherboard-CPLD-and-Multinode-EC-firmware-manual-upgrade-guide.
If this is a G8, check that the LSI controller FW is 25.00.00 or higher. There are fixes related to SSD stability when trim is in use that correct an issue causing PHY errors to be seen on drives and instability. It is also important from a troubleshooting standpoint to be on FW 25.00.00 or higher.
Note: The ID: 191 G-Sense_Error_Rate in "smartctl" output for Seagate HDDs can be safely ignored unless there is performance degradation. The G-Sense_Error_Rate value only indicates the HDD adapting to shock or vibration detection. Seagate recommends not trusting these values, as this counter dynamically changes the threshold during runtime.
|
KB16495
|
LCM darksite inventory failure due to TCP sequence number randomization
|
This issue was noted in a customer environment where an external firewall was manipulating TCP sequence numbers as part of a security mechanism called "TCP sequence number randomization", causing the LCM download to fail because the TCP handshake was not getting established between the CVM and the web server.
|
TCP sequence number randomization is a feature available on some firewall appliances (Cisco ASA, etc.). The mechanism is meant as a defense against TCP sequence prediction attacks. This feature allows the security device (firewall) to manipulate traffic with packet inspection technologies.
A firewall configured with this feature can change the sequence numbers of TCP conversations that it intercepts from the conversing hosts on the network, so that an attacker snooping on the connection cannot predict the sequence numbers coming in the next packet. This works because the firewall device maintains an internal record of the mapping between the original and the dynamically generated sequence numbers.
We have seen issues with such manipulation in our security firewall offering "Flow Network Security", as recorded in KB https://portal.nutanix.com/kb/000015422.
However, in this particular instance, the setup was a "dark" site, meaning that the LCM server was hosted behind the DMZ with no access to the public Internet. It was observed that the LCM inventory operations were failing with the following error:
LcmRecoverableError: ['Timed out performing distribute work for task 90a1361b-cc76-4e36-61bf-78642ca5ffb8'] on lcm leader
|
To install and run packet captures on a CVM, refer to https://portal.nutanix.com/kb/02202.
To debug further, a packet trace was started on the CVM for traffic from the dark site web server. Upon tracing the connection between the LCM leader and the dark site server using tcpdump, the following traffic pattern was noted:
17:51:03.423508 xx:xx:xx:f6:91:f7 > yy:yy:yy:fe:0b:c9, ethertype IPv4 (0x0800), length 66: xx.yy.0.105.39562 > xx.yy.0.243.443: Flags [S], seq 780347801, win 29200, options [mss 1460,nop,nop,sackOK,nop,wscale 9], length 0
The above shows the CVM (IP .105) initiating a connection with the server (IP .243) in the first 3 packets. Past that, in the 4th through 8th packets, the CVM keeps sending data from 1 to 246 bytes without receiving an ACK. The ACK from the server at 17:51:06.424315 is a duplicate ACK with seq 3128895421, the same as in the second packet, meaning the server never received the first SYN,ACK packet from the client. Since we could not gather a pcap from the dark site server, we cannot determine the full picture here, but upon checking with the administrator, we found the TCP sequence randomization setting enabled on the firewall appliance (Cisco ASA Series Firewall); disabling it resolved the connection establishment issue.
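For reference, a capture like the one above can be gathered from the CVM with tcpdump; an illustrative sketch where the interface, server IP, and port are placeholders for your environment:
nutanix@cvm$ sudo tcpdump -i eth0 host <darksite_server_ip> and port 443 -w /home/nutanix/tmp/lcm_darksite.pcap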
|
KB1088
|
How to troubleshoot Network Issues on ESXi in a Nutanix Block
|
This article reviews troubleshooting techniques on the physical network layer of ESXi.
|
In vSphere, when a cluster is configured with Distributed Virtual Switches, health checks can be enabled that identify misconfiguration of MTU, VLAN, and Teaming and Failover: https://kb.vmware.com/s/article/2032878
Below are some of the things that can be checked when troubleshooting Network issues on an ESXi cluster configured with Standard Virtual Switches:
Check for the link and negotiation status:
~# esxcfg-nics -l
~# ethtool vmnic2
Note: In some ESXi hosts, ethtool may fail with error "Can not get control fd: No such file or directory." Use esxcli network nic get -n <vmnic#> instead to display the Network Device information. Example:
esxcli network nic get -n vmnic3
Check for link up/down/flapping. Log into the respective host and check the log file /var/log/vmkernel.log:
~# grep vmnic2 /var/log/vmkernel.log
Force re-negotiation by shutting down and bringing up a port:
~# esxcli network nic down -n vmnic2
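After the port is down, bring it back up to complete the re-negotiation (same vmnic assumed):
~# esxcli network nic up -n vmnic2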
Check VLAN and MTU settings. Use CDP to find the VLAN and network interface it is connected to:
~# vim-cmd hostsvc/net/query_networkhint --pnic-name=vmnic0 | egrep "location|mgmtAddr|softwareVersion|systemName|hardware|vlan|portId|port|ipSubnet"
Display the VLANs each port group is connected to:
~# esxcfg-vswitch -l
Check for MTU/duplex issues.
~# esxcfg-nics -l
~# esxcfg-vmknic -l
Check the interface for network errors. (The NIC interface is vmnic2 in this example. To see the list of NICs use the following command: esxcfg-nics -l):
~# ethtool -S vmnic2
Note: For Mellanox, use:
[root@esxi]# esxcli network nic stats get -n <vmnic> | egrep "Total receive errors|Receive CRC errors|Receive missed errors"
Check if the vmkernel network interface (vmk1) for vSwitchNutanix is enabled:
~# esxcfg-vmknic -l
Use the following command to enable the vmk1 interface:
~# esxcfg-vmknic --enable vmk-svm-iscsi-pg
|
Resolve physical layer issues:
Move it to a different switch port.
Change the cable.
Replace the NIC.
Resolve VLAN issues. Check the VLAN tagging in the management port group. It should match the appropriate VLAN, or if it is a native VLAN, remove the tagging.
Resolve packet drop and network latency issues:
Check the CVM /home/nutanix/data/logs/sysstats/ping_*.INFO for "unreachable" errors by running the following command:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_*.INFO | egrep -i 'TIMESTAMP|unreach'
Check the CVM for high network latency issues:
nutanix@cvm$ grep -B 6 '[8-9][.]' ~/data/logs/sysstats/ping_hosts.INFO |grep -v ': [0-7][.]'
Print any latency beyond 30ms:
nutanix@cvm$ cat ~/data/logs/sysstats/ping_gateway.INFO | egrep -v "IP : time" | awk '/^#TIMESTAMP/ || $3>30.00 || $3=unreachable' | egrep -B1 " ms|unreachable" | egrep -v "\-\-"
|
KB12469
|
LCM SPP firmware upgrade failing with Chif driver not found error
|
ilorest commands will fail on ESXi if the Chif driver is not present.
|
An SPP firmware upgrade through LCM can fail with a 'Chif driver not found' error. This is due to the ilorest utility being unable to find the Chif driver on the ESXi node. The lcm_ops.out log signature will look like below:
[2021-12-13 21:07:02.918082] Starting HPDX Inventory ...
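To check whether an iLO channel interface (Chif) driver VIB is present on the ESXi host, the installed VIBs can be listed; an illustrative check, since the exact VIB name varies by SPP/image version:
[root@esxi]# esxcli software vib list | grep -i ilo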
|
Some HPE servers require the use of the HPE customized image for a successful installation. The drivers for the new network and storage controllers in the servers are integrated into the HPE customized image and are not part of the generic ESXi image distributed by VMware. The HPE-customized ESXi ISO images need to be used, and they can be downloaded from the HPE website.
Nutanix nodes use an allowlist of hypervisor ISOs based on the file hash, and this allowlist includes the custom HPE ESXi ISO. You can view the hypervisor allowlist under the Foundation downloads page https://portal.nutanix.com/page/downloads?product=foundation on the Nutanix Support Portal. Refer to the Field Installation Guide https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-HPE:Field-Installation-Guide-HPE for step-by-step instructions for installing ESXi on Nutanix.
|
KB9963
|
NCC Health Check: host_boot_disk_uvm_check
|
The NCC health check host_boot_disk_uvm_check determines if any guest VM is installed on the host boot disk.
|
The NCC health check host_boot_disk_uvm_check determines if any guest VM is installed on the host boot disk. This check is available from NCC version 4.0.0 and above.
Running NCC check
Run the NCC check as part of the complete NCC Health Checks.
nutanix@cvm$ ncc health_checks run_all
Or run the host_boot_disk_uvm_check check individually.
nutanix@cvm$ ncc health_checks hardware_checks disk_checks host_boot_disk_uvm_check
This check can also be run from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run once a day, by default.
Sample output
For Status: PASS
Running : health_checks hardware_checks disk_checks host_boot_disk_uvm_check
For Status: WARN
Scenario 1:
Running : health_checks hardware_checks disk_checks host_boot_disk_uvm_check
Scenario 2: ESX cluster with vCLS VMs
NCC alert:
Output messaging
This hardware-related check executes on the below hardware: Nutanix NX, Dell XC, Lenovo HX.

Description | Checks that no guest VM is installed on Host Boot Disk.
Causes of failure | A guest VM is installed on Host Boot Disk or it is incorrectly configured. SATA DOM has failed.
Resolutions | Remove guest VMs from host Boot Disk or reconfigure guest VMs on the host machine.
Impact | Degradation of the Host Boot Disk will be accelerated, leading to unavailability of the node.
|
Scenario 1:
Remove guest VMs from the host boot disk or reconfigure guest VMs on the host machine.
On a Hyper-V cluster, run "Get-VM" to list the VMs. Upgrade NCC to 4.3.0 if host_boot_disk_uvm_check returns the error "Cannot get vm information":
allssh 'winsh "Get-VM | select Name"'
Scenario 2:
Log in as [email protected] or with Admin privileges. Click on your ESXi cluster object, then click the "Configure" tab and select "Datastores" under "vSphere Cluster Services". You will see "VCLS Allowed"; click "ADD" to select the datastores to which these vCLS VMs should be provisioned. Select a shared datastore from the list.
|
KB16406
|
Portable Foundation bare-metal imaging fails for platforms requiring Tartarus
|
Portable Foundation fails when Tartarus finds no valid platforms
|
Bare-metal imaging via Portable Foundation is supported only on NX, Dell, and HPE hardware. Portable Foundation will fail for models of these hardware platforms that require the Tartarus library for imaging. Examples of these platforms include Dell 16G and HPE Gen 11 hardware. The key signature for this issue is in the Foundation debug.log.
Windows location: C:\Program Files (x86)\Nutanix\Portable Foundation\log\debug.log
MacOS location: /Application/foundation.app/Contents/Resources/log/debug.log
Primary symptom:
After Tartarus initialization is complete, a DEBUG message advising "Failed to load all plugins", followed by multiple Unable to satisfy required capability 'LOCAL_SHELL' messages. "Tartarus platforms" will note no available platforms. Example:
2024-02-09 09:01:46,179Z INFO Tartarus initialization complete
Secondary symptoms:
These platform-specific symptoms may occur when Portable Foundation encounters the Tartarus issue documented above. The secondary symptoms can also occur for reasons outside the scope of this KB but are shared because the primary symptom may cause them to manifest.
Dell 16G platform (which uses iDRAC9) - the Foundation debug.log will show Dell 16G is reported by iDRAC:
2024-02-23 04:34:37,802Z Thread-26 detect_node_type.detect_device_type:255 INFO: Fallback to legacy bmc detection
But then advises that iDRAC7 is detected:
2024-02-23 04:34:40,938Z Thread-26 imaging_step_type_detection.detect_node_type_via_ipmi:70 INFO: Detected class idrac7 for node with IPMI IP x.x.x.10
And an error about USERSPACE_NFS feature:
2024-02-23 03:07:00,188Z ERROR Exception in running <ImagingStepInitIPMI(<NodeConfig(x.x.x.10) @9850>) @96a0>
HPe Gen 11 platform - Foundation debug.log will report iLO 6 detected:
2024-02-09 09:01:49,709Z INFO iLO version : ilo6, FirmwareVersion : iLO 6 v1.56, BIOS Version : U54 v1.44 (07/31/2023)
Followed by errors advising iLO 6 is not supported with legacy BMC:
2024-02-09 09:02:03,729Z ERROR Exception during clean up remote boot
|
This issue has been resolved in Foundation 5.6.1. Please use Foundation version 5.6.1 or later for successful node imaging.
|
KB3584
|
Nutanix VM Mobility Setup Fails with the Error Status: 1603 | Nutanix Guest Tools fails error: 'Invalid Function'
|
Nutanix VM Mobility setup fails with the 1603 error status, or Nutanix Guest Tools (NGT) fails to install with the error message "Invalid Function".
|
Nutanix VM Mobility
You can observe the following error messages, events, and log entries if the Nutanix VM mobility setup fails with the 1603 error status.
The Nutanix VM Mobility Setup screen displays the following error message.
Nutanix VM Mobility Setup Wizard ended prematurely.
Event Viewer contains the following events.
Event 11708, MsiInstaller, Product: Nutanix VM Mobility -- Installation failed.
The MSI log file contains log entries similar to the following:
CheckSha2Support: Entering CheckSha2Support in C:\Windows\Installer\MSICC90.tmp, version 0.0.0.0
Usually, the MSI log file can be found in the location:
%USERPROFILE%\AppData\Local\Temp
For example:
C:\Users\Administrator\AppData\Local\Temp\1\MSIfbd27.LOG.
Nutanix Guest Tools Fails with "Invalid Function"
Verify if there is an issue with PowerShell profile
Open a command prompt on the Windows VM.Run the following command:
> powershell get-date
A single-line output similar to the following is displayed.
"Thursday, February 9, 2017 9:55:16 PM"
If the output is more than one line (as in the example below), proceed to the Solution section of KB 4147 https://portal.nutanix.com/kb/4147. Otherwise, proceed to the next step of verifying the issue with the WMI repository.
"Major Minor Build Revision"
Verify the Issue with the WMI repository
If the WMI service is running but the repository is corrupt, the NGT installation fails and the following message is displayed:
Refer to the installation error message to identify the component for which the installation failed. Review the log files for the component for error messages. Nutanix_Guest_Tools_<date_time_stamp> is a generic informational log file, while the rest of the files refer to the individual components. In the example above, the installation failed for NutanixGuestAgentPackage:
Logfile output:
CopyDir: Command: cmd.exe /c xcopy "Z:\\..\..\config" "C:\Program Files\Nutanix\config\"/E /C /O /Y /H.
Copy the command in the quote marks from the log and run it from an elevated (administrative privilege) PowerShell prompt:
"C:\Program Files\Nutanix\Python27\python.exe" "C:\Program Files\Nutanix\python\bin\guest_agent_service_wrapper.py" "--startup=Auto" "install"
Running the command provides you with a stack trace error message:
When attempting to query the IdentifyingNumber of the Computer at a PowerShell prompt, an error is displayed notifying that an Invalid Class is being used within the WMI namespace:
Connect to the namespace using the wbemtest utility. To do this operation, go to Run or Search and type wbemtest. In the Windows Management Instrumentation Tester window, click Connect > Connect.
Click Open Class.
Access the same Class Object that you called on in the earlier command (Win32_ComputerSystemProduct).
Note: The class object is case-sensitive.
If you receive an error when you attempt to open the Class Object, it is likely the WMI repository is corrupt. Verify by opening the wmimgmt.msc console. Go to Run or Search utility and type wmimgmt.msc:
Right-click WMI Control (Local) in the left pane and select Properties. If you see the following message on the General tab, the WMI repository is corrupt:
|
Nutanix VM Mobility
The error message in the MSI log file indicates that the installer failed to connect to a WMI namespace because a required service has not started. Ensure that the Windows Management Instrumentation service of the VM is started. If the service is not started, start the service and retry the installation.
Nutanix Guest Tools (NGT)
Solution 1
Create a new VM and test the NGT installation.
Solution 2
Salvage and reset the WMI repository by using the following steps.
Disable and stop the WMI service from an elevated command prompt (note that there is a space between "=" and "disabled"):
sc config winmgmt start= disabled
Run the following commands:
winmgmt /salvagerepository %windir%\System32\wbem
Re-enable the WMI service (note that there is a space between "=" and "auto"):
sc config winmgmt start= auto
Reboot the VM.
Retry the NGT installation.
If the problem persists, rebuild the repository.
Disable and stop the WMI service from an elevated CMD prompt (note that there is a space between "=" and "disabled"):
sc config winmgmt start= disabled
Rename the repository folder to repository.old:
Path: %windir%\System32\wbem\repository
Re-enable the WMI service (note that there is a space between "=" and "auto"):
sc config winmgmt start= auto
Reboot the VM and re-attempt the installation of NGT.
If the problem persists, rebuild the VM and perform the installation as a local administrator; re-attempt the installation before joining a domain.
Solution 3
Download the Microsoft Program Install and Uninstall troubleshooter https://support.microsoft.com/en-us/topic/fix-problems-that-block-programs-from-being-installed-or-removed-cca7d1b6-65a9-3d98-426b-e9f927e1eb4d, clear the previous installation, then retry the installation.
|
KB4978
|
SSL Certificate Upload Troubleshooting - Replacing self-signed certificates with CA-generated certificates in Prism
|
This article describes how to replace self-signed certificates with CA-generated certificates in Prism and how to troubleshoot them in case of issues.
|
To replace the SSL certificate, you need the following three files:
Private Key - Generated using RSA-2048 key type and signed using SHA-256 hash. Nutanix also supports EC DSA 256-bit and EC DSA 384-bit. However, RSA-2048 is the most commonly used key type.
Public Certificate - Issued by a Certificate Authority (CA). Nutanix supports X.509 certificates in Base64 encoded PEM format.
CA Certificate/Chain - The certificate of the CA that issued the public certificate above. If the issuing CA is an intermediate CA, you also need the root CA certificate. If there are multiple intermediate CAs, you need the certificates for all the intermediate CAs and the root CA. Refer to the Security Guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:wc-recommended-key-combinations-prism-r.html for the list of recommended key configurations.
Below is a sample screenshot of the SSL Certificate dialogue box in Prism.
Note: The CA certificate upload will fail if the certificate includes a SHA1 signature in addition to SHA256 (the supported algorithm). This issue has been resolved in AOS 6.6 (and above) and AOS 6.5.2 (and above). Upgrade AOS to a fixed version to resolve such issues. Use the steps mentioned in the Solution section for all other cases.
|
Generate the private key and Certificate Signing Request (CSR) using the openssl command below from a CVM (Controller VM) or a Linux system.
nutanix@cvm$ openssl req -out server.csr -new -newkey rsa:2048 \
Replace the values below with your company information:
Notes:
Alternatively, you can use the following command and follow the workflow to enter those values manually.
nutanix@cvm$ openssl req -out server.csr -new -newkey rsa:2048 -nodes -sha256 -keyout server.key
The command above will generate the following files:
Example output:
nutanix@cvm$ openssl req -out server.csr -new -newkey rsa:2048 -nodes -sha256 -subj "/C=<Country>/ST=<State>/L=<Location>/O=<Organization>/OU=<Organization Unit>/CN=<Common Name>" -keyout server.key
Example server.key output file:
-----BEGIN RSA PRIVATE KEY-----
Example server.csr output file:
-----BEGIN CERTIFICATE REQUEST-----
C = Country
ST = State
L = Location
O = Organization
OU = Organization Unit
CN = Common Name
SAN.1 = Subject Alternative Name 1 (For example: example.com)
SAN.2 = Subject Alternative Name 2 (For example: *.example.com)
You can remove one of the DNS:<SAN> sections.
At least one Subject Alternative Name has to be specified according to RFC-2818 https://tools.ietf.org/html/rfc2818.
server.key - Private Key
server.csr - Certificate Signing Request
Note: Subject Alternative Names should include all domain names and IPs that are covered under the cert. There should be a SAN entry for each CVM IP and the cluster VIP. If not, the cert will only be valid for the IP of the machine on which the CSR was created.
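A minimal sketch of generating the CSR with SANs in a single command, assuming OpenSSL 1.1.1 or later for the -addext option (all subject values and names below are placeholders):
nutanix@cvm$ openssl req -out server.csr -new -newkey rsa:2048 -nodes -sha256 -keyout server.key -subj "/C=US/ST=CA/L=SanJose/O=Example/OU=IT/CN=cluster.example.com" -addext "subjectAltName=DNS:cluster.example.com,DNS:*.example.com,IP:10.0.0.10"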
Send the CSR (server.csr) to the CA and receive the following files:
The public certificate is validated by the issuing CA and if the issuing CA is intermediate, the issuing CA certificate is validated by the root CA. So there is a chain of certificates validated in that fashion.
A CA-signed public certificate for the website (for example, myPublicCert.cer)
The CA's public certificate (for example, intermediateCAcert.crt for an intermediate CA)
A root CA certificate, in case the CA is intermediate (for example, rootCAcert.crt)
Copy the certificate files received from STEP 2 to the same location on the CVM or the Linux system where the files generated from step 1 (server.key and server.csr) are located.
Since there is only one field in Prism to upload the CA Certificate/Chain, you need to merge the intermediate and root CA certificates into one file using the command below. Make sure that the chain file does not contain the leaf (end-entity) certificate and that it follows the correct order of validation, so the highest authority, the root CA, is last.
nutanix@cvm$ cat intermediateCAcert.crt rootCAcert.crt > ca_chain_certs.crt
The example above will merge the files intermediateCAcert.crt and rootCAcert.crt into a new file called ca_chain_certs.crt.
Example ca_chain_certs.crt output file:
-----BEGIN CERTIFICATE-----
4. Verify that the generated certificate is OK:
nutanix@cvm$ openssl verify -CAfile ca_chain_certs.crt myPublicCert.cer
5. Ensure that the signature algorithm used by the CA Certificate/Chain uses SHA-256 as the signature algorithm. You can use the following command to verify that the signature algorithm used is SHA-256:
nutanix@cvm$ openssl crl2pkcs7 -nocrl -certfile ca_chain_certs.crt | openssl pkcs7 -print_certs -noout -text | grep -Ew '(Subject|Issuer|Signature Algorithm):' | grep -C1 Issuer
In case the signature algorithm being used by the CA Certificate/Chain is not SHA-256, you will encounter the following error message upon uploading the certificates and the keys in the next step:
Unsupported signature algorithm detected on one or more certificates in CA chain. Please refer to the FIPS 140-2 documentation for more information
Once the output shows OK in step 4, and the signature algorithm is verified to be SHA-256 in Step 5, copy the below files to your local desktop/laptop from where you can upload the files through Prism and it should go through without any errors. Refer to Security Guide - Importing an SSL Certificate https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide%3Amul-security-ssl-certificate-pc-t.html for instructions.
server.key (generated in step 1)myPublicCert.cer (received in step 2)ca_chain_certs.crt (created in step 3)
Troubleshooting
Verify the certificate is X.509 format:
nutanix@cvm$ openssl x509 -in <cert.crt> -text -noout
Verify key is in RSA-2048 and SHA-256 signed:
nutanix@cvm$ openssl rsa -in <server.key> -check
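To confirm that the private key actually matches the public certificate (a common cause of upload failures), compare the modulus hashes of both; the two outputs should be identical:

nutanix@cvm$ openssl x509 -noout -modulus -in myPublicCert.cer | openssl md5
nutanix@cvm$ openssl rsa -noout -modulus -in server.key | openssl md5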
Verify keys and certificates are in PEM encoding ASCII format:
nutanix@cvm$ openssl x509 -in <cert.crt> -inform pem -text -noout
Check if the certificate is DER encoded:
nutanix@cvm$ openssl x509 -in <cert.crt> -inform der -text -noout
Notes
Ensure that there are no extra white spaces in the body of the certificate. It should begin with "-----------BEGIN CERTIFICATE--------" and end with "-----------END CERTIFICATE-----------" as suggested in KB 4573 https://portal.nutanix.com/kb/4573.If the certificate is DER encoded, convert the certificate from DER to PEM encoding ASCII format:
nutanix@cvm$ openssl x509 -in <certDER.crt> -inform der -outform pem -out <cert.crt>
Check if the issuing CA is intermediate or root. If the CA is intermediate, make sure you have a root CA certificate and follow step 3 above to create the chain certificate file. Verify the chain certificate file only has the root and intermediate certificates by opening it with a text editor. Ensure the public or private certificates are not in the chain, as this would cause the upload to Prism to fail even though the validation below passes. If, for example, the public certificate is in the chain file, you can remove it using a regular text editor. Ensure no unnecessary space characters are left at the bottom of the file. Validate the certificate issued by the CA:
nutanix@cvm$ openssl verify -CAfile <ca_chain_certs.crt> <myPublicCert.cer>
If the Nutanix Cluster is running AOS 5.18 or higher:
After uploading the new CA Certificate on Prism successfully and trying to log in to the Prism Web UI, if it is still pointing to the old SSL Certificate or the SSL Certificate is Invalid and not allowing a secure browser connection, try the following:
Restart the "ikat_control_plane" and "ikat_proxy" services on the cluster one at a time.
nutanix@cvm$ allssh "source /etc/profile; genesis stop ikat_proxy ikat_control_plane && cluster start; sleep 60"
Log in to the Prism Web UI again to confirm they are loading with a secure connection.
|
KB13239
|
vCenter incomplete Maintenance Mode tasks prevent VM migration during LCM upgrades
|
Performing any LCM upgrade that requires a host reboot on ESXi clusters managed by vCenter running version 7.0.3 builds can result in "DRS" or "manually" VM migration failures.
|
Performing any LCM upgrade that requires a host reboot on ESXi clusters managed by vCenter running version 7.0.3 builds can result in "DRS" or "manually" VM migration failures.
Verification:
LCM orchestrates the rolling reboot of the host as expected.
After the ESXi host has rebooted, it will exit maintenance mode, but vCenter DRS will not move VMs back to the host.
The following command results indicate multiple hosts running no user VMs:
nutanix@cvm:~$ hostssh "vm-support -V | grep Running | wc -l"
Multiple entries of incomplete Enter Maintenance Mode tasks will be shown in the vCenter Server Management Interface under "Recent Tasks":
Attempts to manually migrate VMs to the host will fail with the below errors:
You may notice hosts with high CPU and Memory utilization due to the unbalanced load of VMs distributed across the cluster:
ESXi host logs show:
ESXi host reboot event triggered (vmksummary.log)
bootstop[X]: Host is rebooting
ESXi host reboot event triggered by user "USER\DOMAIN" (hostd.log)
hostd.log:X info hostd[X] [Originator@6876 sub=Vimsvc.ha-eventmgr opID=XXX user=vpxuser:USER\DOMAIN Event XXXXX : User logged event: UserAction reboot: Initiated by VMware VirtualCenter
ESXi host entering maintenance mode task on the ESXi host (vobd.log)
[GenericCorrelator] 150988830us: [vob.user.maintenancemode.entering] The host has begun entering maintenance mode
Note: This issue is applicable to all Hardware platforms.
|
Nutanix Engineering has identified that running specific combinations of ESXi and vCenter may cause a condition where the calls sent by LCM reboot the ESXi host before the completion of the "enter maintenance mode" task in vCenter. After the host reboot, this incomplete task will prevent manual or DRS-initiated VM migrations to the affected host(s). In vCenter versions before 7.0.3, the "enter maintenance mode" task is marked as "failed", and after the host is rebooted, VM migrations are possible. This issue is resolved in LCM-2.5. Please upgrade to LCM-2.5 or higher version - Release Notes | Life Cycle Manager Version 2.5 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-LCM:top-Release-Notes-LCM-v2_5.html. If you are using LCM for the upgrade at a dark site or a location without Internet access, please upgrade to the latest LCM build (LCM-2.5 or later) using the Life Cycle Manager Dark Site Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_5:Life-Cycle-Manager-Dark-Site-Guide-v2_5.
|
KB8462
|
Unable to start cluster, receiving message "Failed to reach a node where Genesis is up. Retrying..."
|
When issuing "cluster start" from CVM command line, the cluster fails to start and returns the message:
WARNING genesis_utils.py:1209 Failed to reach a node where Genesis is up. Retrying...
|
When configuring the Hyper-V host network, for example, modifying the switch names, the cluster may fail to start:
nutanix@CVM:~$ cluster start
When viewing the ~/data/logs/genesis.out log, it shows that Genesis attempts to connect to the hypervisor to get the IP configuration but fails to do so, as there are multiple IP addresses bound to the network adapter. The CVM remains reachable over the network and can ping the other CVMs.
<date> <time> INFO ipv4config.py:855 Discovered network information: hwaddr 00:aa:11:bb:22:cc, address 10.224.149.73, netmask 255.255.255.224, gateway 10.224.149.65, vlan None
|
Review the IP addresses bound to the hypervisor public interface and remove the incorrect address. In this example, at some stage during the switch configuration where the name was changed, an APIPA address was applied to the network interface and remained bound, disrupting the CVM network.
|
KB10856
|
Cluster Maintenance Utility version via cli
|
Identify CMU version using CLI
|
A CLI method to identify the version of the recently added 'Cluster Maintenance Utility' (CMU) visible on the LCM Inventory page.
|
If CMU has not been updated (that is, CMU 1.0), the file /home/nutanix/data/scavenger/scavenger.py will not exist. For example:
:~$ /home/nutanix/data/scavenger/scavenger.py --version
If CMU has been upgraded, execute scavenger.py with the '--version' switch. For example:
nutanix@CVM:~$ /home/nutanix/data/scavenger/scavenger.py --version
Note: CMU is not cluster aware. Each CVM will need to be queried to confirm the installed version.
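Since each CVM must be queried individually, one way to check every node at once is with allssh; a sketch assuming CMU has already been upgraded so the script exists on each CVM:

nutanix@CVM:~$ allssh "/home/nutanix/data/scavenger/scavenger.py --version"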
|
KB8469
|
Alert - A801001 - Maximum VPN BGP route limit reached
|
Investigating the maximum number of routes accepted by the Xi VPN gateway over eBGP.
|
This Nutanix article provides the information required for troubleshooting the alert Maximum VPN BGP route limit reached for a Nutanix cluster.
Alert description
The Maximum VPN BGP route limit reached alert is raised if the On-prem VPN gateway advertises more than the maximum number of routes accepted by the VPN gateway over eBGP.
Sample alert
Block Serial Number: 16SMXXXXXXXX
|
Troubleshooting
Check the prefix of the advertised routes from the on-prem VPN device. The prefix can also be checked from the Xi portal under Xi Gateway Routes (Received).
Resolving the issue
Summarize the advertised routes into a shorter prefix (a larger aggregate) to reduce the number of routes advertised. For example, advertise a single 10.10.0.0/16 summary route instead of many individual /24 subnets.
Collecting additional information
If you need further assistance or the above steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com.
Requesting assistance
If you need assistance from Nutanix Support, comment on the case on the support portal, asking Nutanix Support to contact you. If you need urgent assistance, click the Escalate button in the case, explain the urgency in the comment, and Nutanix Support will contact you within an hour. You can also contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/.
Closing the case
If this article resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email.
|
KB2914
|
Disk SERIAL# is not present in zeus configuration, but data is present. Cleaning of disk is required
|
This KB describes steps to take when an unclean disk is required to be added back to a cluster.
|
WARNING: Improper usage of disk_operator command can incur in data loss. If in doubt, consult with a Senior SRE or a STL.
When the Hades service starts, it performs disk-related health checks. For any disk that is not currently part of the cluster (not in the Zeus configuration), Hades will check if there is any data present on the disk. If pre-existing data is found on the disk, the disk will be blocked from being added into the cluster, and the below message will be logged to hades.out:
Disk <SERIAL#> is not present in zeus configuration, but data is present. Cleaning of disk is required
Reasons for the above scenario could be:
A "replacement" (RMA) disk which is not properly formatted.A disk was improperly removed from the cluster. It is no longer part of the cluster configuration, but data is still present on the disk.A healthy disk was incorrectly removed from the cluster for reasons other than an actual disk failure. For example, Hades may mark a disk offline 3 times if it goes read-only or has some controller related error which is unrelated to the disk itself. In these cases, it may be desirable to just reuse the existing disk after the other problem is resolved.
|
In order to allow the disk to be used in the cluster, first perform the following checks:
1. Confirm this disk is NOT currently part of the cluster. Check the output of ncli disk ls and search for the SERIAL# in the output. It should not be found.
2. Confirm the disk is not currently in the process of being removed from the cluster. Check ncli disk get-remove-status. If the disk is actively being removed, allow this to complete.
3. Using either ncli disk ls or the Prism WebUI, check which CVM owns this disk. The count of disks for this CVM should be one less than expected for the model.
nutanix@CVM:~$ ncli disk ls | grep cvm_ip | wc -l
4. Check the Hades state of the disk using the edit-hades [CVM ID] command.
Note: The CVM ID is not necessary if the command is run on the CVM in question. The CVM ID can be found in the output of the command svmips -d. If you do not intend to make changes, run edit-hades -p instead to only print the hades proto.
If there are lines in the slot_list entry for this disk where Hades previously marked it bad and removed it, they will need to be removed.The example below shows lines which would need to be removed:
slot_list {
  location: 4
  disk_present: true
  disk {
    serial: "9Xxxxx34"
    model: "ST91000640NS"
    current_firmware_version: "SN03"
    target_firmware_version: "SN03"
    under_diagnosis: true                       <--- Remove this line if present
    is_bad: true                                <--- Remove this line if present
    disk_diagnostics: offline_time_stamp_list   <--- Remove this line if present. Could exist up to 3 times.
    test_list:                                  <--- Remove this line if present. Could exist up to 3 times.
    test_name: smartctl-short Started %0        <--- Remove this line if present. Could exist up to 3 times.
    vendor_info: "Not Available"
  }
}
5. Check the Prism WebUI hardware diagram view and select this node. Check if there is a disk present in the affected slot location and if it is pending a Repartition and Add operation. If it is, complete the Repartition and Add from the Prism WebUI.
6. If the Repartition and Add is not pending in Prism and there is no disk showing in the affected slot location, run the following commands on the CVM which owns this disk.
nutanix@cvm:~$ disk_operator clean_disk /dev/sdx
nutanix@cvm:~$ sudo /usr/local/nutanix/bootstrap/bin/hades restart
# After restarting the hades service, check if the Stargate page shows the cleaned disk.
# If not, the Stargate service needs to be restarted. Make sure the data resiliency status is OK first.
nutanix@cvm:~$ cluster start
Replace "/dev/sdx" with the appropriate device name for the affected disk on this CVM. Run the lsscsi command to find the device name.
Note: Instead of the above "clean_disk" command, in certain cases of disk replacement, the Factory ships disks with a file called can_repartition as data on the ext4 partition. This file can be manually deleted by mounting the disk in a temporary area first and then performing the restarts above.
7. After running the commands in step 6, check the hades.out log to confirm we no longer see the cleaning required error for this disk. Check if the disk was automatically added back into the cluster by Hades or if the Repartition and Add operation is now available in the Prism WebUI.
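A quick way to confirm the message is gone is to grep hades.out for it directly; a minimal sketch:

nutanix@cvm:~$ grep -i "cleaning of disk is required" ~/data/logs/hades.out

No output means Hades is no longer flagging the disk.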
|
KB16679
|
Stale Atlas ports are left after failed CCLM migration if Hermes pod is down
|
Stale Atlas ports are left after failed CCLM migration if Hermes pod is down
|
Stale Atlas ports are left after a failed CCLM migration if the Hermes pod is down during the migration. As a result, the IP and MAC cannot be claimed by a new vNIC, and new migration attempts will fail. The CrossClusterMigrateVm task fails with the following error:
Internal Error: 10000\n :Polled task failed: error code = 10000, detail = Internal Error: 10000\n :Polled task failed: error code = 32, detail = [VM:2252e8af-a4ca-4d6c-92aa-b74a0a1c467c] CCLM failed to create VM on secondary. Error: Anduril failed to handle request for VM 2252e8af-a4ca-4d6c-92aa-b74a0a1c467c: error = 10, details = Unable to communicate with network controller: 10. Rolled back primary VM: 32
Sample events from /home/nutanix/data/logs/atlas.out log on the source Prism Central:
2024-01-08 18:45:53,829Z INFO client.py:508 Raised alert: AncServiceUnresponsive
In the "ecli task.list" output we can see that the Atlas PortDelete task fails for a given cleanup task, but the Acropolis PortDelete succeeds:
473c9888-daf3-4c91-a35f-0ae0da0a7c0f 106e7c1f-64d5-4677-b1c8-883c3c3ed1e6 Atlas 20 kPortDelete kFailed 2024-01-08 18:35:11 2024-01-08 18:45:53
The atlas_cli on the target Prism Central VM will show a port for that IP without a VM name:
nutanix@pccvm:~$ atlas_cli port.list
|
Nutanix Engineering is aware of the issue and is working on a fix in a future release.
Workaround
Manually delete the Atlas port on the target Prism Central instance using the following steps.
In the target Prism Central instance, run the following command:
atlas_cli port.list
Identify the Atlas port UUID in the list by its IP address. In the target Prism Central instance, run the following command:
atlas_cli port.delete <port-uuid>
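To confirm the cleanup worked, list the ports again on the target Prism Central and verify the stale entry is gone:

atlas_cli port.list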
|
KB16092
|
OVN pods/MySQL pods empty after AHV/PC upgrade
|
This KB provides steps on how to recover OVN and MySQL pods when they get corrupted during the AHV or PC upgrade process.
|
Customers might report Flow Networking related tasks (like VPC creation, Subnet creation etc.) fail after performing any of the below upgrades:
Prism Central upgrade from 2022.x.x to 2023.3.x
ANC from 2.2.x to 3.x
AHV
Scenario 1
When Prism Central is impacted, an alert will be raised for "Advanced Networking Controller is not Healthy"
ID : 0afc3926-6acd-43ef-b762-c48c598054a3
Tasks fail with an error like below when performing Flow Networking configuration tasks in Prism Central:
400 Client Error: BAD REQUEST for url: https://anc-hermes-service.default.prism-central.cluster.local:4801/v1/virtual_network/fc652416-b361-477e-814d-f21b8bc16f6c - 400
Failed ecli task details:
task.get 4b7a4258-f6d8-42b2-ae39-b54ba5b05020
The below error can also be observed on Prism Central in /home/nutanix/data/logs/atlas.out:
2023-08-21 20:16:08,529Z INFO base_task.py:598 Running task 5b0cae55-ca9e-4a30-a657-3c7c48cf7967(AtlasFlowGatewayUpdate)
MSP cluster health will not indicate any errors:
nutanix@NTNX-10-x-x-x-A-PCVM:~/data/logs$ mspctl cluster health
ANC hermes will not be able to connect to the SQL DB. Errors like below may be seen in /var/log/ctrlog/default/hermes/anc-hermes_Deployment/hermes.log:
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'anc-mysql-service' (timed out)")
The same log file may show continuous ANC hermes restart:
root@anc-hermes-service:/# cat /var/log/hermes/hermes.log.1 | grep 'Starting her'
Hermes pod can be found in CrashLoopBackOff state:
nutanix@PCVM:~$ sudo kubectl get pods
The CSI plugin logs will show that the PVC belonging to anc-ovn-0 was formatted during the upgrade:
nutanix@PCVM:~$ allssh "grep -nir formatted /home/nutanix/data/sys-storage/cmsp*/kubelet/pods/*csi-node*/"
The formatting step should normally only happen for a new container, but here the formatting has occurred against an existing PVC owned by an existing container during the upgrade.
Checking the CSI driver version present on Prism Central shows version 2.4.7 or lower:
nutanix@PCVM:~$ docker images | grep csi
You can verify the ownership of the recently formatted Persistent Volume Claim using kubectl:
nutanix@PCVM:~$ sudo kubectl get pvc
Here we see pvc-1408e1b9-6be8-4605-9dc7-394d838dc9aa is the volume for "anc-ovn-db", so in the logs above we saw the OVN database volume was reformatted.
Scenario 2
After a PC upgrade, Atlas-related tasks fail with the below error. In this particular scenario, the SpanSessionCreate action in the PC UI was failing.
Failed to create traffic mirror test due to operation exceeded maximum retry attempts: Request failed when executing remote operation spanSessionCreate: Atlas is currently not accepting task-based RPCs, due to ongoing recovery. Try later.
The network controller looks healthy
nutanix@NTNX-A-PCVM:~$ atlas_cli network_controller.get 54d28fc8-39a7-4f1c-b918-0582bbc012c7
54d28fc8-39a7-4f1c-b918-0582bbc012c7 {
default_vlan_stack: "kLegacy"
deployment_status: "kDeployed"
logical_timestamp: 0
minimum_ahv_version: "20230302.63"
minimum_nos_version: "5.19"
state: "kHealthy"
target_version {
major: 3
minor: 0
patch: 0
revision: "bc68bbbfb67fd85eabbca68feacf28d148636faa"
}
uuid: "54d28fc8-39a7-4f1c-b918-0582bbc012c7"
version {
major: 3
minor: 0
patch: 0
revision: "bc68bbbfb67fd85eabbca68feacf28d148636faa"
}
}
Pods are healthy, no crashes.
nutanix@NTNX-A-PCVM:~$ sudo kubectl get pods
NAME READY STATUS RESTARTS AGE
anc-hermes-6bd489bf8f-m2tfv 1/1 Running 0 9d
anc-mysql-0 2/2 Running 0 9d
anc-ovn-0 2/2 Running 0 9d
After restarting the Atlas leader, the below errors appear in the Atlas logs on all PCVMs. At some point the errors stop, and Atlas shows no indication of the issue.
2024-06-19 14:33:59,598Z CRITICAL decorators.py:56 Traceback (most recent call last):
File "build/bdist.linux-x86_64/egg/util/misc/decorators.py", line 50, in wrapper
File "build/bdist.linux-x86_64/egg/util/master/master.py", line 125, in _wait_for_leadership
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/master/master.py", line 53, in _acquired_leadership
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/ipfix_exporter/manager.py", line 58, in start
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/ipfix_exporter/manager.py", line 549, in update_flow_tracking_on_controller
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/subnet/manager.py", line 2367, in update_controller
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/controller/anc_controller.py", line 347, in update_subnet
File "/src/main/builds/build-fraser-2023.4-stable-pc-0.2-release-clang-shlib/python-tree/bdist.linux-x86_64/egg/atlas/common/common.py", line 169, in wrapper
AtlasControllerServiceError: 400 Client Error: BAD REQUEST for url: https://anc-hermes-service.default.prism-central.cluster.local:4801/v1/subnet/a5bc38ca-f4a0-4426-b247-9ad0d55693de - 400 ({
"detail": "Unknown virtual_network: 6636e2a9-f0e3-4a27-a980-ae217c5ae9dc",
"status": 400,
"title": "Bad request",
"type": "about:blank"
}
)
The OVN northbound DB will be empty:
nutanix@NTNX-A-PCVM:~$ sudo kubectl exec -it anc-ovn-0 -c anc-ovn ovn-nbctl show
nutanix@NTNX-A-PCVM:~$
|
The issue is caused by a problem in the CSI driver. The issue may not occur every time but can happen during a hypervisor or Prism Central upgrade. The unintended formatting of the volume happens when a check of the filesystem condition on the PVC is run before the PVC has been successfully mounted. As of the latest release, steps for recovery from this condition need to be completed by DevEx or the appropriate engineering resource. Please open an ONCALL and involve the Flow Networking team to fix the corrupt pods. Please collect the below information before engaging DevEx:
MSP cluster information:
mspctl cluster list
mspctl cluster health
mspctl debug run
PC upgrade history:
cat ~/config/upgrade.history
CSI driver version:
docker images | grep csi
PC Version:
ncli cluster info
If this is an NC2 cluster, also collect the FGW HA status.
|
KB2566
|
Alert "Link on NIC vmnic[x] of host [x.x.x.x] is down" being raised if an interface was used previously
|
Alert "Link on NIC vmnic[x] of host [x.x.x.x] is down" being raised if an interface was used previously.
|
The NCC health check nic_link_down_check results in FAIL on the first detection of NICs that were previously used but are now in a down state.
The following alert is seen on the cluster within 24 hours if you have NICs that were previously used but are now in a down state.
Alert:
Link on NIC vmnic[x] of host [x.x.x.x] is down.
NCC failure:
FAIL: One or more NICs is/are down on host x.x.x.x
|
This generally happens when you bring down an interface: the NIC link down alert is displayed and cluster health turns red.
After the NIC goes into a down state, the NIC status stays in the file check_cvm_health_job_state.json for 24 hours.
From NCC 4.3.0 onwards, nic_link_down_check https://portal.nutanix.com/kb/2480 detects that the NIC status changed to a down state and raises the alert. If the NIC stays down and never comes back up within 24 hours, the NIC is removed from the file check_cvm_health_job_state.json, and no further alert is generated nor failure detected by nic_link_down_check https://portal.nutanix.com/kb/2480.
If the NCC version is less than 4.3.0, even if the alert is manually "resolved" from Prism > Alerts while the NIC is down, nic_link_down_check https://portal.nutanix.com/kb/2480 will detect it and bring the alert back within 24 hours.
Assuming you are seeing the mentioned alert/failure on a NIC that is disconnected, not being used, or both, you can resolve the alert depending on your NCC version.
On NCC 4.3.0 or newer, you can manually resolve the alert from Prism > Alerts without waiting for 24 hours.
If the NCC version is less than 4.3.0, upgrade NCC to the latest version. If you cannot upgrade NCC for some reason, only then proceed with the procedure below.
You can leave it for 24 hours to auto-resolve. To resolve the alert immediately without waiting for 24 hours, or if the alert/failure is not gone even after 24 hours, you can delete the problematic check_cvm_health_job_state.json file from all of the CVMs. This is a non-disruptive action, and the file will be recreated automatically. The alert should resolve after the next periodic run of nic_link_down_check https://portal.nutanix.com/kb/2480 following the file deletion.
Manually read the file to confirm the link status with the following command.
nutanix@cvm$ allssh cat /home/nutanix/data/serviceability/check_cvm_health_job_state.json
Example output:
Executing cat /home/nutanix/data/serviceability/check_cvm_health_job_state.json on the cluster
Delete check_cvm_health_job_state.json file from all CVMs (Controller VMs):
nutanix@cvm$ allssh /bin/rm /home/nutanix/data/serviceability/check_cvm_health_job_state.json
|
KB12459
|
CPU Thermal throttle event on G8 Multi-nodes platform with BMC 8.0.3
|
In rare conditions, NX-1065-G8/NX-3060-G8 node may encounter a CPU throttle SEL event assertion for less than 15 seconds immediately after BIOS POST.
|
In rare conditions, NX-1065-G8/NX-3060-G8 node may encounter a CPU throttle SEL event assertion for less than 15 seconds immediately after BIOS POST.
BMC 8.0.3 drops the fan speed to a minimum during POST after an A/C power cycle and does not change the fan speed while POST is running. If the POST time is extended for any reason (> 4 minutes), for example, the user enters the BIOS menu or runs ePPR, a brief CPU thermal throttle SEL event assertion can be observed immediately after POST is complete. The CPU thermal throttle SEL event de-assertion will also be logged a few seconds later. This is because the fan speed is increased based on the CPU temperature only when the BMC takes over control after POST. If there is more than one node in the same block already running (not in POST), the node will not hit the issue, since the fans are shared within the block and the BMC on the other nodes will increase the fan speed as the temperature rises.
Note: The node may be shut down automatically for protection due to the thermal condition with extremely long extended POST time and high environmental temperature.
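To review the assertion/de-assertion pair directly in the SEL, you can query the BMC from a CVM; a sketch assuming standard IPMI credentials:

nutanix@cvm$ ipmitool -I lanplus -H <IPMI IP> -U ADMIN -P <IPMI password> sel list | grep -i throttle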
NCC may also show the ipmi_sel_check alert for the CPU throttle event:
Detailed information for ipmi_sel_check:
|
Nutanix is aware of the issue. BMC 8.0.6 contains the fix. Use LCM to upgrade the BMC to the latest version or follow KB-12447 https://portal.nutanix.com/kb/12447 to upgrade the BMC manually.
|
KB6431
|
Pre-Upgrade Check: test_cvm_reconfig
|
test_cvm_reconfig verifies if CVM reconfiguration is possible during the upgrade.
|
This is a pre-upgrade check that verifies if CVM reconfiguration is possible during the upgrade.
Note: This pre-upgrade check runs on Prism Element (AOS) cluster and Prism Central during upgrades.
|
See below table for the failure message you are seeing in the UI, some further details about the failure message, and the actions to take to resolve the issues.
[
{
"Failure message seen in the UI": "Could not load zeus config proto",
"Details": "Software was unable to load zeus config",
"Action to be taken": "Run NCC health checks to determine if there are any failed checks. If all the checks pass and this pre-upgrade check is still failing, collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Could not obtain host memory map",
"Details": "Software could not obtain the host memory map",
"Action to be taken": "Run NCC health checks to determine if there are any failed checks. If all the checks pass and this pre-upgrade check is still failing, collect NCC log collector bundle and reach out to Nutanix Support."
}
]
|
KB14794
|
Nutanix Kubernetes Engine - How to modify eviction thresholds for kubelet
|
Provides instructions to modify the eviction thresholds for kubelet.
|
This KB provides steps to modify eviction thresholds for kubelet. As of NKE v2.7.0, the following eviction thresholds are supported:
image-gc-soft-threshold (Default: 40%)
image-gc-hard-threshold (Default: 30%)
image-gc-soft-grace-period (Default: 15m)
If the above default thresholds do not work for the customer, you can modify them using the instructions in the Solution section.
|
Step 1: Take a backup copy of karbon_core_config.json file using the following command:
nutanix@NTNX-PCVM:$ allssh 'cp /home/docker/karbon_core/karbon_core_config.json ~/tmp/karbon_core_config.json.bak'
Step 2: Modify the file /home/docker/karbon_core/karbon_core_config.json and add the required gflag under the entry_point section. Below is an example of the modified entry_point section. If Prism Central is scaled-out, perform the operation on all the PCVMs:
{
Step 3: Restart the karbon services on all the PCVMs:
nutanix@NTNX-PCVM:$ allssh 'genesis stop karbon_ui karbon_core' ; cluster start
The changes should be reflected in the kubelet on the next scrub operation. Wait for at least 15 minutes to confirm kubelet has the new thresholds applied. Note: These changes will not persist through upgrades of the NKE services (karbon_core and karbon_ui) on Prism Central. Therefore, these steps must be performed again after each upgrade of NKE.
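To confirm the gflag landed in the configuration on every PCVM before waiting for the scrub, a simple grep of the file you edited works; a sketch (the flag name shown is one of the thresholds listed above):

nutanix@NTNX-PCVM:$ allssh 'grep image-gc /home/docker/karbon_core/karbon_core_config.json'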
|
KB9876
|
DELL : 1-click Hypervisor Upgrade from ESXi 6.x to ESXi 7.0 fails with "Upgrade bundle is not compatible with current VIBs installed"
|
1-click Hypervisor Upgrade from ESXi 6.x to ESXi 7.0 fails with "Upgrade bundle is not compatible with current VIBs installed" on Dell platforms
|
1-click hypervisor upgrade from ESX 6.x to ESX 7.0 fails with the following error on Dell platforms.
~/data/logs/host_preupgrade.out log in CVM (Controller VM) shows the following error message.
2020-08-03 17:12:21 INFO esx_upgrade_helper.py:123 File /home/nutanix/software_downloads/hypervisor/7.0.0-16324942 copied to host x.x.x.x dest /scratch/image.zip
The upgrade fails because ESXi 7.0 requires a minimum of PTAgent 2.1.0-70 and iSM 3.5.1-1952 VIBs for compatibility. The existing PTAgent and iSM VIBs are incompatible. These binaries are not backward compatible and are for ESXi 7.0 and later.
Prism 1-click upgrade from ESX 6.x to 7.0 may fail depending on the versions of both PTAgent and iSM installed on the nodes.
The following table outlines the scenarios involved:
When using the PTAgent 1.9.0 and iSM 3.4.0 combination or above, the upgrade might succeed even without the removal of either or both VIBs; however, neither will be functional. Manual removal of both VIBs is mandatory prior to upgrading ESXi. The removal is required because the binaries needed for ESXi 6.7 are 32-bit, while the ones needed by ESXi 7.0 are 64-bit. [
{
"PTAgent / iSM": "1.9.0/3.4 (Or Lower)",
"Pre-upgrade/Upgrade": "Pass",
"Behavior": "The upgrade will succeed however PTA will be non-functional. Both iSM and PTA will require manual removal and updating."
},
{
"PTAgent / iSM": "1.9.4/3.4",
"Pre-upgrade/Upgrade": "Not tested",
"Behavior": "Unlikely seen in the field as never included in Foundation"
},
{
"PTAgent / iSM": "1.9.4/3.5.0",
"Pre-upgrade/Upgrade": "Not tested",
"Behavior": "Unlikely seen in the field as never included in Foundation"
},
{
"PTAgent / iSM": "1.9.6/3.5.1",
"Pre-upgrade/Upgrade": "Fail",
"Behavior": "The above failure message will be displayed"
},
{
"PTAgent / iSM": "2.1/3.5.1",
"Pre-upgrade/Upgrade": "Fail",
"Behavior": "The above failure message will be displayed"
}
]
|
Steps 1 to 3 are common for both Dell XC and PowerEdge nodes.
Log in to the CVM using ssh.
Uninstall PTAgent first, then iSM:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib remove -n dellptagent'
Sample output:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib remove -n dellptagent'
Note: There is no need to reboot after uninstalling the VIBs. The reboot will be performed during the hypervisor upgrade process.
After you have removed the VIBs, you should be able to upgrade the hypervisor successfully.
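Before retrying, you can confirm both VIBs are gone on every host; a sketch using hostssh:

nutanix@cvm$ hostssh 'esxcli software vib list | grep -iE "dellptagent|dcism"'

No output for a host means both VIBs have been removed from it.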
Retry upgrading the hypervisor to ESXi 7.0 using the 1-click hypervisor upgrade method https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_18:upg-hypervisor-upgrade-esxi-c.html.
Perform the following steps for Dell XC nodes:
Post ESXi upgrade to 7.0 on the cluster, perform an LCM inventory and verify the LCM version is 2.3.4.
Under the LCM Update section for Software, select the PTAgent and iSM entities and install them on all nodes in the cluster. Refer to the LCM User Guide https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Guide-v2_3:2-lcm-update-t.html.
Once the LCM task is completed, perform an LCM inventory and verify the PTAgent version is 2.1 and the iSM version is 3.5.1.
This concludes the ESXi upgrade to 7.0 on a Dell XC cluster.
Perform the following steps for Dell PowerEdge nodes:
Note: Engage Dell Support for Dell PowerEdge to run through the below steps, since Dell has the distribution rights for the PTAgent/iSM binaries for out-of-band installation. Dell PowerEdge does not support LCM.
Request Dell Support to install PTAgent 2.1-070 and iSM 3.5.1-1952 compatible with ESXi 7.0.
Copy the new VIBs for PTAgent 2.1.0 and iSM 3.5.1-1952 to all hosts under /scratch/downloads. Dell has the binary distribution rights, so Nutanix does not have these binaries.
nutanix@cvm$ for i in $(hostips) ; do scp DEL_bootbank_dellptagent_2.1.0-70.vib root@$i:/scratch/downloads; done
Sample output:
nutanix@cvm$ for i in $(hostips) ; do scp DEL_bootbank_dellptagent_2.1.0-70.vib root@$i:/scratch/downloads; done
Install iSM 3.5.1-1952 VIB first:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib install -v /scratch/downloads/DEL_bootbank_dcism_3.5.1.1952-DEL.700.0.0.14828939.vib'
Sample output:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib install -v /scratch/downloads/DEL_bootbank_dcism_3.5.1.1952-DEL.700.0.0.14828939.vib'
Install PTAgent 2.1.0-70:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib install -v /scratch/downloads/DEL_bootbank_dellptagent_2.1.0-70.vib'
Sample output:
nutanix@cvm$ allssh 'ssh [email protected] esxcli software vib install -v /scratch/downloads/DEL_bootbank_dellptagent_2.1.0-70.vib'
Restart PTAgent:
nutanix@cvm$ hostssh /etc/init.d/DellPTAgent start
Sample output:
nutanix@cvm$ hostssh /etc/init.d/DellPTAgent start
Verify PTAgent Version:
nutanix@cvm$ hostssh /etc/init.d/DellPTAgent version
This concludes the ESX upgrade to 7.0 on a Dell PowerEdge cluster.
|
KB14505
|
Nutanix DRaaS | VPN connection create task might fail if IPSec secret contains special characters
|
When creating a new VPN connection, it might time out due to IPSec password limitations. IPSec password has to be alphanumeric and should not contain special characters.
|
When creating a new VPN connection, the task might time out after 90 seconds with the following error.
On connecting to Xi PC, the following errors are noticed in atlas.out on the atlas service leader:
2023-03-20 18:18:12,242Z WARNING scanner.py:415 VPN Gateway d884ad0a-1404-4e09-8814-32456f703edf not inventoried by LCM
To check further on why the VPN connection create task is timing out, SSH to VPN VM. Steps to connect to the VPN VM can be found in VPN Troubleshooting Guide https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=STK&title=WF%3A+VPN+IPSEC+troubleshooting.
The VPN log location is /var/log/ocx-vpn/vpn.log. The following errors are noticed on the vpn.log file.
2023-03-20 18:17:33,265 - execformat.executor - DEBUG - Executed command "/opt/vyatta/sbin/my_set vpn ipsec site-to-site peer xx.xx.xx.xx authentication mode pre-shared-secret" return code: ('', '', 0)
|
As per the Nutanix Disaster Recovery Guide https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide:ecd-ecdr-draas-pc-c.html, IPSec secret has to be an alphanumeric string and should not contain special characters.
When a customer uses a secret without special characters, the VPN connection create task should succeed without any issues.
This KB was created based on a case where a customer used double quotes in the IPSec secret, causing the issue, but in the field, there might be other scenarios that can cause a VPN connection create task timeout. The vpn.log file on the VPN VM should have more information than atlas.out for issues related to VPN connection create/update/delete. NET-12848 https://jira.nutanix.com/browse/NET-12848 has been filed to improve the task failure error text on Prism Central.
|
KB15136
|
[ LCM ] Host stuck in phoenix with network connectivity during LCM upgrade
|
It has been observed in the field that during an LCM run a host can stay in Phoenix and not reboot back into the hypervisor, even though the mdadm devices are healthy and the host has network connectivity.
|
It has been observed that in various LCM upgrade workflows where the node is rebooted into Phoenix, it gets stuck at the Phoenix prompt and does not reboot back into the hypervisor. Eventually, the LCM task fails, leaving the host in this state. Please ensure that all symptoms match before attaching this KB to a case, as this is different, for example, from the scenario described in LCM upgrade workflows leave the node in an inconsistent Phoenix state /articles/Knowledge_Base/LCM-upgrade-workflows-leave-the-node-in-an-inconsistent-Phoenix-state. During the reboot_to_phoenix workflow, the following can be observed in the node's boot sequence on the IPMI console. The host will be available through the network:
[nutanix@cvm ~] ssh [email protected]
LCM task will eventually time out and fail, noting it could not connect to the Phoenix interface (even if it is pingable and reachable through ssh):
2023-05-17 22:14:46,823 {"leader_ip": "xx.xx.xx.166", "event": "Successfully filtered all available versions", "root_uuid": "238483a1-6dfb-4dd8-5a40-71e859a55df2"}
|
This issue is pending RCA as noted in ENG-567722 http://jira.nutanix.com/browse/ENG-567722. If this is found in the field, make sure to collect the following outputs, attach them to the case, and re-open the ENG ticket if it is closed. This is critical for Engineering to determine why the bootloader was not changed successfully.
Collect the following outputs from the host in the Phoenix:
$ date
Once the above information is collected, reboot the host back into the hypervisor using reboot_to_host.py. When the host is booted back into the hypervisor and the CVM is taken out of maintenance mode, ensure you collect both LCM logs using the LCM Log Collection Utility /articles/Knowledge_Base/LCM-Log-Collection-Utility and a logbay bundle. This is needed because logbay by itself does not collect all the required information.
|
KB7992
|
Restoration of Stargate FT due to under-replicated Oplog episodes after node failure
|
KB describing delayed Stargate FT rebuild due to Oplog.
|
Stargate component fault tolerance indicates Stargate health status across the cluster. It is an indicator of cluster fault tolerance (FT). Stargate being temporarily unavailable on some node(s) may indicate that the data (extent groups and Oplog) is not replicated to the configured replication factor. After a Curator scan, Stargate health FT is restored by Curator back to the same value as it was earlier, even though the node/Stargate previously down has not come back. When Stargate on a CVM goes down, the Curator service waits for 60 seconds (controlled by the Curator gflag curator_max_allowed_stargate_down_time_secs) before triggering a Curator Selective scan for reason NodeFailure. The vdisks hosted on the Stargate of the CVM that went down are rehosted on the alive Stargates in the cluster. We can observe the below messages in the Curator (master) logs, which indicate on which specific Stargate a vdisk is hosted after the Stargate failure:
I1018 10:02:21.947777 15132 Curator_zeus_helper.cc:143] VDisk 368777615 has moved from host x.y.z.109:2009 to host x.y.z.108:2009
After the Curator scan for reason NodeFailure, the extent groups present on the node that went down are identified as under-replicated, and at the end of the scan, Chronos tasks are delegated to the other Stargates in the cluster to re-replicate the under-replicated egroups.
As part of the scan, the under-replicated Oplog episodes are also identified, and Curator sends RPCs to Stargate to drain/flush the non-fault-tolerant Oplog episodes to the extent store.
Also, as per the current design, Oplog recovery mainly involves scanning the Oplog store's directory hierarchy and making the remote Stargate take ownership of existing episodes. Oplog recovery mainly ensures that vdisk writes can continue without interruption. Oplog recovery does not trigger re-replication of the Oplog episodes to restore the Oplog FT, since the data will be migrated to the extent store anyway. Once the vdisk is re-hosted, the secondary replica of the Oplog episode is made primary, and Curator sends a Flush RPC for the under-replicated episode files to drain them to the extent store. The Stargate health fault tolerance depends on the counters below:
NumExtentGroupsOnDiskNumNascentExtentGroupsOnDiskNumOplogEpisodesOnDisk
The above counters are present for each of the disks on the node on which Stargate goes offline. The Curator index store also maintains a counter to track the under-replicated bytes in the storage pool, which can be seen in the Curator log messages:
I0109 05:06:54.439636 14341 index_store_controller.cc:696] Found 293507072 bytes of under replicated data in the cluster
These counters can be found in the Curator.INFO log on the Curator master or by using the curator_cli functionality.
The general format to look for the counters in Curator (master) logs would be counter name followed by disk id.
In the below example, 275383184 and 275383183 are the disk IDs that belong to the node which is down:
I1018 10:10:58.242373 14896 mapreduce_job.cc:563] NumExtentGroupsOnDisk[275383184] = 516802
The description of the above counters are given below:
NumExtentGroupsOnDisk[Disk_ID]: Number of egroup files present on the disk as per the medusa/metadata state.
NumNascentExtentGroupsOnDisk[Disk_ID]: Number of nascent egroups present on the disk. Whenever an egroup is written onto a disk, the order of metadata updates is: create an extent group id map entry in Cassandra with a tentative update, then create the extent id map, followed by the vdisk block map pointing to the extents. If Stargate crashes after creating the tentative update, there is no vdisk block map entry pointing to the extent group id map, so this egroup becomes inaccessible (nascent state). Sometimes one or more Full scans are required to remove the nascent egroups from the system.
NumOplogEpisodesOnDisk[Disk_ID]: Number of oplog episode files left on the disk to be flushed. (This counter is present from AOS 5.10.7.)
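To pull these counters for the affected disks out of the Curator master logs, a simple grep is enough; a minimal sketch using the counter names above:

nutanix@cvm$ grep -E "NumExtentGroupsOnDisk|NumNascentExtentGroupsOnDisk|NumOplogEpisodesOnDisk" ~/data/logs/curator.INFO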
I1018 20:36:42.083027 15132 Curator_fault_tolerance_info.cc:708] Updated domain 300 with component component_type: kStargateHealth max_faults_tolerated: 1 last_update_secs: 1571423788 tolerance_details_message {message_id: "Based on Stargate health the cluster can tolerate a maximum of 1 rackable unit (block) failure(s)"}
|
Starting with AOS 5.15.1 / 5.17 (ENG-270566), Curator sends the Oplog episode flush request with an urgent bit set for any under-replicated Oplog episodes.
Observe the messages in the Curator logs that indicate Curator is sending drain request RPCs to Stargate to drain up to a certain episode id. In the below example, Curator (slave or master) sends a flush request to Stargate to drain all episodes up to episode id 703024:
I1231 19:06:06.332787 8990 vdisk_oplog_map_task.cc:724] Sending request to flush oplog for vdisk 286451 that has unflushed oplog episodes; svm_id hint: 10; highest episode sequence to flush: 703024
The below entry will be present in Stargate logs, indicating that it has received a flush request till episode id: 703024 with urgent bit set.
I1231 19:06:06.365371 8484 vdisk_controller.cc:3582] vdisk_id=286451 Flush request received - timeout(msecs): -1000 write_empty_checkpoint: 0 is_urgent: 1 episode id: 703024
When an Oplog Flush RPC is sent with the urgent bit set, the lost Oplog FT is restored by draining the episodes to the extent store, which helps restore the resiliency of the cluster faster.
|
KB3469
|
NCC Health Check: sar_stats_threshold_check
|
The NCC health check sar_stats_threshold_check verifies any abnormalities with SAR stats (System Activity Report) retrieved from the Controller VMs on the cluster.
|
The NCC health check sar_stats_threshold_check verifies any abnormalities with SAR stats (System Activity Report) retrieved from the Controller VMs on the cluster.
Load average outputs for the last 5 minutes are alerted if the output value is beyond the threshold limit. The load average is the calculated number of processes staged for service by the operating system.
INFO is reported when the ldavg-5 value from the sar -q command is >= 3 * the number of vCPUs configured for the CVM but less than 100.
FAIL is reported if ldavg-5 exceeds 100.
This check also generates a Prism Warning Alert (until NCC version 3.7.0.1) and a Critical Alert.
Running the NCC Check
Run the NCC check as part of the complete NCC Health Checks:
nutanix@cvm$ ncc health_checks run_all
Or run the check separately:
nutanix@cvm$ ncc health_checks sar_checks sar_stats_threshold_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 1 minute by default.
This check will generate a Critical alert A6516 after 1 failure. It is temporarily disabled in NCC-4.6.0 and above.
Starting with NCC-4.1.0, this check generates a Critical alert A6518 after 15 failures if load_avg remains greater than 100000.
Starting with NCC-4.5.0, this check reports an INFO if load_avg is between 100 and 1000000.
Sample output
For Status: PASS
Running : health_checks sar_checks sar_stats_threshold_check
Note: If this NCC check is run immediately after a cluster is created or right after an AOS upgrade, the check will PASS even though there has not yet been enough SAR data collected to judge whether there is a performance issue on the CVM. This change was made in NCC 4.5 to not confuse the user with the Error message ("ERR: Sar Report is not available") when there is no sign of any problem. If you are monitoring for a CVM performance issue, please wait a few hours to allow the system to collect SAR data and then run NCC again.
For Status: FAIL
Running : health_checks sar_checks sar_stats_threshold_check
For Status: INFO
Running : health_checks sar_checks sar_stats_threshold_check
Output messaging
NOTE: Check 6516 has been disabled in NCC-4.6.0 and above.
Note: This hardware-related check executes on all hardware except Inspur and SYS.
[
{
"Check ID": "Checks whether the system load average is high."
},
{
"Check ID": "The I/O workload on the cluster is high."
},
{
"Check ID": "Redistribute VMs to reduce the load on the affected node.\t\t\tTune the applications to reduce CPU demand.\t\t\tReduce the number of VMs on the host."
},
{
"Check ID": "High I/O latencies may be experienced by some workloads."
},
{
"Check ID": "6516"
},
{
"Check ID": "Checks whether the system load average is critically high."
},
{
"Check ID": "Critically high load on the system."
},
{
"Check ID": "Contact Nutanix Support for assistance."
},
{
"Check ID": "Low performance may be experienced due to cluster-wide I/O latency."
},
{
"Check ID": "A6516"
},
{
"Check ID": "CPU Average Load Critically High on Controller VM."
},
{
"Check ID": "CPU Average Load Critically High on Controller VM ip_address"
},
{
"Check ID": "Average CPU load on Controller VM ip_address is critically high (above critical average load threshold value critical_high_value)"
},
{
"Check ID": "6518"
},
{
"Check ID": "Checks whether the system load average is critically high."
},
{
"Check ID": "Critically high load on the system for quite a long time"
},
{
"Check ID": "Reboot the CVM experiencing a high load average."
},
{
"Check ID": "Low performance may be experienced due to cluster-wide I/O latency."
},
{
"Check ID": "A6518"
},
{
"Check ID": "CPU Average Load critically high on CVM."
},
{
"Check ID": "CPU Average Load critically high for a long time on CVM cvm_ip"
},
{
"Check ID": "alert_msg"
}
]
|
Identification
If there are alerts in Prism due to the warning threshold (<NCC 3.7.0.1), it is recommended to upgrade NCC to the latest version, which includes improved threshold and alert logic. Resolve the alert and re-run the health check to verify PASS after the NCC upgrade. If the issue persists, continue with Step 2.
Determine the CVM for which the alert was generated from the NCC check results.
In the example below, the 5-minute load average was 10.42. The load average value is captured every 10 seconds.
Run sar -q to see the previous data from midnight until the current time and look for values greater than 10 under ldavg-5.
nutanix@cvm:~$ sar -q
In the preceding example, the NCC alert is triggered because the 5-minute load average value at 2:30:01 AM is 10.42.
If the load average is between 10 and 99, use the tools top, sudo iotop, and iostat to identify the CPU-heavy processes or latent or busy disk activity. If the CPU usage is low for the CVM, but the load average appears high, look for multiple processes in a D state. D-state processes are in uninterruptible sleep or waiting for disk I/O.
Look for the D state processes by using the following command:
nutanix@cvm:~$ top -b -n 1 | egrep "PID| D "
If the load average is higher than 10 in the ldavg-15 column of the sar -q command output, or you receive a Critical Alert in the Prism web console or a FAIL alert in NCC for exceeding the threshold of 100, consider engaging Nutanix Support at https://portal.nutanix.com. Nutanix identified an issue where "CPU load is very high for ldavg-5 for a long time" false-positive alerts are generated due to issues with the accounting of running processes in the CVM. This is a cosmetic issue that does not have an impact on running workloads.
Upgrade NCC to 4.5.0 or higher, which reduces the alert level to INFO if the load_avg value recorded is between 100 and 1000000. Review KB-4272 https://portal.nutanix.com/kb/4272 for steps to identify if the CVM is impacted by the cosmetic issue as well as remediation steps.
|
KB8720
|
How to upgrade two node cluster from AOS 5.6
|
This KB provides a workflow for the AOS upgrade of a two-node cluster on 5.6
|
This KB is designed to provide a workflow for upgrading AOS of a two-node cluster that is on 5.6. When attempting this upgrade from the Prism GUI, the pre-upgrade checks will fail with the following message: "There are XX VMs that are currently powered on. Please power off these guest VMs. All guest VMs on this cluster must be powered off in order for the upgrade to proceed. You will need to power these guest VMs back on after the upgrade." This KB provides the workaround for that issue as well as other known issues that a Customer/SRE will hit during the upgrade process from AOS 5.6. WARNING: Support, SEs and Partners should not use CLI AOS upgrade methods without guidance from Engineering or a Senior/Staff SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before proposing or considering these options.
|
1. Verify cluster health and stability.
cluster status | grep -v UP
2. Run the Nutanix Cluster Check (NCC). If the check reports a status other than PASS, resolve before proceeding to next steps.
nutanix@cvm$ ncc health_checks run_all
3. Copy the metadata and binary files to /home/nutanix/ on the CVM from which you will be performing the upgrade.
4. Enable automatic installation.
nutanix@cvm$ cluster enable_auto_install
5. Expand the tar.gz file (example given for nutanix_installer_package-release-euphrates-5.9.2.4-stable.tar.gz in this step. Replace it with the tar.gz file name for your version).
nutanix@cvm$ cd /home/nutanix
6. Start the upgrade. (The metadata file in this step is named euphrates-5.9.2.4-metadata.json as an example.)
nutanix@cvm$ /home/nutanix/install/bin/cluster -i /home/nutanix/install -v /home/nutanix/euphrates-5.9.2.4-metadata.json -p upgrade
7. Monitor the progress with this command:
progress_monitor_cli --fetchall
8. After a CVM is fully upgraded and rebooted, it takes between 15-16 minutes for services to come up.
9. Set a timer for 16 minutes. At the end of 16 minutes, run this command to verify all the services came online:
genesis status
10. After you have verified all the services are up, verify that Mantle is stable:
watch -d genesis status
Watch this for 2-3 minutes and see if any of the pids are changing for Mantle.
Note: The "watch -d genesis status" command is not a reliable way to check/confirm the stability of the "cluster_health" and "xtrim" services, since these services spawn new process IDs temporarily as part of their normal functioning. The new temporary process IDs, or a change in the process ID count in the "watch -d genesis status" output, may give the impression that the "cluster_health" and "xtrim" services are crashing when in reality they are not. Rely on the NCC health check report or review the logs of the "cluster_health" and "xtrim" services to ascertain if they are crashing.
11. If the Mantle pids are changing, look for this signature before a stack trace in the mantle.out log:
less /home/nutanix/data/logs/mantle.out
12. Verify if the node_info file is missing from /appliance/logical/mantle/node_info/
zkls /appliance/logical/mantle/
13. If you don't see node_info then create it with the following command:
zkwrite /appliance/logical/mantle/node_info
14. Verify that Mantle is now stable:
watch -d genesis status
Note: The "watch -d genesis status" command is not a reliable way to check/confirm the stability of the "cluster_health" and "xtrim" services, since these services spawn new process IDs temporarily as part of their normal functioning. The new temporary process IDs, or a change in the process ID count in the "watch -d genesis status" output, may give the impression that the "cluster_health" and "xtrim" services are crashing when in reality they are not. Rely on the NCC health check report or review the logs of the "cluster_health" and "xtrim" services to ascertain if they are crashing.
15. Check whether a stale HA route remains on the host by running this command:
hostssh "route"
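To narrow the output to the stale HA route specifically, a sketch:

nutanix@cvm$ hostssh "route -n | grep 192.168.5.2"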
16. Clear the route (if you see one listed for 192.168.5.2 as above) by issuing the following command on the CVM for that host:
genesis restart
17. genesis.out will now show the following signature:
2019-12-08 12:40:52 INFO cluster_manager.py:4754 Shutdown token details ip 172.x.x.x time 1575817691.6 reason nos_upgrade
18. Run a Curator Full Scan with the following command:
allssh curl http://localhost:2010/master/api/client/StartCuratorTasks?task_type=2
19. Upgrade will continue to the other node once all Extent Groups have been moved:
allssh "grep ExtentGroups /home/nutanix/data/logs/curator.INFO"
20. After both nodes have been upgraded verify everything looks correct with the following commands:
upgrade_status (both say "is up to date")
|
KB8922
|
Information captured under Syslog
|
This article is to track the information captured under syslog
|
This article tracks information captured under syslog that may be useful in different use cases, for example:
1. Who created a new user? Which log shows that information in syslog?
2. How to find in syslog whether the newly created user has an admin role or not?
3. Who created the snapshot and when was it created? Which log shows that information in syslog?
4. If anyone activates or deactivates the syslog, which log shows that information in syslog?
|
Answer 1: In a test, we logged in to Prism as the "admin" user and created a new user named "newyork". The following log snippet shows that information under syslog:
Dec 17 12:52:56 NTNX-16SM37290159-B-CVM api_audit: INFO 2019-12-17 12:52:55,430 clientType=ui||userName=admin||NutanixApiVersion=1.0||httpMethod=PUT||restEndpoint=/v1/users/newyork/roles||entityUuid=||queryParams=||payload=
Answer 2: The parameters passed to the API are not logged for security reasons, so the audit log will not show the attributes set for the user.
Answer 3: In a test, we logged in to Prism as the "admin" user and took a snapshot of the VM 'Windows 2008'. The following log snippet shows that information under syslog:
Dec 17 12:45:06 NTNX-16SM37290159-B-CVM api_audit: INFO 2019-12-17 12:44:57,390 clientType=ui||userName=admin||NutanixApiVersion=0.8||httpMethod=POST||restEndpoint=/v0.8/snapshots||entityUuid=||queryParams=||payload=
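On the receiving syslog server, entries like the above can be filtered with grep; a sketch assuming the messages land in /var/log/messages:

# User creation events (Answer 1)
grep api_audit /var/log/messages | grep "restEndpoint=/v1/users"
# Snapshot creation events (Answer 3)
grep api_audit /var/log/messages | grep "restEndpoint=/v0.8/snapshots"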
Answer 4: This is a current limitation; syslog does not capture this information. ENG-277487 https://jira.nutanix.com/browse/ENG-277487 has been filed. Here is why. Remote syslog entry deletion API call:
curl 'https://10.63.19.19:9440/api/nutanix/v3/remote_syslog_servers/6615e51a-6d96-4ac3-8d3d-95c2d0a2a8fa'
Entry in aplos.out
2020-01-06 01:55:40 INFO audit_events.py:76 **Audit Event** (aplos): Event: entity_deletion, Timestamp (secs): 1578304540.971820, Details: {'username': 'admin', 'kind': 'remote_syslog_server', 'entity_name': None, 'owner_uuid': u'00000000-0000-0000-0000-000000000000', 'entity_uuid': '6615e51a-6d96-4ac3-8d3d-95c2d0a2a8fa', 'user_uuid': u'00000000-0000-0000-0000-000000000000'}
As per the rsyslog modules configuration, client_tracking_v3.log is not part of rsyslog. That is why there was no logging in rsyslog when the remote syslog entry was removed.
File : /usr/local/nutanix/cluster/config/rsyslog_modules.json
|
KB14605
|
NDB - MSSQL AG DB was provisioned successfully but DB is not synced to Secondary replica
|
This article introduces two AG DB provision scenarios.
|
This is relevant to Availability Group Database environments being provisioned by NDB using a backup file as source.
Scenario #1:
1. The DB Provision from a backup operation fails at the "Register Database" step with the error: "Could not get information about the database <DB Name>. Cannot continue. Please check whether the Database is in Online mode."
2. The "/sqlserver_database/register_database/<operation_id>.log" shows the below Traceback when fetching the database information:
[2023-03-27 12:01:51,872] [111592] [INFO ] [0000-NOPID],Discovering MSSQLSERVER\<DB Name>
3. The "ERRORLOG" file on the Secondary replica diagnostics log bundle shows "The system cannot find the file specified" errors:
2023-03-27 11:59:03.62 spid92s Always On: DebugTraceVarArgs AR '[HADR] [Secondary] operation on replicas [FD350EA5-FC5C-49C7-9586-25A3ABD0B0EA]->[9079F096-4D50-4827-917F-6F4BDD874C87], database [<DB Name>],
Scenario #2:
1. The DB Provision from a backup file operation is completed successfully.
2. Check the AG Dashboard in SSMS and you will find that the Secondary replica is in "Not Synchronizing" status. The provisioned DB on the secondary node shows a "Not Synchronizing" status.
3. "Remove Database from Availability Group" on the "Not Synchronizing" DB within SSMS succeeds. But "Join to Availability Group" of the "Not Synchronizing" DB from the secondary node fails with the below error:
The following required directories do not exist on replica <DB Name>: C:\NTNX\ERA_DATABASES\<DB Name>\DATA1\data, C:\NTNX\ERA_DATABASES\<DB Name>\DATA4\data"
4. The "C:\NTNX\ERA_DATABASES\<DB Name>\DATA1\data(x)" folder is empty on the secondary node.
|
Definitions:
Manual Seeding - This is the default behavior. The SECONDARY replica is manually seeded by a DBA via a database backup (of the PRIMARY) restored on the SECONDARY. In NDB's case, NDB takes the place of the DBA and is responsible for the manual backup of the PRIMARY and restore on the SECONDARY.
Automatic Seeding - Starting with SQL Server 2016, automatic seeding was introduced, where SQL Server automatically creates the SECONDARY replicas for every database in the group. You no longer have to manually back up and restore SECONDARY replicas. NDB currently does not support Automatic Seeding: NDB Current Limitations https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-SQL-Server-Database-Management-Guide-v2_5:top-sql-server-limitations-c.html.
Scenario 1 Explanation: In this particular scenario, since the seeding mode was Automatic, the DB Backup and Restore phase for Manual Seeding is skipped completely. If the Automatic Seeding takes a long time, the Registration phase fails and the databases are rolled back.
Scenario 1 Solution: To resolve this issue:
1. Open up SSMS.
2. Right-click on the AG group and select Properties.
3. In the General tab, ensure that Seeding Mode is set to Manual for all replicas.
If you prefer to use SQL statements, the current seeding mode can be verified with a query like the one below (a generic T-SQL example; the exact statement was omitted from the original article):
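SELECT replica_server_name, seeding_mode_desc
FROM sys.availability_replicas;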
And the seeding mode can be changed with a statement like the following (again a generic example; substitute your own AG and replica names):
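ALTER AVAILABILITY GROUP [<AG name>]
MODIFY REPLICA ON N'<replica server name>' WITH (SEEDING_MODE = MANUAL);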
Scenario 2 Explanation: In this scenario, the DB Provisioning of all replicas succeeded. However, due to a difference in data file and log file locations, the replicas end up in "Not Synchronizing" mode. Here are the related steps that NDB performs during this operation:
1. It attaches and mounts the DATA and LOG disks at the below location for both nodes:
{'mount_point': 'C:\\NTNX\\ERA_DATABASES\\DBTEST\\DATA1\\', 'size': 20.0}
2. Once the disks are attached and mounted, it imports the database only onto the PRIMARY node from the given backup file.
3. It runs a sub-operation to restore the database in the following subdirectories ("data" and "log"):
{'BAH_DB': 'C:\\NTNX\\ERA_DATABASES\\DBTEST\\DATA1\\data\\'}
4. Once the database is imported and restored on the PRIMARY node, a backup is taken, and the database is added to the Availability Group. This backup is used to restore the database on the SECONDARY replica.
5. However, since Automatic seeding is enabled, NDB assumes SQL Server builds the SECONDARY replica from the PRIMARY node by automatic seeding and skips the creation of the "data" and "log" subdirectories on the SECONDARY replicas. This results in a mismatch in the data and log directory structures between the PRIMARY and SECONDARY replicas.
6. SQL Server detects the mismatch in the directory structure between PRIMARY and SECONDARY and puts the SECONDARY replica in "Not Synchronizing" status.
Scenario 2 Solution: To resolve this, follow the steps below:
1. Follow the Solution steps in Scenario 1 in order to switch the seeding mode to Manual.
2. Trigger the DB removal in NDB for both PRIMARY and SECONDARY replicas. NDB will remove the DB on the PRIMARY node; it fails on the SECONDARY node.
3. From Prism Element, open the VM console of the SECONDARY replica, refresh the SQL Server instance in SSMS and make sure the target DB is no longer listed.
4. Refer to KB-13924 https://portal.nutanix.com/kb/13924 to list the Volume Disk ID and Prism SCSI disk ID mappings.
5. Mark the volumes of the target DB as offline in Windows Disk Management. Note: Double-check that you are marking the correct volumes offline. If in doubt, consult an NDB SME or STL.
6. Delete the SCSI disk on the Prism VM. A "VM disk detach" task will be started.
7. Clean up the DB entry in the NDB repository by executing the following command in an SSH session to the NDB Server:
era > database_group update database_group_name="<DB group name>" database remove database_id=xxx delete_time_machine_for_db=true
8. Redo the DB Provisioning to the AG.
|
KB12800
|
Nutanix Database Service - Log Catchup activity fails after restore operations on PostgreSQL DBs
|
After performing one or multiple restore operations for an NDB managed PostgreSQL Single Instance DB, log-catchup operations start to fail with the error “All wal files are older than wal files already captured in Time Machine. Perform a careful manual cleanup of the staging disk and then retry!”.
|
After performing one or multiple restore operations for an NDB-managed PostgreSQL Single Instance DB, log-catchup operations start to fail with the following error:
All wal files are older than wal files already captured in Time Machine. Perform a careful manual cleanup of the staging disk and then retry!
After the first log-catchup failure, all subsequent log-catchup operations continue to fail. There are no issues observed with the functionality of the database.
A possible reason is that the PostgreSQL DB Server VM's staging drive contains WAL files that belong to a timeline that has already been caught up. This can happen if the staging drive was altered or managed outside of NDB.
|
In the above situations, a “time machine heal” operation would help reduce/remove the gap that is building up in the NDB time machine. A time machine heal can only be performed using the NDB CLI. Follow the below steps to perform a time machine heal.
1. Connect to the NDB Server via SSH.
2. Launch the NDB CLI by typing 'era' at the command prompt:
bash ~ $] era
3. Inside the NDB CLI, type the following to perform a heal of the time machine:
era > time_machine heal name=”<time machine name>” reset_capability=<true|false> nx_cluster_id=<cluster ID>
Setting reset_capability to true will trigger a database snapshot prior to the heal operation.
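For example (hypothetical time machine name shown; substitute your own values):
era > time_machine heal name="pgdb01_TM" reset_capability=true nx_cluster_id=<cluster ID>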
|
KB15069
|
NDB: Creating SQL software profile may fail due to Powershell execution policy restriction
|
This article outlines a situation that creating SQL software profile fails due to Execution Policy being set to AllSigned in GPO.
|
Creating a SQL software profile may fail when the GPO PowerShell execution policy prevents NDB from executing PowerShell scripts on the source DBServer VM. Here are the steps to identify the issue:
1) Access the NDB server, check the operation log under the directory /home/era/era_base/logs/drivers/create_software_profile/<operation_id.log> and filter for the error message "'failed to get partition info' in get_partition_access_path":
[era@localhost ~] grep -i "failed to get partition info" /home/era/era_base/logs/drivers/create_software_profile/<operation_id.log>
Traceback in the operation logs:
2) From the operation dashboard in the NDB Web UI, you may see the operation fail with the error message 'Internal Error.\'"\\\'failed to get partition info\\\'in get_partition_access_path"in get_win_mount_drive_mapping\'\r\n'.
3) However, if you execute the PowerShell command "Get-Partition" on the source DB Server VM, there's no issue:
Get-Partition | ForEach-Object {
Here is an example of the output:
4) Access the source DB Server VM, and check the operation log under C:\NTNX\ERA_BASE\logs\drivers\sqlserver_database\create_software_profile\<operation_id.log>, find the following execution policy errors:
The file C:\NTNX\ERA_BASE\era_engine\stack\windows\python\lib\site-packages\nutanix_era\era_drivers\common\host\power_becf68be.ps1 is not digitally signed. You cannot run this script on the current system.
Here is the screenshot of the log snippet:
5) Run the following PowerShell command to check the current execution policy on the Source DBServer VM. In this example, the execution policy is set as AllSigned.
PS> Get-ExecutionPolicy
|
NDB requires the ability to execute scripts on a source DB Server VM, so the PowerShell execution policy should be set to Unrestricted.
1) In the Group Policy Management Editor, go to Computer Configuration → Policies → Administrative Templates → Windows Components → Windows PowerShell, enable 'Turn on script execution', and then select "Allow all scripts" in the Execution Policy drop-down box. For more information, refer to: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.security/set-executionpolicy?view=powershell-7.3
2) In the source DB Server VM, force an update to the group policy:
C:\> gpupdate /force
3) Run the following command to verify that the execution policy has been updated to Unrestricted:
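PS> Get-ExecutionPolicy
Unrestricted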
4) Attempt to create the SQL software profile again.
|
""ISB-100-2019-05-30"": ""ISB-037-2017-02-10""
| null | null | null | null |
KB13337
|
Unable to upload CA root certificate due to unsupported algorithm
|
FIPS 140-2 too restrictive for CA root cert, blocking certificate upload when attempting to replace self-signed SSL certificates.
|
When attempting to replace the self-signed SSL certificates with CA-signed certificates, as outlined in the Prism Security Guide, a root CA or intermediary CA cert chain must be provided. After selecting the proper file and clicking the upload button, the dialog box returns the following error in red, highlighted in pink:
Unsupported signature algorithm detected on one or more certificates in CA chain. Please refer to the FIPS 140-2 documentation for more information.
This indicates an issue with one or more certificate blocks in the CA chain file being uploaded.
|
SHA-256 and SHA-384 signed certificates can still include an unsupported SHA-1 signing algorithm, which will block the upload. As a workaround, the CA cert file can be modified to remove the SHA-1 certificate block, which will allow the file to be successfully uploaded and the cert to be valid.
1. MAKE A COPY OF THE CA ROOT CERT YOU PLAN ON MODIFYING PRIOR TO DOING ANYTHING.
2. Using a text editor, open the CA root certificate file, which should show something like this:
-----BEGIN CERTIFICATE-----
3. Copy the Root CA cert contents to a notepad. While logged into a CVM, change to the tmp directory with cd ~/tmp.
4. Using a text editor, create a file for each block of the certificate file, making sure to include the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines in each file.
5. Run the following command against each file to determine the Signature Algorithm used:
nutanix@NTNX-19FM6H130137-D-CVM:10.24.135.19:~/tmp$ openssl x509 -text -in test2.cert
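As an optional convenience, the split-and-inspect steps can be scripted. This is a hedged sketch assuming the chain file is named ca_chain.pem (adjust the file name to your own):
nutanix@cvm$ csplit -z -f certblock- ca_chain.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
nutanix@cvm$ for f in certblock-*; do echo "== $f"; openssl x509 -noout -text -in "$f" | grep -m1 'Signature Algorithm'; done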
6. Using the individual certificate blocks in the CA root chain file, identify the block using the SHA-1 signature algorithm, as shown above.
7. In the example given, the bottom section of the CA root chain file between the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- markers is the section with the SHA-1 encoding. Delete that section, then save and exit the file.
-----BEGIN CERTIFICATE-----
8. Back in Prism, the modified CA cert should now upload successfully and the new SSL certificate should be applied.
|
KB15601
|
Invalid callback URL breaks login for IDP users.
|
If an incorrect callback URL was configured for the IDP provider, then attempts to log in to PCVM via the IDP user will fail.
This article describes how to confirm the callback URL when access to the IDP provider configuration is not possible.
|
Import PCVM metadata and check the callback URL (follow Nutanix Security Guide: Adding a SAML-based Identity Provider https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide:mul-security-authentication-saml-pc-iam-t.html for instructions):
<?xml version="1.0"?>
Try to log in with the IDP user and check the logs of the iam-user-authn pods:
nutanix@PCVM:~$ for POD in $(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector='app in (iam-themis,iam-user-authn)' -o jsonpath='{.items[*].metadata.name}'); do echo ${POD}; sudo kubectl -n ntnx-base logs ${POD} > ~/tmp/${POD}.log; done
The following line is observed:
time="2023-09-06T11:43:05Z" level=error msg="Failed to authenticate: expected destination \"https://x.x.x.y:9440/api/iam/authn/callback\" got \"https://some.domain.example.com:9440/api/iam/authn/callback\"" requestID=9798936e-434f-98a8-afdc-e96782e03029
Note that the expected destination and the one in the response might be either an IP or a domain name, depending on the PCVM configuration. What is important here is that they are different.
|
Reconfigure the URL on the IDP side to match the expected URL available on the metadata XML file.
|
KB16149
|
NVMe SSD is passed through directly to the User VMs in ESXi
|
This is an unsupported configuration that can cause cluster instability.
|
Although ESXi 6.5 and later versions support passthrough of NVMe SSD drives to User VMs, this is not a supported configuration when used on a Nutanix platform. NVMe SSD drives can only be used for hypervisor boot disk media or as passthrough storage for the Controller VM (CVM). Please consult the Hardware Specifications http://portal.nutanix.com/page/documents/list?type=compatibilityList documentation for your platform to see which disks should be assigned to each use case.If NVMe SSD drives are passed through to User VMs, it is possible to see Stargate crashes.The following messages may also appear in vmware.log on the User VM which is using the drive.
root@ESXi:~# less /vmfs/volumes/<container_uuid>/<user_vm_name>/vmware.log
|
As this is not a supported configuration, you must follow KB-15461 http://portal.nutanix.com/kb/15461 to configure the CVM with passthrough as described in the "ESXi - Example of VMD Controller device passthrough to CVM" section.
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Commands to reset statistics""
| null | null | null |
KB11617
|
Nutanix Files: Insights Server using more than 90% of allocated memory alert
|
Investigation of Insights Server memory utilization
|
An increase in new connections can cause the FSVM's Insights Server process to consume most of the allocated memory. This will result in an alert containing the following information:
Message : Insights Server using more than 90% of allocated memory, [Cause: Insights server getting more queries than usual]
This is caused by new connections being established and their corresponding IDF queries. Existing connections have most of their data cached and don't query IDF as often. In this scenario, the minerva:GetFSFreeSpaceInsightsStore query resulted in increased memory usage for the insights_server process.
|
1) Run the below on a FSVM to identify the earliest timestamp in the insights_server.INFO log.
FSVM:~$ head -n1 /home/nutanix/data/logs/insights_server.INFO
2) Run the below to identify the latest timestamp on the same insights_server.INFO log.
FSVM:~$ tail -n1 /home/nutanix/data/logs/insights_server.INFO
3) Run the below to get the count of minerva:GetFSFreeSpaceInsightsStore, which will indicate new connections.
FSVM:~$ grep -c "minerva:GetFSFreeSpaceInsightsStore" /home/nutanix/data/logs/insights_server.INFO
4) Divide the number from the above output by the time span to get the number of queries per minute. Example:
FSVM:~$ head -n1 ~/data/logs/insights_server.INFO
Based on the above outputs:
Time covered in log = 61 min
Number of GetFSFreeSpaceInsightsStore instances = 19851
19851 / 61 ≈ 325 queries/min
The observed behaviour is a known issue, and an internal engineering ticket has been raised to enhance the caching mechanism for new connections. To address this, Nutanix recommends upgrading Nutanix Files to version 4.4.x. This update excludes the RSS usage of the parent process of the Insights Server service and enhances the alert processing functionality.
|
KB4666
|
Foundation Fails During Reboot After Imaging the Hypervisor on HPE® ProLiant® servers
|
On HPE® ProLiant® servers on which the HPE B140i Dynamic Smart Array controller is running in RAID mode (possibly through the inclusion of the SKU “784308-B21 - HPE FIO Enable Smart Array SW RAID” in the order), Foundation fails with the following message: Found fewer than 1 AHCI controller or because the value of Legacy BIOS Boot Order is incorrect.
|
We have experienced 2 issues during Foundation on HPE® ProLiant® servers:
Foundation fails with the message “Found fewer than 1 AHCI controller”
On servers where the HPE B140i Dynamic Smart Array controller is running in RAID mode (possibly through the inclusion of the SKU “784308-B21 - HPE FIO Enable Smart Array SW RAID” in the order), Foundation fails with the following message:
20190325 11:34:57 INFO Node with ip 172.XX.XX.231 is in phoenix. Generating hardware_config.json
From the Phoenix prompt, "fdisk -l" will not show any SATADOM.
Foundation fails during reboot after imaging the hypervisor.
On HPE® ProLiant® servers that include the SKU 784308-B21 - HPE FIO Enable Smart Array SW RAID in the order, Foundation fails during reboot after imaging the hypervisor.
The issue occurs because the value of Legacy BIOS Boot Order is incorrect.
|
Scenario 1:
Nutanix requires the HPE B140i Dynamic Smart Array controller to run in AHCI mode. You can resolve the issue by changing the mode of the smart array controller to AHCI.
To change the mode of the smart array controller to AHCI, do the following:
Restart the server.During POST, press F9 to access the ROM-Based Setup Utility (RBSU).Go to System Utilities > System Configuration > BIOS/Platform Configuration (RBSU) > System Options > SATA Controller Options.Set the value of Embedded SATA Configuration to Enable SATA AHCI Support.Press F10 to save your changes.
Scenario 2:
You can resolve the issue by changing the value of Legacy BIOS Boot Order from Embedded RAID 1 : Smart HBA H240ar Controller to Embedded SATA Controller #1 by using the ROM-Based Setup Utility (RBSU).
On newer HPE DX360-8 G10 systems with the HPE Smart Array P408i-a SR Gen10, use Embedded SATA Controller #2.
Perform the following to resolve the issue:
Restart the server.During POST, press F9 to access the RBSU.Go to System Utilities > System Configuration > BIOS/Platform Configuration (RBSU) > Boot Options > Legacy BIOS Boot Order.Select the Embedded SATA Controller #1 (or Embedded SATA Controller #2) option by using the down arrow key and then press + to move this option to the highest position.Press F10 to save your changes.
|
KB16042
|
NDB | Database registration may fail with error message "Failed to get storage mappings"
|
This KB helps to identify the issue that the database registration may fail when an NFS drive is mounted
|
Registering a database with NDB may fail with the error "Failed to get storage mappings". Follow the below steps to identify whether an NFS drive is mounted. Here is an example with an Oracle database:
Check the operation log under 10.75.xx.xx-YYY-MM-DD-HH-mm-ss/logs/drivers/oracle_database/register_database/<operation-ID>-YYYY-MM-DD:HH:mm:ss.log.
For example, the operation ID is 35e371a6-1ca7-422a-9522-e356bdc1e6e0
TASK [op_update : Update Operation Status] *************************************
Check Traceback in 10.75.xx.xx-YYY-MM-DD-HH-mm-ss/logs/drivers/eracommon.log, it shows "10.78.xx.xx/Vol_REAxxxx_xxxx_1" is inaccessible. Hence, the operation failed on 'Failed to get storage mappings':
eracommon.log
[2023-11-24 15:53:41,574] [140007127148352] [INFO ] [35e371a6-1ca7-422a-9522-e356bdc1e6e0],Vol_REAxxxx_xxxx_1 not a physical device_path
Check disk_info.txt under the directory 10.75.xx.xx-YYY-MM-DD-HH-mm-ss, an NFS drive is mounted.
df -h
|
NDB does not support NFS mounts as part of the database.
If the NFS mount is not part of the database files, unmount the NFS drive and then register the database. After the DB registration is completed, mount the NFS drive back.
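As an illustrative sketch on a Linux DB server (the mount point /u98 is hypothetical; the export path is taken from the example logs above):
$ sudo umount /u98 # unmount the NFS drive before registering the database
... register the database with NDB ...
$ sudo mount -t nfs 10.78.xx.xx:/Vol_REAxxxx_xxxx_1 /u98 # remount afterwards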
|
KB7881
|
A stopped ntpd service on a CVM can cause the CVM time to be in the future
|
A stopped ntpd service on a CVM can cause the CVM time to be in the future.
|
User is getting alerts about NTP not syncing to CVM (Controller VM).
nutanix@CVM$ ncli alert ls
NCC check also pointing to same thing.
Detailed information for cvm_time_drift_check:
Diagnosis
Checked the date on the cluster and found that 1 node is ahead by 2 minutes.
nutanix@CVM$ allssh date
Ran "ntpq -pn" and the node with future timestamp refused connection.
nutanix@CVM$ allssh ntpq -pn
Validated each NTP server using "ntpdate -q" and ping.
nutanix@CVM$ ntpdate -q 0.us.pool.ntp.org
Restarted NTP on Master NTP Server. No help.
nutanix@CVM$ sudo systemctl restart ntpd.service
|
Checked whether the NTP service on the node in question (with time in the future) was running. In the following example, the NTP service is dead.
nutanix@CVM$ systemctl status ntpd.service
Restarted NTP service on that node.
nutanix@CVM$ sudo systemctl restart ntpd.service
Waited a bit, and the CVMs synced to NTP correctly.
nutanix@CVM$ allssh date
NCC check should also PASS.
Running : health_checks network_checks check_ntp
|
KB10338
|
NDB - SQL DB registration failed with error "Could not install required packages 3221225781”
|
SQL DB registration in NDB fails because a dependent package cannot be installed, with either of the following errors:
"Could not install required packages 3221225781" or "Internal Error: Cannot validate database server inputs. Please check the DBServer inputs"
|
Registering a SQL DB fails with one of the errors below:
"Could not install required packages 3221225781" or "Internal Error: Cannot validate database server inputs. Please check the DBServer inputs"
Register host logs in log bundle(logs/drivers/register_host/<operationID>.log) show dependent package installation fail.
[2020-11-18 12:11:13,293] [140089820481344] [WARNING ] [0000-NOPID],Function: install_dependent_packages, Exception: Could not install required packages 3221225781
Running the Python script manually on the SQL DB Windows server gives the error that api-ms-win-crt-runtime-l1-1-0.dll is missing.
|
Install the Universal C Runtime (CRT) from https://support.microsoft.com/en-us/help/2999226/update-for-universal-c-runtime-in-windows on the Windows SQL DB Server. Run the SQL DB registration again and it will complete successfully.
|
KB14680
|
AHV upgrade failed due to timeout waiting for prism ready
|
AHV upgrade failed due to a timeout waiting for Prism to become ready; the root cause was Hera not releasing port 9081.
|
The problem was encountered while upgrading AHV through LCM on the node that owned the Prism leader. The node shutdown caused the migration of the Prism leader to another CVM. However, despite the migration, the Prism service was unable to start completely, rendering it incapable of servicing the REST API request from LCM to stage the catalog item for the AHV upgrade. As a consequence, the staging task in LCM exceeded its allotted time of 20 minutes, resulting in a failure of the AHV upgrade.
You may observe the following signature in lcm_ops.out on the LCM leader CVM:
/// LCM AHV upgrade for CVM.83 started ///
The above LCM leader CVM is CVM.82, and the staging task was trying to connect to its own port 9440 (Prism) to download the AHV image, but the Prism session was never established within LCM's 20-minute timeout. CVM.83 was the Prism leader, and the leadership migrated to CVM.84 due to the shutdown for the AHV upgrade. From the log bundle collected, we can see the Prism leader was transferred from CVM.83 to CVM.84 at 05:22:20, then 20 minutes later it was migrated to CVM.82 at 05:42:26.
$ find ./ -name prism_monitor* | xargs grep 'Current process is chosen to be the master'
Now let's look at the prism_monitor.INFO log on CVM.84 to see what happened after acquiring the Prism leader from CVM.83:
/// CVM.84 got leadership of Prism, and started to restart tomcat ///
Port 9081 on CVM.84 was used by Hera because the previous Prism leader was on CVM.83, and non-leader CVMs use Hera as a proxying service to create SSH tunnels to the Prism leader, through which they send requests and receive responses. During the Prism leader change, CVM.84 became the Prism leader, so the Hera service should first terminate all existing SSH tunnels towards the previous Prism leader and then release port 9081 for Tomcat/Prism to start. From the hera.out log on CVM.84, Hera realized the leader changed from CVM.83 to CVM.84 and started terminating the SSH tunnels, but it was stuck for 93 seconds and eventually the operation returned "Error creating SSH client". Hera reported "Stopped all tunnels" at 05:23:53, and from there port 9081 should have been released. However, on the Prism monitor side, Tomcat had already been started at 05:23:24 while port 9081 was still in use by Hera. Tomcat started but could not bind port 9081, so prism_monitor kept throwing heartbeat errors.
I0330 05:22:20.921911Z 27793 main.go:154] New Prism leader from ZK: 130.5.252.84
Also observed following errors in CVM.84 prism_gateway.log, and the errors lasted for 20 minutes until prism/tomcat was restarted by prism_monitor after 20 times (once per minute) tomcat heartbeat failure.
WARN 2023-03-30 05:37:38,294Z pool-8-thread-15 [] web.interceptors.LocalhostServerWorker.connect:45 Failed connection to server localhost/127.0.0.1:9081:
|
Solution: This issue is now resolved in AOS 6.5.4 and higher. Please upgrade to the fix releases.
|
KB2155
|
How to monitor OpLog usage for a vDisk
|
This article details how to monitor the OpLog usage for vDisks. It can be useful when working on performance related scenarios
|
The key role of the Oplog is to act as a persistent write buffer, similar to a filesystem journal. It is built as a staging area to absorb bursts of random write I/Os, coalesce them, and then sequentially drain the data to the extent store.
The Oplog is a shared resource; however, allocation is done on a per-vDisk basis to ensure each vDisk has an equal opportunity to leverage it. This is implemented through a per-vDisk Oplog limit (the maximum amount of data per vDisk in the Oplog). VMs with multiple vDisks are able to leverage the per-vDisk limit multiplied by the number of vDisks.
The per-vDisk Oplog limit is currently 6 GB. This limit is controlled by the Stargate service Gflag: vdisk_distributed_oplog_max_dirty_MB.Warning: Do not change this gflag about without guidance from Engineering.
NOTE: In AOS versions 5.19 and above, dynamic Oplog sizing was introduced, so the Oplog can grow beyond 6 GB logical / 12 GB physical / 1 million fragments (whichever is hit first) as long as the rate of incoming I/O is higher than the rate of draining, improving Oplog recovery speed.
By default, the Oplog for a vDisk will drain to the Extent Store once I/O to the vDisk has stopped. However, in the case of continuous I/O, once the per-vDisk Oplog threshold is met (the default is 85% of the 6 GB limit), I/Os to that vDisk start to experience performance issues, as they have to compete with forced Oplog drain operations happening concurrently. Furthermore, if the Oplog limit is reached, I/Os to the vDisk will be paused until a forced drain occurs. This can have a direct impact on the perceived throughput and response time. Draining the Oplog can take some time, which depends on many factors. The Oplog can become a bottleneck if I/Os are being generated faster than the Oplog can drain, so knowing how to monitor Oplog usage is critical when troubleshooting performance scenarios.
Below are some example scenarios where we have seen Oplog draining become a factor with performance.
1. Synthetic tests (IOmeter, HammerDB, etc.) run back-to-back without allowing the Oplog to completely drain. If the Oplog is not allowed to drain between performance tests (usually done during a POC), the performance results can become skewed and give the appearance that performance is poor.
2. Applications that generate very large bursts of random write I/Os. If data is being written to the Oplog faster than it can be drained, performance can certainly be impacted.
|
Oplog usage for a vDisk can be monitored from two Stargate service pages/links
<CVM_IP>:2009
<CVM_IP>:2009/oplog_flush_stats?vdisk_id=<vDisk_ID>
Note: The CVM IP should be of the CVM which is hosting the vDisk. In most scenarios, the vDisk is hosted on the same CVM node where the VM is hosted. The above-mentioned pages/links can be accessed through a Web Browser such as Chrome/Safari/Firefox or Command-line browser like links.
To view the Stargate service page through Web Browser, you must first ensure that port 2009 is open on the CVM's firewall.
DO NOT manipulate the iptables directly, use the modify_firewall command as described in the Security Guide - Opening and Closing Ports https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_10:wc-cvm-firewall-block-unblock-ports-cli-t.html.
Once the port is open, the above-mentioned Stargate pages/links can be accessed through a web browser.
NOTE: Do not forget to close the firewall port once done.
In the below example, we would be using the CVM IP x.y.z.211 and would be focusing on a vDisk with ID 20202.
The following URL can be used to open the Stargate service page of CVM IP x.y.z.211
http://x.y.z.211:2009/
The following page loads up:
Scroll the page down to the "Hosted active live vdisks" section. In the KB column, we can see the OpLog usage for vDisk ID 20202 is 5658028 KB
Similarly, we can view the other page using the following URL
http://x.y.z.211:2009/oplog_flush_stats?vdisk_id=20202
The following page loads; scrolling down shows the logical usage. The "Current logical usage" value shows the Oplog usage in MB.
To view the above-mentioned pages/links through Command line browser then use the following commands:
links http://x.y.z.211:2009
links http://x.y.z.211:2009/oplog_flush_stats?vdisk_id=20202
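Alternatively, as a hedged convenience (assuming port 2009 has been opened with modify_firewall as described above), the stats page can be fetched with curl from a CVM and filtered, for example:
nutanix@cvm$ curl -s "http://x.y.z.211:2009/oplog_flush_stats?vdisk_id=20202" | grep -i "logical usage"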
Summary
1. In general, if a customer is doing a POC and testing performance, it is important to verify that the Oplogs for the VMs being tested have been drained before starting/resuming testing. If a VM's Oplog still has data when the test begins, this could skew the end results.
2. If a POC is being performed for a specific application, ensure that the best practices for the application in use are being followed.
|
KB14482
|
LCM firmware update failure with FND 5.3.4
|
LCM firmware upgrade might fail on foundation version 5.3.4
|
The LCM firmware upgrade process might fail to complete its workflows and leave the node stuck in the Phoenix state. LCM upgrade failure message:
Operation failed. Reason: LCM failed performing action reboot_from_phoenix in phase PostActions on ip address xx.xx.xx.xx Failed with error 'Failed to boot into cvm/host: xx.xx.xx.xx reboot_from_phoenix failed (1305): Failed to run reboot_to_host script: None'
Find the LCM leader CVM IP with the following command:
nutanix@CVM:~$ lcm_leader
On the LCM leader CVM, look for the following traceback in /home/nutanix/data/logs/lcm_ops.out :
Traceback (most recent call last): File "/usr/local/nutanix/cluster/bin/lcm/lcm_actions_helper.py", line 687, in execute_actions_with_wal raise LcmActionsError(err_msg, action_name, phase, ip_addr)LcmActionsError: Failed to boot into cvm/host: xx.xx.xx.xx reboot_from_phoenix failed (1305): Failed to run reboot_to_host script: None
2023-02-17 11:28:17,136Z ERROR lcm_actions_helper.py:365 (xx.xx.xx.xx, kLcmUpdateOperation, a00e4992-6705-4961-5925-ef021342e0fa, upgrade stage [2/2])Traceback (most recent call last): File "/usr/local/nutanix/cluster/bin/lcm/lcm_actions_helper.py", line 352, in execute_actions metric_entity_proto=metric_entity_proto File "/usr/local/nutanix/cluster/bin/lcm/lcm_actions_helper.py", line 687, in execute_actions_with_wal raise LcmActionsError(err_msg, action_name, phase, ip_addr)LcmActionsError: Failed to boot into cvm/host: xx.xx.xx.xx . reboot_from_phoenix failed (1305): Failed to run reboot_to_host script: None
|
The issue has been fixed in Foundation 5.3.4.1; refer to the Release Notes https://portal.nutanix.com/page/documents/details?targetId=Field-Installation-Guide-Rls-Notes-v5_3:Field-Installation-Guide-Rls-Notes-v5_3. Upgrade to the latest Foundation version.
|
KB14775
|
ESXi: Data Corruption in FCD Catalog File (VMware KB 90106)
|
This article points to some Nutanix internal articles regarding the issue of VMware KB 90106.
|
VMware published KB 90106 regarding the file data corruption issue in the catalog file called "shard.dat" for the First Class Disk feature.
VMware KB: Data corruption on NFSv3 when parallel appends to a shared file are done from multiple hosts (90106) https://kb.vmware.com/s/article/90106
This KB article mentions that this issue may happen in a Nutanix AOS environment because Nutanix AOS does not return file attributes as part of the NFS LOOKUP response.
|
This is an NFS client-side issue within ESXi. VMware fixed this issue in ESXi 7.0 U3l per VMware support.
This issue note which is similar to the VMware KB 90106 can be found in the Resolved Issues list within the ESXi 7.0 U3l Release Notes https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3l-release-notes.html:
PR 3063987: First Class Disk (FCD) sync might fail on NFSv3 datastores and you see inconsistent values in Govc reports
|
KB15019
|
Network Segmentation fails with "Failed to configure IPs. Try again after some time"
|
Network Segmentation fails with "Failed to configure IPs. Try again after some time"
|
While trying to enable network segmentation, it fails after several minutes with the error: "Failed to configure IPs. Try again after some time". The task takes almost 50 minutes before failing, and a second attempt fails within 58 seconds. Looking at the logs, we can see that it "Failed to configure gateway for client subnet 10.xx.xx.0/255.255.252.0" with "File exists":
nutanix@NTNX-CVM:~$ grep "configure segmented IP failed" -C10 ~/data/logs/network_segment.out
The previous failed attempt to configure network segmentation left behind traces of the "Client_subnet" configuration, which is the reason for the "answers: File exists" error.
|
Trying the configuration once again while leaving the "Client subnets" field blank should work. Alternatively, use the CLI, also leaving "Client Subnets" blank:
nutanix@NTNX-CVM:~$ network_segmentation --service_network --ip_pool="iSCSI_separation" --desc_name="iSCSI" --service_name="kVolumes" --service_vlan=195 --mtu=1500 --host_physical_network="VLAN195 (iSCSI)" --gateway="10.xx.xx.1" --host_virtual_switch="DSwitch" --vip="10.xx.xx.10"
|
""ISB-100-2019-05-30"": ""Title""
| null | null | null | null |
KB13431
|
File Analytics - Dual NIC configuration for file analytics server
|
This KB has the procedure to configure FA to use 2 NIC interfaces for network segmentation enabled for volume services on cluster
|
These steps are intended to be used only by Nutanix Support or Engineering, or under the guidance of the Nutanix Engineering team.
Prerequisite: File Analytics version 3.0 or above is installed and configured with a single NIC.
Known Limitations:
1. The Logbay command will not be able to collect logs from the FAVM.
2. Teardown from the Prism UI will fail.
3. LCM-based upgrade will not work.
|
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit"Dual NIC for file analytics can be configured in two ways depending on the specific NIC interface to be used as external or internal. Scenario 1: eth0 interface used for external communication and eth1 interface used for internal communicationExample of NIC configuration:eth0/External IP - 10.46.xx.98 (Used for Fileserver, access FA UI etc)eth1/Internal IP - 10.46.yy.127 (Used to access volume group from CVM)
Steps:
1. Deploy FA using the external IP VLAN (i.e., the eth0 interface will be part of the VLAN belonging to the 10.46.xx.xx subnet).
2. Once FA is deployed, from Prism UI -> VM page, add an additional NIC to the FAVM.
3. Configure the NIC to use the internal IP VLAN (i.e., the eth1 interface is added to the VLAN belonging to the 10.46.yy.yy subnet).
4. Take a snapshot of the File Analytics server.
5. Open an SSH session to the FA VM and execute the following steps to configure the File Analytics VM.
Create the file ifcfg-eth1 in the path /etc/sysconfig/network-scripts/
nutanix@FAVM$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1
Configure eth1 interface manually based on intended network information (update the info as per the network vlan)
BOOTPROTO=none
In /etc/sysconfig/network-scripts/ifcfg-eth0, add the following line if it does not exist:
DEFROUTE=yes
Ensure file permissions are set to 644 for ifcfg-ethX fileIn the file /etc/sysconfig/network, add the following line
NOZEROCONF=yes
Create the file /etc/sysconfig/network-scripts/route-eth1
nutanix@FSVM$ sudo vi /etc/sysconfig/network-scripts/route-eth1
Ensure file permissions are set to 644 for route-ethX file
nutanix@FSVM$ sudo ls -hla /etc/sysconfig/network-scripts/route-eth1
Add the following contents in the file to define route for eth1 interface.
10.46.zz.0/22 via 10.46.yy.1 dev eth1
Note: Here 10.46.zz.0/22 is the CVM subnet and 10.46.yy.1 is the gateway of eth1 interface on FAVMTake a backup of /etc/sysconfig/iptables file
nutanix@FAVM$ cp /etc/sysconfig/iptables /tmp/sysconfig/iptables.orgnl
Modify the /etc/sysconfig/iptables file with following contents
*filter
In the above rules list:
Replace 10.46.zz.0/22 with CVM subnet Replace 10.46.xx.98/32 with eth0 IP (do retain the /32 suffix at the end in each rule)Replace 10.46.yy.0/21 with eth1 subnet used for FA
10. Reboot FA VM for all changes to take effect
Verification:
1. From a CVM, try SSH access via the eth1 IP; it should work.
2. From a laptop, try SSH access via the eth0 IP; it should work.
3. From the Prism UI, open the FA link; the UI should load.
4. Enable, full scan, and the events pipeline should work fine.
5. Pulse data is pulled successfully.
6. Execute the route -n command on the CLI; the following should be the output:
nutanix@FAVM$ route -n
1. The default route should be configured to use eth0 (external VLAN).
2. There should be one routing policy for each subnet - external and internal - configured with the respective interfaces.
3. There should be one route for the CVM subnet, configured with eth1, i.e. the internal interface.
4. The 172.x.x.x network is the docker bridge; no changes are expected here.
Expected results
1. From a CVM, SSH access to the eth0 IP should fail.
2. From a laptop, SSH access to the eth1 IP should fail.
3. In a browser, replacing the eth0 IP with the eth1 IP should not load the FA UI.
Scenario 2: eth1 interface used for external communication and eth0 interface used for internal communication
Example of NIC configuration:
eth1/External IP - 10.46.xx.98 (used for the file server, accessing the FA UI, etc.)
eth0/Internal IP - 10.46.yy.127 (used to access the volume group from the CVM)
Steps:
1. Deploy FA using the internal IP VLAN (i.e., the eth0 interface will be part of the VLAN belonging to the 10.46.yy.yy subnet).
2. Once FA is deployed, from Prism UI -> VM page, add an additional NIC to the FAVM.
3. Configure the NIC to use the external IP VLAN (i.e., the eth1 interface is added to the VLAN belonging to the 10.46.xx.xx subnet).
4. Take a snapshot of the File Analytics server.
5. Open an SSH session to the FA VM and execute the following steps to configure the File Analytics VM.
Create the file ifcfg-eth1 in the path /etc/sysconfig/network-scripts/
nutanix@FAVM$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth1
Configure eth1 interface manually based on intended network information (update the info as per the network vlan)
BOOTPROTO=none
In /etc/sysconfig/network-scripts/ifcfg-eth0, add the following line if it does not exist:
DEFROUTE=no
Ensure file permissions are set to 644 for ifcfg-ethX fileIn the file /etc/sysconfig/network, add the following line
NOZEROCONF=yes
Create the file /etc/sysconfig/network-scripts/route-eth0
nutanix@FSVM$ sudo vi /etc/sysconfig/network-scripts/route-eth0
Ensure file permissions are set to 644 for route-ethX file
nutanix@FSVM$ sudo ls -hla /etc/sysconfig/network-scripts/route-eth0
Add the following contents in the file to define route for eth1 interface.
10.46.zz.0/22 via 10.46.yy.1 dev eth0
Note: Here 10.46.zz.0/22 is the CVM subnet and 10.46.yy.1 is the gateway of the eth0 interface on the FAVM.
Update the external IP address in the below service config files. Search for the internal IP address within each file and replace it with the external IP address:
/mnt/containers/config/kafka/kafka_config/server.properties,
Take a backup of /etc/sysconfig/iptables file
nutanix@FAVM$ cp /etc/sysconfig/iptables /tmp/sysconfig/iptables.orgnl
Modify the /etc/sysconfig/iptables file with following contents
*filter
In the above rules list:
Replace 10.46.zz.0/22 with CVM subnet Replace 10.46.xx.98/32 with eth1 IP (do retain the /32 suffix at the end in each rule)Replace 10.46.yy.0/21 with eth0 subnet used for FA
1. If the FAVM version is >= 3.2.1, modify the external_ip value in /mnt/containers/config/deploy.env to the eth1 IP address.
2. Reboot the FA VM for all changes to take effect.
3. Execute the following REST API calls so that the UI redirection from the Files page works from Prism:
nutanix@cvm$ curl -k -X GET --header 'Accept: application/json' --user 'admin:password' 'https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform/'
The response from the above call shows that the eth0 interface IP is referenced on the Prism side. We have to modify that to the eth1 interface IP:
[{"uuid":"78844939-f820-4ab9-ab62-1510ef9d46b3","ip":"10.46.yy.127","fs_list":[],"is_deployed":true,"ad_user":null,"ad_domain":null,"ad_password":null,"data_retention_interval":0,"version":"master"}]
Remove the file analytics details from zeus using below command from CVM
nutanix@cvm$ zkrm /appliance/physical/afsfileanalytics
Use the response from the first REST API call as the payload for the next request - replace the internal VLAN IP with the external IP and send the payload as a dictionary, not as a list:
nutanix@cvm$ curl -k --user 'admin:password' -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{ "uuid": "78844939-f820-4ab9-ab62-1510ef9d46b3", "ip": "10.46.xx.98", "fs_list": [], "is_deployed": true, "ad_user": null, "ad_domain": null, "ad_password": null, "data_retention_interval": 0, "version": "master" }' 'https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform/analytics_vms/'
Response
{"uuid":"78844939-f820-4ab9-ab62-1510ef9d46b3","ip":"10.46.xx.98","fs_list":[],"is_deployed":true,"ad_user":null,"ad_domain":null,"ad_password":null,"data_retention_interval":0,"version":"master"}
Note: Replace the respective values marked in bold based on the file analytics configuration
Refresh Prism UI and try accessing FA UI from Files page
Note:
In both cases, check whether the partner servers are configured to use the correct IP address (the external IP for FA). If not, update the partner servers as well on all the file servers.
|
KB2742
|
HW Scenario: No SSDs or HDDs detected - CVM fails to boot - Chassis backplane issue
|
A backplane issue can cause the HW controller to have no visibility to SSD and HDD disks.
The host will be running but the CVM will not be able to boot when it is powered on.
If this occurs during deployment or fails in the first 60 days of deployment this may match a known HW manufacturing issue that impacts a small but significant number of units - in particular the NX-8150 - shipped prior to September 2015.
|
This article helps identify a known HW manufacturing issue that may impact the backplane for a small but significant number of NX-8150 units and other models in a much less significant number (NX-6X35, NX-1x65, NX-3x60 and NX-9x60). The impacted units shipped prior to September 2015.
Problem signature:
The node model most impacted is the NX-8150. The issue has also been confirmed on a very small percentage of returned chassis for multi-node models NX-6X35, NX-1x65, NX-3x60 and NX-9x60.Symptom: CVM fails to boot and the error in the CVM console reports it is checking for the /.nutanix_active_svm_partition (but does not find a suitable boot disk so cannot boot).
Example of this error message in the Nutanix CVM console:
modprobe : module pci:xxxx not found in modules.dep
Symptom: The SSD and HDD drives are not visible to the controller(s) on the node(s) in the chassis.
NOTE: This can be further confirmed using the Rescue ISO to confirm in the Rescue shell.
Diagnosing disk visibility using the Rescue Shell and command lsscsi is documented here:
https://portal.nutanix.com/#/page/docs/details?targetId=Hardware_Administration_Reference-NOS_v4_1:tro_diagnosis_overview_c.html
4. The HW controller for the disks will still be visible to the host, as the controller is available but the disks are not visible to it.
For example, on ESXi or Acropolis (AHV) / KVM host the command "lspci" reports the controller is visible.
Example output on NX-8150:
~ # lspci | grep controller | grep LSI
0000:05:00.0 Mass storage controller: LSI Logic Fusion-MPT 12GSAS SAS3008 PCI-Express [vmhba1]
OTHER NOTES:
The confirmed HW Failures matching the issue in this KB have occurred either during deployment or occurred after time within the first 60 days of deployment, causing the CVM to be suddenly down and not able to boot on the node(s) in the chassis.The HW failure usually persists once it has occurred. However, it is possible the problem could present in the field as an intermittent failure, based on returned chassis/nodes that have worked temporarily in some cases when tested and then failed again.Attempting recovery with the Rescue ISO (svmrescue.iso) or Phoenix ISO will fail as the drives are not visible to recover:
Example of failure error in Phoenix:
ERROR: SVM imaging failed with exception: Traceback (most recent call last)
File "/phoenix/svm.py", line 134, in image
with open("/dev/%s" % disk, "w", 0) as d:
IOERROR: [errorno 123] no medium found: '/dev/sdb'
FATAL: Imaging thread "svm" failed with reason [NONE]
|
If the problem signature matches, dispatch a replacement chassis. For the NX-8150 the single node chassis part is X-CHASSIS-2697V2-NX8150. Include a request for Failure Analysis in the dispatch.
NOTE: The NX-8150 is a single node chassis - it does not have a separate node and chassis.
Nutanix is screening all NX-8150 units in depots (estimated completion in October 2015), and any new units shipping from the factory for all other noted models also. Corrective action is in place at the factory to prevent this issue in the future.
CAUTION: The same boot-failure signature as in step #2 can be seen if Hyper-V based systems lose visibility to their disks due to upgrade. This is not a hardware issue, so before sending a node, be sure this is not a software/configuration problem. Noting ONCALL-985 - due to non-Nutanix disks being presented to the Hyper-V host.
|
KB7774
|
XenServer - Location of 1 click upgrade json metadata files
|
Location of the 1-click XenServer upgrade metadata / json files
|
XenServer 1-click upgrades require metadata JSON files that may not be on the Portal site for customers. Note that this applies to older XenServer versions up to 7.5. For XenServer 7.5, the JSON is already in the Portal and the following link can be used to download it: http://download.nutanix.com/hypervisor/esx/xen-metadata-7_5.json
|
All XenServer upgrade metadata (JSON) files can be found at the following temporary internal location: http://10.1.133.14/xen/
PM and Engineering are working on a process to have the JSON files in the Portal as with other hypervisors. Once that work is completed, this KB will be archived.
|
KB16726
|
Cisco UCS - Node Imaging may fail with Error "NodeConfiguration] Failed to Deploy/Activate the profile of the Node"
|
This KB article describes an issue where Node Imaging may fail on Cisco UCS nodes, with an error message as follows "NodeConfiguration] Failed to Deploy/Activate the profile of the Node"
|
This KB article describes an issue where Node Imaging may fail on Cisco UCS nodes, with an error message as follows:
NodeConfiguration] Failed to Deploy/Activate the profile of the Node
Identification of the issue:
1. Foundation Central logs will have the following log snippet (/home/nutanix/data/logs/foundation_central.out on the PCVM):
2024-02-22T08:03:18.148Z serversetting_utils.go:102 [INFO] [Deployment="cal73036-60b1-9c82-4398-a2967dd23c6c" Node="246e383d-7f49-4940-457a-62a9981c6251"] Checking ServerSetting(65cc641a657373301fa40c09) Action(PowerCycle) Status Completed: Applied
2. On the BIOS Splash screen the following message is seen :
"Couldn't Create Moklist: Volume Full
|
A permanent fix for this issue is planned by Cisco in an upcoming firmware release. WORKAROUND:
1. Power on the server.
2. On the BIOS splash screen, press F6 to bring up the boot device menu.
3. Select "Built-in UEFI shell".
4. Enter the command:
dmpstore -d *Boot00*
5. Reboot the server
|
KB4232
|
How to downgrade ESXi host
|
This article deals with downgrading the ESXi version and rolling back to an older version. It also outlines new guidelines for ESXi 7.x.
|
Follow the steps provided in the solution section to downgrade the ESXi host version to the previous release.
NOTE: An ESXi host can only be downgraded to the older version from which it was upgraded. Once the ESXi host has been downgraded to the older version, the normal upgrade procedure needs to be followed to get to the latest release.
|
For ESXi 6.7 or Earlier
1. Migrate user VMs from the host to be downgraded (Maintenance Mode, etc.).
2. Check that the Nutanix cluster status is fully healthy:
$ cluster status
3. Gracefully power down the CVM; refer to KB 3270 https://portal.nutanix.com/#/page/kbs/details?targetId=kA0320000004H2NCAU
4. Connect to the ESXi host's IPMI web port for access to the host's hardware console.
5. From the IPMI console control, perform "Power Control -> Reset Server" to reboot the ESXi host.
6. At the ESXi boot-up screen, press "SHIFT+R" (after the device mapping) to enter ESXi "Recovery" mode as seen in the screenshot below.
7. The following warning is displayed in Recovery mode:
CURRENT DEFAULT HYPERVISOR WILL BE REPLACED PERMANENTLY
DO YOU REALLY WANT TO ROLL BACK?
8. Press Y to roll back the build, and the host will automatically reboot into the previous version of ESXi.
For more info, check VMware KB 1033604 https://kb.vmware.com/s/article/1033604
For ESXi versions 7.0 or later:
If the previous version is 7.0 or above, follow the same procedure as above. However, it is not possible to roll back to releases prior to 7.0 due to partition changes on the boot device. In this scenario, consider the following options:
a. Host Boot Disk replacement procedure
b. Perform a reinstall of the hypervisor - select the option "Configure, Install Hypervisor" (create a KVM-only ISO using Phoenix - KB-3523 https://portal.nutanix.com/kb/000003523)
|
""Title"": ""SMB strict sync being disabled (default) could result in data loss on AFS shares""
| null | null | null | null |
KB14801
|
File Analytics upgrade failed with Error: Upgrade failed: sudo bash /opt/nutanix/verify_services.sh failed remotely
|
File Analytics Upgrade failure due to read-only filesystem
|
If the File Analytics filesystem is in read-only mode and an upgrade is initiated, the following error is logged in the LCM Logs.
nutanix@NTNX-17FWZG3-A-CVM:X.X.X.205:~$ allssh 'grep -iR "Upgrade failed: sudo bash" ~/data/logs/lcm_op*'
Confirm the read-only file system status by checking the File Analytics dmesg:
Apr 12 09:44:07 NTNX-10-62-4-13-FAVM mount_volume_group.sh[108924]: Try 'dirname --help' for more information.
File analytics GUI will also be inaccessible in this scenario.
|
As a workaround, you can restart the FAVM to resolve the read-only file system issue. Once the reboot is complete, the File Analytics Upgrade should proceed without any issues.
|
KB14091
|
On IAM enabled Prism Central, AD or LDAP user unable to edit VM or open VNC console or launch the registered PEs via PC Cluster Quick Access.
|
On IAM-enabled Prism Central, Active Directory or LDAP users may see errors while trying to edit a VM or open the VM VNC console if the login username contains uppercase letters.
|
On IAM-enabled Prism Central (PC), Active Directory or LDAP users may see errors or failures while:
1. Editing a VM, or
2. Opening the VM VNC console, or
3. Launching the registered Prism Elements (PE) via Prism Central Cluster Quick Access
These issues occur when users log in to Prism Central and the login username or domain name contains uppercase letters.
Identification:
The affected user is an AD/LDAP user and the login username contains uppercase letters, e.g., [email protected]. PE quick access, VM update, and VM VNC console work OK for the built-in PCVM admin user.
Fetch GPUs error: Request failed with status code 500
On an attempt to open VM VNC console, the console does not open, and an error similar to the following is displayed in the UI:
Error fetching VM details
Launching PEs from the PC quick launch fails and is stuck at "loading" forever, and the browser API logs show the error:
Failed to load resource: the server responded with a status of 500 ()
Prism UI reports error:
Throwing exception from UserAdministration.getLoggedInUser", "Error while retrieving the logged in user
Prism Gateway log may contain error messages similar to the below ones:
nutanix@PCVM:~$ less ~/data/logs/prism_gateway.log
Mercury service may terminate and restart with an error similar to the following:
nutanix@PCVM:~$ less ~/data/logs/mercury.FATAL
Note: Mercury service termination and restart may cause various symptoms manifesting as failure to load the PCVM UI elements or API calls failure.
|
Nutanix Engineering is aware of the issue and is working on a solution. If you observe described symptoms, contact Nutanix Support https://portal.nutanix.com for a workaround and mention this KB number in the case description.
|
KB5033
|
Prism Central: Create an image from the disk of a specific VM
|
This article describes how to create an image directly from the disk of a specific VM by using Prism Central.
|
This article describes how to create an image directly from the disk of a specific VM by using Prism Central (PC).
Image create, update, delete (CUD) behavior depends on whether a cluster (also known as Prism Element) is registered to Prism Central.
If the Prism Element (PE) cluster is newly created and has never been registered with a Prism Central instance (that is, never managed by Prism Central), all image CUD operations will be allowed on the PE cluster.If the PE cluster is registered with a Prism Central instance (that is, managed by Prism Central), CUD operations will be blocked on the PE cluster. Ownership of images is migrated from PE to Prism Central. CUD operations must be performed through Prism Central.
When CUD operations are blocked on Prism Element, they cannot be performed through the version 0.8 API, version 2.0 API, Prism Element web console, or acli. These CUD operations are replaced by the Prism Central version v3 API, Prism Central nuclei command line or Prism Central web console.
The ability to create an image directly from a specific VM's disk was introduced in AOS 6.0.
For versions prior to 6.0, an image can be created via the VM disk's URL.
|
For AOS 6.0 or later:
Follow the instructions in the Adding Images from a VM Disk https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-images-upload-from-vm-disk-pc-t.html section of the Prism Central Infrastructure Guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc:Prism-Central-Guide-vpc.
For AOS prior to 6.0:
SSH to any Controller VM (CVM) on the cluster that contains the image.Use the acli vm.get include_vmdisk_paths=1 option to get the VMdisk NFS path (vmdisk_nfs_path) of the image associated with the VM. The VMdisk NFS path includes the VMdisk UUID and the source storage container name. In the examples below, commands and the information needed are in bold.
List all VMs to get the VM name, then find the VM's vmdisk_nfs_path.
nutanix@cvm$ acli vm.list
Use the Prism Central web console or API to create the image using the following URL and the value of vmdisk_nfs_path from the previous command.
nfs://<cvm_ip_address>/<vmdisk_nfs_path>
For example:
nfs://<cvm_ip_address>/my-container/.acropolis/vmdisk/12c56709-257b-42a2-87c8-04a02b15e502
The Adding Images from a Remote Server https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-images-upload-from-remote-server-pc-t.html section of the Prism Central Infrastructure Guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc:Prism-Central-Guide-vpc describes how to add an image via a URL.
Alternative method for AOS prior to 6.0:
SSH to any Controller VM (CVM) on the cluster that contains the image.Use acli to get the VMdisk UUID and container ID of the image associated with the VM. In the examples below, commands and the information needed are in bold.
List all VMs to get the VM name, then find the VM's vmdisk_uuid and container_id.
nutanix@cvm$ acli vm.list
Use ncli to get the name of the container ID associated with the VMdisk.
nutanix@cvm$ ncli container list id=<container_id>
For example:
nutanix@cvm$ ncli container list id=1234
Use the Prism Central web console or API to create an image using the following URL and the value of vmdisk_uuid and the container name from the previous commands.
nfs://<cvm_ip_address>/<source_storage_container_name>/.acropolis/vmdisk/<vmdisk_uuid>
For example:
nfs://<cvm_ip_address>/my-container/.acropolis/vmdisk/12c56709-257b-42a2-87c8-04a02b15e502
The Adding Images from a Remote Server https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide:mul-images-upload-from-remote-server-pc-t.html section of the Prism Central Infrastructure Guide https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc:Prism-Central-Guide-vpc describes how to add an image via a URL.
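For illustration, the same NFS URL can also be supplied to the Prism Central v3 images API. The sketch below is a minimal example; the image name, credentials, and IPs are placeholders, and the exact payload accepted may vary by PC version:

curl -k -u admin:<password> -X POST "https://<pc_ip>:9440/api/nutanix/v3/images" \
  -H "Content-Type: application/json" \
  -d '{
        "spec": {
          "name": "my-vm-disk-image",
          "resources": {
            "image_type": "DISK_IMAGE",
            "source_uri": "nfs://<cvm_ip_address>/my-container/.acropolis/vmdisk/12c56709-257b-42a2-87c8-04a02b15e502"
          }
        },
        "metadata": { "kind": "image" }
      }'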
|
KB10566
|
ESXi host unresponsive after upgrade to 6.7 P03 or later due to AHCI driver issue
|
ESXi host may enter into an unresponsive state after upgrading to 6.7 P03 or later due to AHCI driver issue. This is a known issue reported by VMware (See VMware KB 81554).
|
ESXi host may enter into an unresponsive state after upgrading to 6.7 P03 or later due to AHCI driver issue. As per VMware KB 81554 https://kb.vmware.com/s/article/81554, this issue is caused by a race condition in the driver while processing I/O resulting in abort failures. This causes the host to become unresponsive in the vCenter Server. The VMs are not impacted because of the issue.
Symptoms:
ESXi host gets disconnected in vCenter Server.vCenter will not be able to manage the affected host and its VMs.Host was recently patched for ESXi 6.7 P03 and uses vmw_ahci driver.Alt+F12 page of the ESXi console and vmkernel logs are flooded with driver abort messages.
[root@esxi]# less /var/log/vmkernel.log
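To confirm the host is actually using the vmw_ahci driver, the storage adapters can be listed with a standard esxcli command (output columns may vary by ESXi build):

[root@esxi]# esxcli storage core adapter list | grep -i ahci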
The following log entries may also be seen:
Admission failures in vmkernel log:
vmkernel.2:2020-12-28T09:10:33.610Z cpu19:12643732)MemSchedAdmit: 471: Admission failure in path: ssh/sshd.12643733/uw.12643733
Uhura (/home/nutanix/data/logs/uhura.out on CVM) showing could not connect to host:
uhura.out.20201228-112444:2020-12-28 18:11:52 WARNING connection.py:449 Failed to connect to 8bb54411-1ab7-4464-9cc1-682d54526a44 (x.x.x.x)
Syslog showing hostd "out of memory":
2020-11-29T07:30:00Z crond[2099540]: USER root pid 3639787 cmd /bin/hostd-probe.sh ++group=host/vim/vmvisor/hostd-probe/stats/sh
"Lock is currently held by process" errors in syslog:
2020-11-29T07:25:00Z crond[2099540]: USER root pid 3639016 cmd /bin/hostd-probe.sh ++group=host/vim/vmvisor/hostd-probe/stats/sh
vpxa complains about hostd OOM (Out-Of-Memory):
2020-11-29T07:19:56.849Z info vpxa[2100998] [Originator@6876 sub=hostdstats] Skipping stat update due to stale sample from hostd
vmkwarning also complains SSH resource group encountered OOM issue:
$ zgrep -m1 ssh vmkwarning.*
The hostd log file shows no logging for ~15 minutes:
2020-11-29T07:29:50.393Z info hostd[2100516] [Originator@6876 sub=Vimsvc.ha-eventmgr] Event 230118 : Physical NIC vmnic12 linkstate is down.
For comparison, successful vs. unsuccessful hostd probe:
Successful hostd probe:
2020-11-29T07:20:00.213Z - time the service was last started, Section for VMware ESX, pid=3638157, version=6.7.0, build=16713306, option=Release
Unsuccessful hostd probe, it's missing "Successfully acquired hardware" line after "Current process ID: 3639019" line:
2020-11-29T07:25:00.206Z - time the service was last started, Section for VMware ESX, pid=3639019, version=6.7.0, build=16713306, option=Release
vmkernel complains that hostd is unresponsive:
vmkernel.18.gz:2020-11-29T07:56:05.588Z cpu52:2097232)VmkEvent: 94: Msg to hostd failed with timeout, dropping function 2076 len 96
|
The fix for this issue is present in the following versions:
VMware ESXi 6.7 P05, Patch Release ESXi670-202103001 (Build 17700523) or later
VMware vSphere ESXi 7.0 Update 2 or later
Please see VMware KB 81554 https://kb.vmware.com/s/article/81554 for more detail on the issue.
Workaround:
Schedule a maintenance window to gracefully shut down the VMs, including the Nutanix Controller VM (CVM) and reboot the ESXi server to bring it back online/manageable in vCenter Server.
Note: You cannot live-migrate the VMs or keep the ESXi host in maintenance mode because the host is disconnected from vCenter. Hence, downtime is required for the VMs.
Follow below steps to gracefully shut down the CVM:
Follow the Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-cluster-health-verify-t.html chapter to make sure that the cluster can tolerate a node being down. Do not proceed if the cluster cannot tolerate the failure of at least 1 node.
Shut down the CVM:
nutanix@cvm$ cvm_shutdown -P now
|
KB12981
|
Move | Linux VM migration failing for VMs created from previously migrated template
|
Issue created by leftover files from previous Move runs prevents using a VM for another migration
|
When using Nutanix Move to migrate a Linux VM, the following error message may be seen during a failure:
|
Explanation:
In Move versions below 4.4, several files are left behind on Linux VMs which can cause issues with a second run of Move on the same VM. In the case above, a template was created on an ESXi source from a VM called ubu1 which had previously been migrated with Move, and the following files and directories were left behind on the template:
/etc/sourcevm-idfile
/etc/sourcevm-dhcpfile
/opt/Nutanix/
Below Move version 4.4.0, the file /etc/sourcevm-idfile in particular causes the error seen when trying to migrate.
Solution 1:
Upgrade to Move 4.4.0 or above as the issue has been fixed with ENG-428549 https://jira.nutanix.com/browse/ENG-428549
Solution 2:
If Move cannot be upgraded, remove the file /etc/sourcevm-idfile and re-try migration for the affected VM(s).
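A minimal cleanup sketch on the affected Linux VM or template. Per the article, only /etc/sourcevm-idfile is strictly required for the fix; removing the other leftovers is optional housekeeping:

# Required: remove the source VM ID file that triggers the error
sudo rm -f /etc/sourcevm-idfile
# Optional: remove the other leftover Move artifacts listed above
sudo rm -f /etc/sourcevm-dhcpfile
sudo rm -rf /opt/Nutanix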
|
KB16347
|
NDB - Alert - Pause Time Machine
|
Investigating Paused Time Machine on NDB
|
Overview
This Nutanix Article provides the information required for troubleshooting the alert Pause Timemachine on your NDB instance.
Alert Overview
The Pause Timemachine alert is generated if the Time Machine (TM) of a DB is placed in PAUSED state by NDB. A time machine maintains database snapshots and transaction logs. For every source database provisioned or registered with NDB, NDB creates a time machine.
Cause
Successive TM Auto Snapshots failure
Following is a complete example of one such alert:
Time Machine '<TM_Name>' has been is PAUSED. Total X create_snapshots have successively failed for Time Machine (id:<TM_UUID>). As per the configured settings, the Time Machine should be PAUSED if the successive failed 'create_snapshot' operation count is greater than 49
Output messaging
[
{
"Alert Title": "Alert Message",
"Pause Timemachine": "Time Machine '' has been is PAUSED"
},
{
"Alert Title": "Schedule",
"Pause Timemachine": "Event-Triggered"
},
{
"Alert Title": "Description",
"Pause Timemachine": "Time Machine is in PAUSED state"
},
{
"Alert Title": "Impact",
"Pause Timemachine": "Snapshots, Log Catchups and Copy Logs will fail for the TM"
},
{
"Alert Title": "Causes of failure",
"Pause Timemachine": "Successive TM Auto Snapshots failure"
},
{
"Alert Title": "Resolutions",
"Pause Timemachine": "Ensure that the DB/DB server VM exists on the cluster, is powere ON, healthy and reachable"
}
]
|
Troubleshooting
Check the Database Server VM status on NDB UI:
Navigate to the NDB UI.
On the Database Server VMs page, check the status of the DB server VM.
Hover the mouse pointer over the status dot and note the status message.
If the status is shown in green, try unpausing/resuming the TM and taking a manual snapshot.
If the TM is unpaused/resumed successfully and the manual snapshot operation goes through, snapshot, log catch-up, and copy log tasks are triggered as soon as the TM is unpaused. Check whether these succeed; if not, log analysis is required.
If the status is shown in red, verify if the DBVM exists on the cluster, is powered on, is reachable, and is running normally.
If the TM does not get un-paused, note the error thrown and check DB server VM accordingly.
For instance, if the DBVM has been powered off, the following error would be thrown:
Create Snapshot and Log Catchup operation can not be dispatched. DBServer (name: <DBVM_Name> ip:<DBVM IP>) associated with ERA_DATABASE (name:<DB Name>) is in 'POWERED_OFF' state . Time-Machine metadata has been updated to retry Resume Create Snapshot and Log Catch Up in some time.
If the DBVM has been deleted from the cluster, the following error might be seen:
None of the DBServers which hosts the database is currently eligible as per the configured Era Active policy
Execute the following commands on the NDB server (or agent) to list the log files which contain the signature that defines the error:
era@localhost: egrep -Rl fatal *
Resolving the Issue
If the DBVM is in a powered-off state, power it on after getting confirmation from the customer and check whether the status goes green in the NDB UI. If yes, unpause the TM and monitor the tasks that follow. If the DBVM is unreachable, the error keyword ERA_DAEMON_UNREACHABLE is normally seen - check network connectivity between the DBVM and NDB and ensure that traffic is not blocked on the firewall.
Refer to the Ports and Protocols page (select appropriate Software Type) and use the following command to verify port connectivity:
nc -vz <destination IP> <port>
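For example, with a hypothetical DB server VM at 10.10.10.25 and port 443 (both placeholders - substitute the values from the Ports and Protocols page):

era@NDB$ nc -vz 10.10.10.25 443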
Also, check that the era driver/service is running fine on the DBVM and is not blocked by antivirus. Restarting the driver/service will not help if it is blocked by antivirus.
Collecting additional information
Navigate to the failed operation.
Use the "Generate Bundle" button.
Use the "Generate" button in the newly opened dialog box. Wait until the generation is complete.
Use the "Download" button when activated to download the diagnostics bundle.
Attaching to the case
Navigate to the case on the Nutanix Portal > Click the "Reply" button > Attach the downloaded diag bundle > Add an appropriate comment > Click "Submit".
If the size of the diag bundle being uploaded exceeds 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/000001294.
Closing the case
If this KB resolves your issue and you want to close the case, click on the Thumbs Up icon next to the KB Number in the initial case email. This informs Nutanix Support to proceed with closing the case.
|
KB10238
|
Nutanix Files Unavailable due to Stale ARP entries when ARP Flooding is disabled in Cisco ACI
|
ARP entries in FSVMs are not updated on all FSVMs when ARP Flooding is disabled in Cisco ACI
|
Environments where the ARP Flooding feature is disabled on network devices face issues with cluster connectivity - especially with VIP/DSIP addresses, as they tend to move between the CVMs. This has been seen as the default when managing a Cisco ACI environment in Stretched Fabric mode, which manages remote ACIs. When the Prism leader moves to another CVM, the cluster VIP is unconfigured from the previous CVM and configured on the CVM that has taken the Prism (VIP) leader role. Likewise, when the Stargate NFS leader moves to another CVM, the cluster DSIP is unconfigured from the previous CVM and configured on the CVM that has taken the Stargate NFS (DSIP) leader role. The Prism leader and Stargate leader do not necessarily reside on the same CVM and do not necessarily change simultaneously, as they are independent services. During the VIP or DSIP configuration process, the VIP or DSIP undergoes a duplicate IP check by executing "arping", and subsequently a GARP is broadcast so that all network devices on the VXLAN update their locally cached ARP entry.
This KB focuses on the issues seen with the DSIP; however, the same behavior can affect the VIP as well.
Issues seen:
UVMs/FSVMs which use the DSIP for storage access may lose access to attached iSCSI disks when the DSIP is migrated.
FSVMs will panic and reboot, and Nutanix Files shares will become unavailable/locked up until the FSVMs recover.
Identifying the Issue (checking the DSIP ARP cache on FSVMs as an example):
The current cluster DSIP configuration can be obtained from the below output:
nutanix@cvm:~$ DSIP="`ncli cluster info|grep "External Data Services"|awk -F ':' '{print $2}'`" ; if [ -n "$DSIP" ] ; then echo "DSIP assigned is : $DSIP" ; allssh "ifconfig |grep -C1 ${DSIP}" ; else echo "No DSIP is assigned" ; fi
From the above, we see the following:
DSIP (Stargate NFS/iSCSI leader) is on the CVM with IP xx.xx.xx.3 and MAC address 50:6b:8d:0f:fd:3f.
If the DSIP is configured for the cluster but the above output is empty, contact Nutanix Support to investigate further.
Using arping to resolve the IP address to a MAC address shows no response received for the DSIP from devices (CVMs/FSVMs/UVMs) in the VLAN for which ARP Flooding is disabled, even when the DSIP is pingable.
On FSVM:
nutanix@FSVM:~$ sudo arping -D -c 1 -I eth0 x.x.x.51
If Stargate on CVM x.x.x.3 is restarted, or the CVM is restarted (due to an upgrade or maintenance), we see the DSIP migrate to another CVM:
nutanix@cvm:~$ DSIP="`ncli cluster info|grep "External Data Services"|awk -F ':' '{print $2}'`" ; if [ -n "$DSIP" ] ; then echo "DSIP assigned is : $DSIP" ; allssh "ifconfig |grep -C1 ${DSIP}" ; else echo "No DSIP is assigned" ; fi
From the above, we see the following:
DSIP has moved to CVM with IP xx.xx.xx.1 and MAC address of 50:6b:8d:6a:1c:cd
Checking the FSVM's ARP cache after DSIP change, we see that some still reference the old MAC address for CVM xx.xx.xx.3 (50:6b:8d:0f:fd:3f), which no longer hosts the DSIP address:
nutanix@FSVM:~$ allssh "arp -a |grep xx.xx.xx.51"
When the DSIP has been migrated and ARP Flooding is disabled, the following occurs:
The change of MAC address for the DSIP is not broadcast to all devices on the VXLAN.
The local ARP cache entry on these devices only expires after 5 minutes.
FSVMs will lose connectivity to the iSCSI disks, and shares hosted on affected FSVMs will become unresponsive.
FSVMs will force a reboot 240 seconds after disk access is lost. After the FSVM has rebooted, connectivity is restored, as all cached ARP entries are cleared by the reboot and updated accordingly.
|
If this is the case, the solution is to enable ARP Flooding in the Cisco ACI for the bridge domain that the CVMs/FSVMs are in.
This can also be validated by using an arping which should receive a response:
nutanix@FSVM:~$ sudo arping -D -c 1 -I eth0 xx.xx.xx.51
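As a temporary mitigation only (the underlying fix remains enabling ARP Flooding), a stale neighbor entry can be flushed on an affected FSVM so that the next ARP request repopulates it. This is a standard iproute2 command offered as a hedged sketch:

nutanix@FSVM:~$ sudo ip neigh flush to xx.xx.xx.51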
|
""Title"": ""Mandatory replacement of NVIDIA X-GPU-M10 PCIe cards in Nutanix NX-3175 and NX-3155 platforms""
| null | null | null | null |
KB10818
|
Move - IPv6 may get enabled after migration, if it was disabled on source VM.
|
IPv6 may get enabled on the guest VM after migration by Move. Disable IPv6 manually in the guest OS settings.
|
IPv6 may be enabled after the migration if it is disabled on the source VM. For example, below are Windows VM NIC details.
Source VM, before migration:
Target VM, after migration:
|
IPv6 is not supported by Move. Disable IPv6 manually in the guest OS after migration.
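On a Windows guest, one common way to disable IPv6 on all adapters is via PowerShell run as Administrator. This is a hedged sketch (it unbinds the IPv6 component from the adapters; Linux guests require a different procedure):

# Unbind the IPv6 protocol (ms_tcpip6) from every network adapter
Disable-NetAdapterBinding -Name "*" -ComponentID ms_tcpip6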
|
KB11778
|
Flow: Incorrectly showing DNS traffic as dropped/discovered.
|
This issue occurs because Flow security policies do not allow traffic that is classified as 'related'; this is intentional since allowing related traffic could potentially pose a security risk.
|
In some situations, Flow security policies drop (when enforced), or show as discovered (when monitoring), traffic that seems like it should be allowed based on a configured policy.
This most commonly occurs in an Active Directory environment where DNS traffic has been allowed by whitelisting TCP and UDP port 53, but it can also occur in some other uncommon situations in which protocols switch ports midway through an exchange (this can occur with FTP).
In the below example of an active directory policy exhibiting this issue, even though UDP port 53 is allowed, traffic is dropped when the policy is enforced and shown as discovered when monitored.
|
When this occurs, additional traffic must be whitelisted to resolve the issue.
In the case of DNS traffic, it is sufficient to also allow all ICMP traffic as part of the same inbound or outbound rule that allows DNS traffic - this allows ICMP error messages to be sent to ensure that traffic is sent on the correct ports. Note that this must be done both at the traffic source and the traffic destination if the source and destination are protected by two different policies; otherwise, traffic will still be dropped.
Add ICMP as a service to the Flow policy:
1. Click the Update button on the Flow policy.
2. Navigate to the secured application.
3. Click the line which has port 53 configured between Inbound and AppTier.
4. Click "Add a row".
5. Select ICMP, set the port to ANY and the code to ANY.
6. Save and update the policy.
Below is the reference screenshot along with other services and ports.
If desired, traffic captures using tcpdump can be performed at the VM tap interfaces and analyzed to determine what ports and protocols, if any, need to be whitelisted in situations not involving DNS or where the above does not help. If no unexpected un-whitelisted traffic is seen during analysis, the issue may be different.
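A minimal capture sketch on the AHV host (the tap interface name tap0 is a placeholder; find the VM's actual tap interface with virsh domiflist first):

[root@ahv]# tcpdump -i tap0 -nn "port 53 or icmp"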
|
KB17229
|
PCDR fails if ICMP is disabled on the Destination PE environment
|
PCDR fails if ICMP is disabled on the Destination PE environment.
|
This knowledge base article discusses a specific pre-check failure during PCDR Restore. As a prerequisite, network connectivity is required between the Destination PE and the PC's gateway IP.
As part of the PCDR restore workflow, connectivity from the destination PE cluster to the PC gateway IP is verified using ICMP. In environments where ICMP is disabled, the PCDR restore may indicate that the gateway is not reachable.
The Recover Prism Central wizard will show the below error.
This issue can be identified by examining the traces in prism_gateway.log, as shown below.
INFO 2024-03-05 19:30:19,380Z http-nio-127.0.0.1-9081-exec-60 [] services.impl.PcRecoveryServiceImpl.deployPc:1607 Prism Central Deployment RPC returned with: error: "Gateway IP provided is not reachable"
This check is part of the pre-check process before initiating a PCDR restore.
This issue was observed in ONCALL-17220 https://jira.nutanix.com/browse/ONCALL-17220.
|
Nutanix Engineering is aware of this issue and is working on a fix under ENG-640700 https://jira.nutanix.com/browse/ENG-640700. Currently, the pre-check relies on ICMP to check network connectivity. As part of the resolution, the plan is to switch from using ICMP to using socket connections to verify connectivity.
ENG-640702 https://jira.nutanix.com/browse/ENG-640702 - An improvement has been raised to print the gateway IP address in the Prism gateway logs.
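Until the fix is available, you can verify whether ICMP to the PC gateway is permitted from a destination PE CVM (standard ping; the IP is a placeholder):

nutanix@CVM:~$ ping -c 3 <pc_gateway_ip>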
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Display information about the processor/CPU""
| null | null | null |
KB13810
|
Boot order label at HPE node with UEFI boot mode changed
|
In some situations, you may observe the boot order label as "Red Hat Enterprise Linux" after the node Foundation process.
|
In some situations, you may observe the boot order label as "Red Hat Enterprise Linux" after the node Foundation process.
The latest Foundation adds the boot order as "Nutanix AHV".
|
This is a cosmetic issue and can be safely ignored.
|
KB13590
|
Lenovo - Disk Removal initiated after manual Firmware Upgrade of Toshiba disks on Lenovo Hardware
|
Toshiba HDD Disk Serial Number changes after Disk Firmware Upgrade to TJ65 Firmware on Lenovo HX Hardware.
|
You upgrade Toshiba drive firmware to TJ65 on Lenovo hardware using Lenovo-available tools, as Toshiba drive firmware upgrades are not supported through LCM. After the firmware upgrade, the disk serial number changes, initiating disk removal on the Nutanix cluster for the disks with the older serial numbers.
Toshiba drives are shipped with two different serial numbers: one is Lenovo's own and the other is the vendor's (disk manufacturer) serial number. Nutanix is designed to read the vendor serial number. With the Toshiba TJ65 drive firmware, the drive only presents the Lenovo serial number to the cluster.
Issue symptoms
Hades (disk service) logs contain messages similar to the below:
nutanix@NTNX-CVM:1:~/data/logs$ grep "Writing ASUP data for disk_serial" hades.out
The disks listed on the node have different serial numbers:
list_disks output:
Disk listed on another node where the upgrade is not performed will follow a different serial number pattern:
nutanix@NTNX-CVM::~/data/logs$ allssh "list_disks"
Following is the SMART output for the disk with the new serial number:
sudo smartctl -a /dev/sdX -T permissive
Following is the SMART output for the disk where the update is NOT performed:
nutanix@NTNX-CVM::~$ sudo smartctl -x /dev/sdk -T permissive
After the disk firmware upgrade, the cluster detects a disk with a new serial number in the same disk slot and then initiates disk removal for the old serial number. Once the disk removal is completed, the disk with the new serial number needs to be added back to the cluster.
|
Lenovo Best Recipe 5.2 https://support.lenovo.com/us/en/solutions/ht514110 mentions not to upgrade the disk firmware. Lenovo is working to provide an updated disk firmware package without the TJ65 firmware included. If the upgrade has already been performed and disk removal has been initiated on the cluster, follow these instructions:
Verify the upgrade has been performed on one node only. If multiple nodes are affected, immediately contact Nutanix Support http://portal.nutanix.com without taking any manual action.
Verify that the cluster has enough storage resources to rebuild the data of the disk being removed on the rest of the cluster, then allow the disk removal process to complete. If the cluster does NOT have enough space to rebuild the data for the disk being removed, engage Nutanix Support http://portal.nutanix.com.
If conditions 1 and 2 are verified which means only one node is affected and there is enough capacity in the cluster to complete the disk removal:
Monitor that the drive removal operation is fully completed in Prism for all the affected drives.
Only once the removal operation is completed, use the "Re-partition and Add" workflow in Prism https://portal.nutanix.com/page/documents/details?targetId=Hardware-Replacement-Platform:bre-drive-replace-complete-t.html on the disks with the new serial numbers to add them back into the cluster. If the removal operation is not making progress after a few hours, contact Nutanix Support http://portal.nutanix.com without taking any manual action.
Please refer to the Lenovo Techtip https://datacentersupport.lenovo.com/us/en/solutions/ht514200 for more details.
|
KB15739
|
Alert - A130397 - VDiskDataMismatchDetected
|
Investigating VDiskDataMismatchDetected issues on a Nutanix cluster.
|
The alert VDiskDataMismatchDetected verifies the consistency of snapshots and their related vdisks and notifies the customer if a mismatch related to Field Advisory #116 https://download.nutanix.com/alerts/Field_Advisory_0116.pdf is detected.
Sample Alert
Block Serial Number: 12SMXXXXX78
Output messaging
[
{
"Check ID": "Potential data mismatch detected."
},
{
"Check ID": "Replication configuration"
},
{
"Check ID": "Contact Nutanix Support."
},
{
"Check ID": "Checksum mismatch has been detected in one or more snapshots, which could affect recovery operations. Corrective actions must be taken as soon as possible."
},
{
"Check ID": "A130397"
},
{
"Check ID": "VDiskDataMismatchDetected"
},
{
"Check ID": "Detected vdisk data mismatch due to which recovery from one of the snapshots could be affected. Contact Nutanix support for further analysis."
}
]
|
Resolving the issueFor clusters which encounter the Alert, contact Nutanix Support http://portal.nutanix.com/ to help investigate the source of the vdisk data mismatch and for remediation steps.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command on CVM. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB9071
|
Phoenix installation failing with "TypeError: ord() expected a character, but string of length 0 found" with Phoenix 4.5.2
|
Phoenix installation failing with "TypeError: ord() expected a character, but string of length 0 found" with Phoenix 4.5.2
|
Phoenix installation failing with the following message and stack trace:
TypeError: ord() expected a character, but string of length 0 found
|
This has been fixed in Foundation 4.5.3. If Foundation imaging is not upgradable or available, use Phoenix version 4.5.3 https://portal.nutanix.com/#/page/Phoenix or later.
|
KB6823
|
Hyper-V - Unable to Install Hyper-V (Bare-Metal)
|
During installation or re-install of Windows 2016, you are unable to install windows on the partition.
|
Whenever performing a bare-metal install of Hyper-V 2016 you may run into an issue where you cannot install Windows on the selected partition:"Windows can't be installed on drive 6 partition 2"When selecting "Show details" the following message may be displayed:"Windows cannot be installed to this disk. This computer's hardware may not support booting to this disk. Ensure that the disk's controller is enabled in the computer's BIOS menu"
|
Ensure the BMC and BIOS are on the latest version. If they are not, update them by following the steps in KB 2896 https://portal.nutanix.com/kb/2896 and KB 2905 https://portal.nutanix.com/kb/2905. Update the BMC first and then the BIOS. If you are still unable to install Windows after updating the BMC and BIOS, consider engaging Nutanix Support at https://portal.nutanix.com.
|
KB8540
|
Pre-Upgrade Check: test_version_check
|
test_version_check checks if all the nodes in the cluster already have the firmware version for a device installed.
|
test_version_check is a Pre-Upgrade check that checks if all the nodes in the cluster already have the firmware version for a device installed.
Failure messages generated by this check:
“Could not read hades config on svm [svm id]”: If hades config on a Controller VM (CVM) could not be read.
“All nodes in the cluster already have [device] firmware [version] installed. Upgrade is not required”: If the device’s firmware has already been installed on all the nodes in the cluster.
The check passes when:
At least one node has a different firmware version installed.
Firmware information in hades config is not populated on at least one node.
In these cases, the firmware upgrade is not redundant and is required.
For any other condition not mentioned above, the check is set to fail by default.
|
If the check fails on LCM with “All nodes in the cluster already have [device] firmware [version] installed. Upgrade is not required”:
In the case of Dell hardware:
Engage Nutanix Support at https://portal.nutanix.com.
If it is not a Dell hardware:
You do not need to upgrade that particular device.
Uncheck the upgrades for that device and proceed.
If the check fails with “Could not read hades config on svm [svm id]”:
Check if all the CVMs are up and reachable (This command verifies that all the CVMs can be reached via SSH).
nutanix@cvm$ allssh hostname
Check if all the services in the cluster are UP:
nutanix@cvm$ cluster status | grep -v UP
If all the services are okay and you still encounter a failure in the pre-check step, collect the above-mentioned information and consider engaging Nutanix Support at https://portal.nutanix.com.
|
KB10997
|
How to remove a decommissioned cluster from the Nutanix support portal
|
Instructions on removal of the decommissioned cluster when it is no longer in service but is still listed on the Nutanix support portal.
|
When reviewing asset information in the Nutanix Support portal https://portal.nutanix.com/page/assets/blocks/list, you see assets that are out of service and you wish to remove them.
|
Contact your account team if you wish to remove a cluster from the portal asset list. Include the serial numbers of the related assets and a confirmation that these assets are no longer in use in your request. If you need further assistance gathering this information or contacting your account team, open a non-technical Support Case https://portal.nutanix.com/page/smart-support/cases/list.
|
KB6455
|
How to determine vdisk usage using stats_tool command when snapshots are in use.
|
There are multiple ways to check vdisk usage in a Nutanix cluster, and stats_tool is one that can be used easily, as Arithmos data for vdisk usage is also eventually read from the output of the stats_tool command. This KB explains how to track changes in vdisk usage within a snapshot chain when snapshots are involved.
|
stats_tool can be used to track vdisk usage changes for all the vdisks in a snapshot chain when snapshots are involved, and this data is reliable since even Arithmos eventually refers to the output of the stats_tool command for its stats.
This KB will explain how to check the usage of each vdisk in a snapshot chain by using the stats_tool.
|
By using the stats_tool command, it is possible to find out how much data is inherited from parent vdisks and how much data is written in the child vdisk. Read through the below demonstration steps for understanding how to get usage from each vdisk in a snapshot chain.
Created a vdisk. The size of the vdisk is 8M and the vdisk ID is 162845973.
Ran the stats_tool command on vdisk ID 162845973. The output showed user_bytes "8M" and inherited_user_bytes "0".
nutanix@cvm$ stats_tool -stats_type=vdisk_usage --stats_key=162845973
Took snapshot of vdisk ID 162845973, which created child vdisk ID 163141915
nutanix@cvm$ snapshot_tree_printer | grep 162845973
Ran stats_tool on child vdisk ID 163141915. Output showed user_bytes “0” and inherited_user_bytes "8M". This is because 8M was inherited from its immutable parent vdisk ID 162845973.
nutanix@cvm$ stats_tool -stats_type=vdisk_usage --stats_key=163141915
Changed 1M of data in child vdisk ID 163141915 (as it will write 1MB data in child vdisk) and ran stats_tool on child vdisk again. Now, output showed user_bytes “1M” and inherited_user_bytes "7M". This is because 1M sized data was modified and this 1M should not be included as inherited.
nutanix@cvm$ stats_tool -stats_type=vdisk_usage --stats_key=163141915
Conclusion
Vdisk usage of a VM is calculated by summing up "user_bytes" and "inherited_user_bytes" when it is in a snapshot chain. user_bytes means the actual usage written to the child vdisk and inherited_user_bytes is the total capacity being inherited from all of the parent vdisks.
Even when the snapshot chain is long, child vdisk ID can be still used in stats_tool command to come up with total usage of the vdisks in a snapshot chain because inherited_user_bytes observed in the stats_tool output of child vdisk will include accumulated inherited_user_bytes from all of the previous parent vdisks as well.
Arithmos data will also use this output and Prism in the end will display the same usage. To summarise, "user_bytes + inherited_user_bytes" of a child vdisk will decide vdisk usage when it is in a snapshot chain.
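As a convenience, both fields can be pulled in one command (a simple filter over the same stats_tool output shown above):

nutanix@cvm$ stats_tool -stats_type=vdisk_usage --stats_key=163141915 | egrep "user_bytes|inherited_user_bytes"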
|
KB10460
|
IBM iConnect Enterprise Archive (iCEA) Configuration on AHV
|
There are a few Nutanix AHV and Files specific configuration steps required when deploying IBM ICEA on Nutanix AHV with Nutanix Files.
|
IBM iConnect Enterprise Archive (iCEA) on Nutanix Implementation
Installing IBM iCEA on AHV requires some custom configuration. This KB contains details on how to configure IBM iConnect Enterprise Archive (iCEA) Version 13.0 for use with Nutanix AHV and Nutanix Files.
|
iCEA Application VM Configuration
Linux Mode 6 (Balance ALB) bonding and VLAN tagging are not supported on AHV, so they need to be disabled. To disable load balancing and VLAN tagging before deploying the IBM Merge cluster, edit the /opt/vna/install/configs-9.7/PQL[#].conf file (example: PQL009009.conf) and remove the VLAN tag definition by leaving the BE_VLAN column blank. Change the bonding mode (BE_BONDMODE) to active-backup.
Example:
# BE_INTINFO="BE_VLAN,BE_BONDMODE,BE_MTU,BE_INTS"
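A hypothetical edited line might look like the following. The BE_MTU and BE_INTS values here are placeholders (keep whatever your PQL[#].conf already defines for those columns); only the blank BE_VLAN column and the active-backup bonding mode follow from the steps above:

# BE_INTINFO="BE_VLAN,BE_BONDMODE,BE_MTU,BE_INTS"
BE_INTINFO=",active-backup,1500,eth0 eth1"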
Configure Nutanix Files
Deploying iCEA on Nutanix requires three separate VLANs: one for the public IP address and two private subnets:
Public network
Backend network
Heartbeat network
IBM iCEA uses a private network for access between the application nodes and Nutanix Files (Used for Image storage).
When deploying Files on a private network, enter a gateway IP address even if one does not exist. Merge uses NFSv3, so configure NFSv3 only for the file server.
For each NFS export, set Squash to None.
|
KB9801
|
Sub NUMA clustering on Fujitsu
|
How to disable Sub NUMA clustering on Fujitsu nodes.
|
Fujitsu hardware has Sub NUMA clustering enabled by default, which creates 4 NUMA nodes on dual-socket hardware. This causes issues with CVM sizing, and it is recommended to disable this setting. It also leads to misconfiguration in CVM NUMA tuning, and Prism will display 4 sockets on the hardware instead of 2.
We see 4 NUMA nodes on each node:
nutanix@CVM:~$ hostssh "numactl -H"
NUMA tuning is misconfigured.
[root@ahv ~]# virsh dumpxml NTNX-host-CVM |grep "vcpu placement"
|
To correct the setting, disable Sub NUMA clustering in the BIOS. The procedure is covered in the Fujitsu document below:
https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-skylake-RXTX-bios-settings-primergy-ww-en.pdf
Enter BIOS Setup > Advanced > Memory Configuration > Sub NUMA Clustering > Disable.
After disabling Sub NUMA clustering:
[root@ahv ~]# numactl -H
NUMA tuning was autocorrected in this case after a reboot, but if the tuning needs to be corrected manually, refer to KB 4570.
Correct NUMA tuning:
root@ahv ~]# virsh dumpxml NTNX-host-CVM |grep "vcpu placement"
|
KB13308
|
File Analytics - Failed to login to File Analytics console after upgrade with Invalid Credentials
|
PC users cannot login to File Analytics after upgrade to FA 3.1.
|
AD/local users are unable to log in to File Analytics with Prism Central credentials after upgrading to File Analytics 3.1.
Login fails with the error: "The username or password you entered is incorrect. Please try again."
|
In File Analytics version 3.1, logging in to File Analytics with a local or AD user created/added on Prism Central is not supported. The user should be added to Prism Element, or the AD user should be part of an AD group/user added in Prism Element, for authorization.
You can configure the AD user for authorization in Prism Element using the link https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_20:wc-security-authentication-wc-t.html#task_84b_24f_85. If you are still facing issues, contact Nutanix Support http://portal.nutanix.com/ for assistance.
|
KB7509
|
Getting "Failed to fetch the Phoenix ISO location" error during Prism Repair Host Boot Disk
|
Getting "Failed to fetch the Phoenix ISO location" error during Prism Repair Host Boot Disk
|
When using the "Repair Host boot disk" feature in Prism to repair a failed SATADOM in AOS 5.10.4, 5.10.4.1 and 5.10.5, the repair host boot disk workflow is broken. This issue occurs on AOS 5.10.4, 5.10.4.1 and 5.10.5.It initially reports that it can't discover the node (which is expected since the node is down).After a few seconds, it will report "Failed to fetch the Phoenix ISO location", and it will get stuck at this step. Even if you try clicking "Next", the same process will just start over.
|
This is a known issue in AOS 5.10.4, 5.10.4.1, 5.10.5 releases. Testing has found https://jira.nutanix.com/browse/ENG-230610?focusedCommentId=1969371&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-1969371 that this may be caused by a non-default CVM password. As a workaround, you can try restarting Genesis on the Prism Leader CVM with the current non-default password and then try to invoke the feature again in Prism. This is done as follows:
genesis restart --default_cvm_password=<insert_current_non-default_password>
Optional: Alternatively, the password on CVM for the nutanix user can be reset to default using the command
echo 'nutanix:nutanix/4u' | sudo chpasswd
Caution: This workaround creates a security risk by exposing the customer's CVM password to the terminal. It is recommended that SREs stop sharing the customer's desktop while making this change and also ask them to take the following actions:
1. Clear the history of CLI entries:
history -c
2. Remove the "genesis restart..." command string from the bottom of the .bash_history file.
vi .bash_history
Or just have them change the CVM password afterward.
If this workaround is unsuccessful, capture logs for RCA and perform a manual SATADOM repair procedure (install and configure hypervisor workflow).
|
KB9541
|
Nutanix DRaaS and Nutanix DR | SSR with error "Disk attached successfully but partition couldn't be mounted"
|
This article contains information on an SSR disk being attached but not mounted due to the snapshot disk having an inconsistent filesystem on a Linux VM.
|
Note: Nutanix Disaster Recovery as a Service (DRaaS) was formerly known as Xi Leap, and Nutanix Disaster Recovery (DR) was formerly known as Leap.
While attaching and mounting an SSR snapshot, the below error is observed:
Whenever the snapshot disk has an inconsistent filesystem (as indicated by the fsck check), the disk is only attached and not mounted. A complete list of Requirements and Limitations can be found here https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:man-file-level-restore-requirements-r.html.
Manually checking the file system for the attached drive:
[root@localhost logs]# tune2fs -U $(uuidgen) /dev/sdf1
In the above example, the disk attached is /dev/sdf1.
|
Run e2fsck on the attached disk:
[root@localhost logs]# e2fsck -f /dev/sdf1
After running the above command, re-run the tune2fs command:
[root@localhost logs]# tune2fs -U $(uuidgen) /dev/sdf1
Use the mount command to manually mount the disk:
[root@localhost logs]# mount /dev/sdf1 /mnt/nutanix/sdf/sdf1
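To confirm the mount succeeded, a standard df check against the same mount point can be used:

[root@localhost logs]# df -h /mnt/nutanix/sdf/sdf1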
|
""Title"": ""When a node is placed into AHV maintenance mode
|
either manually or as part of cluster operations such as AHV upgrades or node removal
|
a race condition can occur which results in UVMs being migrated to a host even though it is marked as Schedulable=False.""
| null | null |
KB7271
|
Error "NCC schedule is already set up from CLI" while changing NCC frequency
|
Prism reports Error "NCC schedule is already set up from CLI" when trying to configure NCC frequency.
|
Prism reports the following error when trying to configure NCC frequency in Prism:
NCC schedule is already set up from CLI.
|
SSH into one of the CVMs and execute the below commands. Use ncc help_opts to display command options.
Check the frequency of the NCC runs:
ncc --show_email_config=1
Delete any email configuration already scheduled:
ncc --delete_email_config=1
Verify email schedule:
ncc --show_email_config=1
You will see a message stating the email config is not configured.
Re-try configuring NCC frequency from Prism.
|
KB15977
|
Nutanix Files - Smart DR policy creation fails with "Problem saving policy: Minerva fm gateway timeout"
|
This KB is related to a scenario where Smart DR policy creation fails with "Problem saving policy: Minerva fm gateway timeout" error. This error is most likely to be seen during new Smart DR configurations.
|
When you try to create a Smart DR replication policy on Prism Central, the policy creation can fail with a "Problem saving policy: Minerva fm gateway timeout" error, as shown in the screenshot below. You can find the failed policy creation task in ecli:
nutanix@PCVM$ ecli task.list status_list=kFailed
Failed task will have "Internal server error" in error details,
"message": "Create protection policy test for file server ntxpfs01",
|
Verify whether Files Manager is on the latest version by running the below command on the PCVM:
nutanix@PCVM$ files_manager_cli get_version
If Files Manager is not on the latest version, upgrade Files Manager to the latest version on Prism Central via LCM.
If Files Manager is on the latest version and policy creation is failing, files_manager_service.out logs will show the following errors:
nutanix@PCVM$ grep 'i/o timeout' data/logs/files_manager_service.out
The above "i/o timeout" error indicates that port 9440 is not reachable from PCVM to file server VMs on source and destination file servers.Please verify if the required ports are opened as per following documents, https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files%20Manager https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files%20ManagerOnce the required ports are opened, the policy will get created successfully.
|
KB15159
|
NDB - clean up old installation after patching postgres db servers
|
NDB leaves behind old postgres installation after patching the db servers unless "delete_old_database_home_on_success" parameter was provided during patch operation
|
After a successful postgres database patch operation, the previous installations are still left behind.
The example below shows the file systems mounted after updating postgres from 13.7 to 13.10.
|
This is expected unless the parameter below is provided via the API or CLI during the patch operation. The fragment below is an illustrative sketch of the payload; the exact request structure may vary by NDB version:
"patchInfo": [
    {
        "delete_old_database_home_on_success": true
    }
]
For any existing DB servers that have already been patched, the previous installation locations need to be removed manually.
|
KB4202
|
Citrix MCS displays Failed Desktops
| null |
The following error is displayed after upgrading the Nutanix MCS plugin to version 1.1.1.0 with MCS 7.12 and later. Citrix displays desktops as "Failed" and "Stuck on Boot" although they are running fine. This issue is cosmetic, as the desktops are operating normally.
|
There are 2 recommended steps to fix the above issue:
Raise the timeout value on the Citrix Director process.
Citrix provided the following recommendation:
Create the following registry keys on all Delivery Controllers:
MaxTimeBeforeStuckOnBootFaultSecs
- Location: HKEY_LOCAL_MACHINE\Software\Citrix\DesktopServer
- Registry Key Name: MaxTimeBeforeStuckOnBootFaultSecs
- Registry Type: DWORD
- Value: 1800 (in seconds)
- Default Value: 300 (in seconds)
- Description: How long to wait, in seconds, after a machine has started but no notification has been received from the HCL that the VM tools are running. After this timeout, the machine's fault state is set to StuckOnBoot.
MaxTimeBeforeUnregisteredFaultSecs
- Location: HKEY_LOCAL_MACHINE\Software\Citrix\DesktopServer
- Registry Key Name: MaxTimeBeforeUnregisteredFaultSecs
- Registry Type: DWORD
- Value: 1800 (in seconds)
- Default Value: 600 (in seconds)
- Description: How long to wait, in seconds, after a machine has started but remains unregistered with the Broker (with or without attempting to register). After this timeout, the machine's fault state is set to Unregistered.
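One way to create both keys in a single pass is with PowerShell, run as administrator on each Delivery Controller. This is a hedged sketch using standard cmdlets; adjust the values as needed:

# Create or overwrite both broker timeout values under the Citrix DesktopServer key
$path = "HKLM:\SOFTWARE\Citrix\DesktopServer"
New-ItemProperty -Path $path -Name "MaxTimeBeforeStuckOnBootFaultSecs" -PropertyType DWord -Value 1800 -Force
New-ItemProperty -Path $path -Name "MaxTimeBeforeUnregisteredFaultSecs" -PropertyType DWord -Value 1800 -Force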
These new registry entries should prevent the above issue.
|
KB10420
|
NCC Health Check: stale_recovery_points_check
|
The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.
|
The NCC health check stale_recovery_points_check checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted.It can be run as part of the complete NCC check by running
nutanix@cvm:~$ ncc health_checks run_all
or individually as:
nutanix@cvm:~$ ncc health_checks draas_checks protection_policy_checks stale_recovery_points_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every 12 hours by default.
This check will generate an alert after 2 failures.
Sample output
For status: PASS
Running : health_checks draas_checks protection_policy_checks stale_recovery_points_check
For Status: ERROR
Running : health_checks draas_checks protection_policy_checks stale_recovery_points_check
Output messaging
[
{
"Check ID": "Checks if there are any recovery point entries in the Insights database whose underlying snapshots have been deleted."
},
{
"Check ID": "The snapshot was deleted without removing the associated stale entries in the Insights database."
},
{
"Check ID": "Check the running status of the Polaris service."
},
{
"Check ID": "Upcoming replications might be impacted by the stale remnant entries in the Insights database."
}
]
|
If this NCC check fails, engage Nutanix Support https://portal.nutanix.com/.Additionally, gather the following command output and attach it to the support case:
nutanix@cvm:~$ ncc health_checks run_all
|
KB7568
|
Can't configure custom roles for AD user's access to VMs
|
An error is shown while creating a project, and the role and user group are not getting added to it.
|
This issue is seen when a customer needs to give basic access (power on/off, access console) to all the VMs on AHV for a particular user group.
aplos_engine.out shows 'INTERNAL_ERROR' for a user group not found in any directory service:
2019-05-21 11:51:16 INFO intentengine_app.py:212 <2e4d7106> [ed895480-088f-40b1-8959-ca069f2b08f8] Invoking the plugin for put call on project with uuid 2e4d7106-864d-4879-81d6-8aa88c54e429
|
Check if the Organizational Unit (OU) under which the username exists has any special characters in the OU name. If yes, try moving that specific user to a different OU that does not have special characters in its OU name.
Also, make sure you select the "Allow Collaboration" check box while creating the project and then save it.
|
KB11162
|
netcollect_wrapper - Virtual Machine network info, OVS data and tcpdump collection Script
|
netcollect_wrapper - Virtual Machine network info, OVS data and tcpdump collection Script
|
This KB describes how to run the netcollect_wrapper-1.1.py script and how to use the various options available with it. The script is intended to collect the information required to troubleshoot networking-related issues for AHV networks.
On a CVM with an internet connection:
Navigate to the ~/tmp directory.
Download the script using the following command:
nutanix@cvm:~/tmp$ wget https://download.nutanix.com/kbattachments/11162/netcollect.tar
On a CVM without an internet connection:
Download the script from netcollect.tar https://download.nutanix.com/kbattachments/11162/netcollect.tar.
Upload the netcollect.tar file to one of the CVMs under the /home/nutanix/tmp directory via WinSCP or a similar tool.
From the /home/nutanix/tmp/ folder
Untar the folder using the command below:
nutanix@cvm:~/tmp$ tar -xvf netcollect.tar
Verify the script md5sum using the following command:
nutanix@cvm:~/tmp$ md5sum netcollect_wrapper-1.1.py
The netcollect.tar contains two scripts: netcollect_wrapper-1.1.py and netcollect-1.4.sh.
netcollect script: executed on each host to collect network information.
netcollect_wrapper script: facilitates the execution of the netcollect script on multiple hosts simultaneously and also collects network information from the cluster.
Although the same information can be collected manually using commands, we recommend using this script as it collects all required information in one go.
The data collected by the script is saved under the /home/nutanix/tmp/netdump directory on the same CVM.
Note: Once you have collected the data and uploaded it to the Nutanix FTP site, clean up the /home/nutanix/tmp/netdump directory manually using the below command:
nutanix@cvm:~$ cd /home/nutanix/tmp/
The script primarily collects the following information:
Cluster network information:
Cluster network configuration
Virtual Switch information
VM network information
manage_ovs command output
AHV host network information:
Virtual Machine network information: The following files are created as part of VM data collection:
tcpdumps on the tap interface and physical interface (both in/out directions). If a VM UUID is provided using the --uuid option, the script will run tcpdump only for the VM tap interface and all the bridge uplinks where the VM is connected, and traffic will be filtered based on the VM MAC address. If no UUID is provided, the script will run tcpdump for all the physical interfaces, and the pcap files will be saved using the format:
<UUID>_<tap-interface>_<in/out>.pcap
When no VM UUID is provided, you can use the -x, --filter option to specify the tcpdump filters you want to use during captures. Example: --filter "icmp or arp"
OVS data at specified intervals. The following files are created as part of OVS data collection:
The following files are also created as part of this script:
commands.txt: contains the commands used for collecting VM info and tcpdump data. Version 2.0 would include the commands used for collecting OVS data.
netcollect.txt: script logs with timestamps
[
{
"Filename": "lldpctl.txt",
"Information collected": "lldpctl"
},
{
"Filename": "virsh_list.txt",
"Information collected": "virsh list --title"
},
{
"Filename": "VM-.txt\t\t\t\tVM-interface--.txt",
"Information collected": "virsh domiflist : list of vNIC interface for the VM\t\t\t\tvirsh domifstat : Network statistics for each tap interface"
},
{
"Filename": "kernel_flows.txt",
"Information collected": "ovs-dpctl -m dump-flows"
},
{
"Filename": "kernel_ports.txt",
"Information collected": "ovs-dpctl show"
},
{
"Filename": "mac_table.txt",
"Information collected": "ovs-appctl fdb/show "
},
{
"Filename": "ovs-appctl.txt",
"Information collected": "ovs-appctl bond/show\t\t\t\tovs-appctl bond/list"
},
{
"Filename": "ovs_db.txt",
"Information collected": "ovs-vsctl show"
},
{
"Filename": "ovs_flows.txt",
"Information collected": "ovs-appctl bridge/dump-flows "
},
{
"Filename": "ovs_ports.txt",
"Information collected": "ovs-ofctl dump-ports-desc "
}
]
|
Note: The script only collects network information; it is not intended to give you an answer or solve your case by itself. Use the data collected by the script to investigate the issue you are working on.
Note: If the script hangs, or if you wish to terminate it during execution, press Ctrl+C to exit the script and run the following command to kill any remaining tcpdump processes on the AHV hosts:
nutanix@cvm:~$ hostssh 'pkill tcpdump'
Parameters:
-a, --ahv <comma-separated AHV IPs>
Specifies the comma-separated list of AHV IPs for which the data needs to be collected. The tcpdumps will be triggered on all uplinks on the AHV host and you can use -x option to specify the filter.
-u, --uuid <comma separated VM name list>
Specifies the comma-separated list of VM UUID for which the data needs to be collected. The tcpdumps will be triggered for VM’s tap interface and the physical uplinks for the OVS bridge where the VM is connected. Also, tcpdump on the physical interface will be filtered by the VMs MAC address.
-t, --tcpdumps <yes/no>
Indicates whether to collect tcpdump or not. The default action is no.
The size of each pcap file is restricted to 200 MB with 5 files rotation per host.
-o, --ovsdata <yes/no>
Indicates whether to collect ovsdata or not. The default action is yes.
-b, --bundle <yes/no>
Indicates whether to create a netdump.tar.gz bundle for the collected information. The default action is yes. This is helpful in cases where you need to upload data on the Nutanix FTP server for offline investigation.
-d, --duration <number of seconds>
Specifies the duration (in seconds) for which the data needs to be collected. By default, the script will collect the data for 300 seconds. The duration for capture cannot be greater than 900 seconds. If you specify a duration greater than 900 then the data collection will stop after max duration i.e. 900 seconds.
-f, --frequency (number of seconds)
Specifies the frequency at which OVS data sample is collected. By default, the script will collect OVS data every 10 seconds
-x, --filter
tcpdump filters to collect specific type packets on the physical interface.If the VM UUID is used then this option is ignored and the traffic will be filtered based on VMs MAC address.
-v, --version
print script version. No action is performed.
Note: The script should only be executed using python3.Example 1: Run netcollect_wrapper script with default duration of 300 seconds for three VMs running on different hosts.In the below example, the wrapper script will find the hosts where these VMs are running and collect the information about the VMs and OVS data for 300 seconds with an interval of 10 seconds. By default, the script will not collect tcpdump data.The script will copy the netdump from each host and collect the data under /home/nutanix/tmp/netdump directory on the CVM where the script is executed and will create a tar.gz bundle for the collected data.
nutanix@cvm:~$ python3 netcollect_wrapper-1.1.py -u TestVM-1,TestVM-2,TestVM-3 -o yes
Example 2: Run the netcollect script to collect the VMs' network information, OVS data and tcpdumps for 1 minute. In this example, the script will collect the VMs' network information, OVS data and tcpdumps for 60 seconds with OVS data sampled every 5 seconds. Here we are collecting tcpdumps on the VM tap interfaces and the uplinks.
The script will copy the netdump from each host, collect the data under the /home/nutanix/tmp/netdump directory on the CVM where the script is executed, and create a tar.gz bundle for the collected data.
nutanix@CVM:~$ python3 netcollect_wrapper-1.1.py -u TestVM-1,TestVM-2,TestVM-3 -o yes -d 60 -f 5 -t yes
Example 3: Run netcollect_wrapper script to collect tcpdump for all physical interfaces for 60 seconds and bundle all the files in tar.gz.
nutanix@CVM:~$ python3 netcollect_wrapper-1.1.py -a 10.69.0.11,10.69.0.12 -d 60 -f 5 -o yes -b yes -t yes
|