id | title | summary | description | solution
---|---|---|---|---
KB12015
|
False alerts of multiple services restarts in the cluster
|
False alerts of multiple services restarts in the cluster, caused by a Tomcat notification
|
Alerts of multiple service restarts in the cluster are raised every 10 minutes.
Multiple service restarts of ['catalog', 'ergon', 'flow', 'lazan'] across all Controller VM(s). Latest crash of these services have occured at ['Thu Aug 5 09:46:57 2021', 'Thu Aug 5 09:44:33 2021', 'Thu Aug 5 09:46:40 2021', 'Thu Aug 5 09:46:41 2021'] respective
These alerts are raised by the NCC check "cluster_services_status" (KB 3378: https://portal.nutanix.com/kb/3378).
There are no FATALs on the Cluster for the mentioned services in the last 24 Hours
allssh 'sudo find data/logs/ -name "*.FATAL" -mtime -1 -exec ls -lrt {} \;'
There are no failed Tomcat heartbeats in ~/data/logs/prism_monitor.INFO but a notification is still being raised incorrectly:
I20210805 10:50:57.028865Z 13513 prism_monitor.cc:1211] Heartbeat to tomcat server succeeded
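To confirm this on a given CVM, the heartbeat messages can be pulled from the same Prism monitor log with a simple grep (an illustrative command, not part of the NCC check itself):
nutanix@cvm$ grep -i "heartbeat to tomcat" ~/data/logs/prism_monitor.INFO | tail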
|
If the mentioned services haven't crashed recently, the NCC log will show that Tomcat heartbeat succeeds, but then 2 things happen incorrectly:
- A TomcatFrequentRestart alert is raised, but with is_resolve_notification=true.
- The notification is flushed to the alert manager and an alert is raised.
I20210805 10:50:57.741645Z 13509 alerts_notification_client.cc:580] timestamp_usecs: 1628160657029167 notification_name: "TomcatFrequentRestart" notification_members { member_name: "cvm_ip" member_value { string_value: "x.x.x.119" }
The issue is still being investigated. The current workaround is to disable the NCC check cluster_services_status following KB 4859 (https://portal.nutanix.com/kb/4859).
|
KB7282
|
Extracting CPU stats in readable format from esxtop logs
|
A handy tool for log analysis.
Suggestion to utilize esxtop stats logs for analyzing performance issues by extracting CPU stats into a readable and practical CSV file.
|
Introduction:
vCPU stats of VMs are useful for troubleshooting performance issues. "CPU Ready" in particular is an important stat for reviewing VM density on a host. esxtop provides the following vCPU stats for VMs.
The performance log (generated by collect_perf) contains the esxtop stats log for the CVM.
Problem:
CPU Ready can be found in Illuminati once the performance log is collected. However, in some cases, Illuminati is unable to show the stats based on esxtop:
- The "Esxtop Latency" report shows nothing.
- A useful graph of each vCPU's "% ready" can be found in each node's report, but not cluster-wide.
- The "esxtop" section of each node's report shows raw esxtop data in CSV format, but it is difficult to manipulate this log.
The esxtop stats in the performance log contain helpful information, but there are thousands of columns in CSV format and it is difficult to extract targeted information from them. The result of the "esxtop -b" command on an ESXi host can contain even more columns. It can be used for "replay" with the esxtop command, but it is not suitable for a spreadsheet (e.g. Excel) or anything else that views it as a general CSV file.
Suggestion:
Utilise a quick tool (Python script) for extracting only the CPU stats in CSV format.
Use case:
Obtaining CPU stats in a practical and re-usable format will help, for example, in the following situations:
- An RCA report needs a graph with specific stats and/or aggregations of them.
- Monitoring the transition of CPU stats across the cluster following a VM migration, backup task, and so on.
- Browsing multiple stats by plotting them on a common grid for analysis and finding trends.
|
"CPUfromEsxitop.py" extracts vCPU stats from esxtop stats logs (result from "esxtop -b").
This script generates CSV files with the following 16 columns.
Date,Time,Host,Instance,Component,% Used,% Run,% System,% Wait,% VmWait,% Ready,% Idle,% Overlap,% CoStop,% Max Limited,% Swap Wait
Link
CPUfromEsxitop.py https://jira.nutanix.com/secure/attachment/318382/CPUfromEsxitop.py (temporary location with ONCALL-5840)
Usage:
python CPUfromEsxitop.py --out=<CSV file with extracted vCPU stats> <source esxtop CSV file(s)>
Example 1: Generating a single CSV to stdout with all factors
$ python CPUfromEsxitop.py *esxtop.csv
Example 2: Generating a single CSV file with all factors
$ python CPUfromEsxitop.py --out=vCPUstats-all.csv *esxtop.csv
Example 3: Generating separate CSV files for each factor
$ python CPUfromEsxitop.py --out=vCPUstats-#.csv *esxtop.csv
Sample case: Generating vCPU-stats CSV files from performance log
Obtain a performance log bundle (tar-gz format)
Example
Decompress it to the files per node
Example
Decompress the *.tgz files of the targeted CVMs and gather all extracted *esxtop.csv.gz files
Example
Decompress all *_esxtop.csv.gz files
Generate the CSV file(s) for CPU stats
single CSV file with all factors
Example
multiple CSV files for each factor ( "#" in --out option)
Example
Result : vCPUstats-GroupCPU.csv, vCPUstats-vmx.csv, vCPUstats-vmx-vcpu-0.csv,....
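As a rough end-to-end sketch of the sample case above (the bundle and archive names are illustrative placeholders; the script usage matches the examples earlier in this article):
$ tar xzf <performance_log_bundle>.tar.gz      # decompress the performance log bundle
$ tar xzf <NTNX-xxxx-A-CVM>.tgz                # decompress the per-node archive of the targeted CVM
$ gunzip *esxtop.csv.gz                        # decompress the esxtop CSV logs
$ python CPUfromEsxitop.py --out=vCPUstats-all.csv *esxtop.csv   # single CSV with all factors
$ python CPUfromEsxitop.py --out=vCPUstats-#.csv *esxtop.csv     # separate CSV per factor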
Sample row output:
[
{
"Date": "2019/05/01",
"Time": "00:20:21",
"Host": "NTNX-xxxx-A",
"Instance": "NTNX-xxxx-A-CVM",
"Component": "GroupCPU",
"%Used": "126.54",
"%Run": "130.39",
"%System": "0.90",
"%Wait": "1164.71",
"%VmWait": "0.66",
"%Ready": "0.17",
"%Idle": "666.40",
"%Overlap": "0.14",
"%CoStop": "0.00",
"%Max Limited": "0.00",
"%Swap Wait": "0.00"
}
]
|
""SSD/HDD"": ""Form Factor""
| null | null | null | null |
KB2233
|
Recover CVM's nutanix user Password Through the Prism Web Console
|
If you are locked out of the CVM after changing the nutanix user's default password, or the new password is lost, or the default password is not working, follow the instructions in this article to log in to the CVM without a password and reset the password if needed.
|
For security reasons, Nutanix administrators might have changed the nutanix user's default password.
If the new password is lost or if you are unable to log in to the CVM (Controller VM) through SSH using the nutanix user's default password, then you can reset the nutanix user password by creating password-less authentication through the Prism Web Console. This article has a link to another Knowledge Base article with information on creating password-less authentication and steps for logging in to the CVM to reset the nutanix user password.
|
Leveraging admin user to change the password for user nutanix
Log in to the CVM using the admin user. Use the following command to trigger a password change for the user nutanix:
admin@CVM$ sudo passwd nutanix
Validate new password settings by logging in with new credentials.
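For example, from a workstation or another CVM with SSH access (the IP is a placeholder):
ssh nutanix@<CVM_IP>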
Recovering the CVM password
If the method above is not an option for any reason, you can recover the CVM password by configuring password-less authentication leveraging RSA keys. Note: You must have access to the Prism Web Console to perform the following steps.
Follow KB 1895 http://portal.nutanix.com/kb/1895 on how to set up password-less SSH.
Note: When generating a new SSH key, make sure it is SSH-2 RSA. SSH-1 RSA is not supported. To explicitly set SSH2-RSA in PuTTYgen, click the Key menu and select "SSH-2 RSA key". See screenshot below:
After setting up password-less SSH, connect to the CVM. You should not be prompted for the password. Once logged in, change the password for the nutanix user.
nutanix@cvm$ sudo passwd nutanix
|
KB14226
|
File Analytics - Upgrade via LCM fails for 'task_check_zk_avm_ip failed'
|
File Analytics upgrade via LCM could fail if the password stored in the FA config for Prism user 'file_analytics' is incorrect.
|
File Analytics upgrade via LCM could fail if the password stored in the FA config for the Prism user 'file_analytics' is incorrect. LCM task in Prism:
The following error is logged in lcm_ops.out on the LCM leader, as well as in /var/log/nutanix/fa_upgrade.log on the FAVM:
2022-12-06 15:47:09 INFO 70929216 run_upgrade_tasks.py:148 Running CheckZkAvmIp
During the LCM upgrade, one of the scripts executed is check_zk_avm_ip.py. It makes two calls to the Prism API to check if the current FAVM IP matches the one stored in Zookeeper. The calls are made using the user 'file_analytics' and are failing due to an incorrect password stored in the FAVM config.
https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform
https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/vms/FAVM_UUID/nics
Test the response of these API requests manually to confirm if this issue is being encountered. For example:
[nutanix@FAVM]$ curl -k --user file_analytics:Nutanix/4u111111111 -X GET "https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform"
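If the full response body is noisy, a quick variant is to print only the HTTP status code: 200 indicates the stored credentials are accepted, while 401 indicates the password in the FA config is wrong (the password shown is the example value from this article):
[nutanix@FAVM]$ curl -k -s -o /dev/null -w "%{http_code}\n" --user file_analytics:Nutanix/4u111111111 -X GET "https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform"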
|
If the above is confirmed, follow these steps to resolve the issue with the invalid password:
1. Check in Prism if the user file_analytics actually exists.
2. Obtain the password stored for the user on the FAVM by running the below script:
nutanix@FAVM$ python /opt/nutanix/analytics/bin/retrieve_config
Example output where the password is 'Nutanix/4u111111111'
{u'cvm_virtual_ip': u'xx.xx.xx', u'cvm_user': u'file_analytics', u'nos_cluster_name': u'cluster_name', u'cvm_password': u'Nutanix/4u111111111', u'avm_uuid': u'7d1823f5-xxxx-xxxxx-xxxx-0a6d28dd24f1', u'nos_version': u'el7.3-release-euphrates-xxxx', u'cvm_cluster_ips': [u'xx.xx.xx.1', u'xx.xx.xx.2', u'xx.xx.xx.3', u'xx.xx.xx.4', u'xx.xx.xx.5']}
3. Set the password from the above output in Prism for the user file_analytics (Nutanix/4u111111111 in the above example).
4. Test if the password is working by using curl from the FAVM. In this example, it is still failing:
[nutanix@FAVM]$ curl -k --user file_analytics:Nutanix/4u111111111 -X GET "https://PRISM_VIP:9440/PrismGateway/services/rest/v2.0/analyticsplatform"
5. If the test is successful, retry the LCM upgrade.
If the above steps do not resolve the issue and the API calls are still failing for bad credentials, try the following:
- Delete the 'file_analytics' user in Prism.
- Recreate the same user from the FAVM using the reset_password.py script (see KB-8242 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CrxACAS):
nutanix@FAVM$ python /opt/nutanix/analytics/bin/reset_password.py --user_type=prism --password=Nutanix/4u111111111 --prism_user=admin --local_update=False --prism_password=ADMIN_PASS
|
KB11927
|
How to check current AHV version and upgrade history
|
The following KB article describes how to check the current AHV version and the upgrade history on a cluster.
|
The following KB article describes how to check the current AHV version and the upgrade history on a cluster.
|
Check the current AHV version
Prism UI
- On the Prism Home page, check the "Hypervisor Summary" widget.
- On the Prism Hardware page, select the host and check the "Hypervisor" field in the "Host details" section.
CLI
Run the following command on any CVM:
nutanix@cvm:~$ hostssh uname -r
Sample output:
============= x.x.x.1 ============
Check the AHV upgrade history
Run the following commands on any CVM:
nutanix@cvm:~$ allssh cat ~/config/hypervisor_upgrade.history
The ~/config/hypervisor_upgrade.history file was used to track AHV upgrade history during 1-click upgrades; this was later changed to /var/log/upgrade_history.log once AHV upgrades transitioned to use LCM. As a result, both files need to be checked.
Sample output:
The file hypervisor_upgrade.history on the CVM shows old AHV upgrades:
nutanix@cvm:~$ allssh cat config/hypervisor_upgrade.history
The most recent upgrades can be found in the upgrade_history.log file in the AHV hosts:
nutanix@cvm:~$ hostssh cat /var/log/upgrade_history.log
|
KB11745
|
Alert - A130362 - VolumeGroupReplicationTimeExceedsRpo
|
Investigating VolumeGroupReplicationTimeExceedsRpo issues on a Nutanix cluster
|
This Nutanix article provides the information required for troubleshooting "VolumeGroupReplicationTimeExceedsRPO" on your Nutanix cluster.
Alert Overview
The A130362 - Volume Group Recovery Point Replication Time Exceeds RPO alert occurs when replication of the Volume Group takes longer than the RPO scheduled.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output Messaging
[
{
"130362": "Volume Group Recovery Point Replication Time Exceeded the RPO Limit.",
"Check ID": "Description"
},
{
"130362": "Change rate of the Volume Group might be too high.",
"Check ID": "Causes of failure 1"
},
{
"130362": "The issue will resolve itself once the change rate subsides.",
"Check ID": "Resolution 1"
},
{
"130362": "If the cluster is running on AWS and protected using cluster protection feature, data protection service may not be stable.",
"Check ID": "Causes of failure 2"
},
{
"130362": "Please contact Nutanix Support",
"Check ID": "Resolution 2"
},
{
"130362": "Replications for the next snapshots will be delayed",
"Check ID": "Impact"
},
{
"130362": "A130362",
"Check ID": "Alert ID"
},
{
"130362": "Volume Group Recovery Point Replication Time Exceeded the RPO",
"Check ID": "Alert Title"
},
{
"130362": "Replication time of the snapshot created at '{recovery_point_create_time}' UTC for Volume Group {volume_group_name} exceeded the RPO limits",
"Check ID": "Alert Message"
}
]
|
Troubleshooting and Resolving the Issue
This alert occurs when replication takes longer than the configured RPO, mainly due to heavy workload, network congestion, or other prevailing network issues in the environment. If the issue persists even after making sure that there are no network-related issues and that there is enough bandwidth between the sites, consider engaging Nutanix Support at https://portal.nutanix.com.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.
Collect the Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
If the logbay command is not available (NCC versions prior to 3.7.1, AOS 5.6, 5.8), collect the NCC log bundle instead using the following command:
nutanix@cvm$ ncc log_collector run_all
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
|
KB13680
|
Nutanix Database Service | API support for Oracle Listener Port configuration
|
This article describes how to customize the Oracle listener port via API.
|
This article describes how to customize the Oracle listener port via API.
Note: Nutanix Database Service (NDB) was formerly known as Era.
|
When provisioning a New DBServer
As of NDB 2.4.1, users can provision DBServers with an ASM instance that listens on a port of their choice using a REST call. Users must provide the key-value pair in the actionArguments of the API payload:
{
An example payload is provided below.
@POST https://<user_ndb_server_ip>/era/v0.9/dbservers
Example payload:
{
When provisioning a Single instance Clone from a Time Machine
As of NDB 2.4.1, users can provision single instance clones that listen on their chosen ports. In either workflow, clone into existing DBServer and clone with DBServer create, users must provide the key-value pair in the actionArguments of the API payload.
{
Example payloads are provided below.
@POST https://<user_ndb_server_ip>/era/v0.9/tms/<time_machine_id>/clones
Example payload when provisioning into a new DBServer:
{
Example payload when creating a clone into an existing DBServer:
{
Note: In the clone-with-DBServer-create flow, the cloned database will listen on the user-provided port. However, the ASM instance will always listen on port 1521.
|
KB13201
|
Prism Central - v3 vms/list endpoint natively performs some filtering
|
As of PC 2022.4, the Prism Central v3 vms/list endpoint performs filtering by default, and will exclude ESXi UVMs and Acropolis CVMs.
|
The Prism Central v3 vms/list endpoint performs filtering by default, and will exclude ESXi UVMs and Acropolis CVMs. This can be confirmed by placing Aplos in debug mode, and by looking for the following signature:
data/logs/aplos.out.20220602-154351Z:2022-06-02 15:59:39,790Z DEBUG interface.py:2102 The filter criteria being passed to groups processor is _created_timestamp_usecs_=lt=1654185506000000;is_cvm==0; platform_type==[no_val]; is_acropolis_vm==1;(vm_state==[no_val],(vm_state!=transient;vm_state!=deleted;vm_state!=uhura_deleted))
|
The Prism Element v2 list vms endpoint https://www.nutanix.dev/api_references/prism-v2-0/#/b3A6MjU1ODkwNzk-get-a-list-of-virtual-machines does support ESXi UVMs, and can be utilized to retrieve the required data.
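For example, the v2 endpoint can be queried with basic authentication against the Prism Element cluster virtual IP (placeholders shown; any REST client works the same way):
$ curl -k -s --user <prism_username>:<prism_password> -X GET "https://<PE_cluster_VIP>:9440/PrismGateway/services/rest/v2.0/vms"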
|
KB10735
|
NCC Health Check: rsyslog_forwarding_check
|
The NCC health rsyslog_forwarding_check determines if remote syslog server forwarding failed.
|
The NCC health check rsyslog_forwarding_check checks if syslog was able to forward the logs to the remote server in a specified time interval. RSyslog maintains queues to store messages if forwarding runs into any issues. This check monitors the size of the queue, and if the size has been consistently increasing since the last run, it flags a forwarding failure.
Running NCC Check
Run this check as part of the complete NCC Health Checks
nutanix@cvm$ ncc health_checks run_all
Or run this check separately
nutanix@cvm$ ncc health_checks system_checks rsyslog_forwarding_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
Sample Output:
For Status: PASS
For Status: FAIL
Running : health_checks system_checks rsyslog_forwarding_check
Output messaging
[
{
"Check ID": "Check remote syslog server log forwarding."
},
{
"Check ID": "Remote syslog server forwarding failed."
},
{
"Check ID": "Check for forwarding failure causes with RSyslog server. Refer KB 10735"
},
{
"Check ID": "Remote syslog will not be receiving log output from the cluster."
},
{
"Check ID": "Remote syslog server forwarding failed."
},
{
"Check ID": "Remote syslog server forwarding failures detected on CVM: {svm_ip}"
},
{
"Check ID": "This check is scheduled to run every 5 min, by default."
},
{
"Check ID": "This check generates a Failure alert after 3 consecutive failures"
}
]
|
Find the current rsyslog server settings using the below command:
nutanix@cvm$ ncli rsyslog-config ls
Check the RSyslog queue from the below file to check the timestamp and queue size:
nutanix@cvm$ cat /home/log/rsyslog-stats
The following are the reasons for failure in forwarding logs to the rsyslog server:
Network problem with the rsyslog server.
Check the basic network connectivity between the Controller VM (CVM) and rsyslog server using ping.
Syslog service not responding to the rsyslog server.
Check if port 514 is listening on the server:
nutanix@cvm$ netstat -a|grep 514
Check CVM network settings.
In case the above-mentioned steps do not resolve the issue, engage Nutanix Support https://portal.nutanix.com/page/home, gather the output of "ncc health_checks run_all" and attach it to the support case.
|
KB9002
|
Metro Availability PD get disabled due to lockups/freezes or resource starvation on CVM acting as Cerebro leader
|
Troubleshooting scenarios when Metro availability PD disabled automatically due to lockups/freezes or resource starvation on CVM acting as Cerebro leader.
|
Metro Availability can go into a disabled state if the Controller VM (CVM) holding the Cerebro leader goes into a lockup/freeze situation or has CPU/memory resource starvation. Usually, we can see that:
- All active Metro PDs (active on the metro side where the Cerebro leader CVM was having issues) go into the disabled state;
- If there are Standby Metro PDs on the same side (active on the other side), they still stay enabled;
- Checking the network between the clusters shows no issues/disconnections.
The list below exemplifies some causes that can lead to a Metro disabled by Cerebro leader.
- Disk controller firmware issues that cause lockups/freezes on the CVM, as described in KB 8896 - Multiple disks marked offline on node running PH16 firmware https://portal.nutanix.com/kb/8896
- OOM scenarios on the CVM, such as the one described in KB 8228 - Alert - A1124 - AutomaticBreakMetroAvailability https://portal.nutanix.com/kb/8228
- Constantly high CPU utilization on the CVM coming from high-priority Stargate processes, leaving not enough CPU cycles for the Cerebro leader.
- Kernel lockups
This has the side effect of services such as the Cerebro leader (assuming the Cerebro leader is hosted by the CVM suffering from one of the conditions) to temporarily freeze. Even considering the changes from ENG-113681 https://jira.nutanix.com/browse/ENG-113681 (AOS 5.5.2 and higher), where we have less reliance on the Cerebro leader when driving heartbeats, there are situations where it might still not be enough and Metro still breaks.
Background:
In the current Stretch Auto Break code path, the Cerebro Alarm Handler has strict requirements for being executed in a timely fashion. If there are OOM/lockup/freeze situations or other causes where Cerebro cannot get enough CPU time, the Cerebro leader might not be able to generate enough stretch pings in a given timeout window. To mitigate this, stretch auto break has logic to dynamically increase the Auto Break timeout (10 seconds by default) by a given factor (refactor time).
- For AOS 5.10.9 and earlier, the default refactor value is 1.25 (10 * 1.25 = 12.5 seconds).
- Starting from AOS 5.10.10 (and AOS 5.15), the default refactor value changed to 2 (10 * 2.0 = 20 seconds).
Note: This is irrespective of the PD takeover (promote) timer, which still stays at 120 seconds.
This KB provides hints on how to root cause the scenario described above. The first symptom is the following alert being raised in Prism (see more details in KB 8228 - Alert - A1124 - AutomaticBreakMetroAvailability https://portal.nutanix.com/kb/8228):
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
Issue identification and confirmation steps:
NOTE: For identification purposes, ensure to collect/review the logs from both the Active and Standby sites. Also bear in mind that the Cerebro leader might change during the lockup situation, so review all Cerebro logs to match the leader with the timestamp when Metro Availability was disabled.
1. Ensure the customer did not suffer from network availability issues during the time the alert was raised. Review the following logs from both the Active and Standby sites for potential unreachables:
~/data/logs/sysstats/ping_remotes.INFO
~/data/logs/sysstats/ping_hosts.INFO
Metro Availability is a sync-replication technology, so even one site having "local network problems" can potentially affect the other side.
2. If the network is confirmed to be stable, look in the Cerebro leader logs (~/data/logs/cerebro.INFO) for the following or similar log signatures, as per the sequence of events below:
In the logs below, StretchPingRemote was not triggered for more than 5 seconds.
W0120 08:00:17.209161 18776 cerebro_master.cc:8111] STRETCH-PING-STATS : StretchPingRemote not triggerred for 11805 msecs. Last stretch ping was at 0120-08:00:05.402073
Alarm handlers started to get missed:
W0120 08:00:08.370491 18776 cerebro.cc:1443] 2971667 usecs elapsed since the last alarm handler, but max func stats files threshold reached
Orange zone was not entered until 08:00:17 (In Orange Zone, Stretchping frequency is increased to determine if the remote cluster is completely unreachable or down):
I0120 08:00:17.219676 18776 cerebro_master.cc:8281] In orange zone for remote site axxx-ntx-cluster-hybrid1-2
Stretch break replication timeout (refactor) was increased by 1.25. This means that the Cerebro thread was still somewhat responsive, but the last successful Stretchping was recorded 13 seconds ago (13 > 12.5 seconds), so Metro Availability got broken anyway:
W0120 08:00:19.076743 18776 cerebro_master.cc:8957] Increasing stretch break replication timeout factor of protection domain: axxx-ntx-ma-clh1-01-1 with stretch to remote site: axxx-ntx-cluster-hybrid1-2 because of missed ping attempts.
During this time, the Cerebro leader was unable to communicate with the local Stargate, these can be seen from the Stargate INFO logs:
[rbouda@diamond NTNX-Log-2020-01-24-163543-1579865179-PE]$ zgrep "RPC timed out:" */cvm_logs/stargate* |grep "peer=10.44.239.96:2020" |grep -v ListProtectionDomain |grep 'I0120 08:00:1'
Root cause identification steps:
As already mentioned above - there can be different causes that lead to the same issue with Metro PD gets disabled when network connectivity was fine at this time.
Example #1: the CVM has experienced OOM conditions. OOM may happen on the CVM for a prolonged period of time, while the Cerebro thread was not hung so long that it could not refactor the break timeout:
W0120 08:00:19.076743 18776 cerebro_master.cc:8957] Increasing stretch break replication timeout factor of protection domain: axxx-ntx-ma-clh1-01-1 with stretch to remote site: axxx-ntx-cluster-hybrid1-2 because of missed ping attempts.
But unfortunately, it increased only by a factor of 1.25, which was not enough in this situation.
I0120 08:00:19.121765 18776 cerebro_master.cc:8813] No successful pings to remote site: axxx-ntx-cluster-hybrid1-2 Protection domain: axxx-ntx-ma-clh1-01-1 Ping time usecs: 1579503605394719 Time since last successful ping (usecs): 13336804
So, as result we see metro PD disabled:
E0120 08:00:19.318953 18776 cerebro_master.cc:8987] Stretch ping response queue is empty for remote site axxx-ntx-cluster-hybrid1-2
In such cases from messages logs on CVM, we can see oom-killer invoked for some processes:
38724:2018-03-14T07:18:33.137856-07:00 NTNX-xxxxxxx-A-CVM kernel: [11505845.147075] Pithos invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000
It is also worth checking the MemAvailable values on the CVM around this time. This can be done by checking the ~/data/logs/sysstats/meminfo.INFO files on the CVM or by building a graph for the same value in the Panacea report. Usually, we see MemAvailable go to really low values (much less than 515 MB) around the time the oom-killer is invoked and Metro is disabled.
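For example, the MemAvailable samples can be pulled straight out of the sysstats log with a simple grep and then correlated with the surrounding timestamps in the same file (an illustrative command only):
nutanix@cvm$ grep MemAvailable ~/data/logs/sysstats/meminfo.INFO | tail -30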
Example #2: the CVM was losing access to disks. The CVM is running a problematic PH16 firmware version for the SAS 3008 disk controller. If the customer is running this firmware, ensure to review ISB-106 https://confluence.eng.nutanix.com:8443/x/SOxKB and KB 8896 https://portal.nutanix.com/kb/8896, as the signatures are not always identical:
2020-02-12T15:38:20.026055-05:00 NTNX-xxxxxxxxx-A-CVM kernel: [426791.365121] sd 2:0:2:0: attempting task abort! scmd(ffff8e0a96644880)
But keep in mind that any other issue with disk access on the CVM can lead to lockups/freezes and affect the Cerebro leader's ability to drive stretch pings on time.
Example #3: the CVM acting as Cerebro leader suffered from very high CPU usage. At some point, CPU %idle stayed below 1% for longer than 1 minute, and this caused multiple RPC timeouts on the CVM acting as the Cerebro leader. This can be found in ~/data/logs/sysstats/top.INFO on the CVM (or, better, build a graph for CPU usage in the Panacea report). Example from the logs:
#TIMESTAMP 1594715245 : 07/14/2020 10:27:25 AM
In this case, it was determined that the increased CPU usage around this time was caused by:
- Several new Metro PDs had recently been configured, with the initial (full) sync in progress (here, increased load for Cerebro and CDP services is expected);
- A Curator partial scan generated 70k CopyBlockmapMetadata tasks around the same time (here, increased load on CDP services is expected);
- The setroubleshootd process, which is caused by an incorrect SELinux setting for rsyslog, see ENG-322827 https://jira.nutanix.com/browse/ENG-322827 (here, it was unexpected additional usage that led to issues).
#TIMESTAMP 1594715275 : 07/14/2020 10:27:55 AM
And at the same time in kernel logs, we see:
2020-07-14T10:28:01.552445+02:00 NTNX-XXXXXXXXXXX-A-CVM setroubleshoot[9703]: SELinux is preventing /usr/sbin/rsyslogd from open access on the directory /home/nutanix/data/logs. For complete SELinux messages run: sealert -l 2f2fd5e9-78d2-492c-b52d-b75b3f19b86e
[
{
"Check ID": "Metro availability is disabled."
},
{
"Check ID": "Remote site unreachable."
},
{
"Check ID": "Check if cluster service is healthy at remote cluster."
},
{
"Check ID": "Metro availability operation is disabled."
},
{
"Check ID": "A1124"
},
{
"Check ID": "Metro Availability Is Disabled"
},
{
"Check ID": "Metro availability for the protection domain 'protection_domain_name' to the remote site 'remote_name' is disabled because of 'reason'."
}
]
|
Determine the root cause of the issue. It can be one of the example issues described earlier, or it can be something new. Depending on the root cause, different recommendations can be considered/applied to resolve the issue or provide relief to the customer.
1. OOM on the Cerebro leader CVM:
WARNING: There has been an uptick of OOM Cerebro-related cases on AOS 5.20+. Ensure to open a TH for analysis: while some of the newer fixes should make Cerebro more resilient to OOM conditions, it needs to be evaluated whether those changes are effective or not.
- Check if the CVMs have enough memory configured, depending on the AOS functionality in use. See the "CVM Memory Configuration" section of the Prism Web Console Guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_15:wc-cvm-memory-configuration-c.html.
- If CVM memory is configured as per the documentation but OOM conditions still appear, it is worth investigating deeper to exclude issues like memory leaks or incorrect memory limits set on some processes. For example, see KB-9781 https://portal.nutanix.com/kb/9781, which describes an issue with the setroubleshootd process on the CVM that causes increased memory and CPU usage.
- In case there are no obvious issues found with memory usage per process (example: TH-3883 https://jira.nutanix.com/browse/TH-3883), consider increasing CVM memory together with increasing the allocation to the Common Memory Pool (CMP). CMP is the amount of CVM memory used by all services except the Stargate service. Starting from AOS 5.15.4, the memory allocated to CMP is increased automatically as the CVM memory size grows. On earlier versions of AOS, after the CVM memory is increased, we also need to set a gflag to allocate more memory to CMP, otherwise all extra memory will be consumed by Stargate. More details about this can be found in ISB-101 https://confluence.eng.nutanix.com:8443/x/hiIsAw.
2. Issues with disk access caused by problematic PH16 SAS 3008 firmware:
Review guidelines on KB 8896 https://portal.nutanix.com/kb/8896 and ISB-106 https://confluence.eng.nutanix.com:8443/x/SOxKB.
3. Very high CPU usage on the CVM, with the idle counter staying below 1% for 20 seconds or longer.
Increased CPU usage on the CVM may be caused by different issues:
- Any (un)known issues leading to increased CPU usage on the CVM. For example, see KB-9781 https://portal.nutanix.com/kb/9781, which describes an issue with the setroubleshootd process on the CVM that was causing increased memory and CPU usage.
- Anomalously high load on the cluster generated by some scheduled task. For example, in TH-5961 https://jira.nutanix.com/browse/TH-5961 it was found that all UVMs on the metro cluster had an antivirus scan scheduled to happen at the same time. The antivirus was configured to scan archive files and was unpacking them into a temporary location (so it was not only a read-intensive but also a write-intensive load). This generated such a high load for Stargate (load average constantly above 50) that sometimes nothing was left for the Cerebro leader to drive stretch pings. In such a case, we can recommend changing the schedule so that it starts at a different time for different VMs.
- It is possible that the CVMs are placed into resource pools at the vCenter level and are not getting enough CPU resources from the ESXi host during CPU contention (this really has a negative effect only if/when ESXi CPU usage is very high, like 90% and more).
- Incorrect sizing of the CVM for the current workload, so the allocated vCPU count is just not enough to handle the normal incoming IO and background tasks.
If no obvious issue is found that leads to very high CPU load periods on the CVM, then:
- Attach your case to ENG-332943 https://jira.nutanix.com/browse/ENG-332943, where a higher priority for the Cerebro service in Metro clusters was requested, and comment on the ENG with a short summary of your analysis.
- Consider increasing the vCPU allocation of the CVM. An extra 2-6 vCPUs per CVM (make sure to stay within the NUMA node size) may give more processing power to Stargate under load, so it can actually leave enough CPU cycles for other services, including Cerebro.
- Before ENG-332943 is resolved, open a TH to discuss other possibilities to provide relief with an STL. Possible options to discuss are:
- Increasing the default break replication timeout for Metro PDs, which will give the Cerebro leader more time to generate stretch pings. The downside is that during real issues with the network it will take longer for Cerebro to break the metro and resume write operations for VMs on the active side of the metro. So, User VMs may be affected and found in a hung/rebooted state after a real (network-related) metro break happens.
- If the CVM already has 14 vCPUs or more and the issue is still happening often, we may discuss with the engineering team the possibility to exclude additional CPUs from Stargate.
4. Other reasons not listed:
Try to RCA the reason for the lockup state. If unable, engage an STL via TH for further assistance.
|
KB2710
|
Configurable Parameters of Self-Service Restore Feature
|
Common errors seen during SSR operations.
|
Requirements of Self-Service Restore:
Refer to " Requirements and Limitations of Self-Service Restore https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_7:man-file-level-restore-requirements-r.html" of the Self-Service Restore guide.
You may encounter the following issues while using the self-service restore feature.
Case 1
PowerShell cmd on the UVM is timing out intermittently. The attached disk is failing with the error:
Unable to fetch information about physical disks.
Case 2
Virtual IP address changes, and unmount and mount of the Nutanix CD cannot be done.
Case 3
Attach disk fails and a "Timed out communicating with Nutanix Data Protection Service" message is displayed.
|
Case 1 solution:
Increase the timeout value as follows.
Open the command prompt as an administrator.
Go to the CD drive with the label NUTANIX_TOOLS.
Go to the ngtcli folder using the command:
cd ngtcli
Run the below command. Replace timeout_value with the value that you want to specify for timeout.
ngtcli.cmd -w timeout_value
Case 2 and 3 solution:
You can change the virtual IP address directly from ngtcli as follows.
Open the command prompt as an administrator.
Go to the CD drive with the label NUTANIX_TOOLS.
Go to the ngtcli folder using the command:
cd ngtcli
Run the below command. Replace the ip_address with the new virtual IP address.
ngtcli.cmd -i ip_address
|
KB14094
|
API calls to Prism Central intermittently fail due to Mercury deadlock
|
Summary: API calls to Prism Central intermittently fail due to Mercury deadlock
|
Intermittent API call failures due to a Mercury service deadlock can lead to performance degradation and instability within the Prism Central UI, as well as issues with entity-centric snapshots or 3rd-party backup applications. In some situations, users cannot even log in to the UI. Rebooting the PCVM alleviates this issue temporarily, but after a while the API calls start to fail again. This condition has been identified in PC 2022.6.
We can find symptoms of this condition in the following Prism Central logs:
prism_gateway
prism_monitor
Insights
Mercury
NCC checks don't report any service crashes.
Log Identification:
Transport errors found in mercury.INFO logs
E20221101 15:43:02.500140Z 17035 request_processor_handle_v3_api_op.cc:2552] <HandleApiOp: op_id: 93 | type: GET | base_path: /PrismGateway/services/rest/v1/users | external | XFF: XX.XX.XX.XX> Routing API op encountered error kTransportError Http request to endpoint 127.0.0.1:9080 failed with error. Response status: 2
You may see frequent service restart messaging for Tomcat, as well as failure to shut down prism gateway in the prism_monitor.INFO logs.
E20221101 15:43:01.434576Z 17457 prism_monitor.cc:1100] Failed to shutdown prism gateway! Killing with kill
You may also see transport errors in prism_gateway.log while trying to send RPCs:
2022-11-01 08:42:53,695Z rolled over log file
You may see issues within the insights_monitor.INFO while trying to fetch entities.
I1102 13:51:00.018305Z 22023 insights_watcher.go:1231] Hit error while fetching EntityWithMetrics for entityType: recovery_plan_stats, err: NotFound: 18
|
This issue is caused by ENG-489629 https://jira.nutanix.com/browse/ENG-489629 and is resolved in pc.2023.1. As a workaround, we can manually restart the Mercury service to provide short-term relief. This KB provides a script that can be scheduled through crontab on a PCVM to restart Mercury every hour automatically. Download the mercury_restart.py script https://download.nutanix.com/kbattachments/14094/mercury_restart.py into the /home/nutanix/bin directory on a PCVM:
nutanix@PCVM:~$ cd /home/nutanix/bin
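If the PCVM has outbound internet access, the script can be fetched directly with wget using the URL referenced above; otherwise, copy the file to the PCVM manually:
nutanix@PCVM:~$ wget https://download.nutanix.com/kbattachments/14094/mercury_restart.py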
Add a crontab entry to run the script hourly
@hourly bash -lc "/usr/bin/python /home/nutanix/bin/mercury_restart.py" > /dev/null 2>&1
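One way to install the entry is to edit the nutanix user's crontab, paste the line above, and then verify it (standard crontab usage, not specific to this KB):
nutanix@PCVM:~$ crontab -e
nutanix@PCVM:~$ crontab -l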
|
KB5315
|
Use snmpwalk/snmpget in conjunction with Nutanix MIB and monitoring component value within CVM
| null |
Intention: When you have 3rd-party application monitoring, such as the PRTG network monitoring tool, and the monitoring fails (using Nutanix MIB values), refer below to confirm whether the functionality works from within the cluster, and extend the troubleshooting outside of the cluster if need be. Use snmpwalk/snmpget in conjunction with the Nutanix MIB and the monitoring component value (OID) from within a CVM, or from a Linux VM outside of the cluster, to test the SNMP calls.
|
The Nutanix MIB file can be parsed via the Paessler MIB importer to read the OID values for the various supported components.
How to read the OID values for the respective component from the CVM?
1. To retrieve all components in the Nutanix MIB file, run the command below and save the output to a file:
dip=X.X.X.103; snmpwalk -v3 -l authpriv -u user -a SHA -A nutanix/4u -x AES -X nutanix/4u $dip -Ct NUTANIX-MIB::nutanix
Note: grep the file to find the value for a particular component.
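For example, the walk output can be redirected to a file and then searched for a specific component (the output file name is illustrative; the credentials match the sample above):
nutanix@cvm$ dip=X.X.X.103; snmpwalk -v3 -l authpriv -u user -a SHA -A nutanix/4u -x AES -X nutanix/4u $dip -Ct NUTANIX-MIB::nutanix > nutanix_mib_walk.txt
nutanix@cvm$ grep hypervisorMemoryUsagePercent nutanix_mib_walk.txt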
2. To get the component value using the component name from the MIB.txt:
Ex component: hypervisorMemoryUsagePercent
nutanix@NTNX-16SM65110106-A-CVM:~$ dip=X.X.X.103; snmpwalk -v3 -l authpriv -u user -a SHA -A nutanix/4u -x AES -X nutanix/4u -l authPriv $dip -Ct -On NUTANIX-MIB::hypervisorMemoryUsagePercent
Now, the OID value for the component hypervisorMemoryUsagePercent via the Paessler tool would be '.1.3.6.1.4.1.41263.9.1.8'. The last digit in the output above indicates the node count, which is not part of the OID value we get from the Nutanix MIB file.
3. To get the component value using the OID from the MIB.txt:
nutanix@NTNX-16SM65110106-A-CVM:~$ dip=X.X.X.103; snmpwalk -c public -v 3 -OX -u user -a SHA -A nutanix/4u -x AES -X nutanix/4u -l authPriv $dip -On 1.3.6.1.4.1.41263.9.1.8
If you would like to get the value for a particular node, you can use either snmpget / snmpwalk command with the node value. For ex. for node 1,
nutanix@NTNX-16SM65110106-A-CVM:~$ dip=X.X.X.103; snmpwalk -c public -v 3 -OX -u user -a SHA -A nutanix/4u -x AES -X nutanix/4u -l authPriv $dip -On 1.3.6.1.4.1.41263.9.1.8.1
Note: snmpget works against the node OID value as above, while snmpget against the component will not have an instance to collect the value:
nutanix@NTNX-16SM65110106-A-CVM:~$ dip=X.X.X.103; snmpget -c public -v 3 -OX -u prad -a SHA -A nutanix/4u -x AES -X nutanix/4u -l authPriv $dip -On 1.3.6.1.4.1.41263.9.1.8
4. To test snmpwalk from one CVM to another, or from a Linux VM outside the cluster to a cluster CVM, the only change in the above command is to set the target IP via the 'dip' variable and/or use allssh.
|
KB11619
|
General troubleshooting knowledge base article
|
This article contains the basic commands to run while starting with the initial triage.
|
This article contains the basic commands to run while starting with the initial triage on your Nutanix cluster. The outputs of these following commands will help the engineer you are working with to get a general idea of your cluster when the case is created.
|
Run Nutanix Cluster Check (NCC)Runs the Nutanix Cluster Check (NCC) health script to test for potential issues and cluster health. This is a great first step when troubleshooting any cluster issues.
nutanix@cvm$ ncc health_checks run_all
Check local CVM service status
Run the following command to check a single CVM's service status from the CLI. This command shows whether all the services on your CVM are up and running.
nutanix@cvm$ genesis status
The command can also be run in an abbreviated form gs:
nutanix@cvm$ gs
Expected output:
nutanix@cvm$ gs
Note: The PIDs next to the services will be different when you run this command on your cluster
Using the following command is a simple way to check if any services are crashing on a single node in the cluster.
Note: The watch -d genesis status command is not a reliable way to verify the stability of the cluster_health and xtrim services, since they spawn new process IDs temporarily as part of their normal functioning. A new temporary process ID, or a change in the process IDs counted by "watch -d genesis status", may give a false impression that the cluster_health and xtrim services are crashing. Rely on the NCC health check report or review the logs of the "cluster_health" and "xtrim" services to ascertain if they are crashing.
nutanix@cvm$ watch -d genesis status
For large clusters, you may have to execute the command one node at a time. Hence, it may be wise to correlate with the alert to see which CVM and which services are reported to be crashing.
Cluster Commands
Check cluster status
To check the cluster status of all the CVMs in the cluster from the CLI, run the following command:
nutanix@cvm$ cluster status
The command cluster status can also be run in its abbreviated form cs:
nutanix@cvm$ cs
The following command filters out the lines for services that are UP, so on a healthy cluster only the CVM entries and the final status line are displayed:
nutanix@cvm$ cs | grep -v UP
Expected output:
nutanix@cvm$ cs | grep -v UP
CVM: xx.xx.xx.xx Up, ZeusLeader
CVM: xx.xx.xx.xx Up
CVM: xx.xx.xx.xx Up
2021-07-27 15:55:43 INFO cluster:2875 Success!
Start cluster or local service from CLI
To start stopped cluster services from the CLI, run the following commands.
Start stopped services or a single service:
nutanix@cvm$ cluster start
Note: Stopping some of the services may cause production impact. If you are unsure about using these commands, contact Nutanix Support.
Networking
Ping information
The following directory contains files that can help in finding ping stats information:
nutanix@cvm$ cd /home/nutanix/data/logs/sysstats
Similarly, pinging the nodes can help with basic troubleshooting to check if the nodes (hosts/CVMs) are up and running.
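For example, a quick loop over all CVM IPs (svmips is the same helper used later in this article for listing FATAL logs):
nutanix@cvm$ for ip in $(svmips); do ping -c 2 $ip; done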
Software versions
Find the NCC version
nutanix@cvm$ ncc --version
Find AOS version:
nutanix@cvm$ allssh "cat /etc/nutanix/release_version"
Find Foundation version:
nutanix@cvm$ allssh cat ~/foundation/foundation_version
Find the host version:
nutanix@cvm$ hostssh "uname -a"
Find LCM version
nutanix@cvm$ allssh "cat ~/cluster/config/lcm/version.txt"
Note: All of the above information can also be found via the Prism web console, under the LCM tab or under Settings > Upgrade Software.
Upgrades
Upgrades at Nutanix are always designed to be done without downtime for User VMs and their workloads. Refer to KB-6945 https://portal.nutanix.com/kb/6945 for an introduction to how each type of upgrade works and for some useful best practices for administrators. You will find similar information in the Acropolis Upgrade Guide https://portal.nutanix.com/page/documents/details?targetId=Acropolis-Upgrade-Guide-v5_18:Acropolis-Upgrade-Guide-v5_18 (remember to always choose the guide version that matches the AOS currently running on your cluster).
Check upgrade status
nutanix@cvm$ upgrade_status
Hypervisor upgrade status
Check the hypervisor upgrade status from the CLI on any CVM:
nutanix@cvm$ host_upgrade_status
LCM upgrade status
nutanix@cvm$ lcm_upgrade_status
Detailed logs (on every CVM)
nutanix@cvm$ less ~/data/logs/host_upgrade.out
Logs
Find cluster error logs
Find ERROR logs for the cluster:
nutanix@cvm$ allssh "cat ~/data/logs/<COMPONENT NAME or *>.ERROR"
Example for Stargate:
nutanix@cvm$ allssh "cat ~/data/logs/stargate.ERROR"
Find cluster fatal logs
Find FATAL logs for the cluster:
nutanix@cvm$ allssh "cat ~/data/logs/<COMPONENT NAME or *>.FATAL"
Example for Stargate:
nutanix@cvm$ allssh "cat ~/data/logs/stargate.FATAL"
Similarly, you can also run the following script to list the fatals across all the nodes in the cluster:
nutanix@cvm$ for i in `svmips`; do echo "CVM: $i"; ssh $i "ls -ltr /home/nutanix/data/logs/*.FATAL"; done
Find cluster ID
Find the cluster ID for the current cluster:
nutanix@cvm$ ncli cluster info | grep "Cluster Id"
Cluster information Find the cluster information for the current cluster from CLI:
nutanix@cvm$ ncli cluster info
Multi-cluster information To find the multi-cluster (Prism Central) information for the current cluster from CLI use the command below:
nutanix@cvm$ ncli multicluster get-cluster-state
Node reboots/DIMM/SEL information
This information can be found on the IPMI web page (reachable via the Prism web console) under Server Health > Event Log in the IPMI UI. Note: Ensure you are looking at the latest timestamps. To re-arrange the timestamps, simply click on the timestamp field in the table.
Tasks
The following command can be used to see the tasks in progress in the Prism web console:
nutanix@cvm$ ecli task.list include_completed=false
Verifying cluster health status
Before you perform operations such as restarting a CVM or AHV host and putting an AHV host into maintenance mode, check if the cluster can tolerate a single-node failure. See the AHV Administration Guide: Verifying the Cluster Health https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v6_7:ahv-cluster-health-verify-t.html.
Maintenance
The following article provides commands for entering and exiting maintenance mode for the CVM (Controller VM) and hypervisor: https://portal.nutanix.com/kb/4639.
Link to other generic issues and troubleshooting KB articles
[
{
"Description": "Alert - A111066 - Failed to send alert Emails",
"KB": "KB 9937"
},
{
"Description": "Troubleshooting alert emails not being sent",
"KB": "KB 1288"
},
{
"Description": "Support Portal Insight Discovery Overview and Troubleshooting",
"KB": "KB 5782"
},
{
"Description": "Nutanix Remote Support Tunnel Troubleshooting Guide",
"KB": "KB 1044"
},
{
"Description": "cvm_services_status verifies if a service has crashed and generated a core dump in the last 15 minutes.",
"KB": "KB 2472"
},
{
"Description": "cluster_services_status verifies if the Controller VM (CVM) services have restarted recently across the cluster.",
"KB": "KB 3378"
},
{
"Description": "LCM: (Life Cycle Manager) Troubleshooting Guide",
"KB": "KB 4409"
},
{
"Description": "HDD or SSD disk troubleshooting",
"KB": "KB 1113"
},
{
"Description": "check_ntp verifies the NTP configuration of the CVMs (Controller VMs) and hypervisor hosts, and also checks if there are any time drifts on the cluster.",
"KB": "KB 4519"
},
{
"Description": "Alert - A1050, A1008 - IPMIError",
"KB": "KB 4188"
},
{
"Description": "PE-PC Connection Failure alerts",
"KB": "KB 6970"
},
{
"Description": "Alert - A1054 - Node Marked To Be Detached From Metadata Ring",
"KB": "KB 8408"
},
{
"Description": "Alert - A6516 - Average CPU load on Controller VM is critically high",
"KB": "KB 4272"
},
{
"Description": "Alert \"Link on NIC vmnic[x] of host [x.x.x.x] is down\" being raised if an interface was used previously",
"KB": "KB 2566"
},
{
"Description": "Alert - FanSpeedLow",
"KB": "KB 5132"
},
{
"Description": "Alert - A120094 - cluster_memory_running_out_alert_insights",
"KB": "KB 9605"
},
{
"Description": "Alert - A700101 - Tomcat is restarting frequently",
"KB": "KB 8524"
},
{
"Description": "NX Hardware [Memory] – Checking the DIMM Part Number and Speed for ESXi, Hyper-V, and AHV",
"KB": "KB 1580"
},
{
"Description": "NCC Health Check: ipmi_sel_uecc_check/ dimm_sel_check",
"KB": "KB 7177"
},
{
"Description": "ipmi_sel_cecc_check fails even after replacement of bad DIMM",
"KB": "KB 8474"
},
{
"Description": "NCC Hardware Info: show_hardware_info",
"KB": "KB 7084"
},
{
"Description": "AHV host networking",
"KB": "KB 2090"
},
{
"Description": "Check that all metadata disks are mounted",
"KB": "KB 4541"
},
{
"Description": "Failed to update witness server with metro availability status for a protection domain",
"KB": "KB 4376"
},
{
"Description": "Health warnings detected in metadata service",
"KB": "KB 7077"
},
{
"Description": "Curator scan has failed",
"KB": "KB 3786"
},
{
"Description": "Disk capacity is above 90%",
"KB": "KB 3787"
},
{
"Description": "The drive has failed",
"KB": "KB 6287"
},
{
"Description": "One or more critical processes on a node are not responding to the rest of the cluster",
"KB": "KB 3827"
},
{
"Description": "This alert is generated when the 5-minute load average exceeds the threshold of 100 on a Controller VM",
"KB": "KB 4272"
},
{
"Description": "The Stargate process on a node has been down for more than 3 hours",
"KB": "KB 3784"
},
{
"Description": "The SATADOM has exceeded a wear threshold",
"KB": "KB 4137"
},
{
"Description": "ECC errors over the last day have exceeded the one-day threshold",
"KB": "KB 4116"
},
{
"Description": "ECC errors over the last 10 days have exceed the 10-day threshold",
"KB": "KB 4116"
},
{
"Description": "Hardware Clock Failure",
"KB": "KB 4120"
},
{
"Description": "A physical drive in the node has been reported as bad",
"KB": "KB 4158"
},
{
"Description": "One of the power supplies in the chassis has been reported down",
"KB": "KB 4141"
},
{
"Description": "A node in the cluster has an abnormally high temperature",
"KB": "KB 4138"
},
{
"Description": "SATA DOM in the node cannot be reached",
"KB": "KB 7813"
},
{
"Description": "SATA DOM in the node has failed",
"KB": "KB 1850"
},
{
"Description": "Number of Shell vDisks in the cluster is above the threshold",
"KB": "KB 8559"
},
{
"Description": "Check for number of UECC errors for last one day in the IPMI SEL",
"KB": "KB 8885"
},
{
"Description": "Incorrect LCM family can cause upgrade failure",
"KB": "KB 9898"
},
{
"Description": "LCM upgrade failed during reboot_from_phoenix stage",
"KB": "KB 9437"
},
{
"Description": "IPMI SEL UECC Check",
"KB": "KB 8885"
},
{
"Description": "Investigating ECCErrorsLast1Day and ECCErrorsLast10Days issues on a Nutanix NX nodes",
"KB": "KB 4116"
},
{
"Description": "Determine if a DIMM module is degraded.",
"KB": "KB 3357"
},
{
"Description": "Overview of Memory related enhancements introduced in BIOS: 42.300 or above for G6, G7 platforms",
"KB": "KB 9137"
},
{
"Description": "Understanding and enabling ePPR (extended Post Package Repair) for G6, G7 platforms",
"KB": "KB 9562"
},
{
"Description": "NX Hardware [Memory (CECC)] - G6, G7, G8 platforms - hPPR Diagnosis",
"KB": "KB 11794"
},
{
"Description": "NX Hardware [Memory] – G6, G7 platforms - DIMM Error handling and replacement policy",
"KB": "KB 7503"
},
{
"Description": "power_supply_check",
"KB": "KB 7386"
}
]
|
KB8173
|
LCM only shows firmware updates for platforms that support them
|
LCM only shows firmware updates for platforms that support them.
|
When attempting to upgrade firmware via LCM, you may see the following message on the Inventory or Updates > Firmware pages after running an Inventory operation from LCM GUI in Prism.
LCM only shows firmware updates for platforms that support them. For details, see
|
This error is returned when LCM does not currently support the hardware platform on which the Inventory operation was performed for firmware updates. Platforms that support firmware updates via LCM:
- Nutanix NX
- Dell XC / XC Core Series
- Lenovo HX / HX Ready Series
- HPE DX Series
- HPE DL Series (Only Gen10)
- Fujitsu XF
- Intel DCS
- Inspur InMerge
- Cisco UCS
If the hardware model belongs to the list above and you still see the error message, or if you believe it is supported despite not being on the list, engage Nutanix Support http://portal.nutanix.com.
|
KB9672
|
Prism Central aplos_engine service crashing after upgrading to 5.17.x or newer
|
This KB describes a scenario in which aplos_engine service starts crashing on PC after an upgrade to 5.17 or newer due to cerberus stuck tasks in the cluster.
|
Nutanix Self-Service (NSS) is formerly known as Calm. After upgrading to Prism Central 5.17.x, pc.2020.x, or newer, the aplos_engine service begins crashing continuously. This happens as the service tries to process stuck Cerberus tasks.
Symptoms
There is an alert for cluster services restarting indicating aplos_engine as the affected service:
ID : yyyyy-yyy-yyy-yyy-yyyy
NCC health check shows:
Detailed information for cvm_services_status: Node x.x.x.x: FAIL: aplos_engine crashed more than 10 times in last 15 minutes, Refer to KB 2472 (http://portal.nutanix.com/kb/2472) for details on cvm_services_status or Recheck with: ncc health_checks system_checks cvm_services_status --cvm_list=x.x.x.x
A high number of Cerberus tasks are present in Ergon and they do not make progress. Some are running and most are queued:
nutanix@PCVM:~$ ecli task.list include_completed=false
The aplos_engine FATAL logs continuously exit with signal: 9:
2020-07-14 09:50:29 ERROR 21892 /home/jenkins.svc/workspace/electric-seeds/postcommit/euphrates-5.17.1-stable/gcc-release-nosharedlibs-pc-x86_64/infrastructure/cluster/service_monitor/service_monitor.c:201 StartServiceMonitor: Child 12723 exited with signal: 9
|
As per discussion at ENG-278035 https://jira.nutanix.com/browse/ENG-278035, PM has decided to deprecate Cerberus service and there will not be a fix for the issue.
Customers affected by this problem should run the steps below as a workaround. If the customer is using Projects, the Quota feature will no longer work after applying the workaround. SREs should assist customers with migration to Calm.
Workaround
The workaround consists of disabling the Cerberus service by applying a gflag to aplos_engine, restarting the service, and then aborting all Cerberus tasks:
Apply gflag to aplos_engine ( KB 1071 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA0600000008SsHCAU) to disable Cerberus:
Note: On a scale-out PC, this step has to be applied to all 3 PC VMs.
disable_cerberus=True (Default = False)
Restart the aplos_engine service in the PC VM(s):
nutanix@PCVM:~$ genesis stop aplos_engine; cluster start
Abort Cerberus tasks:
First, get a list of all cerberus tasks currently running:
nutanix@PCVM:~$ ecli task.list include_completed=false |grep cerberus
Then, mark each task UUID as aborted
nutanix@PCVM:~$ ~/bin/ergon_update_task --task_uuid='172ae3b8-3a6c-4613-9ba5-53afcaa31f9c' --task_status=aborted
If there are numerous Cerberus tasks in queued status, the below command can be used. This will delete all Cerberus tasks that are in queued status
nutanix@PCVM:~$ for task_uuid in $(ecli task.list component_list=cerberus status_list=kQueued limit=1000 | awk 'NR>1{print $1}'); do yes | ergon_update_task --task_uuid $task_uuid --task_status=aborted; done
Monitor the environment and make sure aplos_engine is no longer crashing and there are no further Cerberus tasks or aplos_engine FATALs.
Verify the size of the domain entities in IDF does not exceed 100000:
Execute the following command on PCVM:
nutanix@PCVM:~$ links -dump http://127.0.0.1:2027/detailed_unevictable_cache_stats > ~/tmp/idf_cache.txt
Collect and analyze idf_cache.txt. Note the size and quantity of entities for the 'domain' entity type. If there are over 100000 domains or the aggregate size is over 10 MB, open a new ONCALL with reference ENG-326396 and ensure Lonnie Hutchinson is involved.
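A quick way to pull the 'domain' entity statistics out of the collected file is a simple grep (adjust the pattern if the cache-stats layout differs):
nutanix@PCVM:~$ grep -i domain ~/tmp/idf_cache.txt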
If aplos_engine is still crashing, open a new ONCALL.
|
KB13182
|
File Analytics Audit Trail reports file and folder path change as a "Rename" operation
|
File Analytics Audit currently does not have a separate filter to report any changes location for files/folders. This operation is seen as a "Rename" operation
|
File Analytics Audit Trails help identify the different types of operations performed on files and folders. You can filter specific operations (e.g. create, delete, rename, read, permission change, etc.) performed by a user on specific files or folders. The list of operations and filters is available in the File Analytics User Guide https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_1:ana-fs-analytics-audit-trails-c.html. One common operation a user might perform very frequently is moving files or folders from one folder to another (i.e. a cut-paste operation in Windows or the mv command in Linux). Currently, File Analytics considers this a "Rename" operation. Here is what the audit of a file move looks like: the file Temp within the folder Welcome is moved to the folder Migrate. The View Audit details show a rename event for both the name change and the folder move.
|
This is expected behaviour in the product.
|
""ISB-100-2019-05-30"": ""Title""
| null | null | null | null |
KB13204
|
LCM : Upgrade from installed version is not supported by LCM
|
If the installed version for the firmware is not known to LCM, the upgrade for that firmware is disabled. This KB outlines the way customers can move out of this situation and re-enable the upgrades
|
If the installed version for the firmware is not known to LCM, the upgrade for that firmware is disabled with the below reason
Upgrade from version xxxxx is not supported by LCM. Please follow KB https://portal.nutanix.com/kb/XXXX for more information.
|
Note: If you have a higher version of firmware installed than what is available in LCM, this message is expected and no further action is needed. In case of lower firmware versions, follow the instructions below for the respective platforms.
NX: Upgrade is not shown because the version is not supported by LCM
To bring your cluster to an LCM-supported version of the firmware, upgrade to the latest version of firmware manually by following the KB-10634 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000bvCoCAI.
For all OEM and third-party platforms please reach out to their support teams for assistance with manual firmware upgrades. Following are some known scenarios on Dell and HPE platforms and their respective solutions.
CISCO:
You may see a Disabled reason message stating that upgrade from version 4.2(2a)C is not supported by LCM. In such cases, please reach out to the Cisco Support team to manually upgrade the firmware version to at least 4.2(3c)C for UCSM clusters.
DELL:
iSM: ESX 6 iSM Utility: dell_gen_14: iSM version installed on the host is either lower than 3.4.0 or higher than 3.6.0. Please refer to KB-11516 http://portal.nutanix.com/kb/11516.
Dell Update Manager: PT Agent on ESX 6: dell_gen_14: The PTAgent version installed on the host is either lower than 1.9.0-371 or higher than 2.4.0.43. Please refer to KB-11516 http://portal.nutanix.com/kb/11516.
HPE:
If the version is shown as "unverified", "none" or "undefined" in LCM, please refer to KB-8707 https://portal.nutanix.com/kb/8707 for details. For the unavailability of SPP upgrades under LCM and manual SPP upgrades, refer to KB-11979 https://portal.nutanix.com/kb/11979. Note: BIOS should be updated to the SPP version when the manual upgrade is done.
INTEL, LENOVO, INSPUR, FUJITSU : Please reach out to Hardware vendor support for assistance with Manual firmware updates.
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/.
|
KB14090
|
Prism Central API failed to fetch images intermittently after upgrade to pc.2022.9
|
Prism Central API failed to fetch images intermittently after upgrade to pc.2022.9
|
Prism Central API fails to fetch images intermittently after upgrading to pc.2022.9 from pc.2022.4.x or pc.2022.6.x. If a user uploads images before upgrading to pc.2022.9 and then tries to retrieve a list of images post-upgrade, the request may intermittently fail with an error. If the user logs in as a project user and tries to create a VM, it may also fail with an error.
|
As a workaround, restart the aplos service on all PCVMs:
nutanix@PCVM:~$ allssh "genesis stop aplos; cluster start"
The issue is tracked via ENG-492451 https://jira.nutanix.com/browse/ENG-492451 and will be resolved in a future release.
|
KB9913
|
AOS upgrade stuck on hyper-v clusters "Could-not-process-the-host-manifest-Errno-104-Connection-reset-by-peer"
|
Issue observed in several Hyper-V clusters and reproduced internally in a lab on ONCALL-11201.
|
AOS upgrade could fail to continue on the CVM.
nutanix@CVM~$ allssh stargate --version
Found below error in genesis logs.
CRITICAL node_manager.py:2624 Failed to run the finish script, ret 1
In the finish script on the CVM in question, it fails as it cannot find the task in question.
2020-08-25 08:17:12 INFO hyperv.py:541 DiskMonitorService is not applicable to Windows 2016. Skipping..
On the Hyper-V host for the CVM noted above, we can see the following Error ID 0 Application events (Event Viewer -> Windows Logs -> Application):
Log Name: Application
|
Please NOTE
As mentioned in ONCALL-11201, create a mandatory ONCALL before applying the workaround to understand what is causing the issue; engineering needs to debug live while the issue is happening, as the logs do not indicate which process is holding the file.
Workaround
1 - Rename the ipmicfg folder on the problematic Hyper-v node.
Run command below on host
> cd "C:\Program Files\Nutanix\"
> ren ipmicfg ipmicfg.old
2 - Restart genesis on CVM in question
nutanix@CVM~$ genesis restart
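Before retrying the upgrade, the rename can be confirmed on the host (illustrative only; standard Windows command):
> dir "C:\Program Files\Nutanix\"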
|
KB15233
|
Prism Central: IAM users intermittently get 403 Access Denied error due the cape pod DB replica not in sync
|
This article explains the break-fix procedure when PC login intermittently fails because the cape pod DB replica is out of sync.
|
Intermittent, inconsistent authorization problems are possible if the IAMv2 cape Postgres database is out of sync on a CMSP-enabled Prism Central. Symptoms may include a user or group sporadically missing its assigned ACPs, causing authorization errors. Another observed symptom is a deleted custom ACP role reappearing sporadically in the PCVM UI or nuclei output.
Symptom: Intermittently, an IAM user receives error 403 rather than a successful login.
nuclei user.get run from PC intermittently returns the user details with empty access_control_policy_reference_list.
For example, in the sample below for user UUID 0c3a85c0-25aa-51a3-940d-9e69b3383d8a, on a successful attempt we see the user has assigned ACPs:
nutanix@PCVM:~$ nuclei user.get 0c3a85c0-25aa-51a3-940d-9e69b3383d8a
Sporadically, for the same user, we may get output with an empty access_control_policy_reference_list:
nutanix@PCVM:~$ nuclei user.get 0c3a85c0-25aa-51a3-940d-9e69b3383d8a
In aplos.out, the following signature can be found filtered by the troublesome user UUID:
2023-05-11 11:08:18,569Z ERROR iamv2_auth_info.py:201 User user@domain [0c3a85c0-25aa-51a3-940d-9e69b3383d8a] is not allowed to access the system without access control policy.
Symptom: Inconsistent data populated in Prism when updating Projects, users and groups in projects, or when viewing nuclei output
Users may also experience inconsistent information populated while editing projects, such as "Add/Edit Users & Groups", where the Users and Role being mapped are sometimes populated with data and sometimes not. For example, when creating a project, there is a warning shown in Prism against the project:
access policy 38e918ad-9960-4678-85af-2adf6a5a92b2 not found
nuclei access_control_policy.list run from PCVM command line intermittently returns http error codes when querying this UUID.
When logged into PCVM as "nutanix" user, enter "nuclei" to enter the nuclei interface:
<nuclei> access_control_policy.get 38e918ad-9960-4678-85af-2adf6a5a92b2
or
<nuclei> access_control_policy.get 38e918ad-9960-4678-85af-2adf6a5a92b2
However when run on other occasions, the list is returned, and the access_control_policy UUID in the error above won't be seen.
Attempts to edit the roles may show an error appearing in red at the top of the Prism view:
Internal Server Error. role 556a37ce-7d64-419f-b67a-cObbf63cb219 not found
For both symptoms: Check IAM postgres database in cape pods sync status.
Use the steps below to check if the pods are in sync; differing TL values or a non-zero "Lag in MB" value indicates a problem:
nutanix@PCVM:~$ LEADER_POD=$(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector=pg-cluster=cape,role=master -o jsonpath='{.items[*].metadata.name}')
Cause:
To process authorization requests, the IAMv2 themis service uses an HA Postgres database running in the cape pods. Replication configured between the cape leader and replica pods is expected to be in sync. The themis service executes read-only SELECT statements on a random cape pod. If a cape replica is out of sync when themis executes a SELECT statement against it, themis may receive outdated data and provide the wrong authorization response.
|
Workaround:
Identify and fix the cause of the out-of-sync cape replica. Inspect the replica cape pod logs to identify the cause of the lag. The most common cause is a WAL segment that was removed on the leader while replication was down for any reason; in this scenario, when the replica pod comes up, it will be unable to resume replication:
nutanix@PCVM:~$ REPLICA_POD=$(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector=pg-cluster=cape,role=replica -o jsonpath='{.items[*].metadata.name}')
Note: if the cause of the replica lag is different, investigate it and engage an STL if necessary. After confirming the cause, reinitialize the cape replica from the leader:
nutanix@PCVM:~$ REPLICA_POD=$(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector=pg-cluster=cape,role=replica -o jsonpath='{.items[*].metadata.name}')
After the replica reinit, confirm the leader and replica pods are in sync: "Lag in MB" should be 0 and the TL values should be identical:
nutanix@PCVM:~$ LEADER_POD=$(sudo kubectl -n ntnx-base get pods --field-selector=status.phase=Running --selector=pg-cluster=cape,role=master -o jsonpath='{.items[*].metadata.name}')
|
KB3027
|
NCC Health Check: multiple_fsvm_on_single_node_check
|
The NCC health check multiple_fsvm_on_single_node_check verifies that there is only one File Server VM per node when using Nutanix Files.
|
The NCC health check multiple_fsvm_on_single_node_check validates if any of the nodes have more than one File Server VM (or NVM). If a File Server VM is running during a node failure, live migration begins and the File Server VM is migrated to a different node. If the new node has a File Server VM already running, then it will be running two File Server VMs.
Running the NCC Check
Run this check as part of the complete NCC Health Checks:
nutanix@CVM:~$ ncc health_checks run_all
Or run this check individually:
nutanix@CVM:~$ ncc health_checks fileserver_checks fileserver_cvm_checks multiple_fsvm_on_single_node_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every hour, by default. This check will generate an alert after 1 failure.
Sample output
For Status: PASS
Running /health_checks/fileserver_checks/fileserver_cvm_checks/multiple_fsvm_on_single_node_check [ PASS ]
For Status: FAIL
Running /health_checks/fileserver_checks/fileserver_cvm_checks/multiple_fsvm_on_single_node_check [ FAIL ]
Output messaging
[
{
"Description": "File server VMs are running on a single node"
},
{
"Description": "Contact Nutanix support."
},
{
"Description": "File server multiple VMs on single node check"
},
{
"Description": "File server file_server_name has multiple VMs on single node"
}
]
|
Troubleshooting
If the check reports a FAIL status, then there is more than one File Server VM on a node.
Resolving the Issue
The solution would be to migrate or vMotion the most recent File Server VM back to its original node. Refer to VMware vSphere Product Documentation: Move a Virtual Machine to a Cluster https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.resmgmt.doc/GUID-2A95A99B-D431-484E-9D47-E2313C9DE496.html for instructions on migrating VMs to other hosts in the vSphere client (ESXi).
If the original node is no longer available, select a different node in the cluster that does not contain a File Server VM.
If the file server has been configured for VF passthrough, run the command to migrate an FSVM off a host:
nutanix@FSVM:~$ afs infra.migrate_vf_passthrough_fsvm <fsvm_name> host_uuid=<host_uuid>
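If the host UUID is not already at hand, it can typically be listed from any CVM (a minimal sketch; field labels may vary slightly between AOS versions):
nutanix@CVM:~$ ncli host ls | egrep -i "uuid|name"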
For assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/. Collect additional information and attach it to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@CVM:~$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 https://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
Requesting Assistance
If you need further assistance from Nutanix Support, add a comment to the case on the support portal asking Nutanix Support to contact you. If you need urgent assistance, contact the Support Team by calling one of our Global Support Phone Numbers https://www.nutanix.com/support-services/product-support/support-phone-numbers/. You can also click the Escalate button in the case and explain the urgency in the comment, and Nutanix Support will be in contact.
|
KB10367
|
File Analytics - Share Scan from File Analytics fails with "failed (mount error (13): Permission Denied) "
|
File Analytics - Share Scan from File Analytics fails with "failed (mount error (13): Permission Denied) "
|
On Nutanix File Analytics, when a user attempts a scan operation for a share or set of shares together, the following error may occur:
Scan failed (mount error (13): Permission Denied)
|
Scan failed (mount error (13): Permission Denied) is normally seen when there is an authentication issue between File Analytics and the File Server. To troubleshoot:
1. Verify the AD password is correct/unchanged for the File Server admin account used in FA. If the password has changed, update the credentials in the FA UI: click the gear icon > Update AD/LDAP Configuration.
2. Verify SMB health on the File Server and confirm that NTLM/Kerberos authentication is working (PASSED or FAILED):
nutanix@FSVM:~$ afs smb.health_check verbose=true
Cluster Health Status:
Primary Domain Status: PASSED
cmd = /usr/local/nutanix/cluster/bin/sudo_wrapper smbclient '//<FS_FDQN>/IPC$' -P -k -c ""
Performing share access passed for share: IPC$
Kerberos Share Access: PASSED
cmd = /usr/local/nutanix/cluster/bin/sudo_wrapper smbclient '//<FSVM_External_IP>/IPC$' -P -c ""
Performing share access passed for share: IPC$
NTLM Share Access: PASSED

Node: xx.xx.xx.xx Health Status:
Clock Skew (0 second(s)): PASSED
smbd status: PASSED
winbindd status: PASSED

Node: xx.xx.xx.xx Health Status:
Clock Skew (0 second(s)): PASSED
smbd status: PASSED
winbindd status: PASSED

Node: xx.xx.xx.xx Health Status:
Clock Skew (0 second(s)): PASSED
smbd status: PASSED
winbindd status: PASSED
3. Check NTLM from any FSVM and confirm that you are getting domain output.
nutanix@FSVM:~$ sudo smbclient '//<fsvm_external_ip>/IPC$' -P -c ""
Domain=[LABDC] OS=[] Server=[]
Note: It is highly recommended to make sure File Analytics scans are functional; otherwise, there will be inconsistent stats in the File Analytics UI.
|
KB16576
|
How to interact with the Git repo hosted in Gitea
|
How to interact with the Git repo hosted in Gitea
|
Background
DKP's Kommander deploys Gitea, which is where the various Kommander apps artifacts are hosted. FluxCD's GitRepository is then pointed to this instance of Gitea, as a source.
There could be scenarios where there is a need to interact with the Git repo hosted inside Gitea. For example, the dkp upgrade kommander command can exit with the following error:
✗ Ensuring application definitions are updated
failed to ensure "Ensuring application definitions are updated": failed to update app definitions: failed to clone repository https://localhost:42377/kommander/kommander.git: could not clone repo: repository not found
or you might see a kustomize error like:
kustomization path not found: stat /tmp/kustomization-52138408/services/gatekeeper/3.8.1: no such file or directory
Therefore it would be helpful in troubleshooting, if we are able to clone the git repo and check if the artifacts exist or not.
|
A quick way to clone the git repo is via the dkp cli
./dkp experimental gitea clone
This would clone the repo into a 'kommander' directory in your machine.
Another way would be to port forward into the gitea instance and then perform your git operations.
kubectl port-forward gitea-0 -n kommander 9100:3000
While the port-forward is active, run the following on another terminal:
git clone -c http.sslVerify=false https://$(kubectl -n kommander-flux get secret kommander-git-credentials -o go-template='{{.data.username|base64decode}}'):$(kubectl -n kommander-flux get secret kommander-git-credentials -o go-template='{{.data.password|base64decode}}')@localhost:9100/kommander/kommander.git
cd kommander/
From here you can do various git operations.
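For example (illustrative only, based on the kustomize error shown above), you can check whether the reported artifact path exists in the repo and review recent commits:
ls services/gatekeeper/
git log --oneline -5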
Please note that the Gitea instance is only supported for use by Kommander, it is not supported for use with your applications or projects. Also, take caution when interacting with the repo. Any changes pushed to the git repo will be reflected onto your Kommander cluster. Any changes to this repo should only be made with guidance from support.
|
KB11689
|
Global Recipient List Missing
|
From AOS 5.19, the "Global Recipient List" option is missing.
|
Customers have reported that when creating a new rule under Alert Email Configuration in AOS 5.19 and above, they cannot see the "Global Recipient List" option.
Rules created in older AOS versions where "Global Recipient List" is selected will be retained and continue to work after upgrading AOS to 5.19 and above. However, if the Global Recipient List checkbox is unchecked, it will disappear from old rules permanently.
|
The Engineering team has confirmed that the change is intentional and it is not a bug.
"Global Recipient List" is retained after upgrade for legacy reasons.
|
""ISB-100-2019-05-30"": ""Description""
| null | null | null | null |
KB1496
|
NCC Health Check: disk_storage_pool_check
|
The NCC health check disk_storage_pool_check verifies that all disks belong to a storage pool. If a disk does not belong to any storage pool, it will not be used.
|
The NCC disk_storage_pool_check verifies that all disks belong to a storage pool. If a disk does not belong to any storage pool, it will not be used.
This check returns a PASS status if all disks belong to a storage pool. Otherwise, if a disk does not belong to any storage pool, it returns a FAIL status.
Running the NCC Check
It can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
or individually as:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_storage_pool_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every hour, by default. This check will generate an INFO alert A1063 after 1 failure across scheduled intervals.
Sample Output
For Status: PASS
Running : health_checks hardware_checks disk_checks disk_storage_pool_check
For Status: FAIL
Running /health_checks/hardware_checks/disk_checks/disk_storage_pool_check on the node [ FAIL ]
Output messaging
[
{
"Check ID": "Checks if all disks in the cluster are assigned to a storage pool."
},
{
"Check ID": "The cluster was not configured correctly during the installation or drive replacement was not completed correctly."
},
{
"Check ID": "Add the unused disk to the storage pool."
},
{
"Check ID": "The cluster was not configured correctly during the installation or a drive replacement was not completed correctly."
},
{
"Check ID": "A1063"
},
{
"Check ID": "Disk Unused"
},
{
"Check ID": "Drive with disk id disk_id on Controller VM service_vm_external_ip is not part of any storage pool."
}
]
|
If disk_storage_pool_check returns a FAIL, add the disk to the storage pool.
Get the list of storage pools (there's usually one):
nutanix@cvm$ ncli sp ls | grep Name
Add all free disks to a storage pool:
nutanix@cvm$ ncli sp edit add-all-free-disks=true name=[Storage pool name from above]
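After adding the disks, the individual check can be re-run to confirm it now passes:
nutanix@cvm$ ncc health_checks hardware_checks disk_checks disk_storage_pool_check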
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/.
|
KB14309
|
NDB - MySQL software profile creation fails for commercial MySQL versions
|
MySQL software profile creation fails for commercial MySQL versions. NDB currently does not support commercial MySQL versions.
|
Attempting to create a software profile for MySQL commercial version fails with the following error:
'"local variable \'db_version\' referenced before assignment'
|
Check the MySQL version on the DB Server used for the profile creation. The standard Linux tools can be used for it. For example:
[root@VM ~]# rpm -qa | grep -i mysql
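For illustration only (the package names and versions below are examples, not taken from a specific system): a commercial build typically reports packages prefixed with mysql-commercial, whereas a community build reports mysql-community packages:
mysql-commercial-server-8.0.32-1.1.el7.x86_64   <- commercial, not supported for NDB profile creation
mysql-community-server-8.0.32-1.el7.x86_64      <- community, supported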
NDB currently does not support commercial MySQL versions; only community versions are supported. Support for commercial versions of MySQL is planned for a future NDB release.
|
KB9836
|
[Hyper-v] NIC Driver Version 4.1.131.0 causing issue with the Intel X550 Network Adapters
| null |
While imaging a new node using Portable Foundation for Windows version 4.5.3.1, the imaging fails. Foundation installs the hypervisor and AOS, then fails at the first boot with a timeout:
20200722 16:25:30 INFO Installation of Acropolis base software successful: Installation successful.
Checking the firstboot.log, we see the network adapters are not up. The firstboot fails right after installing the chipset and NIC drivers.
7/22/2020 6:40 AM WarningAn exception occurred: Unable to connect to the remote server
When you check the NIC status, all the NICs are in a disconnected state. The network adapters generally take around 6-10 minutes to change status from Disconnected to Up. After performing a Restart-NetAdapter, the NIC might take around 10 minutes to come up.
|
A permanent solution is pending under ENG-331191 https://jira.nutanix.com/browse/ENG-331191. Workaround:
While Portable Foundation is running, manual intervention is required. There are two ways to fix this:
1. Roll back the drivers after first_boot has completely failed.
Let the first_boot script fail. Firstboot will fail after nic_drivers_installed. Remove the fail marker from D:\markers.
del D:\markers\firstboot_fail
Uninstall the NIC drivers by running the below file: (This will force the NIC drivers to use the native version 3.12.11.1)
D:\sources\intel-driver\APPS\SETUP\SETUPBD\Winx64\PROUnstl.exe
This will need a reboot to get the adapters back to default version.
Configure the firstboot to run after the reboot so it starts automatically after the reboot.
2. Roll back the drivers while first_boot is running.
Monitor the first_boot script run. While it is installing the chipset drivers, create a new file named nic_drivers_installed (see the sketch after these steps). This will skip the upgrade of the NIC drivers, but a reboot is still pending. In this case, first_boot will fail because the reboot is pending. Remove the fail marker and restart the node to finish the firstboot.
del D:\markers\firstboot_fail
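A minimal sketch of creating the marker file for option 2 from a command prompt on the host, assuming the marker belongs in the same D:\markers directory as the other firstboot markers:
> type nul > D:\markers\nic_drivers_installed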
|
KB15942
|
Openshift Installation fails due to PE-PC Connectivity Issues
|
During Openshift installation, the process could fail due to incorrect connectivity between PC and PE clusters
|
While installing the OpenShift cluster using the Installer-provisioned infrastructure https://docs.openshift.com/container-platform/4.13/installing/installing_nutanix/installing-nutanix-installer-provisioned.html, the "openshift-install" script could fail if it cannot gather information from the Prism Element (PE) cluster attached to Prism Central (PC):
ocp-lab-ntx:/ocp-install # ./openshift-install create install-config
After a validation with Red Hat, the "openshift-install" script sends the following curl to PC in order to get information from all of the PE clusters registered in PC:
curl -X 'POST' 'https://PCFQDN:9440/api/nutanix/v3/subnets/list' -H 'accept: application/json' -H 'Content-Type: application/json' -H 'X-Nutanix-Client-Type: ui' -d '{ "kind": "subnet", "filter": "cluster_name==" }' -k -v -u PC_user
If one or more PE clusters registered on PC are not sending the information (in this case, for the cluster VLANs), the installation cannot continue. A quick way to validate the expected VLANs configured across the clusters is by running the following command from a PCVM:
nutanix@PCVM$ nuclei subnet.list count=0
One of the causes of not getting all the info across the clusters is when the connectivity between PE and PC is not working correctly, as in the following example:
nutanix@PCVM$ ncli multicluster get-cluster-state
|
The OpenShift deployment script requires that all of the registered PE clusters respond correctly to PC, so it is necessary to make sure they are working correctly. To validate the connectivity of PE clusters, review KB 6970 http://portal.nutanix.com/kb/6970. If there is a stale entry of an unregistered PE on the PC, review KB 4944 http://portal.nutanix.com/kb/4944.
|
KB15988
|
PC-DR - Protected Nutanix Files servers are not visible after Prism Central restore operation
|
Protected Nutanix Files servers are missing on the PC interface after a PC-DR restore operation
|
After successfully restoring a Prism Central that was protected via PC-DR, Nutanix Files servers / SmartDR / Smart Sync policies are not listed in the Prism interface.
|
Starting with Nutanix Files 4.2, when a PE is registered to a PC, all the file servers in the PE sync IDF with the PC. It is important to confirm that Prism Central is running at least Files Manager (FM) 4.2. If not, upgrade it via LCM on the restored Prism Central. To obtain the FM version via CLI, use the command below or check the LCM page on Prism Central:
nutanix@PCVM# files_manager_cli get_version
After the FM upgrade, Smart DR policies and Files server information should be correctly displayed. Since the replication sync status is stored in the cluster, it should also be correct on Prism Central.
|
""ISB-100-2019-05-30"": ""Title""
| null | null | null | null |
KB14311
|
NDB - Provisioning a database from a backup on a DB server running a German Windows version fails with "failed to get partition list".
|
When provisioning a Database from a Backup on a Windows Server the operation fails with “failed to get partition list” on German OS version.
|
When provisioning a database from a backup on a Windows Server with a German OS version, the operation fails with "failed to get partition list". You can see that the Pre-processing task completes, but the Attach and mount disks step fails, and then there is a rollback.
In the operation ID log in /logs/drivers/sqlserver_database/, you can see errors similar to this one:
[2023-02-06 15:18:28,128] [140051686676288] [ERROR ] [0000-NOPID],error is:
This happens when NDB cannot parse German characters from the DISKPART output; the workflow then does not complete because the partitions could not be listed. NDB 2.5 introduced support for Windows servers with the German language pack, which allows new databases to be provisioned successfully on Windows servers, but the improvement did not cover provisioning a database from a backup file.
|
There is no workaround at the moment, and engineering is working on a code fix to cover this scenario as well.
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Various partitions
|
their mount points and the used and available space on each""
| null | null |
KB16137
|
File Analytics - Upgrade hidden from LCM when only PC Deployed File Servers exist on the hosted AOS cluster
|
File Analytics will be hidden from LCM upgrades if only PC Deployed File Servers exist on the hosted AOS cluster.
|
Users attempting to upgrade File Analytics via LCM will not see File Analytics as an upgradeable software component when only PC-deployed File Servers exist on the hosting AOS cluster. This is due to a pre-check that looks to match compatible AOS and Files versions to the intended upgrade version of File Analytics. Nutanix hides PC-deployed File Server information from Prism Element (AOS) by design. As such, when the LCM inventory runs, it interprets a 'null' value as no File Server version being found. Nutanix Engineering is reviewing this to address it in the future.
|
A workaround is to deploy a single-FSVM File Server from Prism Element. This would satisfy the version requirements that the LCM inventory is searching for. Requirements:
1x FSVM
1 Internal IP (storage)
1 External IP (client)
On deployment:
Do not enable SMB or NFS protocols.
A domain name will be filled in but will not be joined to the domain.
Uncheck the "Create PD" option during deployment.
Nutanix Files User Guide: See File Server Basics https://portal.nutanix.com/page/documents/details?targetId=Files:fil-file-server-create-wc-t.html
Fill in a domain/DNS name (even though we are not joining it).
Select Customize > Configure Manually and reduce the FSVM count down to 1.
Proceed in the wizard to configure networking.
Uncheck "Create PD" on the last page of the wizard.
Wait for FS creation to complete.
Run LCM inventory.
Upgrade FA.
|
KB13229
|
Nutanix Files - After upgrade to 4.0.3, NFS IO gets stuck for 6-7 minutes with the error "Reconnect with STATD retry attempt: 0"
|
This KB explains the root cause of an extended NFS export outage after upgrading to AFS 4.0.3. The improvement is in the 4.1.0.1 and 4.2.0 releases.
|
The ganesha.log will show the errors followed by an NFS restart event.
8/04/2022 20:43:15Z : ntnx-x-x-x-x-a-fsvm 34840[::ffff:x.x.x.x] [io_14] nsm_monitor :NLM :EVENT :Monitor failed x.x.x.x SM_MON failed: RPC: Timed out
The messages log on the NFS client shows hung tasks and "nfs server not responding" messages:
Apr 8 16:22:45 prdappsvr1 kernel: INFO: task mv:9630 blocked for more than 120 seconds.
|
By design, during HA takeover there is a grace period of 90 seconds where all IOs could be stalled. The takeover in this particular scenario took 32 seconds, so clients would have observed a minimum of 122 seconds of stalled IO, which can be higher as the client does not retry immediately. The Ganesha log shows that the Ganesha thread was blocked on nlm_connect for 5 minutes, after which it restarted. This happened 2 times, which further contributed to blocking client IO for a long time if a client was taking/reclaiming a lock. Ganesha was not able to reach statd to update the lock request; this resulted in 10 retries over 5 minutes before Ganesha was restarted. The following log signatures are for verification during RCA analysis.
08/04/2022 20:37:52Z : ntnx-x-x-x-x-a-fsvm 30627[::ffff:x.x.x.x] [io_20] rpc :TIRPC :EVENT :svc_rqst_hook_events: 0x7ff56d472000 fd 1024 xp_refcnt 1 sr_rec 0x7ff58184d030 evchan 2 ev_refcnt 41 epoll_fd 29 control fd pair (27:28) direction in hook failed (9)
As a workaround, once on AFS version 4.0.3, reduce the retries to 2 before Ganesha restarts in such a situation:
nutanix@FSVM:~$ afs nfs.set_config_param block=NFS_CORE_PARAM param=statd_connect_retries value=2
Example:
<afs> nfs.set_config_param block=NFS_CORE_PARAM param=statd_connect_retries value=2
In addition, to avoid such instances, reduce the grace period from the default 90 seconds to 30 as follows:
nutanix@FSVM:~$ afs nfs.set_config_param block=NFSv4 param=Grace_Period value=30
This issue is already addressed, and the improvement is included in AFS versions 4.2, 4.1.1 and 4.1.0.1. You can ask the customer to upgrade to any of these versions, if possible, after using this KB to explain the root cause analysis for the previous outage.
|
KB13623
|
DR - Application-consistent snapshot failure on Windows Client OS during third-party backup
|
An app-consistent snapshot of a Windows 10 VM from a third party fails, and a crash-consistent one is taken instead. However, when attempting to do this from PC, the app-consistent snapshot succeeds.
|
As part of FEAT-11940 https://jira.nutanix.com/browse/FEAT-11940, an application-consistent snapshot improvement implemented in AOS 6.1 adds enhanced capability during backup and restore. Backup software can now request VSS snapshot properties (like backup_type, writer_list, store_vss_metadata, etc.) during the snapshot request on a Windows Server operating system. Backup software requesting VSS snapshot properties on a Windows client operating system leads to an app-consistent snapshot failure. Identification: 1. The application-consistent snapshot request failed with the error "Failed to apply the requested VSS snapshot properties":
ID : 1804db8b-01d9-421d-bec2-0c258a4700aa
2. The NGT version on the VM should be the latest for that specific AOS version.
3. Verify KB-12176 https://portal.nutanix.com/kb/000012176 to ensure that quiescing the VM is not an issue.
4. Ensure that the NGA-CVM Communication Link is true and VSS is enabled.
5. Third-party (in this case HYCU) logs show that POST calls for app-consistent snapshots are made, and then the system defaults to crash-consistent snapshotting after failing to take an app-consistent snapshot:
2022-06-03T07:36:09.997 UTC
6. From the aplos logs, we can see that the API calls from the third-party software requested APPLICATION_CONSISTENT along with vss_snapshot_properties.
...
7. Error in cerebro.INFO when the snapshot is taken from the third-party software. Notice that there is no metadata payload from the guest VM:
I20220603 07:36:22.277163Z 22773 snapshot_consistency_group_sub_op.cc:7614] <parent meta_opid: 4166218, CG: cg_1651233777771443_107>: VSS query s
8. When trying to create an application-consistent recovery point from PC, we are able to do so without any errors. An app-consistent snapshot taken as a recovery point from PC does not request vss_snapshot_properties, so the app-consistent snapshot is created successfully. cerebro.INFO logging in this case:
I20220603 07:40:29.335882Z 22773 snapshot_consistency_group_sub_op.cc:7614] <parent meta_opid: 4166251, CG: cg_1651233777771443_107>: VSS query s
9. The operating system running on the guest is a Windows client operating system.
vm_info_vec {
|
There is no workaround available since VSS snapshot properties are not supported with Windows client versions. Avoid requesting VSS snapshot properties during app-consistent snapshots from the backup software side. Ask the customer to reach out to the backup vendor to make these changes. ENG-388737 is tracking support for this feature on Windows client OS.
|
KB1469
|
VMWare Datastore browser - Size vs Provisioned Size
|
This KB explains how to query the "size" parameter in the Datastore browser.
|
You may encounter an issue where the VMware datastore browser displays 0.00 KB in the Size field for a VMDK. The Provisioned Size for that very same VMDK will show up correctly. This will look like the following.
|
The vmkfstools command can be used to print out the size value. Browse to the datastore:
cd /vmfs/volumes/<datastorename>
Run the following command.
vmkfstools -qv10 DSQ16BGD_4/DSQ16BGD_4.vmdk
Note: The command does not work on the *flat.vmdk files. The utility figures out the link to the flat file by itself. Sample output:
nutanix_nfs_plugin: Established VAAI session with NFS server 192.168.5.2
If you want to quickly check all the VMDK sizes in a particular datastore, change to the datastore directory (cd /vmfs/volumes/<datastorename>) and run the following command.
# find . -name '*_?.vmdk' -exec vmkfstools -qv10 {} \;
|
KB10167
|
Backup, Restore or any connection to iSCSI targets via Cluster VIP fails
|
With ENG-274125 in place, customers may reach out to Nutanix Support with regards to backup or restore failures. Note that the issue is not limited to backup solution, but also for any client solution that may use Cluster Virtual IP address for iSCSI target discovery and communication.
|
With ENG-274125 https://jira.nutanix.com/browse/ENG-274125 in place, customers may reach out to Nutanix Support with regards to backup or restore failures. Note that the issue is not limited to backup solutions, but also for any client solution that may use Cluster Virtual IP address for iSCSI target discovery and communication.
Taking Veeam backup as an example: if the iSCSI Data Services (DS) IP address was not specified on the AHV cluster, the cluster Virtual IP address could be used (automatically) for iSCSI connections. This worked fine until 5.18. With the AOS upgrade to 5.18, if the iSCSI DS IP address is not specified on the AHV cluster, any discovery or port communication over 3260 will not work, and operations such as backup or restore will fail.
So when the cluster is configured only with the cluster VIP and not the DS IP on AOS 5.18.x (prior to 5.18.1.1), we can see the Veeam backup jobs (or restore) were failing. Below is the log snippet for the failed communication from Veeam.
Note: VIP: 10.48.200.100 & DS IP: 10.48.200.101
[2020-10-19] [06:07:28.381] [02520] [Info ] [NxMount] Mount started. Portal: 10.48.200.100. IsCluster: false. Target: veeam-8e9d5a17-d6ad-43be-9fdd-7a173ca77d66
Right away after configuring the DS IP and saving the configuration, we see the backup operation succeeding.
[2020-10-19] [06:03:53.069] [02134] [Info ] [NxMount] Mount started. Portal: 10.48.200.101. IsCluster: true. Target: veeam-081c916d-56eb-48c8-9898-b7d7e18f3427
|
Ensure the Data Services IP is configured and used for all discovery and communication to target iSCSI disks. With ENG-274125 https://jira.nutanix.com/browse/ENG-274125 in place, which is integrated into 5.18, iSCSI target use via the Cluster VIP is no longer exposed or allowed. This is restricted to the Data Services IP, as this is what Nutanix recommends, as documented in https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Volumes-Guide:vol-cluster-details-modify-wc-t.html https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Volumes-Guide:vol-cluster-details-modify-wc-t.html. Engineering and release management have decided to temporarily revert this change to allow VIP access, but the intent is still to move forward with all iSCSI target communication using the Data Services IP. This is verified in AOS versions 5.19, 5.18.1.1, 5.20.2, 6.0.2 and 6.1 via ENG-350711 https://jira.nutanix.com/browse/ENG-350711. Backup vendors are also updating their customer documents to reflect the same (use the DS IP for iSCSI communications). A few of them can be found here:
Commvault - Uses the HotAdd or NAS Transport Mode (so does not use DSIP)
https://documentation.commvault.com/commvault/v11/article?p=114442.htm https://documentation.commvault.com/commvault/v11/article?p=114442.htm
Veritas - NetBackup 8.x and 9.0 uses the NAS Transport Mode (so does not use DSIP), NetBackup 9.1+ added iSCSI transport, hence will require DSIP ( https://www.veritas.com/content/support/en_US/article.100051980 https://www.veritas.com/content/support/en_US/article.100051980)
https://www.veritas.com/content/support/en_US/doc/127664414-140673866-0/v127786165-140673866 https://www.veritas.com/content/support/en_US/doc/127664414-140673866-0/v127786165-140673866
HYCU - Uses the Volume Group Transport Mode and does not require the DSIP as mandatory
https://www.hycu.com/wp-content/uploads/2017/03/HYCU_UserGuide_4.1.0.pdf https://www.hycu.com/wp-content/uploads/2017/03/HYCU_UserGuide_4.1.0.pdf
Veeam - Veeam uses Volume Group Transport Mode and states the DSIP as mandatory
https://helpcenter.veeam.com/docs/van/userguide/system_requirements.html?ver=20 https://helpcenter.veeam.com/docs/van/userguide/system_requirements.html?ver=20
Rubrik - Rubrik uses Volume Group Transport Mode and requires the DSIP as mandatory
https://drive.google.com/file/d/1Wj0eN6fdgCQU_LpGBaeWX4AaTul43-4B/view https://drive.google.com/file/d/1Wj0eN6fdgCQU_LpGBaeWX4AaTul43-4B/view
Cohesity - Uses Volume Group Transport Mode and requires the DSIP as mandatory
https://drive.google.com/file/d/190GbO2dpxu4bOQGkCgj280PW3XpWqAiT/view https://drive.google.com/file/d/190GbO2dpxu4bOQGkCgj280PW3XpWqAiT/view
Arcserve - Uses the HotAdd or Volume Group Transport Mode and requires the DSIP as mandatory for Volume Group Transport Mode
https://documentation.arcserve.com/Arcserve-UDP/available/7.0/ENU/Bookshelf_Files/HTML/Nutanix/default.htm#Nutanix/ht_udp_nutanix_addng_nodes_chap.htm http://https://documentation.arcserve.com/Arcserve-UDP/available/7.0/ENU/Bookshelf_Files/HTML/Nutanix/default.htm#Nutanix/ht_udp_nutanix_addng_nodes_chap.htm
Unitrends - Uses Volume Group Transport Mode and requires the DSIP as mandatory
https://www.unitrends.com/wp-content/uploads/ub-ahv-deployment.pdf https://www.unitrends.com/wp-content/uploads/ub-ahv-deployment.pdf
Storware - Uses the HotAdd Transport Mode (so does not use DSIP)
https://storware.gitbook.io/storware-vprotect/deployment/protected-platforms/virtual-machines/nutanix-acropolis-ahv https://storware.gitbook.io/storware-vprotect/deployment/protected-platforms/virtual-machines/nutanix-acropolis-ahv
|
KB2747
|
Data protection replication to remote site fails to complete
|
In some environments with specific security requirements, most communication between the source and destination cluster might be closed. It can happen that the needed ports are open only for a restricted set of CVMs (Controller VMs). If that is the case, PD (Protection Domain) replication may eventually fail, reporting network communication issues.
|
For data replication to function properly between clusters, IP ICMP and TCP ports 2009 (Stargate) and 2020 (Cerebro) need to be open for each Controller VM (CVM) IP and Cluster VIP between the source and destination cluster.
When replicating the data from source to destination, each Controller VM participates in sending the data.
If only some Controller VMs are consistently reachable, then the protection domain (PD) replication stops for others as TCP sockets are unavailable. The replication can fail or stop and remain stuck at this point permanently.
If you check the progress for the transmission at that point, it appears stopped or stuck at a certain percentage as follows.
nutanix@cvm$ ncli pd ls-repl-status
|
In the sample outputs below:
Source protection domain cluster subnet: x.x.x.0/24Destination protection domain cluster subnet: y.y.y.0/24
To determine if the Controller VM ports are open, run the following commands using netcat (nc) from any Controller VM on the source site:
nutanix@cvm$ allssh 'for i in remoteIP1 remoteIP2 remoteIp3 remoteVIP; do nc -z -w 1 -v $i 2009; done'
Do the same on the destination site.
Sample output:
================== x.x.x.1 =================
If you notice the output "Connected to <IP address>:<port>", the connection was successful.
If you notice an output similar to "Connection timed out", the connection failed to get through.
If one or more nodes do not report Controller VMs as reachable, most likely the traffic is blocked by a firewall in the transmission path and further investigation is required.
Alternatively, as the -z option was removed from netcat's latest versions, /dev/tcp can be leveraged instead.
nutanix@cvm$ allssh 'for a in remoteIP1 remoteIP2 remoteIp3 remoteVIP; do timeout 5 bash -c "cat < /dev/null > /dev/tcp/$a/2009"; echo $?; done'
A return code of 124 (timeout) confirms that communication cannot be established.
nutanix@cvm$ allssh 'for a in y.y.y.1 y.y.y.2 y.y.y.3 ; do timeout 5 bash -c "cat < /dev/null > /dev/tcp/$a/2009"; echo $?; done'
If replication failures were seen, then additionally check the cerebro and stargate logs for the error string "yet we got an extent read error earlier".
nutanix@cvm$ allssh 'grep "yet we got an extent read error earlier" /home/nutanix/data/logs/{cerebro,stargate}*'
If the error string "yet we got an extent read error earlier" is found in the cerebro or stargate logs, then the sysstats ping logs should be checked for circuit quality issues. The below command can be leveraged to identify points in time where the cluster's gateway or remote sites had network latency above 200ms and/or the gateway or remote site was unreachable. If the times when higher latency and "unreachables" are discerned correlate to times when the above "yet we got an extent read error earlier" error is seen, then the circuit quality should be investigated.
nutanix@cvm$ for SVM in $(svmips) ; do echo -e "\n################################\nSVM: $SVM\n"; ssh -q $SVM 'cat /home/nutanix/data/logs/sysstats/ping* | egrep -v "IP : time|IP : latency" | awk "/^#TIMESTAMP/ || \$3>200.00 || \$3=unreachable" | egrep -B1 "gw.*ms|gw.*unreachable|rs.*ms|rs.*unreachable" | egrep -v "\-\-"'; done
If a CVM cannot reliably establish communication with another CVM on the remote site, ask customer's Network Team for assistance in rectifying the network circuit. For example, if there is a firewall blocking communications between the source and destination IP addresses and/or ports 2009/2020, or if there is mis-configured multi-pathing resulting in packet loss.
If the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com.
|
KB15130
|
Prism Central upgrade may get stuck if the upgrade was initiated when the genesis service was unstable
|
Genesis goes into a crash loop when a Prism Central upgrade is initiated.
|
Summary: The genesis service on Prism Central might enter a crash loop if genesis crashed/restarted after the pre-upgrade activity. Consider the following scenario:
1. Genesis is unstable and crash looping for any reason.
2. A PCVM upgrade is initiated; the pre-upgrade activity successfully sets the /appliance/logical/prism/prism_central/upgrade_metadata zknode. The presence of this zknode is critical for the upgrade process to determine the location of the upgrade bundle; a missing zknode will cause a genesis crash loop in the upgrade flow.
3. Genesis crashes due to the initial issue in point 1, or is restarted for any reason. During genesis init after the restart, it removes the /appliance/logical/prism/prism_central/upgrade_metadata zknode.
4. Genesis then enters the upgrade flow, tries to parse the /appliance/logical/prism/prism_central/upgrade_metadata content, and fails as it is empty.
In this state, the upgrade will get stuck and never complete. Identification:
Genesis crashing in get_installer_storage_path() with signature similar to:
2023-06-13 08:18:39,549Z CRITICAL 95997936 decorators.py:47 Traceback (most recent call last):
The command upgrade_status shows that the upgrade is still running
nutanix@PCVM:~$ upgrade_status
/appliance/logical/prism/prism_central/upgrade_metadata zknode is missing:
nutanix@PCVM:~$ zkcat /appliance/logical/prism/prism_central/upgrade_metadata
/home/nutanix/data/logs/preupgrade.out shows zknode was created
nutanix@PCVM:~$ less /home/nutanix/data/logs/preupgrade.out
Genesis crashed/restarted for any reason shortly after the pre-upgrade, and the genesis log shows that the zknode was deleted:
nutanix@PCVM:~$ zgrep upgrade_metadata /home/nutanix/data/logs/genesis*
|
Nutanix Engineering is aware of the issue and is working on a permanent fix in a future release. Workaround: Until a permanent fix is released, the following workaround can be applied to stabilize genesis, make progress, and complete the upgrade successfully.
Determine and fix the initial cause of the genesis crash/restart. Using the zknode value determined in step 4 of the Identification section, add the /appliance/logical/prism/prism_central/upgrade_metadata zknode manually to resume the 1-click upgrade. The value could be True or False depending on the PCVM version. Considering the sample above with {'is_upgrade_disk_flow': True}:
nutanix@PCVM:~$ echo -n '{"is_upgrade_disk_flow": true}' | zkwrite /appliance/logical/prism/prism_central/upgrade_metadata
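To confirm the zknode is now populated and the upgrade resumes, the same commands used in the Identification section can be re-run:
nutanix@PCVM:~$ zkcat /appliance/logical/prism/prism_central/upgrade_metadata
nutanix@PCVM:~$ upgrade_status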
|
KB4646
|
MCU (LCMC) 1.40 FW Upgrade Guide G4/G5 DRT Platform
|
Flashing MCU (LCMC) FW
|
Note: The MCU upgrade procedure needs approval from the customer for cluster downtime.
This KB is applicable only for G4 DRT and G5 DRT platforms.
MCU is an independent upgrade and does not have any binding on the order of upgrade in terms of BMC or BIOS first. Also, there is no enforcement that MCU has to be updated for it to be compliant with BIOS G4G5 4.0 and BMC 3.63.
MCU 1.40 Binary https://s3.amazonaws.com/ntnx-sre/G4_G5_MCU/lcmc_1.40_0714.srec (md5sum: 8f18b33d89590d45a873be6be3d86994)
MCU 1.40 Release Notes
**Below information is for Nutanix internal only. Can't go out to field**
------------------------------------------------------------------------------
Date: 07/17/2017
File: lcmc_1.40_0714.hex / lcmc_1.40_0714.srec
[Fix from 1.30]
1. Added Front Panel Temperature sensor reading through PMBus. If the sensor is not present, it will stop polling after 10 tries.
------------------------------------------------------------
Date: 06/16/2017
File: lcmc_1.30_0616.hex / lcmc_1.30_0616.srec
[Fix from 1.12]
1. Fixed bug, FRU write from node 4
2. Added I2C related debug options
------------------------------------------------------------
Date: 09/20/2016
File: lcmc_1.20_0920.hex / lcmc_1.20_0920.srec
[Fix from v1.10]
1. Fixed I2C low time during EEPROM write and FW update to prevent BMC timeout.
------------------------------------------------------------
Date: 09/29/2015
File: lcmc_1.10_0929.hex / lcmc_1.10_0929.srec
[Fix from v1.09]
1. Disabled UART Debug Port as default
2. Changed i2c bitbang timing.
3. fixed unwanted writing to PSU.
4. Changed Live Data scanning time from 100msec to 280msec for PSU with Microchip MCU's.
5. Added PSU scan start and stop command
6. Added PSU FRU scan command
|
1. Preparing the Cluster for MCU Upgrade
Scenario A: If Cluster is Not Block Aware
Shutdown the cluster by following Stopping the Cluster https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:app-cluster-stop-t.html#task_ih2_wq4_3h. Place every node in the cluster in maintenance mode by following the hypervisor-specific procedure. [Do not shutdown the node.]
vSphere (vCenter): Shutting Down a Node in a Cluster (vSphere Web Client) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:vsp-node-shutdown-vsphere-t.html#task_n22_jlt_dfAHV: Shutting Down a Node in a Cluster (AHV) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:ahv-node-shutdown-ahv-t.html#task_mrr_b2d_skHyper-V: Shutting Down a Node in a Cluster (Hyper-V) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:hyp-node-shutdown-hyperv-t.html#task_mrr_b2d_sk1Xen-Server: Shutting Down a Node in a Cluster (Xen Server) https://portal.nutanix.com/#/page/docs/details?targetId=Hardware-Admin-Ref-AOS-v56:xen-xenserver-shutting-down-node-t.html
Scenario B: If Cluster is Block Aware
Perform the Node Shutdown pre-checks https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:bre-node-shutdown-precheck-t.html.
Place every node in the block in maintenance mode by following the hypervisor-specific procedure. [Do not shutdown the node.]
vSphere (vCenter): Shutting Down a Node in a Cluster (vSphere Web Client) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:vsp-node-shutdown-vsphere-t.html#task_n22_jlt_dfAHV: Shutting Down a Node in a Cluster (AHV) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:ahv-node-shutdown-ahv-t.html#task_mrr_b2d_skHyper-V: Shutting Down a Node in a Cluster (Hyper-V) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:hyp-node-shutdown-hyperv-t.html#task_mrr_b2d_sk1Xen-Server: Shutting Down a Node in a Cluster (Xen Server) https://portal.nutanix.com/#/page/docs/details?targetId=Hardware-Admin-Ref-AOS-v56:xen-xenserver-shutting-down-node-t.html
2. MCU/LCMC upgrade can be performed from any node within the same block.
3. Login to the IPMI web UI of any node in that block. Note: There is only one LCMC per block in a multi-node system. The LCMC upgrade only needs to be performed once per block.
4. Check the current LCMC version from "Server Health --> Multi Node".
5. Open up the LCMC update page by clicking on "Maintenance --> LCMC Update".
6. Download the MCU 1.40 .srec file from here https://s3.amazonaws.com/ntnx-sre/G4_G5_MCU/lcmc_1.40_0714.srec (md5sum: 8f18b33d89590d45a873be6be3d86994)
9. Click Choose File and then select the *.srec file. Then click Update.
10. Click OK on the below pop-up.
11. MCU update starts.
12. All nodes on the block will power cycle when the process is complete. Do not power cycle the node while the update is taking place.
13. Once the nodes come back up, login to the IPMI Web UI and check the LCMC version again from "Server Health --> Multi Node".
14. Preparing the Cluster post MCU Upgrade
Scenario A: If Cluster is Not Block Aware
a) Start every node in the cluster by following the hypervisor-specific procedure.
vSphere (vCenter): Starting a Node in a Cluster (vSphere client) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:vsp-node-start-no-verify-vsphere-t.html#task_etb_4lr_zkHyper-V: Starting a Node in a Cluster (Hyper-V) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:hyp-node-start-hyperv-no-verify-t.html#task_c51_lfn_1l1AHV: Starting a Node in a Cluster (AHV) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:ahv-node-start-ahv-no-verify-t.html#task_c51_lfn_1l
b) If you stopped the cluster, restart the cluster by following Starting a Nutanix Cluster https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:app-cluster-start-chassis-sn-t.html#id9912f5ba-e27f-4386-a2eb-32f919aba7eb.
Scenario B: If Cluster is Block Aware
a) Start every node in the block by following the hypervisor-specific procedure.
vSphere (vCenter): Starting a Node in a Cluster (vSphere client) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:vsp-node-start-no-verify-vsphere-t.html#task_etb_4lr_zkHyper-V: Starting a Node in a Cluster (Hyper-V) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:hyp-node-start-hyperv-no-verify-t.html#task_c51_lfn_1l1AHV: Starting a Node in a Cluster (AHV) https://portal.nutanix.com/#/page/docs/details?targetId=Chassis-Replacement-Platform-v58-Multinode:ahv-node-start-ahv-no-verify-t.html#task_c51_lfn_1lXen-Server: Starting a Node in a Cluster (Xen Server) https://portal.nutanix.com/#/page/docs/details?targetId=Hardware-Admin-Ref-AOS-v56:xen-xenserver-starting-a-node-t.html
15. Run ncc health_checks run_all as a final health check.
|
KB14987
|
Nutanix Files - Recall request has failed
|
This article describes an issue where a recall request fails on a Nutanix Files cluster with tiering enabled after being upgraded to version 4.3. This results in files not being accessible from the S3 store.
|
The issue occurs on clusters with Tiering enabled after upgrading the Files cluster from ≥ 4.0 and < 4.3 to version 4.3. Recall request fails, resulting in files not being accessible from the s3 store.
Identifying the problem
In the "Data Lens Dashboard", under the Tasks, you see a failed "Recall Request".When you manually recall tiered data https://portal.nutanix.com/page/documents/details?targetId=Data-Lens:ana-analytics-tiering-recall-manual-t.html, the task either fails after a prolonged time or takes longer than usual to succeed.If the recall request is successful, the recalled file is partially full or cannot be opened due to an error.The base URL value of the tiering profile is prefixed with the bucket name as "bucket_name.endpoint":
nutanix@FSVM$ afs tiering.list_profiles
|
Nutanix is aware of the issue and is working on a resolution. Contact Nutanix Support https://portal.nutanix.com for immediate assistance.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB-2871 https://portal.nutanix.com/kb/2871.Run a complete NCC health_check on the cluster. See KB-2871 https://portal.nutanix.com/kb/2871.Collect Files related logs. For more information on Logbay, see KB-3094 https://portal.nutanix.com/kb/3094.
CVM logs stored in ~/data/logs/minerva_cvm*
NVM logs stored within the NVM at ~/data/logs/minerva*
To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:
nutanix@CVM$ afs info.get_leader
Once you are on the Minerva leader CVM, run the following:
nutanix@CVM$ ncc log_collector --file_server_name_list=<fs_name> --last_no_of_days=5 --minerva_collect_sysstats=True fileserver_logs
For example:
nutanix@CVM$ ncli fs ls | grep -m1 Name
Attaching files to the caseTo attach files to the case, follow KB-1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB16220
|
New blueprint launch or runbook execution fails in Self-Service (formerly Calm)
|
NCM Self-Service (formerly Calm) users are unable to launch new applications, execute runbooks or perform day-2 actions even after applying a valid NCM License.
|
NCM Self-Service (formerly Calm) users are unable to launch new applications, execute runbooks or perform day-2 actions even after applying a valid NCM License.
Sample error messages:
Application launch has failed due to expired licenses on the following clusters
|
This issue happens because the NCM Self-Service license state has not been refreshed; hence, the launch of new applications or runbook execution fails even after applying a valid license. Follow these steps to refresh the NCM Self-Service license state:
Connect to Prism Central via SSH.
Run the following command:
nutanix@PCVM:~$ docker exec -i nucalm sh -c 'source /home/calm/venv/bin/activate;/home/calm/bin/ncm_license.sh'
|
KB9716
|
NCC Health Check: stale_secondary_synchronous_replication_configuration_check / stale_synchronous_replication_configuration_check
|
NCC 4.0.0 | The NCC health check stale_secondary_synchronous_replication_configuration_check / stale_synchronous_replication_configuration_check checks if there are any stale secondary synchronous replication configurations present on Prism Central.
|
The NCC health check stale_secondary_synchronous_replication_configuration_check / stale_synchronous_replication_configuration_check checks if there are any stale synchronous replication configurations present on Prism Central. This check runs only on Prism Central and it is scheduled by default to run every 24 hours. This check only applies to VMs that are protected by a protection policy using AHV Synchronous Replication.
Running NCC Check
You can run this check as a part of the complete NCC health checks
nutanix@cvm:~$ ncc health_checks run_all
Or you can run this check individually
nutanix@cvm:~$ ncc health_checks data_protection_checks ahv_sync_rep_checks stale_synchronous_replication_configuration_check
From NCC 4.3.0 and above, stale_synchronous_replication_configuration_check has been renamed to stale_secondary_synchronous_replication_configuration_check.
Use the following command for the individual check:
nutanix@cvm:~$ ncc health_checks data_protection_checks ahv_sync_rep_checks stale_secondary_synchronous_replication_configuration_check
This check is scheduled by default to run every 24 hours.
Sample Output
Check Status: PASS
Running : health_checks data_protection_checks ahv_sync_rep_checks stale_synchronous_replication_configuration_check
From NCC 4.3.0 and above
Running : health_checks data_protection_checks ahv_sync_rep_checks stale_secondary_synchronous_replication_configuration_check
Check Status: FAIL
Running : health_checks data_protection_checks ahv_sync_rep_checks stale_synchronous_replication_configuration_check
From NCC 4.3.0 and above
Running : health_checks data_protection_checks ahv_sync_rep_checks stale_secondary_synchronous_replication_configuration_check
Output messaging
From NCC 4.3 and above
[
{
"110456": "Check if there are any stale synchronous replication configurations present on Prism Central",
"Check ID": "Description"
},
{
"110456": "Stale synchronous replication configuration left behind",
"Check ID": "Causes of failure"
},
{
"110456": "Cleanup the stale synchronous replication configuration left behind on Prism Central",
"Check ID": "Resolutions"
},
{
"110456": "Stale entries could cause unwanted tasks to be generated and they cannot be cleaned up automatically on the secondary as it could affect the VM stretch state hence require a manual cleanup",
"Check ID": "Impact"
},
{
"110456": "This check is scheduled by default to run every 24 hours",
"Check ID": "Schedule"
},
{
"110456": "A110456",
"Check ID": "Alert ID"
},
{
"110456": "Stale synchronous replication configuration found.",
"Check ID": "Alert Title"
},
{
"110456": "Stale synchronous replication configuration found for vms {err_msg}.",
"Check ID": "Alert Message"
},
{
"110456": "110456",
"Check ID": "Check ID"
},
{
"110456": "Check if there are any stale secondary synchronous replication configurations present on Prism Central",
"Check ID": "Description"
},
{
"110456": "Stale secondary synchronous replication configuration left behind",
"Check ID": "Causes of failure"
},
{
"110456": "Cleanup the stale secondary synchronous replication configuration left behind on Prism Central",
"Check ID": "Resolutions"
},
{
"110456": "Stale entries could cause unwanted tasks to be generated and they cannot be cleaned up automatically on the secondary as it could affect the VM stretch state hence require a manual cleanup",
"Check ID": "Impact"
},
{
"110456": "This check is scheduled by default to run every 24 hours",
"Check ID": "Schedule"
},
{
"110456": "A110456",
"Check ID": "Alert ID"
},
{
"110456": "Stale secondary synchronous replication configuration found.",
"Check ID": "Alert Title"
},
{
"110456": "Stale secondary synchronous replication configuration found for vms {err_msg}.",
"Check ID": "Alert Message"
}
]
|
The check returns a FAIL if stale secondary synchronous replication configuration entries are left behind. Stale entries could cause unwanted tasks to be generated, and they cannot be cleaned up automatically on the secondary cluster, as that could affect the VM stretch state; hence, they require manual cleanup. Engage Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/ to resolve the issue.
|
KB14869
|
LCM firmware upgrade on Intel nodes fails and the node is left in a powered off state
|
LCM firmware upgrade on Intel nodes fails and the node is left in a powered off state
|
When performing an LCM-based firmware upgrade of an Intel node to Intel DCF 9.1, the upgrade fails with the below error:
Operation Failed. Reason: LCM failed staging to env 'phoenix-' at ip address x.x.x.217. Failure during step 'Prepare', error 'Failed to prepare staging location /home/nutanix/tmp/lcm_staging' was seen. Logs have been collected and are available to download on x.x.x.216 at /home/nutanix/data/log_collector/lcm_logs__x.x.x.216__2023-04-25_08-53-42.709568.tar.gz
Looking into the lcm_ops.out log for more details, the below is observed:
2023-04-19 01:40:07,683 {"leader_ip": "x.x.x.223", "event": "Handling LcmStagingError: LCM failed staging to env 'phoenix-' at ip address x.x.x.217. Failure during step 'Prepare', error 'Failed to prepare staging location /home/nutanix/tmp/lcm_staging' was seen.", "root_uuid": "a2db4593-ea2e-4e89-ad09-2e550b4fc6f1"}
The node that was undergoing the upgrade is then left in a powered-off state.
The only way to power the node back on and recover access to it is to physically drain the power from the node (unplug it) and then connect it to power again.
Once this is done, the IPMI is responsive again and the node can be powered on as normal.
|
Solution: This issue occurs when these nodes encounter a soft power control failure and a thermal trip FIVR fault, which is related to a known issue that the Intel engineering team is working to fix (tentatively scheduled for June 2023).
Workaround: Manually upgrade the CPLD and BMC on the affected nodes and then initiate an LCM upgrade.
|
KB6634
|
Cloud Connect AWS CVM unable to upgrade due to loadavg high
|
CVM is unable to upgrade because the load average is high.
|
Followed the documentation:
Cloud Connect CVM Upgrade https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-:wc-cloud-connect-upgrade-cvm-t.html
After running the upgrade command, it does not proceed because the loadavg on the AWS CVM is high.
nutanix@cvm$ install/bin/upgrade_cloud_cvm --installer_file=nutanix_installer_package-release-euphrates-5.5.7.1-stable.tar.gz --metadata_file=euphrates-5.5.7.1-metadata.json
|
Workaround:
Restart AWS VM instance with the following command:
nutanix@cvm:~$ cvm_shutdown -r now
Retry the upgrade_cloud_cvm command.
For example:
nutanix@cvm:~$ install/bin/upgrade_cloud_cvm --installer_file=nutanix_installer_package-release-euphrates-5.5.7.1-stable.tar.gz --metadata_file=euphrates-5.5.7.1-metadata.json
Monitor the progress by running the upgrade_status command and checking the install.out file.
nutanix@cvm:~$ upgrade_status
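To follow the installer output in addition to upgrade_status, the install log can be tailed; the path below is the typical location and is shown as an assumption, so locate the file first if it is not there:
nutanix@cvm:~$ tail -F ~/data/logs/install.out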
|
KB11903
|
Move : Online upgrades from newly deployed Move 4.1.0 appliances are not working
|
This article describes an issue where online upgrades from newly deployed Move 4.1.0 VMs are not working.
|
Online upgrades from newly deployed Move 4.1.0 VMs are not working.
Troubleshooting:
SSH to the Move VM. (See Accessing Move VM with SSH https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_1:v41-access-ssh-t.html.)
Run the /opt/xtract-vm/scripts/check_upgrade_connectivity.sh script. It shows the error "curl: no URL specified!"
For example:
admin@move on ~ $ sh /opt/xtract-vm/scripts/check_upgrade_connectivity.sh
|
This issue is fixed in Move 4.1.2. Use Move 4.1.2 or later.
Workaround:
Perform the following steps:
SSH to the Move VM.
Type the rs command and enter the password.
Open the check_upgrade_connectivity.sh file in an editor and delete the contents of the file.
admin@move on ~ $ vi /opt/xtract-vm/scripts/check_upgrade_connectivity.sh
Copy and paste the following content:
#!/usr/bin/env bash
Retry Upgrade Software from the Move UI https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_8:top-upgrade-binary-upload-t.html.
|
KB17170
|
CSI: Hypervisor-attached Volume Groups may prevent Kubernetes cluster deletion
|
Using the CSI 3.0 driver to connect hypervisor-attached volume groups to Kubernetes nodes may prevent the Kubernetes cluster from being deleted. Any Kubernetes distribution utilizing the CSI 3.0 driver may be impacted.
|
The Nutanix Container Storage Interface (CSI) Volume Driver for Kubernetes uses Nutanix Volumes and Nutanix Files to provide scalable, persistent storage for stateful applications. A new feature of the CSI 3.0 driver https://portal.nutanix.com/page/documents/details?targetId=CSI-Volume-Driver-v3_0:CSI-Volume-Driver-v3_0 supports hypervisor-attached volumes. Hypervisor-attached volumes support leverages the hypervisor internal network for data traffic instead of iSCSI and is considered more secure than iSCSI. However, an effect of using hypervisor-attached volumes is that Prism Central APIs will not allow VM deletion if a Nutanix Volume Volume Group (VG) is hypervisor-attached. As a result, for Kubernetes deployments using the CSI 3.0 driver, deletions of Kubernetes clusters or VMs may fail until the hypervisor-attached volume has been detached.
|
To allow Kubernetes cluster or VM deletion, detach the hypervisor-attached volume.
To detach a hypervisor-attached volume via Prism Central:
Browse to the Compute & Storage > Volume Groups page, select the checkbox next to the volume group, then go to Actions > Manage Connections.
Under Select Virtual Machines, select the X for the VM that you wish to detach the volume group from, then choose Save.
To detach a hypervisor-attached volume via Prism Element:
Browse to the VM page, select the VM, and click Update.
Scroll down to Volume Groups to view the list of attached volume groups.
Select the X next to the volume group.
Confirm your choice.
Once all hypervisor-attached volumes have been detached from a VM, it should no longer prevent the deletion of a Kubernetes cluster/VM.
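On AHV, the same detach can also be performed from a CVM with acli; this is a sketch only, and the VG and VM names below are placeholders that must be replaced with the actual names shown in Prism:
nutanix@CVM$ acli vg.detach_from_vm <vg_name> <vm_name>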
|
KB8124
|
Aplos service crashing due to error "No such file or directory: '/home/nutanix/aplos/www/static/v3/swagger.json'"
|
Aplos service crashing due to error "No such file or directory: '/home/nutanix/aplos/www/static/v3/swagger.json'"
|
Aplos service crashing due to error "No such file or directory: '/home/nutanix/aplos/www/static/v3/swagger.json'".
Looking at aplos.out logs, you may see following:
2019-09-05 13:39:21 INFO server_cert_utils.py:250 Using RSA key for JWT sesions
Diagnosis
This error means some of the files were not copied or were removed. One potential reason is recovering a CVM using a Rescue SVM ISO without following the complete process, for example, when following KB 7669 http://portal.nutanix.com/kb/7669.
Verify if file is present on all nodes or not:
nutanix@cvm$ allssh ls -lrth /home/nutanix/aplos/www/static/v3/swagger.json /usr/local/nutanix/aplos/www/static/pc/v3/swagger.json
For example, on the affected CVM, the soft link was missing:
nutanix@cvm$ allssh ls -lrth /home/nutanix/aplos/www/static/v3/swagger.json /usr/local/nutanix/aplos/www/static/pc/v3/swagger.json
This is just a softlink, so we can create it.
|
Create softlink for aplos in /home/nutanix. This will make sure the "/home/nutanix/aplos/www/static/v3/swagger.json" is available for Aplos.
nutanix@cvm$ ln -s /usr/local/nutanix/aplos /home/nutanix/aplos
Once this is done, verify the softlink is present and compare that with other nodes.
nutanix@cvm$ allssh "ls -lrth /home/nutanix/aplos/www/static/v3/swagger.json /usr/local/nutanix/aplos/www/static/pc/v3/swagger.json"
Aplos service should now stabilize.
|
KB13265
|
NGT installation fails with 0x80070666
|
NGT installation fails with 0x80070666
|
During installation or upgrade of NGT, the process fails with the following symptoms:
Installation fails with error 0x80070666.
NGT installation logs show that the installation of vcredist_2015_x64.exe is failing with:
[1C30:1EA8][2022-06-12T13:48:39]i301: Applying execute package: vcredist_2015_x64, action: Install, path: C:\ProgramData\Package Cache\FF15C4F5DA3C54F88676E6B44F3314B173835C28\vcredist_2015
The Windows machine has a version of Microsoft Visual C++ 2015 Redistributable (x64) higher than 14.0.24123.
|
Reason:
NGT Tools contains vcredist_2015_x64 version 14.0.24123, and Windows cannot install it if another package with a higher version is already installed.
Workaround:
Please remove existing Microsoft Visual C++ Redistributable and install NGT again.
|
KB9750
|
Error "Image upload stream was interrupted" may be shown during image upload from local computer
|
The error message "Image upload stream was interrupted" may be shown during image upload.
|
The error message "Image upload stream was interrupted" may be shown during image upload from the local computer due to the following reasons:
The browser is refreshed while image upload from a local computer is in progress.Target Prism Element stops receiving data from the local computer due to network issues.The size of the upload is too large for the browser. Most browsers have a 4 GB limit for HTTP PUT operations.
|
The following verifications may help to isolate the issue:
While image upload from the local computer is in progress, do not refresh or close the browser.Make sure that a stable network connection is available between the local computer and the target Prism Element cluster.If the upload fails, try uploading the file from a computer in a different network or from a computer in the same network as the Prism Element cluster to isolate the issue.Make sure that a source file is readable and not corrupted. Try copying the file to a different disk on the local computer to verify it.
|
KB14507
|
Alert - A160171 - FileServerTieredFileRestoreFailed
|
Investigating FileServerTieredFileRestoreFailed issues on a Nutanix cluster.
|
This Nutanix article provides the information required for troubleshooting the alert FileServerTieredFileRestoreFailed for your Nutanix cluster.
After deleting a file server or a share, there could be tiered files referring to objects in the object store. These objects could become stale if there are no references to them.
Nutanix recommends expiring these objects after the retention period configured for the object store profile.
Alert Overview
The FileServerTieredFileRestoreFailed alert is generated on share delete and FS delete operation.
Sample Alert
Block Serial Number: 16SMXXXXXXXX
Output messaging
[
{
"Check ID": "Tiered File Restore Failed."
},
{
"Check ID": "Tiered file restore failed unexpectedly."
},
{
"Check ID": "Refer to KB article 14507. Contact Nutanix support if issue still persists or assistance needed."
},
{
"Check ID": "Tiered file is in inconsistent state."
},
{
"Check ID": "A160171"
},
{
"Check ID": "Tiered File Restore Failed"
},
{
"Check ID": "{message]"
}
]
|
Troubleshooting
The message from the alert will provide one of two reasons for failing to restore the file.
Scenario 1: "No object store configuration was found for profile." This means the profile was somehow deleted or not accessible.Scenario 2: When object was not found because it was deleted manually from object store or it was removed from object store as part of garbage cleaning.
Resolving the issue
Scenario 1: Contact Nutanix Support for assistance. See "Collecting Additional Information" below.
Scenario 2: Attempt to undo the share-restore and try again with a newer snapshot.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871.Collect Files related logs. For more information on Logbay, see KB-3094 https://portal.nutanix.com/kb/3094.
CVM logs stored in ~/data/logs/minerva_cvm*
NVM logs stored within the NVM at ~/data/logs/minerva*
To collect the file server logs, run the following command from the CVM. Ideally, run this on the Minerva leader as there were issues seen otherwise. To get the Minerva leader IP on the AOS cluster, run:
nutanix@CVM$ afs info.get_leader
Once you are on the Minerva leader CVM, run:
nutanix@CVM$ ncc log_collector --file_server_name_list=<fs_name> --last_no_of_days=5 --minerva_collect_sysstats=True fileserver_logs
For example:
nutanix@CVM$ ncli fs ls | grep -m1 Name
Attaching files to the case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294. If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB14589
|
Unable to license a cluster with Compute-only node(s) due to malformed CSF file
|
This article explains the break/fix of a licensing issue when the cluster has compute-only nodes and the CSF file is malformed, preventing the license from being applied properly.
|
This article explains the break/fix of a licensing issue when the cluster has compute-only nodes and the CSF file is malformed, preventing the license from being applied properly. The following sequence of events leads to this issue:
The manual workflow from the user guide Applying an Upgrade License https://portal.nutanix.com/page/documents/details?targetId=License-Manager:lmg-upgrade-license-applying-t.html is followed in a cluster with one or more Compute-only nodes. After following the steps from the guide and uploading the LSF (License Summary File) to Prism, the license is not upgraded, and the UI shows that "licensing is incomplete".
Verification
The CSF (Cluster Summary File) doesn't have the information about the compute-only nodes populated because this information is not in IDF. Example of a CSF from an affected cluster: HCI node:
"nodes": [
Compute-only node:
{
Note that the "blockSerialName" and "modelName" fields are empty, which prevents generating the correct answer file in the Portal.
|
This issue is resolved in:
AOS 6.7.X family (STS): AOS 6.7
Upgrade AOS to versions specified above or newer.
Workaround
Download the script https://download.nutanix.com/kbattachments/14589/fetch_and_update_co_nodes_v1.py https://download.nutanix.com/kbattachments/14589/fetch_and_update_co_nodes_v1.py to /home/nutanix/cluster/bin/ of any CVM.
Verify the md5 signature of the file matches the expected value: 65dc6d9e3cd89023f8c3cafe2dd72428 (see the example check after the script command below).
Execute the script:
nutanix@cvm:~$ python ~/cluster/bin/fetch_and_update_co_nodes.py
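For the md5 verification in the step above, a simple check from the CVM could look like this (compare the output against the expected value listed):
nutanix@cvm:~$ md5sum /home/nutanix/cluster/bin/fetch_and_update_co_nodes_v1.py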
Repeat the licensing procedure.
|
KB8800
|
HPE DX2200-190r 2U2N platform may have issues with cluster creation and disk serviceability
|
HPE DX2200-190r platform chassis supports two nodes. The second node on the chassis may have the failure symptoms described below with some AOS versions. First node on this chassis is not affected.
|
HPE DX2200-190r platform chassis supports two nodes. The second node on the chassis may have the failure symptoms described below with some AOS versions. The first node on this chassis is not affected.
Symptom 1: The node imaged with AOS 5.10.x or AOS older than 5.11.1.5, 5.11.2, 5.16 using Foundation 4.5.1 or older will not have this issue. However, when such nodes are upgraded to AOS 5.11.1.5, 5.11.2, or 5.16, list_disks may fail and cause disk serviceability workflows to fail. list_disks command output:
nutanix@CVM:~$ list_disks
Disk ID LED On/Off operations may fail. Corresponding /home/nutanix/data/logs/hades.out is shown below:
2020-01-15 00:12:36 INFO disk_manager.py:3871 Led locate request for disks ['/dev/sdb']
Symptom 2: For new deployments with AOS 5.11.1.5, 5.11.2, 5.16 + Foundation 4.5.1 or older, list_disks will fail.
nutanix@CVM:~$ list_disks
|
Symptom 1: Upgrade AOS to 5.16.1, 5.11.3, 5.17, or any higher version.
Symptom 2: For AOS 5.10.x, use Foundation 4.5.1 or a lower version. For AOS 5.11.x or higher, use Foundation 4.5.2 or higher. Do not use Foundation 4.5.2 with AOS below 5.10.10.
|
KB13774
|
SVM Rescue failing as rescue shell is unable to find disk model for PCIe drives
|
SVM Rescue failing as rescue shell is unable to find disk model for PCIe drives
|
There are multiple situations where rescuing a CVM is necessary. The correct method of creating the svmrescue ISO is followed (procedure here https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Create+and+Mount), but when the script is run, it fails and throws the below error trace on the console:
2022-10-11 08:54:42,752Z ERROR nvme_disk.py:807 Unable to find model for disk /dev/nvmeaaa
The issue to notice here is that the script is unable to find the disk model for the NVMe/SSD-PCIe drive.
Symptoms:
The script is failing with the above trace.
All drives are listed in the rescue shell:
rescue / # sudo nvme list
The disk model will be compatible when checked in the rescue shell ruling out the model issue
rescue / # grep -i MZXL57T6HALA-000H3 /mnt/cdrom/config/hcl.json
The nvme-cli command, which is used to get the drive info, is shown as not present:
rescue config # nvme-cli
NVME version is less than 1.12
rescue config # nvme version
|
The reason for the issue is that the nvme-cli version bundled with the rescue ISO during creation is old and hence unable to detect the drives. As part of ENG-501489 https://jira.nutanix.com/browse/ENG-501489, Phoenix, Foundation, and svm_rescue will be updated with a newer nvme-cli version (>= 1.12), which will help in executing the nvme-cli commands needed for NVMe namespaces on the eligible drives.
Workaround:
1. Check the version that is currently present on any of the working CVMs and confirm it is >= 1.12:
nutanix@CVM::~$ nvme version
2. Create the SVM rescue ISO with network capability as mentioned in the Optional - Configure networking (MUST) section. Link: SVM Rescue: Create and Mount https://confluence.eng.nutanix.com:8443/display/STK/SVM+Rescue%3A+Create+and+Mount. Continue with the process and boot the CVM into the rescue shell.
3. Verify that you are able to SSH into the rescue CVM (using the root user and default nutanix credentials) from the host where it is running.
root@ahv:: ssh [email protected]
4. Go back to the CVM where the correct version was confirmed and copy the nvme binary to the host where the rescue CVM is running
nutanix@CVM::~$ scp /usr/sbin/nvme root@<host-ip>:~/nvme.cvm
5. Copy the binary to the rescue CVM
scp nvme.cvm [email protected]:~/
6. SSH into the rescue CVM using the command in Step 3, replace the nvme binary with the newly copied one, and verify the version:
rescue ~ # mv /usr/sbin/nvme /usr/sbin/nvme.old
7. Run the svm rescue script again and it should succeed. Follow the complete process of rescuing the CVM to finish the process.
|
KB8775
|
False positive Alert A103088: "CVM {} is unreachable" generated due to out of memory condition for cluster_health cgroup
|
This article describes a situation where the alert "CVM x.x.x.x is unreachable" (alert number A103088) is generated intermittently even if the NCC check "ncc health_checks network_checks inter_cvm_connections_check" passes.
|
We have seen a situation where the alert "CVM x.x.x.x is unreachable" (alert number A103088) is generated intermittently even if the NCC check "ncc health_checks network_checks inter_cvm_connections_check" passes.
The following alert may be generated even at a time when it is suspected there is no issue with inter-CVM (Controller VM) communications:
nutanix@CVM$ ncli alert ls | grep -B1 -A11 unreachable
Use the following information to confirm if this is a false alarm and this KB article applies.
Cross reference the time of this alert against the ping statistics collected around this time and check if it correlates to an issue with inter-CVM communications.
Ping statistics between CVMs are recorded on each CVM and any issues with inter-CVM communications would be evident with "unreachable" being listed against a CVM. From the CVM showing the alert, run the following command:
nutanix@CVM$ cat /home/nutanix/data/logs/sysstats/ping_hosts.* | egrep -v "IP : time" | awk '/^#TIMESTAMP/ || /^x.x.x.142/' | grep -B1 "unreachable"
If no results are returned, then there were no ping drops during regular pings between CVMs.
If results are returned and are close in time to the alert, then the alert is genuine. For example, running the above command and receiving output as below could mean network outage or a CVM down and this KB article does not apply.
#TIMESTAMP 1579153482 : 10/10/2020 05:44:42 AM
Check if the NCC check inter_cvm_connections_check has not failed.
When you check health_server.log, you will see the NCC check inter_cvm_connections_check status as PASS. Filter the log as below and look for the times around the alert to confirm status of the NCC check.
nutanix@CVM$ grep "Status for plugin health" /home/nutanix/data/logs/health_server.log | grep inter_cvm_connections_check
Check that health_server.log logs the ping as failed for the CVM but the ping command returns no output.
When checking health_server.log around the time of the alert, it shows the below ERROR logging for the CVM indicated in the alert and other CVMs, which means ping has failed for the CVM IP, and after this we see the alert(s) triggered. For example:
nutanix@CVM$ grep -A 2 -B 4 "Ping failed for" /home/nutanix/data/logs/health_server.log
The log extract shows that the ping command was executed for x.x.x.102 but returned with ERROR and the output is a blank line.
If the output shows "Destination Host Unreachable", then it shows a network connectivity issue or a CVM down scenario.
After all the above conditions have been verified, run the following command to check the oom-killer (out of memory) event logs for cluster_health cgroups occurring when the ping test was executing. For example:
nutanix@CVM$ dmesg | grep -A 19 "cluster_health killed as a result of limit"
When the cluster health task is killed, all health checks at this time will not complete successfully. This is what is causing the inter_cvm_connections_check to return an unexpected return value and cause the false alert.
|
In such situations, it is recommended to upgrade NCC to the latest version, as improvements to handle cluster_health OOM issues have been included in each release.
|
KB15157
|
Lenovo: Could not open device at /dev/ipmi0
|
Foundation unable to access /dev/ipmi0 on Lenovo hardware
|
While attempting to Foundation new hardware from Lenovo, Foundation errors out because it is unable to open /dev/ipmi0.
|
Lenovo recommends disabling KCS access. With KCS access disabled, in-band IPMI access fails. Lenovo settings: Enable KCS access and restart XClarity. Reference: https://pubs.lenovo.com/xcc/NN1ia_c_IPMIaccess https://pubs.lenovo.com/xcc/NN1ia_c_IPMIaccess
IPMI over Keyboard Controller Style (KCS) Access
|
|
KB2245
|
Troubleshooting common stargate FATAL errors
|
This KB details how to troubleshoot some common Stargate FATAL errors you may see
|
When the Stargate process encounters a crash or FATAL error, the process will restart, and the cause will be printed in /home/nutanix/data/logs/stargate.FATAL. There are many reasons why the Stargate process might crash, so this document lists some common FATAL errors you may see and how to troubleshoot them. Below are some common Stargate FATALs you may see:
Fail-fast after detecting hung stargate ops: Operation with ID X hung for 60secs
Watch dog fired
The 60 seconds is based on the Stargate gflag: stargate_operation_hung_threshold_secs https://sourcegraph.ntnxdpro.com/gerrit/main@cdp-master/-/blob/cdp/server/stargate/stargate.cc?L161 (the default value is 60 seconds).WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without review and verification by any ONCALL screener or Dev-Ex Team member. Consult with an ONCALL screener or Dev-Ex Team member before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit and KB-1071 https://portal.nutanix.com/kb/1071
|
Troubleshooting hung op timeouts/Watch dog fired errors
Check whether Stargate FATALs are observed during Cassandra compaction jobs. Reference KB- https://nutanix.my.salesforce.com/kA03200000097nz 2757 https://nutanix.my.salesforce.com/kA03200000097nz to identify compaction jobs.
Fail-fast after detecting hung Stargate ops: Operation with ID X hung for 60secs
Fail-fast FATALs indicate that a particular operation could not be processed by Stargate within 60 seconds. When this happens, Stargate will restart itself to try and clear the ops in question. However, if there is an underlying condition causing the ops to not be cleared, you will likely see repeated instances of hung op timeouts (potentially on multiple cluster nodes).
When a hung op timeout is encountered, Stargate will flush all tracing data and stats to two files: "stargate.stats.XXXX.html", and "stargate.activity_traces.XXXX.HTML". The activity_traces file is of particular importance, as this will have the details of what types of operations are becoming hung, which can help deduce whether the issue lies with Stargate, or an underlying component (Cassandra, Pithos, Cerebro, etc.).
Capturing the operation in the trace
Stargate traces do not capture the operation that hung if the Stargate gflag value activity_tracer_default_sampling_frequency https://sourcegraph.ntnxdpro.com/gerrit/main@util-master/-/blob/util/misc/activity_tracer.cc?L45=1 was not set before the FATAL error. Activity traces are off by default (see ENG-35160 https://jira.nutanix.com/browse/ENG-35160). This gflag needs to be set to 1 to capture the trace details for every operation. It is safe to use this setting in production while isolating a problem.
The performance impact is minimal and not noticeable. Note: the gflag value is set to 1 by collect_perf when troubleshooting performance issues. AOS 5.0 incorporates the fix https://sourcegraph.canaveral-corp.us-west-2.aws/gerrit/main/-/commit/8aaec75ebe05daf0727291c19fce6b10b83f35c8 for " ENG-16819 https://jira.nutanix.com/browse/ENG-16819 Reduce tracing overhead of activity tracing" which greatly reduces the impact of having activity traces enabled.
For troubleshooting Stargate FATALs, you can set the gflag dynamically to quickly capture traces covering the hung op, if Stargate is failing. It may be necessary to set this persistently afterwards, as the dynamic setting will only be effective for one Stargate FATAL, but that may be enough to start to isolate the cause.
Determine the current value (in case you need to restore it dynamically after capturing the data):
nutanix@CVM$ allssh "curl http://0:2009/h/gflags 2>/dev/null | grep activity_tracer_default_sampling_frequency"
Note: The activity_tracer_default_sampling_frequency gflag is whitelisted in the ~/config/stargate_gflags_whitelist.json so it is allowed to change it live without the use of gflags_support_mode first, which is needed for most gflags on 6.5.1 and above — as described in the KB-1071 https://portal.nutanix.com/kb/1071.
Change the value to equal 1 (enable traces):
nutanix@CVM$ allssh "curl http://0:2009/h/gflags?activity_tracer_default_sampling_frequency=1 2>/dev/null | grep activity_tracer_default_sampling_frequency"
After you finish your troubleshooting/collection - restore the value identified in the first step. (typically 128 for pre-4.5 releases and 0 after 4.5)
nutanix@CVM$ allssh "curl http://0:2009/h/gflags?activity_tracer_default_sampling_frequency=0 2>/dev/null | grep activity_tracer_default_sampling_frequency"
Watchdog fired
FATALs with error "watch dog fired" indicate that Stargate control thread 0 was busy and was unresponsive for more than 20 seconds. Watch dog fired errors are usually associated with hung op timeouts, but you may also see only watch dog fired errors present as well. Overall this simply indicates that the system was extremely busy, to the point where it could not respond to Stargate within 20 seconds.
Watchdog fired FATALs will also generate an activity_trace file under /home/nutanix/data/logs. As with troubleshooting hung op timeouts, reviewing the trace file is the first step in deducing the cause of watchdog fired FATALs.
Analysis example
Below is an example. In this case, operation 75030345 was not cleared in 60 seconds, which caused a hung op timeout FATAL...
F0813 06:58:39.381965 16449 stargate.cc:1369] Fail-fast after detecting hung stargate ops: Operation with id 75030345 hung for 60secs
Trace file "stargate.activity_traces.16250.html" was generated as a result of this hung op timeout. Looking at the traces there are multiple operations that are taking very long to process...In this example, from the CVM in question, use the following command to open the trace file.
nutanix@CVM$ links stargate.activity_traces.XXXX.html
The logs can also give us a clue as to what might be the cause of the FATALs. Since Stargate is the process encountering issues, reviewing the Stargate logs is the next step in troubleshooting. From the stargate.INFO log, we see multiple instances of disk writes taking over 100 milliseconds to complete. Before the hung op timeout, we also see a warning noting that simple blockmap copy tasks are not completing within 45 seconds:
I0813 06:58:38.853996 16492 local_egroup_manager.cc:268] AIO disk 39 write took 181 msecs
The above info from the activity traces and Stargate logs points to something in the vdisk layer that is causing operations to take much longer than they should. In this case, heavily fragmented vdisks and block maps were the direct cause of the long-running operations (note that the operations in question are related to vdisk reads/writes). Specifically, since the underlying vdisk blockmaps were so heavily fragmented, all write and copy operations were taking exponentially longer than they should have (Reference: ENG-20874 https://jira.nutanix.com/browse/ENG-20874). Resolving this required defragmenting the vdisks in question, which prevented further FATALs of this nature. This is just one example of how you can use the activity_trace files and logs to help deduce where a problem may lie.
Other useful info
Common causes of Stargate FATALs
Here are some scenarios that are known to cause issues with Stargate
Intermittent network connectivity between CVMs causes hung op timeouts/watchdog fired. See KB-3843 https://portal.nutanix.com/kb/3843 for this scenario.
This can be checked inside the ~/data/logs/sysstats/ping_all.INFO on any of the CVMs. Below is a command to check on all the CVMs in the cluster.
nutanix@CVM$ allssh 'grep -C 20 unreachable data/logs/sysstats/ping_all.INFO /dev/null | cat -n | ( tail -40 ) | grep -e TIMESTAMP -e unreachable'
NOTE: In AOS releases prior to 5.19, grep the ping_hosts.INFO log rather than ping_all.INFO.
For more details, see ENG-73184 https://jira.nutanix.com/browse/ENG-73184. Note, that fpings are performed from each CVM every 15 seconds, in prior releases ping was done once per minute.
SSD tier is 95% full or close to it. The "df" command is not going to accurately gauge tier usage, so it is recommended to use Curator's tier usage page to accurately gauge the tier usage. The following commands will take you to the Curator 'Tier Usage' page on Curator master:
nutanix@CVM:~$ curator_cli get_master_location
You can also search for errors like "kDiskSpaceUnavailable" in the stargate.INFO logs to see if Stargate is starved for space (see the example greps after this list).
Cassandra process dropping ops, which causes Stargate hung op timeouts. To check for this, search for "Op was dropped by admission controller" in the Cassandra logs (see the example greps after this list).
Cassandra JVM garbage collection (GC) pause - look for TotalPauseTime high values (above 20 seconds) in the cassandra_gc_stats.INFO logs, or follow KB 7704 https://portal.nutanix.com/kb/7704.
If the hung op in the trace is OplogRecoveryOp, refer to internal KB 1942 https://portal.nutanix.com/kb/1942 for monitoring Oplog usage.
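For the kDiskSpaceUnavailable and admission-controller checks mentioned above, quick greps such as the following can be used; the log paths shown are the usual CVM locations and may vary by release:
nutanix@CVM$ allssh 'grep -i kDiskSpaceUnavailable ~/data/logs/stargate.INFO | tail -5'
nutanix@CVM$ allssh 'grep "Op was dropped by admission controller" ~/data/logs/cassandra/system.log* | tail -5'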
Viewing activity traces live
If you are in a scenario where you want to take a live look at the operations going through Stargate, you can use the following URL on a CVM to view the activity traces:
http://<cvm_ip_address>:2009/h/traces
NOTE: Port 2009 has to be opened in the CVM iptables firewall, otherwise you will not be able to connect to this URL.
On AOS 5.5 and above you will need to use the modify_firewall method as described in the AOS Security guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Security-Guide-v5_10:wc-cvm-firewall-block-unblock-ports-cli-t.html:
nutanix@CVM:~$ modify_firewall -o open -i eth0 -p 2009 -a
IMPORTANT: Make sure to close the firewall for 2009 port world access after completing your live troubleshooting!
Manually dumping activity traces
In the event Stargate does not produce a FATAL resulting in creation of trace logs under /home/nutanix/data/logs/ then the following method may be used to manually scrape the Stargate traces pages for offline analysis.This creates two files "Stargate60MinCompletedTraces.html" and "Stargate60MinErrorTraces.html". These files can be opened with any web browser; however, please note that hyperlinks will not work as expected unless viewed live on the cluster.
If the amount of time for which traces are required needs to be increased or decreased the value of the first variable set "TraceTimeInMS" can be modified.
TraceTimeInMS=3600000; for TraceLink in $(links http:0:2009/h/traces --dump | grep http | awk '{print $2}'); do curl -ps "$TraceLink&expand=&low=0&high=$TraceTimeInMS&a=completed" 2>&1 >> Stargate60MinCompletedTraces.html; curl -ps "$TraceLink&expand=&low=0&high=$TraceTimeInMS&a=error" 2>&1 >> Stargate60MinErrorTraces.html; done;
|
KB14285
|
Lenovo AMD based nodes running UEFI versions below 2.40 might run into a PSOD on ESXi 7.0
|
This KB article talks about an issue where Lenovo AMD based Nodes running UEFI version below 2.40 might encounter a PSOD on ESXi 7.0.
|
Nutanix is aware of an issue where Lenovo HX Nodes running AMD platforms below UEFI version 2.40 might run into a PSOD when running ESXi 7.0
In order to identify if you are hitting this issue, Please check the following:
1. Check the Hardware Model you are running on the cluster
nutanix@cvm$ ncli ru ls | grep -i "rack model"
2. Check the ESXi version(s) on the cluster node(s).
nutanix@cvm$ ncli ms ls | egrep -i "Hypervisor Version| Name"
3. Attached below is the PSOD Screenshot
|
In order to resolve the issue, upgrade to the Lenovo Best Recipe v1.5 https://support.lenovo.com/us/en/solutions/ht514073. This Best Recipe configuration package contains UEFI version 2.40, which has a fix for this issue.
Please reach out to Nutanix Support http://portal.nutanix.com in case of any queries.
|
KB4453
|
CVM reboots continuously, logging "EXT4-fs (md125): couldn't mount as ext3 due to feature incompatibilities" in the console
|
Downlevel Dell HDD drive backplane firmware may cause SAS HDD failure and causes the CVM to go into a boot loop, logging "EXT4-fs (md125): couldn't mount as ext3 due to feature incompatibilities" errors in the console.
|
Configuration:
Model: Dell XC730xd-12
HBA330 Firmware: 13.17.03.00
Backplane Firmware: 3.31
Hypervisor: ESXi 6.X
AOS: 4.7.X
Symptoms
From Prism: All drives show with no failures and the CVM powered on
"cluster status" shows the CVM Down
Console to the CVM shows the CVM in a boot loop
Logging into the XC730 integrated Dell Remote Access Controller/Storage/Physical shows several drives as "0GB" and the VPD (Vital Product Data) for the drive is missing.
Collecting the ServiceVM_Centos.0.out shows the following:
Example:
==================================
[ 1.957614] usbserial: usb_serial_init - usb_register failed
[ 1.958697] usbserial: usb_serial_init - returning with error -19
mdadm main: failed to get exclusive lock on mapfile
mdadm: /dev/md/phoenix:2 has been started with 2 drives.
mdadm: /dev/md/phoenix:1 has been started with 2 drives.
mdadm: /dev/md/phoenix:0 has been started with 2 drives.
Checking /dev/md125 for /.nutanix_active_svm_partition
[ 12.062697] EXT4-fs (md125): couldn't mount as ext3 due to feature incompatibilities
[ 12.064417] EXT4-fs (md125): couldn't mount as ext2 due to feature incompatibilities
130082 blocks
mdadm: stopped /dev/md125
mdadm: stopped /dev/md126
mdadm: stopped /dev/md127
warning: can't open /etc/mtab: No such file or directory
mknod: `/dev/null': File exists
[ 15.590391] piix4_smbus 0000:00:07.3: Host SMBus controller not enabled!
[ 30.646918] end_request: I/O error, dev fd0, sector 0
[ 30.696956] end_request: I/O error, dev fd0, sector 0
[ 30.720971] end_request: I/O error, dev fd0, sector 0
[ 31.181337] end_request: I/O error, dev fd0, sector 0
[ 31.205353] end_request: I/O error, dev fd0, sector 0
[ 31.229372] end_request: I/O error, dev fd0, sector 0
|
Solution: Check the HBA and drive backplane firmware. If it is downlevel, upgrade the firmware to version 3.35.
HBA Firmware: http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=64NJM
Dell 13G PowerEdge Server Backplane Expander Firmware - NEW Latest firmware Version 3.35, A00-00: http://www.dell.com/support/home/us/en/04/drivers/driversdetails?driverId=HYPYY
Dell 13G PowerEdge Server Backplane Expander Firmware - OLD Latest firmware Version 3.32, A00-00: http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=V23C6
Dell 13G PowerEdge Server Backplane Expander Firmware - OLD Problem version Version 3.31, A00-00: http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=0CKTJ
|
KB13117
|
PC CMSP post-upgrade script fails
|
This article describes an issue where the post-upgrade script on Prism Central (PC) with CMSP fails.
|
On Prism Central (PC) with CMSP enabled, during PC host upgrade, the post-upgrade script is run for CMSP. If the script does not complete successfully, k8s core components may not be running on PC and you may encounter issues such as:
kubectl command not found.
CMSP cluster is unhealthy.
mspctl cluster kubeconfig returns the following error:
No route to Host
How to verify if the post-upgrade script has failed
Check the log file /home/nutanix/data/logs/cmsp_upgrade.out. If there are any "ERROR" entries, then the post-upgrade script has failed.
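A quick way to check is a simple grep of the log path mentioned above:
nutanix@pcvm$ grep -i ERROR /home/nutanix/data/logs/cmsp_upgrade.out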
|
If the script has failed due to a systemd service not coming up in time as expected, rerun the script.
nutanix@pcvm$ python /home/nutanix/cluster/bin/cmsp_post_upgrade.py
Observe the logs in /home/nutanix/data/logs/cmsp_upgrade.out. If there are no "ERROR" entries, k8s core components should come up and CMSP cluster should become healthy.
If the script has failed due to some other reason, or rerunning it still fails, or the script was successful but there are still issues with CMSP after PC upgrade, raise an ONCALL ticket with the MSP Engineering Team.
|
Use ""sudo nvme smart-log /dev/nvmeXn1"" to check for percentage_used
|
critical_warning or media_error.Note the media_errors count in the example below. When considered along with the NVMe disk model number and firmware revision
|
this is a symptom of a known issue described in KB-14727 https://portal.nutanix.com/kb/14727.
| null | null |
KB12563
|
Changes in application-consistent snapshots for Nutanix workloads
|
Changes in application-consistent snapshots for Nutanix workloads.
|
For Nutanix Virtual Machines, there are two types of snapshots https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_0:wc-cluster-snapshots-wc-r.html that can be selected or triggered via API from any Backup Software:
Crash-consistentApplication-consistent snapshots.
Windows Virtual machines that are protected on Nutanix using the application-consistent method leverage Microsoft Volume Shadow copy technology to freeze the application while the snapshot is taken. Backup requestors invoke the VSS workflow and coordinate with the NGT/Cerebro components to take the snapshot on Nutanix in a fixed time window that the VSS writers have halted the application. Note that the maximum stun time is 10 seconds.Types of backup requestors:
Legacy Protection Domains Backup solutions that leverage RESTAPI to protect virtual machines (e.g. Hycu, Veeam, Rubrik)Leap
Refer to the Microsoft article that describes the Overview of Processing a Backup Under VSS https://docs.microsoft.com/en-us/windows/win32/vss/overview-of-processing-a-backup-under-vss?redirectedfrom=MSDN.Until Nutanix 6.0, the default VSS snapshot type was Full (VSS_BT_FULL). This resulted in log truncation after each snapshot for workloads like SQL/Exchange, offering limited control in differentiating between a Disaster Recovery Snapshot or a Full backup. To offer better control on backup and RTO/RPO requirements and to overcome limitations with SQL DB restore in case of FULL snapshots, with Nutanix AOS 6.0 the default VSS Backup type has been changed to Copy (VSS_BT_COPY). As a result of this change, protected workloads using application-consistent snapshots will no longer truncate logs by default. For workloads like Microsoft Exchange, this results in higher disk space consumption.Below are changes to default VSS snapshot types:
The default and only backup type for VSS snapshot is VSS_BT_FULL in AOS versions up to and exclusive to AOS 6.0.x except for AOS 5.20.4, which uses VSS_BT_COPY. Change has been reverted to VSS_BT_FULL in 5.20.5 and newer 5.20.x.With AOS 6.0.x, the default and only backup type for VSS snapshots changed to VSS_BT_COPY.With AOS 6.1.x/6.5.x, Backup vendors can choose between VSS_BT_FULL and VSS_BT_COPY backup types.
|
Depending on the protection strategy used, there are several ways to control VSS backup type:
For Virtual Machines that are protected using a backup solution, Nutanix has exposed with AOS 6.1.x/6.5.x the v3 /vm_snapshots API that can be used by backup applications to specify backup_type:
Creates a snapshot containing the given VM with the snapshot ID obtained earlier. This endpoint is hidden in the v3 API explorer.
Contact your backup vendor for details on the use of this API. The prerequisite is AOS 6.1.x/6.5.x and the ability to perform application-consistent snapshots. For more information about conditions and limitations, refer to the Conditions for Application-consistent Snapshots https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_0:wc-dr-application-consistent-snapshots-wc-r.html documentation.
For Virtual Machines that are protected using Legacy DR or Leap, if log truncation is required, for example, for Microsoft Exchange, the VSS post-thaw scripts can be used to simulate a backup. Note: In Microsoft Exchange, we do not recommend enabling circular logging to resolve the space usage caused by the lack of truncation, as it will limit the restore option to the last point in time. Refer to the following documentation for information regarding Pre_Freeze and Post_Thaw Scripts https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide-v6_0:wc-application-consistent-pre-freeze-post-thaw-guidelines-r.html.
These scripts should exist under the following paths on Windows VMs (using the respective system drive letter as the root of the path): C:\Program Files\Nutanix\scripts\pre_freeze.bat and C:\Program Files\Nutanix\scripts\post_thaw.bat, respectively.
For Linux VMs running NGT 1.1 or later, you must create the pre_freeze and post_thaw scripts under the following paths, respectively – /usr/local/sbin/pre_freeze and /usr/local/sbin/post_thaw – these scripts must have root ownership and root access with 700 permissions.
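As an illustration only of the Linux ownership and permission requirements (the script contents themselves are up to the administrator), the files could be prepared like this on the guest:
user@linux-guest$ sudo touch /usr/local/sbin/pre_freeze /usr/local/sbin/post_thaw
user@linux-guest$ sudo chown root:root /usr/local/sbin/pre_freeze /usr/local/sbin/post_thaw
user@linux-guest$ sudo chmod 700 /usr/local/sbin/pre_freeze /usr/local/sbin/post_thaw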
Example of using diskshadow in Post-thaw script to truncate logs for drives N and M:
Create the backup file (use a colon after drive letters) -- add data and log volumes to C:\backup\backup.txt
add volume N:
Configure the Post-thaw script:
@echo off
Event Viewer can be a great tool to verify that log truncation is occurring and that the scripts are running properly.
Create a filter for "Event sources" like ESE and MSExchangeISSome key Event ID's to be on the lookout for are:
ESE – Event ID 2005 – Starting a Full Shadow Copy Backup
MSExchangeIS – Event ID 9811 – Exchange VSS Writer preparation
ESE – Event ID 224 – Logs are now purged
MSExchangeIS – Event ID 9780 – Backup is now complete
If there are use cases where the post-thaw script method is not applicable, contact Nutanix Support http://portal.nutanix.com.
|
KB3491
|
How to get missing Prism Pro licenses associated to account
|
How to get missing Prism Pro licenses associated to account
|
You may come across cases wherein the customer (end-customer account) has purchased or upgraded to Prism Pro licenses, but the licenses are not listed under the account. This article describes the process for getting the missing Prism Pro licenses associated with the account.
|
Follow the steps below when you find that the Prism Pro licenses are missing.
1. Obtain the relevant order information: order invoice, order number, etc.
2. Raise a JIRA ticket with the Biz Apps team to associate the license with the account. JIRA: https://jira.nutanix.com/browse/BA-36568 https://jira.nutanix.com/browse/BA-36568
How the Prism Pro license looks on SFDC – see the screenshot below.
|
KB10748
|
VMware Snapshot consolidation task failing in ESXi with error: 'Filesystem timeout' due to HDD overwrites
|
This KB walks you through a workaround when the snapshot consolidation task fails in ESXi.
|
Background:
VMs running on the ESXi hypervisor might contain snapshots that were not generated from a Nutanix cluster. These snapshots are usually created by third-party backup tools that do not leverage the Nutanix API, or due to manual user intervention.
Description:
In some instances, a consolidation task might fail with the following error message (Filesystem time-out / OK to retry). The task might fail at different percentages.
A filesystem timeout error will also be seen on the ESXi host logs:
2020-11-27T13:44:52.077Z cpu77:19558915)WARNING: SVM: 5761: scsi0:1 VMX took 3513 msecs to send copy bitmap for offset 2559800508416. This is greater than expected latency. If this is a vvol disk, check with array latency.
Identification:
In order to understand why the consolidation task is failing, assist the customer in collecting a performance capture (the following KBs describe how to collect this data and upload it to the Illuminati parsing tool: How to use collect_perf to capture performance data /articles/Knowledge_Base/How-to-use-collect-perf-to-capture-performance-data / Collect_Perf (illuminati) Scraper Script /articles/Knowledge_Base/Collect-Perf-illuminati-Scraper-Script)
Launch a performance capture
Start the snapshot consolidation process
Leave the performance capture running until the consolidation fails
Stop the performance capture
Upload the performance capture for analysis to illuminati
The consolidation task is similar to Storage vMotion: VMware performs it on a live VM by sending small writes, which are categorized as random by our I/O pattern analyzer and hence go to Oplog. We can corroborate this by reviewing the Illuminati performance capture.
As seen in the screenshot below, one (or more) vdisks will be crossing the oplog limit consistently. Oplog is the Nutanix caching mechanism to absorb random writes fast (more information regarding Oplog design and its role in a Nutanix cluster can be found here https://confluence.eng.nutanix.com:8443/display/~arjun.balacha/Oplog+Internals)
By selecting in the left pane the node where the vdisk with the high oplog utilization is (CVM .244 in this example), more information about this node can be seen. By selecting 2009 link in the left pane, dumps of 2009 pages can be accessed:
From the 2009 page, correlate the vdisk id with the name of the vmdk from the oplog limit alert:
For identification, the relevant information is in the 2009/vdisk_stats from the dropdown box, then, with the find option in the browser, the relevant information for this particular vdisk can be found:
In particular, the Oplog drain destination section will show HDD as the drain destination in its majority. This will be consistent even when scrolling the bar to change between dump pages. When oplog drains to HDD on hybrid systems, this means that the I/O is an overwrite. Oplog drains usually go to the SSD extent store (as it is faster) except when it is an overwrite, since draining twice (SSD first and then HDD) would be more inefficient.
To further confirm, the Organon performance analysis tool can be used (note that Organon is an internal tool that is still under development, but the expectation is that it will be publicly available to the global SRE team soon). In the screenshots below, the oplog size can be consistently seen at the limit (6 GB), as noted by Illuminati, and the write IOPS stop the moment the oplog gets full, as at that point oplog draining becomes a priority to allow more space on it to absorb further random writes. When this happens, performance is severely impacted, and this correlates with the reduced number of writes.
In the screenshot below, it can be observed that the write destinations are oplog and HDD, which points to overwrites:
Root cause:
Oplog draining to HDD (overwrites), plus aggressive draining due to the oplog being full, causes the Nutanix cluster to be unable to keep up with the storage demand from the hypervisor for this operation.
|
In order to be able to keep up with the hypervisor storage demand, the following workaround can be leveraged. At a glance, by creating a snapshot of the VM that has the disks needing consolidation, a new live vdisk is generated and all the writes going to this live vdisk will be new; hence, no overwrites will happen and no draining to HDD will occur. With this workaround, the cluster should be able to keep up with the consolidation process.
Steps:
Ensure the cluster has enough free space to accommodate a snapshot of the VM that needs consolidation.
Create a protection domain (follow the Data Protection guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_19:wc-dr-wc-nav-u.html for instructions on how to create a protection domain).
Add the VM with the disks to consolidate to the protection domain. The following Alert - A130183 - A130195 - Protected VM(s) Not Recoverable /articles/Knowledge_Base/Alert-A130183-A130195-Protected-VM-s-Not-Recoverable might appear, but can be ignored.
Take a local snapshot in the protection domain GUI, with any expiration.
Perform the consolidation.
Once the consolidation is completed, remove the snapshots and the protection domain.
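The protection domain creation, VM protection, and one-time snapshot steps above can also be run from a CVM with ncli; this is a rough sketch only, the PD and VM names are placeholders, and the exact parameters should be verified against ncli help on the running AOS version:
nutanix@CVM$ ncli pd create name=temp-consolidation-pd
nutanix@CVM$ ncli pd protect name=temp-consolidation-pd vm-names=<vm_name>
nutanix@CVM$ ncli pd add-one-time-snapshot name=temp-consolidation-pd retention-time=86400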
|
KB16039
|
CMSP - Users/groups might get access denied although they have the correct roles/permissions to perform the required action
|
Users/groups might get access denied although they have the correct roles/permissions to perform the required action due to incomplete Cache, resolved in PC.2022.9 or higher.
|
Users and/or groups accessing Prism Central might get 403 "access denied" errors although they have the correct roles/permissions to perform the required action, due to an incomplete cache of the user-ACP mapping. During the /authorize call, AuthZ tries to get the user-ACP mapping from the cache; if even a single ACP is present in the cache for the user or any of their groups, the information is not fetched from the database. When the cached data is incomplete, clients may therefore receive false 403 responses or incomplete data. For example, during the investigation in ONCALL-15469 https://jira.nutanix.com/browse/ONCALL-15469, one of the AD users had the Super Admin role assigned, but that user could not perform some tasks in the PC Web UI: buttons were greyed out or 403 access denied messages were returned. This is caused by ENG-500613 https://jira.nutanix.com/browse/ENG-500613:
nutanix@PCVM:~$ nuclei user.get XX-XX-XXX-b0c1ae07f4c8|grep -i super
Investigating the IAM pods' logs, it was noted that the response for the Authz requests for this user did not include the "Super Admin_acp":
{
|
This issue is resolved in pc.2022.9 or higher; upgrade Prism Central to pc.2022.9 or higher.
Workaround: If upgrading PC is not possible, a possible workaround is to perform a roles-related operation, such as assigning the user the "User Admin" role via the legacy role binding in PC and then removing it again, which invalidates the cache. Another option, if possible, is to restart the IAM pods on the PCVMs to clear the cache.
|
KB17011
|
PC Deployment/Upgrade to version PC2024.1.0.1 fails with the error - Domain name resolution for prism-central.cluster.local failed.
|
PC Deployment pc.2024.1.0.1 is failing with the error - Prism Central is not compatible to enable Microservice Infrastructure: Domain name resolution for prism-central.cluster.local failed
|
PC deployment of pc.2024.1.0.1 fails with the below error:
PC Deployment fails with the Task Failure message - Encountered Exception in post_deployment step: Failed to enable micro services infrastructure on PC: Encountered error in cmsp sub task 'Performing CMSP prechecks': Prism Central is not compatible to enable Microservice Infrastructure: Domain name resolution for prism-central.cluster.local failed.
This issue is seen when the DNS name servers are not resolving the query or are taking too much time to resolve it. A new pre-check was added for PC 2024.1.0.1 deployment which checks for a DNS response for the MSP domain name (prism-central.cluster.local) from the DNS servers.
Identification steps:
1. Check the genesis.out logs; you should see timeout errors similar to the following:
2024-06-14 08:44:42,863% WARNING 07219072 command.py: 226 Timeout executing dig +short prism-central.cluster.local: 3 secs elapsed2024-06-14
Microservices fail to enable at this stage; the observed DNS timeout of 3 seconds is pre-defined for the microservices deployment pre-check in PC 2024.1.0.1.
2. Confirm how many DNS name servers are configured on this PC: cat /etc/resolv.conf
3. Perform an nslookup and dig to confirm that DNS queries are answered correctly and to check how long resolution takes:
nutanix@NTNX-X-X-X-X-X-PCVM$ nslookup
Command to run dig against a specific DNS IP:
time dig +short prism-central.cluster.local @[DNS IP]
Example :
nutanix@NTNX-X-X-X-X-PCVM$ time dig +short prism-central.cluster.local @X.X.X.X [This will be your DNS IP].
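If multiple name servers are configured, each one can be timed in a single pass. The loop below is a sketch only; it assumes the configured name servers are listed in /etc/resolv.conf on the PCVM:
nutanix@PCVM:~$ for dns in $(grep ^nameserver /etc/resolv.conf | awk '{print $2}'); do echo "=== $dns ==="; time dig +short prism-central.cluster.local @$dns; done
Any server that takes close to or longer than the 3-second pre-check timeout is a candidate for replacement.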
|
As a normal practice, ask the customer to fix the DNS name servers, or to provide different DNS IPs that have the shortest resolution time, for example:
nutanix@NTNX-X-X-X-X-PCVM:~/data/logs$ time dig +short prism-central.cluster.local @X.X.X.X [This will be your DNS IP].
Once the DNS issue is fixed, reach out to your STL/Subject Matter Expert to clean up the CMSP from the PCVM - follow KB - Remove failed Microservice Infrastructure deployment steps /articles/Knowledge_Base/Remove-failed-CMSP-deployment-steps. After the CMSP clean-up, log in to the PC Web GUI > Prism Central Administration > you should see "Enable Microservices". Enable Microservices and wait for the task to complete.
|
KB16756
|
NGT Install/Upgrade from PC causes Anduril to crash and stuck tasks in PE
|
Stuck Anduril tasks "Put / vmUpdate" in "kRunning/kQueued" states and 0% progress.
|
This issue occurs when a bulk NGT install/upgrade request is triggered from Prism Central and the user account specified for some or all UVMs does not have sudo permission. Stuck Anduril/Uhura tasks (Put/vmUpdate) can be seen on the PE after NGT installation is initiated on VMs from PC.
nutanix@NTNX-CVM:~$ ecli task.list include_completed=0
Anduril.out on the leader CVM shows a stack trace similar to the one below. The key information for identifying the issue is the channel.py line in the trace, which is used for SSH sessions; paramiko, the Python SSH2 library, is failing here.
2024-04-17 14:11:27,544Z WARNING task_slot_mixin.py:73 Watchdog fired for task 7c528a97-12a8-4ebb-777e-73adc4130221(PutTask) (no progress for 3022867.58 seconds)
|
The Nutanix engineering team is aware of this issue, which is tracked via ENG-580256 https://jira.nutanix.com/browse/ENG-580256. Attach cases with the same symptoms and follow the workaround below to break the SSH session, which will clear the stuck tasks on PE and PC.
Workaround
1. Find the Anduril leader:
nutanix@cvm$ panacea_cli show_leaders | grep -i anduril
2. SSH to the anduril leader CVM.
nutanix@cvm$ ssh <Anduril Leader CVM IP>
3. Run the command to restart the Anduril service.
nutanix@cvm$ genesis stop anduril; cluster start
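After the restart, the previously stuck tasks should clear. As a verification example, re-run the task listing used earlier and confirm the Put/vmUpdate tasks are no longer present:
nutanix@cvm$ ecli task.list include_completed=0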
If the workaround does not resolve the issue, engage an STL.
|
KB16747
|
Blueprint launch fails at Image Fetch task
|
When the Blueprint is expected to launch from a remote PC, the Image Fetch task may fail even if the image works on a local PC.
|
When setting up a blueprint to run on a remote PC, the blueprint was seen to fail with "No output" printed in the GUI. The Blueprint Launch page shows that it failed at the Image Fetch task, but with no further output. All services, pods, and containers are healthy. The related Hercules task fails, but there are no errors or tracebacks in the hercules logs (/home/docker/nucalm/log/hercules_*.log) pertaining to that task UUID:
nutanix@PCVM~: ecli task.list status_list=kFailed
Image task failures can be found in /home/docker/epsilon/log/durga*.log, /home/docker/epsilon/log/narad*.log, and /home/docker/epsilon/log/indra*.log, but the error message does not indicate the reason for the failure. Below is an example of the failed task error message from indra.log:
nutanix@PCVM~: less /home/docker/epsilon/log/indra.log
|
If the image has a checksum in its spec, it will fail to fetch on the remote PC. Self-Service has no provision to input checksums from the downloadable image field, so the checksum field will be blank on the remote site. The image cannot have a checksum configured if the blueprint is to run on a remote PC.
Workaround
Select the image directly on the target PC account rather than utilizing the downloadable image option. Refer to Prism Central Infrastructure Guide: Adding an Image https://portal.nutanix.com/page/documents/details?targetId=Prism-Central-Guide-vpc_2023_3:mul-image-add-pc-t.html for more information.
|
KB14092
|
Objects deployment prechecks may fail "No NTP servers are reachable" although no issue is seen when checking manually
|
This article describes particular scenario where Objects deployment prechecks provide a misleading error "No NTP servers are reachable" while in fact they are reachable.
|
Objects deployment prechecks may fail with error:
Trying NTP Servers n.n.n.21. No NTP servers are reachable.
But port UDP 123 is not blocked from Object Store Storage Network to NTP server and other clients are syncing fine with the NTP servers.
If PCVM and/or CVMs are deployed in the same subnet as Object Store Storage Network, they may sync with NTP servers and produce no errors.
|
Unlike on the CVM/PCVM, connectivity from the Objects storage network to the NTP server is checked by an "ntpclient" package that is installed in the "predeployment_port_vm" VM. To further debug the issue, check the output of the ntpclient command by running it manually. While the Objects deployment prechecks are running, clone the created "predeployment_port_vm" VM (with a NIC in the same storage subnet) and log in to it from the aoss container:
nutanix@PCVM:~$ docker exec -it aoss_service_manager bash
Run this command to test connectivity to NTP server(s):
tc@box:~$ timeout -t 3 ntpclient -c 1 -h <NTP server> -d ; echo $?
Scenario 1
Exit code 143:
tc@box:~$ timeout -t 3 ntpclient -c 1 -h n.n.n.21 -d ; echo $?
In the above example, Dispersion is greater than 1000000 (65536 * 15.2587890625; dispersion is reported in 1/65536-second units, so this is roughly 15.26 seconds), and the server is therefore rejected. Note that "Refid=76.79.67.76" is not an actual reference IP, but a default value displayed when there is no reference NTP server on the queried NTP server.
Check the NTP config from the PE or PC side. In this example, NTP server n.n.n.23 is just a reference to n.n.n.21. NTP server n.n.n.21 is the primary source and has refid ".LOCL.", meaning it is not synchronizing to any external source.
nutanix@PCVM:~$ ntpq -pn
If this NTP server is a Windows machine, replace it with a non-Windows time source per Nutanix recommendations: https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_5:wc-system-ntp-servers-wc-c.html. Alternatively, make sure it is synchronized with reliable upstream NTP servers (public if possible); this might decrease the Dispersion value. ENG-519908 was created to add more verbosity about the reason for NTP server rejection and avoid misleading error output.
Scenario 2
Exit code "0" (no error):
tc@box:~$ timeout -t 3 ntpclient -c 1 -h ntp0.example.com -d ; echo $?
Check if DNS servers can resolve localhost:
tc@box:~$ nslookup localhost x.x.x.48 ; echo $?
In the above example, DNS server check fails with exit code "1".
Explanation:
You can review the precheck bash script logic:
tc@box:~$ cat port_check.sh
DNS servers are checked by running socat (a netcat alternative) against TCP port 53:
for dns in $dns_ips
But to check NTP servers, the script first wipes /etc/resolv.conf and repopulates it by testing whether each DNS server can resolve "localhost":
# Add PE DNS servers to /etc/resolv.conf
In a scenario where DNS servers are reachable on TCP port 53 but resolving "localhost" is prohibited, /etc/resolv.conf might end up empty even though the DNS precheck passes. As a result, the NTP precheck fails since the NTP server FQDN cannot be resolved. Note that during manual checks in a cloned VM, the script might not have run yet, so /etc/resolv.conf may still be populated properly. The same NTP server is not reported by NCC, as NCC uses dig to test DNS servers, which returns exit code 0 for the same test.
dig localhost @<NTP> +noall +comments
Please attach your case to ENG-550041 https://jira.nutanix.com/browse/ENG-550041 if you are hitting this scenario.
Workaround
The workaround is temporary (only for the duration of the Objects deployment); either:
- configure at least one NTP server as an IP (not FQDN) on PE
- enable localhost resolution on the DNS server
- change the PE DNS to one which allows localhost resolution
Note: While it is not strictly mandated by a specific standard that every DNS server must resolve "localhost" to the loopback address, it is widely accepted as a standard practice among DNS server implementations and is the expected behavior for most networking applications. So unless there is a specific reason not to, this should be configured on the DNS server.
Ref: https://www.rfc-editor.org/rfc/rfc6761.html
2. Application software MAY recognize localhost names as special, or
MAY pass them to name resolution APIs as they would for other
domain names.
3. Name resolution APIs and libraries SHOULD recognize localhost
names as special and SHOULD always return the IP loopback address
for address queries and negative responses for all other query
types. Name resolution APIs SHOULD NOT send queries for
localhost names to their configured caching DNS server(s)
|
KB13426
|
NX-8150-G8 or NX-8155-G8 hosts with NVMe drives may get stuck in a boot loop after an upgrade that involves a reboot or power reset
|
NX-8150-G8 or NX-8155-G8 nodes populated with NVMe drives may get stuck on a reboot loop after a power reset or a reboot. This may occur during maintenance activities that require the host to be rebooted, such as hypervisor or firmware upgrades. In order to recover the node from this condition, a BMC reset has to be performed.
|
Platforms affected:
NX-8150-G8 with NVMe drivesNX-8155-G8 with NVMe drives
NX-8150-G8 and NX-8155-G8 nodes populated with NVMe drives may get stuck in a boot loop after a power reset or a warm reboot, or after an LCM firmware upgrade, hypervisor upgrade, or any other workflow that requires hypervisor reboot operations. The issue is present on BMC firmware 8.1.7 and 8.0.6.
In order to recover the node from this condition, a BMC cold reset has to be performed.
The issue is a rare race condition in the BMC software and does not require a hardware replacement.
Refer to ENG-489101 https://jira.nutanix.com/browse/ENG-489101 for more details.
Symptoms:
The host keeps rebooting or is stuck at POST and does not progress.
No hardware errors are found in the IPMI Health Event Logs.The IPMI UI > Maintenance > Maintenance Event Log (MEL) has repeated entries of the message below during the boot loop:
The user attempted to get the host FW a user password:
Sample screenshot:
|
Workaround:
If the above symptoms are observed and the host is still stuck in the boot loop, the workaround is to do a BMC reset through IPMI. But before doing this, collect the data requested below:
Navigate to “IPMI UI > Maintenance > BMC Reset > Unit Reset” or run the following command:
ipmitool bmc reset cold
See KB 1091 https://portal.nutanix.com/kb/1091 for more details.
After the reset, the host is expected to boot to the respective hypervisor. If that is not the case, further investigation is required and this KB does not apply.
Resolution:
The issue is resolved on BMC-8.1.9 https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-BMC-BIOS%3Atop-Release-Notes-G8-Platforms-BMC-v8_1_9-r.html&a=ae0e099654b6a484a761c4b2a6e3aa53a87e8f59a2ad677a5bd63b4f04d2e2e253862ed773799a2d.
This is not a hardware issue, so there should not be any hardware replacement done.
|
KB13388
|
[Prism | rsyslog] - Missing signatures in Prism Gateway after moving to log4j2
|
after moving to log4j2, prism gateway | log4j2 | login/logout log lines not printed to prism_gateway.log
|
On AOS 5.20.3.5 and above, user login/logout log lines similar to following:
"User %s from ip %s has logged out from browser: %s"
are not being printed to /home/nutanix/data/logs/prism_gateway.log. In previous AOS releases, these lines were printed to prism_gateway.log. Missing login/logout lines on AOS 5.20.3.5 and above can cause audit problems if the customer previously relied on the PRISM Syslog module to collect these lines from prism_gateway.log for system access audit purposes. This issue might also affect PC versions where IAMv2 is not enabled by default (e.g. PC 2022.6.x).
Identification:
1. Check that the cluster has a properly configured Syslog server and that ports are open towards the Syslog servers:
<ncli> rsyslog-config ls
2. Check that the PRISM module has been enabled. The PRISM module is required to collect lines from the prism_gateway.log file:
<ncli> rsyslog-config ls-modules server-name=SyslogServer1
3. Check that the Syslog config status is set to true:
<ncli> rsyslog-config get-status
4. SSH to the CVM that is the Prism leader (refer to KB-1841 https://portal.nutanix.com/kb/1841 to determine which CVM is the Prism leader). Check whether you can find "logged in" or "logged out" signatures in prism_gateway.log when users log in or log out in the Prism Web UI. If this issue is hit, these signatures will not be visible:
INFO 2022-05-09 10:40:44,433Z http-nio-127.0.0.1-9081-exec-25 [] com.nutanix.syslog.generateWarnLevelSyslog:23 User [email protected] from ip xx.xx.xx.26 has logged out from browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.74 Safari/537.36 Edg/99.0.1150.55
|
The issue is fixed in AOS 6.5.3+ and AOS 6.7+. Note: The fix is not included in AOS 6.6.
The log4j2 configuration in use on AOS 5.20.3.5 does not print these entries to the log file by default when the AppenderRef is syslogAppender only, without a "file" AppenderRef, as in the line below:
<Syslog name="syslogAppender" facility="authpriv" host="localhost" port="514" protocol="UDP">
Workaround: It is possible to configure log4j2 to print these lines to prism_gateway.log as before by adding the "file" AppenderRef to the log4j2 config:
<Logger name="com.nutanix.prism.syslog" additivity="false" level="info">
To do so, these changes must be made in both log4j2 config files on all CVMs, located at /home/nutanix/config/prism/log4j2.xml and /srv/salt/security/CVM/tomcat/log4j2.xml. To automate the edits, the two one-liners below will modify both files on all CVMs:
//change /home/nutanix/config/prism/log4j2.xml:
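The exact one-liners from the original article are not reproduced above. As an illustrative sketch only, a sed command along the following lines inserts the additional AppenderRef directly after the Logger opening tag; it assumes the file appender defined in the config is named "file" (adjust the name to match the existing Appenders section), and the same edit would be repeated for /srv/salt/security/CVM/tomcat/log4j2.xml and on every CVM:
nutanix@cvm$ sudo sed -i '/<Logger name="com.nutanix.prism.syslog"/a <AppenderRef ref="file"/>' /home/nutanix/config/prism/log4j2.xml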
A service restart is not needed. After the edits, verify that "logged in" or "logged out" lines are printed to prism_gateway.log during user login/logout in the Prism Web UI.
|
KB8258
|
Alert - A130207 - DrNetworkSegmentationVipUnhosted
|
Investigating Hosting Network Segmented VIP issues on a Nutanix cluster.
|
This Nutanix article provides the information required for troubleshooting the alert A130207 - Failed Hosting Network Segmented VIP for your Nutanix cluster.
Alert Overview
The A130207 - Failed Hosting Network Segmented VIP alert occurs when hosting of the virtual IP of the DR service network segmentation has failed.
This could be due to the following:
Network configuration might have been changedNetwork configuration has not been configured properly
Sample Alert
Block Serial Number: 18SMXXXXXXXX
Output messaging
[
{
"Check ID": "Hosting of virtual IP of the DR service network segmentation failed."
},
{
"Check ID": "Network configuration might have been changed or has not been configured properly."
},
{
"Check ID": "Reconfigure the DR service network configuration."
},
{
"Check ID": "Replication of the protected entities will be affected."
},
{
"Check ID": "A130207"
},
{
"Check ID": "Hosting of Virtual IP of the Network Segmentation DR Service Failed."
},
{
"Check ID": "Virtual IP virtual_ip of the DR service network segmentation could not be hosted on the network interface network_interface."
}
]
|
Troubleshooting
Verify the network configuration for DR service network segmentation within Prism.
Compare the configuration with your networking team to see if any changes have been made.
Resolving the Issue
1. Verify that the ntnx0 interface is configured on all CVMs; this can be verified by running the "allssh ifconfig" command from any CVM (see the example command after these steps).
2. Reconfigure the DR service network configuration:
   a. Log in to the Prism Web Console.
   b. Click on the gear icon.
   c. Select 'Network Configuration'.
   d. Click on 'Disable' for the 'ntnx0' interface (it will take some time for the removal of the interfaces - anywhere from 10-30 minutes depending on the number of nodes).
   e. Re-create the interface by clicking on 'Create New Interface'.
   f. Enter the Interface Details and create a new IP Pool (specify the correct networking details).
   g. Click 'Next', select 'DR' as the feature, and then specify the gateway.
   h. Click on 'Save'.
NOTE: The reconfiguration process takes at least 20 minutes to complete (may take longer depending on the number of nodes within your environment).
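As a quick verification example for step 1 above, the ntnx0 interface can be queried directly on every CVM; any CVM where the interface is missing will return an error for that host:
nutanix@cvm$ allssh "ifconfig ntnx0"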
If you need assistance or in case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com. Collect additional information and attach them to the support case.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, refer to KB 2871 https://portal.nutanix.com/kb/2871.Upload the NCC output file ncc-output-latest.log, created on running the NCC check. Refer to KB 2871 https://portal.nutanix.com/kb/2871 for details on running NCC and collecting this file.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
Attaching Files to the Case
To attach files to the case, follow KB 1294 http://portal.nutanix.com/kb/1294.If the size of the files being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations.
|
KB16250
|
Nutanix Files - [4.3.x] TLDs missing or operations fail
|
Documenting an offshoot of ISB-128 that only impacts Files 4.3.x
|
Background:
ISB-128-2023 https://confluence.eng.nutanix.com:8443/x/ASbiE documents a known issue in Files 4.4.0. A similar behavior can be observed in Files 4.3.0/4.3.0.1; however, the underlying cause is unique even though the verification and remediation are the same as in the ISB. This KB article describes the specific cause of this behavior in Files 4.3.0 and 4.3.0.1. When a File Server was deployed prior to Files 3.5 (aka sabine), the zfs 'casesensitivity' attribute was set to 'insensitive' for lookups. In the past, only owner FSVMs performed the lookups/operations on their respective TLDs, whereas starting in Files 4.3.0, remote lookups and operations leverage minerva_store instead. If a File Server (pre-sabine) was deployed and later upgraded and scaled out (adding additional FSVMs), a new format referred to as 'mixed' sensitivity is leveraged. This creates a mismatch between pre-existing zfs datasets from the original FSVMs set to 'insensitive' and new FSVM datasets set to 'mixed'. Depending on which FSVM was performing a lookup of which TLD, if the lookup or operation is passed in all lowercase, the operation will fail.
Symptoms:
The following symptoms may be reported by end users on a sub-set of TLDs:
TLD operations via MMC, such as renaming, modification, or deletion, may fail with "Access to the path <> is denied."
VDI profile managers may fail to load the user's TLD and provide the user with a new blank profile.
A TLD may not be visible in Windows Explorer, or may show a "Could not find a part of the path" error when attempting to access/modify it.
The DFS path may show the File Server FQDN instead of the specific FSVM FQDN.
Scenario:
To encounter this issue, the following must take place in the same order:
The File Server must have been deployed on Nutanix Files 3.2.x or earlier
One or more distributed shares created while on Nutanix Files 3.2.x or earlier
Upgraded to Nutanix Files to 3.5.x through 4.2.x
Performed a scale-out of the File Server (adds one or more additional File Server VMs) while on Nutanix Files to 3.5.x through 4.2.x
At any point after the scale-out, upgrade to Nutanix Files 4.3.0 or 4.3.0.1
Identification:
On any FSVM, get the share uuid of the share in question:
nutanix@FSVM:~$ afs share.list sharename=<sharename> | grep "Share UUID"
Using the share UUID gathered above, run the following command; it executes on each FSVM. The output listed below is from a File Server impacted by the issue:
nutanix@FSVM:~$ allssh 'for ds in $(df -kh | grep -v backup_diff | grep <share_uuid> | cut -d " " -f1); do echo "++++++++++ $ds +++++++++++++++"; sudo zfs get -o value -Hp casesensitivity $ds ; done'
The issue above is that the 'casesensitivity' context is mismatched between 'mixed' and 'insensitive'.
|
Note: While ISB-128-2023 https://confluence.eng.nutanix.com:8443/x/ASbiE is specific to Nutanix Files 4.4.0, the above issue seen in 4.3.x requires the same verification and workaround steps in order to update the 'casesensitivity' setting from 'insensitive' to 'mixed'. Follow all verification steps from ISB-128-2023 https://confluence.eng.nutanix.com:8443/x/ASbiE to look for duplicate TLDs. If any duplicate TLD is found, stop and do not proceed to the workaround until the duplicate TLDs have been renamed or removed. Once there are NO duplicate TLDs reported by the script and/or the manual validations in the ISB, proceed to the workaround section and follow ALL steps. This will need to be repeated on all shares that have a mismatched context.
|
KB10650
|
Script to verify the LDAP search latency
|
The LDAP search latency script can be used to identify AD latency issues while troubleshooting AD user login failures in Prism.
|
There might be cases where Active Directory users are not able to log in to Prism with the error "No role mapping is defined in prism for user <username>. Authorization failed" even though the user configuration in AD and the role mapping in Prism are correct. Check KB-3363 https://portal.nutanix.com/kb/3363 to troubleshoot possible configuration issues. This KB explains an LDAP search that can be used to identify latency issues on the AD side causing the authentication failures. The prism_gateway log reports the below error during login with "No role mapping is defined in prism for user <[email protected]>. Authorization failed":
INFO 2020-10-15 11:00:27,376 http-nio-0.0.0.0-9081-exec-5 [] com.nutanix.syslog.generateInfoLevelSyslog:19 An unsuccessful login attempt was made with username: X-Nutanix-Client-Type=ui; NTNX_SESSION_META=invalid from IP: xx.xx.xx.xx and browser: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36
This error might occur because the LDAP search times out due to AD search latency, as seen in TH-5052.
|
To identify the latency of the LDAP search query, use the below script.
1. Create the ldap_user_search.py file in the /home/nutanix/tmp directory.
2. Copy the content mentioned in the code block below:
#!/usr/bin/python
3. Modify the domain and domain user information you are going to search for (marked in bold letters in the above script):
<port> --- LDAP port (389, 636)
<domain_user@domain_name> --- Domain user that has permission to perform the LDAP search query against the LDAP server
<password> --- Domain user password
<domain_name> --- Domain name
<user DN> --- User distinguished name
<user_name> --- User name as per AD
<user_pname> --- User principal name
<timeout_in_sec> --- Timeout for the search operation in seconds
4. Run the script
nutanix@cvm$python /home/nutanix/tmp/ldap_user_search.py
Sample output
2020-10-30 07:58:23.758733
5. Perform the test with different users and timeout settings to identify the latencies of the LDAP search queries and the average time taken to complete the search.
6. Based on the above results, the aplos_engine gflag below can be applied on the cluster to increase the timeout value.
"WARNING: Support, SEs and Partners should never make Zeus, Zookeeper or Gflag alterations without guidance from Engineering or a Senior SRE. Consult with a Senior SRE or a Support Tech Lead (STL) before doing any of these changes. See the Zeus Config & GFLAG Editing Guidelines ( https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit https://docs.google.com/document/d/1FLqG4SXIQ0Cq-ffI-MZKxSV5FtpeZYSyfXvCP9CKy7Y/edit)"
--directory_service_search_timeout_seconds=30.0 (default 1.0)
|
KB15412
|
Flow Network Security (FNS) 4.0.x or higher - Migrating FNS to FNS Next-Gen
|
The article explains the procedure to migrate to FNS Next-Gen after upgrading to FNS 4.0.x or higher
|
Flow Network Security (FNS) Next-Gen is Nutanix’s next-generation Micro-segmentation solution that offers an Enhanced Policy Model with advanced posture management powered by Advanced Policy Operations and Enterprise readiness features. In addition, FNS Next-Gen adds significant improvements to performance, scalability, and resilience. To learn more about the Next-Gen, refer to the release notes https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Flow%20Network%20Security%20Next-Gen.
This article only applies to clusters that have upgraded FNS to 4.0.0 or later. Prerequisites for upgrading to FNS 4.0.x are AOS version 6.7 or later and PC version 2023.3 or later. FNS 4.1.0 is bundled with AOS version 6.8 and PC version 2024.1.
For new FNS customers who enable microsegmentation for the first time after deploying release version 4.0.x or higher, FNS Next-Gen will be the default experience. Nutanix recommends that FNS Current Generation customers first upgrade to FNS 4.1.0 or higher prior to attempting the Next-Gen migration for the best migration experience, due to improvements and fixes since FNS 4.0.x. Customers using FNS in Flow Virtual Networking (FVN) VPC-only environments will be using FNS Next-Gen by default after upgrading to FNS 4.0.x or higher.
|
FNS Current Generation customers with user-defined policies
A passcode is required for existing FNS current-generation customers with user-defined policies to continue migrating to FNS Next-Gen.
FNS Current Generation customers with no user-defined policies
Existing FNS current-generation customers with only system-defined quarantine policies (without user-defined policies) do not need a passcode to migrate to FNS Next-Gen. Refer to the Migration Experience section in the FNS 4.1 user guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Network-Security-VLAN-Guide-v4_1_0:fns-migration-experience-pc-c.html for more details on migrating to FNS Next-Gen.
Nutanix enables all current-generation FNS customers with a customer-managed migration experience to FNS Next-Gen in phases. Certain phases of this enablement are curated with a one-time passcode which must be entered during the Next-Gen migration workflow in PC in order to commence.
Starting with FNS 4.1.0, phase 1 current-generation customers no longer require a passcode for migration. Following the initiation of your designated phase, you will receive migration instructions from Nutanix that include the passcode. The phases are as follows:
Phase 1 (current phase)
- Total no. of policies: up to 5 (including 2 default quarantine policies)
- Total no. of AppTiers per Policy: up to 2 tiers
- Total no. of vNICs: up to 200
- Total no. of Subnets: up to 20
Phase 2
- Total no. of policies: up to 10 (including 2 default quarantine policies)
- Total no. of AppTiers per Policy: up to 5 tiers
- Total no. of vNICs: up to 1200
- Total no. of Subnets: up to 80
Phase 3
- Total no. of policies: greater than 10 (including 2 default quarantine policies)
- Total no. of AppTiers per Policy: Any
- Total no. of vNICs: Any
- Total no. of Subnets: Any
Note: The total number of vNICs and subnets is the total number of vNICs and subnets in the Prism Central instance and not just those protected by FNS policies. For more information on determining the number of policies and AppTiers, refer to the Flow Network Security (FNS) documentation https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Network-Security-VLAN-Guide:fns-migrate-fns-to-fns-ng-r.html. For more information on determining the number of subnets, refer to the Flow Virtual Networking (FVN) documentation https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Virtual-Networking-Guide:ear-flow-net-subnet-c.html. If every VM in the environment has only one vNIC, then the total number of vNICs equals the number of VMs. If any VMs in the environment have more than one vNIC, these additional vNICs should also be counted when determining the total number of vNICs.
If you are in Phase 1 (current phase), upgrade to PC 2024.1 and AOS 6.8 (and the recommended AHV), which bundle FNS 4.1.0, as a passcode for migration is not required.
If you are in Phase 2 or Phase 3, Nutanix will reach out to you via email to the registered address(es) of your Nutanix portal account at a later date with the passcode once Phase 1 enablement is complete. If you have any questions prior to obtaining a passcode that are not covered in this KB article or product documentation, reach out to your Nutanix account team for further information.
If you are running a Dark Site https://portal.nutanix.com/page/documents/details?targetId=Life-Cycle-Manager-Dark-Site-Guide-v2_7:Overview, or if Pulse https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v6_7:wc-support-pulse-recommend-wc-c.html is not enabled in your environment, the information required to place you into the relevant phase needs to be collected and reviewed manually. Review the above phase definitions and reach out to your Nutanix account team for further guidance.
For more information on how to use the passcode and migrate to FNS Next-Gen, refer to the migration section in the FNS 4.0 user guide https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Flow-Network-Security-VLAN-Guide:fns-migrate-fns-to-fns-ng-r.html.
|
KB13972
|
LCM inventory failing with error "Inventory failed with error: [local variable 'vendor_id' referenced before assignment]"
|
LCM inventory failing with error "Inventory failed with error: [local variable 'vendor_id' referenced before assignment" on hyper-V cluster.
|
LCM inventory failing with error "Inventory failed with error: [local variable 'vendor_id' referenced before assignment]"
2022-09-23 03:05:08,757Z ERROR 56241680 catalog_utils.py:765 Found failed kLcmInventoryTask task: 2b8ec522845e4762901f6b26605445ec. Task Status: kFailed
LCM inventory fails while querying the Mellanox CX3 NICs due to a missing vendor_id field in the response.
2022-10-10 11:54:14,758Z ERROR 56241680 exception.py:86 LCM Exception [LcmExceptionHandler]: Inventory Failed - found the following errors:
|
Inventory fails while querying the Mellanox CX3 NICs due to a missing vendor_id field in the response. The inventory operation is normally not supported for CX3 NICs, but this code path is taken because the MLNX_WinOF2 folder is present at the path below.
Workaround: Either move the MLNX_WinOF2 folder to a path other than 'C:\Program Files\Mellanox\MLNX_WinOF2\Management Tools' or rename the MLNX_WinOF2 folder.
For example, in the output below the folder has been renamed to MLNX_WinOF2_Copy, and the inventory operation passes the error stage above.
192.168.5.1> dir
|
KB15841
|
NCC Health Check: toshiba_drives_firmware_version_check
|
The NCC health Check toshiba_drives_firmware_version_check checks the current firmware version installed on any existing Toshiba drives.
|
The NCC health check toshiba_drives_firmware_version_check checks whether a Toshiba drive installed in your Lenovo node may have a serial number change after a firmware upgrade.
Before running this check, upgrade NCC to the latest version. This check was introduced in NCC 5.0.0.
Running the NCC CheckYou can run this check as part of the complete NCC Health Checks.
nutanix@cvm$ ncc health_checks run_all
Or you can run this check separately.
nutanix@cvm$ ncc health_checks nu_checker disk toshiba_drives_firmware_version_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run once daily.
This check runs only on Lenovo hardware.
Sample outputs
For Status: PASS
Running : health_checks nu_checker disk toshiba_drives_firmware_version_check
For Status: WARN
Running : health_checks nu_checker disk toshiba_drives_firmware_version_check
Output messaging
[
{
"Check ID": "Checks the current firmware version installed on any existing Toshiba drives."
},
{
"Check ID": "The firmware installed on your Toshiba drives may change serial number during an upgrade."
},
{
"Check ID": "Please refer to KB-15841 for more details."
},
{
"Check ID": "This firmware may cause issues on your cluster if upgraded."
},
{
"Check ID": "A6050"
},
{
"Check ID": "Firmware upgrade on the drive might cause disk serial number change."
},
{
"Check ID": "Please check KB article https://support.lenovo.com/cl/nb/solutions/ht514200-toshiba-hdd-sn-changes-after-drive-firmware-is-upgraded-lenovo-thinksystem."
}
]
|
After upgrading the firmware of some Toshiba hard drives, the hard drive serial number (SN) can change in the OS. This impacts the drive mount points on the Nutanix cluster after the firmware upgrade.
For more details about this issue, refer to the Lenovo article linked in the check output messaging above. Contact Lenovo Support and reference that article for assistance.
|
KB15845
|
Self Service-How to Find Policy Engine Version from CLI
|
Find Policy Engine version From CLI
|
While troubleshooting a Self-Service (formerly Calm) issue, it is often important to know the Policy Engine version as well. While it can be obtained from LCM > Inventory on PC, GUI access might not be available, for example, while working over a tunnel.
|
To get the Policy Engine version, one of the following two approaches can be used:
Easy way: From the PCVM, run the following command:
nutanix@PCVM:~$ policy_ip=$(zkcat /appliance/logical/policy_engine/status | awk -F'"' '/ip/ {print $8}');ssh $policy_ip 'docker exec policy bash -ic "source ~/.bashrc; cat /home/policy/conf/commit_ids.txt"' | grep -iE "quantum|plugin-framework|policy-engine|main"
Alternate way: If for some reason the above command does not return the output, the version can be found step by step:
Get the Policy Engine VM IP from the zknode:
nutanix@PCVM:~$ zkcat /appliance/logical/policy_engine/status
SSH to the Policy Engine VM and run the following command to list the commits:
[nutanix@ntnx-calm-policy-vm ~]$ docker exec policy bash -ic "source ~/.bashrc; cat /home/policy/conf/commit_ids.txt"
The version highlighted in red is the Policy Engine version.
|
KB3795
|
UCS Foundation Domain Mode Fails at 27%: Preparing to reboot Into Installer
|
UCS Foundation fails with error: "StandardError: Failed to connect to Phoenix".
|
UCS Foundation domain mode fails at 27%.
Debug.log for Foundation shows the following error message:
|
This occurs when the CVM/Hypervisor IPs do not reside in the same subnet as the VLAN named Default for the UCS. Foundation for Nutanix on UCS places the CVM and Hypervisor onto the VLAN named Default for the UCS. To use a different VLAN for CVM/Hypervisor traffic, modify /home/nutanix/foundation/templates/ucsm_template.json.
"vlan_name": "default"
Replace default with the name of the VLAN on the UCS that should be used (the subnet where the CVM or Hypervisor IPs reside).
Example
If a VLAN named new_vlan_name needs to be used, change the line to the following.
"vlan_name": "new_vlan_name"
This sets the UCSM vlan new_vlan_name as the native vlan on the vNIC (in the service profile).
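As an illustration, the template can also be edited non-interactively. The sed command below is a sketch only; new_vlan_name is a placeholder for the actual UCS VLAN name:
sed -i 's/"vlan_name": "default"/"vlan_name": "new_vlan_name"/' /home/nutanix/foundation/templates/ucsm_template.json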
|
KB14721
|
NDB - Users fail to login on NDB Console with error "Failed to fetch cluster details. Retry login."
|
Users fail to login on the NDB Console with error "Failed to fetch cluster details. Retry login."
|
Local and AD users will be unable to log in to the NDB Console with the error "Failed to fetch cluster details. Retry login." Logging in to the NDB Server via SSH will work as expected. This error can be seen if the certificate on NDB has expired.
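To confirm whether the certificate served by the NDB Console has expired, it can be inspected from any workstation with openssl available. The command below is a sketch; <ndb_server_ip> is a placeholder, and it assumes the console is served over HTTPS on port 443:
echo | openssl s_client -connect <ndb_server_ip>:443 2>/dev/null | openssl x509 -noout -enddate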
|
To resolve the issue, the following steps can be done depending on the type of certificate:
1. If the customer is using a Self-Signed Certificate, the expiry can be extended using the following CLI command on the NDB server:
era-server > security ssl extend_self_signed_certificate
This is explained under SSL Server Certificate Authentication https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-User-Guide-v2_5:top-ssl-authentication-c.html
2. If the customer is using a CA-signed certificate, a new certificate will need to be applied using the CLI command on the NDB server:
era-server > security ssl add_custom certificate_file=file_path private_key=file_path ca_certificate=file_path
file_path will need to be replaced with the security certificate file location. This is explained under Configuring an SSL Certificate https://portal.nutanix.com/page/documents/details?targetId=Nutanix-NDB-User-Guide-v2_5:top-ssl-certificate-install-t.html. After the certificate is extended or renewed, the login will work successfully.
|
KB7689
|
AOS upgrade stuck because of the node_shutdown_priority_list
|
The AOS upgrade gets stuck without any progress.
|
AOS upgrade gets stuck on the first node. genesis.out on the Genesis leader shows the following message:
2019-06-21 14:18:28 INFO cluster_manager.py:3664 Master 172.18.216.12 did not grant shutdown token to my ip 172.18.216.12, trying again in 30 seconds.
The shutdown token is empty:
nutanix@cvm$ zkcat /appliance/logical/genesis/node_shutdown_token
Restarting Genesis does not give any progress.
|
Check the node_shutdown_priority_list:
zkcat /appliance/logical/genesis/node_shutdown_priority_list
Example:
nutanix@cvm$ zkcat /appliance/logical/genesis/node_shutdown_priority_list
You can check the request time by converting the epoch time:
nutanix@cvm$ date -d @1496860678.099324
In the example, the priority list was created 2 years before the actual AOS upgrade.
To resolve the issue:
Remove the node_shutdown_priority_list.
WARNING: Check with a Senior/Staff SRE or an Escalations Engineer before executing this command.
nutanix@cvm$ zkrm /appliance/logical/genesis/node_shutdown_priority_list
Restart Genesis across the cluster:
nutanix@cvm$ allssh genesis restart
|
KB13311
|
[Objects][Alerts] Alert - KubeClientCertificateExpiration
|
Investigating the KubeClientCertificateExpiration alert on a Nutanix Objects object store cluster.
|
Alert Overview
On Nutanix Objects deployed on MSP (Microservices Platform) clusters, the certificate used to authenticate to the API server does not get automatically rotated prior to the certificate expiring unless an upgrade of MSP is performed and the certificate expiration date is within 1 month. The KubeClientCertificateExpiration alert is generated when the certificate is within a certain threshold of expiration.
If MSP is upgraded and the certificate expiration date is within 30 days, the certificate will be automatically rotated with a new certificate and an expiration date further in the future.
If MSP is upgraded and the certificate expiration date is further than 30 days, the certificate is not automatically rotated, and the certificate will expire on the expiration date.
If the certificate expires before being rotated, object stores may become unavailable until a new certificate is generated.
The KubeClientCertificateExpiration alert is generated as follows:
With MSP version 2.4.2 and higher:
Warning alert is generated 90 days prior to certificate expiration.Critical alert is generated 45 days prior to certificate expiration.
Sample Alert
Severity: Warning
Severity: Critical
|
Troubleshooting
To check the expiration date of the certificate for the MSP cluster:
SSH into the MSP cluster master VM using KB 8170 https://portal.nutanix.com/kb/8170 following the steps in “How To 1: Get MSP cluster info and ssh into the VMs.”For each MSP cluster, run the following command:
[nutanix@objects-default-0 ~]$ sudo openssl x509 -enddate -noout -in /etc/docker-plugin-certs/cert
The expiration date for the example above is May 21 22:21:00 2023 GMT.
Resolving the Issue
To avoid certificate expiration, plan and execute an upgrade of MSP prior to the certificate expiration date. An upgrade of MSP will automatically rotate the certificate. For more information on using Life Cycle Manager (LCM) to upgrade MSP, see the Life Cycle Manager Guide https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=LCM. If MSP cannot be upgraded prior to the certificate expiration date, immediately create a support case with Nutanix Support https://portal.nutanix.com/.
Collecting Additional Information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871. Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871. Collect Logbay bundle as described in KB 7446 https://portal.nutanix.com/kb/7446. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
Attaching Files to the Case
Attach the files at the bottom of the support case on the support portal. If the size of the log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.
|
KB12887
|
Prism Central root partition full due to duplicate CA cert bundles
|
Prism Central / filesystem is filled up to 100% because of duplicate CA cert bundles that are caused by an immutable permission bit set on the files.
|
Issue Description:
In 2022.X versions of Prism Central (PC), the certificate files have immutable bits set on them in the pem and openssl subdirectory locations at "/etc/pki/ca-trust/extracted/". This causes genesis to fill the root partition up with tmp copies of these files, which can result in PC services becoming unstable. The immutable attribute of the certificate files is supposed to be removed before updating them and then added back after the update is complete. Unfortunately, this process has a defect and that is not occurring.
Identification:
1) Confirm that root "/" is the filesystem showing significantly elevated usage and is nearly or completely 100% full. Typically this filesystem is in the 60's or less.
nutanix@NTNX-A-PCVM:~$ allssh 'df -h /'
2) Check to see if the pem and openssl subdirectories are consuming more than a couple MB.
Commands to check:
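The exact commands from the original article are not shown above. As a sketch, the directory usage can be checked with du, for example:
nutanix@PCVM:~$ allssh 'sudo du -sh /etc/pki/ca-trust/extracted/pem /etc/pki/ca-trust/extracted/openssl'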
3) Confirm how many files are in the directories and what is in them, and document this. There is one .pem file and one .crt file that are the cert files themselves. There can also be temp files that have random characters after the .pem and .crt extensions; these are just leftover temp files, which is typical. If you are seeing more than one base file and a few tmp files, you are likely hitting this issue:
Commands to use:
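As a sketch, listing the directory contents makes the duplicate temp copies (names with random characters appended after the .pem/.crt extensions) visible, for example:
nutanix@PCVM:~$ allssh 'sudo ls -l /etc/pki/ca-trust/extracted/pem/ /etc/pki/ca-trust/extracted/openssl/'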
4) Check to see if the immutable bits are present via the following commands (Note the i attribute assigned to the existing attributes of the file):
Commands:
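As a sketch, the immutable flag can be checked with lsattr; look for the "i" flag in the attribute string of the bundle files, for example:
nutanix@PCVM:~$ allssh 'sudo lsattr /etc/pki/ca-trust/extracted/pem/ /etc/pki/ca-trust/extracted/openssl/'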
|
Workaround - PC VM(s):
We want to stop services on the affected PC VM so that the garbage files are not continuously being created while we are cleaning up. Typically, one node will have service instability and a full root drive because of this issue. It is possible that more than one node is affected, but that should be rare, and in those instances it is likely that the other nodes are seeing full partitions for other reasons as well. If you see more than one node affected by this same issue, you will have to pick one node at a time to run steps 1 and 6 on. When a Linux VM runs out of space on its root partition and other partitions, services that leverage those filesystems for config, temp, and other files typically get into an inconsistent state and need to be completely restarted. The best way to accomplish that and guarantee you do not run into dependency problems is to restart the PCVM that ran into this condition. Hence, if the filesystem is full or near full such that services are crashing/affected, you need to perform steps 1 and 6 in a rolling fashion. Steps 2 and 3 are performed on the entire PCVM cluster at once, so they will typically not need to be repeated; however, if multiple nodes are unstable, the node you do not start with may get additional file buildup while you are working on another PCVM and may need steps 2 and 3 run again. If this is happening on more than one PC VM, pick one PC VM to start with and only work on one affected PC VM at a time.
1) Stop services on the node to prevent more garbage file buildup while cleaning up:
genesis stop all
2) The commands below only need to be run from one PC VM, but they will remove the immutable bit on the cert files for all of the PC VM(s):
allssh 'sudo chattr -i /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem'
3) Remove the duplicate pem and crt files but not the base original files with the following command:
allssh 'find /etc/pki/ca-trust/extracted/{pem,openssl} -type f -name '\''*ca-bundle.*'\'' -not \( -name '\''*.crt'\'' -o -name '\''*.pem'\'' \) -print0 | sudo xargs -0 -r rm -v'
Or, if you have a single PC VM instance or "allssh" does not work, you can use the following command:
find /etc/pki/ca-trust/extracted/{pem,openssl} -type f -name '*ca-bundle.*' -not \( -name '*.crt' -o -name '*.pem' \) -print0 | sudo xargs -0 -r rm -v
4) Confirm space usage is back to around normal:
allssh 'df -h /'
5. Restart the original affected PC VM that had run out of space on the root partition.
6. Confirm services are healthy and coming back up:
cluster status | grep -i down
|
KB13786
|
NGT Communication Link Inactive for VMs after NGT Upgrade Using a Custom PowerShell Script
|
The NGT version is reported as outdated in Prism Web console and NCC health checks report on errors due to the communication issue between CVM and user VMs.
|
The NGT version is reported as outdated in Prism Web console and NCC health checks report on errors due to the communication issue between CVM and user VMs.
C:\Program Files\Nutanix\logs\guest_agent_service.log contains the below ERROR on the user VMs:
2020-09-14 09:44:00 ERROR C:\Program Files\Nutanix\python\bin\guest_agent_service.py:485 kNotAuthorized: System identifier mismatch
NGT communication link is failing due to SSL connection failure between the CVM and the user VM. This can happen due to below reasons:
VM was cloned from a master VM with NGT installed.The same NGT installer file from a VM is being used for installing/upgrading the NGT.
In this scenario above, a PowerShell script was used to copy the same NGT installation files from a VM to other VMs in the cluster and an NGT upgrade was performed on those VMs. This leads to certificate duplicates and thereby SSL connection failure.
Note that NGT requires a unique association with the CVMs. During a normal NGT installation, when NGT is enabled in a VM, a certificate pair is generated for that specific VM and it is embedded in an ISO that is configured for that VM. The security certificate is installed inside the VM as part of the installation process. The NGA service running inside the VM initiates an SSL connection to the virtual IP port 2074 of the Controller VM to communicate with the Controller VM.
|
As a workaround for this scenario:
1. Disable NGT on the user VMs.
2. Re-enable NGT for the VMs to generate their individual certs.
3. Mount the NGT ISO again to inject those certs.
4. Restart the NGA service on the VMs if the VMs are already powered on.
Note: If an external script is being used to perform bulk installation, add a step to mount the NGT ISO on individual VMs and then run setup.exe from the ISO location.
As a permanent solution, use Prism Central for bulk installation or upgrade of the NGT.
|
KB1863
|
NCC Health Check: sufficient_disk_space_check
|
The NCC health check sufficient_disk_space_check checks if there is sufficient storage space on the cluster to provide node resiliency.
|
The NCC health check sufficient_disk_space_check checks if there is sufficient storage space on the cluster to provide node resiliency.
Running the NCC Check
It can be run as part of the complete NCC check by running the following command from a Controller VM (CVM) as the user nutanix:
ncc health_checks run_all
or individually as:
ncc health_checks system_checks sufficient_disk_space_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This NCC check is scheduled to run every 1 hour.This check will generate Critical alert A101047 after 1 failure.
Sample Output
For Status: PASS
Running : health_checks system_checks sufficient_disk_space_check
For Status: INFO
/health_checks/system_checks/sufficient_disk_space_check [ INFO ]
For Status: WARNING
/health_checks/system_checks/sufficient_disk_space_check [ WARN ]
For Status: FAIL
/health_checks/system_checks/sufficient_disk_space_check [ FAIL ]
Output messaging
[
{
"Check ID": "Check if the storage pool has sufficient free space to maintain the desired replication factor in the event of a node failure, based on highest replication factor container requirements"
},
{
"Check ID": "Inadequate free space on the cluster"
},
{
"Check ID": "Clean out unused data from guest VMs, unneeded reservation at container level, or VMs from the container to be below the threshold."
},
{
"Check ID": "Cluster can not tolerate single node(RF2) or two node(RF3) failure and guarantee available storage to provide data redundancy."
},
{
"Check ID": "A101047"
},
{
"Check ID": "Cluster does not have enough free space to tolerate a node failure"
},
{
"Check ID": "Cluster does not have enough free space to tolerate num_node_failure_limit node failure(s)"
},
{
"Check ID": "Cluster cannot tolerate num_node_failure_limit node failure. Current usage of used_bytes bytes exceeds max limit of usable_bytes bytes."
}
]
|
For node resiliency and redundancy, the disk usage must be below the Max Usable value, as shown in the output.
Nutanix recommends the following:
If the NCC version is below 4.1, upgrade NCC to the latest version, which includes improvements for more accurately determining sufficient disk space. Clean out unused data or VMs from the datastore to get below the threshold and ensure the cluster can withstand a node failure:
Consider deleting PD-based DR snapshots. Refer to Data Protection and Recovery with Prism Element https://portal.nutanix.com/page/documents/details?targetId=Prism-Element-Data-Protection-Guide:wc-dr-delete-protectiondomain-wc-t.html for additional instructions.
Consider deleting Acropolis snapshots. Refer to the Prism Web Console https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-vm-delete-snapshot-manually-wc-t.html guide for additional instructions.
Delete VMs if possible (follow the Prism Web Console guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-vm-manage-esxi-t.html for instructions).
Note: After removing snapshots, it may take several background scans for the space to be reclaimed and reflected in the dashboard. It could typically take at least 24 hours for the storage utilization to start dropping. If VMs, VM disks, or VGs are deleted, the cleanup will be deferred by default by 24 hours in addition to the possible background scan delay described above. Refer to the Recycle Bin chapter https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-vm-restore-recycle-wc-r.html in the Prism Web Console guide.
To remove or reduce the reservation at the container level, if any, see Creating a Storage Container https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism:wc-container-create-wc-t.html in the Prism Web Console Guide to change the reserved capacity of the container.
Refer to KB 1557 https://portal.nutanix.com/kb/1557 for recommended guidelines on maximum storage utilization on a cluster.Refer to KB 6633 https://portal.nutanix.com/kb/6633 for details regarding data resiliency status calculation by Prism/NCLI vs NCC.
If this check returns a FAIL output even though Prism reports sufficient free space to tolerate a single Node failure under Data Resiliency Status, kindly upgrade to the latest NCC Version to fix the issue or refer to KB 6633 https://portal.nutanix.com/kb/6633 for details.
Known issues
In case the above-mentioned steps do not resolve the issue, consider engaging Nutanix Support https://portal.nutanix.com/.
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Collect the NCC output file ncc-output-latest.log. For information on collecting the output file, see KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command on the CVM as user nutanix. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
logbay collect --aggregate=true
Attaching files to the case
When viewing the support case on the support portal, use the Reply option and upload the files from there. If the size of the NCC log bundle being uploaded is greater than 5 GB, Nutanix recommends using the Nutanix FTP server due to supported size limitations. See KB 1294 https://portal.nutanix.com/kb/1294.[
{
"Index": "1.",
"Issue Summary": "The check on NCC version 3.10.0 and above may report a false positive FAIL message that the cluster does not have enough space to tolerate one node failure if the following conditions are met:\nThe Redundancy factor is set to 3 in Prism under Settings > Redundancy State.All storage containers report 2 as the replication factor in Prism under the Storage > Storage Container table replication factor column.The Data Resiliency Status widget on the home page reports data resiliency status as 1 for Free space.",
"Action": "Nutanix Engineering is aware of the issue and working on a fix.The cluster has the ability to tolerate one node failure if the conditions mentioned above are met."
},
{
"Index": "2",
"Issue Summary": "NCC sufficient_disk_space_check shows the wrong message for FT2 cluster that contains only RF2 containers",
"Action": "Upgrade to NCC-4.1.0 and above."
},
{
"Index": "3",
"Issue Summary": "Run as LCM precheck, this NCC check may fail under conditions similar to those described in the known issue 1, particularly when:\nThe Redundancy factor is set to 3 in Prism under Settings > Redundancy State.All storage containers are set to have replication factor 2 in Prism under Storage -> Storage Container table, Replication Factor column.The Data Resiliency Status widget on the home page reports data resiliency status as 1 for Free space.",
"Action": "Nutanix Engineering is aware of the issue and working on a fix.Contact Nutanix Support if you need to proceed with LCM operations."
}
]
|
KB12258
|
Nutanix Kubernetes Engine - Prevent Auto-Deletion of VMs after a failed deployment
|
Nutanix Kubernetes Engine (NKE) 2.3 and earlier deletes VMs that are part of a failed deployment. NKE 2.3 and later deletes the airgap VM after a failed airgap enablement. This KB provides a way to prevent auto deletion of these VMs in case further debugging is required on the VMs themselves.
|
Nutanix Kubernetes Engine (NKE) is formerly known as Karbon or Karbon Platform Services.
NKE 2.3 and earlier deletes Kubernetes VMs that are part of a failed deployment. There might be situations where the exact reason why the deployment fails is not clear from the karbon_core.out log (PCVM: /home/nutanix/data/logs/karbon_core.out). In such a situation, it helps to keep the VMs that were deployed for troubleshooting purposes.
Starting with NKE 2.3, Kubernetes VMs from a failed deployment are retained; however, even with NKE 2.3 and later, airgap enablement failure (via the karbonctl airgap enable command) will result in the airgap VM (airgap-0) being deleted, which may impede troubleshooting the cause of airgap enablement failure.
When troubleshooting a deployment failure with NKE 2.3 or earlier, or an airgap enablement failure with NKE 2.3+, a parameter may be added to the karbon_core service to prevent deletion of the VMs after failure.
|
Append the following gflag to the karbon_core_config.json file and restart the karbon_core service:
-rollback_node_create_failure=false
NOTE: In a scale-out Prism Central (PC) Deployment, the gflag needs to be added on each PCVM.
Back up the existing karbon_core_config.json:
nutanix@PCVM:~$ sudo cp /home/docker/karbon_core/karbon_core_config.json ~/tmp/karbon_core_config.json.bkp
Edit the file:
nutanix@PCVM:~$ sudo vi /home/docker/karbon_core/karbon_core_config.json
Add the gflag below the last line of gflags under the "entry_point" array:
Before:
"entry_point": [
After:
"entry_point": [
Restart the karbon_core service:
nutanix@PCVM:~$ genesis stop karbon_core;sleep 5;cluster start
Confirm that the new karbon-core and karbon-ui services are up and running:
nutanix@PCVM:~$ docker ps | grep -i karbon
Confirm that the new gflag setting is in effect in the karbon_core.out log.
nutanix@PCVM:~$ grep rollback_node_create_failure ~/data/logs/karbon_core.out -B30
Watch karbon_core.out and ensure it is not crashing for a few minutes.
Now, retry the deployment. If the deployment fails, the Kubernetes VMs deployed via NKE should not get auto-deleted.
Important Notes
The line preceding the new "rollback_node_create_failure" entry needs to end with a comma ",". Since the new gflag is on a new line at the end of the array, ensure it does not have a "," at the end of its line. You can validate the JSON config file after the changes by copying the entire karbon_core_config.json and parsing it with JSONLint (e.g., https://jsonlint.com/ https://jsonlint.com/).
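As a local alternative to a web-based validator, the file can usually be syntax-checked with Python's built-in JSON parser directly on the PCVM (assuming Python is available, which is typically the case):
nutanix@PCVM:~$ sudo python -m json.tool /home/docker/karbon_core/karbon_core_config.json > /dev/null && echo "JSON OK"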
|
KB14902
|
Nutanix Kubernetes Engine : Missing cert from IDF on NKE (Karbon) 2.5 and below
|
NKE tasks would fail with the following error: "failed to get auth proxy cert with uuid".
|
Nutanix Kubernetes Engine (formerly Karbon) cluster versions 2.5 and below may hit a race condition during the certificate rotation task, resulting in a malformed IDF entity that points to a missing certificate. The problem can occur when the thread running the certificate rotation task races with another thread that updates the NKE cluster config entity in IDF using cached data (for example, the cluster scrubbing routine). This problem is fixed in Karbon 2.6 and above.
Identification:
NKE (Karbon) version is 2.5 or below.
nutanix@PCVM:~$ docker ps | grep -i karbon
The karbon_core.out log would show that it is failing to find the certificate (UUID 7908793c-685c-4653-5aaf-04123dbc6d37 in the sample below):
nutanix@PCVM:~$ grep -i 'failed to get auth proxy cert with uuid' data/logs/karbon_core.out | tail
The NKE cluster reports itself as unhealthy due to a missing certificate.
nutanix@PCVM:~$ karbon/karbonctl cluster health get --cluster-name nke-cluster
IDF is missing the certificates_entity record with this certificate UUID. Considering the sample above with UUID "7908793c-685c-4653-5aaf-04123dbc6d37":
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e certificates_entity -k 7908793c-685c-4653-5aaf-04123dbc6d37
|
This problem should not happen on NKE (Karbon) 2.6 and above. The high-level sequence that may cause this scenario on 2.5 and below is:
(go routine A) Cluster scrub is triggered; it runs in a separate thread with an in-memory clusterStatus struct holding the cert UUID values as of this point in time.
(go routine B) Cert rotation is triggered, running in a separate thread.
(go routine B) Cert rotation completes - at this point new certificates with new UUIDs have been generated, the clusterStatus struct in IDF has been updated with the new values, and the old cert entities have been deleted from IDF.
(go routine A) The cluster scrub thread completes and updates IDF with its in-memory clusterStatus struct that still holds the original cert UUIDs from before the rotation - at this point IDF has records pointing to the original cert UUIDs, which have already been removed.
The following steps should only be followed by a Support Tech Lead/Staff SRE or DevEx. Refer to TH-10761 https://jira.nutanix.com/browse/TH-10761 for RCA details.
The entities correspond to the authentication proxy certificates. You can validate this with the following command, where the UUID is the NKE cluster UUID.
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e k8s_cluster_config -k e487d059-d138-45a1-5018-4f52835146d9
Log into the control plane (master) node and list the proxy certificates.
[nutanix@nke-cluster-e487d0-master-0 ~]$ ls -lhart /var/nutanix/etc/kubernetes/ssl/ | grep -i aggr-proxy
Display the contents and copy the contents to your notes. (Output is truncated to avoid cluttering the code box)
[nutanix@nke-cluster-e487d0-master-0 ~]$ cd /var/nutanix/etc/kubernetes/ssl/
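If the display commands are truncated above, the two files of interest can be shown as follows (file names taken from the steps that reference them below):
[nutanix@nke-cluster-e487d0-master-0 ssl]$ cat aggr-proxy-client.pem
[nutanix@nke-cluster-e487d0-master-0 ssl]$ cat aggr-proxy-client-key.pem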
Create the certificate entity in IDF. Note that the UUID is the same UUID that the NKE cluster is complaining about.
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli add entity -e certificates_entity -k certificate_uuid certificate_uuid=7908793c-685c-4653-5aaf-04123dbc6d37 is_certificate_authority=false public_key="" deployment_platform="" cert_type=""
If a query is run against the UUID that was just created, the entity is still missing two parameters, "cert" and "private_key"; the next step is to inject those values. Take note of the "cas_value".
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e certificates_entity -k 7908793c-685c-4653-5aaf-04123dbc6d37
First, the certificate will be added, using the copied contents of the file "aggr-proxy-client.pem" located on the NKE master node.
NOTE: The flag "--cas-value" has to be incremented by one from the output above when querying the entity, in this case, from 2 to 3.
~/bin/idf_cli.py update-entity certificates_entity 7908793c-685c-4653-5aaf-04123dbc6d37 --full-update false --cas-value 3 --bytes cert "-----BEGIN CERTIFICATE-----
When this command is pasted into a terminal, it might appear as below. The ">" characters are prompt continuations inserted by the terminal. This happens because the terminal expects a single-line input. When a newline character is encountered within the quoted string, the terminal interprets it as an incomplete command and prompts for the continuation of the input with ">".
nutanix@PCVM:~$ ~/bin/idf_cli.py update-entity certificates_entity 7908793c-685c-4653-5aaf-04123dbc6d37 --full-update false --cas-value 3 --bytes cert "-----BEGIN CERTIFICATE-----
Now that the certificate is added, the private key is next, using the copied contents of the file "aggr-proxy-client-key.pem". This time, the "cas-value" should be 4, as it is incremented by 1 again.
~/bin/idf_cli.py update-entity certificates_entity 7908793c-685c-4653-5aaf-04123dbc6d37 --full-update false --cas-value 4 --bytes private_key "-----BEGIN RSA PRIVATE KEY-----
A similar output will be displayed when adding the private key.
nutanix@PCVM:~$ ~/bin/idf_cli.py update-entity certificates_entity 7908793c-685c-4653-5aaf-04123dbc6d37 --full-update false --cas-value 4 --bytes private_key "-----BEGIN RSA PRIVATE KEY-----
Now if a query is made to the certificate entity that was missing, it shows both "cert" and "private_key" values in it.
nutanix@PCVM:~$ /home/docker/msp_controller/bootstrap/msp_tools/cmsp-scripts/idfcli get entity -e certificates_entity -k 7908793c-685c-4653-5aaf-04123dbc6d37
At this point, the NKE cluster should no longer report any issues with the certificates. If a different error is shown in the health checks, it indicates a separate issue.
nutanix@PCVM:~$ karbon/karbonctl cluster health get --cluster-name nke-cluster
|
KB2274
|
NX7000 - Unable to determine the device handle for GPU
|
This article highlights how to troubleshoot an NVIDIA GRID K1/K2 that may have power problems.
|
In a VDI environment where an NX7100 is used, it is essential that the GPUs are operational. If the number of entries reported by lspci and nvidia-smi differs, there could be a potential hardware issue. To list the NVIDIA controllers:
~ # lspci |grep NVIDIA
Total number of NVIDIA controllers as above: 4
~ # nvidia-smi -l
Total number of NVIDIA controllers as above: 2. 0000:06:00.0 and 0000:07:00.0 are missing. If the working NVIDIA controllers are placed in passthrough mode, the result may differ:
~ # nvidia-smi -l
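As a quick sanity check, the two counts can be compared side by side with a hedged convenience one-liner (nvidia-smi -L prints one line per GPU it detects):
~ # echo "lspci NVIDIA devices: $(lspci | grep -c NVIDIA)"; echo "nvidia-smi GPUs: $(nvidia-smi -L | wc -l)"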
|
Run the script below on the ESXi host to generate a log file:
~ # nvidia-bug-report.sh
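The script typically writes a compressed report (commonly named nvidia-bug-report.log.gz) to the current working directory; a hedged example of viewing it without fully extracting it:
~ # zcat nvidia-bug-report.log.gz | less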
After extracting and viewing the generated report, the following lines under the Xorg.log section indicate there could be a power issue:
Xorg.log:
The next step is to check the power connectors connected to the GPU card. Refer to the GPU Replacement Documentation > Replacing a GPU card https://portal.nutanix.com/#/page/docs/details?targetId=GPU_Replacement-NOS_v3_5-NX7000:break_fix_gpu_replace_t.html for the location of the power connectors.
|
KB12715
|
Understanding Your Hardware and Field Engineer Entitlements
|
This article explains how your entitlements determine the part estimated time of arrival and Field Engineer availability.
|
This article explains how your entitlements determine the part estimated time of arrival and Field Engineer availability. Use this information to ensure you work within your entitlements.
|
Hardware support for Nutanix NX
Customers that select Nutanix NX hardware also benefit from a choice of hardware support engagement models:
Production Support
Hardware replacement delivery occurs between 8 am and 5 pm NBD (Next Business Day) when the dispatch is submitted by 3 pm LT * (Local Time). Note that a specific delivery time isn't available with this entitlement. The FE (Field Engineer) is scheduled automatically by OnProcess after parts are onsite unless a later date is requested by the customer. The earliest an FE can be scheduled is 8 am, with work finishing by 5 pm LT. For additional insights into the dispatch validation process, refer to KB-12686 https://portal.nutanix.com/kb/12686.
* The full description of NBD delivery by region can be found under "When Can I Expect A Replacement Part or Field Engineer To Arrive?" on the Support Policies & FAQs page https://www.nutanix.com/support-services/product-support/support-policies-and-faqs?show=accordion-10.
Production Support + Field Engineer After Hours
The Field Engineer (FE) After Hours Program allows customers to schedule an FE ND (Next Day) for hours after 5 pm LT as well as on weekends, not including holidays. This is intended to provide customers with the flexibility of parts replacement during scheduled maintenance windows after business hours. Hardware replacement delivery still occurs NBD between the hours of 8 am and 5 pm LT. Note that a specific delivery time isn't available with this entitlement.*
** FE After hours is available for purchase as an add-on to Production Support. Details can be found under "When Can I Expect A Replacement Part or Field Engineer To Arrive?" on the Support Policies & FAQs page https://www.nutanix.com/support-services/product-support/support-policies-and-faqs.
Mission Critical Support
Hardware replacement delivery occurs within 4 hours of dispatch submission unless otherwise delayed by the customer.* The Field Engineer (FE) will also arrive onsite within 4 hours of dispatch submission unless otherwise delayed by the customer. For additional insights into the dispatch validation process, refer to KB-12686 https://portal.nutanix.com/kb/12686. Please ensure access to the DC (Data Center) can be guaranteed on such short notice; otherwise, select a delay in the FE ETA to allow for the required time to set up access.
** 4-hour arrival guarantee is not available in all locations. Details can be found under "When Can I Expect A Replacement Part or Field Engineer To Arrive?" on the Support Policies & FAQs page https://www.nutanix.com/support-services/product-support/support-policies-and-faqs.
Business Requirements
When the business requirements of your cluster do not match your entitlements, please reach out to your account team to inquire about making the necessary changes.
For additional details, please refer to the following links:
Nutanix Product Support Programs https://www.nutanix.com/support-services/product-support/product-support-programs
Nutanix Support Quick Reference Guide https://www.nutanix.com/viewer?type=pdf&path=/content/dam/nutanix/resources/support/nutanix-support-quick-reference-guide.pdf
Support Policies and FAQs https://www.nutanix.com/support-services/product-support/support-policies-and-faqs - Refer to "When can I expect a replacement part or field engineer to arrive?" for the selection list. [
{
"Hardware Support Plans": "Production Support + FE After Hours",
"Hardware Replacement Delivery": "Between 8 am and 5 pm NBD",
"Field Engineer for parts replacement": "After working hours or on weekends scheduled next day"
},
{
"Hardware Support Plans": "Mission Critical Support",
"Hardware Replacement Delivery": "Within 4 hours of dispatch submission\t\t\tNot available in all areas",
"Field Engineer for parts replacement": "Within 4 hours of dispatch submission\t\t\tNot available in all areas"
}
]
|