id | title | summary | description | solution
---|---|---|---|---|
KB14252
|
Prism Central - Incompatible bundled App in Marketplace
|
This article describes the process of accessing the compatible version of an App with the new Prism Central Experience shipped with pc.2023.1.
|
Versions Affected: pc.2023.1
In the first release of Marketplace, i.e. Prism Central (PC) version pc.2023.1, some of the default bundled portfolio/Nutanix apps, such as Objects or Nutanix Kubernetes Engine (NKE - formerly Karbon), may not be optimized for the new unified Prism Central Experience.
In certain cases, during upgrades from an older version to the newer version (brownfield scenarios), you may skip upgrading some of these Nutanix apps to the recommended optimized version, i.e. you may continue to use the version that is not optimized for the new Prism Central Experience.
|
In all scenarios mentioned above, there is no impact to the feature functionality of the App; however, the user experience may be inconsistent or unoptimized.
For the best user experience, it is recommended to upgrade to the latest optimized version as mentioned in the table below.
Optimized Versions of Nutanix Apps for the New Prism User Experience
Upgrade Process
You can upgrade the above products through LCM (Life Cycle Manager) in Prism Central: https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=LCM.
[
{
"Service": "Files",
"Default Version": "4.2.1",
"Recommended Version": ">= 4.2.1"
},
{
"Service": "Objects",
"Default Version": "3.6",
"Recommended Version": ">= 4.0"
},
{
"Service": "Foundation Central",
"Default Version": "1.5",
"Recommended Version": ">= 1.5"
},
{
"Service": "Self-Service (formerly Calm)",
"Default Version": "3.6.2",
"Recommended Version": ">= 3.6.2"
},
{
"Service": "Move",
"Default Version": "4.7.0",
"Recommended Version": ">= 4.7.0"
},
{
"Service": "Nutanix Database Service (NDB - formerly Era)",
"Default Version": "2.5.1.1",
"Recommended Version": ">= 2.5.1.1"
}
]
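Deciding whether an installed service meets a ">= X" recommendation from the table above reduces to a version comparison. A minimal sketch using GNU `sort -V` (this is an illustration, not a Nutanix tool; LCM performs this check for you):

```shell
# Return 0 (true) if installed version ($1) >= recommended minimum ($2).
version_ge() {
  # The smaller of the two versions sorts first under sort -V; if that is
  # the recommended minimum, the installed version is new enough.
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

# Objects: default 3.6, recommended >= 4.0 (from the table above)
version_ge "3.6" "4.0" && echo "Objects OK" || echo "Objects needs upgrade"
# Files: default 4.2.1, recommended >= 4.2.1
version_ge "4.2.1" "4.2.1" && echo "Files OK" || echo "Files needs upgrade"
```

`sort -V` understands dotted version strings, so "4.0" correctly sorts after "3.6" even though "4.0" < "3.6" lexicographically.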
|
KB6426
|
Pre-Upgrade Check: test_version_compatibility
|
test_version_compatibility tests version compatibility with the help of either a proto or a metadata file.
|
This is a pre-upgrade check that tests version compatibility with the help of either a proto or a metadata file.
Note: This pre-upgrade check runs on Prism Element (AOS) and Prism Central during upgrades.
|
See the table below for the failure message you are seeing in the UI, further details about the failure message, and the actions to take to resolve the issue.
[
{
"Failure message seen in the UI": "Failed to get current cluster version.",
"Details": "The software is unable to get the current version of the cluster.",
"Action to be taken": "SSH to the cluster and run command 'ncli cluster info'. Confirm if the current cluster version is displayed. Run NCC health checks to determine if there are any failed checks. If all the checks pass and this pre-upgrade check is still failing, collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Failed to get upgrade info.",
"Details": "The software is unable to get the upgrade information.",
"Action to be taken": "Run NCC health checks to determine if there are any failed checks. If all the checks pass and this pre-upgrade check is still failing, collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Version [Version to which upgrade is attempted] is not compatible with current version [Version of the cluster]",
"Details": "The upgrade path is not valid. An upgrade is being attempted where the current version cannot be upgraded to the target version. This check is based on the proto file.",
"Action to be taken": "Verify the upgrade path using the Upgrade Paths page on the Nutanix Support Portal. If a valid upgrade path is being followed and the pre-upgrade check keeps failing, then collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Failed to find version [Cluster target version] in proto",
"Details": "The software is unable to find the version information in proto",
"Action to be taken": "Collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Version [Cluster target version] is same as the current version [Cluster current version]",
"Details": "Upgrade is being attempted where the target and current cluster version are same. This is not supported.",
"Action to be taken": "Verify the upgrade path using the Upgrade Paths page on the Nutanix Support Portal. If a valid upgrade path is being followed and the pre-upgrade check keeps failing, then collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Installer directory does not exist",
"Details": "Software is unable to reach the installer directory.",
"Action to be taken": "Collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Incorrect metadata file, Package validation failed.",
"Details": "Issues were found with the uploaded metadata file.",
"Action to be taken": "Verify that the right metadata json file is being used. Re-upload the correct metadata json. If the right file is being used and the check is still failing, then collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Metadata file [Metadata file path] does not exist",
"Details": "Software is unable to reach the metadata file path",
"Action to be taken": "Collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Failed to read metadata from [Metadata file path]",
"Details": "Software is unable to retrieve the metadata at the metadata file path",
"Action to be taken": "Re-upload the metadata json. If the check is still failing, then collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Current version [Current cluster version] is not upgrade compatible with version [Target version]",
"Details": "The upgrade path is not valid. An upgrade is being attempted where the current version cannot be upgraded to the target version. This check is based on the metadata file.",
"Action to be taken": "Verify the upgrade path using the Upgrade Paths page on the Nutanix Support Portal. If a valid upgrade path is being followed and the pre-upgrade check keeps failing then please collect NCC log collector bundle and reach out to Nutanix Support."
},
{
"Failure message seen in the UI": "Failed to retrieve information from metadata file [Metadata file path] with [Error message]",
"Details": "Software was unable to retrieve information from the metadata file",
"Action to be taken": "Re-upload the metadata json. If the check is still failing, then collect NCC log collector bundle and reach out to Nutanix Support."
}
]
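The proto/metadata-based compatibility logic above amounts to checking the current version against the set of versions the target can be upgraded from. A simplified sketch (the file format and the version numbers are purely illustrative, not the actual AOS metadata schema):

```shell
# Hypothetical metadata: a target version and the versions it accepts upgrades from.
cat > /tmp/upgrade_metadata.txt <<'EOF'
target=6.5.1
from=5.20.4
from=6.0.2
from=6.5
EOF

check_upgrade_path() {  # usage: check_upgrade_path <current_version>
  target=$(sed -n 's/^target=//p' /tmp/upgrade_metadata.txt)
  if grep -qx "from=$1" /tmp/upgrade_metadata.txt; then
    echo "OK: $1 can be upgraded to $target"
  else
    # Mirrors the "not upgrade compatible" failure in the table above.
    echo "FAIL: Current version $1 is not upgrade compatible with version $target"
  fi
}

check_upgrade_path "6.0.2"   # a valid upgrade path
check_upgrade_path "5.15.7"  # not in the metadata list
```

The real check also handles the degenerate cases listed in the table (same version as current, missing metadata file, unreadable metadata), which this sketch omits.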
|
KB13324
|
Volumes RBAC permissions for common scenarios
|
This KB describes Nutanix Volumes RBAC permissions for common use-case scenarios.
|
Prism Central release pc.2022.9, via FEAT-12997 https://jira.nutanix.com/browse/FEAT-12997, introduces fine-grained RBAC support for VGs to govern appropriate access control.
This KB describes the common use-cases identified in the PRD and which specific permissions are required to perform the VG operations. Example screenshots of the VG permissions configuration are provided in the Solution section as well.
|
Below are screenshots showing what the corresponding menus look like in the PC UI:
Scope - iSCSI
Scope - VG
VG - Permissions
VG - Custom Permissions
[
{
"Scenario": "ACPs to allow VG disk creation on specific containers",
"Permissions": "VolumeGroup FullAccess",
"Scope": "Set cluster/category scope for VG, Cluster, iscsiClient, AHV VM. Set StorageContainer to specific container."
},
{
"Scenario": "Scope user to only manage externally connected VGs",
"Permissions": "VolumeGroup FullAccess - exclude: Allow VM Volume Group Connection, Update Connections with Direct-attach AHV VMs, View VM",
"Scope": "All Entity Types - Specify cluster_uuid(s) or category. Cluster - Specify cluster_uuid(s) or category"
}
]
|
KB9491
|
NCC Health Check: container_on_removed_storage_pool
|
This check was introduced in NCC 3.10.1 to check whether there are containers built on top of a storage pool marked for removal. The check is WARNING level and is scheduled to run hourly.
|
The NCC health check plugin container_on_removed_storage_pool checks whether any storage container is provisioned on storage pools that are marked for removal. Starting with NCC version 4.6.3, another check is added for any storage container marked for removal where the removal process has been running for longer than 24 hours.
Running the NCC Check
It can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
or individually as:
nutanix@cvm$ ncc health_checks stargate_checks container_on_removed_storage_pool
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run hourly. Before NCC version 4.6.3, this check does not generate an alert after a detected failure. Starting with NCC version 4.6.3, this check generates a Warning alert A20024 after 1 failure, and a Critical alert A20032 after 24 consecutive failures across scheduled intervals.
Sample output:
For Status: PASS
Running : health_checks stargate_checks container_on_removed_storage_pool
For Status: FAIL
Running : health_checks stargate_checks container_on_removed_storage_pool
Output messaging:
The check returns a PASS if the following is true:
No container uses any of the storage pools listed as marked for removal.
The check returns a WARN if the following is true:
There is a container detected that uses one or more of the storage pool(s) listed as marked for removal. It will display the UUID of the storage pool in question.
[
{
"Check ID": "20024",
"Description": "Check if any storage container is provisioned on storage pools that are marked for removal.",
"Causes of failure": "There are some storage containers that are built on top of removed storage pools. This can potentially block disk removal.",
"Resolutions": "Refer to KB-9491 for the cleanup procedure.",
"Impact": "Can potentially cause a disk removal to get stuck.",
"Alert ID": "A20024",
"Alert Title": "Storage container(s) ctr_list is/are built on top of removed storage pools",
"Alert Message": "The following storage containers are built on removed storage pools: Container ctr_name on pool pool_name"
},
{
"Check ID": "20032",
"Description": "Check if containers are marked for removal.",
"Causes of failure": "Container removal process is stuck for over 24 hours.",
"Resolutions": "Refer to KB-9491 for more information. Please contact Nutanix Support.",
"Impact": "Can potentially cause a disk removal to get stuck.",
"Alert ID": "A20032",
"Alert Title": "Storage container container_name is marked for removal for over 24 hours",
"Alert Message": "Containers ctr_names with uuids ctr_uuids are marked for removal for over 24 hours"
}
]
|
For check ID 20024, to verify the container-on-removed-storage-pool condition, collect the following information:
nutanix@cvm$ ncli ctr ls name={Container name(s) from the alert} | grep "Storage Pool Id"
For example, if the container in question was named NutanixManagementShare, the request would look as follows:
nutanix@cvm$ ncli ctr ls name=NutanixManagementShare | grep "Storage Pool Id"
Note: The pool ID does not match the unique pool identifier located after the double colon.
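Extracting the pool identifier from the ncli output can be scripted. A sketch over a captured sample (the field values below are fabricated; on a CVM you would pipe the real `ncli ctr ls` output in):

```shell
# Fabricated 'ncli ctr ls' output fragment (illustrative values only).
sample='    Name                      : NutanixManagementShare
    Storage Pool Id           : 00058977-aaaa-bbbb::7
    Storage Pool Uuid         : 9c0a337e-02b6-4f27-a063-ca90f14366a1'

# Extract the value after the double colon on the Storage Pool Id line.
pool_id=$(printf '%s\n' "$sample" | grep "Storage Pool Id" | awk -F'::' '{print $NF}')
echo "Container is on storage pool: $pool_id"

# Compare against a hypothetical list of pool ids reported as removed.
removed_pools="7 12"
for p in $removed_pools; do
  [ "$p" = "$pool_id" ] && echo "WARN: container sits on removed pool $p"
done
```

The `removed_pools` list is an assumption for illustration; in practice the removed pool identifiers come from the alert message itself.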
For check ID 20032, the storage container reported by the check might remain marked as 'to_remove' for an extended period of time without progressing. See KB 14303 https://portal.nutanix.com/kb/14303 for some of the reasons that might have prevented the container deletion request in the first place.
In the scenario flagged by this NCC check, though, the container is successfully scheduled for removal but cannot proceed. This requires troubleshooting into what is causing the container removal to stall. Engage Nutanix Support at https://portal.nutanix.com.
Additionally, please gather the following command output and attach it to the support case:
nutanix@cvm$ ncc health_checks run_all
|
KB16761
|
CVM High System CPU Usage From Cassandra
|
A single CVM in the cluster may exhaust its CPU with most of the load in system space (%sys). This is in direct correlation with elevated Cassandra CPU usage.
|
We have seen some instances where a single CVM in the cluster will exhaust its CPU resources with the majority of the cycles spent in system space (%sys). This can be observed from mpstat as follows:
nutanix@CVM:x.x.x.X:~$ mpstat -P ALL
This can also be observed from top or top.INFO by paying attention to the "sy" value in the summary. A sample from top.INFO is below:
nutanix@CVM:x.x.x.X:~$ egrep "Cpu|TIMESTAMP" data/logs/sysstats/top.INFO
In parallel, we see that Cassandra is consuming almost all the CPU in the CVM (in the sample below, note that Cassandra moves from 56% to 1023% CPU usage):
nutanix@CVM:x.x.x.X:~$ /bin/ps auxww | /bin/grep CassandraDaemon | /bin/grep -v -w grep | /bin/awk '{print $2}'
This condition persists until Degraded Node Detection triggers against the afflicted CVM and its Cassandra service restarts. While the issue is active, we may observe restarts of non-CDP services and higher than expected disk latency on the UVMs.
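The %sy spike can be picked out of top.INFO mechanically. A sketch that flags samples where system CPU exceeds a threshold (the log lines below are fabricated to mimic the top summary format):

```shell
# Fabricated top.INFO-style samples: a TIMESTAMP line followed by a Cpu(s) summary.
cat > /tmp/top_sample.txt <<'EOF'
TIMESTAMP 1700000000 : 11/14/2023 10:00:01 AM
%Cpu(s): 12.3 us,  4.1 sy,  0.0 ni, 82.0 id
TIMESTAMP 1700000300 : 11/14/2023 10:05:01 AM
%Cpu(s):  8.9 us, 71.4 sy,  0.0 ni, 18.2 id
EOF

# Print the timestamps of samples where the 'sy' value exceeds 50%.
high=$(awk '/TIMESTAMP/ {ts=$0}
            /Cpu\(s\)/ {
              for (i = 2; i <= NF; i++)
                if ($i == "sy," && $(i-1)+0 > 50) print ts " -> sy=" $(i-1) "%"
            }' /tmp/top_sample.txt)
echo "$high"
```

The 50% threshold is an assumption for illustration; what matters in this KB is the correlation of the sustained %sy spike with Cassandra CPU usage.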
|
The Nutanix engineering team is actively investigating this via ENG-657002. If the problem is still active, please collect the data from the internal notes and proceed to an ONCALL. If the Cassandra service on the afflicted CVM has already been restarted (node degraded), then the data needed to debug the issue is lost.
|
KB4334
|
Phoenix failed with error "UnicodeDecodeError"
|
Phoenix can fail with error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 221: ordinal not in range(128)"
|
When imaging a node using Phoenix, it can fail with the following errors:
"Fatal exception encountered:
|
First boot log location on the ESXi host: /bootbank/Nutanix/firstboot/first_boot.log
From the errors in first_boot.log, the first_boot script encountered an exception when it tried to modify the /etc/ssh/sshd_config file. Looking into the contents of /etc/ssh/sshd_config, the comment lines at the bottom of the config file can have non-ASCII characters in place of the quote characters. An example of those lines (before modification):
# sshd(8) will refuse connection attempts with a probability of “rate/100”
Since they are comments, simply delete those lines, remove /bootbank/Nutanix/firstboot/.firstboot_fail, then re-run /bootbank/Nutanix/firstboot/esx_first_boot.py. That should complete Phoenix without issue.
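The offending lines can be located (and removed) mechanically before re-running the first boot script. A sketch that flags and deletes lines containing non-ASCII bytes, run against a fabricated sample file (on the host you would point this at /etc/ssh/sshd_config and review the hits before deleting):

```shell
# Fabricated sshd_config snippet with curly quotes in a comment (non-ASCII bytes).
cat > /tmp/sshd_config.sample <<'EOF'
PermitRootLogin no
# sshd(8) will refuse connection attempts with a probability of “rate/100”
EOF

# Print line numbers of lines containing bytes outside the printable ASCII range.
LC_ALL=C grep -n '[^ -~]' /tmp/sshd_config.sample

# Since the flagged lines are only comments, delete them in place.
LC_ALL=C sed -i '/[^ -~]/d' /tmp/sshd_config.sample
```

`LC_ALL=C` makes grep and sed treat the file as raw bytes, so the multi-byte UTF-8 quote characters (the 0xe2 byte from the UnicodeDecodeError) match `[^ -~]`.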
|
KB5507
|
Manual deployment of Citrix Cloud connector VMs to AHV cluster
|
This article describes how to manually deploy Citrix Cloud Connector VMs to an AHV cluster.
|
You may need to manually deploy a Citrix Cloud connector VM for one of the reasons mentioned below:
Prism 1-click deployment is failing for some reason.
You need to set up more than an HA pair for better performance.
You need to set up a Nutanix cluster with multiple Citrix Cloud accounts.
Note that deploying the Citrix Cloud connector manually does not allow management of the connector through Prism.
|
For every Citrix Cloud Tenant, do the following:
1. Set up the necessary number of Citrix Cloud connector VMs based on the Citrix instructions here: https://docs.citrix.com/en-us/citrix-cloud/citrix-cloud-resource-locations/citrix-cloud-connector/installation.html.
2. On every Citrix Cloud connector VM, install the Nutanix CWA (Citrix Workspace Appliance) Plug-In for Citrix Cloud Connector 1.0.0.0 or later, available from the Nutanix Support Portal under Downloads/AHV/Nutanix AHV Plugin for Citrix: https://portal.nutanix.com/page/downloads?product=ahv. The consolidated installer for Citrix contains plug-ins for Citrix XenDesktop, Cloud Connector, and Provisioning Services. For installation instructions for the CWA MSC AHV Plugin, refer to Nutanix AHV Plugin for Citrix: https://portal.nutanix.com/page/documents/details?targetId=NTNX-AHV-Plugin-Citrix:ahv-plugin-install-t.html.
3. Once the above two steps are completed successfully, log on to your Citrix Cloud account and access Citrix Studio. Create a Hosting Connection and choose Nutanix AHV in the dropdown.
4. Create a catalog and provision desktops.
|
KB11147
|
Scavenger services crashing frequently on CVM
|
This article describes steps to troubleshoot frequent Scavenger service crashes on a CVM despite there being no disk or memory resource crunch.
|
In AOS versions below 5.15.4, the Scavenger service might keep crashing frequently.
Verify that there are enough resources, such as disk space or memory, on the CVM for the processes to run normally and that there is no evident resource crunch. Make a note of the node and CVM uptimes as well.
~/data/logs/acropolis.out constantly logs WARN messages as below:
2021-01-22 11:34:12 WARNING arithmos_publisher.py:468 Failed to collect stats from host 2ab5d277-80c8-408c-8ef3-f3e3ac5d3786: [Errno 11] Resource temporarily unavailable
Restarting the Scavenger service holds off the alerts for some time, but they recur.
nutanix@CVM:~$ genesis stop scavenger
Scavenger crashes again with the following FATAL snippet shown:
Traceback (most recent call last):
Note: Scavenger crashes might also result in /home utilization piling up. Keep monitoring it and clearing it up so that the /home does not hit 100% usage.
Check the max user processes limit using `ulimit -a`. The default value is 2048.
nutanix@CVM:~$ ulimit -a
The max limit is configured at 2048, but the CVMs had threads beyond this limit.
nutanix@CVM:~$ allssh "ps -eLF | awk '{print $1}' |grep -c nutanix"
In one instance, we saw about 250 threads taken up by the nfs_dd process.
nutanix@CVM:~$ top -b -n 1 | awk -F ' ' '{print $NF}' | sort | uniq -c | sort -rn | head -10
Checking the ps commands to see the status for 'nfs_dd' processes:
nutanix@CVM:~$ ps -auxf | grep nfs_ | wc -l
The number of tasks in the top command was at around ~900.
Check for any old stale tasks running or queued:
nutanix@CVM:~$ ecli task.list include_completed=0 limit=1000
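The per-user thread count from `ps -eLF` can be compared against the nproc limit directly. A sketch over a fabricated ps sample (on a CVM you would pipe the real `ps -eLF` output in, as the allssh command above does):

```shell
# Fabricated 'ps -eLF'-style output: one line per thread, first column is the user.
cat > /tmp/ps_sample.txt <<'EOF'
UID        PID  PPID   LWP  C NLWP
nutanix   2101     1  2101  0    3
nutanix   2101     1  2102  0    3
nutanix   2101     1  2103  0    3
root       950     1   950  0    1
EOF

nproc_limit=2048   # the default soft nproc value noted above
threads=$(awk '{print $1}' /tmp/ps_sample.txt | grep -c nutanix)
echo "nutanix threads: $threads (limit: $nproc_limit)"
if [ "$threads" -ge "$nproc_limit" ]; then
  echo "WARN: thread count at or above the nproc limit"
fi
```

In the real scenario described in this KB, the count exceeds 2048 while the limit is still 2048, which is what starves Scavenger of new processes.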
|
This bug has been fixed in 5.18, 5.17.1.5, 5.15.4, and later versions.
Verify and clear the tasks that have been stuck for months. Depending on the task type, there is a KB for each task. Clearing these tasks might help free up some of the threads. Monitor the thread usage for some time:
nutanix@CVM:~$ allssh "ps -eLF | awk '{print $1}' |grep -c nutanix"
If the thread count does not decrease significantly even after some time and the cluster is not stabilized, refer to ENG-318286 https://jira.nutanix.com/browse/ENG-318286 and check if Scavenger is spawning unnecessary threads and hitting nproc limits. If an upgrade is not a feasible option now, follow the temporary workaround below.
Workaround:
The changes must be reverted after the upgrade. Do not use this workaround unless there is absolutely no possibility of upgrading.
To work around this issue, change the soft nproc value from 2048 to 4096 and the soft nofile value from 10240 to 20480 in all of the following files:
/srv/salt/security/CVM/core/limitsoff
/srv/salt/security/CVM/core/limitson
/etc/security/limits.conf
Increase the soft nproc value to 4096 and the soft nofile value to 20480:
nutanix soft nproc 4096
After making the changes above, verify what the current user limits are
nutanix@CVM:~$ ulimit -a
If the value of "max user processes" is not updated to 4096, reboot the CVM for the change to take effect. Once the workaround stabilizes the cluster, upgrade the cluster at the earliest to the fixed versions to prevent a recurrence. Once upgraded, revert the soft nproc and soft nofile limits altered in the workaround back to their default values.
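The limit changes can be scripted so all three files are edited consistently. A sketch against a fabricated limits.conf copy (the real paths are /srv/salt/security/CVM/core/limitsoff, /srv/salt/security/CVM/core/limitson, and /etc/security/limits.conf; only apply this as part of the workaround above):

```shell
# Fabricated limits.conf fragment with the default values from this KB.
cat > /tmp/limits.conf.sample <<'EOF'
nutanix soft nproc 2048
nutanix soft nofile 10240
EOF

# Raise soft nproc 2048 -> 4096 and soft nofile 10240 -> 20480 in place.
sed -i -e 's/^nutanix soft nproc 2048$/nutanix soft nproc 4096/' \
       -e 's/^nutanix soft nofile 10240$/nutanix soft nofile 20480/' \
       /tmp/limits.conf.sample

cat /tmp/limits.conf.sample
```

Anchoring both patterns with `^` and `$` means only exact default lines are rewritten, which also makes reverting after the upgrade a matter of running the inverse substitutions.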
|
KB12929
|
Debugging VPC tunnel connection failures
|
Debugging VPC tunnel connection failures.
|
This article describes how to debug VPC tunnel connection failures. Nutanix Self-Service is formerly known as Calm.
|
Debugging disconnected tunnel connection
Check if the following requirements are met:
The Advanced Networking Controller should be enabled.
The Policy Engine in Calm should be enabled.
An external subnet should be attached to the VPC for which the user wants to deploy the Tunnel VM.
Users should be able to ping Prism Central (PC) or the Policy Engine from a VM deployed in the same VPC after attaching the external subnet.
Traffic on port 2222 using TCP connections should be permitted from the VPC to the Policy Engine VM.
Tunnel status is inferred through a periodic tunnel sync job that runs every 5 minutes to check if tunnel connection is healthy. The tunnel state may frequently toggle if there are intermittent network issues. In cases where the tunnel was restarted, or there was a network issue between tunnel client VM and policy VM, wait for at least 5 minutes for the next tunnel sync to display the correct tunnel state.
Example screenshots when restarting tunnel VM:
Navigate to the tunnel detail view in UI and check if the tunnel is in error state due to failures from the last run action. For such scenarios, you can click the Download Audit Logs button and share the bundle with Nutanix Support https://portal.nutanix.com.
If the tunnel stays disconnected even after the next tunnel sync, you can deploy a UBVM in the same VPC and subnet and check if external connectivity is working by pinging the Policy Engine VM and trying to SSH into the PC (where Calm is deployed).
If the Tunnel VM was accidentally deleted from the PC, you can delete the tunnel entity and create a new one.
Log collection:
For calm and epsilon, logs can be collected via logbay using the following command:
nutanix@cvm$ logbay collect -t nucalm,epsilon
For policy-engine, logs can be collected via logbay using the following command:
nutanix@cvm$ logbay collect -t calm_policy_engine
For tunnel lifecycle logs, navigate to the tunnel detail view in UI and click the Download Audit Logs button.
|
KB7146
|
Citrix ELM Cannot read property 'uuid' of null'
|
When deploying a VM using Citrix App Layering, the customer gets the error "A failure occurred while attaching disk to the virtual machine. The error is 'Cannot read property 'uuid' of null'" and is not able to deploy the image.
|
When deploying a VM using Citrix App Layering, the customer gets the error "A failure occurred while attaching disk to the virtual machine. The error is 'Cannot read property 'uuid' of null'" and is not able to deploy the image. We found that the time between the POST and the GET is very small, which causes the 500 error in the logs. As shown below, ELM does a GET vm very shortly after it has done a POST vm.
prism_gateway.log:
ERROR 2019-03-01 12:39:47,723 http-nio-127.0.0.1-9081-exec-12 prism.aop.RequestInterceptor.invoke:178 Throwing exception from VMAdministration.getVM
ELM log:
{"uuid":"7f7ede2a-1f36-4d9d-bbaf-a8076653123a","metaRequest":{"methodName":"VmClone"},"metaResponse":{"error":"kNoError","errorDetail":""},"createTime":1551440384333567,"startTime":1551440384361866,"completeTime":1551440386926207,"lastUpdatedTime":1551440386926207,"entityList":[{"uuid":"a8da0485-3340-4dd8-925f-76fc0b8da601","entityType":"VM","entityName":""},{"uuid":"914b76e8-4b3d-4354-8f43-62e6ffb76560","entityType":"Snapshot","entityName":""},{"uuid":"69fb3af5-a651-4811-80d2-e28fb5c48db8","entityType":"VM","entityName":""}],"operationType":"VmClone","message":"","percentageComplete":100,"progressStatus":"Succeeded","parentTaskUuid":"94137ed0-4693-4d39-a63c-32dcbd771fdd","subtaskUuidList":["af9c6aa3-d100-4e98-8445-9786b45027c1","44935ab1-5034-4f00-bebd-df9cf9ff2c38","110b6cd5-0746-4fee-abc1-90651e5a7b22"]}
|
Refer the customer to Citrix support. At the time of writing this KB, Citrix support provided the reply below.
Regarding the issue with DeployVmFromTemplate, Citrix solved this by modifying the DeployVmFromTemplate file, in which the timeout/retries code is adjusted. According to Citrix, the fix will be implemented in version 19.03 of App Layering. Advise the customer to upgrade to version 19.03. Make sure to check this link https://docs.citrix.com/en-us/citrix-app-layering/4/system-requirements.html to verify AOS compatibility.
Another issue you may encounter is with AttachVolumeGroupToVm. According to Citrix, this issue currently has a bug fix open. It is similar to DeployVmFromTemplate in that the API call times out too quickly. A temporary workaround is to disable the cache when deploying the VM.
Note: Disabling the cache will not resolve the DeployVmFromTemplate issue.
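The Citrix-side fix amounts to retrying the GET after the POST instead of failing on the first 500. A generic retry-with-backoff sketch in shell (the `flaky_get` function is a stand-in for the real API call and is purely illustrative):

```shell
# Stand-in for an API GET that fails its first two attempts, simulating the
# 500 returned when the GET lands too soon after the POST.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky_get() {
  n=$(($(cat "$attempts_file") + 1))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]   # succeeds on the third attempt
}

# Retry up to 5 times with a short pause between attempts.
retry() {
  for i in 1 2 3 4 5; do
    if "$@"; then echo "succeeded on attempt $i"; return 0; fi
    sleep 1
  done
  echo "gave up after 5 attempts"; return 1
}

retry flaky_get
```

A real client would also use exponential backoff and cap the total wait; the point here is simply that the transient 500 disappears once the GET is retried after a short delay.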
|
KB17081
|
AD login breaks due to firewall/packet inspection between the CVMs and the AD server
|
AD login breaks due to firewall/packet inspection between the CVMs and the AD server
|
Logging into the PE/PC with an AD user shows the below error:
The username or password you entered is incorrect. Please try again.
The Prism Gateway debug logs (can be enabled via KB-1633 https://portal.nutanix.com/kb/1633) show a ConnectionError to the AD server due to an SSL socket exception:
DEBUG 2024-04-19 06:57:39,158Z http-nio-127.0.0.1-9081-exec-14 [] auth.commands.LDAPAuthenticationProvider.authenticate:128 Trying to match domain in directory list with domain provided by username : test.com
Attempting a test of the user via the AD config page on PE shows a simple bind failure message:
Authentication test failed. simple bind failed: xx.xx.xx.13:636
Connection from the CVMs/PCVMs to the AD server is fine:
nutanix@NTNX-CVM:~$ allssh "nc -vz xx.xx.xx.13 636"
Attempting a connection to the AD server via openssl fails.
NOTE: Ensure you check the connectivity via openssl from all the CVMs/PCVMs in the cluster.
Problematic output:
nutanix@NTNX-CVM:~/data/logs$ openssl s_client -connect xx.xx.xx.13:636
Expected output:
nutanix@NTNX-CVM:~$ openssl s_client -connect xx.xx.xx.13:636
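Whether the openssl handshake succeeded can be judged from the s_client output: a healthy connection returns the AD server's certificate, while the firewall-inspected case dies without one. A sketch classifying two captured samples (both outputs are fabricated for illustration):

```shell
classify_ssl() {
  # A successful s_client run prints the server certificate; the
  # packet-inspected case typically fails before any certificate arrives.
  if printf '%s\n' "$1" | grep -q "BEGIN CERTIFICATE"; then
    echo "handshake OK - AD server certificate received"
  else
    echo "handshake FAILED - check firewall/packet inspection in the path"
  fi
}

good='-----BEGIN CERTIFICATE-----
MIIC...snip...
-----END CERTIFICATE-----
Verify return code: 0 (ok)'
bad='CONNECTED(00000003)
write:errno=104
no peer certificate available'

classify_ssl "$good"
classify_ssl "$bad"
```

On a live cluster you would feed this the output of `openssl s_client -connect <AD server>:636` from each CVM/PCVM, since inspection devices may affect only some source addresses.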
|
[]
|
KB8623
|
Nutanix Files - Access Based Enumeration (ABE) not working on a Home/Distributed Share
|
There have been a few cases where customers think that ABE is not functioning properly. This KB describes how to verify that ABE is indeed working as it is supposed to.
|
Access-based enumeration (ABE) is a Microsoft Windows (SMB protocol) feature. ABE restricts user access by only letting you view the files and folders you have read access to when browsing content on a file server. If the customer claims that ABE does not work as designed:
1. Find out what type of share they are using. Refer to https://confluence.eng.nutanix.com:8443/display/STK/ABE+Basics+in+AFS for a basic understanding of ABE.
|
If the complaint is regarding a General Share, involve the Files SMEs in your region to look into this further. In most cases we have seen, customers complain about Home shares where ABE is not working.
The structure of a Distributed/Home Share is such that:
A) You cannot write to the root of the share.
B) You need to create TLDs via the MMC Snap-In Console or PowerShell commands and then write to those.
In a General/Standard Share, you can write to the root of the share. The implementation of ABE on a General/Standard share is straightforward: users will only see the files and folders they have access to.
The implementation of ABE is slightly different for a Distributed/Home Share, i.e. it is only enforced on the Top Level Directories (TLDs) and not on what is inside a TLD. ABE in general has a performance overhead on the query directory response. The typical use case of the Home share for SMB access is user profiles. If the Home share is used for storing user profiles, all the subfolders underneath a TLD should belong to the same user, and enforcing the ACL checks (due to ABE) at those levels causes unnecessary delays. Hence, we decided to disable ABE below the TLD to reduce the impact. In short, ABE is enforced on the TLDs of a Home Share, not inside the TLDs. If the customer requires enforcement within TLDs, make them aware of the performance overhead (browsing the TLDs might get slower).
To set ABE within the TLDs:
1. Log in to any FSVM.
2. Use the following command:
nutanix@FSVM:~$ afs smb.set_conf "abe tld subtree" true section=global
Now ABE should be enforced within TLD's as well.
If a customer uses roaming user profiles, they should set the correct share permissions on the distributed shares for ABE to work correctly. Refer to "Required permissions for the file share hosting roaming user profiles" on the following page: https://docs.microsoft.com/en-us/windows-server/storage/folder-redirection/deploy-roaming-user-profiles
|
KB10456
|
ESXi 6.5 failure when imaging with Foundation 4.5.4.2
|
This article describes an issue where imaging using Foundation version higher than 4.5.4.2 and lower than 4.6.1 with ESXi 6.5 or lower and a CX4 NIC installed on Lenovo HX fails with a PSOD (purple screen of death).
|
When imaging using Foundation version higher than 4.5.4.2 and lower than 4.6.1 with ESXi 6.5 or lower and a CX4 NIC installed on Lenovo HX, the imaging fails with a PSOD (purple screen of death).
|
The issue is resolved in Foundation 4.6.1. Use Foundation 4.6.1 or higher for Imaging to resolve this issue.
|
KB13107
|
LCM Operations on ESXi Clusters Configured with a Single Compute Node Will Fail Due to Lack of Prism Leader
|
In a very specific configuration in which a cluster is comprised of a single compute node running ESXi, and any number of storage-only nodes, LCM operations on the compute node will fail due to a failure by Prism to elect a leader. This leads to a catalog service failure, and ultimately, LCM update failure.
|
In a very specific configuration in which a cluster is comprised of a single compute node running ESXi, and any number of storage-only nodes, LCM operations on the compute node will fail due to a failure by Prism to elect a leader, as seen here in lcm_ops.out:
2022-04-02 09:16:22,903Z INFO catalog_staging.py:697 (10.xxx.xxx.92, update, 27dd69b9-90d4-4639-a1eb-bb2dd9c021db) Staging catalog item nutanix.redpool.plugins.supermicro
During the LCM firmware upgrade operation, the catalog service must retrieve an authentication token from the prism leader in order to stage update packages. Since Light compute (Storage only) nodes may not volunteer for Prism leadership unless no other node is detected at the time the service starts, the prism leadership will not be migrated to any of the storage-only nodes. This will cause the catalog service to fail to authenticate, and after 10 minutes, the LCM operation will fail.
nutanix@CVM:~$ allssh "grep 'This SVM will not volunteer for Prism leader' ~/data/logs/prism_monitor*INFO* |tail -n 3"
In this example, the LCM operation is being performed on 10.xxx.xxx.95 and all other nodes are configured as minimal compute nodes:
nutanix@CVM:~$ allssh grep minimal_compute_node /etc/nutanix/hardware_config.json
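The per-node check boils down to reading the minimal_compute_node flag from each node's hardware_config.json, which is what the allssh grep above does across CVMs. A sketch over fabricated per-node copies of the file (the paths and JSON fragments are illustrative):

```shell
# Fabricated hardware_config.json fragments for a hypothetical 3-node cluster.
mkdir -p /tmp/nodes/n1 /tmp/nodes/n2 /tmp/nodes/n3
echo '{"node": {"minimal_compute_node": false}}' > /tmp/nodes/n1/hardware_config.json
echo '{"node": {"minimal_compute_node": true}}'  > /tmp/nodes/n2/hardware_config.json
echo '{"node": {"minimal_compute_node": true}}'  > /tmp/nodes/n3/hardware_config.json

# Count full compute nodes; the problem scenario in this KB is exactly one.
full=$(grep -l '"minimal_compute_node": false' /tmp/nodes/*/hardware_config.json | wc -l)
echo "full compute nodes: $full"
if [ "$full" -eq 1 ]; then
  echo "WARN: single compute node plus storage-only nodes - LCM on it may hit the Prism leader issue"
fi
```

With exactly one full compute node, taking it down for LCM leaves no eligible Prism leader, which is the failure mode described above.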
|
As a workaround, restart the Prism service on a Light Compute (LC) node after the ESXi node has gone down for maintenance. When Prism restarts, it will recognize that there are no standard nodes available to become Prism leader, and the cluster will allow the Light Compute node to host the leadership role. If it has been more than 10 minutes since the LCM operation began, it will have already failed and must be retried.
|
KB6177
|
ESXi - VMs with more than 32 GB of memory and Nvidia P40 cards fail to boot.
|
Resolved in VMware vSphere ESXi 6.7. Workaround available.
|
A VM with over 32 GB of RAM and an Nvidia Tesla P40 GPU card on an ESXi host may fail to power on.
The power on task on the vSphere client will be stuck (usually on 52%), and the VM console will not come up.
As per Nvidia information, this only affects:
ESXi 6.0 Update 3 and later updates
ESXi 6.5 and later updates
|
This issue is resolved in VMware vSphere ESXi 6.7. Nvidia has provided the following workaround for this issue:
Workaround
If you want to use a VM with 32 GB or more of RAM with Tesla P40 GPUs, use this workaround:
1. Create a VM to which less than 32 GB of RAM is allocated.
2. Choose VM Options > Advanced and set pciPassthru.use64bitMMIO="TRUE".
3. Allocate the required amount of RAM to the VM.
For more information, see VMware Knowledge Base Article: VMware vSphere VMDirectPath I/O: Requirements for Platforms and Devices (2142307) http://kb.vmware.com/kb/2142307.
Reference https://docs.nvidia.com/grid/latest/grid-vgpu-release-notes-vmware-vsphere/index.html#bug-2043171-vms-more-than-32-gb-ram-fail-to-boot-p40
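The same setting can be applied by appending the line to the VM's .vmx file while the VM is powered off. The helper function and the datastore path below are illustrative assumptions, not official tooling.

```shell
# Append the advanced setting to a powered-off VM's .vmx file, only if it is
# not already present. The path is an example; adjust to the real datastore.
VMX="/vmfs/volumes/datastore1/bigvm/bigvm.vmx"   # example path (assumption)
add_mmio_setting() {
  grep -q '^pciPassthru.use64bitMMIO' "$1" || \
    echo 'pciPassthru.use64bitMMIO = "TRUE"' >> "$1"
}
# add_mmio_setting "$VMX"
```

The helper is idempotent, so running it twice does not duplicate the line.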
|
KB6089
|
How to create backplane network when AHV hypervisor is reinstalled manually - network segmentation is enabled.
|
When an AHV node is manually reimaged using Phoenix, br0-backplane is not created automatically. As a result, Genesis fails to start on a cluster with network segmentation enabled due to the missing br0-backplane interface.
|
WARNING: manage_ovs is the preferred vehicle to update bond/bridge configurations on AHV hypervisors. Do not use the ovs-vsctl command directly on the hosts unless there is no other option, and proceed with extreme care. Improper use of the command can create a network loop, which will cause a cluster outage and general network unavailability at the customer premises. Starting from 5.15.2 (ENG-308490), manage_ovs will work as described in KB 9383 even when the cluster is down and Acropolis or Insights are not working. If in doubt, engage an STL via Tech-Help before attempting any modification via ovs-vsctl. When an AHV node is manually reimaged using Phoenix, br0-backplane is not created automatically and, as a result, Genesis fails to start on a cluster with network segmentation enabled:
2018-08-16 14:28:39 ERROR ipv4config.py:1625 Unable to get the KVM device configuration, ret 1, stdout , stderr br0-backplane: error fetching interface information: Device not found
Note: the same error messages can be seen on clusters that do not have network segmentation enabled and can be ignored there. Since Genesis crashes, the OVS switch will only have the basic configuration:
root@ahv # ovs-vsctl show
|
Use the following steps to fix the issue:
1. Create br0-backplane:
root@ahv# ovs-vsctl add-port br0 br0-backplane -- set interface br0-backplane type=internal
2. Create ifcfg-br0-backplane file with IP configuration (similar to other CVMs; IP address is taken from zeus config) on AHV host:
# Auto generated by phoenix
3. Change VLAN on br0-backplane interface (if needed):
root@ahv# ovs-vsctl set port br0-backplane tag=<vlan.no>
4. Bring interface down and up:
root@ahv# ifdown br0-backplane
5. After that genesis will be able to start and push correct bridge configuration:
root@ahv # ovs-vsctl show
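The ifcfg-br0-backplane file from step 2 can be rendered with a small helper. The fields shown are assumptions modeled on a typical CVM backplane configuration; take the real IP (from the zeus config) and netmask from another working node.

```shell
# Render an ifcfg-br0-backplane file body. Fields are assumptions modeled on
# other CVMs in the cluster; verify against a working node before applying.
render_ifcfg() {
  cat <<EOF
# Auto generated by phoenix
DEVICE=br0-backplane
ONBOOT=yes
BOOTPROTO=none
IPADDR=$1
NETMASK=$2
EOF
}
# Example (values are placeholders):
# render_ifcfg 192.168.5.2 255.255.255.0 > /etc/sysconfig/network-scripts/ifcfg-br0-backplane
```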
|
KB5644
|
Nutanix Self-Service - Blueprint Save throws validation error "NIC setting mismatch" for VMware blueprints
|
This article helps in troubleshooting the error "NIC mismatch setting" when saving Nutanix Self-Service VMware blueprint.
|
Nutanix Self-Service (NSS) is formerly known as Calm.
VMware blueprint save fails with error:
Nic setting mismatch - Number of network setting should be equal to the number of NICs on the VM
|
Based on the VMware template that is used for the service, there can be one or multiple NICs attached to it.
In the following example, the template used for the service has one NIC attached:
There is only one Template NIC from the above screenshot. Check if there is "Network Settings" created for this Template NIC.
Note: The "Network Settings" section is hidden in between "DOMAIN AND WORKGROUP SETTINGS" and "DNS SETTINGS".
From the below screenshot, there is no network settings configuration created for the template NIC.
Now, click on the "+" next to "NETWORK SETTINGS" and select either "Use DHCP" or assign it a static IP address based on the requirements:
Note: The number of Template NICs should equal the number of "Network Settings" configured. If there is more than one Template NIC, configure an equal number of "Network Settings".
|
KB8098
|
How to renew your expired licenses or update your cluster with renewed licenses
|
This article describes the procedure for renewing expired licenses and for buying new licenses.
|
This article describes the procedure for renewing expired licenses, purchasing new licenses and where to look for information on updating each cluster with renewed licenses.
|
If you need to renew expired licenses in your portal inventory, please contact your Account team and the Renewals team at [email protected]. You may also initiate a renewal request from the support portal here https://portal.nutanix.com/page/assets/expiring/list. If you have already renewed your licenses but the cluster still reflects the original expiration date, you may need to update the cluster's licensing. You can review the steps for this in the License Manager Guide https://portal.nutanix.com/page/documents/details?targetId=License-Manager:lmg-licmgr-pnp-licensing-convert-t.html. These steps are also described in the article How to update Prism Element and Prism Central with renewed licenses https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e000000CqHhCAK.
|
KB4057
|
Unable to access IPMI IP from local CVM and VMK when using shared port
|
The IPMI shared port cannot bridge L2 traffic from the server to the IPMI and vice versa,
so the local CVM and host cannot access the IPMI through its IP address.
|
The IPMI shared port cannot bridge L2 traffic from the server to the IPMI and vice versa. In addition, almost no intelligent switch can forward a packet back out of the same interface it arrived on. This means the local CVM and host cannot reach the IPMI through its IP address. The image below shows a packet from the CVM to the IPMI; in practice, the communication can already fail at the ARP level.
|
Use a dedicated IPMI port. A shared port cannot avoid this problem.
|
KB16881
|
File Analytics fetches old cluster IPs via API call
|
File Analytics fetches old cluster IPs via API calls even after the correct cluster IPs are held in the cluster configuration.
|
Despite successfully updating the CVM, host, and cluster IP addresses, FAVM continues to send API calls to the old cluster IP address.
|
Troubleshooting steps:
1. Confirm the current CVM IPs from the hosts file.
nutanix@NTNX-22SH5H150160-A-CVM:x-x-x.28:~$ allssh sort -k2 /etc/hosts
2. Confirm CVM IPs from zeus.
nutanix@NTNX-22SH5H150160-A-CVM:x-x-x.28:~$ allssh sort -k2 data/zookeeper_monitor/zk_server_config_file
3. On FAVM fetch retrieve_config info to confirm correct CVM and Cluster IPs are updated.
[nutanix@NTNX-x-x-x-20-FAVM ~]$ sudo /opt/nutanix/analytics/bin/retrieve_config
4. If the information from retrieve_config is incorrect, run the below command to update the correct cluster IPs. Note: with File Analytics 3.3.0 or later, the python command needs to be changed to python3; apply the same to the required scenarios accordingly.
nutanix@FAVM~$ sudo python /opt/nutanix/analytics/bin/update_config --cvm_virtual_ip=yy.yy.72.30 --cvm_cluster_ips=yy.yy.72.118,yy.yy.72.128,yy.yy.72.89
5. After updating the correct cluster IPs, re-confirm the information from retrieve_config.
6. On the FAVM, in the log file mnt/logs/host/avm_management/avm_management.log.INFO, confirm API calls to the old cluster IP 10.240.x-x.
2024-03-07 10:02:15Z,052 INFO 136880 run_avm_management:run_file_server_sync: 51 - Running NOSFileserverSyncManager
7. We can observe that the old IPs were being fetched by the __get_cvm_ips_from_nos function in nos_cvm_config_sync_manager.py:
creation_time": "2022-06-17T07:45:03Z", "categories_mapping": {}, "categories": {}}}]}
8. On further observation, we can see that the FAVM is failing to sync NOS IPs.
2024-04-08 12:02:11Z,724 ERROR 222764 nos_cvm_config_sync_manager.py:__sync_nos_ips: 104 - NOS version information not available.
9. The NOS sync function retrieves details using the virtual IP. Using the command 'edit-zkmigration-state print_zkserver_config_proto', we can fetch the old cluster IP history.
nutanix@NTNX-22SH5H150166-A-CVM:10.0.x-x:~$ edit-zkmigration-state print_zkserver_config_proto
10. In the Aplos Engine logs, there is a cache entry that is referenced during the "Aplos Prism Element - Prism Central Remote Connection". We can see the stale token entry in the local connection information referenced during the remote connection: refresh_token_endpoint: "https://10.240.x-x:9440/api/nutanix/v3/oauth/token"
2024-04-19 07:27:32,757Z DEBUG local_connection_info.py:58 Update local connection leader called.
11. To clear the stale token entry, restart aplos and aplos_engine:
allssh genesis stop aplos aplos_engine; cluster start
|
KB10931
|
zkalias_check_plugin fails in particular nodes even though there is no problem with zookeeper configuration
|
There was a case where zkalias_check_plugin failed on particular nodes even though there was no problem with the zookeeper configuration.
|
There was a case where zkalias_check_plugin was failing on a particular node, detecting a zookeeper configuration discrepancy in Zeus, even though there was no problem with the zookeeper configuration. For example:
Detailed information for zkalias_check_plugin:
However, the zookeeper configuration had been found valid based on the steps described in KB 1700 http://portal.nutanix.com/kb/1700, as follows:
- The content of the /home/nutanix/data/zookeeper_monitor/zk_server_config_file file was identical on all CVMs
- zeus_config_printer showed zookeeper_myid in the node_list blocks of the CVMs that had been chosen as zookeeper nodes, as expected
- The content of the /etc/hosts file was identical on all CVMs
- The latest configuration printed by "edit-zkmigration-state print_zkserver_config_proto" matched the correct configuration that zkalias_check_plugin printed in the FAIL message
- All zookeepers were running well on the CVMs that had been chosen as zookeeper nodes and responded correctly to "zkServer.sh status"
- zookeeper_monitor was running well on all nodes, with no FATALs
|
Nutanix is aware of this issue and is investigating the cause.
|
KB2758
|
Empty Interface Configuration Files on CVMs May Cause Cluster Services to not Start
| null |
If extra interface configuration files are created on a CVM, cluster services may not start after the next CVM reboot since it may get stuck trying to start an interface that does not exist. An example of how this might occur is if during a CVM IP address change, the filename is mistyped and an extra file is created, for example, ifcfg-eth9 rather than ifcfg-eth0.You may see log entries similar to the following in /home/nutanix/data/logs/genesis.out:
2015-10-05 11:41:39 WARNING ipv4config.py:294 Failed to load network config from /etc/sysconfig/network-scripts/ifcfg-eth9: [Errno 13] Permission denied: '/etc/sysconfig/network-scripts/ifcfg-eth9'
|
Verify the additional interface configuration file(s) are not needed. Keep in mind that some CVMs may potentially have, for example, ifcfg-eth2 or even ifcfg-eth3 files in use if the CVM is multi-homed. KB 2127 https://portal.nutanix.com/kb/2127 covers multi-homing a CVM.As long as the additional interface configuration file(s) are not needed, you can remove those files from the /etc/sysconfig/network-scripts/ directory on the CVM.Repeat the above steps for any additional CVMs in the cluster that have unneeded interface configuration files.Cluster services should start normally shortly after the unneeded interface configuration files have been removed.
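Before removing anything, you can enumerate candidate stale files read-only. This hypothetical helper prints the ifcfg-eth* files in a directory whose interface name is not in the list of known-valid interfaces you pass in; review its output manually before deleting any file.

```shell
# Hypothetical read-only helper: print ifcfg-eth* files whose interface name
# is not in the list of known-valid interfaces passed as arguments.
find_stale_ifcfg() {
  dir="$1"; shift
  for f in "$dir"/ifcfg-eth*; do
    [ -e "$f" ] || continue
    iface="${f##*/ifcfg-}"
    case " $* " in
      *" $iface "*) ;;     # interface is known-valid: keep the file
      *) echo "$f" ;;      # no matching interface: candidate for removal
    esac
  done
}
# Example: find_stale_ifcfg /etc/sysconfig/network-scripts eth0 eth1 eth2
```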
|
KB15745
|
Nutanix Files , Data Lens - Files Recall issue
|
In some scenarios, a recall of files from Data Lens may show as successful in the Data Lens dashboard, but the customer cannot open the recalled files.
|
Description:
A customer can recall files from the Data Lens dashboard with a success status, but the file is previewed with an X sign, and the customer may not be able to copy or open the file.
Identifying the problem:
1. Get the share name and the file they tried to recall. With this information, identify the owner FSVM of the share; refer to KB 4062 https://portal.nutanix.com/kb/4062.
2. Connect with SSH to the owner FSVM and search in /home/log/tiering:
E20231027 15:03:34.520661Z 138318 async_s3_client.cc:1657] PutObjectDone, error processing request request_url=https://s
This error may be due to either an invalid certificate or a wrong CA certificate being used while communicating with the object store.
|
The solution is to ensure that a valid certificate is used and that the correct CA certificate was uploaded. Check with the customer and ensure they have the secret key and the CA certificate from the S3 provider.
1. Go to Update tiering location.
2. Provide the secret and other information.
3. At the bottom, under certificate, upload the CA certificate in PEM format and click Validate.
Workaround:
If the customer does not have the CA certificate, we can uncheck Validate Certificate, but this will require restarting the tiering engine on the FSVMs.
To restart the tiering engine, kill minerva_tiering on each FSVM.
|
KB15924
|
SSO configuration through Microsoft Entra ID fails to parse metadata
|
IDP authentication fails for Azure Active Directory (Microsoft Entra ID)
|
While configuring SSO for Microsoft Entra ID on Prism Central, uploading an XML file for IDP authentication fails with the below error message:
"Error while parsing metatdata: Unicode string with encoding declaration are not supported. Please use bytes input or XML fragments without declaration"
|
1. Download the Federation Metadata XML file.
2. Open the XML file in a text editor.
3. Modify the value "<?xml version="1.0" encoding="utf-8"?>" to "<?xml version="1.0"?>".
For example:
Original XML file:
<?xml version="1.0" encoding="utf-8"?><md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" validUntil="2018-03-30T11:27:58Z" cacheDuration="PT1522841278S" entityID="https://[Identity Service Provider].com"> <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> <md:KeyDescriptor use="signing">
Modified XML file:
<?xml version="1.0"?> <md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" validUntil="2018-03-30T11:27:58Z" cacheDuration="PT1522841278S" entityID="https://[Identity Service Provider].com"> <md:IDPSSODescriptor WantAuthnRequestsSigned="false" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"> <md:KeyDescriptor use="signing">
4. Finally, save the XML file and proceed to upload the file onto Prism Central
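Steps 2-3 can also be done non-interactively with sed. This sketch assumes the declaration uses lowercase utf-8 exactly as in the example above; adjust the pattern if your file differs, and the filename is your choice.

```shell
# Remove the encoding attribute from the XML declaration on line 1 only.
# The exact utf-8 casing is an assumption; check your file first.
strip_encoding_decl() {
  sed -i '1s/ encoding="utf-8"//' "$1"
}
# strip_encoding_decl metadata.xml
```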
|
KB13281
|
Nutanix discontinues DELL Poweredge Firmware ISO qualification
|
This KB article describes about the discontinuation of Firmware Qualification on DELL poweredge platforms.
|
The Nutanix Platforms Engineering team qualified Dell PowerEdge firmware bundles (platform-specific bootable ISOs) per the regular release cadence from Dell. After qualification and approvals, Nutanix updated the qualified firmware versions on the Dell PowerEdge Hardware Firmware Compatibility List ( HFCL https://portal.nutanix.com/page/documents/details?targetId=Dell-Hardware-Firmware-Compatibility:del-dell-14g-sw-fw-compat-r.html). As there is no LCM support on PowerEdge platforms, customers and Support relied on HFCL updates to learn the latest supported firmware bundle.
Dell has transitioned Dell platform-specific bootable ISOs to the Dell Repository Manager (DRM) in version 3.4 and above. As these ISOs are no longer available, Nutanix is not able to qualify and publish a support matrix (HFCL) for this specific ISO content.
The last bootable ISO qualified by Nutanix was BOOTABLE_21.12.00.24, which is now unavailable on the Dell portal.
Nutanix now plans to update the HFCL with the latest qualified iDRAC and BIOS as qualified on the equivalent Dell XC platform variants.
For more details about qualified firmware bundles or individual firmware, contact Dell Support.
|
Refer to the DRM Documentation here : https://www.dell.com/support/kbdoc/en-in/000177083/support-for-dell-emc-repository-manager-drm
|
KB11645
|
HPE: Boot to hypervisor fails intermittently on DX380-24 with Tinker card
|
Boot to hypervisor fails intermittently on DX380-24 with Tinker card
|
On HPE platforms with the "HPE NS204i-p Gen10+ Boot Controller" (Tinker card) running iLO firmware version 2.33, the hypervisor may sometimes fail to boot from the Tinker card following a host restart. This happens because the host fails to identify the Tinker card boot disk; the host gets stuck in UEFI boot mode and tries to boot with other boot options, which fail.
Affected Platforms:
DX380-24 Platform with Tinker card Boot disk.
|
Verify that the server is installed with the "HPE NS204i-p Gen10+ Boot Controller" from iLO --> System Information --> Device Inventory before performing the workaround below.
Workaround:
1. Log in to iLO.
2. Select "Momentary Press" to power off.
3. Click "Momentary Press" again to power on the server.
4. Press "F11" to enter the boot menu on the POST screen.
5. Select "HPE NS204i-p Gen10+ Boot Controller" in the boot menu to boot the server from the Tinker card.
|
KB3508
|
Checking the MAC Address Table and Interface Statistics on an AHV Cluster
|
This KB article describes some troubleshooting tips to resolve any issues with the Open vSwitch bridge on an AHV cluster.
|
Checking the MAC address table and interface (I/F) statistics on the Open vSwitch bridge is useful when you experience any network issues on an AHV cluster.
In the following figure, consider that VM C on Node D fails to communicate with the other VMs.
To troubleshoot the problem on the Open vSwitch bridge (br0 and others), check the following on the AHV host.
- Configuration
- ARP Table
- MAC Address Table
- I/F Counter (TX/RX Stats)
- Packet Capture
|
Checking the AHV Host of the VM
You can check the AHV host on which the VM is running by using either of the following ways.
To check the AHV Host of the VM by using the Prism web console, do the following.
1. Go to VM > Table.
2. Check the VM Name and Host fields.
To check the AHV host of the VM by logging on to the Controller VM, do the following.
Run the following Acropolis CLI command to get the UUID of the host.
nutanix@cvm$ acli vm.get VM_name
Run the following command to list all the hosts and then compare the UUID of the host you obtained in step 1 with the host names listed in the output to determine the name of the AHV host on which the VM is running.
nutanix@cvm$ ncli host ls
Checking the vSwitch Setting
To check the vSwitch setting on an AHV cluster, see KB-3488 https://portal.nutanix.com/#page/kbs/details?targetId=kA032000000CmltCAC.
Checking the MAC Address Table
Run the following command to check the MAC address table.
root@host# ovs-appctl fdb/show bridge_name
Example
root@host# ovs-appctl fdb/show br0
The port is of-port.
If the VLAN ID is 0, the VLAN is an untagged VLAN and you can use the grep command (SOURCE) to get the source address of the sender VM.
If there is an entry on the collect port, this means that the bridge has received the packet.
To map of-port with the vNIC name, run the following command.
root@host# ovs-vsctl --columns=name,ofport list Interface
Example
root@host# ovs-vsctl --columns=name,ofport list Interface
To check which vNIC the VM is using on the Open vSwitch, see KB-3488 https://portal.nutanix.com/#page/kbs/details?targetId=kA032000000CmltCAC.
Checking the I/F Counter
Packets may be dropped at the host/OVS/host-driver level, incrementing that counter, or they may be delivered through to the guest OS and dropped at the guest/NIC/guest-driver level, incrementing the in-guest counter. Run the following command to check the I/F counter when it needs to be investigated at the host/OVS/host-driver level. Otherwise, use the ifconfig command inside the guest VM.
root@host# virsh domifstat VM_ID NIC_name
Example
# virsh domifstat 6 tap2
To check the ID of the VM, see KB-3488 https://portal.nutanix.com/#page/kbs/details?targetId=kA032000000CmltCAC.
The same interface statistics are shown in the Prism UI. Interface counters from inside the guest VM are not reflected in the Prism UI. Determining which packet flow caused the increase in the I/F counter is difficult. To check the packet flow, you can increase the traffic rate between the VMs.
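When comparing samples over time, it can help to extract only the error/drop counters from saved domifstat output. This is a small sketch assuming the standard three-column (interface, counter, value) output format of `virsh domifstat`; the helper name is hypothetical.

```shell
# Print only the *_errs and *_drop counters from saved domifstat output.
show_drops() {
  awk '$2 ~ /_(errs|drop)$/ {print $2, $3}' "$1"
}
# Example: virsh domifstat 6 tap2 > /tmp/stats.txt && show_drops /tmp/stats.txt
```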
Note: To capture the traffic on the Open vSwitch, contact Nutanix Support for assistance.
|
KB6162
|
Drives not being mounted after LSI HBA replacement
|
This KB describes a scenario where services fail to start after HBA replacement or a node is stuck in kNew state during cluster expansion.
|
Issues encountered after LSI HBA replacement
DO NOT assume there was an HBA replacement that created this condition. Confirm it, and refer to KB 10666 https://portal.nutanix.com/kb/10666 as part of confirming the root cause. You may be running into one of the scenarios captured in that KB, which gives a good description differentiating the 3 scenarios, one of which is the scenario in this KB.
Running the list_disks command errors out with the following stacktrace:
Traceback (most recent call last):
The Linux command lsscsi reports all the drives present in the system.
The current host's sas_address diverges from the sas_address in /etc/nutanix/hardware_config.json.
nutanix@cvm$ allssh "cat /sys/class/scsi_host/host*/host_sas_address; grep sas_address /etc/nutanix/hardware_config.json"
Issues encountered after adding a new node to the cluster
Node added successfully but only these services show up as running:
cassandra: [5125, 5158, 5159]
Cassandra service is continuously crashing with the following error in cassandra_monitor:
F0808 04:56:15.927150 11336 cassandra_monitor.cc:3867] No metadata disks found on node: 190881906
Hades crashes with the following error:
2018-08-08 04:27:51 INFO disk_manager.py:415 Configuring Hades
Node status in Zeus shows as kNewNode.
node_list {
The current host's sas_address diverges from the sas_address in /etc/nutanix/hardware_config.json.
nutanix@cvm$ allssh "cat /sys/class/scsi_host/host*/host_sas_address; grep sas_address /etc/nutanix/hardware_config.json"
|
In both scenarios, you will need to create a new hardware_config.json file.
Before proceeding with the steps below, verify if the node has multiple HBAs. If so, the Hardware Replacement guide already mentions a script that can be run after replacement, hba_breakfix. This script can be run instead of running the manual steps given below. The script should be run from another Controller VM (CVM) after ensuring that the affected host is not running any VMs. Check the Hardware Replacement document here https://portal.nutanix.com/page/documents/list?type=hardware&filterKey=Component&filterVal=HBA.
Follow these steps:
1. While still booted into the CVM, take note of the serial numbers of the SSDs that make up your md0 device.
2. Boot the affected host into Phoenix.
3. Once you are in Phoenix, do the following steps:
Before performing the below steps, check for the existence of the RAID as some Phoenix versions create it on boot and mount it to tmp_mnt.
[root@phoenix ~]# cat /proc/mdstat
If the RAID exists, go to step b.
Rebuild RAID for md0. (Your devices might be named differently.)
[root@phoenix ~]# mdadm -A -R /dev/md0 /dev/sda1 ; mdadm --add /dev/md0 /dev/sdb1
Mount both root partitions /dev/md0, /dev/md1. The steps below assume a dual SSD system where you would have mirror devices and that you are in /root. If you are not on a dual SSD system, verify the partitions (typically, sda1 and sda2).
[root@phoenix ~]# mkdir tmproot1 tmproot2
Determine which mirror device contains the .nutanix_active_svm_partition marker. (The commands below show where the file was found. You may also see a .nutanix_active_svm_partition.old file.) In the example below, it is found under tmproot1 directory.
[root@phoenix ~]# ls -al tmproot*/.nutanix_active_svm_partition*
Generate a new hardware_config.json
[root@phoenix ~]# python /phoenix/layout/layout_finder.py local
For Phoenix 4.4.x and later:
[root@phoenix ~]# python /usr/bin/layout_finder.py local
Move the new hardware_config.json under /root to tmproot1/etc/nutanix/. Once done, it will ask you to overwrite the existing file. Press Y.
[root@phoenix ~]# mv hardware_config.json ./tmproot1/etc/nutanix/
Reboot back to the CVM.
[root@phoenix ~]# python /phoenix/reboot_to_host.py
Note: reboot_to_host might fail if Phoenix ISO was plugged in directly to the IPMI Java console. In such cases, just plug out the ISO from Java console and reboot the host.
Notes
Note 1: There was an instance (case 47090) where new disk would still not mount after following the above steps.
If the "host_sas_address" file and "hardware_config.json" show different SAS addresses (or there is an extra SAS address under /etc/nutanix/hardware_config.json), check if host has 2 SAS adapters. If yes, make sure both are enabled on the host and set as paththrough device on the CVM.
Example:
Check how many SAS adapters the host has:
nutanix@cvm$ cat /sys/class/scsi_host/host*/host_sas_address; grep sas_address /etc/nutanix/hardware_config.json
Steps on how to enable SAS adapter:
1. Log in to the vSphere Web Client.
2. Select the ESXi host in question.
3. Go to Configure -> Hardware -> PCI devices, select Edit, and look for "LSI Logic" devices.
4. Enable the ones that are not checked.
5. Restart the host and add the new SAS adapter as a passthrough device on the CVM.
This particular issue was seen with the second SAS adapter on an NX-3155-G5 node.
Note 2: As seen in case 601387, a healthy drive was not added to the cluster's storage pool due to wrong cabling between the HBA and the drive enclosure (cables swapped between the enclosure and HBA port).
The drive will be mounted in the CVM and writing to it will be possible. In hades.out, you will have the following line logged. (The drive will also not be present in the output of list_disks.)
2019-11-06 15:29:11 ERROR disk_manager.py:2718 Disk /dev/sdf with serial Z1X5RC9C is not mounted in hades proto, skipping the disk
Compare the cabling on other node of the same model in the cluster that does not have the problem with the below commands. (Use -p to select the port. See also KB 4629 http://portal.nutanix.com/kb/4629.)
nutanix@cvm$ sudo /home/nutanix/cluster/lib/lsi-sas/lsiutil -p 1 -i | grep "Current Port State" -A 2
Below are the outputs from 2 CVMs where the cabling is different (cables swapped between HBAs). The CVMs had 2 HBAs installed and the drive that was not admitted is on the second one, which is why -p 2 option was used.
OK:
Swapping the cables to the HBA solved the problem in that case. The details of the investigation are in TH-3128 https://jira.nutanix.com/browse/TH-3128.
Note 3: If you have replaced the hardware_config.json file on a Storage Only node, you will need to add "minimal_compute_node": true flag back to the file.
Before:
"hardware_attributes": {
After:
"hardware_attributes": {
|
""Verify all the services in CVM (Controller VM)
|
start and stop cluster services\t\t\tNote: \""cluster stop\"" will stop all cluster. It is a disruptive command not to be used on production cluster"": ""Get the CPU location (Comes very handy when identifying socket location for failing CPU)""
| null | null | null |
KB14475
|
Space Accounting | Nutanix Storage Usage Guide and FAQ
|
This article contains frequently asked questions about disk space usage and how to find more detailed answers.
|
Nutanix clusters are renowned for their distributed storage capabilities, making them a popular choice for storing virtual machines (VMs). However, the advanced features offered by Nutanix, such as compression or data resiliency, can sometimes make calculating disk space usage a complex task. To help you navigate this challenge, this article aims to give you a comprehensive overview of the techniques that can be used to predict the space used in a Nutanix cluster.
The following are frequently asked questions about space usage and storage, and answers to them:
|
What VM or VMs are taking up the most space on my cluster?Use KB 14515 - Space Accounting | Determining VM Space Usage https://portal.nutanix.com/kb/14515 to see what options you have for tracking your space usage.
Is the cluster going to run out of space if I don't do anything?Use KB 14549 - Space Accounting | Storage Space Issue Warning and Alerts https://portal.nutanix.com/kb/14549 to see how to check your overall usage and things that could identify potential problems.
Are the space-saving settings on my containers configured correctly?Use KB 14525 - Space Accounting | Are the space saving settings on my containers configured correctly https://portal.nutanix.com/kb/14525 to see best practices and how to check your current configuration and the amount of savings.
How can I tell how much snapshot space is being used on a container?Use KB 14516 - Space Accounting | Identifying Snapshots on a Nutanix Cluster https://portal.nutanix.com/kb/14516 to learn how to identify snapshots in a cluster and find their reclaimable space.
How can I tell if an uploaded image is using up too much space on my cluster?Use KB 14600 - Space Accounting | Cluster space used by images https://portal.nutanix.com/kb/14600 to see how to determine how much space is used by images on a cluster.
Why does Prism show more total usage than the VMs should be taking up?Use KB 14705 - Space Accounting | Prism show cluster/container space usage different than total usage from all VMs on it https://portal.nutanix.com/kb/14705 to see where your space could be going.
Why does Prism show a different amount of space usage for a VM than the Guest OS shows?Use KB 14698 - Space Accounting | Prism shows different space usage for a VM than the Guest OS shows https://portal.nutanix.com/kb/14698 to see why there's a difference and what action you can take.
I deleted some data. When will my space be freed up? How will I know when it's done reclaiming?Use KB 2924 - Why does my Nutanix cluster not show free or reclaimed space after deleting many VMs or files? https://portal.nutanix.com/kb/2924 to understand what to expect after deleting VMs and files and reclaiming space.
|
KB10364
|
Gradual space utilization increase in clusters using Nearsync data protection feature
|
Describes the procedure for identifying and removing leaked LWS snapshot directories.
|
Clusters using the NearSync feature may be affected under certain circumstances by a space leak issue on the container.
Note: This KB describes only one of the possible issues with leaked/stale snapshots. The issue described here is different from KB 9210, where the staging area usage exhibits the same effect.
Note 2: There is a general KB 12172 https://portal.nutanix.com/kb/12172 about stale/leaked snapshots, with references to multiple specific KBs, one for each different issue.
Background
The problem starts with a NearSyncFinalizeLWS operation that results in a timeout. In the example below, in the stargate.INFO logs, NearSyncFinalizeLWS issues NfsSnapshotGroup with LWS ID 44465. However, the NfsSnapshotGroup fails with kTimeout, resulting in NearSyncFinalizeLWS failing with kTimeout.
I1109 15:59:03.347205 22105 cg_finalize_lws_op.cc:366] cg_id=4526869826412500882:1570808688267568:3553319818 operation_id=44181021263 Allocated new lws id: 44465
However, the NfsSnapshotGroup actually completes immediately after
I1109 15:59:13.706892 22128 snapshot_group_op.cc:371] op_id=44181021271 SnapshotGroup RPC to the master completed with stargate error kNoError and NFS error 0
The NearSyncFinalizeLWS failure with kTimeout triggers a retry from Mesos. This is seen for the same LCS UUID 516afc3b-7648-4d50-9472-f38e4bd2e81f. Stargate now allocates a new LWS ID 44467 and issues a NfsSnapshotGroup, which succeeds and creates another snapshot for the same LCS UUID 516afc3b-7648-4d50-9472-f38e4bd2e81f.
I1109 15:59:15.316640 22107 cg_finalize_lws_op.cc:366] cg_id=4526869826412500882:1570808688267568:3553319818 operation_id=44181163674 Allocated new lws id: 44467
Since this successful snapshot is known to Cerebro, Mesos GC issues a CerebroRecursiveRemoveDir for the snapshot files with path prefix .snapshot/38/4526869826412500882-1570808688267568-195329338/.lws/516afc3b-7648-4d50-9472-f38e4bd2e81f/44467, and they are deleted. The snapshot files with path prefix .snapshot/38/4526869826412500882-1570808688267568-195329338/.lws/516afc3b-7648-4d50-9472-f38e4bd2e81f/44465 are never deleted. This is because the snapshot that did succeed in NFS after the Stargate CG controller had timed out was never known to Cerebro/Mesos and was never associated with LCS UUID 516afc3b-7648-4d50-9472-f38e4bd2e81f. This implies that the NFS snapshot created for LWS ID 44465 is leaked. Not only does it leave behind undeleted files in the NFS .snapshot namespace, but these files also reference snapshot vdisks that consume space.Root Cause: This behaviour is a day-0 design issue in Stargate: it silently ignores the kTimeout of the NfsSnapshotGroup, even though a subsequent retry has resulted in the creation of a snapshot. Since the NearSyncFinalizeLWS issued by Mesos to create the NearSync snapshot failed, this snapshot path is not known/reported to Mesos, and hence it is not removed during snapshot garbage collection.Solution- The issue is to be addressed in ENG-353080 to prevent further leaking.- For the already leaked snapshots, the plan is to build a scrubber that will monitor the files and remove stale data in the background - ENG-357815
|
WorkaroundUntil a fix and a scrubber are available, the directories can be identified and cleaned up manually. As this involves data deletion, a TechHelp is mandatory before running the deletion.Part 1 - IdentificationThe problem can be identified by comparing the nfs_ls output with the state in Cerebro and Mesos. The following script creates a list of all the paths that contain leaked data and can be cleaned up. Download the following script onto one CVM in the cluster and verify the md5sum is correct:
https://download.nutanix.com/kbattachments/10364/detect-stale-snap.py https://download.nutanix.com/kbattachments/10364/detect-stale-snap.pymd5: 4b2ecad2eb9a02fd691b174ecd21ca6c
Usage of the script:
1. Run the following command on any CVM to get the PD names in the file pds.txt. List of PDs for normal Async DR:
nutanix@NTNX-14SM15510002-C-CVM:10.66.xx.xx:~$ mkdir analysis
and/or list of PDs for Leap DR:
nutanix@NTNX-14SM15510002-C-CVM:10.66.xx.xx:~$ mkdir analysis
2. Run following command on the same CVM to get snapshot vector output in file snapshot_id.txt:
nutanix@NTNX-14SM15510002-C-CVM:10.66.xx.xx:~/analysis$ for pdname in `cat pds.txt`; do cerebro_cli query_protection_domain $pdname list_snapshot_handles="true;regular"; done > snapshot_id.txt
3. Get the nfs_ls output:
nutanix@NTNX-14SM15510002-C-CVM:10.66.xx.xx:~/analysis$ nfs_ls -LiaR > nfs.txt
4. Run the script using the files from steps 2 and 3:
nutanix@NTNX-14SM15510002-C-CVM:10.66.xx.xx:~/analysis$ python detect-stale-snap.py snapshot_id.txt nfs.txt
The resulting "removable.txt" contains a list of all the leaked LWS ID directories that can be removed. Example:
/cae_ntx_prod/.snapshot/27/5298392737567312630-1556652582575849-1541953927/.lws
/cae_ntx_prod/.snapshot/87/5298392737567312630-1556652582575849-4030421587/.lws
/cae_ntx_prod/.snapshot/94/5298392737567312630-1556652582575849-4129031094/.lws
/cae_ntx_prod/.snapshot/91/5298392737567312630-1556652582575849-2433463591/.lws
/cae_ntx_prod/.snapshot/25/5298392737567312630-1556652582575849-1686415325/.lws
/cae_ntx_prod/.snapshot/69/5298392737567312630-1556652582575849-4745376269/.lws
/cae_ntx_prod/.snapshot/12/5298392737567312630-1556652582575849-4216400512/.lws
/cae_ntx_prod/.snapshot/70/5298392737567312630-1556652582575849-4129031070/.lws
/cae_ntx_prod/.snapshot/79/5298392737567312630-1556652582575849-4558995579/.lws
/cae_ntx_prod/.snapshot/39/5298392737567312630-1556652582575849-4249052039/.lws
Part 2 - Removal (only to be performed in the context of a TechHelp)
Once all the leaked data has been identified, remove those files using the clear_snapshot_folder.py script. Download the following script onto one CVM in the cluster and verify the md5sum is correct: https://download.nutanix.com/kbattachments/10364/clear_snapshot_folder.py https://download.nutanix.com/kbattachments/10364/clear_snapshot_folder.py
md5: b1721be3bceeba71f1bfb7a17159e9b9
Usage of the script:
nutanix@NTNX-19SM6H320279-B-CVM:10.8.xx.xx:~/analysis$ while read line; do python clear_snapshot_folder.py --prefix $line; sleep 1; done < removable.txt
At least a curator full scan is required for the space to be reclaimed.
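The comparison that detect-stale-snap.py performs can be sketched as follows. This is a simplified illustration, not the actual script: the function name and input shapes are assumptions. Any .lws directory seen in the nfs_ls output whose trailing snapshot ID is not among the snapshot handles Cerebro reports is treated as leaked.

```python
import re

# Simplified sketch (hypothetical, not the shipped detect-stale-snap.py):
# cerebro_snapshot_ids - snapshot IDs known to Cerebro, as strings
# nfs_paths            - paths seen in nfs_ls output
def find_leaked_lws_dirs(cerebro_snapshot_ids, nfs_paths):
    leaked = set()
    for path in nfs_paths:
        # Snapshot paths look like .../.snapshot/<n>/<fsid>-<epoch>-<snap_id>/.lws
        m = re.search(r"/\.snapshot/\d+/\d+-\d+-(\d+)/\.lws", path)
        if m and m.group(1) not in cerebro_snapshot_ids:
            leaked.add(path)
    return sorted(leaked)
```

The real script also cross-checks the Mesos state; this sketch only shows the core set-difference idea.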
|
KB16800
|
Admin Duplicate entity created by Prism Central can deny access
|
Prism Central might create a duplicate admin user, which causes UI access issues
|
A behavior has been observed where Prism Central may experience access issues even after logging in with the local admin user. Issue identification: 1. Enable the aplos debug gflag:
nutanix@PCVM:~$ allssh 'echo "--debug=True" > ~/config/aplos.gflags'
2. Check ~/data/logs/aplos.out for the following error signature:
2024-04-25 17:00:27,703Z INFO athena_auth.py:143 Basic user authentication for user admin
3. You might also see nuclei not working properly, as in the following sample:
nutanix@NTNX-171-69-216-211-A-PCVM:~$ nuclei
4. After checking, turn off debug logging by deleting the gflag files:
GFlags:
nutanix@NTNX-PCVM:~$ allssh /bin/rm ~/config/aplos.gflags
|
Use the remove_stale_entries_v1.py script to clean up the last registration, which is the duplicate one (in this case, 2d01dd6a-12fb-4f44-8d2f-a91a77a3c4c5): https://download.nutanix.com/kbattachments/12005/remove_stale_entries_v1.py https://download.nutanix.com/kbattachments/12005/remove_stale_entries_v1.py
File Size (in Bytes): 2480File MD5: 95b4254ee48e5f0dbcb5e93f5be0cc1f
1. Download the script to /home/nutanix/tmp:
nutanix@CVM:~/tmp$ wget https://download.nutanix.com/kbattachments/9489/remove_stale_entries.py
2. Run the script via the following:
nutanix@CVM:~/tmp$ python remove_stale_entries.py --uuid [USER UUID] --entity_type user --cas
|
KB4615
|
Hyper-V - "setup_hyperv.py setup_scvmm" script fails to verify SCVMM service account due to incorrect IndigoTcpPort value
| null |
Executing the setup_hyperv.py setup_scvmm script to join a Hyper-V cluster to SCVMM fails on the Verifying SCVMM service account step. Following is an example of the failure:
nutanix@cvm:~$ setup_hyperv.py setup_scvmm
The /home/nutanix/data/logs/setup_hyperv.log file shows the following as the cause of the failure:
2017-07-13 14:13:06 INFO zookeeper_session.py:92 setup_hyperv.py is attempting to connect to Zookeeper
|
Warning: The following steps involve modifying the Windows registry. Serious problems might occur if you modify the registry incorrectly by using Registry Editor or by using another method. Create a backup before modifying the registry and modify the registry with caution. The following excerpt from the error in the setup_hyperv.log log points to the cause of the failure:
The type or name syntax of the registry key value IndigoTcpPort under Software\Microsoft\Microsoft System Center Virtual Machine Manager Administrator Console\Settings is incorrect.
This indicates that the registry value for IndigoTcpPort in the HKLM\Software\Microsoft\Microsoft System Center Virtual Machine Manager Administration Console\Settings registry key on the SCVMM server has incorrect data or is missing. The IndigoTcpPort value may be present as type REG_SZ without any data, but it should instead be of type REG_DWORD with data 1fa4, which is hexadecimal for 8100. (Port 8100 is utilized by SCVMM.) To resolve this issue, delete the IndigoTcpPort value of type REG_SZ, if it exists, and create a new REG_DWORD value with data 1fa4. Or, if IndigoTcpPort already exists as type REG_DWORD, ensure the data is set to 1fa4.
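The fix above can also be expressed as a registry file that can be imported on the SCVMM server. This is a sketch for illustration: note the article shows both "Administrator Console" and "Administration Console" spellings for the key path, so confirm the exact path on your server before importing.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft System Center Virtual Machine Manager Administrator Console\Settings]
"IndigoTcpPort"=dword:00001fa4
```

Importing this file (after deleting any existing REG_SZ IndigoTcpPort value) sets the value to REG_DWORD 0x1fa4 (8100 decimal), as the article describes.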
|
KB8026
|
How to change number of vNIC queues and enable RSS virtio-net Multi-Queue for AHV VMs
|
Considerations and guidance for using RSS Multi-Queue to tweak network performance for certain network I/O-intensive applications on AHV UVMs.
|
This article discusses considerations and provides guidance for leveraging Multi-Queue in virtio-net to help tweak network performance for certain network I/O-intensive applications when running on AHV UVMs.
If an application uses many distinct streams of traffic, RSS can distribute them across multiple vNIC DMA rings. This increases the amount of RX buffer space by N. Additionally, most guests will pin each ring to a particular vCPU, handling the interrupts and ring-walking on that vCPU, and thereby achieving N-way parallelism in RX processing. Increasing the number of queues beyond the number of vCPUs is not advisable. It will not provide additional parallelism.
The following workloads may have the greatest performance benefit using the solution provided:
VMs/apps where traffic packets are relatively large.VMs/apps with many concurrent connections, with network traffic moving between guests on the same host, guests across hosts or to the hosts themselves, or guest to an external system.To provide relief to vNIC RX packet drop rate increases (per KB 4540 https://portal.nutanix.com/kb/4540 - Handling packet drops from vNICs in AHV) where CPU contention of guest/host has already been ruled-out.
We can increase the 'queue' value of the AHV vNIC to allow the guest OS to leverage Multi-Queue virtio-net on AHV UVMs with intensive network I/O. Multi-queue virtio-net provides an approach that scales the network performance as the number of vCPUs increases, by allowing them to transfer packets through more than one virtqueue pair at a time.
|
It is advised to be conservative when increasing the number of queues. Never set the number of queues to be larger than the total number of vCPUs assigned to VM. Having more queues in vNIC may increase packet re-ordering and possibly lead to more TCP re-transmissions. For this reason, starting with queue size 2 is highly recommended. After making this change, monitor the UVM/application and network performance to confirm the proposed benefits whilst ensuring vCPU usage will not be dramatically/unreasonably increased, before incrementing the queue size further.
Below are the steps to perform for AHV to make additional vNIC queues available to the UVM from the hypervisor side. Refer to your Guest OS documentation for additional relevant information in any required steps to be taken within the guest OS to leverage the additional vNIC queues.
The UVM will need to be powered off to change the number of queues via acli, therefore it is recommended these changes are performed during a planned maintenance window for the UVM and its application(s). The vNIC status may change Up->Down->Up or further Guest OS reboot may be required to finalise the settings depending on the OS implementation steps.
(Optional, Best Practice) Ensure AOS and AHV are up to date. Visit https://portal.nutanix.com/ https://portal.nutanix.com/ for more information.Ensure the AHV UVM is running the latest Nutanix VirtIO driver package. Nutanix VirtIO 1.1.6 or higher is required for RSS support. ( Release Notes https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-VirtIO:Release-Notes-VirtIO | Documentation https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide-v5_18:vmm-vm-virtio-ahv-c.html | Download https://portal.nutanix.com/page/downloads?product=ahv&bit=VirtIO )Confirm the name of the VM via Prism, or via acli on a CVM (Controller VM):
nutanix@cvm$ acli vm.list
Find the MAC address and confirm the current number of assigned 'queues' for the vNIC via Prism / VM, or by using acli on a CVM:
nutanix@cvm$ acli vm.nic_get <VM Name>
Note: acli defines 'queues=' as: Maximum number of Tx/Rx queue pairs (default: 1)
Power off the UVM via Prism / VM, or via acli on a CVM:
nutanix@cvm$ acli vm.shutdown <VM Name>
For the intended UVM, increase the number of vNIC queues made available to the guest by AHV using acli on a CVM:
nutanix@cvm$ acli vm.nic_update <VM Name> <MAC address of the vnic> queues=N
Where N is the number of queues. N should be <= total number of vCPUs assigned to the UVM.
Note: If the VM is not powered off this command will fail and return output: NicUpdate: InvalidVmState: Cannot complete request in state On
Power on the UVM via Prism / VM, or via acli on a CVM:
nutanix@cvm$ acli vm.on <VM Name>
Confirm with the Guest OS vendor on whether additional steps are required within the guest to make use of the additional vNIC queues provided by AHV to the guest. Typically, Microsoft Windows has RSS enabled by default.
For example: For RHEL/CentOS, execute the following command inside the UVM guest OS:
From within the VM, confirm that irqbalance.service is active. If not, start it with "systemctl start irqbalance.service" (It is active by default in CentOS. RedHat may need to activate this):
uservm# systemctl status irqbalance.service
uservm$ ethtool -L ethX combined M
Where the value of M is from 1 to N (from Step 6). The additional queues can be checked as per the example below:
[root@localhost ~]# ethtool -S eth1
As a reminder, after making this change, monitor the VM performance to make sure an expected network performance increase is observed and that the UVM vCPU usage is not dramatically increased to impact application on the UVM.
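Note that a setting applied with ethtool -L is typically not persistent across guest reboots. One common way to reapply it automatically on RHEL/CentOS 7 is a udev rule; the sketch below is an assumption, not part of the official procedure, and the interface name (eth1), queue count (2), and ethtool path should be adjusted for your guest.

```
# /etc/udev/rules.d/60-net-queues.rules (hypothetical example)
# Reapply the combined queue count whenever the eth1 device is added.
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth1", RUN+="/usr/sbin/ethtool -L eth1 combined 2"
```

Consult your guest OS documentation for the supported way to persist NIC tuning (e.g. NetworkManager dispatcher scripts or network profiles).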
Also be aware of caveat note from RHEL 7 "Virtualization Tuning and Optimization Guide : 5.4. NETWORK TUNING TECHNIQUES https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-networking-techniques#sect-Virtualization_Tuning_Optimization_Guide-Networking-Multi-queue_virtio-net":
Currently, setting up a multi-queue virtio-net connection can have a negative effect on the performance of outgoing traffic.
For assistance with the above steps, or in case the above mentioned steps do not resolve your UVM network performance concerns, consider engaging Nutanix Support at https://portal.nutanix.com/ https://portal.nutanix.com/
|
KB7141
|
One way trust not supported between cross-domain
|
This KB describes a scenario where a customer tried to set up a one-way trust between cross domains, which is not yet supported
|
We currently do not support a scenario where a customer sets up a one-way trust between cross domains. An example case can be described as follows:
The customer has a bind user in domain A. They added the directory successfully using that user.
The users that need to log in are in domain B. The customer created 3 groups in Domain A and mapped 3 groups from Domain B; the users are then members of the Domain B groups.
The customer had a one-way trust configured such that only Domain A can access Domain B users, but not the other way around, and wanted this to work between these two cross domains.
|
The solution here is to add Domain B as well in the Authentication configuration.
|
KB16006
|
File Analytics - UI showing old data due to services crashing
|
'Request to data server failed' error observed on the FA UI after upgrading File Analytics from 3.1 to 3.3.
|
Problem Description
'Request to data server failed' error is observed on FA UI
Troubleshooting and Findings
+ JVM heap memory usage warnings are being logged continuously.
++ On the FAVM, /mnt/logs/host/monitoring/monitoring.log.INFO:
2023-10-30 23:22:49Z,092 WARNING 26366 monitor_analytics_services.py:get_elastisearch_stats: 319 - The JVM heap memory usage percent is 95.00, which is higher than the threshold value of 95
+ circuit_breaking_exception observed in multiple logs due to data larger than the limit.
++ /mnt/logs/containers/elasticsearch/elasticsearch.out:
2023-11-15 17:00:13Z,731 ERROR 26366 monitor_analytics_services.py:get_fileserver_info_from_es: 279 - Traceback (most recent call last):
+ The below command was failing with the same "Data too large" error message:
nutanix@favm:~$ curl -sS "172.28.5.1:9200/_cat/nodes?h=heap*&v"
+Health shows status green
[nutanix@FAVM ~]$ curl -sS "172.28.5.1:9200/_cluster/health?pretty=true"
+ Audit events:
++ /mnt/logs/host/monitoring/monitoring.log.INFO:
2023-11-18 06:05:06Z,871 INFO 26366 monitor_analytics_services.py:close_oldest_audit_index: 999 - Current audit events for 571ba6bf-e2ed-474e-bb58-b4730db206db fileserver is 167572298
|
Collecting additional information
Before collecting additional information, upgrade NCC. For information on upgrading NCC, see KB 2871 https://portal.nutanix.com/kb/2871.Run a complete NCC health_check on the cluster. See KB 2871 https://portal.nutanix.com/kb/2871.Collect Logbay bundle using the following command. For more information on Logbay, see KB 6691 https://portal.nutanix.com/kb/6691.
nutanix@cvm$ logbay collect --aggregate=true
If FA UI is accessible, logs can be downloaded from FA UI by selecting the “Collect Logs” option under the <username> link at the top right corner of the FA dashboard. Download the tar file on your local computer.If FA UI is not accessible, you can collect logs from Prism UI by navigating to the Health tab and then selecting Actions (on the extreme right) > Collect Logs.
Alternatively, you could collect the logs from the CVM command line with the logbay command. Open an SSH session to CVM and run the command:
nutanix@cvm:~$ logbay collect -s <cvm_ip> -t avm_logs
Log bundle will be collected at /home/nutanix/data/logbay/bundles of CVM IP provided in the command.
You can collect logs from FA VM through FA CLI, FA UI or by running logbay collect logs command on CVM. Open an SSH session to FA VM with “nutanix” user credentials and run the command:
nutanix@favm:~$ /opt/nutanix/analytics/log_collector/bin/collect_logs
This will create a log bundle in the form of a tar file. Upload the logs to the case.
Apply the following fix.
+ We can try increasing the heap size by 1 GB, then monitor whether the FA UI becomes accessible when the heap reaches >95%:
[nutanix@FAVM elasticsearch]$ curl -sS "172.28.5.1:9200/_cat/nodes?h=heap*&v"
+ Edit the /mnt/containers/config/deploy.env file and update the key es_heap_memory_mb, adding 1024 to the current value.
Before:
nutanix@favm:~$ vim /mnt/containers/config/deploy.env
After
nutanix@favm:~$ vim /mnt/containers/config/deploy.env
+ Verify the change
nutanix@favm:~$ curl -sS "172.28.5.1:9200/_cat/nodes?h=heap*&v"
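The deploy.env edit above can also be done programmatically. The following is a minimal sketch (the helper name is hypothetical; the key name es_heap_memory_mb and the file path /mnt/containers/config/deploy.env come from this article):

```python
import re

# Hypothetical helper: add `increment_mb` (default 1024) to the
# es_heap_memory_mb key in a deploy.env-style file.
def bump_es_heap(path, increment_mb=1024):
    with open(path) as f:
        text = f.read()
    new_text = re.sub(
        r"(es_heap_memory_mb=)(\d+)",
        lambda m: m.group(1) + str(int(m.group(2)) + increment_mb),
        text,
    )
    with open(path, "w") as f:
        f.write(new_text)
```

After running it against /mnt/containers/config/deploy.env on the FAVM, verify the change with the curl command shown above.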
|
KB11852
|
Nutanix Files:- Failed FSVM reboot during upgrades
|
If the FSVM is rebooted during cloud-init, it may fail to boot properly or its services may not be configured correctly.
|
After the FSVM boot disk is upgraded, cloud-init runs to configure the File Server Virtual Machine. However, cloud-init is not idempotent, and if the FSVM is rebooted during the cloud-init operation, it skips configuring the File Server VM. This might cause the File Server VM to not boot properly or its services to not come up properly.Alert Overview | Symptoms
FileServerUpgradeTaskStuck CRITICAL alert would be generated.
|
If an FSVM fails to reboot within 60 minutes during an upgrade, it is better to recover that FSVM manually. Follow these steps to recover the FSVM manually:
1. Figure out the FSVM which did not reboot during its upgrade. A running FileServerVmUpgrade Task should have the information.
nutanix@A-CVM:10.XX.XX.10:~$ ecli task.list component_list=minerva_cvm
Check VM_UUID for the FSVM from the CLI using the command ecli task.get <task id>
Example:
nutanix@A-CVM:10.XX.XX.10:~$ ecli task.get 757ae671-eba5-421c-998a-22229ffbe8da
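If you need to pull the VM UUID out of the task output programmatically, a rough sketch follows. It assumes ecli task.get emits JSON and that the UUID lives under request.arg.vm_uuid; the exact field layout can vary between versions, so inspect the real output first.

```python
import json

# Hypothetical helper: extract the FSVM VM UUID from `ecli task.get` JSON
# output. The request.arg.vm_uuid field path is an assumption.
def fsvm_uuid_from_task(task_json):
    task = json.loads(task_json)
    return task.get("request", {}).get("arg", {}).get("vm_uuid")
```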
2. Get the name of the FSVM from step 1 and, in the Prism UI, proceed to the VM page.
3. Power off the FSVM that did not reboot.
4. Detach the boot disk by editing the VM and removing the SCSI disk with index 0.
$ acli vm.get NTNX-FS1-1
5. Clone FSVM boot disk from the Files container and from the boot disk image.
i. Verify the image under FS container using nfs_ls -liar (highlighted in bold)
nutanix@A-CVM:10.XX.XX.10:~$ nfs_ls -liar /Nutanix_FS1_ctr
ii. Create clone image using acli image.create command:
nutanix@A-CVM:10.XX.XX.10:~$ acli image.create FS-vdisk-CLONE source_url=nfs://127.0.0.1/Nutanix_FS1_ctr/el7.3-opt-euphrates_files_400-5.20-stable-23e65a5885b28a24434bb44dd50928c7f8b04f76 image_type=kDiskImage container=Nutanix_FS1_ctr
6. Re-attach the boot disk. Use the device index as SCSI index 0.
nutanix@A-CVM:10.XX.XX.10:~$ acli vm.disk_create NTNX-FS1-1 clone_from_image=FS-vdisk-CLONE bus=scsi index=0
7. Power on the FSVM and wait 20-25 minutes, then check that the VM booted successfully and services are up.
8. Resume the file server upgrade with the afs infra.resume_fs_upgrade command on the CVM.
9. Check that the upgrade proceeds to the next stage.
|
A number of issues have been found when generating log bundles using NCC Log Collector. The most important ones are described below:

Issue 1 - Log collector does not complete
Affected NCC versions: 3.0 and 3.0.0.1
Related ENG tickets: ENG-75570
In NCC 3.0 and 3.0.0.1, log_collector copies all the compressed log files into the /home/nutanix/data/log_collector/tmp folder and then decompresses all of them. This results in high disk space utilisation on the /home partition. Log collector eventually fails when the utilisation hits 90%. This issue surfaced after the fix to ENG-61552. When this issue is hit, log collector fails to finish and displays the below error message:

ERR: There is 90 percent space used. So aborting the log collector. Please use -force to force write the logs.

Issue 2 - Log bundle has files missing and/or gaps in files
Affected NCC versions: All versions prior to 3.0
Related ENG tickets: ENG-61552 and ENG-65882
Prior to version 3.0, NCC log_collector used to decompress the old log files in their source directory. The Scavenger process monitors the space usage of the /home/nutanix/data/logs directory, and when the usage crosses the threshold size of 9216 MB, it starts deleting the old log files. The Scavenger process will essentially delete the files decompressed by the Log Collector before it can pack them into the log bundle. This results in log bundles with missing files and/or gaps in some files.

Issue 3 - Log collector does not complete for large clusters
Affected versions: All NCC versions
Related ENG tickets: FEAT-4830, ENG-96857 and ENG-98082
When generating log bundles on a large cluster (8 or more nodes)
|
KB5689
|
Disabling IPv6 on CVM
|
Article regarding disabling IPv6 on the CVM due to security reasons.
|
You may wish to disable IPv6 on the CVM for security hardening reasons. The customer should be advised that these changes will need to be removed for any re-imaging or cluster expansion if they plan to use the IPv6 discovery method.
|
To view current settings use sudo sysctl -a | grep ipv6 | grep disable on the CVM:
nutanix@CVM:~$ sudo sysctl -a | grep ipv6 | grep disable
Important: The below steps will disrupt services, as they require a restart of the networking services/CVM for the change to take effect.
The steps below apply to a single CVM, not the whole cluster. To apply the changes to the entire cluster, repeat the steps on each CVM of the cluster. The steps to disable IPv6 on CVMs are as follows:
Option 1:
Open a terminal to the CVM.Edit the file /etc/sysctl.conf on a CVM
sudo vi /etc/sysctl.conf
Add the following at the bottom of the file:
net.ipv6.conf.all.disable_ipv6 = 1
Ensure there are no white spaces at the end of the line as it could lead to Hades' inability to mount disks upon reboot of the CVM. Save and close the file.Run the following command on CVM:
sudo sysctl -p
Reboot the CVM. If IPv6 is still reported as enabled after the reboot, follow Option 2 below:
Option 2 (AOS 6.5 and later):
Completely disable IPv6 addresses in the CVM. The script restarts all the services as required. No need to reboot CVM manually.
First run unconfigure, and then disable:
manage_ipv6 unconfigure
To re-enable IPv6 revert back the above changes:
Option 1:
Open a terminal to the CVM.Edit the file /etc/sysctl.conf.
sudo vi /etc/sysctl.conf
Remove the previously added lines:
net.ipv6.conf.all.disable_ipv6 = 1
Ensure there are no white spaces at the end of the line as it could lead to Hades' inability to mount disks upon reboot of the CVM. Save and close the file.Run the following command:
sudo sysctl -p
Reboot CVM.
Option 2 (AOS 6.5 and later):
Run the following command to re-enable IPv6. No need to reboot CVM.
manage_ipv6 enable
Run the following command on the CVM to check whether IPv6 addresses are present:
manage_ipv6 show
Hint: you can use the command sysctl -h for more information.
You can also make non-persistent changes on the fly. This method does not cause CVM network services to restart; however, any connections that were active over IPv6 are dropped.
To disable IPv6 :
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
To enable IPv6:
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
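As an alternative to appending lines to /etc/sysctl.conf, the same persistent setting can be expressed as a drop-in file. This is a sketch only; verify that the CVM image applies /etc/sysctl.d drop-ins on boot before relying on it, as the article's documented method edits /etc/sysctl.conf directly.

```
# Hypothetical drop-in: /etc/sysctl.d/90-disable-ipv6.conf
# Mirrors the setting the article adds to /etc/sysctl.conf.
net.ipv6.conf.all.disable_ipv6 = 1
```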
|
KB10096
|
Move ESXi to AHV: How to map pci slot number to guest OS bus number
|
This article describes mapping PCI slot numbers to the guest-visible PCI bus topology on ESXi and comparing it with vdisks on AHV
|
This article is based on VMware article 2047927 https://kb.vmware.com/s/article/2047927 and maps a vmdk to an AHV vdisk after migration from ESXi to AHV using Nutanix Move.Example: VM with multiple disks and SCSI controllers added.Source ESXi: Destination AHV: Notice the difference in disks: on AHV all vdisks are under a single controller, whereas VMware had multiple controllers.Given that all disks are of the same size, the challenge lies in identifying disks on AHV by comparing with the sequence on the source, for example, whether /dev/sdb on ESXi is the same as /dev/sdb on AHV, and so on.
|
Steps:
1. Collect information about the vmdks attached to the VM from the vmx file (e.g. scsix:y.fileName = "sles-11-sp3-static_1-000001.vmdk", where x refers to the controller ID and y refers to the index ID)
2. Execute the Python script from VMware article 2047927 https://kb.vmware.com/s/article/2047927 against the vmx file of the VM (note the bus ID against the SCSI controller)
3. Execute lsscsi -v in the VM and match the bus number obtained in step 2
4. Group the disks per SCSI controller ID and create a map between the VMDK file and its attach position on the VM
5. Migrate the VM to AHV
6. Call the vm.GET API (https://<prism-ip:9440>/PrismGateway/services/rest/v2.0/vms/<vmuuid>?include_vm_disk_config=true) and create a map between device_index and ndfs_filepath (under source_disk_address; this will be the same as the vmdk file name)
7. The device index in the above map points to the disk attach point on the VM; for example, device_index 0 points to /dev/sda, 1 -> /dev/sdb, 2 -> /dev/sdc, 3 -> /dev/sdd
8. The maps created in step 4 and step 6 can be used to find the position of /dev/sd* in the source VM relative to /dev/sd* in the target VM.Example: Hard disk 4 on ESXi is on scsi1:1
scsi1.pciSlotNumber = "224"
On a guest OS with Python installed, run the VMware script against the vmx file, for example:
Execute lsscsi -v in the VM and match the bus number from above:
a = Hostadapter ID, b = SCSI channel, c = Device ID, d = LUN
Group the disks per SCSI controller ID and create a map between the VMDK file and its attach position on the VM; from VM Edit Settings and the vmx we know the SCSI ID to fileName mapping.
Run the Prism API query and fetch disk details on AHV, example:
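Step 6 and 7 of the mapping can be sketched in Python. The helper name is hypothetical, and the field layout (disk_address.device_index, source_disk_address.ndfs_filepath) follows this article's description of the v2 vm.GET response; verify it against your actual API output.

```python
import string

# Hypothetical sketch: given the disk list from the v2 vm.GET API response,
# build a map from guest device node (/dev/sda, /dev/sdb, ...) to the
# source vmdk file name.
def map_devices_to_vmdks(vm_disks):
    mapping = {}
    for disk in vm_disks:
        idx = disk["disk_address"]["device_index"]
        ndfs_path = disk.get("source_disk_address", {}).get("ndfs_filepath", "")
        device = "/dev/sd" + string.ascii_lowercase[idx]  # 0 -> sda, 1 -> sdb, ...
        mapping[device] = ndfs_path.rsplit("/", 1)[-1]
    return mapping
```

Combining this with the SCSI-ID-to-fileName map from the vmx file gives the source-to-target /dev/sd* correspondence described in step 8.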
|
KB12834
|
New Dark Site Licensing using Key Generation
|
These are the instructions for Dark Site key generation.
|
The following Key Generation Licensing is the new Dark Site licensing. This process must be enabled before an account can use it.
1. Contact Patti Dierks in the Slack channel #dark-site-license-keys and request to enable it.
Provide Account Name and link to the account.
|
Login to Nutanix PortalGo to Licenses
Your view will vary depending on what account view you are in; you can click your name on the top left and change the view via "Login As".
SSH into a CVM and enter your license key via the license_key_util command:
Apply:
nutanix@cvm$ ~/ncc/bin/license_key_util apply key=license_key
Show:
nutanix@cvm$ ~/ncc/bin/license_key_util show key=license_key
Licenses Allowed: AOS
Add-Ons: Software_Encryption File
For additional help, see page 61: https://download.nutanix.com/documentation/hosted/License-Manager.pdf https://download.nutanix.com/documentation/hosted/License-Manager.pdf
|
KB11590
|
Prism Central: Prism Element Launched from Prism Central (PC.2021.5) is missing VM details and management buttons under VM section
|
Prism Element Launched from Prism Central (PC.2021.5) is missing VM details and management buttons under VM section.
|
After upgrading PC to pc.2021.5, when Prism Element is launched from PC, a number of VM details on the bottom left of the screen and management buttons are missing under the VM panel. Launching Prism Element directly shows the VM details and management options as expected.
Steps
Launch Prism Element from Prism Central running version pc.2021.5. Under the VM view, all the VM details are missing, including all buttons to update, launch console, migrate, clone, or take a snapshot of a VM:
Browser Developer Mode shows the following errors:
17:24:21 nutanix.ui. PrismElementIFrameView : Navigate to URL: el7.3-release-euphrates-5.19.1.5-stable-0f9e00f661436fef1af18a094089744f34ccd8c0/console/index.html#page/vms/table/?action=details&actionTargetId=00052c80-729d-8705-0000-0000000051fa%3A%3Acdd972a8-ced2-48fc-b0ab-71368f20c396&actionTargetName=Al-Linux&actionParentEntityType=cluster&actionParentId=00052c80-729d-8705-0000-0000000051fa&actionSource=datatable_selection&actionTarget=vm&clusterid=00052c80-729d-8705-0000-0000000051fa&prismElementIframeMode=true
Root-cause
The issue affects any cluster running AOS versions less than 6.0 in combination with Prism Central PC.2021.5 due to a JavaScript jQuery library incompatibility.
|
Workaround
The VM details and management operations are all available in Prism Element if launched directly. Users can also review VM details in Prism Central directly under Virtual Infrastructure > VMs.
Solution
Upgrade Prism Central to PC.2021.5.0.1
|
KB10780
|
Nutanix Files - Troubleshooting quota issues
|
Troubleshooting issues with quotas in Nutanix Files
|
When a group quota or user quota is in question on a Nutanix Files share, below are a few checks we can do to ensure that the quota is being applied.
|
Viewing the quota from a Windows client
Follow the below steps to view the quota usage for a user from the client side by mapping the share in question and viewing its properties.
1. Right-click the share in question in Windows File Explorer, click "Map network drive" and follow the instructions to map the drive.
2. Right-click the mapped share and click Properties.
3. Click the General tab to view the quota usage for that user.
Viewing the quota from an FSVM
To view the quota usage for a share and user from the FSVM, follow the below.
1. From a CVM, SSH to an FSVM and run the below command to view the quota for a user on a specific share:
quota.get_user_quota <ShareName> <UserName or GroupName>
Example:
2. Run the below command to view all shares with quotas for a user/group
afs quota.user_report <UserName or GroupName>
Example:
Manually calculating the quota usageFollow the below steps to manually calculate the usage for a share, follow the below to find the files owned by the user in question1. From a CVM SSH to a FSVM and run the below command to find the FSVM that is holding the share in question and the share path.
Standard Share:
Example:
2. SSH to the FSVM holding the share/TLD and cd to the share path3. Find the user ID of the user in question
afs ad.uid_from_uname <username> SMB
Example:
4. Run the below command to find all the files and folders for the user ID in question in the share/TLD
find . -uid <UID> -ls
Example:
Finding a user's usage across TLDs
There are cases where a user has created/copied/modified files in different TLDs within a distributed share, all of which count toward the user quota for that share. Use the below script to identify files owned by the user in question in TLDs that are spread among the FSVMs for the distributed share in question.
1. Create the below file and copy and paste the below script.
#!/bin/bash
2. Make the script executable.
chmod +x /home/nutanix/tmp/checkowner
3. Run the script and enter the distributed share's name and the user's UID for each FSVM.
Example:
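The script body above is elided in this copy. Purely as a hypothetical sketch (not the original checkowner script), the per-FSVM ownership scan could look like the following, where SHARE_PATH and OWNER_UID are placeholder inputs:

```shell
#!/bin/bash
# Hypothetical sketch only - NOT the original checkowner script.
# Lists regular files under a share/TLD path owned by a given UID.
# SHARE_PATH and OWNER_UID are placeholders; on each FSVM you would point
# SHARE_PATH at the distributed share's TLD directory and pass the UID
# obtained earlier from "afs ad.uid_from_uname".
SHARE_PATH="${1:-.}"
OWNER_UID="${2:-$(id -u)}"
find "$SHARE_PATH" -uid "$OWNER_UID" -type f -exec ls -l {} \;
```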
|
KB6961
|
Degraded I/O performance over time on ESXi clusters with 5+ containers
|
Clusters running ESXi hypervisor with version ESXi 6.5 or later are susceptible to a VMware issue that results in I/O performance degradation over time. This degradation results in higher latency per I/O operation which leads to fewer IOPS than expected.
This article describes the signature of the issue and the work around recommended by VMware.
|
Nutanix clusters running ESXi hypervisor versions ESXi 6.5 and ESXi 6.7 (both up to U2) are susceptible to a VMware issue that results in degradation of I/O performance over time. This degradation results in higher latency per I/O operation, which leads to fewer IOPS than expected. The actual impact on each cluster varies depending on how long the ESXi host has been running since the last reboot (uptime) and the workload running in the cluster. VMware has confirmed this issue within their NFS implementation of ESXi. The ESXi advanced setting SunRPC.MaxConnPerIP defines the maximum number of unique TCP connections that can be opened for a given IP address. This value is left at the default of 4 in Nutanix environments. If the number of Nutanix containers mounted from 192.168.5.2 (the local CVM) exceeds the SunRPC.MaxConnPerIP value of 4, the existing connections are shared among different VM worlds. The issue occurs within the cleanup code of these shared connections. Once the ESXi world processes go away, for instance due to vMotion of VMs, the resources that had been allocated to them are not cleaned up properly. Over time, as the amount of available resources decreases, this results in a slowdown in I/O operations and higher latency.
The following symptoms are specific to this issue:
Hypervisor latency is high and increasing over time and drops after a reboot while storage controller latency which accounts for Nutanix latency is low, constant over time and does not change before/after the reboot.
VM latency is higher than expected and application users are complaining about applications being sluggish. The latency range is dependent on how bad the issue is. We have observed values as low as 700 IOPS max with 8 KB block size at 40-50ms at the hypervisor level while seeing 0.2ms on the storage controller which indicates that the latency is at the hypervisor layer:
- Prism Cluster Hypervisor Latency graphs show increasingly higher values on the weekly intervals. Right after the host reboot latency is down to a few milliseconds and then starts to build up again over time:
- Prism Cluster Storage Controller Latency graphs show much lower values in comparison with hypervisor latency the same time frame with no increase in latency over time and no significant change after the reboot:
Note the graphs above can be generated in Prism by following the instructions in the PERFORMANCE MONITORING chapter of the Prism guide https://portal.nutanix.com/page/documents/details?targetId=Web-Console-Guide-Prism-v5_18:wc-performance-management-wc-c.html.
|
VMware code resolution
VMware confirmed the code fix was completed in ESXi 6.5 U3 and 6.7 U3, which have since been released. VMware has published a public KB article on this issue with the same workaround. See VMware KB 67129 https://kb.vmware.com/s/article/67129 for further reference. All Nutanix systems with the mentioned AOS release and Foundation version will have the setting of 64 to prevent further issues.
Workaround
For a workaround, refer to KB-7833 https://portal.nutanix.com/kb/7833. The NCC health check esxi_staleworlds_check checks if the number of datastores configured on the ESXi host is higher than the ESXi MaxConnPerIP advanced setting.
ncc health_checks hypervisor_checks esxi_staleworlds_check
If an upgrade to ESXi 6.5 U3 or 6.7 U3 is not possible immediately, VMware has provided a workaround which consists of increasing the ESXi advanced setting /SunRPC/MaxConnPerIP (default is 4). The higher setting has no negative effect on system performance or stability and is the new Nutanix default setting. The new value should be set higher than the number of datastores/containers in the cluster. The recommendation is to set this value to 64. However, if the number of datastores on the hosts is more than 64, consult Nutanix Support before applying values higher than 64. For instructions on setting advanced NFS parameters, refer to VMware KB 1007909 https://kb.vmware.com/s/article/1007909. The following command changes this setting on all hosts:
nutanix@CVM:~$ hostssh esxcfg-advcfg -s <Number> /SunRPC/MaxConnPerIP
You can check the newly configured value with the below command:
nutanix@CVM:~$ hostssh esxcfg-advcfg -g /SunRPC/MaxConnPerIP
You can check the active value with the below command:
nutanix@CVM:~$ hostssh vim-cmd hostsvc/advopt/view SunRPC.MaxConnPerIP
Note: In order for the new setting to become active, each of the hosts must be rebooted. One other caveat: if VMware's host daemon hostd was restarted after the setting was changed but before a host reboot, the configured value may appear as changed while the runtime value is still the old one. There is no command available to check which setting is active. If in doubt, either contact Nutanix Support to check when the last reboot of the host occurred and when AOS was upgraded, or check yourself with the below commands. The restart of the ESXi host should happen after the upgrade to 5.5.9.5, 5.10.4, 5.11 or higher:
nutanix@CVM:~$ hostssh uptime
nutanix@CVM:~$ cat ~/config/upgrade.history
Nutanix automated workaround
The above workaround is now automated with a setting of MaxConnPerIP = 64 and is implemented in the following releases:
AOS 5.5.9.5, 5.10.4, 5.11 and all further releases. Note that the change happens with the AOS upgrade itself but will not be effective without a host reboot.
Imaging new nodes with Foundation 4.4 and higher
|
KB10919
|
Logrotate does not rotate ikat logs
|
Logrotate does not rotate logs, causing excessive space usage in the /home/nutanix directory.
|
The ikat_proxy service on both Prism Element and Prism Central manages HTTP traffic for the Prism control plane.
On some versions of AOS and Prism Central, an ikat_proxy log file may not rotate, leading to the file growing to an excessive size that consumes more of the filesystem than expected. This growth can prevent workflows such as upgrades from having sufficient space to work. The affected logs are stored in different locations depending on the AOS/PC version:
For AOS prior to 5.19.2, 5.20, and 6.0, and PC versions prior to pc.2021.5, the logs are located at /home/nutanix/data/logs/ikat_access_logs/.
For AOS and PC versions after the above, the ikat logs are located in /home/apache/ikat_access_logs/.
Identification
Most commonly, this issue will be identified when troubleshooting a /home directory space issue on either a CVM or PC VM. For PE, follow KB-1540 https://portal.nutanix.com/kb/1540; the associated script may identify large files in the ikat_access_logs directory. For PC, follow KB-5228 https://portal.nutanix.com/kb/5228, where scenario 7 specifically identifies this issue.
Manually checking the space usage in the home directory will also reveal large logs in the ikat_access_logs directory. Example using /home/nutanix:
nutanix@cvm:~$ sudo du -aSxh /home/nutanix/* | sort -h | tail
Example using /home/apache (AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5):
nutanix@cvm:~$ sudo du -aSxh /home/apache/* | sort -h | tail
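As a generic, stand-alone illustration of the `du | sort | tail` pattern used above (temporary paths here, not the CVM layout):

```shell
# Demo of surfacing the largest files under a directory, mirroring the
# "du -aSxh ... | sort -h | tail" pattern above (temporary paths only).
D=$(mktemp -d)
head -c 1048576 /dev/zero > "$D/big.log"    # ~1 MiB file
head -c 1024    /dev/zero > "$D/small.log"  # ~1 KiB file
du -aSxh "$D" | sort -h | tail              # largest entries are listed last
```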
|
This issue is resolved in AOS versions 5.20.1, 6.0 / PC version pc.2021.5 and later. However, the log file may need to be rotated or truncated in order to be able to upgrade to a fix version.Note: If the ikat logs have already grown under the old path, they will not be rotated automatically after upgrade to a fix version. This is due to the logs being located in a different directory on the fix version. Truncate the log file manually by following the instructions under "Truncating the file" towards the end of this article. Once the file is truncated, the issue should not reoccur.Follow the workaround steps below if PE or PC is not upgraded to the fix version. If already upgraded to the fix version, skip to "Truncating the file" at the end of this KB.
Workaround - CVM
Check whether the /etc/logrotate.d/ikat file exists on the CVM. If not, as the nutanix user, copy and paste the below command into the command prompt to create and populate the file
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
sudo bash -c "cat << 'EOF' > /etc/logrotate.d/ikat
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
sudo bash -c "cat << 'EOF' > /etc/logrotate.d/ikat
Check whether the /srv/salt/security/CVM/rsyslog/ikatCVM file exists. If not, as the nutanix user, copy and paste the below command to the command prompt to create and populate the file
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
sudo bash -c "cat << 'EOF' > /srv/salt/security/CVM/rsyslog/ikatCVM
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
sudo bash -c "cat << 'EOF' > /srv/salt/security/CVM/rsyslog/ikatCVM
Set the correct file permissions on the /etc/logrotate.d/ikat file:
nutanix@cvm:~$ sudo chmod 644 /etc/logrotate.d/ikat
Add the correct SELinux context to the directory containing ikat logs and files
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
nutanix@cvm:~$ sudo -i
Expected output:
restorecon reset /home/nutanix/data/logs/ikat_access_logs context unconfined_u:object_r:user_home_t:s0->system_u:object_r:var_log_t:s0
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
nutanix@cvm:~$ sudo -i
Expected output:
restorecon reset /home/apache/ikat_access_logs context unconfined_u:object_r:user_home_t:s0->system_u:object_r:var_log_t:s0
Start a log rotation for ikat with the following command:
root@cvm:~# logrotate -f /etc/logrotate.d/ikat
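The heredoc bodies above are elided in this copy and are version-specific. Purely as an illustration of the general shape of such a configuration (an assumption, not the Nutanix-shipped file), a logrotate stanza for an access log typically resembles:

```
/home/apache/ikat_access_logs/*.out {
    size 100M
    rotate 4
    compress
    missingok
    notifempty
}
```

Always use the exact heredoc content from the matching AOS/PC version rather than this sketch.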
Workaround - PCVM
Check whether the /etc/logrotate.d/ikat file exists. If not, as the nutanix user, copy and paste the below command to the command prompt to create and populate the file
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
sudo bash -c "cat << 'EOF' > /etc/logrotate.d/ikat
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
sudo bash -c "cat << 'EOF' > /etc/logrotate.d/ikat
Check whether the /srv/salt/security/PC/rsyslog/ikatPC file exists. If not, as the nutanix user, copy and paste the below command to the command prompt to create and populate the file
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
sudo bash -c "cat << 'EOF' > /srv/salt/security/PC/rsyslog/ikatPC
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
sudo bash -c "cat << 'EOF' > /srv/salt/security/PC/rsyslog/ikatPC
Set the correct file permissions on the /etc/logrotate.d/ikat file:
nutanix@PCVM:~$ sudo chmod 644 /etc/logrotate.d/ikat
Add the correct SELinux context to the directory containing ikat logs and files
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
nutanix@PCVM:~$ sudo -i
Expected output:
restorecon reset /home/nutanix/data/logs/ikat_access_logs context unconfined_u:object_r:user_home_t:s0->system_u:object_r:var_log_t:s0
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
nutanix@PCVM:~$ sudo -i
Expected output:
restorecon reset /home/apache/ikat_access_logs context unconfined_u:object_r:user_home_t:s0->system_u:object_r:var_log_t:s0
Start a log rotation for ikat with the following command:
root@PCVM:~# logrotate -f /etc/logrotate.d/ikat
If the logrotate command is successful, an archive is created. For example
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
root@CVM/PCVM:~# ls -lh /home/nutanix/data/logs/ikat_access_logs
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
root@CVM/PCVM:~# sudo ls -lh /home/apache/ikat_access_logs
If the problem file has grown and taken up too much space for logrotate to work, you will see the following output when executing logrotate
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
error: error writing to /home/nutanix/data/logs/ikat_access_logs/prism_proxy_access_log.out.1: No data available
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
error: error writing to /home/apache/ikat_access_logs/prism_proxy_access_log.out.1: No data available
In this case, see "Truncating the file" section below before trying to execute the above logrotate command again.
Truncating the file
Truncating the log file may be required if log rotation fails due to lack of free space, or if the large file is in the old /home/nutanix/data/logs/ikat_access_logs/ location and PE/PC has been upgraded to the fix version, where the log is then located in /home/apache/ikat_access_logs/. If the ikat log files are required for an open Nutanix case, make a backup of the file first and then truncate the file with the following command:
For AOS 5.19.2, 5.20, 6.0 or lower, and PC 2021.5 or lower:
root@CVM/PCVM:~# echo > /home/nutanix/data/logs/ikat_access_logs/prism_proxy_access_log.out
For AOS above 5.19.2, 5.20, or 6.0, and PC above 2021.5:
root@CVM/PCVM:~# echo > /home/apache/ikat_access_logs/prism_proxy_access_log.out
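A side note on why the workaround truncates with `echo >` rather than deleting the file: a log held open by a running daemon continues to consume space after `rm` until the process closes it, while in-place truncation frees the space immediately and keeps the path intact. A minimal, self-contained demonstration (temporary file, not the real ikat log):

```shell
# Demonstrates in-place truncation: the path and inode survive, but the
# contents shrink to a single newline (temp file, not the actual ikat log).
F=$(mktemp)
head -c 1048576 /dev/zero > "$F"   # grow the file to ~1 MiB
echo > "$F"                        # truncate in place, as in the workaround
stat -c %s "$F"                    # prints 1 (just the trailing newline)
```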
|
KB16313
|
NDB | Era cluster agent root file system may become full because database log copies are written into the root file system instead of the log drives
|
Era cluster agent root file system may become full because database log copies are written into the root file system instead of their dedicated log drives.
|
One or more Era cluster agents may go offline and enter the ERA_DAEMON_UNREACHABLE state. Reviewing the filesystem usage on the cluster agent, the root file system shows 100% usage:
[era@localhost ~]$ df -h
When reviewing the contents of the root file system, no files can be found that account for the consumed space:
[era@localhost ~]$ sudo du -aSxh / | sort -h | tail -n 15
Verify that the issue is not due to open file descriptors, as described in KB15648 https://portal.nutanix.com/kb/15648.
|
When Log Catchups are enabled for a Time Machine, the transaction logs from the database servers are copied to the Era cluster agent into a specific log drive dedicated to each Time Machine.
This issue may occur when the "Copy Log" operation writes the log files into the directory where the log drive is mounted instead of into the mounted log drive(s).
During the Log Copy operation, the log drive is auto-resized when it reaches a threshold. The above issue can occur when the log drive is unmounted for a resize operation and is not successfully mounted again.
However, the Log Copy operation proceeds to copy the logs into the directory where the log drive was previously mounted.
Subsequent Log Copy operations may mount the log drive again, after which Copy Log operations start to run successfully again.
Since the log drives are mounted over these directories, commands such as du cannot detect the file system usage underneath the mounted disks.
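The shadowing described above is generic mount behavior: once a drive is mounted over a directory, files previously written into that directory become invisible to `du` and `ls`. A hypothetical pre-check (placeholder paths; not an NDB-provided tool) for confirming a log-drive path is actually a mount point:

```shell
# Hypothetical helper, not part of NDB: report whether a path is currently a
# mount point. Writes to an unmounted log-drive directory land on the root fs.
check_log_drive() {
    if mountpoint -q "$1"; then
        echo "$1 is mounted"
    else
        echo "$1 is NOT mounted - writes would land on the root filesystem"
    fi
}
check_log_drive /                  # the root filesystem is always a mount point
check_log_drive "$(mktemp -d)"     # a plain directory is not
```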
To identify the directories where the log files are present, follow the below steps:
Clone the root file system disk of the cluster agent to a test Linux VM
Mount the disk and review the file system usage in the test VM to identify the directories that are consuming the space
Review the file system usage under the below path for the mounted disk
/home/era/era_base/log_drive/
If there are logs saved in the directories in which the log drives are mounted, engage NDB STL via Tech-Help to copy the logs that are stored in the mount point directory back to the log drives and clear the space from the directories
|
KB12214
|
Performing scans of Nutanix AOS with Tenable Nessus
|
This article describes how to perform scans of Nutanix AOS with Tenable Nessus.
|
This article describes how to perform scans of Nutanix AOS with Tenable Nessus.
IMPORTANT: This KB article is provided for LEGACY purposes only. Tenable Nessus now has a custom-built plugin providing more accurate results with less risk of instability caused by scanner misconfiguration. Raw OS scanning of Nutanix appliances is considered deprecated functionality that may be removed during further security hardening at any point. Follow the preferred method documented in KB 14730 http://portal.nutanix.com/kb/14730.
|
Log in to the Tenable Nessus portal, ACAS, or similar scanner conducting Tenable Nessus scanning.
Default landing page after successful login will take you to the Vulnerability Management dashboard as below.
Go to Scans and select New Scan.
Select Advanced Network Scan template.
Fill up the below information under Settings.
Scan NameScanner – from the drop-down menu select correct scanner for the environmentTargets – fill in IP address(es) per requirement
Navigate to Credentials.
Select SSH and set the following:
Authentication method - select "password"Username - enter "nutanix"Password - enter nutanix passwordElevate privileges with - select "sudo"
If you have a certificate-based authentication method, select "certificate" as the Authentication method, and upload the certificate.
Save the scan. It will take you back to the Scans dashboard.
The following steps are optional based on the type or depth of scan you are conducting:
To search for SSL/TLS/DTLS services, navigate to DISCOVERY under the Settings tab. Under General Settings, toggle on the Search for SSL/TLS/DTLS option.
To conduct a more thorough test, remove false alarms, or show false alarms, navigate to the Settings tab > ASSESSMENT > General > Accuracy, and check the boxes for the desired settings.
Save the scan. It will take you back to the Scans dashboard.
Perform the scan:
Go to the Scans dashboard, select your scan and click More > Launch.
Click Launch on the verification pop-up.
Click on the running scan to view progress.
Review results once the scan is complete. Click on Vulnerabilities.
|
|
KB10445
|
X-Ray fails to add target with NVMe drives
|
X-Ray fails to add target with NVMe drives
|
An attempt to add a cluster with NVMe drives in X-Ray 3.10 fails immediately. While checking the logs on the X-Ray VM, the following can be observed in xray.log:
ERROR Error occurred while performing final discovery - Invalid value for `type` (SSD-PCIe), must be one of ['HDD', 'SSD', 'Unknown']
|
This issue is specific to X-Ray version 3.10 and affects only clusters with NVMe drives. The workaround is to use an earlier version; X-Ray 3.8 is validated to work correctly.
|
KB10723
|
LCM upgrade fails with "error: (vmodl.fault.ManagedObjectNotFound)"
|
LCM upgrades may fail with the error Failed to run LCM operation. error: (vmodl.fault.ManagedObjectNotFound)
|
When performing an LCM upgrade, if there is a time gap between performing the inventory scan and the upgrade itself, the upgrade may fail with the following error:
Operation failed. Reason: Failed to run LCM operation. error: (vmodl.fault.ManagedObjectNotFound) { dynamicType = <unset>, dynamicProperty = (vmodl.DynamicProperty) [], msg = "The object 'vim.VirtualMachine:xxxxxx' has already been deleted or has not been completely created", faultCause = <unset>, faultMessage = (vmodl.LocalizableMessage) [], obj = 'vim.VirtualMachine:xxxxxx' } Logs have been collected and are available to download on x.x.x.x at /home/nutanix/data/log_collector/lcm_logs__x.x.x.x__2021-02-03_08-36-30.303309.tar.gz
The error above can be found in genesis.out as you can see here:
2021-02-03 08:36:20 INFO cpdb.py:463 Exception: Mark Update In Progress Failed (kIncorrectCasValue)
|
When you perform an LCM inventory scan, the process catalogs the various components in your cluster, including the list of VMs running on the cluster, and performs the upgrade accordingly. If you performed the inventory scan and decided to perform the upgrade itself at a later time, the various components may have changed. This is potentially more problematic with VDI solutions, where VMs are constantly being deleted and recreated. To resolve the issue above, simply run the inventory scan again and perform the upgrade immediately after. This can also occur if you have affinity rules configured on VMs in the cluster or an invalid datastore; you can read more about it in the VMware KB article here. https://kb.vmware.com/s/article/2001504
|
KB8719
|
When booting Ubuntu 18.04 on AHV, the VM gets stuck with the message: /dev/sda: clean <numbers> files/<numbers> blocks
|
When booting Ubuntu 18.04 on AHV, the VM gets stuck with the message: /dev/sda: clean <numbers> files/<numbers> blocks (new VM)
|
When booting Ubuntu 18.04 on AHV, the VM gets stuck with the message: /dev/sda: clean <numbers> files/<numbers> blocks (new VM). The screen can be skipped by pressing Ctrl+Alt+F2 or F3, and the VM boots successfully at runlevel 3 without any issue, which indicates this is not an issue with AHV. The following workarounds also do not help:
Change the graphics mode to text in the /boot/grub.cfg file by setting "linux_gfx_mode=text", or use the grub command "set gfxmode = text".Remove "xserver-xorg-video-fbdev" with "sudo apt-get remove xserver-xorg-video-fbdev*"
|
The issue is with GNOME (a free and open-source desktop environment for Unix-like operating systems), so GNOME needs to be uninstalled. A different graphical interface, such as KDE or Xubuntu, can be used instead. To skip the boot screen message and access the CLI, press Ctrl+Alt+F2 or F3, or assign an IP to the VM from Prism and access that IP over SSH. Command to remove GNOME:
sudo apt remove gnome*
After that, reboot the VM. The issue should no longer occur.
|
KB5567
|
Windows VMs fails to boot due to BCD corruption
|
Steps to recover from BCD corruption in windows VMs
|
The following document illustrates the step-by-step process to recover from BCD corruption in Microsoft Windows VMs when a VM fails to boot with the following error:
Booting from Hard Disk...
Note: It is preferable to engage Microsoft for troubleshooting such issues; the steps here are provided on a best-effort basis only. Take a backup (snapshot) of the VM before proceeding, as these steps can make the situation worse.
|
Check if the VM is UEFI. If so, use KB 5622 https://portal.nutanix.com/kb/5622 to migrate the VM data to a BIOS VM, or upgrade the cluster to 5.11+ and select to boot the VM using UEFI firmware (FEAT-6699 https://jira.nutanix.com/browse/FEAT-6699). To resolve the problem, perform the following steps. Note: If you are facing the BCD corruption issue on a mission-critical or production VM, Nutanix recommends taking a backup using the clone feature available in the Prism UI. You can clone the snapshot (if any) or clone the VM directly, perform the following steps on the cloned VM, and once the cloned VM starts, proceed to follow the same steps on the original VM.
Insert the installation media in a CD-ROM drive by updating the VM configuration, and also add a second CD-ROM drive with an ISO containing the latest VirtIO drivers attached. For example, if the VM is using Windows Server 2012 R2, use the ISO for Windows Server 2012 R2 and start the VM with the CD/DVD boot option.
Open the Windows Command Prompt window. Hint: You can press Shift+F10; alternatively, there is also an option to launch the command prompt from the GUI.
Load a specific Windows driver if the disks are not visible automatically. The below command is only needed if the VM boots from a SCSI disk, in which case only the vioscsi driver needs to be loaded:
E:\>drvload "Windows Server 2012 R2"
Below is sample output from the above command. Alternatively, the below command can also be used to load the drivers:
drvload E:\path_to_the_driver_inf_file.inf
Use the following command to enter into DISKPART> prompt.
diskpart
Use the following command to list the available disks.
list disk
Select the boot disk. The following command selects the first disk from the available disks.
sel disk 0
0 is the first disk. You may replace 0 with the disk you want to select.
Use the following command to list the available partitions in the disk selected in the preceding command.
list part
Select the first partition using the following command and make the partition active with the active command. This is only needed when no partition is marked as active.
sel part 0
Now validate whether the BCD is corrupted using the following command. If no output is generated, the BCD is corrupted.
bcdedit
Below is an example where the BCD appears corrupt, which actually means the BCD database is missing. One possible reason for this is that a partition, most likely the boot partition, was removed.
F:\>bcdedit
Use the following command to recover from BCD corruption.
bootrec /rebuildbcd
Restart the VM. The VM should recover from the BCD corruption and boot up normally.
|
KB15077
|
Prism - No compatible products found for the uploaded cluster summary file
|
PE licensing error 'No compatible products found while licensing cluster'
|
When a customer tries to update a license on a cluster, they get the below error:
Error : No-compatible-products-found
|
The licenses need to be applied through Prism Central. If customers have not installed Prism Central, their AOS version needs to be 6.0.1 or higher, and the Prism Central version needs to be 2022.4 or higher. After upgrading to the current version of Prism Central, the licenses will be applied by downloading the cluster summary file from Prism Central.
If you need assistance installing Prism Central or applying the license, contact Nutanix Support http://portal.nutanix.com.
|
KB14031
|
NDB | Database VMs in 'DELETED_FROM_CLUSTER' state after DR failover
|
NDB | Database VMs in 'DELETED_FROM_CLUSTER' state after DR failover
|
NDB database VMs get marked as "DELETED_FROM_CLUSTER" when the database VMs are moved or migrated as part of the DR failover process.
era > dbserver list
These VMs can be seen in the secondary site's Prism web console after failover, with IP addresses assigned:
VM Name Host IP Addresses Cores Memory Capacity Storage CPU Usage Memory Usage Controller Read IOPS Controller Write IOPS Controller IO Bandwidth Controller Avg IO Latency Backup and Recovery Capable Flash Mode
NDB server logs will have the following error signature
/logs/era_server/server.log:2022-12-05 21:16:02,400 [43-exec-3] ERROR [ERADBServerController] {"errorCode":"ERA-INT-0000001","Reason":"An internal error has occurred","remedy":"Please retry this operation with different input or after some time","message":"An internal error has occurred","stackTrace":[],"suppressedExceptions":[]}. Reason: null
|
During the cluster sync process, the NDB server is unable to fetch the VM configurations because these VMs have been migrated to the DR site cluster, and it therefore marks the DB VMs as DELETED_FROM_CLUSTER. This scenario is currently not supported. The workaround is to manually mark them as UP and set their IP addresses using the following commands; obtain 'id' from "era > dbserver list" and the IP addresses from the Prism web console.
era > update era_dbservers set ip_addresses='{}' where id='';
|
KB15012
|
NCC health check system_checks cluster_services_down_check fails for hades in PC
|
The NCC health check system_checks cluster_services_down_check fails for hades in PC.
|
The NCC health check cluster_services_down_check reports the Hades service as down in Prism Central:
nutanix@-PCVM:~$ ncc health_checks system_checks cluster_services_down_check
Hades is a disabled service in PC and it is present on the disabled service list.
nutanix@PCVM:~/data/logs$ less genesis.out |grep -i hades
Genesis status does not show the hades service. However, fetching the Genesis status through NCC shows the Hades service:
nutanix@PCVM:~/ncc/bin$ python
The Hades service was listed in genesis because of a hades lock file present in the following path:
nutanix@PCVM:~$ ls -lrt /home/nutanix/data/hades_locks
|
Workaround: Manually move the hades lock file from /home/nutanix/data/hades_locks to the ~/tmp folder. This resolves the issue, and the NCC check will now pass:
nutanix@PCVM:~$ ncc health_checks system_checks cluster_services_down_check
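The move itself is a plain `mv`; sketched below with placeholder temp directories (on the PCVM, substitute /home/nutanix/data/hades_locks as the source and /home/nutanix/tmp as the destination):

```shell
# Sketch of the workaround using placeholder paths. On the PCVM the source is
# /home/nutanix/data/hades_locks and the destination is /home/nutanix/tmp.
SRC=$(mktemp -d)   # stands in for /home/nutanix/data/hades_locks
DST=$(mktemp -d)   # stands in for /home/nutanix/tmp
touch "$SRC/hades_lock"
mv "$SRC"/* "$DST"/
ls "$DST"
```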
|
KB8882
|
Nutanix Files - troubleshooting NFSv4 issues with Kerberos authentication
|
This KB provides troubleshooting steps and a list of possible issues regarding NFS using Kerberos authentication.
|
Starting with Nutanix Files 3.0.0.1, Nutanix Files supports NFSv4, which includes advanced features such as strongly mandated security. Moreover, most recent distributions of Linux, Solaris, and AIX use NFSv4 as the default client protocol. The focus of this KB article is to provide help with possible issues and troubleshooting steps related to joining a Linux machine to a domain, mounting an NFS export with Kerberos authentication, and directory/file permissions.
|
The following setup is used as an example:
- CentOS 7 VM with hostname "krbcentos", joined to the MS domain.
- Microsoft AD (domain controller/DNS server). Example domain: labs.local
- Nutanix Files 3.6. Example Files server hostname: filesknuckles.labs.local
List of prerequisites required to set up NFSv4 over Kerberos:
1. Ensure that forward and reverse DNS entries for the Files server have been added to the DNS server. Mounting a Kerberized NFS export will fail without these entries.
2. Create forward and reverse DNS records for the Linux machine.
3. Make sure the NTP service on the Linux machine is up. Kerberos requires time to be in sync.
4. The Linux machine has to be part of the AD domain.
5. Port 2049 must be open between client and server.
Scenario 1: Unable to "cd" into the mounted export; "Access denied" is returned.
[aduser@krbcentos ~]$ sudo mount -t nfs -o vers=4.0,sec=krb5 filesknuckles.labs.local:/krb_nfs ~/mnt/krb_nfs/ --verbose
Solution:
1. Make sure the NFS client is whitelisted to mount the NFS export: Prism -> File server -> Share/Export -> Update -> Settings -> Default Access
2. Verify that the current user has a valid Kerberos ticket from the DC:
[aduser@krbcentos ~]$ klist
3. Get a ticket using "kinit" command:
[aduser@krbcentos ~]$ kinit
4. Verify that the user has a valid ticket now:
[aduser@krbcentos ~]$ klist
5. Try remounting again:
[aduser@krbcentos ~]$ sudo umount mnt/krb_nfs/
6. cd into the mount point:
[aduser@krbcentos ~]$ cd mnt/krb_nfs/
Scenario 2: Newly created files on the NFS export belong to "nobody:nobody" or the anonymous UID/GID 4294967294.
Solution:
1. Make sure the domain set in /etc/idmapd.conf matches the current AD domain name:
[aduser@krbcentos ~]$ grep -i domain /etc/idmapd.conf
2. Verify if the current Linux user is listed as part of AD:
[aduser@krbcentos ~]$ id
3. From the FSVM, verify that the current user can be looked up from AD:
nutanix@NTNX-10-XX-XX-XX-A-FSVM:~$ wbinfo --user-info='dur\aduser'
4. Make sure the "All Squash" option is not set under the NFS export options.
Scenario 3: Unable to mount an NFS export from a Windows machine that is part of the AD domain.
Solution: Files does not support Kerberos 5, Kerberos 5i, and Kerberos 5p with the NFSv3 protocol. By default, Windows clients mount NFS using NFSv3 (see the Files Guide https://portal.nutanix.com/#/page/docs/details?targetId=Files-v3_6:Files-v3_6). Recommend mounting from a client that supports NFSv4.
Scenario 4: Unable to start the rpc-gssd daemon on the Linux client:
[user@localhost ~]$ systemctl status rpc-gssd
Solution: The rpc-gssd daemon is unable to start due to a missing /etc/krb5.keytab file. Create a keytab on the Linux machine using the ktutil command:
1. Run ktutil, which creates a keytab file on the Linux machine. Inside ktutil, create a new hashed record for the domain user, type the password in the console, and store the hash in the krb5.keytab file. The "[email protected]" account is used for this example since it has the rights to add a machine to the domain:
ktutil
2. Verify if there is an existing /etc/krb5.keytab file. If not move the krb5.keytab to /etc/
mv krb5.keytab /etc/
3. Start rpc-gssd daemon again:
systemctl start rpc-gssd
and check the status
systemctl status rpc-gssd
4. rpc-gssd should now be up.
5. Join the domain:
realm join --user=administrator labs.local
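Since the root cause in Scenario 4 is a missing /etc/krb5.keytab, a quick pre-check can save a restart cycle. Below is a minimal sketch; the path argument defaults to the real location but can be pointed at any file to test the logic safely.

```shell
#!/usr/bin/env bash
# Sketch: report whether the Kerberos keytab rpc-gssd depends on is present.
# Defaults to /etc/krb5.keytab; pass another path to exercise the logic.
check_keytab() {
  local keytab="${1:-/etc/krb5.keytab}"
  if [ -s "$keytab" ]; then
    echo "keytab present: $keytab"
  else
    echo "keytab missing or empty: $keytab -- create one with ktutil first"
  fi
}

check_keytab /nonexistent/krb5.keytab
```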
Scenario 5: Unable to mount the export due to a permission denied error, and subfolders do not list correct permissions.
[email protected]@ubuntu20:/nfs$ sudo mount -t nfs -o vers=4,sec=krb5 gfyvr-fs01.gf.local:/business /nfs/business -v
Solution: A review of the DNS server records revealed several manually created PTR (reverse lookup) entries for the Nutanix File Server. Due to the incorrect PTR entries, clients were unable to mount the NFSv4 export. In the example below, three PTR records such as "X.223.25.172.in-addr.arpa name = POLARIS.gf.local." were manually created and were causing issues mounting the NFS export. Even when the export did mount on the Linux client, directory permissions were not cascading to the sub-directories properly. Use the "nslookup" command to validate the PTR records for the Nutanix File Server. The following additional records were found:
nutanix@NTNX-172-25-222-2-A-FSVM:~$ nslookup 172.XX.XX.1
Validate the same DNS details on the Nutanix File Server as shown below:
nutanix@NTNX-172-25-222-2-A-FSVM:~$ afs dns.list show_verified=true
Please remove any incorrect PTR entries such as "X.223.25.172.in-addr.arpa name = POLARIS.gf.local." in the example above from the DNS Server.
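To spot the multiple-PTR condition quickly, the nslookup output can be filtered for answer lines. The sketch below counts "name =" answers; more than one answer for a single FSVM IP suggests stale manual reverse-lookup entries. The sample input is illustrative, not real customer data.

```shell
#!/usr/bin/env bash
# Sketch: count "name =" answer lines in `nslookup <FSVM IP>` output.
# More than one PTR answer per FSVM IP indicates stale manual entries.
count_ptr_answers() {
  grep -c 'name = ' || true
}

sample='2.222.25.172.in-addr.arpa name = NTNX-FS01-1.gf.local.
2.222.25.172.in-addr.arpa name = POLARIS.gf.local.'
printf 'PTR answers: %s\n' "$(printf '%s\n' "$sample" | count_ptr_answers)"
# -> PTR answers: 2
```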
|
KB10875
|
Troubleshooting guide PC entity filters display issues for users after applying custom or built in roles
|
This KB is intended to provide guidance when facing issues in which a determined combination of user and role does not seem to be working accordingly and it is not possible to manage designated entities
|
There have been instances in which the entity filtering on Prism Central is inaccurate, leading to users being unable to manage their designated entities. This applies to users mapped to a custom or built-in role via the "Manage Assignment" wizard (Administration -> Roles). This KB is intended as a troubleshooting guide for cases in which a given combination of user and role does not seem to be working accordingly and it is not possible to manage designated entities.
|
This section explains the different entities that work together to deliver this functionality, how they interact with each other, and how to check whether they are working as intended. First, it is necessary to know which entities are involved in this operation:
useruser_groupdirectory_serviceroleaccess_control_policy
All of these are v3 entities and can be accessed via the nuclei utility on PC. Currently, only external users (AD or OpenLDAP) can be mapped to a role via the "Manage Assignment" functionality in Prism. Before the role assignment is created, both the role and the user or user group exist as individual entities. When the user is assigned to a role and a set of entities, a new Access Control Policy (ACP) entity is created. The last part, linking entities to the user and the role, is also called "defining the scope" and is what completes the ACP creation. In nuclei, it is possible to see which ACPs are in place for each user or user_group by querying those entities directly, and to identify which users and roles make up an ACP by querying the ACP directly. If the role is queried, only information about the permissions is visible and not, for example, which users are linked to the role (that is why we have ACPs).
1. The first step is to collect the specific user, user_group, role(s) and entities (ACPs) from the customer that are subject of the investigation.
Example
In order to provide more clarity, consider the following example: Test_role is created with Full Access permissions on everything. Test_user is assigned to Test_role, and the scope defined is only "All AHV VMs" entities.
These are some excerpts (no complete outputs) of data printed by nuclei when checking what was explained above:
nuclei user.get 64be6808-ab5e-5d15-bfbb-c015766306d4 (UUID for test_user)
The default-6267e572b24f ACP was created by the system right after Test_user was mapped to Test_role.
nuclei access_control_policy.get d0f708e8-c6fc-49a2-8840-803962497bf4
From the above ACP output, we can see whether the proper "user to role" mapping exists and that the filter list matches the defined scope. By default, all SELF_OWNED entities and all Category entities are part of the scope and, as configured, all VM entities are too.
2. Once the required UUIDs (for users, ACPs and Roles) have been identified and the proper mapping has been confirmed, enable aplos debug to trace from the logs whether this is being handled as it should when the user logs into the UI (debug mode can be enabled for the Aplos service live using KB-15230 https://portal.nutanix.com/page/documents/kbs/details?targetId=kA07V0000004T2rSAE).
1. Turn on the debug gflags
echo "--debug=True" > ~/config/aplos.gflags
2. Restart aplos
genesis stop aplos ; cluster start
3. With Aplos debug enabled, running the following command should show the different entities being filtered and the respective ACP associated with them, as identified in the first step:
egrep -A1 "There are [[:digit:]] ACPs with permission to view" aplos.out
For example, some entities that the user is not allowed to interact with:
2021-03-19 08:10:59,328Z DEBUG acp_processor.py:56 There are 0 ACPs with permission to view entity of kind recovery_plan
On the other hand, this is how the filter looks for VM entity:
2021-03-19 08:11:01,266Z DEBUG acp_processor.py:56 There are 1 ACPs with permission to view entity of kind vm
In this example, we are concerned with Test_user being able to interact with All AHV VMs, so only this will be analyzed. The above output already displays the ACP UUID and the Role UUID; since the ACP is unique for every "user to role to entity" combination, it can be assumed that this entry corresponds to the filtering being done for Test_user. A new value is presented in that output: the Filter UUIDs. The ACP information in nuclei did not include these UUIDs, only their configuration. Now that we have a timestamp, we can see whether the right filtering is occurring:
2021-03-19 08:11:01,267Z DEBUG idf_entity_cache.py:62 Checking cache for key 309ed323-942f-5b9c-8fbe-a89fd0a5a5b2
The three filters are being validated (the two defaults for Category and SelfOwned entities, and the manually configured "All AHV VMs"). Then we can see a confirmation of this validation:
2021-03-19 08:11:01,271Z DEBUG list_or_groups_engine.py:199 ENTITY TYPE IN IDF vm
This is a successful example of Test_user logging in and the default-6267e572b24f ACP being used to filter the entities based on the permissions and the scope (filter) defined when the user was assigned to Test_role. If the filtering is not happening correctly, the output above should provide some insight into where and what is not working.
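When scanning a large aplos.out, it helps to list only the entity kinds the user has no ACPs for. Below is a minimal sketch over the debug-line format shown above; the sample lines are abbreviated copies of that format and may not match every aplos version.

```shell
#!/usr/bin/env bash
# Sketch: from aplos debug output, print entity kinds with 0 matching ACPs
# (i.e. kinds the user cannot view). Reads log lines on stdin.
zero_acp_kinds() {
  sed -n 's/.*There are 0 ACPs with permission to view entity of kind \([a-z_]*\).*/\1/p'
}

sample='2021-03-19 08:10:59,328Z DEBUG acp_processor.py:56 There are 0 ACPs with permission to view entity of kind recovery_plan
2021-03-19 08:11:01,266Z DEBUG acp_processor.py:56 There are 1 ACPs with permission to view entity of kind vm'
printf '%s\n' "$sample" | zero_acp_kinds
# -> recovery_plan
```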
Conclusion
If, after following the steps above, the filtering is not as expected due to missing or incorrect entities, further investigation will be required to understand why it is failing. In those cases, collect the logs (including aplos debug) and the nuclei output for the impacted user, and open a TH or ONCALL to get further assistance.
|
KB1277
|
Data collection and download via Nutanix FTP server ****Internal Article****
|
Provides ftp server login details and describes work flows for using the ftp server.
|
When working on customer cases it is common to need data to be uploaded from their environment and analyzed locally. This document provides an overview of how to upload to the Nutanix FTP server and how to retrieve files from that server.An additional method exists for Dell customers - please refer to KB 3674 http://portal.nutanix.com/kb/3674.
|
Nutanix FTP server details
Public: ftp.nutanix.com (currently 192.146.155.27)
Internal: ftp.corp.nutanix.com (10.22.95.45)
Public upload account user / pass: inbound / nutanix/4u
Public download account user / pass: upgrade / Nutanix/4u (note the capital "N")
The upload account lands the user in /home/sre/inbound (presented as the root directory to the FTP user). The user cannot perform a directory listing, but can change directories if they are aware of the structure, so they can, for example, put files in /<casenumber>/.
The upgrade account lands the user in /home/sre/software. Directory listings are allowed; however, uploads and file modifications/deletions are disallowed.
FTP, both internally and externally, is initiated on the standard port - 21.
SSH is available only on the internal NIC and is on the standard port - 22.
Internal user accounts - the account details are available in Confluence, in the Uploading Logs https://confluence.eng.nutanix.com:8443/display/SPD/Uploading+Logs page.
The internal accounts can use FTP or SSH to connect and can therefore perform transfers using SCP if desired.
Note: at least for the RTP office, there is no direct access to ftp.corp.nutanix.com (it resolves but is not pingable, and sftp will not connect). The workaround is to connect to COT and access the FTP server from that host. You can download the files from COT to your local system.
Direct download access
The following commands can be used to download files directly to a customer's SVM (or any other *nix box) at customer sites if the customer has defined access to the FTP service from the internal networks. If access is not defined directly to the SVM, then use a client machine with access and a regular browser or command line FTP utility. Once downloaded, copy them internally (typically via SCP) to their desired final destination.
FTP server access via curl:
curl -v -u upgrade:Nutanix/4u --ftp-pasv ftp://ftp.nutanix.com/Bayou-GA-2.6.2/nutanix_installer_package-release-bayou-2.6.2-stable-85f46b4421f41c883fae660090620716e80b40d9.tar.gz -o <destination_file.tar.gz>
HTTP server access via curl: this no longer works. Please use FTP or discuss with IT if there is a genuine need for this access.
Direct upload access
The following command is useful for uploading collected customer data from a CVM directly to the FTP server, using curl. It automatically creates the directory (e.g. case1234/) and uploads the file there.
curl -v -u inbound --ftp-pasv --ftp-create-dirs --upload-file /home/nutanix/myfile.tar.gz ftp://192.146.155.27:21/case1234/
SSH shell access
Shell access to manage the FTP content is useful and available via ssh as the sre user. For example, you can check the md5sum of files, or remove a file uploaded by mistake (which is not possible using the FTP client).
ssh [email protected]
cot$ ssh [email protected]
Last login: Mon Jun 23 22:28:40 2014 from cot.corp.nutanix.com
sre@dp1-ftp:~$ cd /home/sre/inbound/case1234
sre@dp1-ftp:~/inbound/case1234$ md5sum *
3b2887b16408cdc7d306f7304eba3693 health_check.out
sre@dp1-ftp:~/inbound/case1234$ exit
logout
We have seen an issue where customers/SREs failed to FTP the log bundle because FTP was working in active mode on the client side. In active mode, the command and data channels use ports 21 and 20, and the server-initiated data connection is often blocked by firewalls; passive mode avoids this. Execute the below command to switch from active to passive:
quote pasv
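The passive-mode upload can be wrapped in a small helper that derives the target directory from the case number. This is a sketch only: the server IP and credentials are the ones given earlier in this article, while the case number and file path in the usage line are hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: build the passive-mode curl upload command for a given case number
# and file. Prints the command; run it from a host with FTP access.
build_ftp_upload_cmd() {
  local case_no="$1" file="$2"
  printf 'curl -v -u inbound --ftp-pasv --ftp-create-dirs --upload-file %s ftp://192.146.155.27:21/case%s/\n' \
    "$file" "$case_no"
}

build_ftp_upload_cmd 1234 /home/nutanix/myfile.tar.gz
```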
|
KB16452
|
Genesis service goes into a crash loop during single_ssd_repair workflow
|
Observing Genesis crashes. The single_ssd_breakfix task keeps running forever due to a missing svmrescue.iso on host
|
Identification Steps :
single_ssd_repair tasks are seen in a stuck state at "sm_post_rescue_steps" and the Genesis service is found to be in a crash loop. Checking "ssd_repair_status" shows that the task is at the "sm_post_rescue_steps" stage:
nutanix@CVM:~$ ssd_repair_status
Checking genesis.out on the Genesis leader we can observe the crash with the following error signatures
2024-02-15 11:24:17,341Z CRITICAL 55616496 decorators.py:47 Traceback (most recent call last):
The CVM for which the repair task was initiated reports UP and running at this stage
nutanix@CVM~$ cs | grep -v UP
single_ssd_breakfix ergon task is still running on the cluster
nutanix@CVM:~$ ecli task.list include_completed=0
ZK entry for single_ssd_repair is still populated:
nutanix@CVM:~$ zkcat /appliance/logical/genesis/single_ssd_repair_status
Checking svmrescue.iso, it doesn't exist on the affected CVMs host:
[root@Host ~]# ls -lrth /var/lib/libvirt/NTNX-CVM/svmrescue.iso
|
Cause :
The issue is caused because the workflow cannot find and delete svmrescue.iso on the respective AHV host. The workflow ideally deletes the svmrescue.iso, but in this case svmrescue.iso was already missing from the host before the delete step, so the workflow is stuck at the stage where it is trying to delete a file that is not present. Check with the customer, and review bash_history on the CVM/hosts, to determine whether manual intervention occurred during the single_ssd_repair workflow.
Workaround:
Note: Proceed with the workaround steps only if all signatures in the identification steps match the issue.
Since the ssd_repair process is unable to find and delete the svmrescue.iso on the host, we can manually create a dummy file named svmrescue.iso for the workflow to find and delete, which should complete the task and fix the issue.
[root@Host ~]# cd /var/lib/libvirt/NTNX-CVM/
After this, the task should automatically proceed and complete, and the Genesis service should stop crashing. ENG-634653 https://jira.nutanix.com/browse/ENG-634653 was created to track the UTF error issue; the ASCII error occurred because the workflow was unable to find and delete the svmrescue.iso.
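The workaround boils down to creating an empty svmrescue.iso where the workflow expects it, then letting the workflow delete it. The sketch below exercises that logic against a throwaway temporary directory; on a real AHV host the directory would be /var/lib/libvirt/NTNX-CVM/ as shown above, and you would only touch it after matching all identification signatures.

```shell
#!/usr/bin/env bash
# Sketch of the workaround logic, run against a throwaway directory instead
# of the real /var/lib/libvirt/NTNX-CVM/ path on the AHV host.
set -euo pipefail
cvm_dir="$(mktemp -d)"            # stand-in for /var/lib/libvirt/NTNX-CVM/
touch "${cvm_dir}/svmrescue.iso"  # dummy file for the workflow to delete
if [ -f "${cvm_dir}/svmrescue.iso" ]; then
  echo "dummy svmrescue.iso created"
fi
rm -rf "$cvm_dir"
# -> dummy svmrescue.iso created
```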
|
KB5569
|
Degaussing disks voids warranty
|
Degaussing the drives voids the warranty with our suppliers and is not permitted under our terms and conditions
|
Certain customers have policies in place to destroy all the data on the drives before returning. With degaussing, not only the data is destroyed, but the HDD is made inoperable.
|
Degaussing the drives voids the warranty with our suppliers, and this is not permitted under our terms and conditions. Customer would need to buy non-returnable hard disk drive option if they want to have drives destroyed.For more information please check the below links:Support FAQs: https://www.nutanix.com/support-services/product-support/faqs/ https://www.nutanix.com/support-services/product-support/faqs/Nutanix Limited Warranty: http://download.nutanix.com/misc/764-0003-0001_RevB_Nutanix-Limited-Warranty.pdf http://download.nutanix.com/misc/764-0003-0001_RevB_Nutanix-Limited-Warranty.pdf
|
KB15054
|
Nutanix Files - Unable to connect to WORKGROUP ads_connect
|
FSVM domain join operation fails with error "MinervaException: User credentials failed with error - Unable to connect to WORKGROUP"
|
Nutanix Files domain join operation fails with the error "MinervaException: User credentials failed with error - Unable to connect to WORKGROUP".
Get the file server VIP:
nutanix@NTNX-B-CVM:~$ ncli fs ls
SSH to the Virtual IP
ssh FS-VIP << Replace it with the virtual IP you got from the previous step
Get the file server info
nutanix@NTNX-10-1-106-196-A-FSVM:~$ afs fs.info
Minerva leader = the AFS leader FSVM IP. SSH to the Minerva leader IP. In /home/nutanix/data/logs/minerva_nvm.log on the Minerva leader FSVM, you can observe the error below:
ERROR 61518064 minerva_task_util.py:2664 Exception: User credentials failed with error - Unable to connect to WORKGROUP
The issue is not related to credentials; it can happen if any of the required ports between the FSVM and Active Directory are blocked by the firewall, for example DNS (port 53), Kerberos (port 88), or LDAP (port 389). Test port connectivity using the following command from the FSVM:
nutanix@fsvm:~$ nc -v <AD/DNS server ip> <port number>
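The nc test above can be looped over the standard AD ports. Below is a minimal bash sketch using the /dev/tcp pseudo-device (so it works even where nc is not installed); the server address in the usage loop is a placeholder for your AD/DNS server IP.

```shell
#!/usr/bin/env bash
# Sketch: probe TCP reachability of AD-related ports from an FSVM.
# Uses bash's /dev/tcp; replace the host with your AD/DNS server IP.
check_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} open"
  else
    echo "${host}:${port} closed"
  fi
}

for port in 53 88 389; do
  check_port ad.example.com "$port"
done
```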
|
Please ensure all required ports are open through the firewall between the FSVM and Active Directory. See the list of required ports on the Portal https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=Files.
|
KB2292
|
Model Number for Haswell and later
|
This article lists the model numbering framework for newer CPU Generation (Haswell G4) and above.
|
Moving forward, a new numbering scheme has been designed to consolidate future models into fewer use case types. The Compute Heavy series (NX-3050, NX-3051, NX-3060, NX-3061) will consolidate into the NX-3060-G4 series. The High Performance series (NX-6050, NX-6060, NX-6070, NX-6080) will become the NX-8035-G4 series. Storage Heavy applications (NX-6020) will be modelled as NX-6035-G4. GPU nodes for VDI will be part of the 3000 series, along the lines of NX-3050X-G4 (TBD). All-flash nodes may become 8000 or 9000 series (TBD). All new models are Configured To Order (CTO). Existing models (IvyBridge and lower) will retain their existing numbering scheme. Refer to KB1136 http://portal.nutanix.com/kb/1136 and KB1324 http://portal.nutanix.com/kb/1324 to determine the Nutanix model number.
|
Below explains the numbering scheme in more detail:
A - 1st digit denotes use case (same as today)
B - 2nd digit denotes number of nodes in a block. Marketing would always use 0 here (same as today)
C - 3rd digit denotes chassis form factor
D - 4th digit denotes drive bay form factor
Possible models:
Note: NX-1000 Series will be on IvyBridge till Q4FY15
[
{
"A": "NX-3060-G4",
"B": "-",
"C": "-",
"D": "-"
},
{
"A": "-",
"B": "NX-8035-G4",
"C": "NX-6035-G4",
"D": "-"
},
{
"A": "-",
"B": "NX-8050-G4",
"C": "-",
"D": "-"
},
{
"A": "-",
"B": "-",
"C": "-",
"D": "-"
},
{
"A": "NX-3070-G4",
"B": "-",
"C": "-",
"D": "-"
}
]
|
KB11921
|
NCC Check sed_key_availability_check Reports KMS as "missing SED disk passwords"
|
A customer who has a KMS missing SED disk passwords will fail an upgrade precheck and/or the ncc health_checks key_manager_checks sed_key_availability_check.
|
The official KB for the 'NCC Health Check: key_manager_checks' is KB8223 and can be viewed here https://portal.nutanix.com/kb/8223. This KB is for a specific issue where one of the KMS servers is missing SED disk passwords. A customer attempting an upgrade or running the ncc health_checks key_manager_checks sed_key_availability_check will fail the health check if any of their KMS servers are missing the SED disk passwords. An example NCC failure will look similar to:
Running : health_checks key_manager_checks sed_key_availability_check
To confirm that the failure is legitimate, manually rerun the NCC check in debug mode:
ncc --use_rpc=0 --debug health_checks key_manager_checks sed_key_availability_check
Once the check completes, on the CVM from which the check was run, review the /home/nutanix/data/logs/ncc.log file for the following 'bad' output (i.e. grep NOT_FOUND /home/nutanix/data/logs/ncc.log). From the output, note the 'NOT_FOUND' message at the end of the response from the KMS. This indicates that we are receiving a response from the KMS and that it cannot find the key/password:
SENDING:
An example of a good send/response is shown here. From the output, note the 'REDACTED_0' at the end of the response from the KMS. This is the expected response when the KMS has the password for the SED:
SENDING:
In an instance where the logs contain a 'NOT_FOUND', that is considered a failure and indicates that the KMS is missing keys for the SED. In the particular environment for the example used in this KB, the customer had two Vormetric KMS configured:
nutanix@CVM:~$ ncli key-management-server ls
As seen in the NCC check ONLY the .102 KMS was being called out in the check:
Node x.x.x.1:
This indicated that just one of the two KMS were having issues. Interestingly when running the self_encrypting_drive script against each disk on one of the nodes being called out in the check, they all pass:
nutanix@CVM:~$ sudo $(which self_encrypting_drive) secure_status disk=/dev/sdX
This is because the self_encrypting_drive script stops when it succeeds; one of the two KMS in this example had the passwords, masking the fact that the other KMS did not. The NCC check is more strict and will report a failure if EITHER KMS has an issue.
|
The solution for this scenario is to ask the customer to reach out to their KMS vendor to determine why there are missing passwords. Providing the DEBUG level statements shown in the description from ncc.log with the bad 'NOT_FOUND' response will help the vendor with problem isolation.
|
KB1624
|
NCC Health Check: vdisk_count_check
|
The NCC health check vdisk_count_check verifies the total number of vDisks on a cluster and will flag any cluster with vDisk count above the defined threshold.
|
The NCC health check vdisk_count_check verifies the total number of vDisks on a cluster and will flag any cluster with vDisk count above the defined threshold.
What is a vDisk?A vDisk is any file of size over 512KB stored on Nutanix DSF (Distributed Storage Fabric), including .vmx and VM hard disks. vDisks are composed of extents that are grouped and stored on disk as an extent group.
What is Shell vDisk?Shell vDisk is a dummy (0 sized) vDisk created on a remote (DR) site as part of retaining the original replication structure found at the source site. Ideally, as part of DR replication, the last vDisk in the chain is replicated from the source cluster with all the data blocks (extents). But, to retain the hierarchy from the source cluster, the intermediate vDisks in the snapshot chain are also created with extent references. These shell vDisks are only needed for replication and serve no other purpose.
vDisk Cluster Limits
The total number of vDisks a cluster can support depends on the AOS version in use:
AOS 4.6.x < 5.5: 200K regular vDisks and up to 1 million shell vDisks.AOS 5.5 and higher: 600K regular vDisks and up to 1 million shell vDisksAOS 5.20 and 6.0 and higher: 600K regular vDisks and up to 4 million shell vDisks
Scenarios Involving High vDisk Counts
High vDisk count is observed on clusters that have DR enabled and are receiving snapshots at aggressive RPOs. The common characteristics of such clusters are:
The cluster is receiving snapshots from other cluster(s).The snapshots are being received at 1-hour intervals, i.e. source cluster is configured to replicate snapshots every 1 hour.NCC check vdisk_count_check will Fail when the number of vDisks is more than the threshold supported on the current AOS version.
Running the NCC check
This check can be run as part of the complete NCC check by running:
nutanix@cvm$ ncc health_checks run_all
Or individually as:
nutanix@cvm$ ncc health_checks system_checks vdisk_count_check
You can also run the checks from the Prism web console Health page: select Actions > Run Checks. Select All checks and click Run.
This check is scheduled to run every day by default. This check will generate an alert after 1 failure.
Sample outputFor status: CRITICAL
Detailed information for vdisk_count_check:
Note: Prior to NCC 4.6.6, the severity of this check is WARN. Output messaging
[
{
"Check ID": "Checks for high vDisk count in the cluster"
},
{
"Check ID": "Aggressive replication/snapshot schedules may generate large number of vDisks on the remote site"
},
{
"Check ID": "Immediately contact Nutanix support for assistance"
},
{
"Check ID": "The cluster may become unstable due to increased load on core Nutanix services"
},
{
"Check ID": "A1182"
},
{
"Check ID": "High vDisk count in the cluster"
},
{
"Check ID": "Number of vdisk_type in the cluster is above the threshold (vdisk_count / vdisk_threshold)"
}
]
|
If this NCC check fails, contact Nutanix Support https://portal.nutanix.com immediately for assistance. Typical actions taken by Nutanix will involve:
Determining the cause of the excessive vDisks on the container (typically related to too-frequent snapshots for the number of vDisks under protection).Altering cluster parameters to process outstanding snapshot tasks.Working with you to determine if your Protection Domain settings require reconfiguration to avoid this going forward.
Note: Excessive Shell vDisks can result in cluster instability. The cluster component stability issues can cause intermittent storage unavailability.
|
KB13566
|
NDB - Provisioning of new database server fails when VM's nic is VMXNET3
|
Provisioning of a new MS SQL database VM fails on Vmware Hypervisor when the VM has VMXNET3 nic
|
Provisioning of MSSQL server VMs from a registered server may fail with an error that Sysprep timed out. Inspecting the newly created VM, it will be found to have networking configured at the ESXi hypervisor level: the NIC is present and connected, and the NIC type is VMXNET3. The VM will be powered on, but inside the OS the network will not be configured.
|
Change the network adapter on the source VM to E1000. The VM may have to be unregistered and re-registered in NDB as well. Redeploying the VM with the E1000 NIC will then work.
|
KB15344
|
NCC failed to execute. Exception: need more than 1 value to unpack
|
NCC 4.6.5.1 may fail to update via LCM direct upload but manual install completes successfully. This may result in a scenario where the command "ncc --version" will fail with error 'NCC-failed-to-execute-Exception-need-more-than-1-value-to-unpack'
|
An attempt to upgrade NCC to version 4.6.5.1 via LCM Direct upload may fail with the following error
Operation Failed. Reason: Update of NCC failed on XX.XX.XX.161 (environment cvm) at stage 1 with error: [Failed to get installed NCC version.] Logs have been collected and are available to download on 10.x.x.x at /home/nutanix/data/log_collector/lcm_logs__10.x.x.x__2023-08-14_14-44-58.530554.tar.gz
Attempting to query the version from any CVM on the cluster gives the following message:
nutanix@NTNX-XXXX-A-CVM:XX.XX.XX.161:~$ ncc --version
Upgrading manually using the installer file successfully upgrades NCC to 4.6.5.1. However, the error "NCC failed to execute. Exception: need more than 1 value to unpack" may still appear when checking the version.
|
1. Check the cluster for the presence of the cluster_health.gflags file and confirm whether it has any extra lines:
Sample output with the extra line
2. If an extra line is present, remove it by logging into each CVM and editing the cluster_health.gflags file.
Sample output after removal of extra line
3. Run the 'ncc --version' command to check and confirm if the correct NCC version is displayed.
nutanix@NTNX-XXXX-A-CVM:XX.XX.XX.161:~$ ncc --version
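The manual edit in step 2 can be scripted: strip blank lines from the gflags file so only the flag definitions remain. The sketch below operates on a temporary file with a hypothetical flag name; on a cluster you would target ~/config/cluster_health.gflags on each CVM after taking a backup. This is an illustration of the edit, not an official remediation script.

```shell
#!/usr/bin/env bash
# Sketch: remove blank/extra lines from a gflags file. Demonstrated on a
# temp file with a hypothetical flag; point it at the real
# ~/config/cluster_health.gflags in an actual cleanup.
set -euo pipefail
gflags="$(mktemp)"
printf -- '--example_flag=true\n\n' > "$gflags"  # flag line + stray blank line
sed -i '/^[[:space:]]*$/d' "$gflags"             # drop empty lines
cat "$gflags"
rm -f "$gflags"
# -> --example_flag=true
```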
|
KB16663
|
Management cluster's host memory resource utilization issue
|
Management cluster's host memory resource utilization issue
| null |
Issue
DKP uses the ClusterAPI architecture, which uses a Kubernetes cluster to manage a Kubernetes cluster. This can present resource utilization issues on the machine hosting the management cluster, especially when the management cluster is provisioning a large number of nodes for the workload cluster.
Cause
When the management cluster provisions a workload cluster, it spins up one pod per node being provisioned, which executes the Ansible playbook. This might cause the host to reach the memory allocation limit, and might cause some pods to be terminated.
Solution / Workaround
Our Engineering team is continually working on making the CAP* provisioners more efficient. Our suggested workaround at the moment is to leverage Cluster API's capability of pivoting the management cluster and/or scaling the machine deployment.
With Cluster API's pivot capability, you move management of the cluster lifecycle to the workload cluster. This gives you a variety of options for deploying a large number of nodes. Consider the strategies below:
1. Create a small workload cluster, then make it self-managed. Once the cluster is self-managed, provision the additional large number of nodes that you require.
2. ClusterAPI also has a CRD object called machine, created from a machinedeployment, which can be scaled up or down. This can be used to create a small self-managed cluster first, and then scale the machinedeployment to the desired replicas.
With the two strategies mentioned, you are essentially spreading the provisioning workload across multiple nodes instead of the single node used when the cluster is not pivoted.
Note: The information above is only a high-level overview of the strategies and does not include the details for each provider. Please see the specifics of the provider being used when implementing the recommendation.
|
KB8676
|
[ESXi] Prism 1-Click Hypervisor Upgrade Hung at 14%
|
There is currently (as of 12/2/19) an issue with the ESXi hypervisor pre-upgrade not failing within Prism when it is supposed to - this KB addresses that issue
|
When attempting an ESXi hypervisor upgrade, the customer may run into an issue where the upgrade gets hung at 14%. When checking the upgrade status via CLI, output similar to the following will be displayed:
nutanix@cvm:192.xx.xx.xx:~$ host_upgrade_status
In a normal situation the upgrade would not have started because the test_cluster_config pre-upgrade check would have failed - see KB 6409 https://portal.nutanix.com/kb/6409 for further details. Currently (as of 12/2/19) there is an issue with the pre-upgrade check not failing within Prism, thus letting the hypervisor upgrade start. The hypervisor upgrade will stall out around 14% due to the ESXi host not being managed by the vCenter Server specified within Prism. ESXi hosts not being managed by the vCenter Server can occur if a customer recently added hosts to the Nutanix cluster but did not add them to the VMware cluster as well. This issue can also occur if there are connection issues between the ESXi host(s) and vCenter.The issue of the pre-upgrade check not properly failing is being tracked under ENG-271059 https://jira.nutanix.com/browse/ENG-271059. To verify that you are hitting this issue, the pre-upgrade check will pass in Prism but you will see something similar to the following within the /home/nutanix/data/logs/host_preupgrade.out log:
2019-11-22 14:38:30 INFO zookeeper_session.py:131 host_preupgrade is attempting to connect to Zookeeper
|
To resolve this issue, follow the steps below:
Cancel the upgrade by following "Cancelling One-Click Hypervisor Upgrade" within KB 2075 https://portal.nutanix.com/kb/2075.
Once the upgrade has been cancelled, ensure the tasks have been cleared from Prism - KB 1217 https://portal.nutanix.com/kb/1217.
Once the upgrade has been cancelled, check to see if the ESXi host is managed by the vCenter Server that is configured within Prism - if it is not, add it to the vCenter Server by going through the steps below:
Launch the vSphere Web ClientNavigate to the Host and Clusters viewUnder the appropriate vCenter Server click the drop-down and right-click on the appropriate Data Center and select Add Host
Once the host has been added to the vCenter Server, kick off the hypervisor upgrade.
NOTE: If the ESXi host is already a part of the vCenter Server, troubleshoot any possible connectivity issues between the host and vCenter; also verify that the vCenter Server credentials are correct.
|
KB16430
|
Stale ESXi datastores causing esxcli/localcli storage commands to hang
|
This KB outlines how a stale datastore configuration on ESXi hosts will cause storage commands on the ESXi hosts such as ‘esxcli storage nfs list’ and ‘localcli storage nfs list’ to take longer than expected, sometimes up to a minute to return any output.
A customer may have stale datastores configured on their ESXi hosts for any number of reasons, such as previously having 3rd party backups or having an unsupported external datastore configuration.
This will cause the ‘check_storage_access,’ ‘esx_scratch_setting_check,’ and ‘esxi_staleworlds_check’ NCC checks that rely on the output of the storage commands to reach the timeout threshold of 30 seconds.
|
On ESXi clusters, the check_storage_access, esx_scratch_setting_check, and esxi_staleworlds_check NCC checks that rely on the output of the following storage commands will reach the timeout threshold of 30 seconds and fail, causing upgrade and LCM pre-check operations to also fail.
Detailed information for esxi_staleworlds_check:
Check the behavior of the following storage commands on the ESXi hosts when used with the ‘time’ command:
nutanix@cvm:~$ allssh 'ssh [email protected] time esxcli storage nfs list'
nutanix@cvm:~$ allssh 'ssh [email protected] time localcli storage nfs list'
The time taken for the ESXi hosts to return the expected output can exceed 30 seconds:
Volume Name Host Share Accessible Mounted Read-Only isPE Hardware Acceleration
*Note: If the cluster is part of a metro-availability configuration and the datastores are showing as "Read-Only : true," verify if the issue outlined in KB-8283 https://portal.nutanix.com/kb/8283 applies.

If the cluster is not part of a metro-availability configuration, and one or more of the ESXi hosts are displaying the above behavior while the datastores are mounted and not in read-only mode as expected, analyze the datastore configuration further on the ESXi hosts. The 'esxcli storage nfs list' and 'localcli storage nfs list' commands may show valid/active datastores, but there may still be a stale datastore configured on the ESXi hosts.

Use the following configstorecli ESXi command to check for any invalid datastores that may remain on the ESXi hosts:
nutanix@cvm:~$ hostssh 'configstorecli config current get -c esx -g storage -k nfs_v3_datastores'
Sample output:
[
In the output above, any active shared Nutanix datastores/containers within the ESXi cluster will be shown with the 192.168.5.2 internal IP storage path and their respective datastore name and path. However, any stale non-Nutanix shared datastores will have a different IP or FQDN in the "hostname" field, and an invalid directory path that is not shown in the output of the 'esxcli storage nfs list' and 'localcli storage nfs list' commands.

Verify that the stale datastore also does not show up as a mounted VMFS filesystem on the ESXi hosts:
nutanix@cvm:~$ hostssh 'ls -l /vmfs/volumes/'
Verify further that the stale datastore is not contained within the cluster’s Zeus configuration:
nutanix@cvm:~$ zeus_config_printer | grep container_name
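The identification steps above can be sketched as a small check that flags any NFS datastore entry whose "hostname" is not the internal 192.168.5.2 storage path. The sample file and JSON field names below are assumptions for illustration only; on a live host, feed in the real configstorecli output instead.

```shell
# Canned sample standing in for the configstorecli JSON output
# (field names are assumed from the example shape above).
cat > /tmp/nfs_v3_datastores.json <<'EOF'
[
  {"hostname": "192.168.5.2", "share_name": "/ctr1", "volume_name": "ctr1"},
  {"hostname": "old-backup.example.com", "share_name": "/stale", "volume_name": "stale-ds"}
]
EOF

# Flag any entry whose hostname is not the Nutanix internal storage path.
python3 - <<'EOF' > /tmp/stale_report.txt
import json
with open("/tmp/nfs_v3_datastores.json") as f:
    for ds in json.load(f):
        if ds.get("hostname") != "192.168.5.2":
            print("Possible stale datastore:", ds.get("volume_name"), "->", ds.get("hostname"))
EOF
cat /tmp/stale_report.txt
```

Any datastore the sketch reports should then be cross-checked against the mounted filesystems and the Zeus configuration as described above.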
|
To remove the stale datastore on the ESXi hosts, the configstorecli utility must be used since ESXi version 7.0 U1, replacing the previous procedure of editing the /etc/vmware/hostd/config.xml configuration file on earlier versions of ESXi. To read more about the ESXi configstorecli utility, refer to the following VMware KB article: Modify ESXi Configuration Files https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-7AB71AFE-C836-418F-8E01-BEB240805D1F.html

1. SSH to an affected ESXi host, and run the following command to store the current NFS datastore configuration of the ESXi host into a .json file:
[root@esx:~] configstorecli config current get -c esx -g storage -k nfs_v3_datastores -outfile nfs_v3_datastores.json
2. Create a backup of the .json file in case any issues may arise and the configuration may need to be restored:
[root@esx:~] cp nfs_v3_datastores.json nfs_v3_datastores.json.orig
3. Edit the nfs_v3_datastores.json file to remove the entry for the stale datastore from the configuration file:
[root@esx:~] vi nfs_v3_datastores.json
The portion to be removed from the configuration file will be the following entry for the stale datastore:
{
Save and close the file.

4. Run the following command to apply the edited configuration to the ESXi host:
[root@esx:~] configstorecli config current set -c esx -g storage -k nfs_v3_datastores -infile nfs_v3_datastores.json --overwrite
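As an alternative to hand-editing the file in vi at step 3, the stale entries can be filtered out programmatically. This is a sketch against a canned sample; the JSON field names are assumptions, and on a real host you would operate on the file exported by configstorecli in step 1.

```shell
# Canned sample standing in for the file exported in step 1.
cat > /tmp/nfs_v3_datastores.json <<'EOF'
[
  {"hostname": "192.168.5.2", "share_name": "/ctr1", "volume_name": "ctr1"},
  {"hostname": "old-backup.example.com", "share_name": "/stale", "volume_name": "stale-ds"}
]
EOF

python3 - <<'EOF'
import json, shutil
path = "/tmp/nfs_v3_datastores.json"
shutil.copy(path, path + ".orig")        # keep a backup, as in step 2
with open(path) as f:
    entries = json.load(f)
# Keep only datastores served over the Nutanix internal storage path.
kept = [e for e in entries if e.get("hostname") == "192.168.5.2"]
with open(path, "w") as f:
    json.dump(kept, f, indent=2)
print("removed", len(entries) - len(kept), "stale entries")
EOF
```

The filtered file can then be applied with the `configstorecli config current set ... --overwrite` command shown above.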
5. Verify that the storage commands on the ESXi host no longer take the previous amount of time to return their output:
[root@esx:~] time esxcli storage nfs list
[root@esx:~] time localcli storage nfs list
The output should now show that the commands take less than a second to return their output, as expected:
Volume Name Host Share Accessible Mounted Read-Only isPE Hardware Acceleration
6. Repeat the steps above for any other affected ESXi hosts.

7. Once all affected ESXi hosts have their datastore configuration updated and are no longer exhibiting the previous behavior of the storage commands taking too long, verify that the following NCC checks pass:
nutanix@cvm:~$ ncc health_checks hypervisor_checks check_storage_access
nutanix@cvm:~$ ncc health_checks hypervisor_checks esx_scratch_setting_check
nutanix@cvm:~$ ncc health_checks hypervisor_checks esxi_staleworlds_check
If any errors still persist with the NCC checks, refer to the respective KB articles for each NCC check to further triage any remaining separate issues.
KB-2263 http://portal.nutanix.com/kb/2263: NCC Health Check: check_storage_access
KB-2046 http://portal.nutanix.com/kb/2046: NCC Health Check: esx_scratch_setting_check
KB-7833 http://portal.nutanix.com/kb/7833: NCC Health Check: esxi_staleworlds_check
|
}
| null | null | null | null |
KB5479
|
Intel S3610 SSD Substitutions
|
Nutanix platform team validated a series of drives from other vendors to be used as break/fix substitutes for the S3610.
|
Intel announced a sudden end-of-life of their S3610 family of solid state drives (SSD). As a result, their customers were not given any opportunity to perform a last-time buy of the devices they required. The Nutanix platform team validated a series of drives from other vendors to be used as break/fix substitutes for the S3610. Drives from these other vendors did not come in the same capacities as the S3610, so a higher-capacity drive is used in most cases.
|
The following capacity substitutions with either Samsung SM863 or SM863a family of drives are allowed:
See KB-2794 https://portal.nutanix.com/#page/kbs/details?targetId=kA03200000097v5CAA for the procedure for upgrading an SSD to a higher capacity.[
{
"Original S3610 Capacity": "480 GB",
"Original S3610 SKU": "X-SSD-480GB-2.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "N/A",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-480GB-2.5-E"
},
{
"Original S3610 Capacity": "480 GB",
"Original S3610 SKU": "X-SSD-480GB-3.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "N/A",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-480GB-3.5-E"
},
{
"Original S3610 Capacity": "800 GB",
"Original S3610 SKU": "X-SSD-800GB-2.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-960GB-2.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-960GB-2.5-E"
},
{
"Original S3610 Capacity": "800 GB",
"Original S3610 SKU": "X-SSD-800GB-3.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-960GB-3.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-960GB-3.5-E"
},
{
"Original S3610 Capacity": "1200 GB",
"Original S3610 SKU": "X-SSD-1200GB-2.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-1920GB-2.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-1920GB-2.5-E"
},
{
"Original S3610 Capacity": "1200 GB",
"Original S3610 SKU": "X-SSD-1200GB-3.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-1920GB-3.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-1920GB-3.5-E"
},
{
"Original S3610 Capacity": "1600 GB",
"Original S3610 SKU": "X-SSD-1600GB-2.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-1920GB-2.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-1920GB-2.5-E"
},
{
"Original S3610 Capacity": "1600 GB",
"Original S3610 SKU": "X-SSD-1600GB-3.5-C",
"Possible Substitution\t\t\tSKU 1 (SM863)": "X-SSD-1920GB-3.5-C",
"Possible Substitution\t\t\tSKU 2 (SM863a)": "X-SSD-1920GB-3.5-E"
}
]
|
KB13114
|
Nutanix Database Service | MSSQL PIT Restore operation fails: The system cannot find the path specified: 'C:\\NTNX\\ERA_BASE\\logdrive_xx\\logs_x\\xxx'
|
This article describes a scenario where MSSQL PIT restore operation fails with error: The system cannot find the path specified: 'C:\\NTNX\\ERA_BASE\\logdrive_xx\\logs_x\\xxx'
|
Nutanix Database Service (NDB) is formerly known as Era.
On the DB server diagnostics log bundle or its "C:\NTNX\ERA_BASE\" folder (../logs/drivers/sqlserver_database/restore_source/<operation ID>.log), there are exception logs similar to below:
[2022-05-13 11:33:03,727] [3544] [ERROR ] [09211958-e775-443b-9549-802f1b288e9f],Traceback (most recent call last):
On the <operation ID>.log or the eracommon.log in the "./logs/drivers/sqlserver_database/" directory, Era tried "Get-IscsiTarget" repeatedly but was not able to get the expected output:
[2022-05-13 11:32:56,493] [3544] [INFO ] [09211958-e775-443b-9549-802f1b288e9f],DSIP:x.x.x.x
|
According to the Era Port Requirements https://portal.nutanix.com/page/documents/list?type=software&filterKey=software&filterVal=Ports%20and%20Protocols&productType=NDB%20%28Database%20Server%20VM%29, the DB server VM needs to connect to the PE cluster DSIP on TCP ports 3205 and 3260. The above operation fails because the DB server is not able to attach the log drive properly, as the DB server VM fails to connect to the PE cluster DSIP on TCP ports 3205 and 3260. Here is a successful PIT restore operation log snippet:
DB server connects to DSIP and starts "Get-IscsiTarget":
[2022-05-09 15:10:24,455] [3380] [INFO ] [0000-NOPID],DSIP:x.x.x.x
Get the iSCSI target:
[2022-05-09 15:09:57,970] [3380] [INFO ] [ad22f5fb-c7d9-43ea-9324-796e566fce62],x.x.x.x ::$strComputer = $Host
Mount log drive/VG with iSCSI:
[2022-05-09 15:10:03,948] [3380] [INFO ] [ad22f5fb-c7d9-43ea-9324-796e566fce62],powershell path is:
It mounts successfully and proceeds to the next step:
[2022-05-09 15:10:24,471] [3380] [INFO ] [0000-NOPID],command is:powershell.exe -command "chcp 65001 | out-null; C:\NTNX\ERA_BASE\era_engine\stack\windows\python\lib\site-packages\nutanix_era\era_drivers\common\host\power_2ce6ae7a.ps1"
We can test the connection between the DB server VM and the PE DSIP with the below PowerShell commands:
PS/> Test-NetConnection <PE cluster ISCSI Data Services IP> -Port 3205
PS/> Test-NetConnection <PE cluster ISCSI Data Services IP> -Port 3260
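From a Linux machine without PowerShell, an equivalent TCP reachability check can be sketched in bash. The loop below probes 127.0.0.1 purely for illustration; substitute the PE cluster iSCSI Data Services IP, and note that the port list comes from the port requirements above.

```shell
# Minimal TCP reachability probe using bash's /dev/tcp redirection.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} OPEN"
    else
        echo "${host}:${port} CLOSED"
    fi
}

# Replace 127.0.0.1 with the PE cluster iSCSI Data Services IP.
for p in 3205 3260; do
    check_port 127.0.0.1 "$p"
done
```

A result of CLOSED for either port would match the failure signature described in this article.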
As part of the Era Restore workflow:
Era takes a crash-consistent snapshot of the database disk before performing the restore. This crash-consistent snapshot will be restored as part of the Rollback step in case the restore operation fails. When the above PIT restore fails, Era attaches the database from the crash-consistent snapshot and resumes the DB to its previous state. This ensures that the customer gets back the database with the initial content even if the Restore operation fails.
|
KB14203
|
CentOS 7 and Red Hat 7 VMs can hang as a result of a memory hotplug of more than 150 GB
|
CentOS 7 and Red Hat 7 VMs can hang as a result of a memory hotplug of more than 150 GB. This is a CentOS 7 and Red Hat 7 issue (guest OS) which is being fixed on newer versions CentOS 8 and Red Hat 8. Upgrade guest OS.
|
Nutanix has identified an issue with the memory hotplug on CentOS 7 and Red Hat 7 VMs. If a memory hotplug of more than 150 GB is performed, the guest may hang and log soft lockup warnings. Guest hang is gradual and occurs within 1-2 minutes after the memory hotplug has been completed. If softlockup_panic is enabled, guests may see panics. The panic stack for softlock panic looks similar to below.
[ 622.312917] CPU: 0 PID: 528 Comm: systemd-udevd Tainted: G L ------------ 3.10.0-514.el7.x86_64 #1
|
This issue is already fixed on CentOS 8 and Red Hat 8. Upgrade guest OS to resolve the issue.
|
KB10037
|
LCM: Inventory in DELL platforms fails as "[Errno 111] Connection refused error" response from PTAgent
|
Possible causes behind a 111 response from PTAgent
|
LCM inventory fails due to an unresponsive PTAgent with this error in lcm_ops.out:

Sample Output:
EXCEPT:{"err_msg": "Inventory failed with error: [[Errno 111] Connection refused]", "name": "release.dell.firmware-entities.14G.esx.update"}
|
If an iSCSI LUN is attached to the ESXi host, the issue is with PTAgent, and a fix will be provided by Dell in the next PTAgent release. DELL-1813 https://jira.nutanix.com/browse/DELL-1813 is fixed in LCM-2.3.4 based on the retry options added in DELL-1836 https://jira.nutanix.com/browse/DELL-1836, and is waiting on the PTAgent fix. DELL-1836 https://jira.nutanix.com/browse/DELL-1836 applied retry options after the PTAgent service restart in LCM-2.3.4 (the retries were only added for the ESXi hypervisor).
If the customer is running an older LCM version, encourage them to upgrade LCM to 2.3.4 or later. If the above issue is observed in LCM-2.3.4 or later, reach out to Dell Support for a proper RCA of why the PTAgent service is not becoming available on time.
Please note: Connection refused issues can happen for multiple reasons, and we require the root cause from Dell Support. Also, update the provided Dell Support RCA in your Nutanix case.
|
KB11229
|
Prism UI unavailable on VIP after a node removed from the cluster does not release the IP
|
This KB describes an issue in which Prism UI becomes unavailable on the VIP after the node hosting it is removed from the cluster.
This can happen due to a bug which causes Genesis to ungracefully shut down the Prism service, so the removed node keeps the VIP address configured and the new leader cannot take it over.
|
After node removal, the Prism VIP remains configured on the removed node, causing UI unavailability when Prism is accessed using the VIP or FQDN. The following signatures are seen for this issue:
Attempting to log into the VIP address redirects to the removed node. You will see the following message:
node is currently unconfigured
In prism_monitor logs of the new leader you will see the duplicate IP warning. The MAC address corresponds to the interface of the node that has been removed:
/home/nutanix/data/logs/prism_monitor.WARNING:W0419 10:02:47.883441 12155 vip_manager.cc:245]
If you login to the removed node, you will see the node still has VIP configured in the :1 subinterface:
[email protected]:~$ ifconfig -a
You may see the "Prism services have not started yet. Please try again later." error message in your web browser when you try to access the Prism console using the cluster VIP.
|
Root cause
This is a race condition which may happen post-removal. It has been observed in the past due to Prism not releasing the leadership when it goes down at the end of the removal, and the VIP monitor service responsible for unconfiguring the IP address stopping before a new leader gets elected with the Zeus timeout (20 seconds). In TH-6279 https://jira.nutanix.com/browse/TH-6279, the VIP was unconfigured 10 seconds after the last update leadership event:
genesis.out:
2021-04-19 10:02:41 INFO node_manager.py:3493 Unconfigure service: PrismService
vip_service.out logs end before the new leader has been elected:
I0419 10:02:31.821918 3818 zeus_leadership_ops.cc:2247] Updating leader cache for prism_monitor to 10.60.180.18:9080
This results in the IP staying configured on the node, which prevents the new leader from taking it over. For a permanent solution, Engineering is working on ENG-295461 https://jira.nutanix.com/browse/ENG-295461.
Workaround
In order to release the VIP, shut down the node or manually shut down the subinterface where the IP is configured. Once that is done, the VIP can successfully migrate to another node in the cluster, which will happen automatically.
|
KB9032
|
File Analytics - One or more components of the File Analytics VM are not functioning properly or have failed
|
This KB is to assist in diagnosing alerts relating to File Analytics component failure.
|
Customers may receive an alert in Prism for:
One or more File Analytics VM components have failed
This generic alert can be caused by any of the File Analytics services being down or crashing.

Verify all containers are up and running using:
[nutanix@FAVM ~]$ docker ps -a
If any of the three containers are missing, try recomposing using the applicable command below for the missing container(s):
env $(cat /opt/nutanix/analytics/config/deploy.env) docker-compose -f /mnt/containers/config/docker-compose.yml up -d --no-deps -t 120 Analytics_Gateway
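The container check can be scripted by testing for each expected container name in the `docker ps` output. The sketch below runs against canned output rather than a live Docker daemon; the three container names (Analytics_Gateway, Analytics_ES1, Analytics_Kafka1) are taken from the commands elsewhere in this article and may differ per release.

```shell
# Canned `docker ps` output for illustration (Analytics_Kafka1 missing).
cat > /tmp/docker_ps.out <<'EOF'
abc111  Analytics_Gateway  Up 2 days
abc222  Analytics_ES1      Up 2 days
EOF

for c in Analytics_Gateway Analytics_ES1 Analytics_Kafka1; do
    if grep -q "$c" /tmp/docker_ps.out; then
        echo "$c running"
    else
        echo "$c MISSING - recompose the container"
    fi
done
```

On a live FAVM, replace the canned file with the real `docker ps -a` output and recompose any missing container with the applicable docker-compose command above.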
The File Analytics VM has a monitoring service that logs to /mnt/logs/host/monitoring/monitoring.log.INFO
2023-03-27 12:10:23Z,602 WARNING 34675 base_service.py:check_service_status: 79 - The Event Processor service is not running
The above example clearly states that 'Event Processor' is the problem service and would be the starting point for the investigation.

In the same log, if no service is called out directly as 'not running', review the Elastic Search cluster/index status:
[nutanix@FAVM ~]$ egrep "Cluster Status|Index Status" /mnt/logs/host/monitoring/monitoring.log.INFO
Any 'red' index will typically set the overall ES Cluster Status as 'red'. This would be an indicator to check Elastic Search with one of the various scenarios below.

Note: This KB outlines several specific scenarios but should not be considered an exhaustive list. Other factors could cause any of the various services (Elastic Search, Kafka, Metadata Collector, Event Processor, Analytics Gateway, etc.) to be down or crashing (perhaps intermittently).
|
Scenario 1) Sizing / Memory Configuration

Per the sizing guidelines https://portal.nutanix.com/page/documents/details?targetId=Release-Notes-Analytics-v3_2_1:ana-fs-analytics-system-limits-r.html, this will be more common on the default 24 GiB of memory allocated to the FAVM. You will need to increase from 24 GiB to 48 GiB. Inside of the FAVM, there is a resource monitor that polls for changes in memory allocation and will automatically redistribute that memory to the various container services. Reviewing the /mnt/logs/host/monitoring/monitoring.log.INFO output, we see the change being applied and services automatically restarted.
2020-02-27 23:02:01,464 INFO 3076 monitoring_service.py:start_monitoring: 99 -
Once that process completes after several minutes, the File Analytics Dashboard should load appropriately.

Scenario 2) Elastic Search OOM - Java Heap Memory

[nutanix@FAVM ~]$ grep "java.lang.OutOfMemoryError" /mnt/logs/containers/elasticsearch/elasticsearch.out
[2020-02-27T19:34:15,456][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [n5VHN0I] fatal error in thread [elasticsearch[n5VHN0I][write][T#7]], exiting
Verify Heap usage of ElasticSearch using the following:
[nutanix@FAVM ~]$ curl -sS "172.28.5.1:9200/_cat/nodes?h=heap*&v"
heap.current heap.percent heap.max
      16.7gb           70   23.9gb
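The heap figures can be checked against a threshold with a little awk. The 75% warning level below is an illustrative assumption, not an official limit, and the sample file stands in for the live curl output.

```shell
# Canned output matching the `_cat/nodes?h=heap*&v` format above.
cat > /tmp/heap.txt <<'EOF'
heap.current heap.percent heap.max
      16.7gb           70   23.9gb
EOF

# Warn when heap.percent (2nd column) is at or above an assumed 75% threshold.
awk 'NR > 1 { if ($2 + 0 >= 75) print "WARN: ES heap at " $2 "%"; else print "OK: ES heap at " $2 "%" }' /tmp/heap.txt
```

A sustained high heap percentage here, together with OutOfMemoryError entries in elasticsearch.out, points at this scenario.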
Re-Indexing (Applies ONLY to File Analytics instances deployed or upgraded before 3.0)

We will want to verify that Re-Indexing is needed using the following:
[nutanix@FAVM ~]$ curl -s "172.28.5.1:9200/_stats/completion" | python -m json.tool
Compare the completion suggester memory usage in bytes to the allocated memory for Elastic Search.
[nutanix@FAVM ~]$ docker ps
If the completion suggester memory exceeds 20% of the allocated Elastic Search memory, re-indexing will be required. Engineering will need to perform this step, and it will require approximately 12 hours of downtime to File Analytics while this is happening. Please open a TH or ONCALL to allow Engineering to perform this work.

Scenario 3) Audit Events exceeding the limit of 750 million

In the monitoring log, we may see an indication of approaching or exceeding the number of audits allowed, which is currently 750 million for all File Servers. Unfortunately, the alerting logic looks at a per-File-Server count instead of an aggregated total.
2021-03-23 15:04:35Z,901 INFO 5253 common.py:get_cvm_details: 70 - Retrieving CVM details
We can also manually check using the following loop to display the current number of audit events per File Server:
[nutanix@FAVM ~]$ for i in `curl -s 172.28.5.1:9200/_cat/indices?v | grep "fs_audit_log_files" | cut -d "_" -f 5 | cut -d "-" -f 1-5 | sort | uniq`; do echo ""; echo "Audit Event Total for File Server UUID: $i"; curl -s 172.28.5.1:9200/_cat/indices?v | grep "fs_audit_log_files_$i" | awk '{sum+=$7;}END{print sum;}'; done
Further, either add the total or use:
[nutanix@FAVM ~]$ curl -s 172.28.5.1:9200/_cat/indices?v | grep "fs_audit_log_files" | awk '{sum+=$7;}END{print sum;}'
This will show that we are way over that limit, which may be the underlying cause of Elastic Search crashing. To mitigate this, lower the retention https://portal.nutanix.com/page/documents/details?targetId=File-Analytics-v3_2:ana-fs-analytics-retention-config-t.html per File Server to allow the File Analytics Curator to delete older events based on the newly lowered retention. In cases where large ingestion led to an influx of audit events, manual cleanup from Engineering may be required. Please open a TH if lowering the retention did not help clean up within at least 3 days and manual cleanup is required.

Scenario 4) ElasticSearch fails to load all shards

This issue can occur in combination with 'Scenario 2 - Java Heap OOM', where ElasticSearch fails to load all shards and eventually exceeds Heap memory and crashes. This can also be an independent issue keeping ElasticSearch in 'Red' status, as not 100% of all shards have been loaded. Review the percentage of shards loaded by ElasticSearch:
[nutanix@FAVM ~]$ curl -sS "172.28.5.1:9200/_cluster/health?pretty=true"
Note: You will want to run this command a few times. If you notice that 'initializing_shards' is 0, there are still 'unassigned_shards', and the 'active_shards_percent_as_number' is less than 100, you may have a corrupted index or an issue with a particular shard that will require Engineering intervention.

Review the indices in ElasticSearch and look for any with a health status of 'Red':
[nutanix@FAVM ~]$ curl -s 172.28.5.1:9200/_cat/indices?v
Here we see 'fs_audit_log_folders_17af98a3-9830-4b02-9827-fc9763aa312c-2020.06-000001' has a health status of 'Red'. This should be our focal point.

You can grep the index in the monitoring logs to see if the status lists as corrupted.
[nutanix@FAVM ~]$ grep "fs_audit_log_folders_17af98a3-9830-4b02-9827-fc9763aa312c-2020.06-000001" /mnt/logs/host/monitoring/monitoring.log*
Further validation:
[nutanix@FAVM ~]$ curl -XGET 172.28.5.1:9200/_cluster/allocation/explain?pretty
[nutanix@FAVM ~]$ curl -XGET http://172.28.5.1:9200/_cat/shards | grep UNASSIGNED | awk {'print $1'}
Note: To address this, please open a TH or ONCALL (if immediate assistance is needed) so that Engineering can properly address this through deletion or recovery.

Once resolved, we should see all shards in ElasticSearch loaded and status 'Green'.
[nutanix@FAVM ~]$ curl -sS "172.28.5.1:9200/_cluster/health?pretty=true"
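The cluster health JSON can also be evaluated programmatically instead of eyeballed. The sketch below uses a canned response standing in for the live endpoint; the field names match the standard Elastic Search `_cluster/health` output, and the "stuck shard" condition implements the note earlier in this scenario.

```shell
# Canned `_cluster/health` response for illustration.
cat > /tmp/es_health.json <<'EOF'
{"cluster_name": "docker-cluster", "status": "red",
 "initializing_shards": 0, "unassigned_shards": 4,
 "active_shards_percent_as_number": 92.5}
EOF

python3 - <<'EOF' > /tmp/es_verdict.txt
import json
h = json.load(open("/tmp/es_health.json"))
# Matches the note above: 0 initializing, some unassigned, under 100% active.
stuck = (h["initializing_shards"] == 0
         and h["unassigned_shards"] > 0
         and h["active_shards_percent_as_number"] < 100)
print("possible corrupted index/shard - engage Engineering" if stuck
      else "shards still loading or cluster healthy")
EOF
cat /tmp/es_verdict.txt
```

On a live FAVM, replace the canned file with the output of the curl command shown above.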
Scenario 5) ElasticSearch crashes periodically

A new issue as of File Analytics 3.1.0 will trigger the alert "One or more components of the File Analytics VM xx.xx.xx.xx are not functioning properly or have failed" when ElasticSearch crashes periodically trying to close an Index. The error in ElasticSearch is "failed to obtain in-memory shard lock".

/mnt/logs/containers/elasticsearch/elasticsearch.out
[2022-08-02T15:00:52,507][WARN ][o.e.c.r.a.AllocationService] [7d8921e51f62] failing shard [failed shard, shard [fs_audit_log_files_97ba3b3d-08bf-4455-9475-d237b4464d72-2022.07-000006][1], node[7x6SvFz1SmWGWwRydu
This has been mainly seen when Audit Events are approaching the 750 million limit for FA, indices are being closed, and the lock is not being released.
obtaining shard lock for [starting shard] timed out after [5000ms], lock already held for [closing shard] with age [40769ms]
The monitoring service has been updated in File Analytics 3.2.1 to perform periodic checks of ElasticSearch every 30 seconds for up to 5 minutes instead of firing on the first failure. Please manually attach cases to ENG-489753, which is to investigate and remediate ElasticSearch hitting this condition. You may restart the Elastic Search cluster service container to recover the status from "red" to "green" with the following command:
nutanix@favm$ docker stop Analytics_ES1 && sleep 10 && docker start Analytics_ES1
Then, please confirm the status.
nutanix@favm$ curl -sS '172.28.5.1:9200/_cluster/health?pretty=true'
Scenario 6) Kafka Lag leading to the Event Processor crashing

Review errors in /mnt/logs/containers/analytics_gateway/event_processor/event_processor.log.ERROR
2023-02-09 20:44:26Z,899 ERROR 40121 es_utils.py:inner_function: 32 - RequestError(400, u'resource_already_exists_exception', u'index [fs_audit_log_folders_477130b2-59a8-466b-ae1c-8b5dced2ff0e-2022.05-000001/BPx_6Oh9Q029h_7A1yV8sg] already exists') Traceback (most recent call last):
Similarly, in /mnt/logs/containers/analytics_gateway/event_processor/event_processor.log.INFO we see that the Event Processor is constantly restarting as documents are failing to index:
2023-02-09 21:57:09Z,387 WARNING 190383 es_util.py:__ignore_bulk_update_errors: 316 - 667 document(s) failed to index.
To check for Kafka Lag, log into the Kafka container:
[nutanix@FAVM ~]$ docker exec -it Analytics_Kafka1 bash
To check lag, we list out the event groups, then describe them.

Note: The format is 'event_<fs_uuid>'.
root@localhost:/# /opt/kafka_2.10-0.10.0.1/bin/kafka-topics.sh --list --zookeeper localhost:22181
On File Analytics versions later than 3.1.0, you may get an error when running the above command.
[root@localhost /]# /opt/kafka_2.13-3.2.1/bin/kafka-topics.sh --list --zookeeper localhost:22181
You can use the command below to get the desired output:
/opt/kafka_*/bin/kafka-topics.sh --list --bootstrap-server <FAVM IP>:29092
EG:
[root@localhost /]# /opt/kafka_2.13-3.2.1/bin/kafka-topics.sh --list --bootstrap-server 192.168.23.110:29092
A potential workaround is to find the configured log.retention in the Kafka server.properties and lower it from the current value down to 1 (hour).

While inside of the Kafka container, find and update the server.properties config file.

Note: The version in the path for Kafka will vary depending on the release, hence the wildcard '*' is used to find the path. For each command, replace it with the exact path seen in the first output.
[root@localhost /]# ls /opt/kafka*/config/server.properties
Check back in just over an hour and review each step in this scenario again to ensure the lag has been eliminated, then set the value back to what it was before (720 in this case) and restart Kafka.
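The retention change described above can be scripted. The sketch below edits a canned copy of server.properties rather than the live file inside the container, and remembers the previous value so it can be restored after the lag clears.

```shell
# Canned copy standing in for /opt/kafka_*/config/server.properties.
cat > /tmp/server.properties <<'EOF'
log.retention.hours=720
log.segment.bytes=1073741824
EOF

# Save the current value, then lower retention to 1 hour.
old=$(grep '^log.retention.hours=' /tmp/server.properties | cut -d= -f2)
sed -i 's/^log.retention.hours=.*/log.retention.hours=1/' /tmp/server.properties
echo "log.retention.hours was ${old}, now $(grep '^log.retention.hours=' /tmp/server.properties | cut -d= -f2)"

# Once the lag is gone, restore the saved value and restart Kafka:
# sed -i "s/^log.retention.hours=.*/log.retention.hours=${old}/" /tmp/server.properties
```

On the live FAVM, the same sed edit would target the real path found with `ls /opt/kafka*/config/server.properties`, followed by a Kafka restart.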
Scenario 7) File Analytics 3.3.0 ONLY - FAVM main.py consuming large amounts of memory

The following alerts could trigger either in parallel or individually if on FA 3.3.0:
One or more File Analytics VM components have failed
The percent memory available on File Analytics VM <IP> is <%> which is low
Please see KB-15693 https://portal.nutanix.com/kbs/15693 for details on this issue.
|
KB11267
|
AHV node stuck at boot with "A start job is running for dev-disk"
|
AHV node might boot loop with "A start job is running for dev-disk"
|
AHV node might be stuck in a boot loop with "A start job is running for dev-disk".

In past cases where this has been seen, we see the following symptoms:
Node stops responding/drops out of the network
Node is rebooted
On booting, the node is stuck at the screen above - A start job is running for dev-disk
|
The root cause of this issue for AHV platforms is currently unknown and still under investigation. Despite multiple cases having seen this behavior, we still haven't collected the data we need to fully RCA the issue. ENG-416828 https://jira.nutanix.com/browse/ENG-416828 was opened to track this issue but has been closed due to lack of logs; so, if you are seeing this issue, there needs to be a follow-up in that ENG ticket.

1. While stuck in this state, please attempt to boot the host to a rescue target for troubleshooting (without rebooting the host):
See if you are able to interact with the console to accomplish this. You may be able to press "Esc" and then "e" to edit the boot options.
Please reference the following document, which describes how to boot to an emergency target:
https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=AHV&title=Debugging+systemd+boot+issues https://confluence.eng.nutanix.com:8443/pages/viewpage.action?spaceKey=AHV&title=Debugging+systemd+boot+issues
If you are unable to boot to a rescue shell without rebooting the host, please skip to step 3 rather than rebooting the host
2. Once you have booted to the rescue target, please gather these details:
Make screenshots of the console both before and after the timer expires.The contents of /etc/fstabOutputs from the following commands:
lsblk -o +UUID
blkid
mount
Compare the UUIDs (via blkid and /etc/fstab). If no obvious discrepancies are found and the customer agrees to keep the host in a broken state, then open an ONCALL and reference ENG-416828 there.
Only if the customer does not agree to keep the host in a broken state, try rebooting the host to see if you can get the host to boot.
If the host boots after a reboot, please collect the following for uploading to the ENG:
AHV logs from this host
A full logbay log bundle for reference
Please also note the following:
Node's make and model
FW version for BIOS/BMC/LSI controller
AHV and AOS versions
Upload the details to ENG-416828 https://jira.nutanix.com/browse/ENG-416828 and reopen the ENG (if it is not open) to track an RCA
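The UUID comparison in step 2 can be sketched as a cross-check of /etc/fstab against blkid output. The sample files and UUIDs below are canned for illustration; on the affected host, use the real /etc/fstab and the `blkid` output collected above.

```shell
# Canned samples (hypothetical UUIDs).
cat > /tmp/fstab <<'EOF'
UUID=aaaa-1111 /      ext4 defaults 0 1
UUID=bbbb-2222 /home  ext4 defaults 0 2
EOF
cat > /tmp/blkid.out <<'EOF'
/dev/sda1: UUID="aaaa-1111" TYPE="ext4"
EOF

# Any fstab UUID not reported by blkid is a candidate cause of the hung
# "A start job is running for dev-disk" unit.
grep -o 'UUID=[^ ]*' /tmp/fstab | cut -d= -f2 | while read -r u; do
    grep -q "$u" /tmp/blkid.out || echo "MISSING device for UUID $u"
done > /tmp/uuid_report.txt
cat /tmp/uuid_report.txt
```

A UUID flagged as MISSING would be the discrepancy worth capturing in the ENG ticket.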
3. If we are unable to get the host to a rescue shell without a reboot, please open an ONCALL to get engineering assistance with troubleshooting this issue live
Be sure to reference ENG-416828 https://jira.nutanix.com/browse/ENG-416828 in your oncall template.
Be sure to include screenshots of the screens seen both before and after the timer expires.
Include all relevant troubleshooting steps and the context of your situation.
The reason for pushing this to an ONCALL is that prior attempts to gather logs after a reboot of the host have been inconclusive. So if we cannot get additional details without a reboot, then we should have Engineering help gather details live.
|
KB1688
|
Collecting Activity Traces
|
In the course of troubleshooting certain issues (e.g.: hung ops), it may be necessary to collect a current sample of Activity Traces from a node on the cluster. This article shows how to use 'links' to check the data and save it to a file for upload back to Nutanix.
|
In the course of troubleshooting certain issues (e.g.: hung ops), it may be necessary to collect a current sample of Activity Traces from a node on the cluster.This article shows how to use 'links' to check the data and save it to a file for upload back to Nutanix.
|
You can view the activity traces through your favorite web browser. However, when collecting for Engineering, it is simplest to use the links browser.
Go to the required page. It is in the format http://{cvm_ip}:{port}/h/traces. Refer to KB 1071 http://portal.nutanix.com/kb/1071 for a list of all the standard 20xx ports. The following will assume you are looking at Stargate - 2009.
links http://localhost:2009/h/traces
Go down to the element you are interested in, such as nfs_adapter, and press <Enter>
Down arrow once to the [0msec, 5msec] bucket and copy the URL at the bottom of the screen:
Press g to bring up the Go to URL option. Note that you can also get there by pressing <Esc> first to bring up the menu and finding the option you are interested in.
Paste the URL into the field
Then, edit it to be from infinity. In this example, the URL becomes:
http://localhost:2009/h/traces?high=infinity&low=0&a=completed&c=nfs_adapter
Select OK to bring up all of the activity traces (i.e. from 0 to infinity), instead of just one of the buckets presented by default.
Down-arrow to 'expand', without selecting any of the other buckets on the way. Select 'expand' by pressing <Enter>. Each entry will expand, and the option 'expand' will change to 'collapse':
Press <Esc> to bring up the menu, then select "Save formatted document"
Enter a logical name for the file:
Press <Enter>, then q to exit out of links.
Check that the file is saved correctly, then arrange to upload it for Engineering's analysis.
|
KB12185
|
Unexpected HA during mass power-on of VMs with memory overcommit enabled
|
Unexpected HA during mass power-on of VMs with memory overcommit enabled
|
In some cases, unexpected termination of individual memory-overcommitted VMs may be seen on AHV clusters.
Symptoms
An unexpected "Host restore VM locality" task is shown in Prism UI shortly after a VM termination.
Noticeable decrease in the guests’ responsiveness, which applies to many guests, not only the affected ones. In some cases, the guest’s kernel captures it as a soft lockup event:
The following conditions should be met to experience this issue:
Memory overcommit is enabled.
VMs are bulk powered on.
Powered-on VMs are running heavy CPU and memory workloads during their startup.
Note: this issue is only applicable to memory-overcommitted VMs; it does not affect non-memory-overcommitted VMs.
To confirm this issue, check the "wa" and "%iowait" metrics in the below log files on a CVM that is running on a problematic host:
"wa" in ~/data/logs/sysstats/top_host.INFO
#TIMESTAMP 1633962119 : 10/11/2021 02:21:59 PM
"%iowait" in ~/data/logs/sysstats/mpstat_host.INFO
#TIMESTAMP 1633962118 : 10/11/2021 02:21:58 PM
Note: The above log snippets highlight the log file portions that you need to look at. You can see that iowait time rapidly increases before the issue occurs during bulk power-on:
|
This issue is resolved in:
AOS 6.7.X family (STS): AOS 6.7
Upgrade AOS to the versions specified above or newer.
There is no quick workaround for the outlined scenario: the issue can be identified reliably only by its specific symptoms during post-mortem analysis. Until the upgrade, recommend that users spread the power-on of memory-overcommitted (MO) VMs over time and avoid heavy CPU and memory overcommit in the cluster.
|
KB15068
|
Adonis service on Prism Central crashes due to frequent OOM condition.
|
Adonis service crashes on PC as it encounters OOM condition frequently
|
Adonis service crashes on Prism Central as it encounters the OOM condition frequently.
Troubleshooting/Log Signature Validation:
Review dmesg logs on the PCVM to check if oom-killer is killing the adonis service - An example is given below:
Thu Jan 19 05:23:06 2023] Memory cgroup out of memory: Kill process 23679 (bash) score 101 or sacrifice child
Plugin errors may be seen upon running NCC health checks:
Detailed information for attached_vm_vg_protection_check:
In the adonis.out file (~/data/logs/adonis.out), "cannot allocate memory" error would be seen:
2022-12-17 17:37:23,976 Log4j2-AsyncAppenderEventDispatcher-6-AsyncFile ERROR An exception occurred processing Appender File org.apache.logging.log4j.core.appender.AppenderLoggi
Cause:'Kanon' (DR-related service) processes all of the system’s protection rules every 5 minutes as outlined below:
The service queries all the categories associated with all the protection rules.
For each category, it tries to get associated VMs and VGs.
For VMs, it performs an IDF lookup.
For VGs, it invokes the v4.0.a1 category API.
Inference: Every 5 minutes, Kanon invokes synchronous API calls for all the categories associated with all the protection rules in the system, and this in turn leads to the Adonis service going out of memory and crashing.
|
As a temporary workaround to stabilize the service, we can increase the cgroup limit of the Adonis service. To do so:
1. Create the following file on the Prism Central VM:
~/config/genesis.gflags
2. The cgroup limit of adonis can be updated by assigning an appropriate value to the below genesis gflag.
--adonis_memory_limit_mb=1196
3. Restart genesis
genesis restart
Note: If it is a scaled-out PC, repeat this process on all the PCVMs.
This issue has been resolved in PC 2023.3 and later versions.
If Adonis keeps consuming memory, engage a Senior SRE or STL to investigate the issue further.
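The workaround steps above can be sketched as one sequence. This is a hedged sketch: the 1196 MB value is the example from this KB — size the limit for your deployment, and involve a Senior SRE/STL if unsure.

```shell
# Write the Adonis cgroup-limit gflag and show the resulting file content.
GFLAGS_FILE="${GFLAGS_FILE:-$HOME/config/genesis.gflags}"
mkdir -p "$(dirname "${GFLAGS_FILE}")"
echo "--adonis_memory_limit_mb=1196" > "${GFLAGS_FILE}"
cat "${GFLAGS_FILE}"
# genesis restart    # run on the PCVM after reviewing the file
```

On a scaled-out PC, the first two commands can also be wrapped with allssh so every PCVM gets the same gflag.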
|
KB16566
|
Removing Rook Ceph and Configuring Grafana Loki to use an external S3 bucket
|
Removing Rook Ceph and Configuring Grafana Loki to use an external S3 bucket
| null |
A user may wish to remove Rook Ceph for performance reasons. Grafana-Loki and Velero can be configured to use an external S3 bucket, removing the dependency on Rook Ceph/Rook Ceph Cluster.
This article covers both removing Rook Ceph/Rook Ceph Cluster and configuring Grafana-Loki and Velero to use an S3 bucket.
Solution
1. Remove metadata.finalizers from the following resources:
cephobjectstore dkp-object-store
objectBucketClaims dkp-loki dkp-velero
objectBuckets obc-kommander-dkp-loki obc-kommander-dkp-velero
secrets dkp-loki dkp-velero
Use the command below, which opens the vi editor; edit each resource, removing the finalizers, and save.
kubectl -n kommander edit <resource> <resource name>
2. Remove the cephObjectStore, objectBuckets and objectBucketClaims resources used by Grafana Loki and Velero:
kubectl -n kommander delete cephobjectstore dkp-object-store
kubectl -n kommander delete obc dkp-loki dkp-velero
kubectl delete ob obc-kommander-dkp-loki obc-kommander-dkp-velero
3. Uninstall Velero, Grafana Loki, Rook Ceph and Rook Ceph cluster via the kommander GUI.
4. Delete the existing dkp-loki and dkp-velero secrets:
kubectl -n kommander delete secret dkp-loki dkp-velero
5. Create new dkp-loki and dkp-velero secrets with the required AWS access ID and Secret:
kubectl create secret generic dkp-loki \
  --from-literal=AWS_ACCESS_KEY_ID=<key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret>
kubectl create secret generic dkp-velero \
  --from-literal=AWS_ACCESS_KEY_ID=<key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<secret>
6. Create an installation file for grafana loki and velero with the following overrides:
apiVersion: config.kommander.mesosphere.io/v1alpha1
kind: Installation
apps:
  grafana-loki:
    enabled: true
    values: |
      loki:
        structuredConfig:
          storage_config:
            aws:
              s3: https://s3.amazonaws.com
              bucketnames: loki-bucket
              endpoint: s3.eu-west-2.amazonaws.com
              region: eu-west-2
              insecure: false
              sse_encryption: false
  velero:
    enabled: true
    values: |
      configuration:
        provider: "aws"
        backupStorageLocation:
          bucket: velero-bucket
          config:
            region: eu-west-2
            s3Url: https://s3.eu-west-2.amazonaws.com
      credentials:
        extraSecretRef: dkp-velero
Please note that the s3 key must be present in the Grafana-Loki configuration, as it is needed to override the default values provided by DKP, which are:
storage_config:
  aws:
    s3: "http://rook-ceph-rgw-dkp-object-store.kommander.svc:80/dkp-loki"
    s3forcepathstyle: true
You must not specify accessKeyId or secretAccessKey. Specifying these keys will result in the Grafana Loki pods failing with an error similar to the one below:
failed parsing config: /etc/loki/config/config.yaml: yaml: unmarshal errors:
line 89: field accessKeyId not found in type aws.StorageConfig
line 95: field secretAccessKey not found in type aws.StorageConfig
You cannot use the same bucket for grafana-loki and velero.
7. Apply the installation file:
dkp install kommander --installer-config values.yaml
|
KB14683
|
Nutanix DRaaS | Static Route to Xi causing replication tasks to fail for customers using Direct Connect.
|
Customers can experience Xi replication failure due to traffic going through VPN VM instead of Direct Connect.
|
Scenario
The customer is normally using Direct Connect to replicate VMs to Xi.
If the Direct Connect link becomes unavailable, the customer can decide to temporarily add a static route to PCVMs or CVMs and force the Xi replication traffic to go through the VPN VM instead of using Direct Connect (see KB-12196 http://portal.nutanix.com/kb/8283 for more details).
When the Direct Connect link becomes available again, the static route is manually removed from the PCVM/CVM configuration.
Symptoms
After a reboot of the CVMs, replication tasks fail with the following signature:
nutanix@NTNX-19SM3F410082-A-CVM:10.16.56.32:~$ ecli task.get 0cec1d70-d6b5-4fbb-b755-522af42df311
The Availability Zone in Prism UI is marked as reachable.
The Remote Site on the CVMs is reported as "unreachable":
nutanix@NTNX-19SM3F410082-A-CVM:10.16.56.32:~$ ncli rs ls remote-site-type=entity-centric
On each CVM, the route -n command reports a static route sending Xi Load Balancer-destined traffic to the VPN VM:
Kernel IP routing table
The static route is defined in the /etc/sysconfig/network-scripts/route-eth0 file:
nutanix@NTNX-19SM3F410082-A-CVM:10.16.56.32:/etc/sysconfig/network-scripts$ sudo cat /etc/sysconfig/network-scripts/route-eth0
|
The static route was made persistent by creating the file route-eth0 in the path /etc/sysconfig/network-scripts (see KB-12196 http://portal.nutanix.com/kb/8283 for more details).
In order to fix the communication with Xi:
Remove the route in the running routing table:
allssh "sudo ip route delete 206.80.144.0/24 via 10.16.56.8"
Remove the route-eth0 file so the static route does not come back after a reboot:
allssh "sudo rm /etc/sysconfig/network-scripts/route-eth0"
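After both cleanup steps, a quick check confirms nothing persists. A hedged sketch (the subnet below is the example from this article — substitute your Xi load-balancer subnet):

```shell
# Confirm the persistent route file is gone; "|| echo" keeps the check from
# failing when the file is (correctly) absent.
ROUTE_FILE=/etc/sysconfig/network-scripts/route-eth0
ls "${ROUTE_FILE}" 2>/dev/null || echo "route-eth0 absent"
# Confirm no active route to the Xi load-balancer subnet remains on any CVM:
# allssh "ip route show 206.80.144.0/24"    # empty output = route removed
```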
|
KB15351
|
NDB log catchup operation fails intermittently when different collations are in use
|
ERA-Log Catchup fails with the signature: "The user '' does not have permission to discover the database. Please check if sysadmin privilege is enabled for the user"
|
This KB article explains a NDB log catchup operation failure when different collations are in use by the databases.
ERA-Log Catchup fails with the signature
"The user '' does not have permission to discover the database. Please check if sysadmin privilege is enabled for the user"
Though this error message is misleading, reviewing the logs shows that the failure begins with a collation issue.
Review the NDB Operation Logs,
Log location: ERAServer/logs/drivers/sqlserver_database/log_catchup/<operation-id>.log
[2023-06-07 08:48:32,657] [5872] [ERROR ] [0000-NOPID],ERROR: Failed to discover db server metadata.
Followed by the traceback indicating a 'None' user permission issue.
Traceback (most recent call last):
The error "Cannot resolve the collation conflict" typically occurs when you try to compare or perform operations between columns or expressions with different collations. In this case, the collation conflict is between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS".
The query below currently exists in NDB and is prone to a collation error in use cases where databases with different collations exist on the SQL Server instance. If the SQL query below is executed on a database that has a different collation than the system databases, it returns a collation error.
Note: The query is read-only and does not make any changes on the database.
SET NOCOUNT ON;
|
The collation issue encountered is likely due to the comparison of strings with different collations in the WHERE clause as part of the code flow. When the `NOT IN` operator is used with a subquery that contains a table variable or a temporary table, it can inherit the collation of the current database, which might differ from the collation of the system databases or other databases being compared.
There is no workaround available, as the fix requires a code change. Engineering is aware of this issue, and it is resolved in NDB 2.5.4; upgrade to NDB 2.5.4 or later.
|
KB11477
|
Discrepancy between the space usage on object store in Prism Central and Commvault
| null |
The issue started when the customer noticed high storage usage for the object store in Prism Central: logical usage is 277 TiB in PC, while Commvault support reports only around 3 TiB on the Commvault side.
|
We noticed customer objects start with "BUCKET_NAME/RANDOM_ID...", while in-house we see them starting with "RANDOM_ID..." only.
Example objects list from customer (buk-xxxxxx00x is the bucket name):
buk-xxxxxx00x/KDxxxx_01.19.2021_12.16/CV_MAGNETIC/V_1000/CHUNK_22226/SFILE_CONTAINER_004.FOLDER/6
Example objects list in-house:
ERJLOV_05.31.2021_16.40/CV_MAGNETIC/MountPathConfigs.FOLDER/0
We also noticed that when RemoveFile requests are sent by Commvault, even if objects start with "BUCKET_NAME/...", Commvault logs still show "BUCKET_NAME/RANDOM_ID..." instead of "BUCKET_NAME/BUCKET_NAME/RANDOM_ID...".
Digging deeper, we noticed that Commvault's RemoveFile does prefix-based scans to get the list of objects, and since the prefix was missing the bucket name, no objects were getting returned. Commvault then assumed all objects were deleted, hence the space discrepancy.
The potential issue could be a misconfiguration on the Commvault side. It might happen because of using the virtual host style for Nutanix Objects instead of path style. Refer to the below document for the difference between the two styles:
https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_2:v32-access-endpoint-c.html#nconcept_rxb_nkw_vhb
According to Commvault, the virtual host style is supported by Commvault for AWS S3, and path style for all other S3-compatible vendors.
Please contact Commvault support to investigate the issue from their side and confirm the resolution/workaround.
|
KB11429
|
Adding a new node does not automatically apply existing memory reservations
|
This article describes an issue where adding a new node does not automatically apply the existing memory reservations.
|
Scenario
Memory reservations have been made for some component(s) (for example, Microsegmentation) on all the hosts of the cluster. When a new node gets added to the cluster, the memory reservation may not be created on the new node, which triggers the following alert when attempting to enable/disable Microsegmentation.
ID : 56eebf58-4333-453c-8156-913bb848c262
Expectation
Once the node is added, all memory reservations for different components are automatically applied to the new node.
Identification
1. Check if microsegmentation.out on the microsegmentation leader has the below signature:
2023-07-01 16:12:50,241Z ERROR notify.py:329 notification=MicrosegmentationControlPlaneFailed remote_uuid=000576c1-4bd3-d1f2-0e18-ac1f6b6d23e3 reason=Microseg control plane: memory reservation on PE cluster 000576c1-4bd3-d1f2-0e18-ac1f6b6d23e3 failed.
2. From any CVM, download the reserve_per_host_memory_v1.1.py script
nutanix@cvm:~$ wget https://download.nutanix.com/kbattachments/11429/reserve_per_host_memory_v1.1.py
3. Check the current memory reservations
nutanix@cvm:~$ ~/bin/reserve_per_host_memory_v1.1.py list_reservations
4. Get the hosts in the cluster.
nutanix@cvm:~$ acli host.list
Compared with the output from step 3, it has two additional host_uuids (the two new hosts): 56eebf58-4333-453c-8156-913bb848c262 and 991085ab-368a-4323-9dfa-3483268efe8d.
|
This has been seen in the field on AOS 6.5.1.3. Jira ticket ENG-399377 https://jira.nutanix.com/browse/ENG-399377 resolved this issue, but a very specific edge case remains: if the customer had nodes from before the fix, then upgraded, and then expanded the cluster, the expanded nodes would run into the issue.
1. For each of the unique components, apply memory reservations on the newly added node using reserve_per_host_memory_v1.1.py.
Note: The memory reservation value displayed is in MB, so it must be converted to bytes.
Syntax:
nutanix@cvm$ ~/bin/reserve_per_host_memory_v1.1.py --component_name=<name> --kernel_memory_bytes=<convert_mb_to_bytes> --user_memory_bytes=<convert_mb_to_bytes> --host_uuid=<uuid_of_new_host_in_hex>
Example:
# Converting KernelMemory to bytes
2. Make sure the reservation list has the newly added host.
nutanix@cvm:~$ ~/bin/reserve_per_host_memory_v1.1.py list_reservations
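Because list_reservations displays MB while the script arguments take bytes, a quick shell conversion avoids mistakes. A minimal sketch (the 300 MB value is purely illustrative):

```shell
# Convert a reservation shown in MB to the bytes expected by the
# --kernel_memory_bytes / --user_memory_bytes arguments.
MB=300                        # illustrative -- use the MB value from list_reservations
BYTES=$((MB * 1024 * 1024))
echo "${BYTES}"               # 314572800
```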
|
KB11379
|
Lazan in a crash loop due to inability to get vdisks stats from Arithmos
|
Lazan service may be seen in crash loop across multiple CVMs.
|
The customer gets an alert related to service restart in Prism.
ID : 5f6c5f69-98b9-4743-99ee-7c026fbaf48c
In the logs, we can see Lazan crashing after it reports "Unable to get stats" for a vdisk (6000C29c-19a5-ef78-3780-94df05c7170f):
2021-03-24 05:53:00 ERROR manager.py:2055 Unable to get stats due to Unable to find {u'stats': {u'commonStats': {}}, u'id': u'TkZTOjQ6MzozNzMwMzY0', u'attachVirtualDiskId': u'NTAxNTcwOjYwMDBDMjk3LTMyODAtNmY5Zi1iZWFlLWQxYWY5MGNhZDlhYQ=='} in virtual_disk_stat_map {'50155d:6000C29c-19a5-ef78-3780-94df05c7170f': id: "50155d:6000C29c-19a5-ef78-3780-94df05c7170f"
Above is followed by the below message, and this is when the Lazan service restart:
2021-03-26 04:55:31 CRITICAL decorators.py:47 Traceback (most recent call last):
In the Traceback we can see "Could not get stat" for another vdisk (6000C297-3280-6f9f-beae-d1af90cad9aa)
2021-03-18 10:45:30 ERROR stats_util.py:291 Could not get stat controller.random_write_bytes for entities ['501570:6000C297-3280-6f9f-beae-d1af90cad9aa']: 7
Scenario 1
We are unable to get the stats for this vdisk seen in traceback (6000C297-3280-6f9f-beae-d1af90cad9aa):
nutanix@cvm:~$ arithmos_cli master_get_entities entity_type=virtual_disk search_term='6000C297-3280-6f9f-beae-d1af90cad9aa'
Scenario 2
In some cases, the vdisk is valid and stats may actually be available, yet Lazan reports "Unable to find" for random vdisks.
|
Scenario 1
Collect the Log Bundle.
Collect the output of the following command:
nutanix@cvm:~$ arithmos_cli master_get_entities entity_type=virtual_disk
Perform a rolling restart of arithmos in the cluster. This will resolve the issue.
nutanix@cvm:~$ allssh "genesis stop arithmos && cluster start; sleep 30"
Add the logs from points 1 and 2 to ENG-445350 https://jira.nutanix.com/browse/ENG-445350 and ENG-388062 https://jira.nutanix.com/browse/ENG-388062.
Attach the cases to ENG-445350 https://jira.nutanix.com/browse/ENG-445350 and ENG-388062 https://jira.nutanix.com/browse/ENG-388062.
Scenario 2
This issue is resolved in:
AOS 5.15.X family (LTS): AOS 5.15.7AOS 5.20.X family (LTS): AOS 5.20.2
Please upgrade AOS to versions specified above or newer.
|
KB15333
|
MSP service registry gets into CrashLoopBackOff state due to lease object missing some metadata information
|
MSP service registry gets into CrashLoopBackOff state due to lease object missing some metadata information
|
Identification:
1. Multiple pod restart alerts for mspserviceregistry might be observed in Prism.
ID : 3cabba21-b210-4dca-b453-da19057c0779
2. The mspserviceregistry pod appears to be in the CrashLoopBackOff state.
nutanix@PCVM:~$ sudo kubectl get pods -A -n kube-system
3. The mspserviceregistry pod logs just before the crash stack trace show that the service registry is unable to acquire the leader lease.
nutanix@PCVM:~$ sudo kubectl logs mspserviceregistry-bf5456ff-zrs6b -n kube-system
4. The pod description output does not give any reason why the pod crashed; it just shows terminated:
nutanix@PCVM:~$ sudo kubectl describe pod mspserviceregistry-bf5456ff-sxs6k -n kube-system
5. Manually reviewing the Kubernetes lease object in the kube-system namespace confirms that renewTime is missing:
nutanix@PCVM:~$ sudo kubectl describe lease mspserviceregistry-lease -n kube-system
|
1. Before applying the workaround, make sure the MSP service is in a ready state and the upgrade state looks fine.
nutanix@PCVM:~$ mspctl cluster list
Workaround:
Please save the existing lease & deployment YAMLs of mspserviceregistry
nutanix@PCVM:~$ sudo kubectl get lease -n kube-system mspserviceregistry-lease -o yaml > msr_lease.yaml
Delete mspserviceregistry lease & deployment object using the below 2 commands
nutanix@PCVM:~$ sudo kubectl delete lease -n kube-system mspserviceregistry-lease
Recreate the deployment using generated YAMLs
nutanix@PCVM:~$ sudo kubectl apply -f msr.yaml
Pod status of mspserviceregistry should now show running and we should see renewTime in the new lease.
nutanix@PCVM:~$ sudo kubectl get pods -A
|
KB6861
|
LCM : Lcmroottask failed: Operation failed. Reason: Download failed, Actual Checksum: xxxx does not match Expected Checksum: xxxx
|
LCM Inventory may fail with checksum mismatch error. This has been observed in an environment where proxy is enabled.
|
LCM Inventory may fail with checksum mismatch error. This has been observed in an environment where proxy is enabled.
Here are some sample error messages that will be seen after LCM inventory fails:
Download failed for release.smc.sum2, Actual Checksum: xxxx does not match Expected Checksum: xxxx
Lcmroottask failed: Operation failed. Reason: Download failed for release.smc.sas3flash_tool,Actual Checksum: xxxx does not match Expected Checksum: ab496b97fe35efc8f43e6a6685c59d9cde0f2cbc28b5ed4f4bea81764d0aff8a
|
The following method will isolate the "network issue" to some extent, so we can conclude whether the error is caused by the download operation or by the proxy environment.
List the proxy that is set up on the cluster
ncli http-proxy ls
Export proxy parameters.
In the first command, foo is the username and bar is the password.
Run the second command if the proxy is set up with an FQDN and does not require a password (open or public proxy).
$ export http_proxy=http://foo:bar@<proxy_ip>:<port>/
Download manifest file and extract it (in any directory)
$ curl http://download.nutanix.com/lcm/2.0/master_manifest.tgz -O
Get the module/file name mentioned in the error message from the manifest JSON. The filename also contains the expected checksum.
$ grep <file> master_manifest.json
Note the File-url from the above command output. Example
release-smc-sum2-b8889d2c63ea449de538a09e609a000f53b79e6b7113cd480977ea2ad501cb80.tgz
Verify that we can retrieve the module by checking its size using the below command:
$ curl -s -S -f -k -L -I http://download.nutanix.com/lcm/2.0/modules/<File-url> |grep Content-Length
Download the file using curl or wget:
$ curl -s -S -f -k -L http://download.nutanix.com/lcm/2.0/modules/<File-url> -O
Check the sha256sum of the downloaded file by giving the path to it.
$ sha256sum <downloaded-module-file>
The output returns the actual checksum; the expected checksum is in the filename (the error message has it as well). We can repeat the process and see if the calculated checksum changes.
If there is a mismatch between the actual checksum and the expected checksum, it can be because of the proxy or another firewall/filter component in the customer's environment.
If there is NO mismatch and LCM inventory still fails with a checksum error, engage engineering.
For a mismatch in the checksums, there is no workaround. The above steps confirm that the issue is because of the proxy environment: the modules are not downloaded properly and are either corrupt or partial. The cluster requires direct access so that the modules can be downloaded properly, or another proxy can be tested.
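Since the expected checksum is embedded in the module file name, the comparison above can be scripted. A hedged sketch using the example file name from this article (the sha256sum comparison is commented out because it needs the downloaded module):

```shell
# Extract the 64-hex-character expected checksum from an LCM module file name.
FILE="release-smc-sum2-b8889d2c63ea449de538a09e609a000f53b79e6b7113cd480977ea2ad501cb80.tgz"
EXPECTED=$(echo "${FILE}" | sed -E 's/.*-([a-f0-9]{64})\.tgz$/\1/')
echo "${EXPECTED}"
# ACTUAL=$(sha256sum "${FILE}" | awk '{print $1}')
# [ "${ACTUAL}" = "${EXPECTED}" ] && echo "checksum OK" || echo "checksum MISMATCH"
```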
Cleanup
Just delete the downloaded module file and manifest file from local path
$ rm <module-file>
Concurrent download issue:
In some cases, downloads may fail even when no proxy is configured. Some firewalls do not handle concurrent downloads properly and cause downloads to fail.
Run the below command from ~/tmp on a CVM. This will download the file 20 times concurrently. Make sure to change the path in the curl command below to the specific file path that is failing in your case.
Example:
nutanix@cvm$ for i in {01..20}; do curl -sSfkLvvv http://download.nutanix.com/lcm/2.0/master_manifest.tgz -o master_manifest.tgz.$i 2> stderr.txt.$i & done
Now investigate the actual downloaded files. You should see 20 files in ~/tmp. Look at each downloaded file and confirm it is actually a gzipped tar file (or whatever we are expecting), for example:
file <downloaded_file.tgz>
Example:
file *_master_manifest.tgz*
bad_master_manifest.tgz: HTML document, ASCII text, with CRLF, LF line terminators
good_master_manifest.tgz: gzip compressed data, from Unix, original size modulo 2^32 522240
Do not forget to remove all the downloaded files from ~/tmp.
If you see bad downloads in the above test, tweak the transparent proxy or firewall to handle concurrent downloads, or use a darksite bundle.
If this KB does not help, engage Nutanix Support.
|
KB11991
|
Nutanix Move PrepareForMigration VC Operation timed out. (error=0x3030)
|
Prepare for migration task times out. Check for user VMs that used to have iSCSI shared disks connected.
|
When a Move migration plan contains a user VM that used to have iSCSI disks attached in the past, the Move task "Prepare Migration" times out.
Srcagent.log (/opt/xtract-vm/logs/srcagent.log) shows "Setting SAN Policy=OnlineAll" followed by "Operation timed out. (error=0x3030)":
I0823 20:10:44.289886 7 uvmcontrollerappmobility.go:351] Setting SAN Policy=OnlineAll
|
1. Choose "Bypass Guest Operations" in the migration plan. Follow the Move User Guide: Creating a Migration Plan https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Move-v4_1:v41-create-migration-plan-t.html for more details.
2. Install the VirtIO driver manually on the VM (if not already installed).
3. Launch diskpart.exe: open a PowerShell window, enter diskpart, and in the new window set the SAN policy to OnlineAll:
san policy=onlineAll
Note: The Microsoft Windows SAN policy is enabled by default. The policy's purpose is to avoid data corruption during power-on of a VM with access to LUNs (SAN Virtual Disks) in case these LUNs are accessed by multiple servers. If the LUNs are not accessed by multiple servers, the SAN policy can be changed to OnlineAll on the VM before performing the migration to AHV.
References: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/san
After the migration is complete, verify that the disks are online and assign the IP address manually if necessary.
|
KB15256
|
NC2 on AWS - Cluster Protect users must update cloud formation before protecting the cluster
|
This article describes the steps necessary for users to update the AWS cloud formation
|
With the release of AOS 6.7 and PC.2023.3, the Cluster Protect feature has reached General Availability. NOTE: This feature is only supported for greenfield cluster deployments; it is NOT to be used on clusters that already existed prior to this release.
Nutanix Cloud Clusters (NC2) on AWS run on bare-metal instances that capitalize on local NVMe storage. A data-loss risk might occur in case of failures caused by scenarios including, but not limited to, Availability Zone (AZ) failures or users terminating bare-metal nodes from the AWS console. With the Cluster Protect feature, you can protect your NC2 cluster data, including Prism Central configuration, user VM data, and Volume Group data, with snapshots stored in AWS S3 buckets. When using Cluster Protect, you can recreate your cluster with the same configurations and recover your user VM data and Prism Central configuration from S3 buckets. The Cluster Protect feature thus helps ensure business continuity even in the event of a complete AWS AZ-level failure.
Before taking advantage of this functionality, the AWS CloudFormation stack must be updated.
|
Before starting to use the feature, the following actions need to be performed.
Login to AWS console and navigate to:
Cloud Formation -> Select “Stacks” -> select the stack named “Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod” (if you don’t find this in the currently selected AWS region, try a different one)
Do not mistake this stack with “Nutanix-Clusters-High-Cloud” Stack
Click “Update” -> “Replace current template”, and paste the URL for the NC2 template:
https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json
Click “next” a few times confirming the default options and finally “Submit”
This should update the IAM policies on the account and add the needed permissions to get access to S3 buckets used with Cluster Protect.
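For automation, the same stack update can be performed with the AWS CLI. This is a hedged sketch: it assumes AWS CLI v2 with credentials/region already configured, and CAPABILITY_NAMED_IAM (or CAPABILITY_IAM) is typically required because the template modifies IAM policies. The command is echoed rather than executed so it can be reviewed first.

```shell
# Assemble and print the CloudFormation update command for the NC2 stack.
STACK="Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod"
TEMPLATE_URL="https://s3.us-east-1.amazonaws.com/prod-gcf-567c917002e610cce2ea/aws_cf_clusters_high.json"
echo aws cloudformation update-stack \
  --stack-name "${STACK}" \
  --template-url "${TEMPLATE_URL}" \
  --capabilities CAPABILITY_NAMED_IAM
# Remove the leading "echo" to execute the update.
```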
|
KB9726
|
IPMI web console is not able to log-in
|
This article describes scenarios where the ADMIN account is not able to log in to IPMI and the web console never loads.
|
NX G7 nodes may exhibit a behavior in which, after entering the ADMIN username and password, the IPMI web console never loads. This happens with both correct and incorrect username/password combinations.
During this time, IPMITOOL commands may not be responsive.
Example of unresponsive BMC:
nutanix@CVM:~$ ssh [email protected] /ipmitool lan print
|
Engage Nutanix Support at https://portal.nutanix.com https://portal.nutanix.com to assist in troubleshooting the issue.
|
}
| null | null | null | null |
KB11855
|
LCM framework is stuck on "LCM Framework Update is in progress" and lcm commands fail with ERROR: Failed to configure LCM. error: 'enable_https'
|
LCM is stuck due to a corrupted LCM config and commands "configure_lcm -p" and "lcm_upgrade_status" fail with ERROR: Failed to configure LCM. error: 'enable_https'
|
LCM framework may get stuck with the following error on a newly deployed cluster:
“LCM Framework Update is in progress. Please check back when the update process is completed.”
Viewed from PE, LCM may also show the error banner:
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
LCM commands fail
nutanix@CVM:~$ configure_lcm -p
After restarting genesis cluster wide
nutanix@CVM:~$ cluster restart_genesis
LCM leader genesis.out log shows this traceback:
nutanix@CVM:~$ less data/logs/genesis.out
Note the error "zknode /appliance/logical/lcm/schema was not read (no node)". This zknode schema is missing on a newly deployed cluster but is expected once LCM framework is updated.From LCM ZK values below, note that LCM "config" node shows LCM version "2.1.6835", while "update" node indicates version 2.4.1.4.25288.
nutanix@CVM:~$ zkls /appliance/logical/lcm
At the same time, ~/cluster/config/lcm/version.txt indicates that the LCM framework was updated
nutanix@CVM:~$ allssh "cat ~/cluster/config/lcm/version.txt && echo"
The issue occurs after LCM configuration corruption or when there is an inconsistency between the LCM zknode version and the current LCM version. The configuration needs to be recreated.
|
This issue has been resolved in LCM 2.6; upgrade to the latest LCM version.
If the issue persists after upgrading, open a new support case and include the affected LCM version.
|
KB9139
|
LCM does not offer to upgrade AHV if multiple hypervisor versions are detected
|
In some scenarios, LCM may stop offering AHV upgrades if 2 or more AHV versions are detected in the same cluster.
|
In some scenarios, LCM may stop offering AHV upgrades if 2 or more AHV versions are detected in the same cluster.Scenario A: Three different AHV versions in the cluster.This scenario can happen if the LCM version is older than 2.3.1 or if one of AHV nodes was reimaged to the wrong AHV version.When cluster ends in such state doing LCM inventory may fail with the following error:
Invalid update spec (entities not able to update) Please use recommendation API to retrieve valid update spec
Sample steps that can lead to this scenario:
1. Cluster nodes are on 20170830.319 and AOS 5.10.9. There are 5 nodes in the cluster.
2. AOS is upgraded to 5.16.
3. NCC alerts to update AHV from 20170830.319 to 20190916.96.
4. During an upgrade to 20190916.96, one node upgrades successfully and the 2nd node fails.
5. Now the cluster has 4 nodes on 20170830.319 and one on 20190916.96.
6. AOS is upgraded to 5.16.1.
7. NCC alerts to update AHV to 20190916.110. LCM shows only 20190916.110 as it is the only AHV version compatible with AOS 5.16.1.
8. Upgrade to 20190916.110 is started via LCM. The upgrade of one node succeeds and then fails. As a result, 3 nodes are on 20170830.319, 1 node on 20190916.96, and 1 node on 20190916.110.
9. LCM will not offer the AHV upgrade anymore.
Note: AOS and AHV versions are shown just for an example and can be different.Scenario B: Two different AHV versions in the cluster.This scenario can happen if the LCM version is newer than 2.3.1.
1. Cluster nodes are on 20170830.319 and AOS 5.10.9. There are 5 nodes in the cluster.
2. AOS is upgraded to 5.16.
3. NCC alerts to update AHV from 20170830.319 to 20190916.96.
4. During an upgrade to 20190916.96, one node upgrades successfully and the 2nd node fails.
5. Now the cluster has 4 nodes on 20170830.319 and one on 20190916.96.
6. AOS is upgraded to 5.16.1.
7. LCM will not offer the AHV upgrade anymore.
Note: AOS and AHV versions are shown just for an example and can be different.
|
Scenario A
Starting from version 2.3.1, LCM will prevent this situation from happening during upgrades. But if nodes are manually reimaged, the cluster can still end up in such a situation. Please always make sure that you are installing the correct AHV version during reimaging!
Make sure to do an RCA and find out how the cluster ended up having 3 different AHV versions.
If the cluster is already in the state where 3 different AHV versions are present, manual node reimaging is required to get the cluster to the state where only 2 versions will be present. Consider using the Host Boot Disk Repair https://portal.nutanix.com/#/page/docs/details?targetId=Hypervisor-Boot-Drive-Replacement-Platform-v511-Multinode-G3G4G5:Completing%20Hypervisor%20Boot%20Drive%20Replacement workflow (hardware replacement is not needed in this case and should be skipped) to reinstall AHV.
Scenario B
ENG-294334 https://jira.nutanix.com/browse/ENG-294334 is fixed in LCM 2.3.1.1. Upgrade LCM to 2.3.1.1 or later, perform inventory, and retry the AHV upgrade.
If a cluster has AHV v1 and v2, LCM will allow an upgrade from v1 to v2 even when v2 is not compatible with the current AOS version. This will bring the cluster to a better state than running 2 different versions of AHV.
For example: If a cluster has 20170830.337 and 20190916.110 (AOS 5.16.1.2), LCM allows upgrading from 20170830.337 to 20190916.110 even though 20190916.110 is not compatible with AOS 5.16.1.2.
Workaround
To recover the cluster from this situation, node reimaging is required. Note that nodes should be reimaged to the latest AHV version bundled with the AOS currently installed on the cluster. As a result, the cluster should have some nodes on the old AHV version and some nodes on the latest AHV version. Once this is done, LCM will allow upgrading the remaining nodes to the latest AHV.
In the example above, the simplest way would be to reimage the node which is running AHV 20190916.96 to AHV 20190916.110.
As a result, some nodes will be running on 20170830.319 and some on 20190916.110 which will allow LCM to continue an upgrade. ENG-397198 https://jira.nutanix.com/browse/ENG-397198 is fixed in LCM 2.4.3. The issue was lesser AHV entities were reported in the UI page because of the bug in the LCM framework code while merging the results from V3 Groups API. Which is fixed in ENG-397198. https://jira.nutanix.com/browse/ENG-397198
|
KB1062
|
Move: VM Migration Scenarios and Solutions
|
This article describes how to move VMs to a Nutanix cluster from ESXi or Hyper-V, or migrate between clusters. It also covers live migration. This article does not apply to migrations between AHV clusters.
|
Nutanix Move is the Nutanix-recommended tool for migrating a VM. See the Move documentation https://portal.nutanix.com/#/page/docs/list?type=software&filterKey=software&filterVal=Move&reloadData=false at the Nutanix Support portal. If, for any reason, Nutanix Move cannot be used, this article describes alternative steps on how to migrate live VMs between Nutanix clusters or from a non-Nutanix cluster onto a Nutanix cluster.

Nutanix allows administrators to mount NFS datastores between clusters. For example, if you have two separate clusters with NFS datastores, you can mount the NFS datastores from cluster A onto cluster B and vice versa. This is useful when moving VMs between two clusters, for example during a migration.

Complete all pre-migration tasks first. See also Windows VM Migration Prerequisites https://portal.nutanix.com/page/documents/details?targetId=Migration-Guide-AOS-v59:vmm-vms-migrating-non-ntnx-ahv-t.html. This article lists the steps for each scenario together with some examples.

Note: Using a Nutanix container as a general-purpose NFS or SMB share is not recommended because the Nutanix solution is VM-centric. The preferred mechanism is to deploy a VM that provides file-share services. See Nutanix Files https://portal.nutanix.com/#/page/releases/afsDetails for more information.
|
On ESXi, you can perform regular vMotion of VMs between clusters. This can be done without a shared datastore if the requirements are met. For more information, see Requirements and Limitations for vMotion Without Shared Storage https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-9F1D4A3B-3392-46A3-8720-73CBFA000A3C.html. On a large cluster, the networking requirements can easily be met using vSphere Distributed Switch (vDS). The following steps provide a workaround for all other scenarios but also apply to ESXi.
Migrating between Nutanix clusters
For example, you can take two separate clusters with the IPs as below.
Cluster A: CVM and Host IPs
nutanix@xxxxxx01-CVM:~$ svmips
Cluster B: CVM and Host IPs
nutanix@xxxxxx03-CVM:~$ svmips
Perform the following procedure to migrate between Nutanix clusters.
On the Controller VM in Cluster A, add the host IP of Cluster B to the NFS whitelist.
ncli> cluster add-to-nfs-whitelist ip-subnet-masks=10.X.XXX.83/255.255.255.255
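The ip-subnet-masks value pairs an address with a netmask: a 255.255.255.255 mask whitelists exactly one host, while a wider mask whitelists a whole subnet. A small Python sketch (illustrative only, not part of the ncli tooling) shows how the mask controls how many clients match:

```python
import ipaddress

def whitelist_entry(ip, mask):
    """Build an ncli-style ip-subnet-masks value and the network it covers."""
    network = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
    return f"{ip}/{mask}", network

entry, net = whitelist_entry("10.0.0.83", "255.255.255.255")
print(entry)                 # 10.0.0.83/255.255.255.255
print(net.num_addresses)     # 1 -> exactly one host is whitelisted

# A wider mask whitelists the whole subnet instead of a single host.
_, subnet = whitelist_entry("10.0.0.0", "255.255.255.0")
print(subnet.num_addresses)  # 256
```

Prefer the /255.255.255.255 form shown in the command above when only a single remote host needs access, since it grants the minimum possible scope.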
Mount the remote datastore presented from cluster B at site A:

1. Log on to the vCenter or vSphere client of cluster A and click Host > Configuration > Storage > Add storage.
2. Fill in the following fields: Server = IP address of the host that was whitelisted; Folder = /<name of the NFS container that is exported from the Nutanix cluster>; Datastore Name = can be any name.
3. Click Finish. The NFS datastore mounted locally from the remote cluster is displayed.

Once the datastore is mounted, live migration of VMs from one Nutanix cluster to another is possible. To perform live migration, the datastore should be mounted locally and remotely (using the NFS whitelist as shown above) with the external IP address of the CVM. By default, when a datastore is mounted on the local address, the 192.168.5.2 address is used.

Check the HA/DRS settings on the source cluster, which may need to be updated, and check the EVC configuration for live migration; otherwise, an offline migration has to be performed. Then perform a live migration.
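If you prefer the ESXi command line over the vSphere client, the same NFS mount can be performed with esxcli storage nfs add. A hedged Python sketch that assembles the equivalent command (the host IP, container, and datastore names below are placeholders, not values from this article):

```python
def esxcli_nfs_mount(host_ip, container, datastore_name):
    """Assemble the esxcli command that mounts an NFS container as a datastore.

    host_ip: external CVM IP that was added to the NFS whitelist
    container: Nutanix container name, exported as /<container>
    datastore_name: any local datastore label
    """
    return (f"esxcli storage nfs add -H {host_ip} "
            f"-s /{container} -v {datastore_name}")

print(esxcli_nfs_mount("10.0.0.83", "prod-ctr", "RemoteNTNX"))
# esxcli storage nfs add -H 10.0.0.83 -s /prod-ctr -v RemoteNTNX
```

Run the generated command in an SSH session on the ESXi host; the -H, -s, and -v flags correspond to the Server, Folder, and Datastore Name fields in the vSphere client dialog.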
This is a two-step process for which two NFS datastores need to be created (two separate containers on Nutanix cluster at the site B):
A "jump" or temporary datastore is mounted using the external CVM IP address. This extra step ("hop") is required for running VMs since ESXi/vSphere understands the Nutanix-internal datastore to be separate from the white-listed datastore due to the unique {IP address}:/{mountpoint} defined. Internal: 192.168.5.2/{container} vs. external example: 10.X.XXX.83/{container} are different to ESXi.The final location of the VMs mounted using the internal 192.168.5.2 IP address. When the NFS datastore is mounted using the external CVM addresses on the local and remote clusters, the same datastore name can be used on both clusters. This makes live storage vMotion possible.
1. Live migrate VMs using storage vMotion from the old cluster to the "jump" or temporary NFS datastore. Once the VMs are moved to this datastore, they can be storage vMotioned to the final datastore.
2. Migrate the VMs from the "jump" or temporary NFS datastore to the final datastore mounted locally on the new cluster. During migration, the disk provisioning format can be changed to thin-provisioned to save space if required.

Powered-down VMs can be re-registered using the internally mounted datastore. The VM directories will show up in both datastores.
Migrating from a non-Nutanix Storage to Nutanix Storage
In this scenario, two devices are used:
ESXi server01 (IP address: 10.X.XXX.30) mounting datastore on non-Nutanix storageNutanix cluster. In this scenario, it is migrated to node 1 (host IP address: 10.XX.5.14, CVM IP address: 10.XX.4.14)
Perform the following procedure to migrate from non-Nutanix storage to Nutanix storage.
To migrate VMs to the Nutanix cluster, the IP of the ESXi server must be added to the cluster NFS whitelist.
ncli> cluster add-to-nfs-whitelist ip-subnet-masks=10.XX.5.30/255.255.255.255
Create two new containers in the Nutanix GUI. Click Home > Storage > +Container.
Temp_1: This is our "jump" datastore, where VMs will migrate from non-Nutanix storage to a Nutanix-storage.
NTNX-PROD-001: This is where the VMs will migrate to the Temp_1 container, and ultimately, where the production VMs will be sourced from.
Important:
When creating the Temp_1 container, select the option "Mount on the following hosts", but do not select any hosts.When creating the NTNX-PROD-001 container, select the option "Mount on all hosts"
Mount the Temp_1 container on server01 and Nutanix cluster node 1. In vSphere client or vCenter Server, go to Configuration > Storage > Add Storage. Click Network File System, and enter the details. Do this on both server01 and Nutanix cluster node 1. Once complete, Temp_1 is mounted on both server01 and Nutanix cluster node 1. NTNX-PROD-001 is only mounted on cluster node 1.
After the mount is complete, you can begin storage vMotions to the Temp_1 datastore.
Perform a host vMotion to move the VM to a Nutanix node. Note: If the existing environment is using AMD processors, you need to cold migrate the VMs to the new host. This requires the VMs to restart. Then perform a storage vMotion from Temp_1 (on Nutanix node 1) to the NTNX-PROD-001 datastore. Note: The ESXi host performing the mount must have an external vmk NIC set as a VMkernel port to attempt the mount.
Once the datastore is mounted on the remote cluster, storage migration is possible. Live Migration of VMs is possible if the datastore is mounted locally and remotely (using NFS whitelist as shown above) with the external IP address of the CVM. Perform the following steps to do a non-live migration with a single container.
1. Migrate the datastore to a Nutanix container.
2. Power down the VM.
3. Remove the VM from the inventory on the source cluster.
4. Browse the datastore on the Nutanix cluster and add the VM's vmx file to the inventory.
5. Power on the VM on the Nutanix cluster.

Note: By default, when a datastore is mounted on the local address, the 192.168.5.2 address is used.
Migrating between Hyper-V clusters
For Hyper-V clusters, the migration uses the Shared Nothing Live Migration feature. In this case, only the networking path is shared, not the data. Perform the following steps to migrate a VM from a non-Nutanix host to a host on the Nutanix cluster.
Log in to the CVM and add the non-Nutanix host IP address to the whitelist:
nutanix@cvm$ ncli cluster add-to-nfs-whitelist ip-subnet-masks="host ip address/subnet"
In SCVMM, drag and drop the VM from the non-Nutanix cluster to any host on the Nutanix cluster. Check the Storage MPIO network on the non-Nutanix cluster and verify that the MPIO network is on the same subnet (VLAN) as the Nutanix host. If not, add a second management network interface on the Nutanix host and attach it to the external switch.
Perform the following steps to create a second interface and attach it to the external switch:
Create a new virtual network adapter named 'Temporary' on the Nutanix host. Make sure that it is attached to the external switch:
C:\> Add-VMNetworkAdapter -ManagementOS -Name Temporary -SwitchName ExternalSwitch
Set the IP address/mask, gateway, and the DNS servers.
C:\> New-NetIPAddress -InterfaceAlias "vEthernet (Temporary)" -IPAddress <ip address> -PrefixLength <prefix length> -DefaultGateway <default gateway>
C:\> Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Temporary)" -ServerAddresses <dns servers>
Note: Nutanix Move is the Nutanix-recommended tool for migrating a VM. See the Move documentation https://portal.nutanix.com/#/page/docs/list?type=software&filterKey=software&filterVal=Move&reloadData=false at the Nutanix Support portal.
|
KB12486
|
NDB - How to add user defined mount points under os directories using secure directories
|
Due to security implications, NDB does not support mount points under OS directories such as /var. You can add user-defined mount points under OS directories using secure directories instead.
|
This article covers a scenario where user-defined mount points must be specified in the common_layer_config in NDB, because user-defined mount points under OS directories on golden images are not picked up by new DB servers during cloning or provisioning operations.

A user-defined mount point is a directory in a file system linked to another filesystem. Mount points can make data from a different filesystem available in a folder structure.

For example, we have the device "cs_user--linux--box--root", which is used for the root partition of a Linux system (in our example, a VM called my-linux-box), and it is mounted in "/". The directory /var/db_data exists in the filesystem mounted in "/":
[user@my-linux-box ~]$ df -h
Let's assume there is a new disk /dev/sdb in this system, and it has been formatted with an ext4 filesystem and is mounted in /var/db_data.
[user@linux-box]$ sudo mount /dev/sdb /var/db_data
[user@linux-box]$ lsblk
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                     8:0    0   40G  0 disk
├─sda1                  8:1    0    1G  0 part /boot
└─sda2                  8:2    0   39G  0 part
  ├─cs_linux-box-root 253:0    0 35.1G  0 lvm  /
  └─cs_linux-box-swap 253:1    0  3.9G  0 lvm  [SWAP]
sdb                     8:16   0   20G  0 disk /var/db_data
sr0                    11:0    1 1024M  0 rom
[user@linux-box]$
In this case, the device sdb, mounted in /var/db_data, is considered a user-defined mount point. Due to security implications, NDB does not support mount points under OS directories such as /var.
|
You can use secure_directories to specify additional directories under the OS disk. The disks below are considered OS/system directories:
OS_DISKS = ["/","/home","/opt","/var","swap","/boot","/usr","/tmp"]
Any mount points under these directories must be specified in the common_layer_config as detailed below.
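The check described above can be approximated in a few lines: a mount point needs to be declared in common_layer_config when it sits under (or equals) one of the OS directories listed. This is a simplified sketch, not the actual NDB code:

```python
import os

OS_DISKS = ["/", "/home", "/opt", "/var", "swap", "/boot", "/usr", "/tmp"]

def is_under_os_disk(mount_point):
    """Return True if mount_point is an OS directory or nested under one."""
    for os_dir in OS_DISKS:
        if os_dir == "/":
            continue  # everything is under "/", so skip the root entry
        if mount_point == os_dir or mount_point.startswith(os_dir + os.sep):
            return True
    return False

print(is_under_os_disk("/var/db_data"))  # True  -> must go in common_layer_config
print(is_under_os_disk("/db_data"))      # False -> picked up automatically
```

By this logic, the /var/db_data mount point from the earlier example needs to be declared, while a top-level mount point such as /db_data does not.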
Log in to the Era server as the era user and run the command era-server. Download the config file using the command below:
era> config driver_config list name=common_layer_config output_file=/tmp/config.out
Edit the above downloaded file /tmp/config.out and add the required user-defined mount points under the mount_list field as below:
{
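The config snippet above is truncated in this article. As an illustration only, an edited file with a user-defined mount point under mount_list might look roughly like the fragment below; the surrounding structure is an assumption, so keep all other fields exactly as they appear in your downloaded /tmp/config.out:

```json
{
  "mount_list": [
    "/var/db_data"
  ]
}
```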
Upload the config file back to the Era server.
era>config driver_config update name=common_layer_config input_file=/tmp/config.out
Create a new software profile and provision the database/db server.
|