How To Install ESXi on Nutanix
This is Part 5 of the Nutanix XCP Deep-Dive, covering the manual installation of ESXi and the CVM with Phoenix.
This will be a multi-part series, describing how to design, install, configure and troubleshoot an advanced Nutanix XCP solution from start to finish for vSphere, AHV and Hyper-V deployments:
- Nutanix XCP Deep-Dive – Part 1 – Overview
- Nutanix XCP Deep-Dive – Part 2 – Hardware Architecture
- Nutanix XCP Deep-Dive – Part 3 – Platform Installation
- Nutanix XCP Deep-Dive – Part 4 – Building a Nutanix SE Toolkit
- Nutanix XCP Deep-Dive – Part 5 – Installing ESXi Manually with Phoenix
- Nutanix XCP Deep-Dive – Part 6 – Installing ESXi with Foundation
- Nutanix XCP Deep-Dive – Part 7 – Installing AHV Manually
- Nutanix XCP Deep-Dive – Part 8 – Installing AHV with Foundation
- Nutanix XCP Deep-Dive – Part 9 – Installing Hyper-V Manually with Phoenix
- Nutanix XCP Deep-Dive – Part 10 – Installing Hyper-V with Foundation
- Nutanix XCP Deep-Dive – Part 11 – Benchmark Performance Testing
- Nutanix XCP Deep-Dive – Part 12 – ESXi Design Considerations
- Nutanix XCP Deep-Dive – Part 13 – AHV Design Considerations
- Nutanix XCP Deep-Dive – Part 14 – Hyper-V Design Considerations
- Nutanix XCP Deep-Dive – Part 15 – Data Center Facility Design Considerations
- Nutanix XCP Deep-Dive – Part 16 – The Risks
- Nutanix XCP Deep-Dive – Part 17 – CVM Autopathing with ESXi
- Nutanix XCP Deep-Dive – Part 18 – more to come as the series evolves (Cloud Connect to AWS and Azure, Prism Central, APIs, Metro, DR, etc.)
You would normally use Foundation to deploy a Nutanix cluster; however, you sometimes need to do this manually when Foundation is having issues.
Use-Case
You have been given a Nutanix XCP block to bring online and you have tried to use Foundation to deploy the cluster. No bueno.
So you have decided to follow the manual process of installing ESXi first and then customising each ESXi host with Phoenix, ending up with three unconfigured Nutanix nodes (Node A, Node B and Node C). Note: until a Nutanix cluster is created, you will not be able to access the Prism UI.
Prerequisites
- You have your Nutanix SE toolkit complete with ESXi ISO (VMware-VMvisor-Installer-201501001-2403361.x86_64.iso) and Phoenix ESXi ISO (phoenix-2.0_ESX_NOS-4.0.2.1.iso).
- You have the Nutanix XCP block connected to your 1GbE LAN switch, along with your Nutanix SE Laptop.
- You have a DHCP server running from your Laptop on the same subnet that the installation requires.
- If you are using Foundation 2.1.10, you can generate the latest Phoenix for ESXi image with a command. For Phoenix 2.0 and below, you can download it from the Nutanix Portal.
Assumptions
- You have your Nutanix SE Toolkit and you know what you are doing.
Accessing the BIOS to set the IPMI IP Address
- Connect your VGA monitor and USB Keyboard to Node A.
- Power on Node A by pressing the Power-On button for Node A (located on the bottom left mounting ear).
- Wait for the Nutanix logo to appear and press the "Delete" key to enter BIOS setup mode.
- Use the left/right arrow keys to navigate to the "IPMI" tab.
- Use the up/down arrow keys to navigate to the "BMC Network Configuration" object and press "Enter".
- Select "Update IPMI LAN Configuration", select "Yes" and press "Enter".
- Select "Configuration Address Source", select "Static" and press "Enter".
- Select "Station IP Address", "Subnet Mask" and "Router/Gateway IP Address" and configure the settings you want.
- Press "F4" or use the left/right arrow keys to navigate to the "Save & Exit" tab.
- From your Laptop, make sure you can ping the IP address you just configured and access the IPMI Login interface via your Web browser.
- Repeat steps 1 to 10 for Nodes B and C.
- You should now have three working IPMI IP addresses that you can access via your Web browser. Proceed to the next section.
- Important: Do not modify any other BIOS parameters unless instructed to do so by Nutanix Support.
- Note: within a functioning ESXi node, you can make these changes by using "ipmitool" from the ESXi SSH shell.
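As a sketch of that note, the ipmitool calls might look like the fragment below. This is a hedged example: the LAN channel number (1) and the tool's path vary by platform and NOS version, and the IP values are placeholders. The script only prints each command for review rather than executing it.

```shell
#!/bin/sh
# Sketch: setting a static IPMI LAN configuration with ipmitool from the
# ESXi shell. Placeholder values -- substitute your own IPMI addressing.
IPMI_IP="192.168.1.41"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"

LOG=""
run() {                          # print each command for review instead of executing it
    LOG="$LOG $*"
    echo "$@"
}

# LAN channel 1 is typical on Supermicro-based nodes (an assumption -- verify with
# "ipmitool lan print" on your hardware).
run ipmitool lan set 1 ipsrc static
run ipmitool lan set 1 ipaddr "$IPMI_IP"
run ipmitool lan set 1 netmask "$NETMASK"
run ipmitool lan set 1 defgw ipaddr "$GATEWAY"
run ipmitool lan print 1         # verify the resulting configuration
```

To execute for real on an ESXi host, change `run()` to execute `"$@"` instead of echoing it.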
BIOS Screenshots:
Connecting to IPMI
- From your Laptop open a Web browser and access the IPMI address of Node A (http:// <IPMI IP address>/).
- At the Nutanix IPMI login screen, use the credentials "ADMIN/ADMIN" and press "Login".
- Select the "Remote Control" icon on the toolbar, select "Console Redirection" and then press the "Launch Console" button.
- Accept the Java security warnings and wait for the Console window to open. You may have to try different browsers and Java versions to get this working – it can be painful.
- Press the "Virtual Media" button on the Console toolbar and select "Virtual Storage".
- In the Virtual Storage window, select the "CDROM&ISO" tab, set "Logical Drive Type" to "ISO File" and press "Open Image".
- Browse to the ESXi ISO image (VMware-VMvisor-Installer-201501001-2403361.x86_64.iso in this example) and press the "Open" button.
- In the Virtual Storage window, press "Plug in" and make sure the "Connection Status History" shows "Plug-In OK", then press the "OK" button.
- You are now ready to reset the Node and start the ESXi installation process.
- Repeat steps 1 to 9 for Nodes B and C.
- You should now have three Console windows (to Nutanix nodes) with the ESXi ISO mounted and ready for installation. Proceed to the next section.
IPMI Screenshots:
Installing ESXi 5.5
- From the previous section, you should have a Console window open with the ESXi ISO image mounted, ready to install.
- Press the "Virtual Media" button on the Console toolbar and select "Virtual Keyboard". Depending upon your Laptop OS and system configuration, this may be required for pressing the function keys during the install.
- Select the "Power Cycle Server" option from the "Power Control" icon on the Console toolbar.
- Wait for the ESXi ISO image to boot and present the EULA screen. Press "F11" to accept and continue.
- In the "Select a Disk to Install or Upgrade" screen, select the "InnoLite SATADOM" storage device and press "Enter".
- If the "ESXi and VMFS Found" window appears, select "Install ESXi, overwrite VMFS datastore" and press "Enter".
- In the "Keyboard layout" window, select "US Default" and press "Enter".
- In the "Enter a root password" window, you must type "nutanix/4u" and press "Enter". Otherwise the CVM will not be able to connect to ESXi.
- In the "Confirm Install" window, press "F11" to install.
- Press the "Virtual Media" button on the Console toolbar and select "Virtual Storage".
- Wait for the "Installation Complete" window to appear.
- In the Virtual Storage window, press "Plug Out" and make sure the "Connection Status History" shows "Plug-Out OK" to unmount the ESXi ISO image.
- In the "Installation Complete" window, press "Enter" to reboot.
- Repeat steps 2 to 13 for Nodes B and C.
- You should now have three Nutanix Nodes with ESXi successfully installed. Proceed to the next section.
ESXi Install Screenshots:
Using Phoenix to install the Controller VM and Customise ESXi
- From the previous section, you should have a Console window open with ESXi successfully installed.
- Press the "Virtual Media" button on the Console toolbar and select "Virtual Storage".
- In the Virtual Storage window, select the "CDROM&ISO" tab, set "Logical Drive Type" to "ISO File" and press "Open Image".
- Browse to the Phoenix ESXi ISO (phoenix-2.0_ESX_NOS-4.0.2.1.iso in this example) and press the "Open" button.
- In the Virtual Storage window, press "Plug in" and make sure the "Connection Status History" shows "Plug-In OK", then press the "OK" button.
- Select the "Power Cycle Server" option from the "Power Control" icon on the Console toolbar.
- Wait for the "Nutanix Installer" screen to appear, then select "Configure Hypervisor" and "Clean SVM" and then press the "Start" button.
- Wait for the Nutanix installation process to complete (a "reboot now" message will appear).
- In the Virtual Storage window, press "Plug Out" and make sure the "Connection Status History" shows "Plug-Out OK" to unmount the Phoenix ESXi ISO image.
- In the Console window, press "Y" and then "Enter" to reboot.
- After ESXi boots, you will see the message "INSTALLING-PLEASE-BE-PATIENT" on the Console screen (ESXi DCUI). This is a Nutanix VIB executing the first-boot installation script, configuring ESXi and registering the CVM vmx file.
- Repeat steps 2 to 11 for Nodes B and C.
- You should now have three Nutanix Nodes with ESXi customised and the CVM successfully installed. Proceed to the next section.
Phoenix Install Screenshots:
Configure IP addresses
During this manual install process, everything is configured with DHCP. Even if you configure a static IP for vmk0 after the ESXi installation, the Phoenix installation process will reconfigure vmk0 with DHCP. So you need to touch each node to configure the static IP addresses you require.
- From the previous section, you should have the consoles open to three Nutanix Nodes with ESXi customised and the CVM successfully installed.
- Press "F2" on the IPMI Console and configure the "Management" IP to be a static IP address. Then logout.
- Use the vSphere Client to connect to the static IP address of the ESXi host.
- From the vSphere Client, open the console to the CVM and login with the credentials "nutanix/nutanix/4u".
- Edit the file "/etc/sysconfig/network-scripts/ifcfg-eth0" and modify/add "BOOTPROTO="none"", "NETMASK="N.N.N.N"", "IPADDR="N.N.N.N"" and "GATEWAY="N.N.N.N"" with the correct IP address settings of the CVM.
- You now have an unconfigured Nutanix node with static IP addresses that is ready to be joined to a Nutanix cluster.
- Repeat steps 2 to 6 for Nodes B and C.
- Note: within a functioning cluster you can make these changes by using the URL http:// <IPv6 LinkLocal>:2100/cluster_init.html from your Web browser.
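As a sketch, the edited CVM interface file might look like the fragment below. The addresses are placeholders, and the script writes to /tmp so it can be reviewed safely; on the CVM the real file is /etc/sysconfig/network-scripts/ifcfg-eth0 (the standard CentOS convention the CVM follows).

```shell
#!/bin/sh
# Write an example static-IP ifcfg-eth0 for the CVM.
# Placeholder addresses -- substitute your own.
CFG="/tmp/ifcfg-eth0"    # on the CVM: /etc/sysconfig/network-scripts/ifcfg-eth0

cat > "$CFG" <<'EOF'
DEVICE="eth0"
ONBOOT="yes"
NM_CONTROLLED="no"
BOOTPROTO="none"
IPADDR="192.168.1.51"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
EOF

echo "Wrote $CFG:"
cat "$CFG"
# After editing the real file on the CVM, restart networking (or reboot the
# CVM) for the change to take effect.
```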
Nutanix CVM Screenshots:
Configuring a Nutanix Cluster via NCLI
- From the vSphere Client, open the console to the CVM and login with the credentials "nutanix/nutanix/4u".
- Run the command "cluster status" and verify that the cluster is unconfigured.
- Run the command "cluster -s <Node_A_CVM_eth0_IP>,<Node_B_CVM_eth0_IP>,<Node_C_CVM_eth0_IP> create" to create the cluster.
- Run the command "ncli cluster add-to-name-servers servers=<DNS_IP>" to configure DNS.
- Run the command "ncli cluster add-to-ntp-servers servers=<NTP_IP>" to configure NTP.
- Run the command "ncli cluster set-external-ip-address external-ip-address=<CLUSTER_IP>" to configure the Cluster IP address.
- Run the command "cluster status" and verify that the cluster has been created.
- You can now access the Prism UI and continue configuring the Storage Pool and Container(s) for the cluster.
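The NCLI steps above, collected into one dry-run sketch. The IP addresses are placeholders, and the script only prints the commands so the sequence can be reviewed; run them from a CVM console for real.

```shell
#!/bin/sh
# Dry-run sketch of the cluster creation sequence (placeholder addresses).
CVM_IPS="192.168.1.51,192.168.1.52,192.168.1.53"
DNS_IP="192.168.1.10"
NTP_IP="192.168.1.11"
CLUSTER_IP="192.168.1.50"

LOG=""
run() {                          # print each command for review instead of executing it
    LOG="$LOG $*"
    echo "$@"
}

run cluster status               # should report the cluster as unconfigured
run cluster -s "$CVM_IPS" create # create the 3-node cluster
run ncli cluster add-to-name-servers servers="$DNS_IP"
run ncli cluster add-to-ntp-servers servers="$NTP_IP"
run ncli cluster set-external-ip-address external-ip-address="$CLUSTER_IP"
run cluster status               # verify the cluster has been created
```

As above, swap the `echo` in `run()` for direct execution when running on a CVM.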
What Happened?
You have performed the following:
- Accessed the BIOS to statically set the IPMI network address.
- Accessed the IPMI URL to launch the Java Console and mounted the ISO images to install ESXi/Phoenix.
- Installed ESXi on the InnoLite SATADOM (64GB USB Flash Drive plugged into the Node motherboard).
- Installed Phoenix – which installed a VIB file (with first-boot script) for ESXi on the SATADOM and the associated CVM files.
- The Phoenix installation launches via the script embedded in the VIB when ESXi is first booted, customising ESXi and installing the Controller VM.
- Coincidentally, William Lam has recently written a nice post about how VIBs can be used to run scripts inside ESXi, which is what Nutanix are doing here.
- Configured static IP addresses for each ESXi vmk0 and each CVM eth0 interface – this is because the manual install process uses DHCP by default.
- Used NCLI to create a Nutanix cluster.
If you connect to ESXi using the vSphere Client, you can see the results of the VIB first-boot script:
- vSS "vSwitchNutanix" with vSS Portgroup "svm-iscsi-pg" and VMkernel port "vmk-svm-iscsi-pg"
- SSH enabled with the SSH warning masked (UserVars.SuppressShellWarning)
- Defunct iSCSI Software adapter
- Advanced Software Settings for NFS (Net.TcpipHeapMax, Net.TcpipHeapSize, NFS.MaxVolumes, etc.)
- Controller VM boot from ISO with LSI2008 SCSI Adapter in Passthrough mode and CPU/Memory reservations
- NTP configured
- Virtual Machine Startup/Shutdown enabled for the CVM
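These first-boot results can be spot-checked from the ESXi shell. The sketch below simply prints a checklist of standard ESXi 5.5 commands (so it needs no ESXi host to render); run each command on the host itself to verify the settings.

```shell
#!/bin/sh
# Print a checklist of ESXi 5.5 shell commands that verify the first-boot
# customisations described above. Run these on the ESXi host itself.
CHECKS='
esxcli network vswitch standard list                      # look for vSwitchNutanix
esxcli network ip interface list                          # look for vmk-svm-iscsi-pg
esxcli system settings advanced list -o /Net/TcpipHeapMax # NFS heap tuning
esxcli system settings advanced list -o /NFS/MaxVolumes   # NFS volume limit
esxcli system settings advanced list -o /UserVars/SuppressShellWarning
vim-cmd vmsvc/getallvms                                   # the registered CVM
'
printf '%s\n' "$CHECKS"
```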
If you run a partition management program like GParted, you will see the following partitions on each Node:
- InnoLite SATADOM fat16 Partitions – where the ESXi boot and CVM (Service VM) files reside
- SSD ext4 Partitions – where the Nutanix Home, Cassandra, OpLog, Content Cache and Extent Store reside
- HDD ext4 Partitions – where the Curator and Extent Store reside
vSphere Client to ESXi host images:
GParted images:
Source: https://vcdx133.com/2015/07/22/nutanix-xcp-deep-dive-part-5-installing-esxi-manually-with-phoenix/