
VMware vCenter 6/6.5: Creating Host Profiles

This post describes how to perform the basic task of creating a host profile.

Description of Host Profiles:

VMware Host Profiles are available through VMware vCenter Server and enable you to establish standard configurations for VMware ESXi hosts and to automate compliance with these configurations, simplifying operational management of large-scale environments and reducing errors caused by misconfiguration.

Prerequisites:

  1. You need to have a vSphere installation
  2. You need to have admin rights
  3. You need a configured ESXi host that acts as the reference model

Steps:

  1. In vCenter, navigate to the Host Profiles view
  2. Click the Extract profile from a host icon
  3. Select the host that will act as the reference model host and click Next
  4. Enter a name and a description for the new profile and click Next
  5. Review the summary information for the new profile and click Finish
  6. The new profile appears in the profile list (a PowerCLI equivalent is sketched below)
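
If you prefer the command line, the same extraction can be done with PowerCLI. This is a minimal sketch; the vCenter address, host name, and profile name are placeholders from my lab, not values taken from the steps above.

# Connect to vCenter and extract a host profile from a reference host
Connect-VIServer -Server vcenter.lab.local
New-VMHostProfile -Name "Lab-Standard" -Description "Extracted from reference host" -ReferenceHost (Get-VMHost "esx1.lab.local")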


Done!

VMware / vCenter: Terms, Acronyms, Glossary {Tag your IT}

I recently took, failed, then retook and passed the VMware 2V0-620 (vSphere 6 Foundations) exam. I am now studying for the proctored exam(s) for the VMware Certified Professional 6 – Data Center Virtualization certification.

With that come many terms, acronyms, and glossary items I will need to remember.
I am adding a list of terms here and will expand on it as I come across new ones.

 

VM: Virtual Machine – a software computer that, like a physical computer, runs an operating system and applications. https://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.vm_admin.doc_50/GUID-CEFF6D89-8C19-4143-8C26-4B6D6734D2CB.html

ESXi: The vSphere Hypervisor from VMware (formerly ESX) is an enterprise-class, type-1 hypervisor.

VMFS: Virtual Machine File System for ESXi hosts, a clustered file system for running VMs

DCUI: Direct Console User Interface

iSCSI: Ethernet-based shared storage protocol.

SAS: Drive type for local disks (also SATA).

FCoE: Fibre Channel over Ethernet, a networking and storage technology.

HBA: Host Bus Adapter for Fibre Channel storage networks.

LUN: Logical unit number, identifies shared storage (Fibre Channel/iSCSI).

IOPS: Input/output operations per second, a measurement of a drive’s performance.

pRDM: Physical mode raw device mapping, presents a LUN directly to a VM.

vRDM: Virtual mode raw device mapping, encapsulates a path to a LUN specifically for one VM in a VMDK.

SAN: Storage area network, a shared storage technique for block protocols (Fibre Channel/iSCSI).

NAS: Network attached storage, a shared storage technique for file protocols (NFS).

NFS: Network file system, a file-based storage protocol.

DAS: Direct attached storage, disk devices attached directly to a host.

VAAI: vStorage APIs for Array Integration, the ability to offload I/O commands to the disk array.

SSD: Solid state disk, a non-rotational drive that is faster than rotating drives.

VM Snapshot: A point-in-time representation of a VM.

ALUA: Asymmetrical logical unit access, a storage array feature. Duncan Epping explains it well.

VMX: VM configuration file.

VMEM: The page file of the guest VM.

NVRAM: A VM file storing the state of the VM BIOS.

VMDK: VMware’s virtual machine disk format; the virtual disk files that contain the VM’s operating system and data.

VMSN: Snapshot state file of the running VM.

VMSD: VM file for storing information and metadata about snapshots.

VMSS: VM file for storing suspended state.

VMTM: VM file containing team data.

VMXF: Supplemental configuration file for when VMs are used in a team.

Quiesce: The act of quieting (pausing running processes) a VM, usually through VMware Tools.

NUMA: Non-uniform memory access, when multiple processors are involved their memory access is relative to their location.

Virtual NUMA: Virtualizes NUMA with VMware hardware version 8 VMs.

VSAN: Virtual SAN, a new VMware announcement for making DAS deliver SAN features in a virtualized manner.

vSwitch: A virtual switch, connects VMs to a physical network.

vDS: vNetwork Distributed Switch, an enhanced version of the virtual switch.

ISO: Image file format, named after the ISO 9660 file system used for optical media.

vSphere Client: Administrative interface of vCenter Server.

vSphere Web Client: Web-based administrative interface of vCenter Server.

Host Profiles: Feature to deploy a pre-determined configuration to an ESXi host.

Auto Deploy: Technique to automatically install ESXi to a host.

VUM: vSphere Update Manager, a way to update hosts and VMs with latest patches, VMware Tools and product updates.

vCLI: vSphere Command Line Interface, allows tasks to be run against hosts and vCenter Server.

vSphere HA: High Availability, will restart a VM on another host if it fails.

vCenter Server Heartbeat: Keeps vCenter Server available in the event the host running vCenter fails.

Virtual Appliance: A pre-packaged VM with an application installed on it.

vCenter Server: Server application that manages a vSphere environment.

vCSA: Virtual appliance edition of vCenter Server.

vCloud Director: Application to pool vCenter environments and enable self-deployment of VMs.

vCloud Automation Center: IT service delivery through policy and portals, get familiar with vCAC.

VADP: vSphere APIs for Data Protection, a way to leverage the infrastructure for backups.

MOB: Managed Object Reference, a technique vCenter uses to classify every item.

DNS: Domain Name Service, a name resolution protocol. Not related to VMware, but it is imperative you set DNS up correctly to virtualize with vSphere.

vSphere: Collection of VMs, ESXi hosts, and vCenter Server.

vCenter Linked Mode: A way of pooling vCenter Servers, typically across geographies.

vMotion: A VM migration technique.

Storage vMotion: A VM storage migration technique from one datastore to another.

vSphere DRS: Distributed Resource Scheduler, service that manages performance of VMs.

vSphere SDRS: Storage DRS, manages free space and datastore latency for VMs in pools.

Storage DRS Cluster: A collection of SDRS objects (volumes, VMs, configuration).

Shares: Numerical value representing the relative priority of a VM.

Datastore: A disk resource where VMs can run.

vSphere Fault Tolerance: An availability technique to run the networking, memory and CPU of a VM on two hosts to accommodate one host failure.

DPM: Distributed Power Management, a way to shut down ESXi hosts when they are not being used and turn them back on when needed.

vShield Zones: A firewall for vSphere VMs.

vCenter Orchestrator: An automation technique for vCloud environments.

OVF: Standards based format for delivering virtual appliances.

OVA: Single-file packaging of an OVF template, usually distributed as a download from a source Internet site. Read more here.

VMware Tools: A set of drivers for VMs to work correctly on synthetic hardware devices. Read more on VMware Tools.

vSphere Licensing: Different features are available as the licensing level increases, from free ESXi to Enterprise Plus.

vCloud Suite: The collection of technologies to deliver the VMware Software Defined Data Center.

VMware Compatibility Matrix: List of supported storage, servers, and more for VMware technologies. Bookmark this page!

vSphere role: A permissions construct assigned to users or groups.

Configuration Maximums: Guidelines of how big a VM can be; see the newest for vSphere 5.5.

Transparent page sharing: A memory management technique; eliminates duplicate blocks in host memory.

Memory compression: A memory management technique; applies a compressor to active memory blocks on the host.

Balloon driver: A memory management technique; reclaims guest VM memory via VMware Tools.

Hypervisor swap: A memory management technique; puts guest VM memory to disk on the host.

Hot-add: A feature to add a device to a VM while it is running, such as a VMDK.

Dynamic grow: A feature to increase the size of VMDK while the VM is running.

CPU Ready: The percentage of time that a VM is ready to run but is waiting for a physical CPU to become available (a higher number is bad).

Nested hypervisor: The ability to run ESXi as a VM either on ESXi, VMware Workstation, or VMware Fusion.

Virtual hardware version: A revision of a VM that aligns to its compatibility. vSphere 5.5 is hardware version 10, for example.

Maintenance mode: An administration technique where a host safely evacuates its running and powered-off VMs before changes are made.

vApp: An organizational construct combining one or more VMs.

Cluster: A collection of hosts in a vSphere data center.

Resource pool: A performance management technique, has DRS rules applied to it and contains one or more VMs, vApps, etc.

vSphere folder: An organizational construct, a great way to administer permissions and roles on VMs.

Datacenter: Parent object of the vSphere Cluster.

vCloud Networking and Security: Part of the vCloud Suite; provides basic networking and security functionality.

vCenter Site Recovery Manager: An automated solution to prepare for a site failover event for the entire vSphere environment.

NSX: New technology virtualizing the network layer for VMware environments. Read more here.

VDI: Virtual desktop infrastructure, delivered with Horizon View (also offered as DaaS, Desktop as a Service); the desktops run as VMs on ESXi with vSphere.

VXLAN: Virtual Extensible LAN, gives VMs a logical network that spans different physical networks.

vCenter Configuration Manager: Part of vCloud Suite that automates configuration and compliance for multiple platforms.

vCenter Single Sign on: Authentication construct between components of the vCloud Suite.

VM-VM affinity: Sets rules so two VMs should run on the same ESXi host or stay separated.

Storage I/O Control: I/O prioritization for VMs.

NIOC: vSphere Network I/O Control. When Network I/O Control is enabled, distributed switch traffic is divided into the following predefined network resource pools: Fault Tolerance traffic, iSCSI traffic, vMotion traffic, management traffic, vSphere Replication (VR) traffic, NFS traffic, and virtual machine traffic.

 

 

 

VMware vSphere 6.5 Nested Virtualization – Create and Install ESXi 6.5

With vSphere 6.5 and nested ESXi 6.5 hosts, I can get hands-on with advanced vSphere features in vCenter without needing extra physical hardware in my home lab. This setup lets me test new VMware features or simulate issues that could happen in production.

The term “nested virtualization” is used to describe a hypervisor running under another hypervisor. In this case, I will be installing ESXi 6.5 inside a virtual machine hosted on a physical ESXi 6.5 host.

Requirements:

  • Physical ESXi host (ESXi 6.0 – 6.5)
  • Physical hardware supporting either Intel EPT or AMD RVI

Steps to setup ESXi 6.5 virtual machine guest:

Log into vCenter or the ESXi host with a user that has admin credentials. In my case I am using the vSphere Web Client. *Spoiler alert*: no more C# (thick) client for vCenter. However, it still works with the ESXi 6.5 hosts.

Switch to the “VMs and Templates” view. Right click a folder and select New Virtual Machine > New Virtual Machine…

Choose the "Custom" configuration, then select Other for the guest OS family and do the same for the guest OS version. *Note*: ensure you are choosing 64-bit (Other 64-bit).

Once at the customize hardware screen, make a few modifications to the default values.

VM Guest Configuration Settings:

  • Set the CPU count to a minimum of 2 (cores included)
  • Set memory to a minimum of 6 GB RAM
  • Set the disk to 2 GB (thin provisioned)
  • Change the network adapter type to VMXNET 3
  • Add an additional network adapter (also VMXNET 3)

Additional Configuration Step: Enabling support for 64-bit nested VMs

Locate and expand the CPU section and tick the checkbox next to "Expose hardware assisted virtualization to the guest OS". This setting allows you to run 64-bit VMs inside the nested ESXi host; a PowerCLI sketch of these settings follows below. Read more about this feature here: https://en.wikipedia.org/wiki/Hardware-assisted_virtualization
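
For reference, the same guest settings can also be applied with PowerCLI. This is a rough sketch under my lab's assumptions; the VM, host, datastore, and portgroup names are placeholders, and the guest OS ID simply maps to Other (64-bit). Older PowerCLI releases may need the snap-in loaded instead of the module.

# Create the shell VM that will host the nested ESXi installation
$vm = New-VM -Name "nested-esxi65" -VMHost "esx1.lab.local" -Datastore "datastore1" -NumCpu 2 -MemoryGB 6 -DiskGB 2 -DiskStorageFormat Thin -NetworkName "VM Network" -GuestId otherGuest64

# Make the first NIC VMXNET 3 and add a second VMXNET 3 adapter
Get-NetworkAdapter -VM $vm | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
New-NetworkAdapter -VM $vm -NetworkName "VM Network" -Type Vmxnet3 -StartConnected

# Expose hardware-assisted virtualization to the guest OS (nested HV)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)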

Click Next and finish the configuration wizard.

At this point you are ready to install ESXi 6.0 – 6.5 as a guest VM.

I leave you with this video of the full process. Thanks for visiting and I hope this helps any of you looking to do the same.

 

Originally posted on my LinkedIn Page:

https://www.linkedin.com/pulse/vmware-vsphere-65-nested-virtualization-create-install-jermal-smith

Installing vCenter Appliance 6.5

With the general availability (GA) release of vSphere 6.5, I decided to upgrade my home lab and learning environment to the latest and greatest from VMware, not only for learning but also for running the systems I use daily in my lab.

Preparation work:

  • Download and Install ESXi 6.5 to my new lab hardware – Configure ESXi 6.5
  • Download the VCSA 6.5 Installation media and start the install process – See below

I mounted the installation media (ISO) on my Windows notebook and started the installation by navigating to \vcsa-ui-installer\win32\ and starting the installer.exe application.

This will display the vCenter Server Appliance 6.5 installer. Since this will be a new installation of vCenter, I selected "Install".

Here you find a two-step installation process. The first stage deploys a vCenter Server 6.5 appliance and the second stage configures the deployed appliance.

Accept the standard End User License Agreement (EULA) to move forward into the installation.

Next, select the type of installation your environment needs. In my case I have chosen the embedded Platform Services Controller deployment.

Next, choose the ESXi host where you would like to have this vCenter appliance deployed and provide the root credentials of the host for authentication.

Then, provide a name for the vCenter appliance VM that is going to be deployed and set the root password for the appliance.

Based upon your environment size, select the sizing of the vCenter appliance. I went with Tiny as it fits the needs of my lab environment. Note: even Tiny configures the virtual appliance with 10 GB of RAM, so be sure you can support this in yours.

Next, select the datastore where the vCenter appliance files need to reside.

Configure the networking of the vCenter appliance. Make sure you have a valid IP address that resolves both forward and reverse in DNS before this step to prevent any failures during installation; a quick check is shown below.
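
A quick way to verify forward and reverse resolution from the Windows machine running the installer is PowerShell's Resolve-DnsName; the hostname and IP address below are placeholders for your own appliance.

# Forward lookup: name to IP
Resolve-DnsName vcsa.lab.local

# Reverse lookup: IP to name
Resolve-DnsName 192.168.1.50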

Review and finish the deployment; the progress for stage 1 then begins. Upon completion, click Continue to proceed to configuring the appliance. This is stage 2.

The stage 2 wizard begins at this point. The first section configures the Network Time Protocol (NTP) settings for the appliance and enables shell access for it.

Next, configure an SSO domain name, the SSO password, and the site name for the appliance. Once the configuration wizard is completed you can log in to the web client.

The following short video I made gives you a feel for the install process. Enjoy.

 

 

vSphere 6.5 release notes & download links

 

This weekend I had the fun of getting my hands and feet wet with installs of VMware’s ESXi 6.5 and vCenter 6.5. The links below should be useful to any of you looking to learn about the new release and download bits to install.

Release Notes:

Downloads:

Documentation:

VMware Flings: Embedded Host Client Update

I am excited about the VMware Labs Flings release of version 3 of the Embedded Host Client. For those of you who find yourselves out of the loop at times, no worries, it happens. Here are some details about the Embedded Host Client:

The Embedded Host Client is written purely in HTML and JavaScript and is served directly from your ESXi host. The client is still in its development phase and does not yet have the full feature set, but it has already implemented a very useful set of features.

These features include:

  • VM operations (Power on, off, reset, suspend, etc).
  • Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
  • Displaying summaries, events, tasks and notifications/alerts
  • Providing a console to VMs
  • Configuring host networking
  • Configuring host services

 

Installation Steps:

  1. Enable SSH on your ESXi host, using DCUI (Direct Console User Interface) or the vSphere web client.
  2. SCP the VMware_bootbank_esx-ui_0.0.2-0.1.3172496.vib to a directory on your ESXi host. In my case I used a shared storage LUN or NFS volume as I will apply this to multiple hosts.
  3. Next issue the following command:
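
The exact path will vary; assuming the VIB was copied to the NFS path used in the example output further down, the install command is:

esxcli software vib install -v /vmfs/volumes/nfs/installs/flings/VMware_bootbank_esx-ui_0.0.2-0.1.3172496.vib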

     

Upgrade Steps

  1. Enable SSH on your ESXi host, using DCUI (Direct Console User Interface) or the vSphere web client.
  2. SCP the VMware_bootbank_esx-ui_0.0.2-0.1.3172496.vib to a directory on your ESXi host. In my case I used a shared storage LUN or NFS volume as I will apply this to multiple hosts.
  3. Next issue the following command:
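
For an upgrade, the command uses update instead of install, as reflected in the example output below:

esxcli software vib update -v /vmfs/volumes/nfs/installs/flings/VMware_bootbank_esx-ui_0.0.2-0.1.3172496.vib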

     

Example output from running the above command:

[root@esx1:~] esxcli software vib update -v /vmfs/volumes/nfs/installs/flings/VMware_bootbank_esx-ui_0.0.2-0.1.3172496.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: VMware_bootbank_esx-ui_0.0.2-0.1.3172496
VIBs Removed: VMware_bootbank_esx-ui_0.0.2-0.1.2976804
VIBs Skipped:

 

Tools of choice

WinSCP – http://winscp.net/eng/index.php

Putty – http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

 

For more info on ESXi Embedded Host Client: https://labs.vmware.com/flings/esxi-embedded-host-client

 

Thanks for visiting – jermal

 

OVF Deployment Issue Ubuntu Snappy 15.04-stable (5 cloud)

When you have time, you do something.

Tonight I headed over to the Ubuntu site to grab the latest version because I was thinking of installing OpenStack, when I noticed "Get Ubuntu Core" on their landing page; yes, something new.

But where is my Raspberry Pi? No worries, they have OVF images I can use to deploy to my vCenter lab here at home. So I started doing just that and encountered an issue I had run into once before.

Let me walk you through the events.

Downloading the image

  1. Found myself on the Ubuntu Internet of Things landing page: http://www.ubuntu.com/internet-of-things
  2. Located the OVF section of the getting started page: http://developer.ubuntu.com/en/snappy/start/
  3. Downloaded the OVA image (x86): 15.04/stable

Deploying the OVF Template 

  1. Using the vSphere Client, connect to vCenter (or a standalone ESXi host)
  2. Select the server to deploy to and choose File > Deploy OVF Template
  3. Browse to the path where you downloaded your OVF image and select it

This is when I received the following error:
The following manifest file entry (line 1) is invalid: SHA256(core-stable-amd64-cloud.ovf)= d4b8922ed38a4eb9055576f7b46f8e92f463398298f3a42af942f25457d4d41c

Troubleshooting Step 1

  1. I extracted the OVA image (core-stable-amd64-cloud) with 7zip
  2. Once extracted attempted the steps detailed above “Deploying the OVF Template”

The same error was thrown once more.

Troubleshooting Step 2

Within the extracted folder exist the following file types: certificate, manifest, OVF (instruction/configuration), and disk image.

  1. I removed the SHA256(core-stable-amd64-cloud.ovf)= d4b8922ed38a4eb9055576f7b46f8e92f463398298f3a42af942f25457d4d41c line from the .mf (manifest) file
  2. Once it was removed, I attempted the steps detailed above in "Deploying the OVF Template"

It failed as well, only this time the error stated that the remaining SHA256 entry was also invalid.

Troubleshooting Step 3 – Third time is the charm

  1. Moved into the extracted OVA folder
  2. Deleted the .mf (manifest) file
  3. Followed steps above “Deploying the OVF Template” only this time using the OVF located in the extracted folder

This time around everything worked.

So why did this happen?

The template was changed after its creation, which invalidated the SHA256 checksum. I have made templates myself, only to have to edit something out later, such as removing a CD-ROM reference, which caused me similar issues. As an alternative to deleting the manifest, you can recompute the checksums and update the .mf file, as sketched below.
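
This is a minimal sketch using PowerShell's Get-FileHash, assuming the file names from this post; the new hash can then replace the stale entry in the .mf file.

# Recompute the SHA256 checksum for the OVF named in the manifest
Get-FileHash -Algorithm SHA256 .\core-stable-amd64-cloud.ovf

# And for any disk images in the extracted folder
Get-ChildItem *.vmdk | Get-FileHash -Algorithm SHA256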

I hope this helps if you face this incident or something similar

 

Thanks for visiting – jermal

Also published here

HowTo: Export VMware vSphere Sessions

I moved to a new workstation and followed my previous steps to export my PuTTY sessions. This time around I am exporting my Virtual Infrastructure Client settings.

  1. From the Run prompt (shortcut keys: WinKey+R) enter regedit; this opens the Registry Editor
  2. Locate the following branch: HKEY_CURRENT_USER\SOFTWARE\VMware
  3. On the File menu, click Export
  4. In File name, enter a name for the registry file.
  5. Choose a location to save the file. You can now copy this file to the new system and import the session data there (a command-line equivalent is shown below).
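
If you prefer to script it, the built-in reg.exe can do the same export and import; the file name here is only an example.

# On the old workstation: export the VMware client settings
reg export "HKCU\SOFTWARE\VMware" vmware-sessions.reg

# On the new workstation: import them back
reg import vmware-sessions.reg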

This saves me a lot of time.

 

 

I hope you enjoyed this post, thanks for visiting – jermal

Back on vCenter in my home lab

Oh yeah; anyone else have that warm fuzzy feeling right now? Hashtags: #VMware #vCenter #ESXi

 

All in my home lab. Once again I have the management capabilities I prefer over my systems. VMware vCenter 6 is awesome and I am in love with the web interface.

Next — Storage upgrade 3.0. That will be 16TB of usable RAID10 storage


There will be NFS, iSCSI, DLNA, and Samba.

Power Off & On VMware Guest with a Scheduled Task

 

Using Windows Task Scheduler, you can schedule power off and power on events for guest systems running in VMware vCenter or on a standalone ESXi host.

My steps:

  1. Create a basic task – give it a name and description (optional)
  2. Choose when you want this task to start
  3. Select the start date and time
  4. Choose “Start a program”
  5. Choose the program you would like to run.  In this setup we will be running the following:
  6. C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -NonInteractive -File “C:\work\task\jermsmit.ps1”

     
  7. Click Next and select Yes when Task Scheduler prompts you
  8. On the Finish screen, click Finish. You can open the task's properties to set it to run unattended (an equivalent schtasks command is sketched after these steps).
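
If you would rather create the task from the command line, schtasks can register the same job; the task name and schedule below are only examples, while the script path is the one used in step 6.

schtasks /create /tn "PowerCycle-VMGuests" /sc daily /st 22:00 /tr "powershell.exe -NoLogo -NonInteractive -File C:\work\task\jermsmit.ps1"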

 

The script I am now using does the following:

  1. Loads the VMware PowerCli modules to powershell
  2. Connects to Specified ESXi or vCenter Server
  3. Issues a stop to specific VM Guests
  4. Issues a start to the VM Guest

Script Example:
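
A minimal PowerCLI sketch of those four steps looks something like this; the server name, credentials, and VM names are placeholders for your own environment, and in practice a stored credential is better than a plain-text password.

# 1. Load the VMware PowerCLI module (older PowerCLI versions use Add-PSSnapin instead)
Import-Module VMware.VimAutomation.Core

# 2. Connect to the specified ESXi or vCenter Server
Connect-VIServer -Server vcenter.lab.local -User "lab\svc_task" -Password "********"

# 3. Issue a guest shutdown to specific VM guests (requires VMware Tools)
Stop-VMGuest -VM "test-vm01", "test-vm02" -Confirm:$false

# 4. Issue a start to a VM guest
Start-VM -VM "build-vm01" -Confirm:$false

Disconnect-VIServer -Server vcenter.lab.local -Confirm:$false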

 

Use cases:

  1. Powering systems down to conserve energy (Earth Day initiative)
  2. Allowing systems with large workloads to use full host resources without contention while the other systems are scheduled offline
  3. Quick restore of non-persistent environments

 

Thanks for visiting – jermal