Tuesday, November 22, 2011

View 5: Upgrade MSI package

I was testing out and playing around with ThinApp on View 5.0.  The great thing about View is that you can assign ThinApp packages, in their MSI form, to the various desktops or pools.

However, I was curious how to perform an upgrade when a new version is out, e.g. from Firefox 7.0.1 to Firefox 8.0.

I decided to search around and came across this post on the VMware blog.  It not only let me understand how to perform an upgrade over the old MSI, it also helped explain the MSIProductCode, MSIUpgradeCode and MSIProductVersion parameters.

The upgrade code is very important, especially in my example above.  Each release will have a different MSIProductCode because of the version change; however, the MSIUpgradeCode has to stay the same in order for View to detect that this is an upgrade and to uninstall the previous package when you deploy one with a higher MSIProductVersion.

As the blog post mentioned, you can bump MSIProductVersion from 1.0 to 2.0.  You can also do an incremental change from 1.0 to 1.1, which is useful for a build-level upgrade rather than a version upgrade.
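To make the relationship between the three parameters concrete, here is a sketch of what the relevant package.ini entries could look like.  The GUID values below are made-up placeholders (ThinApp generates its own), not real product codes:

```ini
[BuildOptions]
; Must change with every release -- View sees a different product
MSIProductCode={11111111-2222-3333-4444-555555555501}

; Must stay the same across releases -- this is how View knows
; the new MSI is an upgrade of the old package
MSIUpgradeCode={AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE}

; Must increase: e.g. 7.0 -> 8.0 for a version upgrade,
; or 1.0 -> 1.1 for a build-level upgrade
MSIProductVersion=8.0
```

With the upgrade code held constant and the version raised, View treats the new MSI as an upgrade and removes the old one.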


View 5: Deploy Persona Management Overview

I have kept this post as a draft for the longest time, exactly a month, and finally found some time to finish it.

I have gone through a lot of reading on the deployment of View 5.  It is almost identical to View 4.6 except for Persona Management, which I was very keen to try out.

The View Administration guide for View 5 is a must-read to understand Persona Management, as is the VMware View Persona Management Deployment Guide.

If you are already using Roaming Profiles, you can carry on using them with Persona Management turned on, the two working together, leaving the repository as whatever is already configured in your Roaming Profiles.

Persona Management will supersede Roaming Profiles if you assign a repository path in the group policy.  Honestly, I would like ease of work and administration here, so I will use Persona Management for everything instead.

When doing sizing, if performance is what you are after, whether with Persona Management or Roaming Profiles, please include a persistent disk for your dedicated desktop pool.  This disk serves as a local cache for the profile.



Before I start the video showing how Persona Management is done on View 5.0, I first have to prepare a few things.

  1. Have a file server, in this case sg-fs01, with a shared D: drive of 40GB.
  2. Create a folder named "win7\profiles" and follow the guide from here to set the permissions on the folders.
  3. Share the folder with a '$' suffix as recommended, so it stays hidden if anyone browses the file server, unless the person knows the exact path.
  4. Export from the View server the ViewPM.adm found in c:\program files (x86)\VMware\Server\extras\GroupPolicyFiles (Program Files on 32-bit Windows servers).
  5. Create an OU for your virtual desktops for neatness if possible.  Create a group policy and edit it.
  6. In the Group Policy Management Editor, browse to Computer Configuration>Policies>Administrative Templates, right-click on it and choose Add/Remove Templates.  Import the ViewPM.adm into it.
  7. You will see Classic Administrative Templates (ADM); under it is VMware View Agent Configuration>Persona Management.
  8. Under Roaming & synchronization, edit Persona repository location and enter the file server path where the profiles will be stored.
  9. Under Folder Redirection, you can redirect any of the folders listed.
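As an illustration of steps 1 to 3, the folder and its hidden '$' share could be created from the command line on sg-fs01 as below.  This is only a simplified sketch; the actual NTFS permissions should follow the deployment guide, and the share name Profiles$ is my own choice:

```bat
rem Create the profile folder and share it with a hidden ($) name
mkdir D:\win7\profiles
net share Profiles$=D:\win7\profiles /GRANT:Everyone,FULL

rem Tighten the NTFS permissions (simplified example only --
rem use the ACLs from the Persona Management Deployment Guide)
icacls D:\win7\profiles /inheritance:r /grant "Administrators":(OI)(CI)F "Domain Users":(OI)(CI)M
```

The repository location in step 8 would then be a UNC path to this share, e.g. \\sg-fs01\Profiles$, with Persona Management creating the per-user folders underneath it.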


Monday, November 21, 2011

ThinApp: OptimizeFor

I came across this blog post on the OptimizeFor variable and which section of package.ini it should be placed in.  Is it [Compression] or is it [BuildOptions]?

The ThinApp 4.6.1 manual, page 85, as referred to in the comments, clearly states it can be placed in either section.  However, the VMware blog itself states otherwise: that this was an error and it must be placed in the [Compression] section.  No one commented further.

With the launch of ThinApp 4.7, I referred to the ThinApp 4.7 Package.ini Parameters guide; on page 48, the scenario given this time places it in the [BuildOptions] section.

This got me very curious.  Is this a mistake carried over in the documentation from 4.6.1 to 4.6.2 and 4.7, or did the blog make a mistake?

So I decided to run this on ThinApp 4.7 and try it out, checking the results from the Windows path each build was saved to:
OptimizeFor=Memory worked in both [Compression] and [BuildOptions].
OptimizeFor=Disk worked in both as well.


The result?  Put it in either section and it will work!


For those who are not clear on this variable: with OptimizeFor=Memory, ThinApp keeps all the files in uncompressed format for loading performance, which reduces the start-up time.


As for OptimizeFor=Disk, it instructs ThinApp to save disk space.  Less disk space is used because a compressed format is applied, so loading performance will not be as good: the files must first be decompressed before the app loads, increasing the start-up time.
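For reference, this is the kind of package.ini I tested, a minimal sketch showing the parameter dropped into either section (only one placement is needed; the commented lines show the alternative):

```ini
; Placement 1: with the compression settings
[Compression]
OptimizeFor=Memory   ; files stored uncompressed, faster start-up

; Placement 2: with the build options -- works equally well
;[BuildOptions]
;OptimizeFor=Disk    ; files stored compressed, smaller package, slower start-up
```

Either way, rebuild the package and compare the output size and launch time to see the effect.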

Update 1:  It also works on ThinApp 4.6.2

Thursday, November 17, 2011

vStorage API: VADP VAAI

I came across some questions from people who, like me, were confused by the new terms VADP and VAAI, or vStorage API.

What are these?  How are they related?  The letters do not tell you much, so I am writing an explanation here.

vStorage APIs, i.e. application programming interfaces, are sets of rules and interfaces made available for anyone, including vendors and partners, to use and integrate in order to deliver the functions the API provides.

In this case, the API has two components:
  1. Data Protection
  2. Array Integration
Do not get confused: these two do different things and are meant for different components.  Their only similarity is that both let the storage subsystem take over work to lessen the load on the hosts.

Let's talk about the two.

vStorage API for Data Protection (VADP)
In layman's terms, this is the API mainly used by backup vendors to integrate into their software to provide seamless, LAN-free backup of VMs, similar to VMware Consolidated Backup (VCB).  However, VCB ended its product line at 1.5 Update 2, and we are encouraged to use whatever is provided by the backup vendors.

With VADP, backup software can offload the load created by backing up VMs and have it handled by the storage subsystem.  This not only lets the host run its normal tasks without affecting its performance, it also performs backups using VMware snapshots without any agent installed in the VM.  This of course requires the storage to support the feature.

The advantage of VADP over VCB is that you do not have to install VCB on a separate machine, and there is no complicated setup and configuration to get VCB working with your backup software.

A more detailed explanation can be found in the VMware KB on VADP.

vStorage API for Array Integration (VAAI)
This was explained in my earlier post.  Basically, in short and layman's terms, VAAI allows storage vendors to integrate with vSphere so their arrays can handle the load whenever a storage-related function is invoked.

The VMware KB also has a very detailed explanation of VAAI.

Wednesday, November 16, 2011

vSphere: iSCSI Multipath

Once again, I have just learned about iSCSI multipathing and tested it out in my home lab, using Openfiler as my iSCSI virtual SAN with nested ESX 4.1 and ESXi 5.0.

The reason I took such a combination of different hypervisors is to show how this can be done on each.  In vSphere 5, i.e. ESXi 5.0, there is a GUI to perform the MPIO binding, unlike vSphere 4.x where it has to be done via the command line.  Even changing the MTU value has been made easy in vSphere 5.

You can also load balance by changing the path selection policy to Round Robin.

vSphere 5


vSphere 4

Update: After setting up the multipath, reboot the ESX host and the paths will be shown correctly.

iSCSI binding Commands
Changing a vSwitch to MTU 9000 (optional)
>esxcfg-vswitch -m 9000 vSwitch0

Add a vmknic to a port group with MTU 9000
>esxcfg-vmknic -a -i 172.16.200.91 -n 255.255.255.0 -m 9000 "Port Group"
Where "Port Group" is the name of the port group.

To view your vmknics for iSCSI 
>esxcfg-vmknic -l

My iSCSI adapter is vmhba33.  My vmknics are vmk1 and vmk2.
To bind each vmknic to the adapter
>esxcli swiscsi nic add -n vmk1 -d vmhba33

>esxcli swiscsi nic add -n vmk2 -d vmhba33

 
To remove the vmknics, make sure iSCSI is disabled and no datastores are connected.
A reboot afterwards is recommended.
>esxcli swiscsi nic remove -n vmk1 -d vmhba33
>esxcli swiscsi nic remove -n vmk2 -d vmhba33 

To see the vmknics associated with your adapter 
>esxcli swiscsi nic list -d vmhba33
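For completeness, the swiscsi namespace above is the vSphere 4.x syntax.  On ESXi 5.0 the equivalent binding is done under the iscsi namespace; a sketch, assuming the same vmk1/vmk2 and vmhba33 names as above:

```shell
# ESXi 5.0: bind the VMkernel NICs to the software iSCSI adapter
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33

# List the port bindings on the adapter
esxcli iscsi networkportal list --adapter vmhba33

# Remove a binding
esxcli iscsi networkportal remove --nic vmk1 --adapter vmhba33
```

The same binding can also be done from the GUI in ESXi 5.0, as mentioned above.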


References:
Changing MTU for vSwitch and vmk
http://www.gavinadams.org/blog/2010/07/19/esxi-41-and-the-9000-byte-mtu-on-vmk0/
vSphere 4.x
http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html
http://www.yellow-bricks.com/2009/03/18/iscsi-multipathing-with-esxcliexploring-the-next-version-of-esx/
http://yourmacguy.wordpress.com/2009/11/09/vsphere-iscsi-multipathing/

vSphere 5
http://blogs.vmware.com/vsphere/2011/08/vsphere-50-storage-features-part-12-iscsi-multipathing-enhancements.html

Friday, November 4, 2011

vSphere 5: vMotion with Multiple nics

The below is a good comparison of Hyper-V Live Migration versus vMotion.  With multiple NICs supported for vMotion in vSphere 5, it is no longer a constraint.


Performance of vMotion comparison with Hyper-V Live Migration:Virtual Reality

With the new vSphere 5, multiple NICs can be used by the VMkernel for vMotion: up to 16 NICs on 1GbE links and up to 4 on 10GbE links.

Sadly I am unable to do a demo of this in my home lab, since where on earth can I get a 10GbE link?  But anyway, I would like to point out certain considerations when planning vMotion with such links.

For 1GbE links, you can bundle up multiple port groups for vMotion.  I was totally confused by this, and after watching this video I got a clearer picture; however, my next question arose.

How many vMotion port groups can I create?  Well, the answer is simple: up to 16.  I was still not really clear, so does it tally with the number of NICs used?

OK, here is the simple explanation if you were confused like me.  Say you bundle up 8 NICs for vMotion: you can create up to 8 port groups, however you arrange them.  You can of course use fewer, but the maximum is 8 port groups since you only have 8 NICs.

That is one consideration: do you really need that many?  Yes, having more means vMotion can be really fast, but do note that more activity on the NICs also means more CPU resource is required.

The second thing to note: what if I use 10GbE?  Well, if you have the luxury of using that, it would be good.  In fact, you do not have to create multiple port groups, since the bandwidth would be good enough.

But hold on, what if I want it even faster?  You can actually do that, but busy 10GbE uplinks also mean a lot of computing resource will be used.  Can you take that?  So I am not going to suggest doing that, but it is all about your design.

Update
Lastly, why does vMotion become faster when you have more VMkernel port groups for it?  The reason is that in vSphere 5, with the support of multiple NICs, you can have multiple port groups for vMotion.  This also means you have multiple vMotion initiators when a vMotion is triggered.  The advantage is that vMotion traffic is now spread across these initiators, which helps improve vMotion performance.

The amount of traffic in this case applies to just a single vCPU.  Ever thought that in future, when SMP is possible, the traffic will be much higher and the reason for multiple NICs will be even more visible?

Also note that vMotion traffic is often high while it is running, so it is advisable to avoid sharing the NIC with FT, which, unknown to many, produces pretty high bandwidth too.  I suggest having a separate NIC for FT rather than sharing it with something else.

So when planning your design, two things to bear in mind:
  1. 1GbE or 10GbE ports for the VMkernel?  Weigh the cost and the benefits.
  2. One port group or multiple port groups?  Consider the computing requirements, especially when using 10GbE.  My suggestion: have more for 1GbE, but perhaps stick with just one for 10GbE.
