There are several common reasons enterprises choose to adopt these technologies, including simplified management, security and compliance, and enabling mobile work styles.
A structured project approach, combined with the right industry tools, helps organizations that are investigating, testing, migrating to, or already running virtualized desktop environments to design, build, and manage well-performing centralized desktop infrastructures at the lowest possible cost.
Some of these decisions may have been made already if you have standardized on particular vendors for components.
In either case, it’s a complex stack of components, and changing any one of them may impact the performance of your solution.
Customers may have standardized on one vendor for their server workloads.
This selection may be impacted by the support, performance, application compatibility, or functionality available in various versions.
For example, changing the version of Microsoft Office from 2010 to 2013 can result in up to a 20% decrease in user density.
Other changes, like excluding files in the base image from antivirus scans, may significantly increase density.
It’s important to keep the results from previous tests and configurations so you can compare measurements and determine whether a change adds value. With test results from both before and after the change, the next step is to decide whether to deploy the configuration change to the production environment.
If the tests indicate that the change will result in improved performance or density, the decision to deploy is an easy one.
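As a minimal sketch of that before-and-after comparison (the file layout, the user_density field, and the 5% threshold are all hypothetical conventions, not the output of any particular load-testing tool):

```python
import json

def load_results(path):
    # Each saved run is assumed to be a JSON file holding the metrics
    # recorded for that configuration, e.g. {"user_density": 96}.
    with open(path) as f:
        return json.load(f)

def should_deploy(baseline, candidate, min_gain_pct=5.0):
    # Recommend deployment only if the candidate configuration improves
    # user density by at least min_gain_pct percent over the baseline.
    base = baseline["user_density"]
    cand = candidate["user_density"]
    gain_pct = (cand - base) / base * 100.0
    print(f"Baseline: {base} users, candidate: {cand} users ({gain_pct:+.1f}%)")
    return gain_pct >= min_gain_pct

baseline = load_results("results/baseline.json")        # previous test run
candidate = load_results("results/av-exclusions.json")  # run with the change applied
if should_deploy(baseline, candidate):
    print("Measured improvement: schedule the change for production.")
else:
    print("No clear gain: keep the current configuration.")
```

Keeping every run's results on disk this way also means later changes can be compared against the full history, not just the most recent baseline.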
While host performance counters can indicate how much load the hardware is under, there is no direct correlation between hardware load and end-user experience. For example, monitoring might report that CPU usage is at 100%, yet the users currently on the system may not notice any impact.
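A small illustration of that gap, using made-up sample data and an assumed 500 ms response-time target (neither comes from any specific monitoring product):

```python
# Paired samples taken at the same timestamps: host CPU utilization (%)
# and the application response time (ms) that users actually experienced.
samples = [
    (55, 210), (70, 215), (85, 220), (95, 230),
    (100, 235), (100, 240), (100, 238), (98, 236),
]
THRESHOLD_MS = 500  # assumed acceptable response-time target

pegged = sum(1 for cpu, _ in samples if cpu >= 100)
slow = sum(1 for _, ms in samples if ms > THRESHOLD_MS)

print(f"Samples with CPU at 100%: {pegged}")
print(f"Samples exceeding {THRESHOLD_MS} ms: {slow}")
# The host reports full CPU saturation three times, yet no sample
# breaches the response-time target: hardware counters alone do not
# tell you what the end user is experiencing.
```

This is why user-experience measurements, not just hardware counters, should drive decisions about capacity and configuration changes.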