A primary goal of desktop consolidation is to reduce the Total Cost of Ownership (TCO) of desktop deployment for employees. One way to reduce such costs is to increase the sharing of resources between separate desktop users, and one way to increase resource sharing is to centralize resources in the data center.
The naïve approach would be to try to centralize as much as possible. A typical example would be a Server Based Computing environment, where standard PCs are replaced by thin clients and Terminal Servers deliver desktop functionality on shared servers in the data center.
Resource sharing, however, also has its penalties. As an example, some applications are not suitable for a Terminal Server environment, resulting in expensive upfront testing and in a hybrid desktop deployment architecture. As another drawback, one has to consider the possibility that if one application of one user crashes, the whole server environment might go down, hindering the productivity of all users. Finally, resource sharing and resource isolation have a major effect on data security.
Scalable resource sharing
So there is a trade-off between cost reduction resulting from resource sharing and cost increase caused by the very same resource sharing. The optimal case obviously depends on the particular situation. Scaling up from minimum to maximum resource sharing, the following desktop delivery possibilities appear:
- Laptops.
Laptops are increasingly used as their prices keep going down. Laptops, however, share even fewer resources than desktop PCs, as the latter at least share the LAN and WAN infrastructure of the company, while laptops are also used outside the firewall.
- Desktop PCs.
Desktop PCs are the backbone of the traditional desktop delivery model and still constitute the dominant delivery method of today.
- Blade PCs: data center hosted blade PCs with thin clients.
With a blade PC, the computer part of the desktop (i.e. CPU, memory, hard disk) is moved to the data center. Resource sharing however is still rather poor and might cover power supply, cooling and storage.
HP’s BladeSystem is a typical example of this species.
- Hardware virtualization: thin clients with Virtual Desktop Infrastructure (VDI) using server-hosted hardware virtualization.
In this model, the desktop is virtualized and runs in its own virtual machine on a server in the data center. Resource sharing is now extended to the server hardware. Each desktop user, however, still has a dedicated operating system instance, which could be Windows Vista for one user and Linux for another.
A typical example would be VMware’s VDI with ESX on the server side.
- Operating system virtualization: thin clients with VDI using server-hosted operating system virtualization.
Now the virtual desktop runs in an operating system virtualization environment, acting as a kind of container within the server operating system. Resource sharing is further increased as users actually share the operating system. As a result, all users have to use the same operating system and a mix of e.g. Windows and Linux is not supported.
Parallels’ Virtuozzo would be a typical example of the operating system virtualization side.
- Terminal Services: thin clients with Server Based Computing using Terminal Services.
Instead of simulating a separate desktop in the server, this model builds on the multi-user functionality of the server operating system. Therefore, resource sharing is further increased.
The best-known example would be Citrix Presentation Server (aka XenApp).
- Ultra-thin clients with Server Based Computing using Terminal Services.
In all of the previous thin client models, the thin client uses an embedded operating system to support remote access via e.g. Microsoft’s RDP protocol or Citrix’s ICA. Resource sharing can be increased further by centralizing the RDP/ICA communication handling, leaving a really ultra-thin client at the user side. By the way, such an ultra-thin client can be used in any of the previous models, and not only in combination with Terminal Servers.
A typical example would be Sun Ray clients with Sun Ray server software.
Effects on capital expenditure and first-time deployment:
- The mass market effects of laptops and PCs make them relatively inexpensive compared to data center hardware.
- Typically, Terminal Servers support 5 to 10 times the number of desktops compared to server-hosted hardware virtualization. Server-hosted operating system virtualization again supports 5 to 10 times the number of desktops compared to hardware virtualization.
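As a back-of-the-envelope illustration, these consolidation ratios can be turned into a simple capacity estimate. All the per-server densities below are hypothetical assumptions, chosen only to reflect the 5-to-10-times relationships mentioned above; they are not vendor figures:

```python
# Rough capacity sketch for the consolidation ratios discussed above.
# The per-server densities are illustrative assumptions, not vendor figures.

def servers_needed(desktops: int, desktops_per_server: int) -> int:
    """Ceiling division: a partially filled server still counts as a whole one."""
    return -(-desktops // desktops_per_server)

# Assumed densities (desktops per server), using ~7x as a midpoint of 5-10x:
densities = {
    "server-hosted hardware virtualization": 20,   # assumed baseline
    "operating system virtualization": 20 * 7,     # ~5-10x the HW-virt baseline
    "Terminal Services": 20 * 7,                   # ~5-10x the HW-virt baseline
}

desktops = 1000
for model, per_server in densities.items():
    print(f"{model}: {servers_needed(desktops, per_server)} servers "
          f"for {desktops} desktops")
```

With these assumed numbers, 1000 desktops would require 50 servers under hardware virtualization but only 8 under the denser models; changing the baseline density shifts the absolute counts but not the relative picture, which is what drives the capital expenditure differences.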
- The cost of all centralized elements increases as a result of failover and disaster-recovery measures.
Effects on operational expenditures:
- Support costs constitute the major part of operational expenses.
- Energy consumption is of growing interest in the context of operational expenses.
- The more centralized – yet isolated – desktop architectures typically provide the best data security. Generally, this effect is difficult to quantify as an operational expense.
How much centralization is good enough? As always, the answer is “it depends”. Important questions to be answered are: how much freedom do we need to allow end users, how important is data security, what level of availability is acceptable, are users mobile and partly offline, etc.
So far, only physical resource sharing has been considered. To make things more complicated, there is also a varying degree of non-physical resource sharing in the mix. Examples are software licenses, O/S and application pooling and application streaming, which all contribute to resource sharing with an effect on both upfront investment and support costs.
The optimal desktop delivery architecture will probably be a mix of multiple technologies for the time being. Unfortunately, this will again increase the costs.