Guest blogger Terry Lynch is impressed with the potential of Hyper-V replicas for data recovery. Also: how the Department of Defence virtualised its desktop infrastructure.
Day 2 of TechEd Australia 2012. After absolutely gorging myself on sessions yesterday, I woke up this morning from the deepest sleep I think I've ever had. There was no time to linger, though, with an 8:15 session on Hyper-V replicas to make sure I was wide awake.
Server 2012 opportunities for solutions providers
Some of the new features in Windows Server 2012 and the Hyper-V management service give IT service providers a fantastic way to manage their customers' networks remotely, use their own data centre as a disaster recovery site, or even completely virtualise a customer's environment within a partner cloud hosted and managed by the solutions provider.
Server Manager in Windows Server 2012 brings a new dashboard for checking the health of all your servers at a glance, remotely from a Windows 8 client machine. Microsoft's Best Practices Analyser scans the roles installed across your network and shows the results in a centralised location, with any issues explained in clear language along with guidance on how to resolve them yourself. Servers on your internal network or in customer networks (over a VPN or other protected connection) can all be seen from the one interface.
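The same scans can also be run from PowerShell rather than clicking through Server Manager. As a minimal sketch using the Best Practices Analyser cmdlets that ship with Server 2012 (the model ID shown is the Hyper-V one; swap in the model for whichever role you want to check):

```shell
# Run a Best Practices Analyser scan against the Hyper-V role
Invoke-BpaModel -ModelId Microsoft/Windows/Hyper-V

# Pull the results and keep only the items that need attention,
# including the plain-language resolution text mentioned above
Get-BpaResult -ModelId Microsoft/Windows/Hyper-V |
    Where-Object { $_.Severity -ne "Information" } |
    Select-Object Severity, Title, Resolution
```

`Get-BpaModel` will list the model IDs available on a given server if you're not sure what's installed.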
Hyper-V Replica has been designed to scale from huge enterprise datacentres down to small businesses running on a single server, as Ben Armstrong, senior program manager lead at Microsoft, explained. Small businesses suffer on average six downtime periods per year, costing roughly $75,000 in lost productivity, which should set alarm bells ringing for every SMB owner. Worryingly though, only 50% of SMBs have any real disaster recovery plan in place.
With Hyper-V Replica in Windows Server 2012, Microsoft expects service providers to offer a live replica target that small businesses can replicate their servers to, letting them fail over to the service provider's datacentre for either a planned outage or disaster recovery. Replication traffic is optimised for WAN links, and seeding the initial copy of a VM to a USB drive is a supported option, minimising disruption to the internet connection for small companies that can't afford multiple links.
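For the USB seeding option, the flow looks roughly like this in PowerShell. The cmdlets are the Hyper-V Replica ones in Server 2012, but the VM name, hostname and paths here are hypothetical placeholders:

```shell
# On the small business's server: point replication of the VM at the
# service provider's replica server (hypothetical hostname), using
# certificate authentication over HTTPS for traffic across the internet.
# $thumbprint would hold the thumbprint of a certificate both sides trust.
Enable-VMReplication -VMName "SBS-Server" `
    -ReplicaServerName "replica.provider.example" `
    -ReplicaServerPort 443 `
    -AuthenticationType Certificate `
    -CertificateThumbprint $thumbprint

# Write the initial copy out to a USB drive instead of over the wire
Start-VMInitialReplication -VMName "SBS-Server" -DestinationPath "E:\Seed"

# At the provider's datacentre, once the drive arrives, import the seed;
# only changes made since then need to cross the internet link
Import-VMInitialReplication -VMName "SBS-Server" -Path "E:\Seed"
```

From that point replication is incremental, which is what makes the scheme viable on a single ADSL link.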
A partner cloud builds on this concept of hosted virtual machines by completely absorbing the customer's network into your data centre, with servers, virtual desktops and applications all running within your environment. Multi-tenancy improvements in Windows Server 2012 let you host separate customers on the same hardware with full isolation between their workloads. Network virtualisation allows completely separate networks, even when they run on the same subnet or IP addresses, while the new additions to resource metering let you manage QoS for dedicated resources and easily calculate cost-based storage, memory or network utilisation for your billing cycle.
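The resource metering side of that billing story is exposed through a handful of Hyper-V cmdlets in Server 2012. A minimal sketch, with a hypothetical tenant VM name:

```shell
# Turn on resource metering for a tenant's VM; Hyper-V then accumulates
# CPU, memory, disk and network figures while the VM runs
Enable-VMResourceMetering -VMName "Tenant1-Web"

# At the end of the billing cycle, read the accumulated usage
Measure-VM -VMName "Tenant1-Web" |
    Select-Object VMName, AvgCPU, AvgRAM, TotalDisk

# Zero the counters so the next billing cycle starts fresh
Reset-VMResourceMetering -VMName "Tenant1-Web"
```

The figures survive VM migrations between hosts, which matters when tenant workloads move around a multi-tenant cluster.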
Best practises for deploying a virtualised desktop infrastructure
Who better to learn about the potential pitfalls of virtualising your desktop infrastructure from than the Department of Defence? A project undertaken last year as a pilot for its "next generation desktop" environment was detailed by Darren Milne, solutions architect at Thales.
The scale was daunting: 120,000 user accounts, 70,000 concurrent users on an average day, and a peak of 450 logon requests per minute across 400 sites. Add the requirement to keep existing legacy applications, which were bespoke, could not be upgraded, and take the phrase "mission critical" literally, and scalability and flexibility became the key considerations in designing a network infrastructure capable of dealing with this complexity.
Using Windows Server 2008 R2, Windows Server 2012, App-V for application virtualisation and System Center 2012 on the server side, with Windows 7 Embedded or Enterprise on the client side, the DoD came up with a framework for the pilot scheme that worked for all test sites, deploying client images over the network or by USB drive.
The biggest issue faced in this pilot was applications. Virtualising thousands of applications with complex requirements, which could not be upgraded or taken down for maintenance, proved to be a challenge. Assessing each application for barriers to virtualisation and choosing a deployment method accordingly provided an answer for every one, with the worst-case scenario being a "thick client local install". Fortunately, only one application needed that level of installation.
After completing the pilot successfully, the lessons learned were that automating processes and having a zero-touch deployment method helped hugely with rapidly deploying machines to remote environments with fewer issues. Performance was improved by reducing shared storage and ensuring that a user's data and VM lived on the same hardware, or at least in the same datacentre, which made the user experience in the virtualised environment much more responsive and fluid.
Visit Gizmodo’s TechEd 2012 Newsroom for all the news from the show.
Terry Lynch is covering Windows Server 2012 for Gizmodo using his ASUS Zenbook WX32VD.