Guest blogger Terry Lynch kicked off his TechEd Australia 2012 experience with a deep dive into new technologies for private cloud, virtualisation and storage. It’s safe to say he’s pumped about the potential for Hyper-V.
I’ve been at TechEd for only one day now and my head is already swimming with ideas around transforming my company’s internal environment and bringing fantastic new features to our customers. Microsoft does a very good job of drumming up enthusiasm for its products and with the features being touted, it’s not hard to get caught up in the excitement.
On Wednesday I mainly concentrated on virtualisation topics, with sessions revolving around the new features of Hyper-V, virtual machine storage migration and some high-level coverage of the new storage system in Windows Server 2012.
The Road To The Cloud
The driving force behind Windows Server 2012 is the idea that infrastructures should be scalable across the three cloud locations of private, partner and public. Private clouds are in-house servers hosting virtual machines; partner clouds are outsourced solution providers running their own Hyper-V servers to host your infrastructure; and the public cloud is hosted by Microsoft on its Azure platform. Each of these levels offers a different trade-off between scalability, support and customisation: private gives you only in-house support but the most customisation, partner offers strong support, and public offers practically limitless resources.
The way we get ourselves to this state of flexible scalability is by virtualising everything, and Microsoft means everything. Improvements in the Hyper-V service allow virtual machines with specifications to satisfy any requirement: hosts scale to 320 logical processors and 4TB of physical memory, a single VM supports up to 64 virtual processors and 1TB of RAM, and the new VHDX format allows virtual disks up to 64TB. The ability for Hyper-V to hand copy operations directly to the storage array using Offloaded Data Transfer (ODX) means, Microsoft claims, that 99 per cent of the world's SQL servers could quite happily exist within a virtualised environment. And the best part about this? Microsoft's Hyper-V Server 2012 is completely free.
Hyper-V Server can be downloaded from Microsoft's website and installed on servers, giving them all the features of Hyper-V without the licensing cost usually associated with a host operating system. The server can then be managed through PowerShell, or remotely using the Hyper-V Manager MMC or Server Manager. With Hyper-V in Server 2012 delivering performance close to a bare-metal install, the benefits of virtualising everything now far outweigh the drawbacks.
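As a rough sketch of what that remote PowerShell management looks like, the Hyper-V module lets you list and create VMs on a headless Hyper-V Server host. The host name, VM name and paths below are made up for illustration:

```powershell
# List the VMs on a remote Hyper-V Server 2012 host (HV01 is a hypothetical name)
Get-VM -ComputerName HV01

# Create a new VM with 2GB of startup memory and a 60GB dynamic VHDX, then start it
New-VM -ComputerName HV01 -Name "TestVM" -MemoryStartupBytes 2GB `
       -NewVHDPath "D:\VMs\TestVM.vhdx" -NewVHDSizeBytes 60GB
Start-VM -ComputerName HV01 -Name "TestVM"
```

The same cmdlets work locally on a full Server 2012 install; only the `-ComputerName` parameter changes.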
What’s Next In Virtualisation
The new virtualisation features in Windows Server 2012 really bring Microsoft up to the competition's standard and allow administrators to seriously consider Hyper-V as a platform for their virtualised environment.
Hyper-V live migration allows virtual machines to be moved between host servers or within clusters with effectively zero downtime. The difference between "high availability" and "continuous availability" sounds obvious enough, but it was only when a live demo showed a migrating machine dropping just a single ping that I realised how useful and impressive this feature really is. Hyper-V also cleans up after itself, removing the old virtual hard disk and the old entry in the virtual machine manager for you.
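The storage migration mentioned above can also be driven from PowerShell with a single cmdlet; the VM name and destination path here are hypothetical:

```powershell
# Move a running VM's virtual hard disks and config to a new path with no downtime
Move-VMStorage -VMName "TestVM" -DestinationStoragePath "E:\VMs\TestVM"
```

Hyper-V mirrors writes to both locations during the copy, then deletes the source files once the move completes.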
As well as allowing live migrations of VMs, Microsoft has also introduced the concept of "shared nothing" migrations, letting you migrate a VM from one server to another with nothing but a network cable (or a wireless network, if you really want to show off). No shared storage, no pre-created storage on the destination server and no special configuration are needed to make this work; the only requirements are that both servers are joined to the domain, both have live migration enabled and both have network connectivity to each other. Simple.
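A minimal sketch of a shared-nothing migration in PowerShell, assuming two domain-joined hosts (the names and paths are invented for the example):

```powershell
# On both hosts: enable live migration and pick an authentication method
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

# From the source host: move the VM and its storage across the network to HV02
Move-VM -Name "TestVM" -DestinationHost "HV02" -IncludeStorage `
        -DestinationStoragePath "D:\VMs\TestVM"
```

The `-IncludeStorage` switch is what makes this "shared nothing": the VHDs stream over the wire along with the running memory state.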
Improvements in Hyper-V replicas allow you to create a failover copy of your VM on another host anywhere in your environment, whether across the LAN to a different floor or across the WAN on another continent. Hyper-V replicas can be set up through a wizard interface, which deploys the replica to another host or seeds it to a USB drive ready to manually import on the replica destination server, saving bandwidth on the initial synchronisation. Once the replicas are set up and recognise each other's existence, failover testing is a right-click away, and within seconds you can see the results of these tests.
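The same replica workflow is scriptable; a sketch with hypothetical host and VM names (the replica server must already be configured to accept replication):

```powershell
# On the primary host: replicate TestVM to a DR host over Kerberos on port 80
Enable-VMReplication -VMName "TestVM" -ReplicaServerName "HV-DR01" `
                     -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "TestVM"

# On the replica host: run a test failover without disturbing the primary VM
Start-VMFailover -VMName "TestVM" -AsTest
```

The `-AsTest` switch spins up a disconnected copy of the replica, which is the PowerShell equivalent of the right-click test the demo showed.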
Networking in Windows Server 2012 Hyper-V builds on the technology in older versions and adds great new capabilities for monitoring and multi-tenancy environments. The extensible virtual switch allows extensions to be installed in the networking stack for additional functions such as traffic monitoring, firewalling and modifying traffic as it passes from a VM guest through the network stack. The new Hyper-V Network Virtualisation moves beyond VLANs and PVLANs, which the 12-bit VLAN ID limits to 4094 usable networks, a ceiling that hardware often doesn't even reach. Entire networks, even ones using the same subnet and IP address range, can exist on a single server while remaining completely oblivious to each other's existence: perfect for multi-tenancy environments run by hosted infrastructure providers.
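Switch extensions can be inspected and toggled from PowerShell too; a small sketch, assuming a virtual switch named "External" (the built-in NDIS capture extension ships with Hyper-V):

```powershell
# List the extensions installed on a virtual switch, then enable the capture extension
Get-VMSwitchExtension -VMSwitchName "External"
Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft NDIS Capture"
```

Third-party monitoring and firewalling extensions plug into the same stack and are managed with the same cmdlets.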
ReFS And The New Storage System
With Windows Server 2012, Microsoft has introduced ReFS, its successor to NTFS. Along with a new system for managing storage pools and drives, Windows Server 2012 brings a new way of thinking about RAID and data storage.
In file and storage management within Windows Server 2012 and Windows 8, you can now create storage pools from physical disks, whether they are internal SAS or SATA drives, USB disks or JBOD enclosures. Windows Server sees these drives and collects them into a pool, which can be carved into a storage space and then presented to a VM or mounted on the host OS.
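Pool creation is one cmdlet once you know which disks are eligible; the pool name below is hypothetical:

```powershell
# Collect every disk that is eligible for pooling and create a pool from them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
                -StorageSubSystemFriendlyName "Storage Spaces*" `
                -PhysicalDisks $disks
```

`Get-PhysicalDisk -CanPool $true` filters out the boot disk and anything already in a pool, so you can't accidentally grab a disk Windows is using.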
Microsoft now actually recommends that you stop using a hardware RAID card for internal storage: tests have shown very little speed loss in Windows Server 2012's storage management compared to a hardware solution, and being able to provision storage from one console, with the same interface across all servers, makes life much easier for server admins.
After collecting your drives into a pool, you can create a storage space, selecting the level of redundancy and performance you would like and optionally allocating a hot spare. If one of the physical drives fails, the spare drive will immediately power up and begin rebuilding the level of redundancy you selected for your storage space.
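A sketch of that step in PowerShell, assuming the hypothetical "Pool1" from above; the disk and space names are invented:

```powershell
# Mark one pooled disk as a hot spare, ready to take over on failure
Set-PhysicalDisk -FriendlyName "PhysicalDisk4" -Usage HotSpare

# Carve a mirrored storage space out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" `
                -ResiliencySettingName Mirror -Size 500GB
```

`-ResiliencySettingName` also accepts `Simple` (striping, no redundancy) and `Parity`, which are the Storage Spaces equivalents of the familiar RAID levels.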
Once you have your storage spaces, you can finally present them to your virtual machines by creating a volume. One of the neat new things about volumes in Windows Server 2012 is the ability to "thin provision" them, setting a capacity for the drive that can be larger than the physical space actually available in the storage space. This lets you create a server with a 5TB volume that may really be running on only 700GB of physical capacity. As you near the physical limit, you simply add disks to the storage pool to extend the space available.
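Thin provisioning and the later pool expansion look like this in PowerShell, continuing with the hypothetical "Pool1":

```powershell
# Create a 5TB thin-provisioned mirrored space on a pool with far less real capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinSpace" `
                -ResiliencySettingName Mirror -ProvisioningType Thin -Size 5TB

# When physical capacity runs low, add more eligible disks to the pool
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" `
                 -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
```

The `-ProvisioningType Thin` switch is the whole trick: space is only consumed from the pool as data is actually written.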
The ReFS file system brings new features like data integrity, indexing and massive scalability to Windows Server 2012. Crazy numbers such as 18 quintillion files within the same directory and individual file sizes measured in petabytes mean ReFS is very well future-proofed, and with resiliency against data corruption built into ReFS using checksums, your data is much safer stored on a drive formatted this way.
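Formatting a volume with ReFS is a one-liner; the drive letter is hypothetical, and the integrity-streams parameter applies only to ReFS volumes:

```powershell
# Format a volume with ReFS and turn on checksummed integrity streams for new files
Format-Volume -DriveLetter R -FileSystem ReFS -SetIntegrityStreams $true
```

With integrity streams on, ReFS validates file data against its checksums on read and, on a mirrored storage space, can repair a bad copy from the good one automatically.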
ReFS works well for file storage drives, but it is currently not supported by Exchange 2010 and is not recommended for SQL databases. NTFS features like individual file encryption (EFS), file compression and disk quotas are also not supported.
Visit Gizmodo’s TechEd 2012 Newsroom for all the news from the show.
Terry Lynch is covering Windows Server 2012 for Gizmodo using his ASUS Zenbook WX32VD.