<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Andrew Morgan &#187; XenServer</title>
	<atom:link href="http://andrewmorgan.ie/tag/xenserver-2/feed/" rel="self" type="application/rss+xml" />
	<link>http://andrewmorgan.ie</link>
	<description>Grumpy ramblings</description>
	<lastBuildDate>Fri, 30 Jun 2017 09:24:25 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0</generator>
	<item>
		<title>On IOPS, shared storage and a fresh idea. (Part 3) tying it all together in the stack</title>
		<link>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-3-tying-it-all-together-in-the-stack/</link>
		<comments>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-3-tying-it-all-together-in-the-stack/#comments</comments>
		<pubDate>Fri, 26 Oct 2012 14:08:13 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Server Based Computing]]></category>
		<category><![CDATA[Server Virtualisation]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[VMware]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[Windows Server]]></category>
		<category><![CDATA[XenApp]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[HyperV]]></category>
		<category><![CDATA[SBC]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[xenapp]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2393</guid>
		<description><![CDATA[Note: This is part three, have a read of part one and two. Hello there, and thank you for dropping back for part 3&#8230; I suppose I should start with the disappointing news that I have yet to test this option for VDI in a box. And despite Aaron Parker&#8217;s suggestions it wasn&#8217;t due to lack of inspiration, it was down to lack of time! This series has gathered a lot of interest from both community and storage vendors alike, and [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright" alt="" src="/wp-content/uploads/2012/10/andrew-logo1.png?w=595" height="76" width="75" />Note: This is part three, have a read of part <a href="http://andrewmorgan.ie/2012/10/05/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/" target="_blank">one</a> and <a href="http://andrewmorgan.ie/2012/10/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/" target="_blank">two</a>.</p>
<p>Hello there, and thank you for dropping back for part 3&#8230;</p>
<p>I suppose I should start with the disappointing news that I have yet to test this option for VDI in a Box. And despite Aaron Parker&#8217;s suggestions it wasn&#8217;t due to lack of inspiration, it was down to lack of time! This series has gathered a lot of interest from community and storage vendors alike, and I feel I should set the record straight before I go any further:</p>
<ol>
<li>This <strong>isn&#8217;t a production idea</strong>; you would be crazy to use it in a live environment.</li>
<li>Throughout this entire project, we&#8217;re focusing on pooled stateless. Stateful desktops would be a separate post entirely.</li>
<li>This wasn&#8217;t an attack on products in this market space, merely a fresh view on an old problem.</li>
<li>If I had the skills or funds necessary to take this project to a production solution, I wouldn&#8217;t have posted it. I would already be hard at work creating a reasonably priced product!</li>
</ol>
<p>Now that my declarations are out of the way, I&#8217;d first like to talk about the moral of the story. This isn&#8217;t an unfamiliar expression:</p>
<p><strong>IOPS mitigation is not about read IOPS it&#8217;s about <span style="text-decoration:underline;">WRITE IOPS</span></strong>!</p>
<p>VMware, Citrix and Microsoft have similar but very different solutions for read IOPS negation. Similar in the sense that they all try to negate storage read IOPS, but the key difference is that XenServer&#8217;s local disk cache, IntelliCache, can cache the majority of reads to local disk (think SSD*) out of the box, without the baked-in 512 MB soft limit that Microsoft Hyper-V and VMware carry respectively.</p>
<p>Long story short, VMware and Microsoft&#8217;s solutions give you roughly 512 MB of recognisable read IOPS negation when enabled but left un-tuned. Of course this value can be tuned upwards, but the low default cache size would suggest, at least to me, that tuning it up will have an upsetting effect on the host.</p>
<p>This, to me, is why IntelliCache has the upper hand in the (value add product) VDI space for read IOPS negation, and Citrix even throw in the hypervisor as part of your XenDesktop licensing, so win-win. But what about those pesky write IOPS?</p>
<p><span id="more-2393"></span></p>
<p><strong>Let&#8217;s look at Citrix for a moment!</strong></p>
<p>If we review <a href="http://andrewmorgan.ie/2012/10/05/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/" target="_blank">article 1</a> and <a href="http://andrewmorgan.ie/2012/10/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/" target="_blank">article 2</a> for a moment, the half-baked idea formed by <a href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a>, <a title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a>, <a href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a>, <a title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a> and me, when combined with Citrix IntelliCache, turned out to be a phenomenal combination for an IOPS avoidance product. But the key point we had in the back of our heads was: Citrix Provisioning Server already has this driver!</p>
<p>Citrix provisioning server has a RAM cache driver, with configurable size baked into the Provisioning Services product. This driver works in a very similar way to the EWF driver, just without the API and flexibility of a Microsoft driver.</p>
<p>There are Citrix customers out there using RAM caching with XenApp whom I spoke to; they have assigned a large cache (8 GB+, with 3-4 GB utilised day to day) to reduce the chance of a spillover, but it could still happen and this is an acceptable risk to their implementation. I struggled to find a XenDesktop customer using this method who would talk about their implementation.</p>
<p>But with XenDesktop in mind, using an overly large cache size to avoid spillover just doesn&#8217;t scale the way a consolidated Server Based Computing model would&#8230; which leaves us back in the same dilemma: I&#8217;d like to cache in RAM, but I don&#8217;t want to risk users losing data when they perform a task we have not factored for.</p>
<p>So with all this in mind, let&#8217;s talk about the final hurdle I faced with this project.</p>
<p><img class="alignright size-thumbnail wp-image-2402" title="BSOD" alt="" src="/wp-content/uploads/2012/10/bsod.jpg?w=150" height="112" width="150" /></p>
<p><strong>RAM cache flood and my half baked solution:</strong></p>
<p>The Microsoft EWF driver suffered the same problem as the Citrix Provisioning Server driver when the cache filled, i.e. it froze or outright bug checked if you tried to reference a file that had been put into the cache before you filled it with other information.</p>
<p><a href="http://info.kraftkennedy.com/blog/bid/102199/Citrix-Provisioning-Server-Understanding-the-Limitations-of-Write-Cache-in-Target-Device-RAM">Jeff Silverman</a> has done a great article on what happens in this event with the Citrix Provisioning Services RAM cache and I urge you to go read it so I don&#8217;t have to repeat it! Go now, I&#8217;ll wait right here for you to get back!</p>
<p><strong>Read that article?</strong> Ok good, let&#8217;s continue.</p>
<p>To combat this scenario with the Windows EWF driver, I created a very rough-around-the-edges Windows service using polling (Sorry Remko) to check the size of the cache periodically via the <a href="http://msdn.microsoft.com/en-us/library/ms933209%28v=winembedded.5%29.aspx">EWFAPI</a> and write the cache out to disk.</p>
<p>The function in the EWFAPI called <a href="http://msdn.microsoft.com/en-us/library/aa940887%28v=winembedded.5%29.aspx">EwfCommitAndDisableLive</a> allowed me to dump the RAM cache to disk and then subvert the cache, going straight to disk once this event had occurred. Using this routine, the RAM cache simply disables itself and allows pass-through to disk from that point on.</p>
<p><a href="/wp-content/uploads/2012/10/ewfdriverbasicprincipal.png"><img class="aligncenter" title="EWFDriverBasicPrincipal" alt="" src="/wp-content/uploads/2012/10/ewfdriverbasicprincipal.png" height="244" width="516" /></a></p>
<p>With this in mind, I tuned my service to spill to disk when the cache grew beyond 80% of the RAM left available (i.e. not in use by the operating system when the system booted). This worked well up to a point, but the key failure of this approach became apparent when you opened a large number of applications and they struggled to fit in the space left around the cache.</p>
<p>My second attempt, however, proved much more fruitful: I monitored the free system bytes in memory, and if this figure dropped below a certain threshold the EWF driver began its dump routine to disk. Once the dump routine was complete, the RAM was cleared and the writes had been committed to storage, where the disk continued to be used for the remainder of the session. Once this spill over had occurred, a notification bubble fired in the user session warning the user of the degradation of service and that they should log off, saving work, at the next convenient moment&#8230;</p>
<p><em>Voila! No blue screen, spill to disk and user notification of degradation.</em></p>
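<p>As a rough idea of what that watcher looked like, here is a hedged sketch rather than the actual service code: poll the free physical memory and, once it drops below a floor value, fire the spill routine from the previous sketch and warn the user. The 256 MB floor, the 30 second interval and the device name are illustrative assumptions.</p>
<pre>
// Hedged sketch of the watcher described above, not the original service code.
// The 256 MB floor, the 30 second poll interval and the device name are
// illustrative assumptions; SpillOverlayToDisk() is the routine from the
// earlier sketch.
#include &lt;windows.h>

bool SpillOverlayToDisk(const wchar_t* volumeDeviceName); // see earlier sketch

int wmain()
{
    const ULONGLONG floorBytes   = 256ULL * 1024 * 1024;   // spill threshold
    const DWORD     pollInterval = 30 * 1000;               // poll every 30 seconds

    for (;;)
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        if (GlobalMemoryStatusEx(&amp;ms) &amp;&amp; ms.ullAvailPhys &lt; floorBytes)
        {
            // Free memory is running out: dump the RAM overlay to disk and let
            // writes pass through from here on.
            SpillOverlayToDisk(L"\\Device\\HarddiskVolume1");

            // Warn the logged-on user that service will now degrade; the original
            // service used a notification balloon rather than a message box.
            MessageBoxW(NULL,
                        L"The RAM write cache has spilled to disk. Please save your "
                        L"work and log off at the next convenient moment.",
                        L"Write cache spill over", MB_OK | MB_ICONWARNING);
            break; // once disabled live, the overlay stays off for this session
        }
        Sleep(pollInterval);
    }
    return 0;
}
</pre>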
<p>This wasn&#8217;t foolproof, it was very rough, and it didn&#8217;t handle files being written that were larger than the RAM cache, but in my opinion it satisfied the <strong>biggest fear</strong> and business case against using the Citrix Provisioning Server RAM caching: the RAM cache flood scenario. I was happy with the results and it scaled well to the size of my lab; it showed what is possible with a RAM filter driver and it allowed me to prove my point before I started to poke the hornets&#8217; nest of the storage market. So let&#8217;s park the EWF story for now and focus on my favorite subject, Citrix technologies.</p>
<p><em><strong>Note:</strong> I won&#8217;t be making this service and driver solution publicly available, it&#8217;s not a production ready solution and Microsoft would sue the pants off me. I&#8217;m sure you&#8217;ll understand why, but if you want more information drop me an email.</em></p>
<p>The next parts of this blog are all theoretical, but I know certain people I want to listen <em>are</em> listening (Hi Kenneth, hope you are well :)).</p>
<p><strong>Negating the negated. Let&#8217;s talk about that spill over.</strong></p>
<p>But what about that spill over event? By spilling over from RAM to disk, we change a steady, fairly predictable IOPS load from each desktop into a situation where you might ask: &#8220;couldn&#8217;t a collection of spillovers at the same time cause your storage to become swamped with IOPS?&#8221;</p>
<p>Absolutely, and I&#8217;ve been thinking about this quite a bit&#8230; But there&#8217;s another hidden potential here even in a RAM spill over scenario&#8230;</p>
<p style="text-align:center;"><em>With a little bit of trickery couldn&#8217;t we also negate this spillover event with a simple provisioning job? </em></p>
<p><a href="/wp-content/uploads/2012/10/difdiskcopycreate.png"><img class="aligncenter size-full wp-image-2408" title="difdiskcopycreate" alt="" src="/wp-content/uploads/2012/10/difdiskcopycreate.png" height="272" width="595" /></a></p>
<p>With a bit of time spent thinking about this scenario, this is what I came up with&#8230;</p>
<p>Why doesn&#8217;t the controller for XenApp or XenDesktop (PVS / DDC) copy, or (my personal preference) create, a local blank differencing disk when a VM boots?</p>
<p>The hypervisor could be completely agnostic at this point; we could spill over to local disk, keeping write IOPS completely away from the shared storage, even in a spill over event.</p>
<p>This approach (in this half-baked enthusiast&#8217;s view) would negate the negated&#8230; But don&#8217;t take my word for it, let&#8217;s go through some theoretical solutions.</p>
<p><strong>Solution A: Implementing a spill over routine with the Citrix provisioning server driver.</strong></p>
<p>So with my success with the Microsoft driver, I got to thinking how easy this would be to do utilizing the Citrix Provisioning Services driver. Without access to the code, I&#8217;m going to be making slightly risky statements here, but I have a lot of faith that Citrix could make this happen.</p>
<p>From everyone I have spoken to about this idea, they all see the value in the ability to spill out of RAM&#8230; So Citrix, please make it so. Below are some ideas for deployment methods, assuming Citrix do march down this route, and the pros and cons I see living with each scenario.</p>
<p>Bear in mind, I&#8217;m not a storage guy, a full time developer or a software architect, I&#8217;m just an enthusiast who sees potential, so drop your comments below as to what you think!</p>
<p><span style="text-decoration:underline;"><strong>Machine Creation Services.</strong></span></p>
<p><strong>Idea A: Provisioning services improved RAM caching, Machine Creation services and Intellicache on XenServer.</strong></p>
<p><a href="/wp-content/uploads/2012/10/1-mcs-pvs-driver1.png"><img class="aligncenter size-full wp-image-2406" title="1. MCS &amp; PVS Driver" alt="" src="/wp-content/uploads/2012/10/1-mcs-pvs-driver1.png" height="355" width="595" /></a></p>
<p>Utilizing XenServer and MCS, we could leverage IntelliCache to negate the read IOPS, as we proved in my own EWF driver testing, but still deliver on that spill over mechanism allowing continuation of service.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Read IOPS negation.</li>
<li>RAM caching to remove write IOPS (bar a spill over).</li>
<li>Continuation of service in a cache flood scenario.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Limited to XenServer.</li>
<li>Over provisioning of RAM necessary per desktop.</li>
<li>RAM spillover will result in a large amount of IOPS to the shared storage.</li>
</ul>
<p><strong>Idea B</strong>: <strong>Provisioning services improved RAM caching, Machine Creation services and Intellicache on XenServer&#8230; With local copy!</strong></p>
<p><a href="/wp-content/uploads/2012/10/2-mcs-pvs-local-copy.png"><img class="aligncenter size-full wp-image-2409" title="2. MCS &amp; PVS &amp; local copy" alt="" src="/wp-content/uploads/2012/10/2-mcs-pvs-local-copy.png" height="329" width="595" /></a></p>
<p>Same benefits as previous, but now, we have zero reliance on the shared storage when the VM is up (except for ID disk actions).</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Read IOPS negation.</li>
<li>RAM caching to remove write IOPS.</li>
<li>Uses local resources in a spill over.</li>
<li>Continuation of service in a cache flood scenario.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Limited to XenServer.</li>
<li>Over provisioning of RAM necessary per desktop.</li>
</ul>
<p>So that&#8217;s MCS in a nutshell with per-VM caching. I think this solution has so much potential I can&#8217;t believe it&#8217;s not been done, but I digress. So let&#8217;s park that topic for now and move on to Citrix Provisioning Services.</p>
<p><strong><span style="text-decoration:underline;">Citrix Provisioning Services:</span><br />
</strong></p>
<p>So let&#8217;s look at the &#8220;favorite of many&#8221; technology.</p>
<p><strong>Idea A: Citrix Provisioning Services and Improved RAM caching driver.</strong></p>
<p><a href="/wp-content/uploads/2012/10/pure-pvs.png"><img class="aligncenter size-full wp-image-2407" title="Pure PVS" alt="" src="/wp-content/uploads/2012/10/pure-pvs.png" height="401" width="595" /></a></p>
<p>In a pure Provisioning Services environment, we would force our read IOPS over the LAN instead of the storage protocol, but still deliver a spill back to disk to allow continuation of service.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Hypervisor agnostic.</li>
<li>RAM caching to remove write IOPS (bar a spill over).</li>
<li>Continuation of service in a cache flood scenario.</li>
<li>Potentially no shared storage needed, at all, if caching on the PVS server.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Read IOPS aren&#8217;t really negated, they&#8217;re just forced over another technology.</li>
<li>Over provisioning of RAM necessary per desktop.</li>
<li>RAM spillover will result in a large amount of IOPS to the shared storage / PVS server.</li>
</ul>
<p><strong>Idea B: Citrix Provisioning Services and Improved RAM caching driver&#8230; with local copy!<br />
</strong></p>
<p><a href="/wp-content/uploads/2012/10/pvslocalcopy.png"><img class="aligncenter size-full wp-image-2413" title="pvslocalcopy" alt="" src="/wp-content/uploads/2012/10/pvslocalcopy.png" height="500" width="576" /></a></p>
<p>Taking the above benefits, but with the gain of utilizing local storage in the spillover event.</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Hypervisor agnostic.</li>
<li>RAM caching to remove write IOPS.</li>
<li>Uses local resources in a spill over.</li>
<li>Continuation of service in a cache flood scenario.</li>
<li>Potentially no shared storage needed, at all.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>Read IOPS aren&#8217;t really negated, they&#8217;re just forced over another technology.</li>
<li>Over provisioning of RAM necessary per desktop.</li>
</ul>
<p><span style="text-decoration:underline;"><strong>So lets Review:</strong></span></p>
<p>And there we have it: four solutions to IOPS negation utilizing the Provisioning Server RAM caching driver, with a little bit of modification, to deliver a robust approach to RAM caching.</p>
<p>The copy and creation of differencing disks would again deliver additional benefits, letting you leverage the hardware you put into each hypervisor without the shared storage investment.</p>
<p><em>Win Win, but is it?</em></p>
<p><strong>There&#8217;s an oversight here:</strong></p>
<p>There&#8217;s a niggle here that&#8217;s been bothering me for some time and you&#8217;ll probably note I mentioned it as a CON to most of the solutions above&#8230; I&#8217;m going to lay it out on the table in complete honesty&#8230;</p>
<p style="text-align:center;"><em>&#8220;Isn&#8217;t over provisioning RAM on a per desktop basis a waste of resource? Wouldn&#8217;t it be better if we could share that resource across all VM&#8217;s on a Hypervisor basis?&#8221;</em></p>
<p>You betcha! If we are assigning out (for argument&#8217;s sake) 1 GB of RAM cache per VM, that RAM is locked into that VM, and if another machine spills, the free RAM in the other desktops is wasted.</p>
<p>You would be completely insane not to reserve this RAM per VM; if an over-commit for the VMs is reached, this RAM will merely spill out to a page file type arrangement, negating <strong>all</strong> your benefits.</p>
<p>Ultimately, assigning RAM in this way could be seen as wasteful in the grand scheme of things&#8230;</p>
<p><span style="text-decoration:underline;"><strong>But there are other ways to <a href="http://www.worldwidewords.org/qa/qa-mor1.htm" target="_blank">skin this cat</a>!</strong></span></p>
<p>So this leads me on to something I was also considering, which popped up in a Twitter conversation recently: <strong>RAM disks and hypervisors</strong>.</p>
<p>Current storage vendors will create a storage pool consisting of RAM inside a VM, one per hosting hypervisor, presented as local storage to deliver good IOPS from that VM. This VM can perform additional RAM optimizations like compression, de-duplication and sharing of similar pages to reduce the footprint.</p>
<p>This approach is super clever, <em>but</em>, in my humble opinion (please don&#8217;t kill me, vendors), it is wasteful. Running a VM as a storage repository per host carries the overhead of running that virtual machine and takes away from the agnostic solution that is possible&#8230;</p>
<p style="text-align:center;"><strong><em>What If the hypervisor provides the RAM disk?</em></strong></p>
<p style="text-align:left;">So excluding ESXi for a second, as getting a RAM disk into that platform would require VMware to allow access to the stack. Lets look at Hyper-V (2012) and XenServer for a second&#8230;</p>
<p style="text-align:left;">With Unix and Windows platforms, <a href="http://en.wikipedia.org/wiki/List_of_RAM_drive_software" target="_blank">RAM disks have been available for years</a>. They were once a necessity and a number of vendors still provide them for high performance IO environments.</p>
<p style="text-align:left;">Lets say (for arguments sake) Citrix and Microsoft decide to provide a snap-in to their hypervisor to allow native RAM disks (or a vendor writes one themselves!) and maybe, they even decide to provide RAM compression, Spill over to local disk, dedupe, and page sharing on this volume from the hypervisor stack&#8230;</p>
<p style="text-align:left;">Wouldn&#8217;t this extend provide all the benefits we&#8217;ve spoken about, without the need for a VM per host? And using Thin Provisioning allow all desktops to share the large pool of RAMdisk available?</p>
<p style="text-align:center;"><em>Yes, yes, it would.</em></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/ramdisklocalcopy.png"><img class="aligncenter size-full wp-image-2414" title="ramdisklocalcopy" alt="" src="/wp-content/uploads/2012/10/ramdisklocalcopy.png" height="411" width="595" /></a></p>
<p style="text-align:center;"><em>Above is an example of how I would see this working.</em></p>
<p style="text-align:left;">So random pictures are fine and all, but what about the read IOPS negation technologies? and what about combining these with XenServer or Hyper-V?</p>
<p style="text-align:left;"><strong>XenServer and IntelliCache:</strong></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/mcsramdiskintellicache.png"><img class="aligncenter size-full wp-image-2415" title="mcsramdiskintellicache" alt="" src="/wp-content/uploads/2012/10/mcsramdiskintellicache.png" height="308" width="595" /></a></p>
<p style="text-align:left;">Well there you have it now, all the benefits of a per VM filter, leveraging intellicache for reads and spilling out to local disk&#8230; RESULT!</p>
<p style="text-align:left;"><strong>Pro&#8217;s:</strong></p>
<ul>
<li>Read IOPS negation</li>
<li>Write IOPS negation</li>
<li>No shared storage required for running VMs</li>
<li>Shared pool of RAM for all desktops to use.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>A small one, but no migration of VMs.</li>
</ul>
<p><strong>Provisioning server?</strong></p>
<p><a href="/wp-content/uploads/2012/10/pvsramdisk1.png"><img class="aligncenter size-full wp-image-2418" title="pvsramdisk" alt="" src="/wp-content/uploads/2012/10/pvsramdisk1.png" height="385" width="595" /></a></p>
<p>And again, all the benefits of a per-VM filter, with reads redirected via the LAN and spilling out to local disk&#8230; RESULT!</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Write IOPS negation</li>
<li>No shared storage required for running VMs</li>
<li>Shared pool of RAM for all desktops to use.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>A small one, but no migration of VMs.</li>
<li>No real read IOPS negation.</li>
</ul>
<p style="text-align:left;"><strong>And HyperV + CSV Cache?</strong></p>
<p style="text-align:left;">Well here&#8217;s an accumulation of my thoughts:</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/hypvcsv.png"><img class="aligncenter size-full wp-image-2416" title="hypvcsv" alt="" src="/wp-content/uploads/2012/10/hypvcsv.png" height="464" width="595" /></a></p>
<p style="text-align:left;">So lets just talk for a second about what we are seeing&#8230; Utilizing a RAM disk with spill over, and copied vhd&#8217;s on boot we are removing the need for shared storage completely and hosting the IOPS from the hypervisor, natively without the need for additional VM&#8217;s.</p>
<p style="text-align:left;">And see that little icon down the bottom? Yep, that&#8217;s right, live migration from host to host thanks to <a href="http://blogs.technet.com/b/uspartner_ts2team/archive/2012/07/23/shared-nothing-live-migration-on-windows-server-2012.aspx" target="_blank">Microsofts Shared Nothing Live Migration</a>!</p>
<p><strong>Pros:</strong></p>
<ul>
<li>Some write IOPS negation</li>
<li>No shared storage required for running VMs</li>
<li>Live migration.</li>
<li>Shared pool of RAM for all desktops to use.</li>
<li>Write back to local disk.</li>
</ul>
<p><strong>Cons:</strong></p>
<ul>
<li>No full read IOPS negation.</li>
</ul>
<p style="text-align:left;"><strong>Review.</strong></p>
<p style="text-align:left;">I&#8217;m sure there&#8217;s loads of reading in here, and there will be tons of thought and questions after this blog post. This has been in my head for roughly 9 months now and it feels victorious to finally get it all down on paper.</p>
<p style="text-align:left;">At the end of the day, Ram Caching is the way of the future, you pay no maintenance on it, you keep your IOPS off your storage and with a little magic from Microsoft, or particularly Citrix. You could see these benefits.</p>
<p style="text-align:left;">If / When these technologies make it out to you, you could quite happily stick with Machine Creation services, leverage intellicache and your shared storage requirement could be a cheap NAS from any vendor. Provisioning services also see&#8217;s allot of benefits from this approach, but the real creme de la creme is in that intellicache feature.</p>
<p style="text-align:left;">The key functionality to these approaches is simple:</p>
<ul>
<li>Cache RAM</li>
<li><strong>Spill to disk</strong></li>
<li>Keep the differences on local storage if it spills.</li>
</ul>
<p style="text-align:left;"><strong>One last thing?</strong></p>
<p style="text-align:left;">In true Mark Templeton fashion, you know I have a one last thing. This actually hit me today while writing this post and to be honest, I&#8217;m so amazed by the potential of this idea I&#8217;m going to build it in the lab this weekend.</p>
<p style="text-align:left;">But until then, a teaser&#8230;</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/teaser.png"><img class="aligncenter size-full wp-image-2421" title="teaser" alt="" src="/wp-content/uploads/2012/10/teaser.png" height="317" width="595" /></a></p>
<p style="text-align:left;">What if I told you there was already an overlay, freely available baked into a Microsoft technology that would allow you to overlay local storage in Hyper-V with RAM. A powerful API attached to it, and the ability to write specific files to the local disk, subverting the overlay only when needed?</p>
<ul>
<li>RAM disk type technology? &gt; yep</li>
<li>already available? &gt; yep</li>
<li>Powerful API? &gt; yep</li>
<li>already can spill to disk? &gt; yep</li>
</ul>
<p style="text-align:left;"><strong>Yep Yep Yep!</strong> check back in a few days.</p>
<p style="text-align:left;"><em>(If you know what it is already, keep it to yourself until I&#8217;ve gotten out of the lab!)</em></p>
<p><strong>What do you think?</strong></p>
<p>It doesn&#8217;t take a half-baked enthusiast like me to see this potential, and I&#8217;d be really eager to hear your comments on these approaches. If you would prefer to keep your comments offline, you can reach me at andrew@andrewmorgan.ie.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-3-tying-it-all-together-in-the-stack/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>On IOPS, shared storage and a fresh idea. (Part 2) GO GO Citrix Machine Creation Services.</title>
		<link>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/</link>
		<comments>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/#comments</comments>
		<pubDate>Wed, 10 Oct 2012 12:00:37 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[RAM Caching]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2367</guid>
		<description><![CDATA[Welcome back to part two of this little adventure exploring the RAM caching ability of our new favourite little file system filter driver, the Microsoft Enhanced Write Filter. If you missed part 1, you can find it here. So after the first post, my mailbox blew up with queries on how to do this, how the RAM cache weighs up vs PVS and how you can manipulate and &#8220;write out&#8221; to physical disk in a spill over, so before [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="/wp-content/uploads/2012/10/andrew-logo1.png"><img class="alignright size-full wp-image-2358" title="Andrew logo1" src="/wp-content/uploads/2012/10/andrew-logo1.png" alt="" width="75" height="76" /></a>Welcome back to part two of this little adventure on Exploring the RAM caching ability of our newly favourite little file system filter driver, the Micrsoft Extended Write Filter. If you missed part 1, you can find it <a href="http://andrewmorgan.ie/2012/10/05/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/" target="_blank">here</a>.</p>
<p>So after the first post, my mailbox blew up with queries on how to do this, how the RAM cache weighs up vs PVS, and how you can manipulate and &#8220;write out&#8221; to physical disk in a spill over, so before I go any further, let&#8217;s have a quick look at the EWF.<br />
<code><br />
</code><br />
<strong>Deeper into the EWF:<img class="alignright" src="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png?w=595" alt="" width="128" height="128" /></strong><br />
<code><br />
</code><br />
As previously mentioned, the EWF is contained in the operating system of all Windows Embedded Operating systems and also Windows Thin PC. The EWF is a mini filter driver that redirects all reads and writes to memory or local file system depending on where the file currently lives.</p>
<p>In the case of this experiment, we&#8217;re using the <a href="http://msdn.microsoft.com/en-us/library/ms940186(v=winembedded.5).aspx" target="_blank">RAMREG</a> method of EWF. Have a read of that link if you would like more information, but basically the EWF creates an overlay in RAM for the files to live and the configuration is stored in the registry.</p>
<p>Once the EWF is enabled and the machine rebooted, on next login (from an administrative command prompt) you can run ewfmgr -all to view the current status of the EWF:<br />
<code><br />
</code><br />
<a href="/wp-content/uploads/2012/10/ewf.jpg"><img class="size-full wp-image-2368 aligncenter" title="ewf" src="/wp-content/uploads/2012/10/ewf.jpg" alt="" width="595" height="260" /></a><br />
<code><br />
</code><br />
So what happens if I write a large file to the local disk?</p>
<p>Well, this basically! Below I have an installer for SQL Express which is roughly 250 MB. I copied this file to the desktop from a network share and, as you can see below, the file has been stuck into the RAM overlay.<br />
<code><br />
</code><br />
<a href="/wp-content/uploads/2012/10/file-and-ewf-size.jpg"><img class="aligncenter size-full wp-image-2370" title="file and ewf size" src="/wp-content/uploads/2012/10/file-and-ewf-size.jpg" alt="" width="595" height="315" /></a><br />
<code><br />
</code><br />
And that&#8217;s pretty much it! Simple stuff for the moment. When I venture into the API of the EWF in a later post, I&#8217;ll show you how to write this file out to storage if we are reaching capacity, allowing us to spill to disk and address the largest concern currently surrounding Citrix Provisioning Server and RAM caching.<br />
<code><br />
</code><br />
<strong>Next up to the block. Machine Creation Services&#8230;</strong><br />
<code><br />
</code><br />
I won&#8217;t go into why I think this technology is cool; I covered that quite well in the previous post. But this was the first technology I planned to test with the EWF.</p>
<p>So I created a lab at home consisting of two XenServers and an NFS share, with MCS enabled as below:<br />
<code><br />
</code><br />
<a href="/wp-content/uploads/2012/10/drawing1.jpg"><img class="aligncenter size-full wp-image-2372" title="Drawing1" src="/wp-content/uploads/2012/10/drawing1.jpg" alt="" width="595" height="267" /></a><br />
<code><br />
</code><br />
Now before we get down to the nitty-gritty, let&#8217;s remind ourselves of what our tests looked like before and after using the Enhanced Write Filter in this configuration when booting, using and shutting down a single VM:<br />
<code><br />
</code><br />
<a href="/wp-content/uploads/2012/10/review.png"><img class="aligncenter size-full wp-image-2373" title="review" src="/wp-content/uploads/2012/10/review.png" alt="" width="595" height="371" /></a></p>
<p style="text-align:center;"><em>As with last time, black Denotes Writes, Red Denotes Reads.</em></p>
<p><code> </code></p>
<p style="text-align:left;">So looking at our little experiment earlier, we pretty much killed write IOPS on this volume when using the Extended write filter driver. But the read IOPS (in red above) were still quite high for a single desktop.</p>
<p style="text-align:left;">And this is where I had hoped MCS would step into the fray. Even on the first boot with MCS enabled the read negation was notable:</p>
<p><code> </code></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/first-intellicache-boot.jpg"><img class="aligncenter size-full wp-image-2374" title="First intellicache boot" src="/wp-content/uploads/2012/10/first-intellicache-boot.jpg" alt="" width="595" height="437" /></a></p>
<p style="text-align:center;"><em>Test performed on a newly built host.</em></p>
<p><code> </code></p>
<p style="text-align:left;">At no point do read&#8217;s hit as high as the 400 + peak we saw in the previous test. But we still see spikey read IOPS as the Intellicache begins to read and store the image.</p>
<p style="text-align:left;">So now that we know what a first boot will look like. I kicked the test off again from a pre cached device, this time the image <em>should</em> be cached on the local disk of the XenServer as I&#8217;ve booted the image a number of times.</p>
<p style="text-align:left;">Below are the results of the pre cached test:</p>
<p><code> </code></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/single-user-test-mcs-ewf.png"><img class="aligncenter size-full wp-image-2375" title="Single User test MCS + EWF" src="/wp-content/uploads/2012/10/single-user-test-mcs-ewf.png" alt="" width="595" height="290" /></a></p>
<p style="text-align:center;"><em>Holy crap&#8230;</em></p>
<p><code> </code></p>
<p style="text-align:left;">Well, I was incredibly impressed with this result&#8230; But did it scale?</p>
<p style="text-align:left;">So next up, I added additional desktops to the pool to allow for 10 concurrent desktop (restraints of my lab size).</p>
<p><code> </code></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/09-10-2012-14-34-12.jpg"><img class="aligncenter size-full wp-image-2376" title="09-10-2012 14-34-12" src="/wp-content/uploads/2012/10/09-10-2012-14-34-12.jpg" alt="" width="215" height="256" /></a></p>
<p><code> </code></p>
<p style="text-align:left;">Once all the desktops had been booted, I logged in 10 users and decided to send a mass shutdown command from Desktop studio, with fairly cool results:</p>
<p><code> </code></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/10-vms-shutdown.png"><img class="aligncenter size-full wp-image-2377" title="10 VM's shutdown" src="/wp-content/uploads/2012/10/10-vms-shutdown.png" alt="" width="595" height="287" /></a></p>
<p><code> </code></p>
<p style="text-align:left;">And what about the other way around, what about a 10 vm cold boot?</p>
<p><code> </code></p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/10-vms-booting.png"><img class="aligncenter size-full wp-image-2378" title="10 vm's booting" src="/wp-content/uploads/2012/10/10-vms-booting.png" alt="" width="595" height="289" /></a></p>
<p style="text-align:left;">Pretty much IO Free boots and shutdowns!</p>
<p><code> </code></p>
<p style="text-align:left;"><strong>So lets do some napkin math for a second.</strong></p>
<p><code> </code></p>
<p style="text-align:left;">Taking what we&#8217;ve seen so far, lets get down to the nitty gritty and look at maximums / averages.</p>
<p style="text-align:left;">In the first test, minus EWF and MCS, from a single desktop, we saw:</p>
<ul>
<li>Maximum Write IOPS: 238</li>
<li>Average Write IOPS: 24</li>
<li>Maximum Read IOPS: 613</li>
<li>Average Read IOPS: 83</li>
</ul>
<p>And in the end result, with EWF and MCS, and even with the workload times 10, we saw the following:</p>
<ul>
<li>Maximum Write IOPS: 26 (2,380*)</li>
<li>Average Write IOPS: 1.9 (240*)</li>
<li>Maximum Read IOPS: 34 (6130*)</li>
<li>Average Read IOPS: 1.3 (830*)</li>
</ul>
<p>* Denotes original figures times 10</p>
<p>I was amazed by the results of two readily available technologies coupled together to tackle a problem in VDI that we are all aware of and regularly struggle with.</p>
<p><code> </code></p>
<p><strong>What about other, similar technologies?</strong></p>
<p>Now, as VMware and Microsoft have their own technologies similar to MCS (CBRC and CSV cache, respectively), I would be interested in seeing similar tests to see if their solutions can match the underestimated IntelliCache. So if you have a lab with either of these technologies, get in touch and I&#8217;ll send you the method to test this.</p>
<p><code> </code></p>
<p><strong>What&#8217;s up next:</strong></p>
<p>In the following two blog posts, I&#8217;ll cover off the remaining topics:</p>
<ul>
<li>VDI in a Box.</li>
<li>EWF API for spill over</li>
<li>Who has this technology at their disposal?</li>
<li>Other ways to skin this cat.</li>
<li>How to recreate this yourself for your own test.</li>
</ul>
<p><code><br />
</code><br />
<strong>Last up I wanted to address a few quick queries I received via email / Comments:</strong><br />
<code><br />
</code><br />
<em>&#8220;Will this technology approach work with Provisioning Server and local disk caching, allowing you to leverage PVS but spill to a disk write cache?&#8221;</em></p>
<p style="padding-left:30px;">No, The Provisioning Server filter driver has a higher altitude than poor EWF, so PVS grabs and deals with the write before EWF can see it.</p>
<p><em>&#8220;Couldn&#8217;t we just use a RAM disk on the hypervisor?&#8221;</em></p>
<p style="padding-left:30px;">Yes, maybe and not yet&#8230;</p>
<p style="padding-left:60px;">Not yet, with Citrix MCS and Citrix VDI in a Box, Separating the write cache and identity disk from the LUN on which the image is hosted is a bit of a challenge.</p>
<p style="padding-left:60px;">Maybe If using Hyper-V v3 with the shared nothing migration, you now have migration options for live vm&#8217;s. This would allow you to move the WC / ID from one ram cache to another.</p>
<p style="padding-left:60px;">Yes, If using Citrix Provisioning server you could assign the WC to a local storage object on the host the VM lives. This would be tricky with VMware ESXi and XenServer but feel free to give it a try, Hyper-V on the other hand would be extremely easy as many ram disk&#8217;s are available online.</p>
<p><em>&#8220;Atlantis Ilio also has inline dedupe, offering more than just ram caching?&#8221;</em></p>
<p style="padding-left:30px;">True, and I never meant, even for a second to say this technology from Atlantis was anything but brilliant, but with RAM caching on a VM basis, wouldn&#8217;t VMware&#8217;s Transparent page sharing also deliver similar benefits? Without the associated cost?</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>On E2E, Geek speak, IOPS, shared storage and a fresh idea. (Part 1)</title>
		<link>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/</link>
		<comments>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/#comments</comments>
		<pubDate>Fri, 05 Oct 2012 11:13:39 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[RAM Caching]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2191</guid>
		<description><![CDATA[Note: this is part 1 of a 4 post blog. While attending E2EVC Vienna recently, I found myself attending the Citrix Geekspeak session, although the opportunity to rub shoulders with fellow Citrix aficionados was a blast, I found myself utterly frustrated spending time talking about storage challenges in VDI. The topic was interesting and informative, and while there were a few ideas shared about solid state arrays, read IOPS negation with PVS or MCS there really wasn&#8217;t a vendor based, one size fits all [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><em><br />
Note: this is part 1 of a 4 post blog.<a href="/wp-content/uploads/2012/10/andrew-logo1.png"><img class="alignright size-full wp-image-2358" title="Andrew logo1" src="/wp-content/uploads/2012/10/andrew-logo1.png" alt="" width="75" height="76" /></a></em></p>
<p>While attending <a href="http://www.e2evc.com/home/Home.aspx">E2EVC</a> Vienna recently, I found myself in the <a href="http://community.citrix.com/display/cdn/Geek+Speak">Citrix Geek Speak</a> session. Although the opportunity to rub shoulders with fellow Citrix aficionados was a blast, I found myself utterly frustrated at spending time talking about storage challenges in VDI.</p>
<p>The topic was interesting and informative, and while there were a few ideas shared about solid state arrays and read IOPS negation with PVS or MCS, there really wasn&#8217;t a vendor-based, one size fits all solution to reduce both read and write IOPS without leveraging paid-for (and quite expensive, I may add) solutions like Atlantis ILIO, Fusion-io, etc. The conversation was tepid and deflating, and we moved on fairly quickly to another topic.</p>
<p><em>So we got to talking&#8230;</em></p>
<p>In the company of great minds and my good friends <a href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a>, <a href="http://www.linkedin.com/in/iainbrighton" target="_blank">Iain Brighton</a>, <a title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a>, <a href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a>, <a title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a> and <a href="http://www.linkedin.com/profile/view?id=4353554&amp;pid=23220286&amp;authType=name&amp;authToken=wzx6&amp;trk=pbmap" target="_blank">Simon Pettit</a>, over lunch we got talking about this challenge and the pros and cons of IO negation technologies.</p>
<p><strong>Citrix Provisioning Server&#8230; </strong></p>
<p><a href="/wp-content/uploads/2012/10/xendesktop.jpg"><img class="aligncenter size-full wp-image-2349" title="xendesktop" src="/wp-content/uploads/2012/10/xendesktop.jpg" alt="" width="396" height="288" /></a></p>
<p>We spoke about the pros and cons delivered by Citrix Provisioning Server; namely that rather than actually reducing read IOPS, it merely shifts the pressure onto the network rather than the SAN or local resources in the hypervisor.</p>
<p>The RAM caching option for differencing is really clever with Citrix Provisioning Server, but it&#8217;s rarely utilised due to the possibility of filling the cache in RAM and the ultimate blue screen that will follow that event.</p>
<p>Citrix Provisioning Server also requires quite a bit of pre-work to get the PVS server configured and highly available, and then there&#8217;s the education process of learning how to manage images with Provisioning Server; it&#8217;s more of an enterprise tool.</p>
<p><strong>Citrix Machine Creation Services&#8230;</strong></p>
<p><a href="/wp-content/uploads/2012/10/mcs.png"><img class="aligncenter size-medium wp-image-2350" title="MCS" src="/wp-content/uploads/2012/10/mcs.png?w=300" alt="" width="300" height="272" /></a></p>
<p>We spoke about the pros and cons delivered by Citrix Machine Creation Services and IntelliCache. This technology is much smarter about caching regular reads to a local device, but its adoption is mainly SMB, and the requirements for NFS and XenServer were a hard sell&#8230; particularly with a certain hypervisor&#8217;s dominance to date.</p>
<p>Machine Creation Services is stupidly simple to deploy: install XenServer, stick the image on an NFS share (even Windows servers can host these)&#8230; bingo. Image changes are based on snapshots, so the educational process is a moot point.</p>
<p>But again, MCS is only negating read IO, and this assumes you have capable disks under the hypervisor to run multiple workspaces from a single disk or array. It&#8217;s also specific to XenDesktop, sadly, so no hosted shared desktop solution.</p>
<p>We mulled over SSD-based storage quite a lot with MCS, and in agreement we discussed how MCS and IntelliCache could be leveraged on an SSD, but the write cache or differencing disk activity would be so write intensive it would not be long before the SSD would start to wear out.</p>
<p><strong>The mission Statement:</strong></p>
<p>So all this got us to thinking, without paying for incredibly expensive shared storage, redirecting the issue onto another infrastructure service, or adding on to your current shared storage to squeeze out those precious few IOPS, what could be done with VDI and storage to negate this issue?</p>
<p><strong><img class="alignright size-thumbnail wp-image-2351" title="Drive_RAM" src="/wp-content/uploads/2012/10/drive_ram.png?w=150" alt="" width="150" height="150" />So we got back to talking about RAM:</strong></p>
<p>RAM has been around for a long time, and it&#8217;s getting cheaper as time goes by. We pile the stuff into hypervisors for shared server workloads. Citrix Provisioning Server chews up the stuff for caching images server side, and the PVS client even offers a RAM caching feature too, but at present it limits the customer to a Provisioning Server and XenDesktop or XenApp.</p>
<p>But what if we could leverage a RAM caching mechanism decoupled from a paid-for technology or Provisioning Server, one that&#8217;s freely available and can be leveraged in any hypervisor or technology stack, providing a catch-all, free and easy caching mechanism?</p>
<p>And then it hit me&#8230;</p>
<p><strong>Eureka!<a href="/wp-content/uploads/2012/10/jabber-e1349426780600.png"><img class="alignright size-full wp-image-2356" title="Jabber" src="/wp-content/uploads/2012/10/jabber-e1349426780600.png" alt="" width="45" height="45" /></a></strong></p>
<p>This technology has been around for years, cleverly tucked away in Thin Client architecture and Microsoft have been gradually adding more and more functionality to this heavily underestimated driver&#8230;</p>
<p><strong>Re-introducing, ladies and gents, the Microsoft Enhanced Write Filter.<a href="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png"><img class="alignright size-full wp-image-2354" title="WinEmbed_v_rgb_reasonably_small" src="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png" alt="" width="128" height="128" /></a></strong></p>
<p>The <a href="http://en.wikipedia.org/wiki/Enhanced_Write_Filter" target="_blank">Microsoft Extended Write Filter</a> has been happily tucked away in the Microsoft embedded operating systems since XP and is fairly unfamiliar to most administrators. The Microsoft extended write filter saves all disk writes to memory (except for specific chunks of registry) resulting in the pc being clean each time it is restarted.</p>
<p>This technology has been mostly ignored unless you are particularly comfortable with Thin Client architectures. Interestingly during my initial research for this mini project, I found that many <a href="http://windowsdevcenter.com/pub/a/windows/excerpt/carpchacks_chap1/index.html" target="_blank">car pc enthusiasts</a> or early SSD adopters have been hacking this component out of Windows Embedded and installing it into their mainstream windows operating systems to cut down on wear and tear to flash or Solid state disks.</p>
<p>Microsoft have been gradually building this technology with each release, adding a powerful <a href="http://msdn.microsoft.com/en-us/library/ms933204(v=winembedded.5).aspx" target="_blank">API</a> to view cache hits and the contents of the cache, and even to write the cache out to disk as required: in an image update scenario&#8230; or even better, when a RAM spill over is about to happen&#8230;</p>
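<p>To give a feel for what the API exposes, below is a small hedged sketch that simply lists the volumes the EWF driver is protecting. The function and field names follow the embedded SDK documentation, so treat them as assumptions if you are replaying this on a hand-transplanted desktop install.</p>
<pre>
// Hedged sketch, assuming the ewfapi.h header from Windows Embedded Standard:
// enumerate the volumes the EWF driver is currently protecting. The list
// functions and structure fields follow the embedded SDK documentation and
// should be treated as assumptions on a hand-transplanted desktop install.
#include &lt;windows.h>
#include &lt;stdio.h>
#include "ewfapi.h"

#pragma comment(lib, "ewfapi.lib")

int wmain()
{
    EWF_VOLUME_NAME_ENTRY* list = EwfMgrGetProtectedVolumeList();
    if (EwfMgrVolumeNameListIsEmpty(list))
    {
        wprintf(L"No EWF-protected volumes found.\n");
        return 1;
    }

    for (EWF_VOLUME_NAME_ENTRY* entry = list; entry != NULL; entry = entry->Next)
    {
        // entry->Name is the device name (e.g. \Device\HarddiskVolume1),
        // which is what EwfMgrOpenProtected expects.
        wprintf(L"Protected volume: %s\n", entry->Name);
    }

    EwfMgrVolumeNameListDelete(list);
    return 0;
}
</pre>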
<p><strong><em>If this technology is tried and trusted in the car pc world, could it be translated to VDI?&#8230;</em></strong></p>
<p><strong>To the lab!<img class="alignright size-full wp-image-2360" title="icon_company" src="/wp-content/uploads/2012/10/icon_company1-e1349428127779.png" alt="" width="75" height="81" /></strong></p>
<p>Excited with my research thus far, I decided to take the plunge. I began by extracting the EWF drivers and registry keys necessary to run this driver from a copy of Windows Embedded Standard&#8230;</p>
<p>With a bit of trial, error, hair pulling and programming I managed to get this driver into a clean Windows 7 x86 image, so from here I disabled the driver and started testing, first without this driver.</p>
<p>(I&#8217;m not going to go into how to extract the driver yet, please check back later for a follow up post)</p>
<p><strong>Test A: XenDesktop, no RAM caching</strong></p>
<p>To start my testing, I went with a XenDesktop image hosted on XenServer without IntelliCache. I sealed the shared image on an NFS share hosted from a Windows server. I performed a cold boot, opened a few applications, then shut down again.</p>
<p>I tracked the read and write IOPS via Windows Perfmon on the shared NFS disk, and below were my findings:</p>
<p><a href="/wp-content/uploads/2012/10/xd-no-ic.png"><img class="aligncenter size-medium wp-image-2352" title="XD-No-IC" src="/wp-content/uploads/2012/10/xd-no-ic.png?w=243" alt="" width="243" height="300" /></a></p>
<p style="text-align:center;"><em>(Black line denotes Write IOPS to shared storage)</em></p>
<p><strong>Test B: XenDesktop, with the Microsoft Enhanced Write Filter.</strong></p>
<p>The results below pretty much speak for themselves&#8230;</p>
<p><a href="/wp-content/uploads/2012/10/xd-ic.png"><img class="aligncenter size-medium wp-image-2353" title="XD-IC" src="/wp-content/uploads/2012/10/xd-ic.png?w=244" alt="" width="244" height="300" /></a></p>
<p style="text-align:center;"><em>(again, black line denotes write IOPS)</em></p>
<p>I couldn&#8217;t believe how much of a difference this little driver had made to the write IOPS&#8230;</p>
<p>Just for ease of comparison, here&#8217;s a side by side look at what exactly is happening here:</p>
<p><a href="/wp-content/uploads/2012/10/screenshot.png"><img class="aligncenter size-medium wp-image-2355" title="screenshot" src="/wp-content/uploads/2012/10/screenshot.png?w=300" alt="" width="300" height="187" /></a></p>
<p><strong>So where do we go from here:</strong></p>
<p>Well, no matter what you do, Microsoft <span style="text-decoration:underline;">will not</span> support this option and they&#8217;d probably sue the pants off me if I wrapped this solution up for download&#8230;</p>
<p>But this exercise on thinking outside of the box raised some really interesting questions&#8230; Around other technologies and the offerings they already have in their stack.</p>
<p>In my next few posts on this subject I&#8217;ll cover the below topics at length and discuss my findings.</p>
<ul>
<li>I&#8217;ll be looking at how to leverage this with Citrix Intellicache for truly IO less desktops.</li>
<li>I&#8217;ll be looking at how this technology could be adopted by Microsoft for their MED-V stack.</li>
<li>I&#8217;ll be looking at one of my personal favourite technologies, VDI in a Box.</li>
<li>I&#8217;ll be looking at how we could leverage this API for spill over to disk in a cache flood scenario, or even a management appliance to control the spill overs to your storage.</li>
</ul>
<p><em>And Finally</em>, I&#8217;ll be looking at how Citrix or Microsoft could quite easily combine two of their current solutions to provide an incredible offering.</p>
<p>And that&#8217;s it for now, just a little teaser for the follow-up blog posts which I&#8217;ll gradually release before Citrix Synergy.</p>
<p>I would like to thank <a href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a>, <a title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a>, <a href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a> and <a title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a>. For their help and input on this mini project. It was greatly appreciated.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/feed/</wfw:commentRss>
		<slash:comments>13</slash:comments>
		</item>
		<item>
		<title>mboot.c32 error when installing Xenserver from usb</title>
		<link>http://andrewmorgan.ie/2011/06/mboot-c32-error-when-installing-xenserver-from-usb/</link>
		<comments>http://andrewmorgan.ie/2011/06/mboot-c32-error-when-installing-xenserver-from-usb/#comments</comments>
		<pubDate>Sat, 11 Jun 2011 22:57:04 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[mboot.32]]></category>

		<guid isPermaLink="false">http://andymorgan.wordpress.com/?p=476</guid>
		<description><![CDATA[Recently while installing XenServer in my home lab I received the following error when booting the memory key and was left with little help from Google. mboot.c32: not a COM32R image Having followed the instructions exactly from here I couldn&#8217;t understand where I was going wrong. I had downloaded the latest copy of syslinux (4.04) and I was receiving the above error despite what boot option I tried. I then noticed a suggestion lower in the comments that I try syslinux version [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="/wp-content/uploads/2011/06/i_xenserver.png"><img class="alignright size-full wp-image-477" title="i_xenserver" src="/wp-content/uploads/2011/06/i_xenserver.png" alt="" width="78" height="78" /></a>Recently, while installing XenServer in my home lab, I received the following error when booting from the memory key and was left with little help from Google.</p>
<p><em>mboot.c32: not a COM32R image</em></p>
<p>Having followed the instructions exactly from <a href="http://blogs.citrix.com/2010/10/18/how-to-install-citrix-xenserver-from-a-usb-key-usb-built-from-windows-os/" target="_blank">here</a>, I couldn&#8217;t understand where I was going wrong. I had downloaded the latest copy of syslinux (4.04) and I was receiving the above error no matter which boot option I tried. I then noticed a suggestion lower in the comments that I try syslinux version 3.86, which resulted in a flat out &#8220;boot error&#8221; before even loading the XenServer boot menu.</p>
<p>This may seem like a no-brainer to the strong Linux guys, but to the rest of us this was quite an annoying little problem. After much head scratching and troubleshooting, the simple and easy solution was to browse into my freshly downloaded copy of syslinux &gt; com32 &gt; mboot:</p>
<p><a href="/wp-content/uploads/2011/06/mboot.png"><img title="mboot" src="/wp-content/uploads/2011/06/mboot.png" alt="" width="600" height="146" /></a></p>
<p>Copy the mboot.c32 file, then paste it onto the root of the memory key, replacing the existing file:</p>
<p><a href="/wp-content/uploads/2011/06/replace.png"><img class="aligncenter size-full wp-image-480" title="replace" src="/wp-content/uploads/2011/06/replace.png" alt="" width="600" height="255" /></a><a href="/wp-content/uploads/2011/06/mboot.png"><br />
</a></p>
<p>Once you&#8217;ve done the above, reboot the host and try booting from the key again. It should be plain sailing from here.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2011/06/mboot-c32-error-when-installing-xenserver-from-usb/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
