<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Andrew Morgan &#187; IOPS</title>
	<atom:link href="http://andrewmorgan.ie/tag/iops/feed/" rel="self" type="application/rss+xml" />
	<link>http://andrewmorgan.ie</link>
	<description>Grumpy ramblings</description>
	<lastBuildDate>Fri, 30 Jun 2017 09:24:25 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0</generator>
	<item>
		<title>ThinIO facts and figures, Part 4: Storage design and dangerous assumptions.</title>
		<link>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-4-storage-design-and-dangerous-assumptions/</link>
		<comments>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-4-storage-design-and-dangerous-assumptions/#comments</comments>
		<pubDate>Thu, 30 Oct 2014 20:53:13 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Remote Desktop Services (RDS)]]></category>
		<category><![CDATA[Server Based Computing]]></category>
		<category><![CDATA[ThinIO]]></category>
		<category><![CDATA[ThinScale]]></category>
		<category><![CDATA[VDI in a Box]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[VMware View]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[XenApp]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[End User Computing]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[SBC]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[VMware Horizon]]></category>
		<category><![CDATA[xenapp]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=3222</guid>
		<description><![CDATA[Welcome back to this blog series discussing our new product ThinIO. Please find the below three earlier articles in this series: ThinIO facts and figures, Part 1: VDI and Ram caching. ThinIO facts and figures, Part 2: The Bootstorm chestnut. ThinIO facts and figures, Part 3: RDS and Ram caching. In the final blog post in this series, we’re going to discuss storage design and a frequent problem faced when sizing storage. Let’s get right into it: “Designing for average, [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright" src="http://andrewmorgan.ie/wp-content/uploads/2014/04/logo.png" alt="" width="216" height="41" />Welcome back to this blog series discussing our new product ThinIO. Please find the below three earlier articles in this series:</p>
<ul>
<li><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/">ThinIO facts and figures, Part 1: VDI and Ram caching.</a></li>
<li><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-2-the-bootstorm-chestnut/">ThinIO facts and figures, Part 2: The Bootstorm chestnut.</a></li>
<li><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-3-rds-and-ram-caching/">ThinIO facts and figures, Part 3: RDS and Ram caching.</a></li>
</ul>
<p>In the final blog post in this series, we’re going to discuss storage design and a frequent problem faced when sizing storage. Let’s get right into it:</p>
<p><span id="more-3222"></span></p>
<h3><strong>“Designing for average is designing for failure”</strong></h3>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0011.png"><img class="aligncenter size-large wp-image-3223" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0011-1024x626.png" alt="image001" width="625" height="382" /></a></p>
<p style="text-align: center;"><em>Peak IOPS:1015, Average IOPS: 78</em></p>
<p>A frequent mistake I see customers and consultants alike make is taking the average of a sizing requirement and using it as the baseline for sizing an environment.</p>
<p>Looking at the figures produced by our internal load tests, we saw an average of just 78 write IOPS required of the storage by this vanilla XenApp server without ThinIO.</p>
<p>Now frequently, people will take a figure like that, throw in 25% for growth and Bob’s your uncle: ‘order the hardware, Mr SAN man’. When I have questioned them about that assumption, they’ll often respond “oh, it will be a bit slow if there’s contention, but it’ll even itself out”.</p>
<p><strong>Right? Wrong. </strong></p>
<p><strong>Things don’t go slow when they are oversubscribed, they stop.</strong></p>
<p>Don’t take my word for it! Let’s do some simple theoretical math:<img class="alignright  wp-image-3227" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/calculator-150x150.png" alt="calculator" width="98" height="98" /></p>
<p>If you take a storage device and allocate 100 IOPS to this machine, what’s going to happen when a peak like 1000 IOPS is requested? <strong>A lot of queuing.</strong><strong> </strong></p>
<p>In theory, keeping to the 100 IOPS figure, that 1-second burst of IO now takes over 10 seconds to satisfy (1000 / 100).</p>
<p>But it gets worse: all IO requested after that spike is also halted, waiting for the backlog to clear.</p>
<p>Assuming you’re now mid-spike and the request finishes 10 seconds later&#8230; taking your average figure, you have another 10 seconds’ worth of 100 IOs per second potentially queued up behind it&#8230;</p>
<p>Lo and behold, another login occurs and? <strong>STOP.</strong> Storage timeouts, twirly whirlies, application crashes, hour glasses and the good old <strong>“logins are slow”.</strong></p>
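<p>To make the queuing arithmetic concrete, here is a minimal back-of-the-napkin model in Python. It is nothing more than arithmetic over the figures quoted above (a 1000 IO burst, a 100 IOPS allocation and the 78 IOPS average we measured), not a model of any real array:</p>
<pre><code># Back-of-the-napkin queue model using the figures from this post.
BURST_IOS = 1000       # the one-second spike observed in the load test
CAPACITY_IOPS = 100    # what we allocated to this machine
STEADY_IOPS = 78       # the measured average arrival rate

print(f"Burst alone takes {BURST_IOS / CAPACITY_IOPS:.0f}s to drain")  # 10s

# While the burst drains, new IO keeps arriving, leaving only
# 100 - 78 = 22 IOPS of headroom to eat into the backlog:
backlog, seconds = BURST_IOS, 0
while backlog > 0:
    backlog += STEADY_IOPS - CAPACITY_IOPS  # net drain of 22 IOs per second
    seconds += 1
print(f"With steady load on top, the queue takes ~{seconds}s to clear")  # ~46s
</code></pre>
<p>And any further spike during those 46 seconds only pushes the queue further out.</p>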
<h2><strong>Oh OK, so how do I size to accommodate this?</strong></h2>
<p>Well, you’re between a rock and a hard place, aren’t you? You can’t tell users when to log in, and the price tag of a SAN sized for peak activity + 20% is going to cost you more than your entire desktop estate. And as you can see, it’s never safe to assume it will just run slow.</p>
<h3><strong>Buying into shared storage is a tricky business</strong><strong> <img class="alignright  wp-image-3228" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/119460-117532-150x150.jpg" alt="119460-117532" width="115" height="115" /></strong></h3>
<p>Storage is expensive. Very expensive. It always annoys me when you hear vendors in this space referring to themselves as ‘reassuringly expensive’. To me this translates directly to ‘we can charge what we want, so we will, and you can be reassured we feel the price tag is worth it.’</p>
<p>Shared storage was never written with desktop workloads in mind. It was written for ‘steady state’ server workloads and was in the process of going the way of the dodo (extinct) up until the first release of vMotion, which required shared storage and which some say was the saving of the market.</p>
<p>Many vendors are going with software or hardware intelligent tiering. This is a great feature, but the real question to ask is how frequently data is moved from the hot tier to the lower tier. Press your vendor on this, as they more than likely won’t know! Microsoft Storage Spaces is a prime example, with a really poor optimisation process that runs just once a day!</p>
<p>Then ask yourself what happens when a base image update changes the disk layout of the golden image. Further, stateless technologies from the bigger vendors delete the differencing disk on restart; can you be sure the new disk is going to end up in the smaller, faster SSD or RAM tier? Or is the data up there already in contention?</p>
<h3><strong>RAM is far less tricky<img class="alignright size-thumbnail wp-image-3224" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/ram_icon-1-150x150.jpg" alt="ram_icon (1)" width="150" height="150" /></strong></h3>
<p>RAM is a commodity, available in abundance, and throughout every virtual desktop project I’ve architected and deployed, you run out of CPU resources in a ‘fully loaded’ host long before you run out of RAM. RAM is an upfront CAPEX cost and requires little to no running maintenance.</p>
<p>The beauty of what ThinIO does with the little resource you assign it is to turn that <strong>desktop workload</strong> into a healthier and happier <strong>server workload</strong>: minimal burst IO and a low steady-state IO requirement.</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0033.png"><img class="alignright size-large wp-image-3225" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0033-1024x626.png" alt="image003" width="625" height="382" /></a></p>
<p style="text-align: center;"><em>Note the peak of just 40.5 IOPS and average IOPS of less than 2.</em></p>
<p>With as little as 200 MB of cache for each of the 10 users logging in, within an aggressive 3-minute window, we reduced the peak from over 1000 IOPS to 40. That’s a <strong>96% reduction</strong> in burst IO.</p>
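<p>The arithmetic behind those headline figures is easy to check against the peaks quoted under the two charts:</p>
<pre><code># Quick check of the figures quoted under the two charts above.
baseline_peak, thinio_peak = 1015, 40.5
reduction = (baseline_peak - thinio_peak) / baseline_peak
print(f"{reduction:.0%} reduction in burst IO")   # 96%
print(f"Total cache: {200 * 10} MB")              # 200 MB x 10 users = 2000 MB
</code></pre>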
<h3><strong>With ThinIO, you:</strong></h3>
<ul>
<li>Reduce your exposure to massive IO spikes.</li>
<li>Improve user logon times.</li>
<li>Significantly reduce your daily IOPS run rate.</li>
<li>Increase user productivity by spending less time waiting for the storage.</li>
<li>Commit to nothing up front, test it and see how well it works. If you are happy, then buy in.</li>
</ul>
<h3><strong>Lots of Intelligence baked in:</strong></h3>
<p>ThinIO is acutely aware of the key operating system events that cause these kinds of spikes and reacts accordingly to reduce them. ThinIO constantly watches the behavior and IO pattern of the storage and tunes itself accordingly.</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0053.png"><img class="aligncenter wp-image-3229 size-full" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0053.png" alt="image005" width="1024" height="468" /></a></p>
<p>Unlike other technologies, ThinIO is a true caching and performance solution. We do not shuffle useful data in and out of the cache on demand when cache space is contended. We track the patterns and frequency of block access and respond accordingly, delivering all the benefits we have mentioned, even with the tiniest cache, without EVER reducing the capability of the storage when overwhelmed.</p>
<p>And on the opposite side of the scale, when underworked, we leverage our cache to deliver deeper read savings as above.</p>
<p>ThinIO also has a powerful API and PowerShell interface to allow you to report and interact with the cache on demand.</p>
<h2><strong>Wrap up:</strong></h2>
<p>And with the end of the series looming, allow me to finish on some easy points:</p>
<p><strong>ThinIO allows you to:</strong></p>
<ul>
<li>Size your SAN outside of the Lamborghini category &amp; price tag for your desktop estate.</li>
<li>Rapidly achieve far deeper density on your current hardware when you are feeling the pinch.</li>
<li>Guarantee a level of performance by assigning cache per VM, preventing other users from stealing or hampering caching resources.</li>
<li>Improve user experience and login times immediately.</li>
<li>Reduce the impact of boot storms and similar IO storm scenarios.</li>
</ul>
<p>No other vendor can offer as quick a turnaround time with their product. ThinIO installs in seconds and offers a huge range of compatibility.</p>
<p><strong>One more thing:</strong></p>
<p><a href="http://thinscaletechnology.com/download-thinio/"><img class="aligncenter" src="http://thinscaletechnology.com/wp-content/uploads/Download-ThinIO.jpg" alt="" width="300" height="116" /></a></p>
<p>In case you missed ThinIO’s launch day at <a href="http://www.e2evc.com/home/" target="_blank">E2EVC</a> Barcelona, <strong>ThinIO is now GA</strong>, available from our website and production ready! More marketing to follow, but grab your copy now and get playing!</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-4-storage-design-and-dangerous-assumptions/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ThinIO facts and figures, Part 3: RDS and Ram caching.</title>
		<link>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-3-rds-and-ram-caching/</link>
		<comments>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-3-rds-and-ram-caching/#comments</comments>
		<pubDate>Wed, 22 Oct 2014 21:30:41 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Remote Desktop Services (RDS)]]></category>
		<category><![CDATA[Server Based Computing]]></category>
		<category><![CDATA[ThinIO]]></category>
		<category><![CDATA[ThinScale]]></category>
		<category><![CDATA[VDI in a Box]]></category>
		<category><![CDATA[Windows Server]]></category>
		<category><![CDATA[XenApp]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[Horizon View]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Remote Desktop services]]></category>
		<category><![CDATA[SBC]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[Vmware]]></category>
		<category><![CDATA[xenapp]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=3202</guid>
		<description><![CDATA[Welcome back to the third instalment of this blog series focusing on our new technology ThinIO! To recap, below you will find the previous articles: ThinIO facts and figures, Part 1: VDI and Ram caching. ThinIO facts and figures, Part 2: The Bootstorm chestnut. Off topic note: two years ago at an E2EVC event, the concept behind ThinIO was born with just a mad scientist idea amongst peers. If you are lucky enough to be attending E2EVC this weekend, David [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/04/logo.png"><img class="alignright  wp-image-2865" src="http://andrewmorgan.ie/wp-content/uploads/2014/04/logo.png" alt="logo" width="189" height="36" /></a>Welcome back to the third instalment of this blog series focusing on our new technology ThinIO!</p>
<p>To recap, below you will find the previous articles:</p>
<ul>
<li class="entry-title"><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/" target="_blank">ThinIO facts and figures, Part 1: VDI and Ram caching.</a></li>
<li class="entry-title"><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-2-the-bootstorm-chestnut/" rel="bookmark">ThinIO facts and figures, Part 2: The Bootstorm chestnut.</a></li>
</ul>
<h3>Off topic note:</h3>
<p><img class="aligncenter" src="http://www.e2evc.com/home/Portals/0/E2EVC_header.jpg" alt="" width="408" height="51" /></p>
<p>Two years ago at an E2EVC event, the concept behind ThinIO was born from a mad scientist idea amongst peers.</p>
<p>If you are lucky enough to be attending <a href="http://www.e2evc.com/home/Agenda.aspx">E2EVC</a> this weekend, David and I will be there presenting ThinIO and maybe, just maybe there will be an announcement. Our session is on Saturday at 15:30 so pop by, you won&#8217;t be disappointed.</p>
<h3>Back on topic:</h3>
<p>So here&#8217;s a really interesting blog post: Remote Desktop Services (XenApp / XenDesktop hosted shared), or whatever you like to call it. RDS presents a really fun caching platform for us, as it allows us to deal with a much higher IO volume and achieve deeper savings.</p>
<p>We’ve really tested the heck out of how we perform on this platform, across Microsoft RDS, Horizon View RDS integration and Citrix XenSplitPersonality with Machine Creation Services.</p>
<p>The figures we are sharing today are based on the following configuration and load test:</p>
<ul>
<li><img class="alignright  wp-image-3174" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/Logo_Login_VSI_Transparent.png" alt="Logo_Login_VSI_Transparent" width="250" height="42" />Citrix XenDesktop 7.6</li>
<li>Windows Server 2012 R2</li>
<li>Citrix User Profile Manager.</li>
<li>16 GB of RAM.</li>
<li>4 vCPUs.</li>
<li>Login VSI 4.1 medium workload, 1-hour test.</li>
<li>10 users.</li>
<li>VMFS 5 volume.</li>
</ul>
<h3>Fun figures!</h3>
<p>Diving straight in, let’s start by looking at the volume of savings across three cache types.</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image001.png"><img class="aligncenter size-large wp-image-3203" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image001-1024x468.png" alt="image001" width="625" height="285" /></a></p>
<p><span id="more-3202"></span></p>
<h4>Reviewing the details for a moment:</h4>
<p>Running at least three repeated tests per cache type, we found that even at the lowest entry point we would support (50 MB per user), we saw phenomenal savings of over 70% on write IO.</p>
<h5>No pressure, no diamonds!</h5>
<p>To put that into perspective: with a 512 MB cache for 10 users, our cache reached maximum capacity at the second user login. With 8 users still left to log in, the cache full and still an hour’s worth of load testing left, our ThinIO technology was under serious pressure.</p>
<p>This is key to why ThinIO is such a great solution. We don’t just perform well until we fill our cache; we don’t require architecture changes or care about your storage type; we have no lead times or install days. We carry on working with whatever is available, taking a large amount of pressure off storage IOPS and data throughput.</p>
<p>With the figures above, you can see just how well the intelligence behind our cache can scale even when it faces such a steep workload.</p>
<p>Below you will find a breakdown of each test:</p>
<h3>512 MB cache:</h3>
<p>Breaking down the figures of the 512 MB cache test, it’s clear to see just how well ThinIO deals with the tiniest of caches:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0032.png"><img class="aligncenter size-large wp-image-3204" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0032-1024x590.png" alt="image003" width="625" height="360" /></a></p>
<p>When we put this side by side with our baseline averages, you can see we take a huge chunk out of that spiky login pattern and continue to reduce the steady-state IO as the test continues:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0052.png"><img class="aligncenter size-large wp-image-3205" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0052-1024x580.png" alt="image005" width="625" height="354" /></a></p>
<p>So let’s move up and see how we get on!</p>
<h3>1024 MB cache:</h3>
<p>Doubling our cache size, we see a great increase in both read and write savings, as you&#8217;d expect.</p>
<p>With 100 MB of cache per user, and the average user profile in the test three times that size, we are still under pressure. As we natively favour optimising write IO over read, you&#8217;ll see the bulk of the improvements happen on write when we&#8217;re under pressure, as illustrated in this test:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0071.png"><img class="aligncenter size-large wp-image-3207" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0071-1024x599.png" alt="image007" width="625" height="365" /></a></p>
<p>With more cache available during the peak IO point, we make further savings on write:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0091.png"><img class="aligncenter size-large wp-image-3208" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0091-1024x586.png" alt="image009" width="625" height="357" /></a></p>
<h3>2048 MB cache:</h3>
<p>And at our recommended value of 200 MB per user for Remote Desktop Services, the results are phenomenal! At this size, even with the per-user cache still below the 300 MB average profile size, the read IO gets a really good boost and the write IO saving is well over the 95% mark!</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0111.png"><img class="aligncenter size-large wp-image-3209" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0111-1024x537.png" alt="image011" width="625" height="327" /></a></p>
<p>And the side by side comparison is every bit as good as the savings illustrated above, reducing that peak bursty IO to just 41 IOPS:<a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/2048.png"><img class="aligncenter size-large wp-image-3211" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/2048-1024x626.png" alt="2048" width="625" height="382" /></a></p>
<h2><span style="line-height: 1.714285714; font-size: 1rem;">But there&#8217;s more! </span></h2>
<p>As I pointed out in the previous blog, IOPS are just one side of the story. A reduction in data throughput to the disk is also a big benefit when it comes to storage optimisation, and as you can see, we make a big difference:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/mbsec.png"><img class="aligncenter size-full wp-image-3212" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/mbsec.png" alt="mbsec" width="788" height="487" /></a></p>
<h2>Wrap up:</h2>
<p>So there you have it: with ThinIO, a simple, in-VM solution, you can seriously reduce your IO footprint, boost user performance and achieve greater storage density per virtual machine or on Remote Desktop Services technology.</p>
<h4>In the meantime:</h4>
<p>If you would like a chance to test ThinIO pre-release, find access to the public beta below. Thank you for your time and happy testing!</p>
<p><a href="http://thinscaletechnology.com/download-thinio/" target="_blank"><img class="aligncenter wp-image-3171 size-medium" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/Download-ThinIO-Beta-300x101.jpg" alt="Download-ThinIO-Beta" width="300" height="101" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-3-rds-and-ram-caching/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ThinIO facts and figures, Part 2: The Bootstorm chestnut.</title>
		<link>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-2-the-bootstorm-chestnut/</link>
		<comments>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-2-the-bootstorm-chestnut/#comments</comments>
		<pubDate>Sat, 18 Oct 2014 14:30:00 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[ThinIO]]></category>
		<category><![CDATA[VMware]]></category>
		<category><![CDATA[VMware View]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[XenApp]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[RDS]]></category>
		<category><![CDATA[Remote Desktop services]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[VMware Horizon View]]></category>
		<category><![CDATA[xenapp]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=3186</guid>
		<description><![CDATA[Welcome back! This blog post is part of a number of posts in advance of our upcoming release, for reference you can find part one below: ThinIO facts and figures, Part 1: VDI and Ram caching. Getting right to it: In this industry when somebody says ‘boot storms!&#8217; &#8211; most of us will respond with: Boot storms are a well documented, boring problem and have many solutions available from vendors and hypervisors alike. Most solutions today rely on a &#8216;shared memory&#8217; [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright wp-image-2865 " src="http://andrewmorgan.ie/wp-content/uploads/2014/04/logo.png" alt="logo" width="174" height="33" />Welcome back! This blog post is one of a number of posts in advance of our upcoming release; for reference, you can find part one below:</p>
<ul>
<li class="entry-title"><a href="http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/" target="_blank">ThinIO facts and figures, Part 1: VDI and Ram caching.</a></li>
</ul>
<h2>Getting right to it:</h2>
<p>In this industry when somebody says ‘boot storms!&#8217; &#8211; most of us will respond with:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0022.png"><img class="aligncenter  wp-image-3195" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0022.png" alt="image002" width="163" height="119" /></a></p>
<p>Boot storms are a well documented, boring problem with many solutions available from vendors and hypervisors alike. Most solutions today rely on a &#8216;shared memory&#8217; storage area to cache &#8216;on boot&#8217;, in theory caching only one startup or one pattern in order to serve it back to subsequent desktops as they boot.</p>
<p>But why are boot storms an issue? While working on ThinIO we had the unique ability to really dive into the Windows boot process and analyse why boot storms cause the damage they do, and in this post we thought we’d share our findings to better document the issue.</p>
<p><span id="more-3186"></span></p>
<h2>Boot data:</h2>
<p>Taking a typical Windows 7 boot, to the login screen and idling until all services have started, the data traversing from disk to VM is relatively small. In our testing we found an average of just 500-600 MB of data is read during this process, and write data barely registers at between 20 and 30 MB.</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0031.png"><img class="aligncenter size-full wp-image-3189" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0031.png" alt="image003" width="867" height="515" /></a></p>
<p>But hey, what gives? With such low data throughput, why is boot such a contentious issue? Have I been misled by marketing and vendor nonsense?</p>
<h2><strong>The IO chestnut:</strong></h2>
<p>Sadly no, it’s the way Windows requests this data. But don’t take my word for it&#8230; behold, the incredible mess that is the Windows boot process!</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0051.png"><img class="aligncenter size-full wp-image-3190" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image0051.png" alt="image005" width="867" height="532" /></a></p>
<p>Yep, that’s right: in the time Windows requested roughly 600 MB of data, it sent down an astounding 70,000 IOs in the space of 2-3 minutes!</p>
<h2><strong>Math time:</strong></h2>
<p>Now if you were to take these figures as they stand, you would take 70,000 IOs, divide that into 560 MB, and you’d probably end up with an average of about 8 KB of data requested per IO&#8230; You’d be wrong.</p>
<p><em>As my good buddy Conor Scolard would say, ‘when you Assume, you make an ass out of you and me’.</em></p>
<p>To better understand the boundaries: Windows requests IOs from a minimum of 512 bytes all the way up the spectrum, later in the boot process, to 128 KB and above. But it requests these blocks sparsely, on demand, and not just once per sector; the same blocks are frequently accessed.</p>
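<p>A quick Python sanity check makes the point. The naive mean is easy to reproduce; the request-size mix below is purely hypothetical (illustrative, not measured data), but it shows how a similar mean can hide the fact that the typical request is far smaller:</p>
<pre><code># Naive average: total data divided by total IOs from the boot trace above.
total_mb, total_ios = 560, 70_000
print(f"Naive mean: {total_mb * 1024 / total_ios:.1f} KB per IO")  # ~8.2 KB

# Hypothetical request-size mix (illustrative only, not measured data):
# mostly tiny requests plus a few large ones give roughly the same totals.
sizes_kb = [0.5] * 40_000 + [4] * 24_000 + [76] * 6_000  # ~559 MB in 70k IOs
mean_kb = sum(sizes_kb) / len(sizes_kb)
median_kb = sorted(sizes_kb)[len(sizes_kb) // 2]
print(f"Mean {mean_kb:.1f} KB vs median {median_kb} KB")  # 8.2 KB vs 0.5 KB
</code></pre>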
<p>The net result of this access pattern is absolute havoc on the storage:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image007.png"><img class="aligncenter size-full wp-image-3191" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image007.png" alt="image007" width="867" height="473" /></a></p>
<p>The crux of the issue is that for each one of these IOs, the storage provider needs to compute the block data requested, seek the data out, then return it.</p>
<p>But 70,000 of these IO operations for a meagre 600 MB of data is madness, and you can now see exactly why boot storms were labelled as such by those early adopters who had their hands burned by this fact-finding mission.</p>
<p><em>I’ll mitigate this issue by just booting my VM’s at night!</em></p>
<p>I’m sure you will! I would also love to see your face if a number of users happen to restart their desktops during the day, each cascading 70,000 IOs to the storage in a 2-minute window!</p>
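<p>Some rough arithmetic on that scenario (assuming the ~70,000 boot IOs spread evenly over the two-minute window from the trace above; real bursts would be spikier still):</p>
<pre><code># Aggregate load if several desktops restart mid-day.
ios_per_boot, window_seconds = 70_000, 120
per_desktop_iops = ios_per_boot / window_seconds   # ~583 IOPS per desktop
for desktops in (1, 5, 10):
    print(f"{desktops} desktop(s) restarting: ~{desktops * per_desktop_iops:,.0f} IOPS")
</code></pre>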
<h2><strong>Bootstorming IS an issue.</strong></h2>
<p>Now, knowing all this, it makes sense as to why storage and hypervisors alike are using a cache of ram.</p>
<h2><strong>But how does ThinIO fit in here? With Read Ahead of course!</strong></h2>
<p>Knowing the Windows boot process as intimately as only a technology like ThinIO can, there are many, many optimisations we can make to this process.</p>
<p>We can both speed up the boot process and massively reduce the storage requirement, from inside the VM, without any fancy caching mechanism!</p>
<p>With ThinIO’s read ahead technology, we can deliver just shy of an 80% boot IO reduction with nothing other than having our technology in the virtual machine:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image009.png"><img class="aligncenter size-full wp-image-3192" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image009.png" alt="image009" width="867" height="513" /></a></p>
<p>Taking a ThinIO averaged test and overlaying it on a baseline averaged test, it’s clear just how much impact this technology can have:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image011.png"><img class="aligncenter size-full wp-image-3193" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image011.png" alt="image011" width="867" height="473" /></a></p>
<h2><strong>Wrap up:</strong></h2>
<p>So there you have it: with ThinIO, a simple, in-VM solution, not only can you seriously reduce your IO footprint, boost user performance and achieve greater storage density per virtual machine, you can also massively negate the impact a booting VM has on your storage.</p>
<p>If you would like a chance to test ThinIO pre-release, find access to the public beta below. Thank you for your time and happy testing!</p>
<p><a href="http://thinscaletechnology.com/download-thinio/" target="_blank"><img class="aligncenter wp-image-3171 size-medium" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/Download-ThinIO-Beta-300x101.jpg" alt="Download-ThinIO-Beta" width="300" height="101" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-2-the-bootstorm-chestnut/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>ThinIO facts and figures, Part 1: VDI and Ram caching.</title>
		<link>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/</link>
		<comments>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/#comments</comments>
		<pubDate>Mon, 13 Oct 2014 12:00:33 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Server Based Computing]]></category>
		<category><![CDATA[ThinIO]]></category>
		<category><![CDATA[ThinScale]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[VMware View]]></category>
		<category><![CDATA[Horizon View]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[Vmware]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=3168</guid>
		<description><![CDATA[As we draw ever closer to ThinIO’s big day, I thought I’d put a blog post together talking about the RAM caching, statistics, facts and figures we’ve baked into version 1 to deliver some really kick ass performance improvements with even the smallest of allocations of cache per VM. Test, test, review and tune. Rinse and repeat! We’ve spent months load testing, tuning, fixing and retesting ThinIO. And for the first time we’re going to start talking about the dramatic [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright wp-image-2865 " src="http://andrewmorgan.ie/wp-content/uploads/2014/04/logo.png" alt="logo" width="216" height="41" />As we draw ever closer to ThinIO’s big day, I thought I’d put a blog post together talking about the RAM caching, statistics, facts and figures we’ve baked into version 1 to deliver some really kick-ass performance improvements with even the smallest of cache allocations per VM.</p>
<h2>Test, test, review and tune. Rinse and repeat!</h2>
<p>We’ve spent months load testing, tuning, fixing and retesting ThinIO. And for the first time, we’re going to start talking about the dramatic results ThinIO can have on storage scalability and user-perceived performance.</p>
<p>During our extensive testing cycles, we’ve covered:</p>
<ul>
<li>Horizon View</li>
<li>Citrix XenDesktop</li>
<li>Microsoft RDS</li>
</ul>
<p>We’ve been seeing very similar, if not identical results when testing against pools on the following storage types too:</p>
<ul>
<li>XenServer SR</li>
<li>VMFS</li>
<li>NFS</li>
<li>Microsoft Cluster Shared Volumes</li>
</ul>
<p><span id="more-3168"></span></p>
<p>For reference, the statistics we’re sharing today are based on VDI via VMware Horizon View 6; these figures are averages across at least three independent tests. Full details of the tests that produced these results are covered below.</p>
<h2>Testing details:<img class="alignright  wp-image-3174" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/Logo_Login_VSI_Transparent-300x50.png" alt="Logo_Login_VSI_Transparent" width="174" height="29" /></h2>
<p>The VMs we tested in this particular workload were configured as follows:</p>
<ul>
<li><strong>Testing method:</strong> Login VSI 4.1 medium workload</li>
<li><strong>Operating system:</strong> Windows 7 x64 SP1</li>
<li><strong>System RAM:</strong> 3 GB</li>
<li><strong>vCPU:</strong> 2</li>
<li><strong>ThinIO cache:</strong> 350 MB</li>
<li><strong>Technology:</strong> VMware Horizon View 6</li>
<li><strong>Test runtime:</strong> 1 hour*</li>
<li><strong>Statistic sample period:</strong> 5 seconds</li>
</ul>
<p>With that out of the way, let’s jump right in!</p>
<h2>Storage IO:</h2>
<p>The number of IOs per second is crucially important when dealing with storage; many, many small IOs sent to sparse locations on disk are a killer for storage technologies, and certain file systems only make it worse.</p>
<p>As a storage acceleration and negation technology, we’re extremely happy with the IO reduction we see on the storage:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image002.png" target="_blank"><img class="aligncenter wp-image-3173 size-large" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image002-1024x691.png" alt="image002" width="625" height="421" /></a></p>
<p>Even with just 350 MB of RAM as a cache, we achieve phenomenal IO reduction.</p>
<h2>Storage MB / Sec:</h2>
<p>But IOs are just one part of the puzzle; what about the size of the data requests being sent to the storage?</p>
<p>A true solution to take the pressure off the SAN, improve user performance, and increase storage density needs to tackle both the IO and the <strong>throughput</strong> problem.</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image003.png" target="_blank"><img class="aligncenter size-large wp-image-3172" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image003-1024x648.png" alt="image003" width="625" height="395" /></a></p>
<p>As you can see above, with just 350 MB we’re very good at it!</p>
<h2>Side by Side Comparison:</h2>
<p>So rounded figures are fine as long as the data is trustworthy, but here’s a real preview laid bare for your analysis.</p>
<p>Here’s a direct comparison, on NFS and VMFS, of a standard load test IO pattern against an identical test with ThinIO installed in the VM:</p>
<h3 style="text-align: center;">VMFS:</h3>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image004.png" target="_blank"><img class="aligncenter wp-image-3169 size-large" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image004-1024x466.png" alt="image004" width="625" height="284" /></a></p>
<h3 style="text-align: center;">NFS:</h3>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/10/image005.png" target="_blank"><img class="aligncenter wp-image-3170 size-large" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/image005-1024x473.png" alt="image005" width="625" height="288" /></a></p>
<p>As you can imagine, we’re extremely proud of what we can achieve with as little as 350 MB per desktop.</p>
<p>The beauty of our approach is simplicity: your users can see this benefit not in a matter of weeks, days or even hours. ThinIO can be up and running in minutes, delivering reduced login times, storage acceleration and far deeper density on your current storage.</p>
<h2>Wrap up:</h2>
<p>So there you have it. We’ll be adding additional blog posts in the coming days looking at Remote Desktop Services / XenApp, the intelligent cache management built in, and our Read Ahead technology, so check back. In the meantime, if you would like a chance to test ThinIO pre-release, find access to the public beta below. Thank you for your time and happy testing!</p>
<p><a href="http://thinscaletechnology.com/download-thinio/" target="_blank"><img class="aligncenter wp-image-3171 size-medium" src="http://andrewmorgan.ie/wp-content/uploads/2014/10/Download-ThinIO-Beta-300x101.jpg" alt="Download-ThinIO-Beta" width="300" height="101" /></a></p>
<p>* Eight-hour figures and complete statistics are also available; we have nothing to hide, and we’d encourage you to get in touch with the ThinScale team and we’ll share them with you.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/10/thinio-facts-and-figures-part-1-vdi-and-ram-caching/feed/</wfw:commentRss>
		<slash:comments>7</slash:comments>
		</item>
		<item>
		<title>ThinIO Public Beta is go!</title>
		<link>http://andrewmorgan.ie/2014/09/thinio-public-beta-is-go/</link>
		<comments>http://andrewmorgan.ie/2014/09/thinio-public-beta-is-go/#comments</comments>
		<pubDate>Mon, 15 Sep 2014 14:27:01 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[Horizon View]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Remote Desktop services]]></category>
		<category><![CDATA[Storage Accelleration]]></category>
		<category><![CDATA[VDI]]></category>
		<category><![CDATA[VDI in a Box]]></category>
		<category><![CDATA[Vmware]]></category>
		<category><![CDATA[xenapp]]></category>
		<category><![CDATA[XenDesktop]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2894</guid>
		<description><![CDATA[Let’s get right to it! Warm up your labs or fire up your golden images ladies and gents, we’re delighted to announce ThinIO’s brief public beta will begin today! This project has taught us some really interesting things about Windows IO, how Windows behaves and how the hypervisor and storage can behave. This project really felt like a David vs. Goliath task as we (members of our community with a desire to simplify this issue) attempted to tackle one of [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="alignright  wp-image-2865" src="/wp-content/uploads/2014/04/logo.png" alt="logo" width="204" height="47" />Let’s get right to it!</p>
<p>Warm up your labs or fire up your golden images, ladies and gents: we’re delighted to announce that ThinIO’s brief public beta begins today!</p>
<p>This project has taught us some really interesting things about Windows IO: how Windows behaves, and how the hypervisor and storage behave in turn. It really felt like a David vs. Goliath task as we (members of our community with a desire to simplify this issue) attempted to tackle one of the largest issues in our industry: storage bottlenecks and Windows desktops.</p>
<p>What’s really unique about our approach is there are no hardware lead times, no architecture changes needed and no external dependencies. ThinIO can be installed in seconds and the benefits are seen immediately.</p>
<p><span id="more-2894"></span></p>
<p>We’ve spent countless hours testing, tuning, retesting and even more tuning, and we’re extremely happy with the results. This public beta is an opportunity for you to really kick the tyres of what we’ve built and believe the hype, while we put the final touches on the product ahead of release in the coming weeks.</p>
<p>During this time, we found achieving positive and consistent IO negation boils down to a number of items:</p>
<ul>
<li>Cutting down on the volume of IOPS sent to the storage.</li>
<li>Reducing the data transferred (MB/sec) to and from the storage.</li>
<li>Intelligently cutting down on peak IO, such as boot and user logon.</li>
</ul>
<p>In the coming days we’re going to drill down into these categories in more depth. But as a quick overview, here’s a baseline (top) and ThinIO (bottom) session comparison of a Windows 8.1 desktop login, a 1-hour Login VSI medium workload and log off, with just 350 MB of cache for ThinIO:</p>
<p><a href="http://andrewmorgan.ie/wp-content/uploads/2014/09/image004.jpg"><img class="aligncenter size-full wp-image-2896" src="/wp-content/uploads/2014/09/image004.jpg" alt="image004" width="554" height="323" /></a></p>
<p>Keep an eye out for the coming blog posts, but in the meantime, the ThinIO beta is available to download <a href="http://thinscaletechnology.com/download-thinio/">here</a> now! Go forth and have fun.</p>
<p>Until next time,</p>
<p>A</p>
<p><a href="http://thinscaletechnology.com/download-thinio/" target="_blank"><img class="aligncenter" src="http://thinscaletechnology.com/wp-content/uploads/2014/09/Download-ThinIO-Beta.jpg" alt="" width="313" height="110" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/09/thinio-public-beta-is-go/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ThinIO, taking a peek under the covers.</title>
		<link>http://andrewmorgan.ie/2014/05/thinio-taking-a-peak-under-the-covers/</link>
		<comments>http://andrewmorgan.ie/2014/05/thinio-taking-a-peak-under-the-covers/#comments</comments>
		<pubDate>Wed, 21 May 2014 20:33:52 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[ThinScale]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[ThinIO]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2879</guid>
		<description><![CDATA[What a busy few weeks, Citrix Synergy already feels like a distant memory. We had a great trip and were dumbfounded by the interest and excitement shown by enthusiasts, customers and vendors around our ThinIO solution, with quite a few people insisting on seeing the inner mechanics and trying to break our demo&#8217;s to ensure the figures they saw were legit! For those unfortunate enough to miss synergy or our Webinar with Erik over at XenAppBlog, here&#8217;s a little blog [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="/wp-content/uploads/2014/04/logo.png"><img class="alignright size-full wp-image-2865" src="/wp-content/uploads/2014/04/logo.png" alt="logo" width="300" height="57" /></a>What a busy few weeks; Citrix Synergy already feels like a distant memory. We had a great trip and were dumbfounded by the interest and excitement shown by enthusiasts, customers and vendors around our ThinIO solution, with quite a few people insisting on seeing the inner mechanics and trying to break our demos to ensure the figures they saw were legit!</p>
<p>For those unfortunate enough to have missed Synergy or our webinar with Erik over at XenAppBlog, here&#8217;s a little blog post you will find interesting, as I walk you through the innards of ThinIO and how it delivers disk access at RAM speed without any of the complexity.</p>
<p><span id="more-2879"></span></p>
<h2>What is ThinIO?</h2>
<p>ThinIO is a filter driver which operates at block level, inline between Windows and the disk.</p>
<p>ThinIO sits in the operating system layer and can be used on Windows desktop operating systems or server-based computing models.</p>
<p>ThinIO delivers a greatly reduced IO footprint on your storage, while also speeding up core items like boot times and login times. ThinIO also helps smooth out the peaks your storage gets hit by at busy periods during the day. Ultimately this allows you to size your storage for the average, as opposed to sizing for the worst-case scenario during peaks.</p>
<h2>How does it work?</h2>
<p>When ThinIO starts up, it allocates a configurable cache of reserved RAM to perform its optimisations.</p>
<p>Being the last filter in the stack, ThinIO still allows Windows to perform its own IO optimisations, delivering value by catching read and write IOs just as they hit the disk.</p>
<p>ThinIO interacts with block data as reads and writes traverse the stack. When a read is observed, it is retrieved from disk and subsequently stored for future use, meaning any subsequent read will be served directly from cache.</p>
<p>But reads are boring, and everyone has a solution for read caching. ThinIO also treats this RAM cache as a storage area for write IO. Write IO is committed almost instantly to the cache, and no IO is sent down to the disk while free space is available in the cache.</p>
<h3>&#8220;But what if the machine runs out of RAM?&#8221;</h3>
<p>Well, I&#8217;m glad you asked! The cache in ThinIO is hard set at a value you configure, so RAM will never be taken from the cache to service other processes. But in the situation where the cache has filled entirely with volatile writes, ThinIO will begin to spill over to the local disk, allowing the virtual machine to continue to operate.</p>
<p>There&#8217;s more: ThinIO actively manages cache contents to keep them as relevant as possible. As the cache begins to fill, ThinIO&#8217;s Lazy Page Writer can identify and flush out blocks that have not been frequently used. This allows you to use a relatively small cache size while still delivering the big numbers we&#8217;ll discuss later.</p>
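<p>For readers who like code, below is a deliberately simplified Python sketch of the general write-back-cache-with-spill-over idea described above. It uses a plain least-recently-used eviction policy as a stand-in for the frequency tracking just mentioned; treat it as an illustration of the technique, not ThinIO&#8217;s actual driver logic:</p>
<pre><code># Toy write-back block cache with spill-over (illustration only).
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks, disk):
        self.capacity = capacity_blocks
        self.disk = disk            # a dict standing in for the volume
        self.cache = OrderedDict()  # block_id -> (data, dirty); order = recency

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)        # refresh recency on a hit
            return self.cache[block_id][0]
        data = self.disk[block_id]                  # miss: fetch from disk once
        self._insert(block_id, data, dirty=False)
        return data                                 # later reads come from RAM

    def write(self, block_id, data):
        self._insert(block_id, data, dirty=True)    # held in RAM, not sent to disk

    def _insert(self, block_id, data, dirty):
        self.cache[block_id] = (data, dirty)
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.capacity:      # full: evict or spill over
            victim, (vdata, vdirty) = self.cache.popitem(last=False)
            if vdirty:
                self.disk[victim] = vdata           # dirty blocks spill to disk
            # clean blocks are simply dropped; they can be re-read on demand
</code></pre>
<p>The shape of the problem is the same as described above: serve hits from RAM, hold writes in RAM, and only touch the disk when something has to spill.</p>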
<h2>Designed to be foolproof:</h2>
<p>ThinIO&#8217;s GUI is foolproof: it&#8217;s intuitive and gives you a really quick view of ThinIO in real time. ThinIO provides graphical representations of stats on reads, writes and cache usage, as well as an immediate view of the benefit ThinIO has created for the desktop.</p>
<p>The ThinIO console can also remotely connect to machines to ensure you don&#8217;t have to disturb the user while checking performance.</p>
<p style="text-align: center;"><img src="/wp-content/uploads/2014/05/052114_2033_thiniotakin2.png" alt="" /></p>
<p>When the cache is enabled, ThinIO also has a real-time statistics window to help you identify disk patterns and cache performance:</p>
<p style="text-align: center;"><img src="/wp-content/uploads/2014/05/052114_2033_thiniotakin3.png" alt="" /></p>
<h2>Boot and application launch time optimization:</h2>
<p>ThinIO has some really clever technology built in to optimise the Windows boot process and user experience.</p>
<p>During early testing, we observed just how inefficiently Windows uses its disk resources during the boot process. The same files are regularly requested over and over again on boot, and when those blocks are non-contiguous, seek times are inherent. Busy servers were requesting up to 80,000 read IOPS during boot and process start.</p>
<p>ThinIO&#8217;s Read Ahead feature allows you to teach Windows to be less of a storage monster. As the ThinIO cache is already aware of all the blocks needed to boot, or even to serve the user&#8217;s first launch of their applications, Read Ahead allows you to boot the machine with a preloaded cache of required blocks, sorted contiguously.</p>
<p>When ThinIO starts up, it identifies the &#8216;Read Ahead&#8217; configuration file and pauses Windows while it reads the required blocks once, in a contiguous pattern around the disk. Once finished, Windows continues to boot, retrieving the majority of its block data directly from cache.</p>
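<p>Conceptually, that warm-up pass looks something like the Python sketch below. The file name and mechanics here are hypothetical stand-ins (ThinIO&#8217;s actual configuration format is its own); the point is simply that sorting the recorded offsets turns tens of thousands of random seeks into one ordered sweep:</p>
<pre><code># Conceptual read-ahead warm-up (hypothetical paths and format,
# not ThinIO's actual implementation).
BLOCK_SIZE = 4096

def preload_cache(volume_path, recorded_offsets):
    cache = {}
    with open(volume_path, "rb") as volume:
        # Sorting the offsets recorded on a reference boot means one
        # near-contiguous pass over the disk instead of random seeks.
        for offset in sorted(set(recorded_offsets)):
            volume.seek(offset)
            cache[offset] = volume.read(BLOCK_SIZE)
    return cache

# Offsets captured during a reference boot, replayed at start-up;
# subsequent boot reads then hit RAM instead of the disk.
cache = preload_cache("boot_volume.img", [81920, 0, 4096, 40960])
</code></pre>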
<p>By doing this, ThinIO was delivering roughly 30% faster boot times while also reducing boot IOPS by over 80% in our testing. In the graphs below, we did a side-by-side comparison of the Windows start-up process with and without ThinIO.</p>
<p>In the GUI below you will see a machine with the ThinIO cache enabled but no Read Ahead configured. We achieve a good 40% reduction in IOPS on boot and login, which is not bad on its own, but we knew we could make it better:</p>
<p style="text-align: center;"><img src="/wp-content/uploads/2014/05/052114_2033_thiniotakin4.png" alt="" /></p>
<p>After building a &#8216;Read Ahead&#8217; configuration by booting a machine, logging in, opening the core set of applications and committing the file, we see the following large improvement in IOPS savings and read cache hit rate:</p>
<p style="text-align: center;"><img src="/wp-content/uploads/2014/05/052114_2033_thiniotakin5.png" alt="" /></p>
<p>So there you have it. By taking an additional 3-4 minutes with your golden image, you reduce nearly 30,000 IOPS to roughly 5,000 while also reducing boot times. Not only have you taken a lot of pressure off your storage; if you launched your users&#8217; core application files as part of the Read Ahead configuration, their login speeds will receive a really good boost and their application launch times will be near instant.</p>
<p>Once the read ahead is complete, the driver will start to reuse cache space that is no longer needed for the more chatty blocks of read or write, so configuring Read Ahead has zero impact on cache usage in the longer term.</p>
<h2>Deploy, size, done.</h2>
<p>Out of the box, ThinIO takes less than 5 minutes to install and configure, delivering immediate benefits. No hoping, trusting or praying that the hardware vendor&#8217;s figures are correct. No SAN or LUN type requirement, no hardware lead time, no hypervisor requirements and no change needed to your architecture. Whether you are on premises or in cloud SaaS / DaaS, ThinIO installs without any change.</p>
<h2>Licensing:</h2>
<p>ThinIO will ship with a 30-day grace period for you to test to your heart&#8217;s content without any commitment. If ThinIO is not for you, it&#8217;s just a matter of uninstalling it! Keeping with the spirit of the community, ThinIO will even have a free version available!</p>
<p>Ultimately, designing and deploying virtual desktops is difficult. We really wanted to write a product that both delivers and is simple and easy to deploy. We feel we&#8217;ve absolutely hit the mark on this, and we look forward to opening the program to full deployment in the coming weeks.</p>
<h2>Sounds great, how do I learn more?</h2>
<p>Head on over to the <a href="http://thinscaletechnology.com/thinio/" target="_blank">ThinScale Technology</a> web page and read more or register for the private beta.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/05/thinio-taking-a-peak-under-the-covers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>ThinIO, here comes something incredible.</title>
		<link>http://andrewmorgan.ie/2014/04/thinio-something-incredible-this-way-comes/</link>
		<comments>http://andrewmorgan.ie/2014/04/thinio-something-incredible-this-way-comes/#comments</comments>
		<pubDate>Tue, 29 Apr 2014 14:52:20 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[ThinScale]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2864</guid>
		<description><![CDATA[Well we&#8217;ve been busy! Very, very busy. In the next week you will see the culmination of two years&#8217; work on a product we&#8217;re about to release called ThinIO. Cast your mind back if you will to some ramblings and napkin math I devised some time ago in my series on IOPS negation strategies: IOPS, Shared Storage and a Fresh Idea. (Part 1) IOPS, Shared Storage and a Fresh Idea. (Part 2) IOPS, Shared Storage and a Fresh Idea. (Part [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><img class="size-full wp-image-2865 aligncenter" src="/wp-content/uploads/2014/04/logo.png" alt="logo" width="300" height="57" /></p>
<p>Well we&#8217;ve been busy! Very, very busy. In the next week you will see the culmination of two years&#8217; work on a product we&#8217;re about to release called ThinIO.</p>
<p>Cast your mind back if you will to some ramblings and napkin math I devised some time ago in my series on IOPS negation strategies:</p>
<p><a href="http://andrewmorgan.ie/2012/10/05/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/" target="_blank"> IOPS, Shared Storage and a Fresh Idea. (Part 1)</a><br />
<a href="http://andrewmorgan.ie/2012/10/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/" target="_blank">IOPS, Shared Storage and a Fresh Idea. (Part 2)</a><br />
<a href="http://andrewmorgan.ie/2012/10/26/on-iops-shared-storage-and-a-fresh-idea-part-3-tying-it-all-together-in-the-stack/" target="_blank">IOPS, Shared Storage and a Fresh Idea. (Part 3)</a></p>
<p><span id="more-2864"></span>In these post&#8217;s a bunch of community guys (<span style="color: #555555;"> </span><a style="color: #2970a6;" href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a><span style="color: #555555;">, </span><a style="color: #2970a6;" href="http://www.linkedin.com/in/iainbrighton" target="_blank">Iain Brighton</a><span style="color: #555555;">, </span><a style="color: #2970a6;" title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a><span style="color: #555555;">, </span><a style="color: #2970a6;" href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a><span style="color: #555555;">, </span><a style="color: #2970a6;" title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a><span style="color: #555555;"> and </span><a style="color: #2970a6;" href="http://www.linkedin.com/profile/view?id=4353554&amp;pid=23220286&amp;authType=name&amp;authToken=wzx6&amp;trk=pbmap" target="_blank">Simon Pettit</a>) devised a cunning plan during <a href="http://www.e2evc.com/home/">E2EVC</a> to see if we could counter the monotony of IOPS and their devastating impact on Virtual Desktop implementations. We threw together a loose test scenario where we could demonstrate how technology from Microsofts Windows Embedded Standard functionality EWF (Extended Write Filter) and Citrix&#8217;s XenServer intellicache with explosive performance and IO reduction statistics.</p>
<p>This blog series got way more attention than we could possibly have hoped, and judging by Citrix&#8217;s response of adding RAM caching with disk overflow to Citrix Provisioning Services&#8230; we were definitely listened to. At the end of the series, I alluded to a technology that could be leveraged to achieve some of this, and while I was right, it has taken a long time to get right! With the help of our newest collaborator David Coombes, this technology is very much alive and ready for use.</p>
<p><strong>Here&#8217;s the kicker:</strong></p>
<p>Next week at Citrix Synergy, we&#8217;re dropping some big news for this market: we&#8217;re releasing a product that will deliver insanely fast IOPS on any storage, utilising inexpensive RAM. With our product, no architecture change is required: no SAN volume dependencies, no expensive hardware upgrades and no hypervisor gotchas. ThinIO works with all major desktop virtualisation products like XenApp, XenDesktop, VDI in a Box, Microsoft Remote Desktop technologies and even VMware Horizon View!</p>
<p>ThinIO is just a simple installation and off you go. Not only will this product reduce, standardise and improve the speed of IO, it will also dramatically reduce the impact of boot storms.</p>
<p>Register for XenAppBlog&#8217;s webinar <a href="https://xenapptraining.leadpages.net/massively-reduce-and-standardize-disk-iops/" target="_blank">here</a>, where we&#8217;ll discuss how ThinIO works for the first time, or come visit us at Citrix Synergy <strong>(Booth 513)</strong> to celebrate the culmination of 2 years of work and learn how ThinIO is a performant, reliable and extremely cost-effective method of delivering a lightning-fast experience to your users while protecting your disk storage from grinding to a halt.</p>
<p>Watch this space.</p>
<p><a href="https://xenapptraining.leadpages.net/massively-reduce-and-standardize-disk-iops/" target="_blank">Register for Xenappblogs webinar with ThinScale Technology for the official launch of ThinIO</a></p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2014/04/thinio-something-incredible-this-way-comes/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>On IOPS, shared storage and a fresh idea. (Part 2) GO GO Citrix Machine Creation Services.</title>
		<link>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/</link>
		<comments>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/#comments</comments>
		<pubDate>Wed, 10 Oct 2012 12:00:37 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[RAM Caching]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2367</guid>
<description><![CDATA[Welcome back to part two of this little adventure on Exploring the RAM caching ability of our new favourite little file system filter driver, the Microsoft Extended Write Filter. If you missed part 1, you can find it here. So after the first post, my mailbox blew up with queries on how to do this, how the ram cache weighs up vs pvs and how can you manipulate and &#8220;Write Out&#8221; to physical disk in a spill over, so before [&#8230;]]]></description>
<content:encoded><![CDATA[<p><a href="/wp-content/uploads/2012/10/andrew-logo1.png"><img class="alignright size-full wp-image-2358" title="Andrew logo1" src="/wp-content/uploads/2012/10/andrew-logo1.png" alt="" width="75" height="76" /></a>Welcome back to part two of this little adventure exploring the RAM caching ability of our new favourite little file system filter driver, the Microsoft Extended Write Filter. If you missed part 1, you can find it <a href="http://andrewmorgan.ie/2012/10/05/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/" target="_blank">here</a>.</p>
<p>So after the first post, my mailbox blew up with queries on how to do this, how the RAM cache weighs up against PVS, and how you can manipulate and &#8220;write out&#8221; to physical disk in a spill-over. So before I go any further, let&#8217;s have a quick look at the EWF.</p>
<p><strong>Deeper into the EWF:<img class="alignright" src="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png?w=595" alt="" width="128" height="128" /></strong></p>
<p>As previously mentioned, the EWF ships in all Windows Embedded operating systems and also Windows Thin PC. The EWF is a filter driver that redirects writes into an overlay, and serves reads from the overlay or the underlying volume depending on where the current copy of the data lives.</p>
<p>In the case of this experiment, we&#8217;re using the <a href="http://msdn.microsoft.com/en-us/library/ms940186(v=winembedded.5).aspx" target="_blank">RAMREG</a> method of EWF. Have a read of that link if you would like more information, but basically the EWF creates an overlay in RAM for the files to live and the configuration is stored in the registry.</p>
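<p>Before we look at the status output, it&#8217;s worth noting the whole lifecycle is driven by the <code>ewfmgr</code> tool. A minimal sketch from an administrative command prompt (assuming C: is the volume being protected; switches as per Microsoft&#8217;s ewfmgr documentation):</p>
<pre><code>REM Enable the write filter on C: (takes effect after a reboot)
ewfmgr c: -enable

REM After rebooting, report the state and overlay type of all protected volumes
ewfmgr -all

REM If a change needs to survive a reboot, flush the overlay down to disk
ewfmgr c: -commit</code></pre>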
<p>Once the EWF is enabled and the machine rebooted, on next login (from an administrative command prompt) you can run <code>ewfmgr -all</code> to view the current status of the EWF:</p>
<p><a href="/wp-content/uploads/2012/10/ewf.jpg"><img class="size-full wp-image-2368 aligncenter" title="ewf" src="/wp-content/uploads/2012/10/ewf.jpg" alt="" width="595" height="260" /></a></p>
<p>So what happens if I write a large file to the local disk?</p>
<p>Well, this basically! Below I have an installer for SQL Express which is roughly 250MB. I copied this file to the desktop from a network share, and as you can see below, the file has been stuck into the RAM overlay.</p>
<p><a href="/wp-content/uploads/2012/10/file-and-ewf-size.jpg"><img class="aligncenter size-full wp-image-2370" title="file and ewf size" src="/wp-content/uploads/2012/10/file-and-ewf-size.jpg" alt="" width="595" height="315" /></a></p>
<p>And that&#8217;s pretty much it! Simple stuff for the moment. When I venture into the API of the EWF in a later post, I&#8217;ll show you how to write this file out to storage if we are reaching capacity, allowing us to spill to disk and address the largest concern surrounding Citrix Provisioning Server and RAM caching at present.</p>
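<p>For the curious, the command-line face of that write-out exists already; the EWF API in ewfapi.dll mirrors these verbs, which I&#8217;ll dig into in that later post. A minimal sketch (again assuming C: is the protected volume, and noting the live commit-and-disable variant depends on your EWF build):</p>
<pre><code>REM Persist the current RAM overlay down to the physical volume
ewfmgr c: -commit

REM Or commit and stop filtering in one step, without a reboot
ewfmgr c: -commitanddisable -live</code></pre>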
<p><strong>Next up to the block: Machine Creation Services&#8230;</strong></p>
<p>I won&#8217;t go into why I think this technology is cool; I covered that quite well in the previous post. But this was the first technology I planned to test with the EWF.</p>
<p>So I created a lab at home consisting of two XenServers and an NFS share with MCS enabled, as below:</p>
<p><a href="/wp-content/uploads/2012/10/drawing1.jpg"><img class="aligncenter size-full wp-image-2372" title="Drawing1" src="/wp-content/uploads/2012/10/drawing1.jpg" alt="" width="595" height="267" /></a></p>
<p>Now before we get down to the nitty gritty, let&#8217;s remind ourselves of what our tests looked like before and after using the Extended Write Filter in this configuration when booting and shutting down a single VM:</p>
<p><a href="/wp-content/uploads/2012/10/review.png"><img class="aligncenter size-full wp-image-2373" title="review" src="/wp-content/uploads/2012/10/review.png" alt="" width="595" height="371" /></a></p>
<p style="text-align:center;"><em>As with last time, black Denotes Writes, Red Denotes Reads.</em></p>
<p style="text-align:left;">So looking at our little experiment earlier, we pretty much killed write IOPS on this volume when using the Extended write filter driver. But the read IOPS (in red above) were still quite high for a single desktop.</p>
<p style="text-align:left;">And this is where I had hoped MCS would step into the fray. Even on the first boot with MCS enabled the read negation was notable:</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/first-intellicache-boot.jpg"><img class="aligncenter size-full wp-image-2374" title="First intellicache boot" src="/wp-content/uploads/2012/10/first-intellicache-boot.jpg" alt="" width="595" height="437" /></a></p>
<p style="text-align:center;"><em>Test performed on a newly built host.</em></p>
<p style="text-align:left;">At no point do read&#8217;s hit as high as the 400 + peak we saw in the previous test. But we still see spikey read IOPS as the Intellicache begins to read and store the image.</p>
<p style="text-align:left;">So now that we know what a first boot will look like. I kicked the test off again from a pre cached device, this time the image <em>should</em> be cached on the local disk of the XenServer as I&#8217;ve booted the image a number of times.</p>
<p style="text-align:left;">Below are the results of the pre cached test:</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/single-user-test-mcs-ewf.png"><img class="aligncenter size-full wp-image-2375" title="Single User test MCS + EWF" src="/wp-content/uploads/2012/10/single-user-test-mcs-ewf.png" alt="" width="595" height="290" /></a></p>
<p style="text-align:center;"><em>Holy crap&#8230;</em></p>
<p style="text-align:left;">Well, I was incredibly impressed with this result&#8230; But did it scale?</p>
<p style="text-align:left;">So next up, I added additional desktops to the pool to allow for 10 concurrent desktop (restraints of my lab size).</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/09-10-2012-14-34-12.jpg"><img class="aligncenter size-full wp-image-2376" title="09-10-2012 14-34-12" src="/wp-content/uploads/2012/10/09-10-2012-14-34-12.jpg" alt="" width="215" height="256" /></a></p>
<p style="text-align:left;">Once all the desktops had been booted, I logged in 10 users and decided to send a mass shutdown command from Desktop studio, with fairly cool results:</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/10-vms-shutdown.png"><img class="aligncenter size-full wp-image-2377" title="10 VM's shutdown" src="/wp-content/uploads/2012/10/10-vms-shutdown.png" alt="" width="595" height="287" /></a></p>
<p style="text-align:left;">And what about the other way around, what about a 10 vm cold boot?</p>
<p style="text-align:left;"><a href="/wp-content/uploads/2012/10/10-vms-booting.png"><img class="aligncenter size-full wp-image-2378" title="10 vm's booting" src="/wp-content/uploads/2012/10/10-vms-booting.png" alt="" width="595" height="289" /></a></p>
<p style="text-align:left;">Pretty much IO Free boots and shutdowns!</p>
<p style="text-align:left;"><strong>So lets do some napkin math for a second.</strong></p>
<p style="text-align:left;">Taking what we&#8217;ve seen so far, lets get down to the nitty gritty and look at maximums / averages.</p>
<p style="text-align:left;">In the first test, minus EWF and MCS, from a single desktop, we saw:</p>
<ul>
<li>Maximum Write IOPS: 238</li>
<li>Average Write IOPS: 24</li>
<li>Maximum Read IOPS: 613</li>
<li>Average Read IOPS: 83</li>
</ul>
<p>And in the end result, with EWF and MCS, and even with the workload times 10, we saw the following:</p>
<ul>
<li>Maximum Write IOPS: 26 (2,380*)</li>
<li>Average Write IOPS: 1.9 (240*)</li>
<li>Maximum Read IOPS: 34 (6130*)</li>
<li>Average Read IOPS: 1.3 (830*)</li>
</ul>
<p>* Denotes original figures times 10</p>
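<p>Putting those side by side: normalised per desktop, 26 maximum write IOPS across 10 machines is roughly 2.6 per desktop against the 238 we measured unaided, and 1.9 average write IOPS is roughly 0.19 per desktop against 24. Reads tell the same story (a 3.4 peak per desktop versus 613). That works out at a write IOPS reduction in the region of 99% per desktop.</p>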
<p>I was amazed by the results: two readily available technologies, coupled together, tackling a problem in VDI that we are all aware of and regularly struggle with.</p>
<p><strong>What about other, similar technologies?</strong></p>
<p>VMware and Microsoft have their own technologies similar to MCS (CBRC and CSV Cache, respectively), and I would be interested in seeing similar tests to find out whether their solutions can match the underestimated IntelliCache. So if you have a lab with either of these technologies, get in touch and I&#8217;ll send you the method to test this.</p>
<p><strong>What&#8217;s up next:</strong></p>
<p>In the following two blog posts, I&#8217;ll cover off the remaining topics:</p>
<ul>
<li>VDI in a Box.</li>
<li>EWF API for spill over</li>
<li>Who has this technology at their disposal?</li>
<li>Other ways to skin this cat.</li>
<li>How to recreate this yourself for your own tests.</li>
</ul>
<p><strong>Last up, I wanted to address a few quick queries I received via email / comments:</strong></p>
<p><em>&#8220;Will this technology approach work with Provisioning Server and local disk caching, allowing you to leverage PVS but spill to a disk write cache?&#8221;</em></p>
<p style="padding-left:30px;">No, The Provisioning Server filter driver has a higher altitude than poor EWF, so PVS grabs and deals with the write before EWF can see it.</p>
<p><em>&#8220;Couldn&#8217;t we just use a RAM disk on the hypervisor?&#8221;</em></p>
<p style="padding-left:30px;">Yes, maybe and not yet&#8230;</p>
<p style="padding-left:60px;">Not yet, with Citrix MCS and Citrix VDI in a Box, Separating the write cache and identity disk from the LUN on which the image is hosted is a bit of a challenge.</p>
<p style="padding-left:60px;">Maybe If using Hyper-V v3 with the shared nothing migration, you now have migration options for live vm&#8217;s. This would allow you to move the WC / ID from one ram cache to another.</p>
<p style="padding-left:60px;">Yes, If using Citrix Provisioning server you could assign the WC to a local storage object on the host the VM lives. This would be tricky with VMware ESXi and XenServer but feel free to give it a try, Hyper-V on the other hand would be extremely easy as many ram disk&#8217;s are available online.</p>
<p><em>&#8220;Atlantis ILIO also has inline dedupe, offering more than just RAM caching?&#8221;</em></p>
<p style="padding-left:30px;">True, and I never meant, even for a second to say this technology from Atlantis was anything but brilliant, but with RAM caching on a VM basis, wouldn&#8217;t VMware&#8217;s Transparent page sharing also deliver similar benefits? Without the associated cost?</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2012/10/on-iops-shared-storage-and-a-fresh-idea-part-2-go-go-citrix-machine-creation-services/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>On E2E, Geek speak, IOPS, shared storage and a fresh idea. (Part 1)</title>
		<link>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/</link>
		<comments>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/#comments</comments>
		<pubDate>Fri, 05 Oct 2012 11:13:39 +0000</pubDate>
		<dc:creator><![CDATA[andyjmorgan]]></dc:creator>
				<category><![CDATA[Citrix]]></category>
		<category><![CDATA[Virtual Desktop Infrastructure]]></category>
		<category><![CDATA[XenDesktop]]></category>
		<category><![CDATA[XenServer]]></category>
		<category><![CDATA[IOPS]]></category>
		<category><![CDATA[Microsoft]]></category>
		<category><![CDATA[RAM Caching]]></category>
		<category><![CDATA[VDI]]></category>

		<guid isPermaLink="false">http://andrewmorgan.ie/?p=2191</guid>
<description><![CDATA[Note: this is part 1 of a 4 post blog. While attending E2EVC Vienna recently, I found myself attending the Citrix Geekspeak session, although the opportunity to rub shoulders with fellow Citrix aficionados was a blast, I found myself utterly frustrated spending time talking about storage challenges in VDI. The topic was interesting and informative, and while there were a few ideas shared about solid state arrays, read IOPS negation with PVS or MCS there really wasn&#8217;t a vendor based, one size fits all [&#8230;]]]></description>
<content:encoded><![CDATA[<p><em>Note: this is part 1 of a 4-post blog series.<a href="/wp-content/uploads/2012/10/andrew-logo1.png"><img class="alignright size-full wp-image-2358" title="Andrew logo1" src="/wp-content/uploads/2012/10/andrew-logo1.png" alt="" width="75" height="76" /></a></em></p>
<p>While attending <a href="http://www.e2evc.com/home/Home.aspx">E2EVC</a> Vienna recently, I sat in on the <a href="http://community.citrix.com/display/cdn/Geek+Speak">Citrix Geek Speak</a> session. Although the opportunity to rub shoulders with fellow Citrix aficionados was a blast, I found myself utterly frustrated spending time talking about storage challenges in VDI.</p>
<p>The topic was interesting and informative, and while a few ideas were shared about solid-state arrays and read IOPS negation with PVS or MCS, there really wasn&#8217;t a vendor-based, one-size-fits-all solution to reduce both read and write IOPS without leveraging paid-for (and quite expensive, I may add) solutions like Atlantis ILIO, Fusion-io, etc. The conversation was tepid and deflating, and we moved on fairly quickly to another topic.</p>
<p><em>So we got to talking&#8230;</em></p>
<p>In the company of great minds and my good friends <a href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a>, <a href="http://www.linkedin.com/in/iainbrighton" target="_blank">Iain Brighton</a>, <a title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a>, <a href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a>, <a title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a> and <a href="http://www.linkedin.com/profile/view?id=4353554&amp;pid=23220286&amp;authType=name&amp;authToken=wzx6&amp;trk=pbmap" target="_blank">Simon Pettit</a>, we got talking over lunch about this challenge and the pros and cons of IO negation technologies.</p>
<p><strong>Citrix Provisioning Server&#8230; </strong></p>
<p><a href="/wp-content/uploads/2012/10/xendesktop.jpg"><img class="aligncenter size-full wp-image-2349" title="xendesktop" src="/wp-content/uploads/2012/10/xendesktop.jpg" alt="" width="396" height="288" /></a></p>
<p>We spoke about the pros and cons of Citrix Provisioning Server: rather than actually reducing read IOPS, it merely moves the pressure onto the network instead of the SAN or local resources in the hypervisor.</p>
<p>The RAM caching option for differencing in Citrix Provisioning Server is really clever, but it&#8217;s rarely utilised because the cache in RAM can fill up, with the inevitable blue screen that follows that event.</p>
<p>Citrix Provisioning Server does require quite a bit of pre-work to get the PVS server configured and highly available, plus the education process of learning how to manage images with Provisioning Server; it&#8217;s more of an enterprise tool.</p>
<p><strong>Citrix Machine Creation Services&#8230;</strong></p>
<p><a href="/wp-content/uploads/2012/10/mcs.png"><img class="aligncenter size-medium wp-image-2350" title="MCS" src="/wp-content/uploads/2012/10/mcs.png?w=300" alt="" width="300" height="272" /></a></p>
<p>We spoke about the pros and cons of Citrix Machine Creation Services and IntelliCache. This technology is much smarter about caching regular reads to a local device, but its adoption is mainly SMB, and the requirements for NFS and XenServer were a hard sell&#8230; particularly given a certain hypervisor&#8217;s dominance to date.</p>
<p>Machine Creation Services is stupidly simple to deploy: install XenServer, stick the image on an NFS share (even Windows servers can host these)&#8230; bingo. Image changes are based on snapshots, so the educational burden is a moot point.</p>
<p>But again, MCS only negates read IO, and this assumes you have capable disks under the hypervisor to run multiple workspaces from a single disk or array. It&#8217;s also specific to XenDesktop, sadly, so no hosted shared desktop solution.</p>
<p>We mulled over SSD-based storage quite a lot with MCS, and we agreed that while MCS and IntelliCache could be leveraged on an SSD, the write cache or differencing disk activity would be so write-intensive that it would not be long before the SSD started to wear out.</p>
<p><strong>The mission statement:</strong></p>
<p>So all this got us thinking: without paying for incredibly expensive shared storage, redirecting the issue onto another infrastructure service, or adding on to your current shared storage to squeeze out those precious few IOPS, what could be done with VDI and storage to negate this issue?</p>
<p><strong><img class="alignright size-thumbnail wp-image-2351" title="Drive_RAM" src="/wp-content/uploads/2012/10/drive_ram.png?w=150" alt="" width="150" height="150" />So we got back to talking about RAM:</strong></p>
<p>RAM has been around for a long time, and it&#8217;s getting cheaper as time goes by. We pile the stuff into hypervisors for shared server workloads. Citrix Provisioning Server chews it up for caching images server-side, and the PVS client even offers a RAM caching feature too, but at present that limits the customer to a Provisioning Server with XenDesktop or XenApp.</p>
<p>But what if we could leverage a RAM caching mechanism decoupled from a paid-for technology or Provisioning Server: one that&#8217;s freely available and can be leveraged in any hypervisor or technology stack, providing a catch-all, free and easy caching mechanism?</p>
<p>And then it hit me&#8230;</p>
<p><strong>Eureka!<a href="/wp-content/uploads/2012/10/jabber-e1349426780600.png"><img class="alignright size-full wp-image-2356" title="Jabber" src="/wp-content/uploads/2012/10/jabber-e1349426780600.png" alt="" width="45" height="45" /></a></strong></p>
<p>This technology has been around for years, cleverly tucked away in Thin Client architecture and Microsoft have been gradually adding more and more functionality to this heavily underestimated driver&#8230;</p>
<p><strong>Re-introducing, ladies and gents, the Microsoft Extended Write Filter.<a href="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png"><img class="alignright size-full wp-image-2354" title="WinEmbed_v_rgb_reasonably_small" src="/wp-content/uploads/2012/10/winembed_v_rgb_reasonably_small.png" alt="" width="128" height="128" /></a></strong></p>
<p>The <a href="http://en.wikipedia.org/wiki/Enhanced_Write_Filter" target="_blank">Microsoft Extended Write Filter</a> has been happily tucked away in the Microsoft embedded operating systems since XP Embedded and is fairly unfamiliar to most administrators. It saves all disk writes to memory (except for specific chunks of the registry), resulting in the PC being clean each time it is restarted.</p>
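<p>To make that behaviour concrete, here&#8217;s a minimal sketch on an EWF-protected machine (assuming C: is the protected volume and commands are run from an administrative prompt):</p>
<pre><code>REM With EWF enabled, drop a marker file on the protected volume
echo hello &gt; c:\marker.txt

REM The write lands in the RAM overlay; inspect the overlay state for C:
ewfmgr c:

REM Reboot without committing, and the marker file is gone
shutdown /r /t 0</code></pre>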
<p>This technology has been mostly ignored unless you are particularly comfortable with Thin Client architectures. Interestingly, during my initial research for this mini project, I found that many <a href="http://windowsdevcenter.com/pub/a/windows/excerpt/carpchacks_chap1/index.html" target="_blank">car PC enthusiasts</a> and early SSD adopters have been hacking this component out of Windows Embedded and installing it into their mainstream Windows operating systems to cut down on wear and tear to flash or solid-state disks.</p>
<p>Microsoft have gradually built this technology out, with each release adding to a powerful <a href="http://msdn.microsoft.com/en-us/library/ms933204(v=winembedded.5).aspx" target="_blank">API</a> that can report cache hits, inspect the contents of the cache, and even write the cache out to disk as required: in an image update scenario&#8230; or, even better, when a RAM spill-over is about to happen&#8230;</p>
<p><strong><em>If this technology is tried and trusted in the car pc world, could it be translated to VDI?&#8230;</em></strong></p>
<p><strong>To the lab!<img class="alignright size-full wp-image-2360" title="icon_company" src="/wp-content/uploads/2012/10/icon_company1-e1349428127779.png" alt="" width="75" height="81" /></strong></p>
<p>Excited by my research thus far, I decided to take the plunge. I began by extracting the EWF drivers and the registry keys necessary to run them from a copy of Windows Embedded Standard&#8230;</p>
<p>With a bit of trial, error, hair-pulling and programming, I managed to get this driver into a clean Windows 7 x86 image. From here, I disabled the driver and started by testing without it.</p>
<p>(I&#8217;m not going to go into how to extract the driver yet; please check back later for a follow-up post.)</p>
<p><strong>Test A: XenDesktop,  no RAM caching</strong></p>
<p>To start my testing, I went with a XenDesktop image hosted on XenServer without IntelliCache. I sealed the shared image on an NFS share hosted on a Windows server, performed a cold boot, opened a few applications, then shut down again.</p>
<p>I tracked the read and write IOPS via Windows Perfmon on the shared NFS disk, and below are my findings:</p>
<p><a href="/wp-content/uploads/2012/10/xd-no-ic.png"><img class="aligncenter size-medium wp-image-2352" title="XD-No-IC" src="/wp-content/uploads/2012/10/xd-no-ic.png?w=243" alt="" width="243" height="300" /></a></p>
<p style="text-align:center;"><em>(Black line denotes Write IOPS to shared storage)</em></p>
<p><strong>Test B: XenDesktop, with the Microsoft Extended Write Filter.</strong></p>
<p>The results below pretty much speak for themselves&#8230;</p>
<p><a href="/wp-content/uploads/2012/10/xd-ic.png"><img class="aligncenter size-medium wp-image-2353" title="XD-IC" src="/wp-content/uploads/2012/10/xd-ic.png?w=244" alt="" width="244" height="300" /></a></p>
<p style="text-align:center;"><em>(again, black line denotes write IOPS)</em></p>
<p>I couldn&#8217;t believe how much of a difference this little driver had made to the write IOPS&#8230;</p>
<p>Just for ease of comparison, here&#8217;s a side by side look at what exactly is happening here:</p>
<p><a href="/wp-content/uploads/2012/10/screenshot.png"><img class="aligncenter size-medium wp-image-2355" title="screenshot" src="/wp-content/uploads/2012/10/screenshot.png?w=300" alt="" width="300" height="187" /></a></p>
<p><strong>So where do we go from here:</strong></p>
<p>Well, no matter what you do, Microsoft <span style="text-decoration:underline;">will not</span> support this option and they&#8217;d probably sue the pants off me if I wrapped this solution up for download&#8230;</p>
<p>But this exercise in thinking outside of the box raised some really interesting questions&#8230; around other technologies and the offerings they already have in their stack.</p>
<p>In my next few posts on this subject, I&#8217;ll cover the below topics at length and discuss my findings.</p>
<ul>
<li>I&#8217;ll be looking at how to leverage this with Citrix IntelliCache for truly IO-less desktops.</li>
<li>I&#8217;ll be looking at how this technology could be adopted by Microsoft for their MED-V stack.</li>
<li>I&#8217;ll be looking at one of my personal favourite technologies, VDI in a Box.</li>
<li>I&#8217;ll be looking at how we could leverage this API for spill-over to disk in a cache flood scenario, or even a management appliance to control the spill-overs to your storage.</li>
</ul>
<p><em>And finally</em>, I&#8217;ll be looking at how Citrix or Microsoft could quite easily combine two of their current solutions to provide an incredible offering.</p>
<p>And that&#8217;s it for now, just a little teaser for the follow up blog posts which I&#8217;ll gradually release before Citrix Synergy.</p>
<p>I would like to thank <a href="https://twitter.com/barryschiffer" target="_blank">Barry Schiffer</a>, <a title="The dutch IT guy!" href="http://www.ingmarverheij.com/">Ingmar Verheij</a>, <a href="https://twitter.com/KBaggerman" target="_blank">Kees Baggerman</a> and <a title="An amazing blog, i regularly read from Remko." href="http://www.remkoweijnen.nl/blog/">Remko Weijnen</a> for their help and input on this mini project. It was greatly appreciated.</p>
]]></content:encoded>
			<wfw:commentRss>http://andrewmorgan.ie/2012/10/on-e2e-geek-speak-iops-shared-storage-and-a-fresh-idea-part-1/feed/</wfw:commentRss>
		<slash:comments>13</slash:comments>
		</item>
	</channel>
</rss>
