Tag Archives: whs


Windows Home Server Is Terminal

While not unexpected, Microsoft made it official: Windows Home Server joins the Zune, Kin and others in the Microsoft product graveyard. But that doesn’t mean it’s dead yet. It will be available as an OEM DVD (such as from Newegg) through 2013. Plus, mainstream support doesn’t end until April 16, 2016, so we’ll have security fixes through then at least. I assume we’ll also get general bug fixes if they’re bad enough. OEMs can install it on devices through 2025, but that seems more bizarre than realistic.

Microsoft’s Plan? From the Windows Server 2012 Essentials FAQ (PDF Link):

Q: Will there be a next version of Windows Home Server?
A: No. Windows Home Server has seen its greatest success in small office/home office (SOHO) environments and among the technology enthusiast community. For this reason, Microsoft is combining the features that were previously only found in Windows Home Server, such as support for DLNA-compliant devices and media streaming, into Windows Server 2012 Essentials and focusing our efforts into making Windows Server 2012 Essentials the ideal first server operating system for both small business and home use—offering an intuitive administration experience, elastic and resilient storage features with Storage Spaces, and robust data protection for the server and client computers.

Unless they discount the $425 retail price of the license I don’t see a lot of homes using Windows Server 2012 Essentials as a home server. (Except for enthusiasts who have a TechNet subscription.)

I’ll be running my Windows Home Server 2011 until something better suited for me comes along. My Windows Home Server doesn’t know it’s terminal so it will keep chugging along. Technology constantly changes, as do my storage needs. April 2016 is the earliest I would be forced off WHS. I suspect it will seem like old tech long before that and I’ll move to something more appropriate for the times. That is, assuming I still want a central storage box. I’m already heavily invested in Synology NASes, which I love, so they certainly have an edge. But if they could replace my WHS they would have done so already.

Considering or running WHS? Does Microsoft’s announcement change anything for you?


Synology to Windows Home Server Using iSCSI

I’ve been exploring the capabilities of the Synology NAS products using a Synology DiskStation 212j. This time around I gave it a spin as an iSCSI target from Windows Home Server 2011. There are links at the end for more information about iSCSI, but for my purposes here it can be thought of as a way to present a network-connected drive to the operating system as if it were a local drive. The Synology NAS will be addressed by WHS 2011 as a local drive. No additional software is needed; it’s all built into the Synology software and Windows Home Server.

This was configured using the Synology DiskStation Manager 4 beta software, although the DiskStation Manager 3 software is set up the same way based on the information on the Synology website.

iSCSI Target Types

The Synology DiskStation software supports three different configuration types as an iSCSI LUN:

Regular Files – this configures the target on an already created file volume. This allows flexibility in allocating space. It can be increased anytime, as long as there’s space available on the volume.

Block Level (Single LUN on RAID) – this configures the target on available disks. There can’t be anything else on the disks used and they will be completely allocated. This provides the best performance (according to Synology). The disks can be configured for RAID.

Block Level (Multiple LUNs on RAID) – this configures the target on available disk space. Space already allocated to volumes can’t be used, but the disk(s) can be shared with file volumes.

Configuring iSCSI

The Synology website has good instructions on configuring iSCSI with their software so I won’t repeat them here. For my simple requirements I was able to run through the wizard and accept the defaults; I didn’t set up any advanced options. When configuring a “Regular Files” LUN the size defaults to 1 GB, so I did increase that to a more useful size.

Configuring iSCSI on Windows Home Server 2011 was a bit different from what Synology documents, so I’ll run through it here. The configuration is the same for Windows 7 and Windows Storage Server 2008 R2 Essentials. I suspect Windows Server 2008 R2 is also the same, along with other related software such as Small Business Server 2008.

This needs to be done on the server itself so a Remote Desktop connection is needed (assuming the server is headless). Go to Control Panel and select “Set up iSCSI Initiator”. Then answer “Yes” to the prompt to start the iSCSI service.

iSCSI Control Panel iSCSI Service notice

The iSCSI properties dialog will appear. Select the Discovery tab, click the “Discover Portal” button and enter the IP address (or DNS name) of the Synology NAS. Once the info is entered you should see the iSCSI target on the Synology NAS, although it will still be listed as inactive. To establish the connection click the “Connect” button. In a strange twist of terminology, leave the default “Add this connection to the list of Favorite Targets” checked in order to make the connection persistent.

iSCSI Discovery Properties dialog Discovered targets list Favorite Connections prompt
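If you’d rather script these steps than click through the dialogs (handy on a headless server), the same discovery and login can be done with the iscsicli command-line tool that ships with Windows. This is just a rough sketch, wrapped in Python only to keep it self-contained; the portal address and target IQN are placeholders, so substitute your Synology’s IP and the target name it reports.

    # Sketch: scripting the iSCSI discovery/login with Windows' built-in iscsicli tool.
    # The portal address and target IQN below are example values only.
    import subprocess

    PORTAL = "192.168.1.50"  # assumption: your Synology NAS address
    TARGET = "iqn.2000-01.com.synology:DiskStation.Target-1"  # assumption: example IQN

    def run(*args):
        # Run a command and echo whatever it prints.
        result = subprocess.run(args, capture_output=True, text=True)
        print(result.stdout or result.stderr)

    run("iscsicli", "QAddTargetPortal", PORTAL)  # equivalent of "Discover Portal"
    run("iscsicli", "ListTargets")               # the Synology target should be listed
    run("iscsicli", "QLoginTarget", TARGET)      # equivalent of clicking "Connect"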

At this point the connection is established and the status will change to “Connected”. Next, switch over to the “Disk Management” section of the Computer Management console.

iSCSI properties after connection  Computer Management

When you click on “Disk Management” you’ll be prompted to initialize the disk. If the disk will be larger than 2 TB select “GPT” as the partition table type. Right-click on the newly added disk and select “New Simple Volume” from the context menu. Run through the wizard and when the wizard is done, so are you.
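As a quick aside on why 2 TB is the cut-off for GPT: an MBR partition table stores sector addresses as 32-bit values, so with the standard 512-byte sector it simply can’t describe a partition beyond 2 TiB. The arithmetic:

    # Why disks over 2 TB need GPT: MBR uses 32-bit sector addresses.
    max_sectors = 2 ** 32          # largest sector count MBR can record
    sector_size = 512              # bytes, the traditional sector size
    max_bytes = max_sectors * sector_size

    print(max_bytes / 2 ** 40)     # 2.0 TiB
    print(max_bytes / 10 ** 12)    # ~2.2 TB in decimal (drive-label) terabytes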

Initialize disk prompt  Create volume menu selection  Drive after formatting

Now the disk can be used like any other local disk.

Benchmarks

Performance isn’t a reason for doing iSCSI, at least not with a home network and a low-end Synology DS212j. It’s going to be slower than a local SATA drive, but since I can, I did some benchmarks.

This is Windows Home Server 2011 running on an HP MicroServer with relatively slow Western Digital 1TB Green Drives. It’s a Gigabit network using the MicroServer’s onboard NIC. When running the benchmarks I kept network traffic to a minimum, no streaming video or file copies, but I didn’t turn any devices off, so there was the normal background network traffic. Everything is connected to the same switch.

The DS212j had two 7200 RPM drives in it: one a Western Digital Caviar Black, the other a Hitachi HDT721010SLA360. Both are on Synology’s compatibility list.

The first benchmark shows the local drives; the second shows a “Regular Files” iSCSI target.

Local Drive benchmarks  iSCSI Regular Files benchmarks

I also set up each type of Block Level LUN and benchmarked them. The first is the Single LUN setup, which should be the best performer; the second is a Multi LUN setup.

iSCSI single LUN benchmarks  iSCSI Multi LUN connection benchmarks

Wrapping Up

Being able to use the Synology boxes as an iSCSI target is a nice feature. Since it’s accessed over the network it’s not going to outperform a local drive unless you have a data-center-class network to run it over. iSCSI doesn’t allow multiple PCs to access the same LUN (except with cluster-aware software) since there’s no file locking, so it’s not a suitable replacement for a file share.

The more I explore the Synology software the more I’m considering one of their larger models. While I don’t see any immediate need to swap out anything I use for an iSCSI-connected Synology NAS, I do think an investment in a Synology DiskStation would eventually end up serving as an iSCSI-connected drive somewhere down the road.

Additional Links:

Wikipedia article about iSCSI

Synology iSCSI Best Practices

Synology iSCSI – How to Use


Apple Software On WHS Shares

I run a mixed Windows/Mac home and all my data resides on my Windows Home Server, no matter whether it comes from Windows or the Mac. This means my iPhoto, iTunes, and Aperture libraries are all on my Windows Home Server. I recently noticed that these libraries were saving deleted files forever.

The libraries are a directory structure that OS X understands and may present to the user as a single file. For example, iPhoto displays its library as a single file in OS X unless “show package contents” is selected. Even though my iPhoto library is on a WHS share, OS X displays it to me as a single file bundle. As long as the files remain within the library structure all is well. Libraries that maintain their own internal trash bin (e.g. iPhoto and Aperture, maybe more) end up trying to move the files to the OS X trash bin when you empty the library’s trash bin.

I recently noticed that when I emptied the trash in iPhoto, it moved the files to a “.Trashes” folder on my WHS share (note the leading dot; see the first graphic, click it to enlarge). Actually, I noticed this huge .Trashes folder first and then found it came from iPhoto and Aperture. If this were an OS X drive running on OS X it would be part of the trash bin and get emptied when I emptied the trash. Aperture worked the same way once I checked. On the WHS share the files live forever; even OS X didn’t see the folder as part of the trash bin.

The .Trashes folder can be deleted just like any other folder without causing a problem. The next time you empty a library’s trash it will be recreated. To see the folder you need to enable viewing hidden files and folders (click the screenshot below for the full-size Windows 7 setting):

 

Show Hidden Folders Option

I also found that iTunes saved replaced apps to the .Trashes folder. Luckily it doesn’t save replaced or deleted podcasts. If it did I’d probably have run out of disk space. iTunes doesn’t seem to save anything I delete on my own, only the apps it replaced.

Only the apps that maintain their own library structure have this issue. Deleting regular files on my WHS from OS X deletes them immediately.

I guess there is a price to pay for trying to get Microsoft and Apple to play together. But it’s a small price since it’s easily fixed with a scheduled task to delete the directory.
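For illustration, the scheduled task could be as simple as a script like this pointed at the shares; the share root below is an assumption, so adjust it to wherever your WHS share folders actually live.

    # Sketch of a scheduled cleanup for the stray .Trashes folders on WHS shares.
    # SHARE_ROOT is an example path -- change it to match your server's layout.
    import os
    import shutil

    SHARE_ROOT = r"D:\ServerFolders"   # assumption: typical WHS 2011 share location

    for share in os.listdir(SHARE_ROOT):
        trash = os.path.join(SHARE_ROOT, share, ".Trashes")
        if os.path.isdir(trash):
            print("Removing", trash)
            shutil.rmtree(trash, ignore_errors=True)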


Acer Aspire Windows Home Server AH342-U2T2H

The Acer Aspire Windows Home Server seems to be one of the few Windows Home Servers that can still be purchased in the US. Just before Christmas Newegg had it on sale for $290. After Christmas it went back up to $350 but then dropped further to $260 (its list price is $449). Since it includes Windows Home Server v1, and not the latest version, I suspect we’ll see more discounting as Acer tries to clear out its stock. Hopefully they’ll have a WHS 2011 version and stay in the market. I took a look at the Acer Aspire AH342-U2T2H.

Windows Home Server v1 reaches end of life in January 2013, so any WHS v1 purchase needs to take that into account. It’s not like the server will turn into a pumpkin at that time, but Microsoft will stop providing updates. This will be after the Windows 8 release date, so hopefully Microsoft will release new connector software if it’s needed for WHS. If you’re going to be using the server for remote access, meaning it’s accessible from the internet, the lack of security updates after that date would be a concern. If the server is going to be accessed only by computers in the home then it’s less of a concern.

The hardware should support Windows Home Server 2011 if you want to install it later. There’s no onboard video, so you’ll either need to install a PCIe x1 video card or do a blind unattended install. The server comes with 2GB of RAM and the specs say that 2GB is the max, so that could be an issue depending on what add-ins you install. The Atom D510 CPU is 64-bit so it can run WHS 2011.

This server was purchased to provide backup and central storage for a few PCs, basically a low cost NAS. There’s only one drive so to use folder duplication a second drive would have to be added. Because hard drive prices haven’t returned to pre-flood pricing I’m contributing one of my slightly used 2 TB drives for use in the server.

Initial Setup

Because the WHS software delivered with the server is quite old I couldn’t use it for setup since I have Windows 7 clients. If I had Vista or XP clients I could have installed the bundled software and then upgraded. Since I only had Windows 7 I followed these steps:

  1. Unpacked, plugged in and powered on the server. While it was doing its initial setup I went to step 2.
  2. Downloaded the latest connector software from Microsoft and burned it to a CD.
  3. Once the LEDs stopped blinking I was ready to move on. The quick start guide said all the blue LEDs would be on solid, which is a bit confusing. The panel LEDs include a network LED which blinks for network activity and a hard drive LED which blinks for disk activity. The status LED was blue and red while the drive lights were blue and purple. I moved on once things seemed to settle down.
  4. I popped the connector CD into a Windows 7 PC and ran it. The screenshots for the installation are below. Click for a larger picture.
    Installation screenshots
  5. After logging onto the Windows Home Server my next step was to remove the McAfee anti-virus software. I don’t use AV on my own WHS, and if the owner wanted AV, McAfee would be my last choice. As it is, the included license is limited to 60 days so removing it wasn’t a problem for the server owner. The version pre-installed won’t work once WHS is updated, although there might be an update from McAfee (I didn’t bother to inquire). I uninstalled McAfee through Add/Remove Programs after RDP’ing into the server. It can’t be removed through the add-in manager.
  6. While still RDP’d into the server I ran Windows update and installed all the available updates.

At this point the Acer Aspire is a basic Windows Home Server v1 box with the latest updates.

Hardware & Features

The server comes with one 2TB Western Digital Green Drive (WD20EADS). I’d prefer a small system drive since I don’t like to share the OS drive with data, but in this case it’s not much of a concern since I don’t expect heavy usage. To take advantage of folder duplication I’ll be adding a second drive, which is also a WD20EADS. For testing purposes I added two more drives.

The server also has a nice compact form factor and will look good on a shelf. There’s also an eSATA port and several USB ports (all USB 2). The front USB port has a one-button copy feature I’ll talk about later.

It’s also surprisingly quiet. I’ve got four drives installed and I’m doing a file copy. Even sitting next to the server I have to strain to hear the fan and the drives are silent.

There’s some multimedia software that will probably go unused and I didn’t have time to test it. The console has tabs for “iTunes Server” and “Digital Media Server” and Firefly Media Server is installed. The server did show up as a “Media Server” for my LG Blu-ray player and I was able to stream a video from the server.

The Lights Out add-in is also included, although it’s an old version (v0.8) so it needed to be upgraded. The add-in shipped with an OEM license, but after the upgrade the license reverted to the trial version. Once the trial is over the license will revert to a community edition license which, according to this, has all the features of v0.8 plus a few more. The upgrade was done like installing any other add-in. I didn’t need to uninstall the original add-in, although doing so probably would have been a good idea.

The one-button USB copy is interesting, but I’d prefer it didn’t try to think so much. I tested with a drive full of DVD rips. It copied the drive to the public share as expected, but then it copied about 50 of the .BUP and .IFO files to the video directory and renamed them to avoid duplicates. They’re pretty useless on their own, and the rip directories are broken since those files are now missing. It was also interesting that other files with the same names were left alone. So if you already have files in an organized directory structure this feature may change the structure, and you may want to skip it and do a regular copy.

The expansion slot allows a video card to be added should one be needed. But it’s a PCI Express x1 slot which isn’t common among video cards. I’d be more inclined to look for a USB 3 expansion card to add some external drives. It will need to be a low-profile card.

I wish Acer would drop the McAfee AV add-in, which I view as nothing but crapware. Even if it worked, it’s still only a 60-day license. The Lights Out add-in is outdated, but at least it was a full license. The included add-in and its license don’t provide any benefit once the latest version is installed.

I attached a Lian-Li EX-503 external enclosure via the eSATA port. The server could see four out of the five drives in the enclosure, so the eSATA port can handle a port multiplier but only up to four drives. There were also four drives in the server bays. I didn’t do any benchmarking or other testing beyond verifying that the drives could be seen.

Power Consumption

I did some quick power measurements using a Kill-a-Watt power meter. The server was plugged into the Kill-a-Watt, which was plugged into the UPS outlet. I started with all four drive bays populated. There were three Western Digital 2 TB EADS drives, including the one that shipped with the server as the OS drive. The fourth drive was a Hitachi Deskstar 7K2000 (2 TB, 7200 RPM).

With all four drives the power usage was between 52 and 56 watts. The 52 watts was when the server was idle, at least as far as access goes. Some background processes may be running although CPU usage did remain low. The 56 watts was during file copies or drive removal processing although it mostly stayed at 55 watts under load.

I removed the Hitachi drive and usage dropped to 44 to 46 watts with occasional and brief drops below 44 watts. When folder duplication was active the power usage was 46 watts.

With two WD20EADS drives installed the power usage was 36 watts while idle and 37 watts while processing a client backup. During folder duplication, when both drives would be active, the power usage was 37 watts.

With just the original drive delivered with the server the power usage was 29 watts while idle.

Drive Benchmarks

The benchmarks below are screenshots of the ATTO benchmark results. ATTO was run locally on the server (double-click for full size).

ATTO Benchmark for Drive C:  ATTO Benchmark for Drive D:

There’s not much of a difference between C: and D: since they are the same physical drive.

The screenshot below shows the results of a robocopy from my Windows 7 PC to a server share with duplication enabled.

Robocopy results from Windows 7 PC to Aspire AH342

The reported speed for the file transfer was about 2 GB per minute. If my math is right, at 8 bits per byte and 60 seconds per minute that’s about 271 Mbps. Converting the results to MB/s shows a speed of 33.94 MB/s, which is significantly slower than the ATTO results run directly on the server, but includes all the server and network overhead. Additional tests produced similar results.

The screenshot below shows the results of a robocopy from the Aspire AH342 to my PC. The copy was started after the server completed drive balancing and wasn’t doing anything else.

Results of RoboCopy from Aspire H342 to Win7 PC

Assuming my math is again correct, this works out to 231 Mbps, or 28.93 MB/s.
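For anyone who wants to double-check the arithmetic, the conversion is just bytes to bits and minutes to seconds; a quick sanity check with the figures reported above:

    # Sanity-checking the transfer-rate conversions above.
    gb_per_min = 2                       # robocopy reported roughly 2 GB per minute
    mb_per_s = gb_per_min * 1024 / 60    # ~34.1 MB/s
    print(mb_per_s, mb_per_s * 8)        # ~34.1 MB/s and ~273 Mbps

    print(33.94 * 8)                     # Win7 -> Aspire run: ~271.5 Mbps
    print(28.93 * 8)                     # Aspire -> Win7 run: ~231.4 Mbps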

The file copies were done with mostly video files, so the average file size was pretty large and there wasn’t much overhead from opening lots of small files.

Summary

The price is certainly the big attraction although if you’re going to add three hard drives to max it out the cost will go up considerably at today’s prices. But if you have the drives or can wait for the flood-induced prices to drop it’s worth it. Personally I think a second drive should be added in order to enable folder duplication or to do backups so that will increase the cost.

Returning to Windows Home Server v1 was both nostalgic and a reminder of the frustrations WHS v1 brought. Removing a drive brings down the server while the removal is processed, which can be time-consuming (hours). That’s not something most people will do as a regular activity so it’s not too much of a concern. There was also the occasional slowdown as some process ran (backup cleanup, drive balancing). After using WHS 2011 for about a year, WHS v1 just looked and felt old.

I was impressed with the Acer Aspire AH342 Home Server. It will make a good NAS for sharing files and PC backups, which is why it was bought. It’s not a product someone can buy off the shelf and expect to get running unless they’re familiar with WHS or have only Windows XP and Vista machines. But once the software’s age-related issues were worked out it performed well. Plus I like the nice small cube form factor, and it’s quiet enough to be out in the open and on all the time.


Cloudberry Continuous Data Protection

Cloudberry recently added Continuous Data Protection (CDP) to their backup software, including Cloudberry Backup for Windows Home Server 2011. This seems like something I can use to replace my hourly backup, so I decided to do some testing and switched the hourly backup to use a CDP schedule instead. I use the hourly backup to move my most important files offsite (Amazon S3) soon after they’re created. I don’t use RAID, so if I lose a disk the data needs to be restored from backup, and the hourly backup was my solution. The new CDP option seems like a good fit.

I enabled CDP for my “hourly” backup soon after installing the update. It didn’t work exactly as I expected but the differences didn’t affect the actual backups. I also did some additional stress testing to check out performance and this is what I found.

Using CDP

The CloudBerry blog post said…

Under the hood the changes are captured instantly but the data is uploaded to Cloud storage every 10 minutes.

But I found that the backups occur, or are at least checked for, every minute. According to the logs the files are uploaded immediately. The blog post also mentions this interval is configurable, although I’ve yet to find where it can be configured in the WHS add-in.

Using CDP takes some getting used to because it changes the way the add-in reports its status. It would be less of a problem for someone not used to a regular schedule, or who is less concerned with checking the status regularly.

The screenshot below shows the status screen for my “hourly” backup several days after CDP was enabled.

Cloudberry CDP status screen

 

Some items of note:

  • The job status is always “running”. The status message uses the term “instant backup” when waiting for files to be backed up.
  • The “files uploaded” count only shows the status for the last “instant backup”. If there was nothing to do then the files uploaded is 0. Checking the history shows that file uploads and purges are taking place as required. So while disconcerting, it’s only a cosmetic problem.
  • Since the backup just never ends there are no email updates for success or errors. I used email to let me know if there was a backup error. The emails only go out at the end of a job, so even if there are errors (such as an open file) I don’t get an email. In fact, trying to set up an email status report for a CDP backup resulted in an error. The error read more like a bug than a message saying the feature was unavailable.
  • Rather than a 10-minute interval, a 60-second countdown begins when an “instant backup” is completed. Any waiting files are then backed up and uploaded to Amazon S3.
  • If I stop the backup by clicking “Stop Backup” it doesn’t restart. Rebooting the server does restart the CDP backup jobs.
  • Error handling is inconsistent. In my testing the backup would typically ignore errors created by open/locked files. These were valid errors, and when the files were closed they would be backed up, so it was good the job kept running. But there was one instance where a file was moved after being flagged for backup but before it was backed up. This was a valid condition (an iTunes podcast download, which downloads to a temp directory and is then moved). The backup job recorded this as an error but then stopped any additional processing. Since CDP backup plans don’t seem to restart on their own this is a problem.

Stress Testing

There wasn’t any noticeable impact from changing my hourly backup to use CDP. The HP MicroServers are relatively low powered and not capable of doing many intense tasks at once, and the only add-in I run is the Cloudberry Backup add-in, so I was a bit concerned it would impact streaming or other activity. There’s no noticeable load on the server while it’s waiting/looking for updated files. When there are files to back up the load isn’t any more than the hourly backup, and in theory may be less since the backups are spread out over the hour rather than done all at once. Most of the files in this backup plan get updated overnight through automated jobs (website backups, etc.) while the rest of the changes are data file changes. Still, I decided to do some load testing.

I copied 121,000 files totaling 60 GB to the same drive I would be streaming a video from. I also copied that set of test files to a second drive. As a control I watched a streaming video while the files were being copied. I RDP’d into the server to do the copies so they were all local drive-to-drive copies. The streaming worked for a while, then became slightly annoying, and eventually became unwatchable. At this time there were two file copies going on: one copying from a directory on the drive the files were streaming from to a second directory on the same drive, and another running from a second drive to the streaming drive.

I have seven backup plans. A full description can be found in my recent backup review, but for purposes of this test I set all backup plans to use CDP. Three backup plans matched my test files so they began backing up, while the other four just watched for files. Each of the test drives had a backup plan dedicated to it, doing local backups to eSATA drives so the backup wouldn’t be hindered by network or other limitations. So each drive would be backing up as quickly as the data could be read and written to disk. The third plan included all four drives in the server and backed up to a NAS, so it would be reading from both test drives but only one at a time.

Like the file copies, my video stream started off fine and ran for a while, but then it became annoying as it would frequently stop and need to catch up. So no worse than a comparable file copy, although still too annoying to be acceptable (while subjective, I doubt anyone would be happy). Not surprising, since the backup is not much more than a file copy.

Once the backups were done and the backup plans were just watching I didn’t have a problem streaming and reading files off the server.  Deleting the test files and then letting Cloudberry update their status (I save deleted files for several days so they weren’t actually purged) didn’t affect streaming.

Summary

The good news was that my testing showed CDP didn’t add any significant overhead above the actual file copies. The bad news is my server isn’t designed to handle a lot of simultaneous activity or file copies. Because of the way I have the shares and drives set up, and the way I use the server, I may not notice an impact even with CDP set on all plans. Two of the plans go to destinations that aren’t always online so CDP isn’t a good option for them, and the other plans rarely have simultaneous changes. Still, CDP is far from a universal solution for me.

I’ve left CDP enabled for what was my hourly backup to Amazon S3, but I’ve returned all the other backup plans to their previous schedules. A lot of the time there’s no need for immediate backup and I’d rather wait until all updates are made or a set of files is fully processed. Because of what I send to Amazon S3 I’m less likely to have issues, and it’s been fine since being enabled. I do feel I need to monitor it more than I did when I used an hourly backup, if only to make sure it’s still running, and that may end up being enough to send me back to an hourly schedule if I don’t become more comfortable with CDP’s reliability.

[Update Dec 30, 2011]: I was able to configure email notifications for one CDP plan and it did send a notification when that plan ended with a failure. Unfortunately the CDP plans don’t restart on their own when an error is encountered so I’ve gone back to an hourly schedule for critical backups.