A place for SMB IT channel strategies, partner programs, reseller opportunities, business tips, industry trends, and more.
Look at almost any BDR vendor's website, and you'll see business continuity and disaster recovery services touted as enterprise class (or enterprise grade) at small-business-friendly prices. But what does best-of-breed (or enterprise-class, if you will) backup and general data storage really look like?
When I first began this article, I realized I had a problem: I didn't know what actually counted as data storage and backup best practice among real enterprises. I figured it involved lots of redundancy, RAID arrays, and SANs, but that didn't seem like enough to build an article on. So I reached out to a few of my contacts in the data storage industry with a plea to help me understand what was really happening in the data centers of big business.
Even though I only asked for a paragraph or two, Darren McBride, CEO of Highly Reliable Systems, wrote me back in under an hour with a veritable essay (and given the occasional typo and such, it was obvious he really did write me back). I thought it was interesting and educational enough to reproduce below in full here on SMB Nation. My follow-up article—Cloud, On-Prem or Hybrid? Enterprise Backup and Storage for SMBs—will appear later today on SMB Nation.
Due to the increased use of virtualized servers in the enterprise, storage has migrated over the last 10 years from inside the server to external SANs. While small businesses still largely build their servers with mirrored boot drives and RAID5 or RAID6 SAS drives installed inside the server chassis, enterprise customers like the flexibility of sharing storage among multiple physical and virtual servers. By centralizing storage, the enterprise gains several benefits.
Virtualization platforms like VMware, with features such as vMotion, allow enterprise customers to move running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. Centralized storage also minimizes wasted drive space compared to keeping storage in individual physical servers. In an environment with 100 servers, each holding its own storage, every server has to be configured with enough empty space to allow for growth. If each server is provisioned with two to three times its current data size as overhead, it's not hard to see how an enterprise wastes a tremendous amount of the hard drive space it has purchased. By contrast, with a centralized SAN and shared storage, disks can be virtualized just like the machines are: space can be allocated to each server based on need, without waste.
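The back-of-the-envelope math behind that waste argument can be sketched quickly. All the numbers here are hypothetical, chosen only to illustrate the scale of the difference, not taken from any real deployment:

```python
# Illustrative comparison of per-server vs. pooled storage provisioning.
# All figures are made-up examples, not measurements.

servers = 100
data_per_server_tb = 1.0   # current data held by each server
growth_overhead = 2.5      # 2-3x headroom per server; using the midpoint

# Local storage: every server carries its own growth headroom.
local_total_tb = servers * data_per_server_tb * growth_overhead

# Shared SAN: one pool with a single, smaller shared headroom,
# since not every server grows at the same time.
pool_overhead = 1.3        # assumed 30% headroom for the whole pool
san_total_tb = servers * data_per_server_tb * pool_overhead

print(f"Local disks purchased: {local_total_tb:.0f} TB")
print(f"Pooled SAN purchased:  {san_total_tb:.0f} TB")
print(f"Difference:            {local_total_tb - san_total_tb:.0f} TB")
```

Even with generous assumptions in the SAN's favor removed, the gap comes almost entirely from paying for growth headroom a hundred times over instead of once.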
Several issues arise when storage is moved from inside the server to a SAN. The first is performance. Anyone who has ever replaced a 7200 RPM drive with a 10,000 RPM drive or an SSD in a server knows that I/O speed largely dictates the end-user experience when running multi-user database applications. Users report a night-and-day difference after such an upgrade when doing I/O-intensive work like running large reports. How can a SAN keep up with locally attached SAS storage and retain that performance? The answer is that, in many cases, it can't. SANs do have the advantage of being more highly engineered and having more spindles (more hard drives), which can close some of the performance gap. Faster file systems and interfaces like Fibre Channel have also traditionally been part of the performance answer.
Enterprise SANs must ensure redundancy and reliability. SANs are typically the domain of specialty manufacturers like EMC, Hitachi, HP, and now Dell. Many of these vendors use redundant power supplies, RAID arrays, and redundant "controllers" (think of a controller as the motherboard inside the SAN). Much thought goes into making sure the SAN is highly available. In large enterprises, it's not unusual to see multiple SANs spread over several offices; software in the SANs then allows them to replicate or "snapshot" to one another for backup and redundancy.
The line between Network Attached Storage (NAS) and Storage Area Network (SAN) devices has blurred over the last two to three years. NAS has traditionally shared storage at the file level, whereas SANs share theirs at the block level. Software protocols like iSCSI are showing up in many NAS boxes, allowing them to be used more like traditional SANs, usually over Gigabit Ethernet, which may not perform as well as Fibre Channel or other traditional SAN hardware interfaces.
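To make the iSCSI idea concrete, here is a rough sketch of how a Linux box can expose a local disk as a block device to another server, using the kernel's LIO target and the `targetcli` administration tool. The device name, addresses, and IQNs below are invented for illustration, and this is a configuration sketch rather than a hardened setup (it omits authentication and tuning):

```shell
# --- On the storage box (the "target") ---
# Back a storage object with a whole local disk (hypothetical /dev/sdb):
targetcli /backstores/block create name=vol0 dev=/dev/sdb

# Create an iSCSI target with an example IQN:
targetcli /iscsi create iqn.2014-01.com.example:storage.vol0

# Export the backstore as a LUN on the default target portal group:
targetcli /iscsi/iqn.2014-01.com.example:storage.vol0/tpg1/luns \
    create /backstores/block/vol0

# Allow a specific initiator (the server that will mount the volume):
targetcli /iscsi/iqn.2014-01.com.example:storage.vol0/tpg1/acls \
    create iqn.2014-01.com.example:server1

# --- On the server (the "initiator", using open-iscsi) ---
# Discover targets on the storage box (example address) and log in:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --login
# The remote disk now appears locally (e.g. as /dev/sdc) and can be
# partitioned and formatted like any directly attached drive.
```

The key point is the last comment: to the server, the iSCSI volume is an ordinary block device, which is exactly what lets a NAS box running an iSCSI target stand in for a traditional SAN.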
One key differentiator between lower-end iSCSI implementations and true SANs is the ability for more than one server to share the same hard drive space or volume. This functionality is important for failing a virtual machine over to another physical machine while retaining connectivity to the shared storage. At Highly Reliable Systems, our iSCSI implementations require the user to dedicate an entire physical drive rather than sub-dividing it and allocating pieces to different servers. This restriction exists only because we're focused on drive removability and creating transportable backup media, and for that purpose, sharing drive space between servers is actually undesirable.