This is a series of posts reviewing the Compellent Storage Center Storage Array.
Compellent Inc, founded in 2002, produces the Storage Center product, a SAN storage array built around commodity hardware. In addition to providing advanced features found on newer storage arrays (such as thin provisioning), the Compellent device has one unique (for now) feature that sets it apart from the competition: the ability to tier storage at the block level, known as Dynamic Block Architecture. Where traditional arrays place an entire LUN onto a single tier of storage, Storage Center breaks the LUN down into smaller chunks, allowing finer granularity in the way data is written to disk. As we will see in this hardware and software feature review, there's more to the tiering than initially appears.
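To make the distinction concrete, here is a minimal, purely illustrative sketch of per-page tiering, not Compellent's actual Dynamic Block Architecture. The tier names, the page size and the hit-count promotion rule are all assumptions for demonstration; the point is simply that placement decisions are made per page rather than per LUN.

```python
# Toy model of block-level tiering (illustrative only; the page size,
# tier names and promotion thresholds are assumptions, not Compellent's).

PAGE_SIZE_MB = 2                     # hypothetical page granularity
TIERS = ["ssd", "fc_15k", "sata"]    # fastest to slowest

class Lun:
    def __init__(self, size_mb):
        # The LUN is split into pages; each page migrates independently.
        self.pages = [{"tier": "ssd", "hits": 0}
                      for _ in range(size_mb // PAGE_SIZE_MB)]

    def access(self, page_index):
        self.pages[page_index]["hits"] += 1

    def rebalance(self, hot_threshold=10):
        # Promote hot pages and demote cold ones -- per page, not per LUN.
        for page in self.pages:
            if page["hits"] >= hot_threshold:
                page["tier"] = TIERS[0]      # hot: fastest tier
            elif page["hits"] > 0:
                page["tier"] = TIERS[1]      # warm: middle tier
            else:
                page["tier"] = TIERS[-1]     # cold: capacity tier
            page["hits"] = 0                 # reset for the next cycle

lun = Lun(size_mb=20)        # 10 pages
for _ in range(12):
    lun.access(0)            # page 0 is hot
lun.access(3)                # page 3 is warm
lun.rebalance()
print([p["tier"] for p in lun.pages])
```

A LUN-level array would have to pick one tier for the whole volume; here only the busy pages stay on the fast (and expensive) media while untouched pages fall to the capacity tier.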
Before we get into the technical specifications, let's look at the company in more detail. As previously mentioned, Compellent Inc was founded in 2002 and is based in Minnesota, USA. The company is publicly traded, having filed for IPO in October 2007, and now claims over 1000 customers in 25 countries with over 2000 array deployments. Since IPO, the company has moved into profitability and consistently increased revenue and margin. See the embedded graph for more details. In recent months, Compellent has been seen as an acquisition target, following the bidding war for 3Par between HP and Dell. It remains perhaps one of the few independent SAN storage array vendors targeting the tier 1 or Enterprise-class space.
In the remainder of this post, we’ll look at the hardware itself.
Compellent have provided one of their Storage Center Model 30 controllers (CT-SC030) with two disk shelves for the review. The disk shelves contain SSD, SAS and FC drives, enabling configurations of up to three tiers to be tested. We’ll look at those in a moment.
The controller itself is pretty straightforward: a standard PC chassis and motherboard. It has the following specifications:
- 3GB onboard memory
- Intel Xeon 5160 – 3GHz
- SuperMicro Motherboard
- Dual redundant power supplies
- Six on-board fans
- 4x PCI-Express expansion slots
- 1x PCI-X expansion slot
- 2x on-board GigE Ethernet ports
The expansion slots are used to support external connectivity to hosts and disk shelves. The review model was supplied with QLogic iSCSI HBAs in slots 1 and 2, a QLogic QLE2464 in slot 3 providing both front-end and back-end Fibre Channel connectivity, and a SAS controller in slot 6. The only non-commodity part of the hardware is the cache controller in slot 5, which is manufactured by Compellent itself rather than a third-party supplier. Power supplies are hot-swappable; fans, cache cards and interface cards are not, unless the array is part of a dual-controller configuration and, in the case of interface cards, has been configured in a redundant design. This is clearly a consideration when choosing a storage system, as powering down the array to replace parts is intrusive, both in terms of maintenance windows and of outages caused by failed components.
Two disk shelves (termed enclosures) have been provided with the evaluation unit. One houses SSD and Fibre Channel drives and is FC connected from the controller, the other is SAS connected and holds large capacity (1TB) SAS drives. Each enclosure contains dedicated power supplies, fans and I/O modules with redundancy built in. This increases the overall availability of a single Compellent array solution. Fibre Channel enclosures hold up to 16 drives in a horizontal 4×4 configuration occupying 3U; SAS enclosures hold up to 12 drives horizontally in 2U. All drives are hot-swappable. Drive capacities and types currently supported include (excluding EOL models):
- Fibre Channel – 15K 300GB & 15K 450GB
- SATA – 500GB & 1TB
- SSD – 140GB
- SAS – 15K 450GB & 7.2K 1TB
In the evaluation equipment, the SSD drives were supplied by STEC and the remaining drives were Seagate models, but presumably drives could be sourced from multiple manufacturers, as they reported their standard model names in the Storage Center GUI.
Both the controller unit and enclosures look pretty nondescript (see the videos at the foot of this post showing the controller with bezel removed). In my opinion, the look of hardware is much less important than the reliability and functionality it offers (HP storage products, for example, all look like servers and enclosures). All of the components of the Storage Center hardware – slots, power supplies, fans – are visibly monitored from the central management tool, providing consistent reporting on hardware status at any time. This level of detail is much more important than the colour of the front bezel, in my view. As we will see in the next few posts, the “secret sauce” is achieved through software rather than bespoke hardware components. In the meantime, enjoy this brief video of the hardware as it was being installed.