Requirements and Recommendations for Netmail Store 5.3.x
Updated June 11th, 2014

Requirements for Netmail Store 5.2.x  [PDF]

This page describes hardware that has been shown to be adequate for the Netmail Store component of the platform to operate at its expected performance level.

Before installing Netmail Store, ensure that your hardware complies with the following requirements and recommendations.

 

 Netmail Store Hardware Requirements and Recommendations

Minimum Hardware Requirements

Netmail Store installs and runs on enterprise-class x86 commodity hardware. To ensure high availability and fail over in the event of a node failure, Messaging Architects recommends configuring your cluster with a minimum of three nodes.

The following table lists the minimum hardware requirements for a Netmail Store cluster.

Component              Requirement
Node                   x86 with Pentium-class CPUs
Number of nodes        Three
Node boot capability   USB flash drive or network drive
Network interfaces     One 100Mbps Ethernet NIC with one RJ-45 port
Hard drives            One hard drive
RAM                    2GB

At least three Netmail Store nodes are required in the cluster to ensure adequate recovery space in the event of a node failure.

Netmail Store nodes are designed to run using lights-out management (or out-of-band management) and do not require a keyboard, video display, or mouse (KVM) to operate.

Note: All nodes in a cluster must use the same speed network interface to remain compatible. Variable network speeds for different nodes are not a supported configuration. For example, a node with a 100Mbps network interface card must not be in the same cluster with a node that has a 1000Mbps card.

Recommended Hardware Requirements

The following table lists the recommended hardware requirements for a Netmail Store cluster.

Component              Requirement
Node                   x86 with Intel Xeon or AMD Athlon64 (or equivalent) CPUs
Number of nodes        Three or more
Node boot capability   USB flash drive or network drive
Network interfaces     Two dual Gigabit Ethernet NICs with two RJ-45 ports for link aggregation (NIC teaming)
Hard drives            One to four standard SATA hard drives
RAM                    4GB

Although Netmail Store runs on a variety of x86 hardware, this table lists the recommended base characteristics for a Netmail Store cluster.

Netmail Store cluster performance improves significantly as you add systems with faster CPUs and more memory. The number of recommended systems varies depending on your storage and performance requirements.

Memory Impacts on Node Storage

The Netmail Store cluster is capable of holding the sum of the maximum stream counts from all nodes in the cluster. The number of individual streams that can be stored on a Netmail Store node depends both on its disk capacity and the amount of system RAM.

The following table provides an estimate of the maximum possible number of streams (regardless of size) you can store on a node based on the amount of RAM in the node.

Amount of RAM   Maximum number of immutable unnamed streams   Maximum number of unnamed anchor streams or named streams
4GB             33 million                                    16 million
8GB             66 million                                    33 million
12GB            132 million                                   66 million
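
For capacity planning, the following sketch (a minimal illustration, not a supported tool; the function name and structure are assumptions) sums the per-node maximums from the table above to estimate total cluster capacity:

# Minimal capacity-planning sketch using the per-node maximums for
# immutable unnamed streams documented in the table above.
MAX_IMMUTABLE_STREAMS_BY_RAM_GB = {
    4: 33000000,    # 33 million
    8: 66000000,    # 66 million
    12: 132000000,  # 132 million
}

def cluster_stream_capacity(node_ram_sizes_gb):
    """Estimate cluster capacity as the sum of each node's maximum."""
    return sum(MAX_IMMUTABLE_STREAMS_BY_RAM_GB[ram] for ram in node_ram_sizes_gb)

# Example: a three-node cluster with 4GB of RAM per node.
print(cluster_stream_capacity([4, 4, 4]))  # 99000000 (99 million)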

Erasure Coded Objects

The number of erasure coded objects that can be stored on a node per GB of RAM is dependent on the size of the object and the configured encoding. The erasure coding manifest takes two index slots per object, regardless of the type of object (named, unnamed immutable, or unnamed anchor). Each erasure coded segment in an erasure set takes one index slot. With the default segment size of 200MB and a configured encoding of 5:2, a stream that is 1GB in size or less requires seven index slots for segments and two slots for the manifest for a total of nine slots. Larger objects can have multiple erasure sets, so you would have multiple sets of segments.

With the default segment size of 200MB, a 3GB object with 5:2 encoding would need three sets of segments requiring 23 index slots (3 erasure sets x 7 segments each + 2 for the manifest).
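
The slot arithmetic above can be summarized in a short sketch (illustrative only; the function name and defaults are assumptions, and 1GB is treated as 1000MB so that one 5:2 erasure set of 5 x 200MB segments covers it, as in the example above):

# Minimal sketch of the index-slot accounting described above.
# k data segments + p parity segments per erasure set; one erasure set
# covers k * segment_size_mb of object data; the manifest uses two slots.
import math

def index_slots(object_size_mb, k=5, p=2, segment_size_mb=200, manifest_slots=2):
    set_span_mb = k * segment_size_mb                        # data covered by one erasure set
    erasure_sets = max(1, math.ceil(object_size_mb / float(set_span_mb)))
    return erasure_sets * (k + p) + manifest_slots

print(index_slots(1000))  # 1GB object: 1 set x 7 segments + 2 manifest slots = 9
print(index_slots(3000))  # 3GB object: 3 sets x 7 segments + 2 manifest slots = 23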

Configuring Netmail Store for Storage of Primarily Small Streams

By default, Netmail Store is configured to allocate a small amount of disk space to store write and delete journals.

In typical deployments, the default amount of space is sufficient because the remainder of the disk will be filled by objects before the journal space is consumed. However, for installations writing primarily small streams, the journal space can fill up before the disk does.

For these installation types, Messaging Architects recommends increasing the configurable amount of journal space allocated on the disk prior to booting Netmail Store on the node for the first time. The parameters used to change the journal allocation differ depending on the software version in use. Contact your Netmail Store support resource for guidance on setting the optimal parameters for your configuration.

High Performance Clusters

As a high-performance cluster, Netmail Store benefits from fast CPUs and processor technologies such as large L1 and L2 caches, 64-bit computing, and fast Front Side Bus (FSB) architectures.

Because Netmail Store is highly scalable, creating a large cluster and spreading the user request load across multiple storage nodes provides significant benefits for data throughput, which increases as additional nodes are added to the cluster. Using multiple replicas when storing objects in Netmail Store is an excellent way to capitalize on the benefits of the cluster, because replicas provide redundancy as well as a performance benefit.

When constructing a Netmail Store storage cluster for maximum performance, some variables to consider include:

  • Adding nodes to increase cluster throughput – like adding lanes to a highway
  • Fast or 64-bit CPU with large L1 and L2 caches
  • Fast RAM bus (front-side bus) configuration
  • Multiple, independent, fast disk channels, such as SATA-300 (SATA-II), Serial Attached SCSI (SAS), and Fibre Channel (FC)
  • Hard disks with large, on-board buffer caches and Native Command Queuing (NCQ) capability
  • Gigabit (or faster) network topology between all Netmail Store cluster nodes
  • Gigabit (or faster) network topology between the client nodes and the Netmail Store cluster

Note: If the cluster node CPU supports hyper-threading, disable this feature within the BIOS setup to prevent single-CPU degradation in Netmail Store.

Hard Drive Selection

Selecting the appropriate hard drives in the Netmail Store nodes can have a significant impact on their performance, as well as the recovery characteristics in the event of a node or disk failure.

Hard drives in enterprise computer systems are rated for 24x7 continuous duty cycles and have time-constrained error recovery logic suitable for server deployments, where error recovery is handled at a higher level than the drive's on-board controller. In contrast, hard drives in consumer computer systems are rated for desktop use with limited duty cycles and incorporate error recovery logic that can pause all I/O operations for minutes at a time. The extended error recovery periods and non-continuous duty cycles of consumer-rated drives are not suitable or supported for Netmail Store deployments.

Additional features to consider include:

  • Performance
  • Recovery
  • Reliability
  • Data integrity

Performance

When considering the performance of a node's storage sub-system, the following drive features improve data throughput and overall node performance:

  • Buffer cache. Larger, on-board caches improve disk performance.
  • Independent disk controller channels. Reduce storage bus contention.
  • Disk RPM. Faster-spinning disks improve performance.
  • Storage bus speed. Faster data transfer rates between storage components, a feature incorporated in SATA-300, Serial Attached SCSI (SAS), and Fibre Channel hard drives.

Using independent disk controllers is often driven by the storage bus type in the computer system and hard drives. The older ATA-100 and ATA-133 (also referred to as Parallel Advanced Technology Attachment [PATA]) storage buses allow two devices on the same controller/cable. As a result, bus contention occurs when both devices are in active use. Motherboards with PATA buses typically only have two controllers. If more than two drives are used, some bus sharing must occur.

Unlike the older PATA controllers, Serial ATA (SATA) controllers and disks place only one device on each bus, overcoming the earlier bus contention problems. Motherboards with SATA controllers typically have four or more controllers. Recent improvements in Serial ATA controllers and hard drives (commonly called SATA-300) have doubled the bus speed of the original SATA devices.

Recovery

In addition to performance features, selecting the appropriate drives can impact the failure and recovery characteristics of a node when a drive fails. When choosing the drive capacity in a node, consider the benefits of high capacity drives versus the time required to replicate the contents of a failed drive. Larger drives will take longer to replicate than smaller drives, causing an increased business exposure when a drive fails.

Unlike consumer oriented devices where it is acceptable for a drive to spend a few minutes attempting to retry and recover from a read/write failure, it is best in redundant storage designs such as Netmail Store for the device to emit an error quickly so the operating system can initiate recovery actions. If the drive in a Netmail Store node requires a long delay before returning an error, the entire node may appear to be down, causing the cluster to initiate recovery actions for all drives in the node – not just the failed drive.

The short command timeout value inherent in most enterprise-class drives allows recovery efforts to occur while the other drives in the system continue to serve access requests from Netmail Store.

Reliability and Data Integrity

Hard drive reliability from the same manufacturer can vary, depending on the intended use and duty cycles of the drive models. Consumer models targeted for the home user typically assume that the drive will not be used continuously 24x7. As a result, these drives do not include the more advanced vibration and misalignment detection and handling features. Enterprise models targeted for server and RAID applications are often rated for continuous use and include predictable error recovery times, as well as more sophisticated vibration compensation and misalignment detection.

Summary

When selecting your Netmail Store hard drives, consider drive performance characteristics such as buffer cache and bus type, weigh capacity against recovery time, and look for disks designed for SAN and RAID applications with 24x7 duty cycles.

 CSN Requirements and Recommendations

Recommended CSN Hardware and Software

The following lists the recommended hardware and software for a CSN configuration.

Node: x86 with 64-bit Intel® Xeon® or AMD® Opteron™ (or equivalent) CPUs

RAM: 6GB minimum; 12GB recommended

Supported Operating Systems:

  • Red Hat® Enterprise Linux® (RHEL) 6.2 Server 64-bit (English version)
  • CentOS 6.2 64-bit Server Edition (English version)

Disk Storage:

  • Content Router requires 200GB of disk space for every 100 million unique objects in the associated Netmail Store cluster (see the sizing sketch following this list).
  • The data and graphs for the Netmail Store graphical reports require approximately 6MB of space in /var per Netmail Store node.
  • A RAID configuration for redundancy is recommended.
  • A separate partition with at least 16GB of available space should be configured for /var/log to prevent logging from over-running other critical processes in /var.

Network:

  • Two Gigabit Ethernet NICs required; four Gigabit Ethernet NICs recommended.
  • A single-network configuration requires a dedicated subnet that the CSN can control to allocate storage node IP addresses.
  • A dual-network configuration requires:
      • A dedicated switch or VLAN for the internal storage network.
      • A managed switch (recommended).
      • Disabled switch spanning tree when using a secondary CSN.
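
As a rough illustration of the disk-sizing guidance above (the ratios come from the list; the function and example values are assumptions, not a supported tool):

# Rough sizing sketch: 200GB of Content Router disk per 100 million unique
# objects, plus about 6MB in /var per Netmail Store node for report data.
def csn_disk_estimate_gb(unique_objects, storage_nodes):
    content_router_gb = 200.0 * unique_objects / 100000000   # per 100 million objects
    reports_var_gb = 6.0 * storage_nodes / 1024              # 6MB per node, converted to GB
    return content_router_gb, reports_var_gb

cr_gb, var_gb = csn_disk_estimate_gb(unique_objects=250000000, storage_nodes=8)
print(cr_gb)   # 500.0 GB of disk for Content Router
print(var_gb)  # ~0.047 GB (about 48MB) in /var for report data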

Operating System

The Netmail Store CSN has been developed and tested with the U.S. English versions of RHEL 6.2 Server 64-bit and CentOS 6.2 64-bit Server Edition. Other versions, languages, and Linux distributions are not currently supported. The installation instructions that follow assume a pre-installed RHEL environment with either internet connectivity or an appropriately configured yum repository for installing required third-party packages. The CSN installation process depends on additional rpm packages that are not installed on the system by default; these packages are available on the RHEL distribution media included with the system.

Important: To ensure a package upgrade does not impact CSN networking, yum updates are disabled after installation by locking all software packages to the version installed during initial configuration. Please contact your support resource if you need to enable yum updates for security updates.

Networking

The CSN provides two options for network configuration:

  • Dual-network. This option was available in prior CSN releases and can continue to be used in later releases with no network reconfiguration or interface recabling. Use it when the Netmail Store cluster should be isolated on a dedicated network, so that the external network is shielded from both the PXE network boot server (including DHCP) and cluster multicast traffic. With a dual-network configuration, the CSN itself is 'dual-homed': half of its network interfaces are cabled to the external network and the other half to the internal network. The storage nodes the CSN manages are cabled to the internal network only. This internal private network protects Netmail Store cluster traffic from unauthorized access. In this configuration, the CSN owns network interface detection and configuration and assumes there will be no external manipulation of networking files or services. This configuration is ideal both for installations without much networking expertise and for those where cluster isolation is desirable.
  • Single-network. This option is new and should be used when the Netmail Store nodes need to be directly addressable on the same network as the CSN, without requiring the SCSP Proxy to proxy requests. With a single-network configuration, both the CSN and the storage nodes it manages are cabled to the same network. The CSN assumes that a network administrator familiar with the environment in which the CSN runs will configure and maintain the CSN's network interfaces as well as basic network settings, such as a static IP identity for the CSN. A single-network configuration is ideal for installations that have some networking expertise and want the CSN and Netmail Store node IP addresses to reside in a larger subnet range.

Both network options allow you to define which servers the CSN will network boot as Netmail Store nodes. Single-network enables netboot protection by default to prevent accidental formatting of servers on a larger network that are not intended as storage nodes. Dual-network does not enable netboot protection by default, but it can be enabled. Please reference "Adding Netmail Store Nodes" for more information on netboot protection.

The CSN does not support migration from one type of network configuration to another without reinstalling the CSN.

The following summarizes the pros and cons of each networking option:

  • Isolates cluster multicast traffic on a separate internal network, using the SCSP Proxy to proxy requests to and from the external network: Dual-Network
  • Allows storage nodes to be directly addressable on a larger network without a proxy: Single-Network
  • Requires a dedicated switch or VLAN for the internal storage network: Dual-Network
  • Requires a dedicated subnet that the CSN can control for allocation of storage node IP addresses: Single-Network
  • Automates detection and configuration of network interfaces and bonding: Dual-Network
  • Allows manual configuration of CSN networking and bonding by an experienced Linux administrator: Single-Network
  • Enables netboot protection so that only explicitly defined servers can be network booted as Netmail Store nodes: Single-Network (by default); Dual-Network (after enabling in the CSN console)

The following sections discuss additional details about each type of configuration to aid you in deciding which is more appropriate for your environment.

Dual-network

A dual-network CSN requires access to both the external network as well as a dedicated internal network. Allocation of the address space in the internal network is broken down as follows, depending on the network size selected during initial configuration (small or large):

Network Size   CSN           External Applications   DHCP          Netmail Store Netboot
small (/24)    x.y.z.0-16    x.y.z.17-32             x.y.z.33-83   x.y.255.254
large (/16)    x.y.0.0-254   x.y.1.0-254             x.y.2.0-254   x.y.3.0-x.y.255.254

The CSN range provides IP Addresses for the various services on the Primary and Secondary CSNs.

The third-party (External Applications) range is provided for third-party applications that need to run on the internal network to interface with the Netmail Store cluster. All IP addresses in the third-party range must be static. If you are using CFS with Netmail Store, its IP address should be assigned from the third-party range. Best practice is to keep a list of all IP addresses that have been allocated to different applications in the third-party range to prevent IP collisions.

The DHCP range provides an initial, temporary IP address to Netmail Store nodes during their initial boot, until permanent addresses can be assigned to each Netmail Store process by the CSN. If other applications used the CSN's DHCP server on the internal network, they would reduce the number of Netmail Store nodes that could be booted at the same time; this is why a separate third-party range is provided. For a small network, the maximum number of Netmail Store nodes that can be booted at one time is 50, assuming those nodes support a maximum of 3 multi-server processes per node. Netmail Store nodes that support higher multi-server process counts require additional IP addresses in the Netboot range, decreasing the number of nodes that can be simultaneously booted. In a large network, the maximum number of Netmail Store nodes that can be booted at one time is 254.

The Netboot range is used to provide the IP Addresses seen in the Admin Console for all Netmail Store processes.

Cabling Dual-network Interfaces

A dual-network CSN requires at least one network interface each for the Internal and External networks. Additional available NICs would ideally be split between the two networks, alternating each NIC port's cabled connection across all network cards for redundancy (External, Internal, External, Internal, etc). The CSN's initial configuration script will detect how all NICs are cabled based on whether or not the specified external gateway is reachable from each NIC. The configuration script will print what it detects to allow you to verify correct cabling; see “Dual-network Primary Configuration” for complete details. Once confirmed during initial configuration, the CSN will bond all NICs assigned to each interface into a single bonded interface with balance-alb bonding.

Single-network

A single network CSN requires access to a dedicated subnet within an overall larger network. The subnet must be at least a Class C range with 256 IP addresses and may be as large as a full Class B network. The CSN itself must be configured with a static IP Address, subnet mask and gateway prior to CSN installation. The CSN's IP Address should be within the x.x.x.1 - x.x.x.17 range for the configured subnet. The gateway configured for a single-network CSN will also be used for the Netmail Store nodes when they are configured.
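
As an informal sanity check of these constraints (a sketch only, not a supported tool; it interprets the x.x.x.1 - x.x.x.17 range as offsets 1 through 17 from the subnet's network address):

# Sketch: the subnet must be at least a /24 (Class C, 256 addresses) and at
# most a /16 (Class B), and the CSN address should fall within the first
# 1-17 host addresses of that subnet.
import ipaddress

def check_csn_address(csn_ip, subnet_cidr):
    net = ipaddress.ip_network(subnet_cidr, strict=False)
    ip = ipaddress.ip_address(csn_ip)
    size_ok = 16 <= net.prefixlen <= 24
    offset = int(ip) - int(net.network_address)
    return size_ok and ip in net and 1 <= offset <= 17

print(check_csn_address("172.20.20.11", "172.20.20.0/24"))   # True
print(check_csn_address("172.20.20.200", "172.20.20.0/24"))  # False (outside the 1-17 range)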

Please reference the RHEL/CentOS documentation for complete instructions on configuring networking. As an example, the following demonstrates what the manually configured em3 interface with a static 172.20.20.11 IP Address, 255.255.255.0 subnet mask and 172.20.20.1 gateway would look like in /etc/sysconfig/network-scripts/ifcfg-em3:

# CSN interface
DEVICE=em3
IPADDR=172.20.20.11
NETMASK=255.255.255.0
GATEWAY=172.20.20.1
BOOTPROTO=none
ONBOOT=yes
USERCTL=no
MTU=1500
NM_CONTROLLED=no

Be sure to restart the network after configuring the IP Address (/etc/init.d/network restart).

Similar to a dual-network configuration, a single-network CSN will divide up the allocated address space into ranges for the CSN, DHCP and Netboot. The allocated ranges will vary depending on the size of the dedicated subnet and will be printed for confirmation during single-network installation. Since a single-network CSN is directly addressable by external clients, there are no IP Addresses reserved for a third-party range in the CSN's subnet.

Cabling Single-network Interfaces

All network interfaces on a single network CSN should be cabled to the same broadcast domain (VLAN) on one or more network switches. A minimum of two cabled network interfaces is still recommended for redundancy. In addition to configuring the CSN's static IP address, any desired bonding configuration (for either redundancy or throughput) should be completed by the administrator prior to starting single-network CSN installation. Please reference the RHEL/CentOS documentation for complete instructions on configuring network bonding.

Pre-Installation Checklist

Before you install CSN, confirm all of the following:

  • You have valid IP addresses for your DNS and NTP servers.
  • For dual-network configurations: You must have two available external IP addresses: one to access the Primary CSN and another to access the CSN "cluster" (that is, the Primary CSN (required) and Secondary CSN (optional)). If you set up a Primary and Secondary CSN, you access the CSN console using the CSN cluster IP address in the event the Primary CSN is not available. For more information, see Primary vs. Secondary.

   Before you continue, make sure these IP addresses are not already in use.

  • For single-network configurations: You must have statically configured the IP address for the CSN as described in previous sections.
  • All network interface adapters are connected properly. At least one network interface must be enabled during RHEL/CentOS installation so that it is available for use during CSN installation.