Requirements and Recommendations for Netmail Store 5.4.x
This page describes hardware that has been shown to be adequate for the Netmail Store part of the platform to operate at its expected performance level.
Before installing Netmail Store, verify that your hardware meets the following requirements and recommendations.
|Minimum Requirements for Netmail Store|
Netmail Store installs and runs on enterprise-class x86 commodity hardware. At least three Netmail Store nodes are required in a cluster to ensure adequate resiliency to failures.
Netmail Store nodes are designed to run using lights-out management (or out-of-band management) and do not require a keyboard, monitor, and mouse (KVM) to operate.
The following table lists the minimum (and recommended) hardware requirements for a Netmail Store cluster.
Memory Sizing Requirements
Review the following sections for factors that influence how you size memory, how erasure coding affects RAM, and how you configure Netmail Store.
How RAM Affects Storage
The storage cluster is capable of holding the sum of the maximum object counts from all nodes in the cluster. The number of individual objects that can be stored on a Netmail Store node depends both on its drive capacity and the amount of its system RAM.
The following table shows estimates of the maximum number of replicated objects (regardless of size) that you can store on a node, based on the amount of RAM in the node, with the default two replicas being stored. Each replica takes one slot in the in-memory index maintained on the node.
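The per-node arithmetic behind such estimates can be sketched as follows. This is a hedged illustration rather than vendor sizing guidance: the bytes-per-index-slot figure and the fraction of RAM available to the index are assumptions, not values from this document.

```python
# Sketch: estimate how many replicated objects a node's in-memory index
# can hold. ASSUMPTIONS (not from this document): each index slot costs
# slot_bytes of RAM, and index_fraction of system RAM is usable by the index.
def max_objects_per_node(ram_gb, slot_bytes=64, index_fraction=0.5):
    index_bytes = ram_gb * 1024**3 * index_fraction
    return int(index_bytes // slot_bytes)

# Each replica consumes one slot, so the count scales linearly with RAM:
# doubling a node's RAM doubles the objects its index can track.
```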
How the Overlay Index Affects RAM
Larger clusters (those above 32 nodes by default) need additional RAM resources to take advantage of the Overlay Index.
To store the same number of objects with the Overlay Index enabled, a node needs additional RAM beyond the estimates above.
Smaller clusters and larger clusters where the Overlay Index is disabled do not need this additional RAM.
How Erasure Coding Affects RAM
The number of erasure-coded objects that can be stored on a node per GB of RAM is dependent on the size of the object and the configured encoding. The erasure-coding manifest takes two index slots per object, regardless of the type of object (named, unnamed immutable, or unnamed anchor). Each erasure-coded segment in an erasure set takes one index slot. Larger objects can have multiple erasure sets, so you would have multiple sets of segments.
For example, with the default segment size of 200 MB and a configured encoding of
Additional RAM: Larger clusters (above 32 nodes by default) need additional RAM resources to take advantage of the Overlay Index. For erasure-coded objects, allocate 10% additional RAM to enable the Overlay Index.
In summary, erasure coding uses about half the space of replication, but it requires more RAM.
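The slot accounting described above can be illustrated with a short calculation. The 5:2 encoding below is an assumed example (this document's own example encoding is elided); the 2-slot manifest cost, 1 slot per segment, and 200 MB default segment size come from the text.

```python
import math

# Index slots consumed by one erasure-coded object:
#   2 slots for the manifest, plus 1 slot per segment.
# Each erasure set covers k * segment_mb of object data and contains
# k + p segments. Encoding k=5, p=2 is an ASSUMED example.
def index_slots(object_mb, k=5, p=2, segment_mb=200):
    sets = math.ceil(object_mb / (k * segment_mb))
    return 2 + sets * (k + p)

# A 1000 MB object fits in one erasure set: 2 + 7 = 9 slots.
# A 2500 MB object needs three sets: 2 + 21 = 23 slots.
```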
How to Configure for Small Objects
Netmail Store allows you to store objects up to a maximum of 4 TB. However, if you store mostly small files, configure your storage cluster accordingly.
By default, Netmail Store allocates a small amount of disk space to store each disk's write and delete journals (file change logs). In typical deployments, this default is sufficient because the rest of the disk fills with objects before the journal space is consumed.
However, for installations writing mostly small objects (1 MB and under), the journal space can fill up before the disk does. If your cluster usage focuses on small objects, increase the configurable amount of journal space allocated on the disk before you boot Netmail Store on the node for the first time.
The parameters used to change this allocation differ depending on the software version in use.
Supporting High-Performance Clusters
For the demands of high-performance clusters, Netmail Store benefits from fast CPUs and processor technologies, such as large caches, 64-bit computing, and fast Front Side Bus (FSB) architectures.
To design a storage cluster for peak performance, maximize these variables:
Important: If the cluster node CPU supports hyper-threading, be sure to disable this feature within the BIOS setup to prevent single-CPU degradation in Netmail Store.
For best performance, try to balance resources across your nodes as evenly as possible. For example, in a cluster of nodes with 7 GB of RAM, adding several new nodes with 70 GB of RAM could overwhelm those nodes and have a negative impact on the cluster.
Because Netmail Store is highly scalable, creating a large cluster and spreading the user request load across multiple storage nodes significantly improves data throughput, and this improvement increases as you add nodes to the cluster.
Tip: Using multiple replicas when storing objects in the cluster is an excellent way to get the most out of Netmail Store, because each copy provides redundancy and improves performance.
Selecting Hard Drives
Selecting appropriate hard drives for your Netmail Store nodes improves both performance and recovery in the event of a node or disk failure. The key selection criteria are detailed below:
The critical factor is whether the hard drive is designed for the demands of a cluster. Enterprise-level hard drives are rated for 24x7 continuous-duty cycles and have time-constrained error recovery logic, which suits server deployments where error recovery is handled at a higher level than the drive's on-board controller.
In contrast, consumer-level hard drives are rated for desktop use only; they have limited-duty cycles and incorporate error recovery logic that can pause all I/O operations for minutes at a time. These extended error recovery periods and non-continuous duty cycles are not suitable or supported for Netmail Store deployments.
The reliability of hard drives from the same manufacturer will vary, because the drive models target different intended uses and duty cycles:
You can optimize the performance and data throughput of the storage sub-system in a node by selecting drives with these characteristics:
Use of independent disk controllers is often driven by the storage bus type in the computer system and hard drives.
Drive Capacity and Recovery
You can improve the failure and recovery characteristics of a node by selecting drives that offer server-class features but are not the highest capacity available.
Drive Controller Compatibility
The best practice is to check with Netmail before investing in new equipment, both for initial deployment and for future expansion of your cluster. Netmail can help you avoid problems not only with drive controller options but also with network card choices.
Netmail Store greatly simplifies hardware maintenance by making drives independent of their chassis and their drive slots. As long as your drive controllers are compatible, you are free to move drives as you need.
Netmail Store supports a variety of hardware, and clusters can blend hardware as older equipment fails or is decommissioned and replaced. The largest issue with mixing hardware is incompatibility among the drive controllers.
Track types of drive controllers
When you administer the cluster, monitor your hardware inventory with special attention to the drive controllers. Some RAID controllers, for example, reserve part of the drive for controller-specific information (DDF). Once a volume is formatted for use by Netmail Store, it must be used with a chassis having that specific controller and controller configuration.
To save time and data movement, many maintenance tasks involve physically relocating volumes between chassis. Use the inventory of your drive controller types to easily spot when movement of formatted volumes is prohibited due to drive controller incompatibility.
Disable volume autoformatting
For additional safety in a cluster with incompatible controllers, set this option:
This configuration setting prevents volume reformatting if you accidentally move a volume between incompatible controllers.
With automatic drive formatting disabled, you will need to format your new volumes outside the cluster, which you can do using a spare chassis running Netmail Store with the desired controller.
Test compatibility outside cluster
To determine controller compatibility safely, test outside of your production cluster as follows:
1. Set up two spare chassis, each with the controller being compared.
2. In the first chassis, format a new volume in Netmail Store.
3. Move the volume to the second chassis and watch the log for error messages during mount or for any attempt to reformat the volume.
4. Retire the volume in the second chassis and move it back to the first.
5. Again, watch for errors or attempts to reformat the volume.
6. If all goes well, erase the drive using
If no problems occur during this test, you can confidently swap volumes between these chassis within your cluster. If this test runs into trouble, do not swap volumes between these controllers.
|Minimum requirements for the CSN|
The Netmail Store CSN has been developed and tested with the U.S. English versions of RHEL 6.2 Server 64-bit and CentOS 6.2 64-bit Server Edition. Other versions, languages, and Linux distributions are not currently supported.
Subsequent installation instructions assume a pre-installed RHEL Linux environment with either internet connectivity or an alternately configured yum repository for installing required third-party packages. The CSN installation process depends on additional rpm packages that are not installed on the system by default; these packages are available on the RHEL distribution media included with the system.
Important: To ensure a package upgrade does not impact CSN networking, yum updates are disabled after installation by locking all software packages to the version installed during initial configuration. Please contact your support resource if you need to re-enable package updates.
The CSN provides two options for network configuration:
Both network options allow definition of what servers the CSN will network boot as Netmail Store nodes. Single-network enables netboot protection by default to prevent accidental format of servers not intended as storage nodes on a larger network. Dual-network does not require netboot protection by default but it can be enabled. Please reference “Adding Netmail Store Nodes” for more information on netboot protection.
The CSN does not support migration from one type of network configuration to another without reinstalling the CSN.
The following table summarizes the pros and cons of each networking option:
The following sections discuss additional details about each type of configuration to aid you in deciding which is more appropriate for your environment.
A dual-network CSN requires access to both the external network as well as a dedicated internal network. Allocation of the address space in the internal network is broken down as follows, depending on the network size selected during initial configuration (small or large):
The CSN range provides IP Addresses for the various services on the Primary and Secondary CSNs.
The third-party range is provided for third party applications that need to run on the internal network to interface with the Netmail Store cluster. All IP addresses in the third-party range must be static. If you are using CFS with Netmail Store, the IP Address should be assigned in the third-party range. Best practice is to keep a list of all IP Addresses that have been allocated to different applications in the third-party range to prevent IP collisions.
The DHCP range provides an initial, temporary IP Address to Netmail Store nodes during their initial boot until permanent addresses can be assigned to each Netmail Store process by the CSN. Other applications that use the CSN's DHCP server on the internal network reduce the number of Netmail Store nodes that can be booted at the same time, which is why a separate third-party range is provided. For a small network, the maximum number of Netmail Store nodes that can be booted at one time is 50, assuming those nodes support a maximum of 3 multi-server processes per node. Netmail Store nodes that support higher multi-server process counts require additional IP Addresses in the Netboot range, decreasing the number of nodes that can be simultaneously booted. In a large network, the maximum number of Netmail Store nodes that can be booted at one time is 254.
The Netboot range is used to provide the IP Addresses seen in the Admin Console for all Netmail Store processes.
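The node-count limits above follow from dividing the Netboot address range by the per-node process count. In the sketch below, the 150-address small-network Netboot range is an assumption inferred from the stated 50-node, 3-process figure, not a number given in this document.

```python
# Sketch: simultaneously bootable nodes are limited by the Netboot range.
# ASSUMPTION: a small network reserves 150 Netboot addresses (inferred
# from the 50-node limit at 3 multi-server processes per node).
def max_bootable_nodes(netboot_addresses, processes_per_node):
    return netboot_addresses // processes_per_node

# 150 addresses at 3 processes per node allow 50 nodes to boot at once;
# nodes running 5 processes each reduce that to 30.
```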
Cabling Dual-network Interfaces
A dual-network CSN requires at least one network interface each for the Internal and External networks. Additional available NICs would ideally be split between the two networks, alternating each NIC port's cabled connection across all network cards for redundancy (External, Internal, External, Internal, etc.). The CSN's initial configuration script will detect how all NICs are cabled based on whether the specified external gateway is reachable from each NIC. The configuration script will print what it detects to allow you to verify correct cabling; see “Dual-network Primary Configuration” for complete details. Once confirmed during initial configuration, the CSN will bond all NICs assigned to each interface into a single bonded interface with balance-alb bonding.
A single network CSN requires access to a dedicated subnet within an overall larger network. The subnet must be at least a Class C range with 256 IP addresses and may be as large as a full Class B network. The CSN itself must be configured with a static IP Address, subnet mask and gateway prior to CSN installation. The CSN's IP Address should be within the x.x.x.1 - x.x.x.17 range for the configured subnet. The gateway configured for a single-network CSN will also be used for the Netmail Store nodes when they are configured.
Please reference the RHEL/CentOS documentation for complete instructions on configuring networking. As an example, the following demonstrates what a manually configured network interface file might contain:
# CSN interface
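The comment above begins an interface configuration file. A fuller sketch follows, assuming the CSN interface is eth0 and the file is /etc/sysconfig/network-scripts/ifcfg-eth0 (the standard location on RHEL/CentOS 6); the addresses are placeholders to replace with your static assignment:

```shell
DEVICE=eth0
BOOTPROTO=static
IPADDR=x.x.x.2          # static CSN address within x.x.x.1 - x.x.x.17
NETMASK=255.255.255.0   # at least a Class C subnet
GATEWAY=x.x.x.1
ONBOOT=yes
```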
Be sure to restart the network after configuring the IP Address (for example, with service network restart).
Similar to a dual-network configuration, a single-network CSN will divide up the allocated address space into ranges for the CSN, DHCP and Netboot. The allocated ranges will vary depending on the size of the dedicated subnet and will be printed for confirmation during single-network installation. Since a single-network CSN is directly addressable by external clients, there are no IP Addresses reserved for a third-party range in the CSN's subnet.
Cabling Single-network Interfaces
All network interfaces on a single network CSN should be cabled to the same broadcast domain (VLAN) on one or more network switches. A minimum of two cabled network interfaces is still recommended for redundancy. In addition to configuring the CSN's static IP address, any desired bonding configuration (for either redundancy or throughput) should be completed by the administrator prior to starting single-network CSN installation. Please reference the RHEL/CentOS documentation for complete instructions on configuring network bonding.
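The pre-installation bonding step can be sketched as follows for RHEL/CentOS 6. Everything here is illustrative: the interface names and addresses are assumptions, and balance-alb is chosen only because it matches the mode the dual-network CSN applies to its own bonds.

```shell
# Hedged sketch of /etc/sysconfig/network-scripts/ifcfg-bond0 (placeholders):
DEVICE=bond0
BOOTPROTO=static
IPADDR=x.x.x.2
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=balance-alb miimon=100"

# Each physical slave (e.g. /etc/sysconfig/network-scripts/ifcfg-eth0):
# DEVICE=eth0
# MASTER=bond0
# SLAVE=yes
# ONBOOT=yes
# BOOTPROTO=none
```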
Before you install CSN, confirm all of the following:
Before you continue, make sure these IP addresses are not already in use.